Adversity is not a gift

Nov 29 JDN 2459183

For the last several weeks I’ve been participating in a program called “positive intelligence” (which they abbreviate “PQ”, even though that doesn’t make sense); it’s basically a self-help program designed to improve mood and increase productivity. I am generally skeptical of such things, and I could tell from the start that it was being massively oversold; but I had the opportunity to participate for free, and when I looked into the techniques involved, most of them seemed to be borrowed from cognitive-behavioral therapy and mindfulness meditation.

Overall, I would say that the program has had small but genuine benefits for me. I think the most helpful part was actually getting the chance to participate in group sessions (via Zoom, of course) with others also going through the program. That kind of mutual social support can make a big difference. The group I joined was composed entirely of fellow economists (some other grad students, some faculty), so we had a lot of shared experiences.

Some of the techniques feel very foolish, and others just don’t seem to work for me; but I did find that at least some of the meditation techniques (which they annoyingly insist on calling by the silly name “PQ reps”) have helped me relax.

But there’s one part of the PQ program in particular that I just can’t buy into, and this is the idea that adversity is a gift and an opportunity.

They call it the “Sage perspective”: You observe the world without judging what is good or bad, and any time you think something is bad, you find a way to transform it into a gift and an opportunity. The claim is that everything—or nearly everything—that happens to you can make you better off. There’s a lot of overlap here with the attitude “Everything happens for a reason”.

I don’t doubt that sincerely believing this would make you happier. Nevertheless, it is obviously false.

If indeed adversity were a gift, we would seek it out. If getting fired or going bankrupt or getting sick were a gift and an opportunity, we’d work to make these things happen.

Yes, it’s true that sometimes an event which seems bad at the time can turn out to have good consequences in the long run. This is simply because we are unable to foresee all future ramifications. Sometimes things turn out differently than you think they will. But most of the time, when something seems bad, it is actually bad.

There might be some small amount of discomfort or risk that would be preferable to a life of complete safety and complacency; but we are perfectly capable of seeking out whatever discomfort or risk we choose. Most of us live with far more discomfort and risk than we would prefer, and simply have no choice in the matter.

If adversity were a gift, people would thank you for giving it to them. “Thanks for dumping me!” “Thanks for firing me!” “Thanks for punching me!” These aren’t the sort of thing we hear very often (at least not sincerely).

I think this is fairly obvious, honestly, so I won’t belabor it any further. But it raises a question: Is there a way to salvage the mental health benefits of this attitude while abandoning its obvious falsehood?

“Everything happens for a reason” doesn’t work; we live in a universe of deep randomness, ruled by the blind idiot gods of natural law.

“Every cloud has a silver lining” is better; but clearly not every bad thing has an upside, or if it does, the upside can be so small as to be utterly negligible. (What was the upside of the Rwandan genocide?) Restricted to ordinary events like getting fired, this one works pretty well; but it obviously fails for the most extreme traumas, and doesn’t seem particularly helpful for the death of a loved one either.

“What doesn’t kill me makes me stronger” is better still, but clearly not true in every case; some bad events that don’t actually kill us can traumatize us and make the rest of our lives harder. Perhaps “What doesn’t permanently damage me makes me stronger”?

I think the version of this attitude that I have found closest to the truth is “Everything is raw material”. Sometimes bad things just happen: Bad luck, or bad actions, can harm just about anyone at just about any time. But it is within our power to decide how we will respond to what happens to us, and wallowing in despair is almost never the best response.

Thus, while it is foolish to see adversity as a gift, it is not so foolish to see it as an opportunity. Don’t try to pretend that bad things aren’t bad. There’s no sense in denying that we would prefer some outcomes over others, and we feel hurt or disappointed when things don’t turn out how we wanted. Yet even what is bad can still contain within it chances to learn or make things better.

What’s wrong with “should”?

Nov 8 JDN 2459162

I have been a patient in cognitive behavioral therapy (CBT) for many years now. The central premise that thoughts can influence emotions is well-founded, and the results of CBT are empirically well supported.

One of the central concepts in CBT is cognitive distortions: There are certain systematic patterns in how we tend to think, which often result in beliefs and emotions that are disproportionate to reality.

Most of the cognitive distortions CBT deals with make sense to me—and I am well aware that my mind applies them frequently: All-or-nothing, jumping to conclusions, overgeneralization, magnification and minimization, mental filtering, discounting the positive, personalization, emotional reasoning, and labeling are all clearly distorted modes of thinking that nevertheless are extremely common.

But there’s one “distortion” on CBT lists that always bothers me: “should statements”.

Listen to this definition of what is allegedly a cognitive distortion:

Another particularly damaging distortion is the tendency to make “should” statements. Should statements are statements that you make to yourself about what you “should” do, what you “ought” to do, or what you “must” do. They can also be applied to others, imposing a set of expectations that will likely not be met.

When we hang on too tightly to our “should” statements about ourselves, the result is often guilt that we cannot live up to them. When we cling to our “should” statements about others, we are generally disappointed by their failure to meet our expectations, leading to anger and resentment.

So any time we use “should”, “ought”, or “must”, we are guilty of distorted thinking? In other words, all of ethics is a cognitive distortion? The entire concept of obligation is a symptom of a mental disorder?

Different sources on CBT will define “should statements” differently, and sometimes they offer a more nuanced definition that doesn’t have such extreme implications:

Individuals thinking in ‘shoulds’, ‘oughts’, or ‘musts’ have an ironclad view of how they and others ‘should’ and ‘ought’ to be. These rigid views or rules can generate feelings of anger, frustration, resentment, disappointment and guilt if not followed.

Example: You don’t like playing tennis but take lessons as you feel you ‘should’, and that you ‘shouldn’t’ make so many mistakes on the court, and that your coach ‘ought to’ be stricter on you. You also feel that you ‘must’ please him by trying harder.

This is particularly problematic, I think, because of the All-or-Nothing distortion, which does genuinely seem to be common among people with depression: Unless the definition is very clear from the start about where to draw the line, our minds will leap to saying that all statements involving the word “should” are wrong.

I think what therapists are trying to capture with this concept is something like having unrealistic expectations, or focusing too much on what could or should have happened instead of dealing with the actual situation you are in. But many seem unable to articulate that clearly, and instead end up asserting that the entire concept of moral obligation is a cognitive distortion.

There may be a deeper error here as well: The way we study mental illness doesn’t involve enough comparison with the control group. Psychologists are accustomed to asking the question, “How do people with depression think?”; but they are not accustomed to asking the question, “How do people with depression think compared to people who don’t?” If you want to establish that A causes B, it’s not enough to show that those with B have A; you must also show that those who don’t have B also don’t have A.

This is an extreme example for illustration, but suppose someone became convinced that depression is caused by having a liver. They studied a bunch of people with depression, and found that they all had livers; hypothesis confirmed! Clearly, we need to remove the livers, and that will cure the depression.

The best example I can find of a study that actually asked that question looked at nursing students, and found that cognitive distortions explain about 20% of the variance in depression. This is a significant amount—but it still leaves a lot unexplained. And most of the research on depression doesn’t even seem to think to compare against people without depression.

My impression is that some cognitive distortions are genuinely more common among people with depression—but not all of them. There is an ongoing controversy over what’s called the depressive realism effect, which is the finding that in at least some circumstances the beliefs of people with mild depression seem to be more accurate than the beliefs of people with no depression at all. The result is controversial both because it seems to threaten the paradigm that depression is caused by distortions, and because it seems to be very dependent on context; sometimes depression makes people more accurate in their beliefs, other times it makes them less accurate.

Overall, I am inclined to think that most people have a variety of cognitive distortions, but we only tend to notice those distortions when they begin causing distress—such as when they are involved in depression. Human thinking in general seems to be a muddled mess of heuristics, and the wonder is that we function as well as we do.

Does this mean that we should stop trying to remove cognitive distortions? Not at all. Distorted thinking can be harmful even if it doesn’t cause you distress: The obvious example is a fanatical religious or political belief that leads you to harm others. And indeed, recognizing and challenging cognitive distortions is a highly effective treatment for depression.

Actually, I created a simple cognitive distortion worksheet based on the TEAM-CBT approach developed by David Burns, and it has helped me a great deal in a remarkably short time. You can download the worksheet yourself and try it out. Start with a blank page and write down as many negative thoughts as you can, then pick 3-5 that seem particularly extreme or unlikely. Make a copy of the cognitive distortion worksheet for each of those thoughts and follow through it step by step. In particular, do not ignore the step “This thought shows the following good things about me and my core values:”; that one often feels the strangest, but it’s a critical part of what makes the TEAM-CBT approach better than conventional CBT.

So yes, we should try to challenge our cognitive distortions. But the mere fact that a thought is distressing doesn’t imply that it is wrong, and giving up on the entire concept of “should” and “ought” is throwing out a lot of babies with that bathwater.

We should be careful about labeling any thoughts that depressed people have as cognitive distortions—and “should statements” is a clear example where many psychologists have overreached in what they characterize as a distortion.

What meritocracy trap?

Nov 1 JDN 2459155

So I just finished reading The Meritocracy Trap by Daniel Markovits.

The basic thesis of the book is that America’s rising inequality is not due to a defect in our meritocratic ideals, but is in fact their ultimate fruition. Markovits implores us to reject the very concept of meritocracy, and replace it with… well, something, and he’s never very clear about exactly what.

The most frustrating thing about reading this book is trying to figure out where Markovits draws the line for “elite”. He rapidly jumps between talking about the upper quartile, the upper decile, the top 1%, and even the top 0.1% or top 0.01% while weaving his narrative. The upper quartile of the US contains 75 million people; the top 0.1% contains only 300,000. The former is the size of Germany, the latter the size of Iceland (which has fewer people than Long Beach). Inequality which concentrates wealth in the top quartile of Americans is a much less serious problem than inequality which concentrates wealth in the top 0.1%. It could still be a problem—those lower three quartiles are people too—but it is definitely not nearly as bad.

I think it’s particularly frustrating to me personally, because I am an economist, which means both that such quantitative distinctions are important to me, and that whether or not I myself am in this “elite” depends upon which line you are drawing. Do I have a post-graduate education? Yes. Was I born into the upper quartile? Not quite, but nearly. Was I raised by married parents in a stable home? Certainly. Am I in the upper decile and working as a high-paid professional? Hopefully I will be soon. Will I enter the top 1%? Maybe, maybe not. Will I join the top 0.1%? Probably not. Will I ever be in the top 0.01% and a captain of industry? Almost certainly not.

So, am I one of the middle class who are suffering alienation and stagnation, or one of the elite who are devouring themselves with cutthroat competition? Based on BLS statistics for economists and the jobs I’ve been applying to, my long-term household income is likely to be about 20-50% higher than my parents’; this seems like neither the painful stagnation he attributes to the middle class nor the unsustainable skyrocketing of elite incomes. (Even 50% in 30 years is only about 1.4% per year, roughly our average rate of real GDP growth.) Marxists would no doubt call me petit bourgeois; but isn’t that sort of the goal? Don’t we want as many people as possible to live comfortable upper-middle class lives in white-collar careers?
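
For the record, the annualized figure is just the 30th root of the total growth:

$$1.5^{1/30} = e^{\ln(1.5)/30} \approx 1.0136,$$

or about 1.4% per year.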

Markovits characterizes—dare I say caricatures—the habits of the middle-class versus the elite, and once again I and most people I know cross-cut them: I spend more time with friends than family (elite), but I cook familiar foods, not fancy dinners (middle); I exercise fairly regularly and don’t watch much television (elite) but play a lot of video games and sleep a lot as well (middle). My web searches involve technology and travel (elite), but also chronic illness (middle). I am a donor to Amnesty International (elite) but also play tabletop role-playing games (middle). I have a functional, inexpensive car (middle) but a top-of-the-line computer (elite)—then again that computer is a few years old now (middle). Most of the people I hang out with are well-educated (elite) but struggling financially (middle), civically engaged (elite) but pessimistic (middle). I rent my apartment and have a lot of student debt (middle) but own stocks (elite). (The latter seemed like a risky decision before the pandemic, but as stock prices have risen and student loan interest was put on moratorium, it now seems positively prescient.) So which class am I, again?

I went to public school (middle) but have a graduate degree (elite). I grew up in Ann Arbor (middle) but moved to Irvine (elite). Then again my bachelor’s was at a top-10 institution (elite) but my PhD will be at only a top-50 (middle). The beautiful irony there is that the top-10 institution is the University of Michigan and the top-50 institution is the University of California, Irvine. So I can’t even tell which class each of those events is supposed to represent! Did my experience of Ann Arbor suddenly shift from middle class to elite when I graduated from public school and started attending the University of Michigan—even though about a third of my high school cohort did exactly that? Was coming to UCI an elite act because it’s a PhD in Orange County, or a middle-class act because it’s only a top-50 university?

If the gap between these two classes is such a wide chasm, how am I straddling it? I honestly feel quite confident in characterizing myself as precisely the upwardly-mobile upper-middle class that Markovits claims no longer exists. Perhaps we’re rarer than we used to be; perhaps our status is more precarious; but we plainly aren’t gone.

Markovits keeps talking about “radical differences” “not merely in degree but in kind” between “subordinate” middle-class workers and “superordinate” elite workers, but if the differences are really that stark, why is it so hard to tell which group I’m in? From what I can see, the truth seems less like a sharp divide between middle-class and upper-class, and more like an increasingly steep slope from middle-class to upper-middle class to upper-class to rich to truly super-rich. If I had to put numbers on this, I’d say annual household incomes of about $50,000, $100,000, $200,000, $400,000, $1 million, and $10 million respectively. (And yet perhaps I should add more categories: Even someone who makes $10 million a year has only pocket change next to Elon Musk or Jeff Bezos.) The slope has gotten steeper over time, but it hasn’t (yet?) turned into a sharp cliff the way Markovits describes. America’s Lorenz curve is clearly too steep, but it doesn’t have a discontinuity as far as I can tell.
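
That continuity claim is the kind of thing you can check on any income sample. Here is a minimal Python sketch; the income counts are invented to echo the brackets above, not real data:

```python
import numpy as np

def lorenz_curve(incomes):
    # Sort incomes, then compute cumulative shares of people and of income.
    x = np.sort(np.asarray(incomes, dtype=float))
    cum_pop = np.arange(1, len(x) + 1) / len(x)
    cum_inc = np.cumsum(x) / x.sum()
    return cum_pop, cum_inc

def gini(incomes):
    # Gini coefficient = 1 minus twice the area under the Lorenz curve.
    pop, inc = lorenz_curve(incomes)
    area = np.trapz(np.concatenate(([0.0], inc)), np.concatenate(([0.0], pop)))
    return 1 - 2 * area

# Hypothetical household incomes echoing the brackets above (not real data):
sample = ([50_000] * 50 + [100_000] * 30 + [200_000] * 15
          + [400_000] * 4 + [10_000_000] * 1)
print(f"Gini = {gini(sample):.2f}")
# The curve gets steep near the top, but it is continuous: no chasm, no cliff.
```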

Some of the inequalities Markovits discusses are genuine, but don’t seem to be particularly related to meritocracy. The fact that students from richer families go to better schools indeed seems unjust, but the problem is clearly not that the rich schools are too good (except maybe at the very top, where truly elite schools seem a bit excessive—five-figure preschool tuition?), but that the poor schools are not good enough. So it absolutely makes sense to increase funding for poor schools and implement various reforms, but this is hardly a radical notion—nor is it in any way anti-meritocratic. Providing more equal opportunities for the poor to raise their own station is what meritocracy is all about.

Other inequalities he objects to seem, if not inevitable, far too costly to remove: Educated people are better parents, who raise their children in ways that make them healthier, happier, and smarter? No one is going to apologize for being a good parent, much less stop doing so because you’re concerned about what it does to inequality. If you have some ideas for how we might make other people into better parents, by all means let’s hear them. But I believe I speak for the entire upper-middle class when I say: when I have kids of my own, I’m going to read to them, I’m not going to spank them, and there’s not a damn thing you can do to change my mind on either front. Quite frankly, this seems like a heavy-handed satire of egalitarianism, right out of Harrison Bergeron: Let’s make society equal by forcing rich people to neglect and abuse their kids as much as poor people do! My apologies to Vonnegut: I thought you were ridiculously exaggerating, but apparently some people actually think like this.

This is closely tied with the deepest flaw in the argument: The meritocratic elite are actually more qualified. It’s easy to argue that someone like Donald Trump shouldn’t rule the world; he’s a deceitful, narcissistic, psychopathic, incompetent buffoon. (The only baffling part is that 40% of American voters apparently disagree.) But it’s a lot harder to see why someone like Bill Gates shouldn’t be in charge of things: He’s actually an extremely intelligent, dedicated, conscientious, hard-working, ethical, and competent individual. Does he deserve $100 billion? No, for reasons I’ve talked about before. But even he knows that! He’s giving most of it away to highly cost-effective charities! Bill Gates alone has saved several million lives by his philanthropy.

Markovits tries to argue that the merits of the meritocratic elite are arbitrary and contextual, like the alleged virtues of the aristocratic class: “The meritocratic virtues, that is, are artifacts of economic inequality in just the fashion in which the pitching virtues are artifacts of baseball.” (p. 264) “The meritocratic achievement commonly celebrated today, no less than the aristocratic virtue acclaimed in the ancien regime, is a sham.” (p. 268)

But it’s pretty hard for me to see how things like literacy, knowledge of history and science, and mathematical skill are purely arbitrary. Even the highly specialized skills of a quantum physicist, software engineer, or geneticist are clearly not arbitrary. Not everyone needs to know how to solve the Schrodinger equation or how to run a polymerase chain reaction, but our civilization greatly benefits from the fact that someone does. Software engineers aren’t super-productive because of high inequality; they are super-productive because they speak the secret language of the thinking machines. I suppose some of the skills involved in finance, consulting, and law are arbitrary and contextual; but he makes it sound like the only purpose graduate school serves is in teaching us table manners.

Precisely by attacking meritocracy, Markovits renders his own position absurd. So you want less competent people in charge? You want people assigned to jobs they’re not good at? You think businesses should go out of their way to hire employees who will do their jobs worse? Had he instead set out to show how American society fails at achieving its meritocratic ideals—indeed, failing to provide equality of opportunity for the poor is probably the clearest example of this—he might have succeeded. But instead he tries to attack the ideals themselves, and fails miserably.

Markovits avoids the error that David Graeber made: Graeber sees that there are many useless jobs, but doesn’t seem to have a clue why these jobs exist (and turns to quite foolish Marxian conspiracy theories to explain it). Markovits understands that these jobs are profitable for the firms that create them, but unproductive for society as a whole. He is right; this is precisely what virtually the entirety of finance, sales, advertising, and corporate law consists of. Most people in our elite work very hard with great skill and competence, and produce great profits for the corporations that employ them, all while producing very little of genuine societal value. But I don’t see how this is a flaw in meritocracy per se.

Nor does Markovits stop at accusing employment of being rent-seeking; he takes aim at education as well: “when the rich make exceptional investments in schooling, this does reduce the value of ordinary, middle-class training and degrees. […] Meritocratic education inexorably engenders a wasteful and destructive educational arms race, which ultimately benefits no one, not even the victors.” (p. 153) I don’t doubt that education is in part such a rent-seeking arms race, and it’s worthwhile to try to minimize that. But education is not entirely rent-seeking! At the very least, is there not genuine value in teaching children to read and write and do arithmetic? Perhaps by the time we get to calculus or quantum physics or psychopathology we have reached diminishing returns for most students (though clearly at least some people get genuine value out of such things!), but education does not consist entirely of signaling or rent-seeking (and nor do “sheepskin effects” prove otherwise).

My PhD may be less valuable to me than it would be to someone in my place 40 years ago, simply because there are more people with PhDs now and thus I face steeper competition. Then again, perhaps not, as the wage premium for college and postgraduate education has been increasing, not decreasing, over that time period. (How much of that wage premium is genuine social benefit and how much is rent-seeking is difficult to say.) In any case it’s definitely still valuable. I have acquired many genuine skills, and will in fact be able to be genuinely more productive as well as compete better in the labor market than I would have without it. Some parts of it have felt like a game where I’m just trying to stay ahead of everyone else, but it hasn’t all been that. A world where nobody had PhDs would be a world with far fewer good scientists and far slower technological advancement.

Abandoning meritocracy entirely would mean that we no longer train people to be more productive or match people to the jobs they are most qualified to do. Do you want a world where surgery is not done by the best surgeons, where airplanes are not flown by the best pilots? This necessarily means less efficient production and an overall lower level of prosperity for society as a whole. The most efficient way may not be the best way, but it’s still worth noting that it’s the most efficient way.

Really, is meritocracy the problem, or is it something else?

Markovits is clearly right that something is going wrong with American society: Our inequality is much too high, and our job market is much too cutthroat. I can’t even relate to his description of what the job market was like in the 1960s (“Old Economy Steve” has it right): “Even applicants for white-collar jobs received startlingly little scrutiny. For most midcentury workers, getting a job did not involve any application at all, in the competitive sense of the term.” (p. 203)

In fact, if anything he seems to understate the difference across time, perhaps because it lets him overstate the difference across class (p. 203):

Today, by contrast, the workplace is methodically arranged around gradations of skill. Firms screen job candidates intensively at hiring, and they then sort elite and non-elite workers into separate physical spaces.

Only the very lowest-wage employers, seeking unskilled workers, hire casually. Middle-class employers screen using formal cognitive tests and lengthy interviews. And elite employers screen with urgent intensity, recruiting from only a select pool and spending millions of dollars to probe applicants over several rounds of interviews, lasting entire days.

Today, not even the lowest-wage employers hire casually! Have you ever applied to work at Target? There is a personality test you have to complete, which I presume is designed to test your reliability as an obedient corporate drone. Never in my life have I gotten a job that didn’t involve either a lengthy application process or some form of personal connection—and I hate to admit it, but usually the latter. It is literally now harder to get a job as a cashier at Target than it was to get a job as an engineer at Ford 60 years ago.

But I still can’t shake the feeling that meritocracy is not exactly what’s wrong here. The problem with the sky-high compensation packages at top financial firms isn’t that they are paid to people who are really good at their jobs; it’s that those jobs don’t actually accomplish anything beneficial for society. Where elite talent and even elite compensation is combined with genuine productivity, such as in science and engineering, it seems unproblematic (and I note that Markovits barely even touches on these industries, perhaps because he sees they would undermine his argument). The reason our economic growth seems to have slowed as our inequality has massively surged isn’t that we are doing too good a job of rewarding people for being productive.

Indeed, it seems like the problem may be much simpler: Labor supply exceeds labor demand.

Take a look at this graph from the Federal Reserve Bank of San Francisco:

[Beveridge_curve_data.png]

This graph shows the relationship over time between unemployment and job vacancies. As you can see, they are generally inversely related: More vacancies means less unemployment. I have drawn in a green line which indicates the cutoff between having more vacancies than unemployment—upper left—and having more unemployment than vacancies—lower right. We have almost always been in the state of having more unemployment than we have vacancies; notably, the mid-1960s were one of the few periods in which we had significantly more vacancies than unemployment.
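
If you want to draw something like this yourself, here is a rough sketch. It assumes you have pandas_datareader installed; UNRATE and JTSJOR are FRED’s civilian unemployment rate and JOLTS job-openings rate, which are not necessarily the exact series behind the SF Fed’s figure:

```python
import matplotlib.pyplot as plt
from pandas_datareader import data as pdr

# Monthly unemployment rate (UNRATE) and job-openings rate (JTSJOR) from FRED.
df = pdr.DataReader(["UNRATE", "JTSJOR"], "fred", start="2000-12-01").dropna()

fig, ax = plt.subplots()
ax.scatter(df["UNRATE"], df["JTSJOR"], s=10, alpha=0.5)
# The green 45-degree line marks the cutoff: above it, vacancies exceed unemployment.
lim = max(df["UNRATE"].max(), df["JTSJOR"].max())
ax.plot([0, lim], [0, lim], color="green", linewidth=1)
ax.set_xlabel("Unemployment rate (%)")
ax.set_ylabel("Job openings rate (%)")
ax.set_title("Beveridge curve")
plt.show()
```

Note that JOLTS data only begins in December 2000, so this sketch cannot recover the mid-1960s points from the original figure.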

For decades we’ve been instituting policies to try to give people “incentives to work”; but there is no shortage of labor in this country. We seem to have plenty of incentives to work—what we need are incentives to hire people and pay them well.

Indeed, perhaps we need incentives not to work—like a basic income or an expanded social welfare system. Thanks to automation, productivity is now astonishingly high, and yet we work ourselves to death instead of enjoying leisure.

And of course there are various other policy changes that have made our inequality worse—chiefly the dramatic drops in income tax rates at the top brackets that occurred under Reagan.

In fact, many of the specific suggestions Markovits makes—which, much to my chagrin, he waits nearly 300 pages to even mention—are quite reasonable, or even banal: He wants to end tax deductions for alumni donations to universities and require universities to enroll more people from lower income brackets; I could support that. He wants to regulate finance more stringently, eliminate most kinds of complex derivatives, harmonize capital gains tax rates to ordinary income rates, and remove the arbitrary cap on payroll taxes; I’ve been arguing for all of those things for years. What about any of these policies is anti-meritocratic? I don’t see it.

More controversially, he wants to try to re-organize production to provide more opportunities for mid-skill labor. In some industries I’m not sure that’s possible: The 10X programmer is a real phenomenon, and even mediocre programmers and engineers can make software and machines that are a hundred times as productive as doing the work by hand would be. But some of his suggestions make sense, such as policies favoring nurse practitioners over specialist doctors and legal secretaries over bar-certified lawyers. (And please, please reform the medical residency system! People die from the overwork caused by our medical residency system.)

But I really don’t see how not educating people or assigning people to jobs they aren’t good at would help matters—which means that meritocracy, as I understand the concept, is not to blame after all.

How to be a good writer

Oct 25 JDN 2459148

“A writer is someone for whom writing is more difficult than it is for other people.”
~ Thomas Mann

“You simply sit down at the typewriter, open your veins, and bleed.”
~ Red Smith

Why is it so difficult to write well? Why is it that those of us who write the most often find it the most agonizing?

My guess is that many other art forms are similar, but writing is what I know best.

I have come to realize that there are four major factors which determine the quality of someone’s writing, and that the pain and challenge of writing come from the fact that these factors are not very compatible with one another.

The first is talent. To a certain degree, one can be born a better or worse writer, or become so through forces not of one’s own making. This one costs nothing to get if you already have it, but if you don’t have it, you can’t really acquire it. If you do lack talent, that doesn’t mean you can’t write; but it does limit how successful you are likely to be at writing. (Then again, some very poorly-written books have made some very large sums of money!) It’s also very difficult to know whether you really have talent; people tell me I do, so I suppose I believe them.

The second is practice. You must write and keep on writing. You must write many things in many contexts, and continue to write despite various pressures and obstacles trying to stop you from writing. Reading is also part of this process, as we learn new ways to use words by seeing how others have used them. In fact, you should read more words than you write.

The third is devotion. If you are to truly write well, you must pour your heart and soul into what you write. I can tell fairly quickly whether someone is serious about writing or not by seeing how they react to the metaphor I like to use: “I carve off shards of my soul and assemble them into robots that I release into the world; and when the robots fail, I wonder whether I have assembled them incorrectly, or if there is something fundamentally wrong with my soul itself.” Most people react with confusion. Serious writers nod along in agreement.

The fourth is criticism. You must seek out criticism from a variety of sources, you must accept that criticism, and you must apply it in improving your work in the future. You must avoid becoming defensive, but you must also recognize that disagreement will always exist. You will never satisfy everyone with what you write. The challenge is to satisfy as much of your target audience as possible.

And therein lies the paradox: For when you have poured your heart and soul into a work, receiving criticism on it can make you want to shut down, to avoid that pain. And thus you stop practicing, and you stop improving.

What can be done about this?

I am told that it helps to “get a thick skin”, but seeing as I’ve spent the better part of my life trying to do that and failed completely, this may not be the most useful advice. Indeed, even if it can be done it may not be worth it: The most thick-skinned people I know of are generally quite incompetent at whatever they do, because they ignore criticism. There are two ways to be a narcissist: One is to be so sensitive to criticism that you refuse to hear it; the other is to be so immune to criticism that it has no effect on you. (The former is “covert narcissism”, the latter is “overt narcissism”.)

One thing that does seem to help is learning to develop some measure of detachment from your work, so that you can take criticism of your work as applying to that work and not to yourself. Usually the robots really are just misassembled, and there’s nothing wrong with your soul.

But this can be dangerous as well: If you detach yourself too much from your work, you lose your devotion to it, and it becomes mechanically polished but emotionally hollow. If you optimize over and over to what other people want, it eventually stops being the work that had meaning for you.

Perhaps what ultimately separates good writers from everyone else is not what they can do, but what they feel they must do: Serious writers feel a kind of compulsion to write, an addiction to transferring thoughts into words. Often they don’t even particularly enjoy it; they don’t “want” to write in the ordinary sense of the word. They simply must write, feeling as though they would die or go mad if they were ever forced to stop. It is this compulsion that gets them to persevere in the face of failure and rejection—and the self-doubt that rejection drives.

And if you don’t feel that compulsion? Honestly, maybe you’re better off than those of us who do.

What would a better job market look like?

Sep 13 JDN 2459106

I probably don’t need to tell you this, but getting a job is really hard. Indeed, much harder than it seems like it ought to be.

Having all but completed my PhD, I am now entering the job market. The job market for economists is quite different from the job market most people deal with, and these differences highlight some potential opportunities for improving job matching in our whole economy—which, since employment is such a large part of our lives, could have wide-ranging benefits for our society.

The most obvious difference is that the job market for economists is centralized: Job postings are made through the American Economic Association listing of Job Openings for Economists (often abbreviated AEA JOE); in a typical year about 4,000 jobs are posted there. All of them have approximately the same application deadline, near the end of the year. Then, after applying to various positions, applicants get interviewed in rapid succession, all at the annual AEA conference. Then there is a matching system, where applicants get to send two “signals” indicating their top choices and then offers are made.

This year of course is different, because of COVID-19. The conference has been canceled, with all of its presentations moved online; interviews will also be conducted online. Perhaps more worrying, the number of postings has been greatly reduced, and based on past trends may be less than half of the usual number. (The number of applicants may also be reduced, but it seems unlikely to drop as much as the number of postings does.)

There are a number of flaws in even this system. First, it’s too focused on academia; very few private-sector positions use the AEA JOE system, and almost no government positions do. So those of us who are not so sure we want to stay in academia forever end up needing to deal with both this system and the conventional system in parallel. Second, I don’t understand why they use this signaling system and not a deferred-acceptance matching algorithm. I should be able to indicate more about my preferences than simply what my top two choices are—particularly when most applicants apply to over 100 positions. Third, it isn’t quite standardized enough—some positions do have earlier deadlines or different application materials, so you can’t simply put together one application packet and send it to everyone at once.
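
For the curious, deferred acceptance is simple enough to sketch in a few lines. Here is a minimal applicant-proposing version in Python; the names, preferences, and capacities are invented for illustration:

```python
def deferred_acceptance(applicant_prefs, employer_prefs, capacity):
    """applicant_prefs: {applicant: [employers, most preferred first]}
    employer_prefs:  {employer: [applicants, most preferred first]}
    capacity:        {employer: number of open slots}"""
    rank = {e: {a: i for i, a in enumerate(p)} for e, p in employer_prefs.items()}
    held = {e: [] for e in employer_prefs}       # tentatively accepted applicants
    next_pick = {a: 0 for a in applicant_prefs}  # index of next employer to try
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_pick[a] >= len(applicant_prefs[a]):
            continue                             # exhausted their list; stays unmatched
        e = applicant_prefs[a][next_pick[a]]
        next_pick[a] += 1
        held[e].append(a)
        held[e].sort(key=lambda x: rank[e].get(x, float("inf")))
        if len(held[e]) > capacity[e]:
            free.append(held[e].pop())           # bump the least-preferred applicant
    return held

print(deferred_acceptance(
    {"Ann": ["UCI", "UM"], "Bob": ["UCI", "UM"]},
    {"UCI": ["Bob", "Ann"], "UM": ["Ann", "Bob"]},
    {"UCI": 1, "UM": 1},
))  # -> {'UCI': ['Bob'], 'UM': ['Ann']}
```

The resulting match is stable: no applicant and employer both prefer each other to what they got. A two-signal system does nothing to guarantee that.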

Still, it’s quite obvious that this system is superior to the decentralized job market that most people deal with. Indeed, this becomes particularly obvious when one is participating in both markets at once, as I am. The decentralized market has a wide range of deadlines, where upon seeing an application you may need to submit to it within that week, or you may have several months to respond. Nearly all applications require a resume, but different institutions will expect different content on it. Different applications may require different materials: Cover letters, references, writing samples, and transcripts are all things that some firms will want and others won’t.

Also, this is just my impression from a relatively small sample, but I feel like the AEA JOE listings are more realistic, in the following sense: They don’t all demand huge amounts of prior experience, and those that do ask for prior experience are either high-level positions where that’s totally reasonable, or are willing to substitute education for experience. For private-sector job openings you basically have to subtract three years from whatever amount of experience they say they require, because otherwise you’d never have anywhere you could apply to. (Federal government jobs are a weird case here; they all say they require a lot of experience at a specific government pay grade, but from talking with those who have dealt with the system before, they are apparently willing to make lots of substitutions—private-sector jobs, education, and even hobbies can sometimes substitute.)

I think this may be because the decentralized market has to some extent unraveled. The job market is the epitome of a matching market; unraveling in a matching market occurs when there is fierce competition for a small number of good candidates or, conversely, a small number of good openings. Each firm has the incentive to make a binding offer earlier than the others, with a short deadline so that candidates don’t have time to shop around. As firms compete with each other, they start making deadlines earlier and earlier until candidates feel like they are in a complete crapshoot: An offer made on Monday might be gone by Friday, and you have no way of knowing if you should accept it now or wait for a better one to come along. This is a Tragedy of the Commons: Given what other firms are doing, each firm benefits from making an earlier binding offer. But once they all make early offers, that benefit disappears and the result just makes the whole system less efficient.
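
The unraveling dynamic itself fits in a toy best-response model. This is entirely my own illustration, not drawn from any particular source: each firm moves its offer one week earlier than the earliest rival, until a feasibility cap binds.

```python
# Dates are measured in weeks *before* the traditional deadline, so a larger
# number means an earlier offer; 'cap' is the earliest feasible offer date.
def unravel(dates, cap=52, max_rounds=200):
    for rounds in range(max_rounds):
        target = max(dates) + 1                # one week earlier than the earliest rival
        new_dates = [min(target, cap) for _ in dates]
        if new_dates == dates:                 # no firm can profitably move earlier
            return dates, rounds
        dates = new_dates
    return dates, max_rounds

print(unravel([0, 1, 2]))  # -> ([52, 52, 52], 50)
```

Everyone ends up at the cap, no candidate is evaluated any better, and the only product is wasted haste.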

The centralization of the AEA JOE market prevents this from happening: Everyone has common deadlines and does their interviews at the same time. Each institution may be tempted to try to break out of the constraints of the centralized market, but they know that if they do, they will be punished by receiving fewer applicants.

The fact that the centralized market is more efficient is likely a large part of why economics PhDs have the lowest unemployment rate of any PhD graduates and nearly the lowest unemployment rate of any job sector whatsoever. In some sense we should expect this: If anyone understands how to make employment work, it should be economists. Noah Smith wrote in 2013 (and I suppose I took it to heart): “If you get a PhD, get an economics PhD.” I think PhD graduates are the right comparison group here: If we looked at the population as a whole, employment rates and salaries for economists look amazing, but that isn’t really fair since it’s so much harder to become an economist than it is to get most other jobs. But I don’t think it’s particularly easier to get a PhD in physics or biochemistry than to get one in economics, and yet economists still have a lower unemployment rate than physicists or biochemists. (Though it’s worth noting that any PhD—yes, even in the humanities—will give you a far lower risk of unemployment than the general population.) The fact that we have AEA JOE and they don’t may be a major factor here.

So, here’s my question: Why don’t we do this in more job markets? It would be straightforward enough to do this for all PhD graduates, at least—actually my understanding is that some other disciplines do have centralized markets similar to the one in economics, but I’m not sure how common this is.

The federal government could relatively easily centralize its own job market as well; maybe not for positions that need to be urgently filled, but anything that can wait several months would be worth putting into a centralized system that has deadlines once or twice a year.

But what about the private sector, which after all is where most people work? Could we centralize that system as well?

It’s worth noting the additional challenges that immediately arise: Many positions need to be filled immediately, and centralization would make that impossible. There are thousands of firms that would need to be coordinated (there are at least 100,000 firms in the US with 100 or more employees). There are millions of different jobs to be filled, requiring a variety of different skills. In an average month over 5 million jobs are filled in the United States.

Most people want a job near where they live, so part of the solution might be to centralize only jobs within a certain region, such as a particular metro area. But if we are limited to open positions of a particular type within a particular city, there might not be enough openings at any given time to be worth centralizing. And what about applicants who don’t care so much about geography? Should they be applying separately to each regional market?

Yet even with all this in mind, I think some degree of centralization would be feasible and worthwhile. If nothing else, I think standardizing deadlines and application materials could make a significant difference—it’s far easier to apply to many places if they all use the same application and accept them at the same time.

Another option would be to institute widespread active labor market policies, which are a big part of why #ScandinaviaIsBetter. Denmark especially invests heavily in such programs, which provide training and job matching for unemployed citizens. It is no coincidence that Denmark has kept their unemployment rate under 7% even through the worst of the Great Recession. The US unemployment rate fluctuates wildly with the business cycle, while most of Europe has steadier but higher unemployment. Indeed, the lowest unemployment rates in France over the last 30 years have exceeded the highest rates in Denmark over the same period. Denmark spends a lot on their active labor market programs, but I think they’re getting their money’s worth.

Such a change would make our labor markets more efficient, matching people to jobs that fit them better, increasing productivity and likely decreasing turnover. Wages probably wouldn’t change much, but working in a better job for the same wage is still a major improvement in your life. Indeed, job satisfaction is one of the strongest predictors of life satisfaction, which isn’t too surprising given how much of our lives we spend at work.

Sincerity inflation

Aug 30 JDN 2459092

What is the most saccharine, empty, insincere way to end a letter? “Sincerely”.

Whence such irony? Well, we’ve all been using it for so long that we barely notice it anymore. It’s just the standard way to end a letter now.

This process is not unlike inflation: As more and more dollars get spent, the value of a dollar decreases, and as a word or phrase gets used more and more, its meaning weakens.

It’s hardly just the word “Sincerely” itself that has thus inflated. Indeed, almost any sincere expression of caring can feel empty. We routinely ask strangers “How are you?” when we don’t actually care how they are.

I felt this quite vividly when I was applying to GiveWell (alas, they decided not to hire me). I was trying to express how much I care about GiveWell’s mission to maximize the effectiveness of charity at saving lives, and it was quite hard to find the words. I kept finding myself saying things that anyone could say, whether they really cared or not. Fighting global poverty is nothing less than my calling in life—but how could I say that without sounding obsequious or hyperbolic? Anyone can say that they care about global poverty—and if you asked them, hardly anyone would say that they don’t care at all about saving African children from malaria—but how many people actually give money to the Against Malaria Foundation?

Or think about how uncomfortable it can feel to tell a friend that you care about them. I’ve seen quite a few posts on social media that are sort of scattershot attempts at this: “I love you all!” Since that is obviously not true—you do not in fact love all 286 of your Facebook friends—it has plausible deniability. But you secretly hope that the ones you really do care about will see its truth.

Where is this ‘sincerity inflation’ coming from? It can’t really be from overuse of sincerity in ordinary conversation—the question is precisely why such conversation is so rare.

But there is a clear source of excessive sincerity, and it is all around us: Advertising.

Every product is the “best”. They will all “change your life”. You “need” every single one. Every corporation “supports family”. Every product will provide “better living”. The product could be a toothbrush or an automobile; the ads are never really about the product. They are about how the corporation will make your family happy.

Consider the following hilarious subversion by the Steak-umms Twitter account (which is a candle in the darkness of these sad times; they have lots of really great posts about Coronavirus and critical thinking).

Kevin Farzard (about whom I know almost nothing, though I gather he’s a comedian?) wrote this on Twitter: “I just want one brand to tell me that we are not in this together and their health is our lowest priority”

Steak-umms diligently responded: “Kevin we are not in this together and your health is our lowest priority”

Why is this amusing? Because every other corporation—whose executives surely care less about public health than whatever noble creature runs the Steak-umms Twitter feed—has been saying the opposite: “We are all in this together and your health is our highest priority.”

We are so inundated with this saccharine sincerity by advertisers that we learn to tune it out—we have to, or else we’d go crazy and/or bankrupt. But this has an unfortunate side effect: We tune out expressions of caring when they come from other human beings as well.

Therefore let us endeavor to change this, to express our feelings clearly and plainly to those around us, while continuing to shield ourselves from the bullshit of corporations. (I choose that word carefully: These aren’t lies, they’re bullshit. They aren’t false so much as they are utterly detached from truth.) Part of this means endeavoring to be accepting and supportive when others express their feelings to us, not retreating into the comfort of dismissal or sarcasm. Restoring the value of our sincerity will require a concerted effort from many people acting at once.

For this project to succeed, we must learn to make a sharp distinction between the institutions that are trying to extract profits from us and the people who have relationships with us. This is not to say that human beings cannot lie or be manipulative; of course they can. Trust is necessary for all human relationships, but there is such a thing as too much trust. There is a right amount to trust others you do not know, and it is neither complete distrust nor complete trust. Higher levels of trust must be earned.

But at least human beings are not systematically designed to be amoral and manipulative—which corporations are. A corporation exists to do one thing: Maximize profit for its shareholders. Whatever else a corporation is doing, it is in service of that one ultimate end. Corporations can do many good things; but they sort of do it by accident, along the way toward their goal of maximizing profit. And when those good things stop being profitable, they stop doing them. Keep these facts in mind, and you may have an easier time ignoring everything that corporations say without training yourself to tune out all expressions of sincerity.

Then, perhaps one day it won’t feel so uncomfortable to tell people that we care about them.

Terrible but not likely, likely but not terrible

May 17 JDN 2458985

The human brain is a remarkably awkward machine. It’s really quite bad at organizing data, relying on associations rather than formal categories.

It is particularly bad at negation. For instance, if I tell you that right now, no matter what, you must not think about a yellow submarine, the first thing you will do is think about a yellow submarine. (You may even get the Beatles song stuck in your head, especially now that I’ve mentioned it.) A computer would never make such a grievous error.

The human brain is also quite bad at separation. Daniel Dennett coined a word “deepity” for a particular kind of deep-sounding but ultimately trivial aphorism that seems to be quite common, which relies upon this feature of the brain. A deepity has at least two possible readings: On one reading, it is true, but utterly trivial. On another, it would be profound if true, but it simply isn’t true. But if you experience both at once, your brain is triggered for both “true” and “profound” and yields “profound truth”. The example he likes to use is “Love is just a word”. Well, yes, “love” is in fact just a word, but who cares? Yeah, words are words. But love, the underlying concept it describes, is not just a word—though if it were that would change a lot.

One thing I’ve come to realize about my own anxiety is that it involves a wide variety of different scenarios I imagine in my mind, and broadly speaking these can be sorted into two categories: Those that are likely but not terrible, and those that are terrible but not likely.

In the former category we have things like taking an extra year to finish my dissertation; the mean time to completion for a PhD is over 8 years, so finishing in 6 instead of 5 can hardly be considered catastrophic.

In the latter category we have things like dying from COVID-19. Yes, I’m a male with type A blood and asthma living in a high-risk county; but I’m also a young, healthy nonsmoker living under lockdown. Even without knowing the true fatality rate of the virus, my chances of actually dying from it are surely less than 1%.

But when both of those scenarios are running through my brain at the same time, the first triggers a reaction for “likely” and the second triggers a reaction for “terrible”, and I get this feeling that something terrible is actually likely to happen. And indeed if my probability of dying were as high as my probability of needing a 6th year to finish my PhD, that would be catastrophic.

I suppose it’s a bit strange that the opposite doesn’t happen: I never seem to get the improbability of dying attached to the mildness of needing an extra year. The confusion never seems to trigger “neither terrible nor likely”. Or perhaps it does, and my brain immediately disregards that as not worthy of consideration? It makes a certain sort of sense: An event that is neither probable nor severe doesn’t seem to merit much anxiety.

I suspect that many other people’s brains work the same way, eliding distinctions between different outcomes and ending up with a sort of maximal product of probability and severity.
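
To put hypothetical numbers on that conflation: suppose the likely-but-mild scenario has probability 0.5 and severity 2, while the terrible-but-unlikely one has probability 0.01 and severity 100, on some arbitrary scale of badness. Then

$$0.5 \times 2 = 1, \qquad 0.01 \times 100 = 1, \qquad \text{but} \quad 0.5 \times 100 = 50.$$

Each scenario’s expected harm is the same, but a brain that borrows the probability from one and the severity from the other feels something fifty times worse than either.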

The solution to this is not an easy one: It requires deliberate effort and extensive practice, and benefits greatly from formal training by a therapist. Counter-intuitively, you need to actually focus more on the scenarios that cause you anxiety, and accept the anxiety that such focus triggers in you. I find that it helps to actually write down the details of each scenario as vividly as possible, and review what I have written later. After doing this enough times, you can build up a greater separation in your mind, and more clearly categorize—this one is likely but not terrible, that one is terrible but not likely. It isn’t a cure, but it definitely helps me a great deal. Perhaps it could help you.

Motivation under trauma

May 3 JDN 2458971

Whenever I ask someone how they are doing lately, I get the same answer: “Pretty good, under the circumstances.” There seems to be a general sense—at least among the sort of people I interact with regularly—that our own lives are still proceeding more or less normally, as we watch in horror the crises surrounding us. Nothing in particular is going wrong for us specifically. Everything is fine, except for the things that are wrong for everyone everywhere.

One thing that seems to be particularly difficult for a lot of us is the sense that we suddenly have so much time on our hands, but can’t find the motivation to actually use this time productively. So many hours of our lives were wasted on commuting or going to meetings or attending various events we didn’t really care much about but didn’t want to feel like we had missed out on. But now that we have these hours back, we can’t find the strength to use them well.

This is because we are now, as an entire society, experiencing a form of trauma. One of the most common long-term effects of post-traumatic stress disorder is a loss of motivation. Faced with suffering we have no power to control, we are made helpless by this traumatic experience; and this makes us learn to feel helpless in other domains.

There is a classic experiment about learned helplessness; like many old classic experiments, its ethics are a bit questionable. Though unlike many such experiments (glares at Zimbardo), its experimental rigor was ironclad. Dogs were divided into three groups. Group 1 was just a control, where the dogs were tied up for a while and then let go. Dogs in groups 2 and 3 were placed into a crate with a floor that could shock them. Dogs in group 2 had a lever they could press to make the shocks stop. Dogs in group 3 did not. (They actually gave the group 2 dogs control over the group 3 dogs to make the shock times exactly equal; but the dogs had no way to know that, so as far as they knew the shocks ended at random.)

Later, dogs from groups 2 and 3 were put into another crate, where they no longer had a lever to press, but they could jump over a barrier to a different part of the crate where the shocks wouldn’t happen. The dogs from group 2, who had previously had some control over their own pain, were able to quickly learn to do this. The dogs from group 3, who had previously felt pain apparently at random, had a very hard time learning this, if they could ever learn it at all. They’d just lay there and suffer the shocks, unable to bring themselves to even try to leap the barrier.

The group 3 dogs just knew there was nothing they could do. During their previous experience of the trauma, all their actions were futile, and so in this new trauma they were certain that their actions would remain futile. When nothing you do matters, the only sensible thing to do is nothing; and so they did. They had learned to be helpless.

I think for me, chronic migraines were my first crate. For years of my life there was basically nothing I could do to prevent myself from getting migraines—honestly the thing that would have helped most would have been to stop getting up for high school that started at 7:40 AM every morning. Eventually I found a good neurologist and got various treatments, as well as learned about various triggers and found ways to avoid most of them. (Let me know if you ever figure out a way to avoid stress.) My migraines are now far less frequent than they were when I was a teenager, though they are still far more frequent than I would prefer.

Yet, I think I still have not fully unlearned the helplessness that migraines taught me. Every time I get another migraine despite all the medications I’ve taken and all the triggers I’ve religiously avoided, this suffering beyond my control acts as another reminder of the ultimate caprice of the universe. There are so many things in our lives that we cannot control that it can be easy to lose sight of what we can.

This pandemic is a trauma that the whole world is now going through. And perhaps that unity of experience will ultimately save us—it will make us see the world and each other a little differently than we did before.

There are a few things you can do to reduce your own risk of getting or spreading the COVID-19 infection, like washing your hands regularly, avoiding social contact, and wearing a mask when you go outside. And of course you should do these things. But the truth really is that there is very little any one of us can do to stop this global pandemic. We can watch the numbers tick up almost in real time—as of this writing, 1 million cases and over 50,000 deaths in the US, 3 million cases and over 200,000 deaths worldwide—but there is very little we can do to change those numbers.

Sometimes we really are helpless. The challenge we face is not to let this genuine helplessness bleed over and make us feel helpless about other aspects of our lives. We are currently sitting in a crate with no lever, where the shocks will begin and end beyond our control. But the day will come when we are delivered to a new crate, and given the chance to leap over a barrier; we must find the strength to take that leap.

For now, I think we can forgive ourselves for getting less done than we might have hoped. We’re still not really out of that first crate.

Authoritarianism and Masculinity

Apr 19 JDN 2458957

There has always been a significant difference between men and women voters, at least as long as we have been gathering data—and probably as long as women have been voting, which is just about to hit its centennial in the United States.

But the 2016 and 2018 elections saw the largest gender gaps we’ve ever recorded. Dividing by Presidential administrations: in the elections of the Bush era (2000 to 2006), the gender gap never exceeded 18 percentage points and averaged less than 10 points; in the elections of the Obama era (2008 to 2014), it never exceeded 20 points and averaged about 15 points. In 2018, the gap stood at 23 percentage points.

Indeed, it is quite clear at this point that Trump’s support base comes mainly from White men.

This is far from the only explanatory factor here: Younger voters are much more liberal than older voters, more educated voters are more liberal than less educated voters, and urban voters are much more liberal than rural voters.

But the gender and race gaps are large enough that even if only White men with a college degree had voted, Trump would still have won, and even if only women without a college degree had voted, Trump would have lost. Trumpism is a White male identity movement.

And indeed it seems significant that Trump’s opponent was the first woman to be a US Presidential nominee from a major party.

Why would men be so much more likely to support Trump than women? Well, there’s the fact that Trump has been accused of sexual harassment dozens of times and sexual assault several times. Women are more likely to be victims of such behavior, and men are more likely to be perpetrators of it.

But I think that’s really a symptom of a broader cause, which is that authoritarianism is masculine.

Think about it: Can you even name a woman who was an authoritarian dictator? There have been a few queen tyrants historically, but not many; tyrants are almost always kings. And for all her faults, Margaret Thatcher was assuredly no Joseph Stalin.

Masculinity is tied to power, authority, strength, dominance: All things that authoritarians promise. It doesn’t even seem to matter that it’s always the dictator exerting power and dominance over us, taking away the power and authority we previously had; the mere fact that some man is exerting power and dominance over someone seems to satisfy this impulse. And of course those who support authoritarians always seem to imagine that the dictator will oppress someone else—never me. (“I never thought leopards would eat my face!”)

Conversely, the virtues of democracy, such as equality, fairness, cooperation, and compromise, are coded feminine. This is how toxic masculinity sustains itself: Even being willing to talk about disagreements rather than fighting over them constitutes surrender to the feminine. So the mere fact that I am trying to talk them out of their insanely destructive norms—destructive to themselves as much as to others—proves that I serve the enemy.

I don’t often interact with Trump supporters, because doing so is a highly unpleasant experience. But when I have, certain themes kept recurring: “Trump is a real man”; “Democrats are pussies”; “they [people of color] are taking over our [White people’s] country”; “you’re a snowflake libtard beta cuck”.

Almost all of the content was about identity, particularly masculine and White identity. Virtually none of their defenses of Trump involved any substantive claims about policy, though some did at least reference the relatively good performance of the economy (up until recently—and that they all seem to blame on the “unforeseeable” pandemic, a “Black Swan”; never mind that people actually did foresee it and were ignored). Ironically, they are always the ones complaining about “identity politics”.

And while they would be the last to admit it, I noticed something else as well: Most of these men were deeply insecure about their own masculinity. They kept constantly trying to project masculine dominance, and getting increasingly aggravated when I simply ignored it rather than either submitting or responding with my own displays of dominance. Indeed, they probably perceived me as displaying a kind of masculine dominance: I was just countersignaling instead of signaling, and that’s what made them so angry. They clearly felt deeply envious of the fact that I could simply be secure in my own identity without feeling a need to constantly defend it.

But of course I wasn’t born that way. Indeed, the security I now feel in my own identity was very hard-won through years of agony and despair—necessitated by being a bisexual man in a world that even today isn’t very accepting of us. Even now I’m far from immune to the pressures of masculinity; I’ve simply learned to channel them better and resist their worst effects.

They call us “snowflakes” because they feel fragile, and fear their own fragility. And in truth, they are fragile. Indeed, fragile masculinity is one of the strongest predictors of support for Trump. But it is in the nature of fragile masculinity that pointing it out only aggravates it and provokes an even angrier response. Toxic masculinity is a very well-adapted meme; its capacity to defend itself is morbidly impressive, like the way that deadly viruses spread themselves is morbidly impressive.

This is why I think it is extremely dangerous to mock the size of Trump’s penis (or his hands, metonymically—though empirically, digit ratio slightly correlates with penis size, but overall hand size does not), or to accuse his supporters of likewise having smaller penises. In doing so, you are reinforcing the very same toxic masculinity norms that underlie so much of Trump’s support. And this is even worse if the claim is true: In that case you’re also reinforcing that man’s own crisis of masculine identity.

Indeed, perhaps the easiest way to anger a man who is insecure about his masculinity is to accuse him of being insecure about his masculinity. It’s a bit of a paradox. I have even hesitated to write this post, for fear of triggering the same effect; but I realized that it’s more likely that you, my readers, would trigger it inadvertently, and by warning you I might reduce the overall rate at which it is triggered.

I do not use the word “triggered” lightly; I am talking about a traumatic trigger response. These men have been beaten down their whole lives for not being “manly enough”, however defined, and they lash out by attacking the masculinity of every other man they encounter—thereby perpetuating the cycle of trauma. Stricter norms of masculinity also make coping with trauma more difficult, which is why men who exhibit stricter masculinity are also more likely to suffer PTSD in war. There are years of unprocessed traumatic memories in these men’s brains, and the only way they know to cope with them is to try to inflict them on someone else.

The ubiquity of “cuck” as an insult in the alt-right is also quite notable in this context. It’s honestly a pretty weird insult to throw around casually; it implies knowing all sorts of things about a person’s sexual relationships that you can’t possibly know. (For someone in an openly polyamorous relationship, it’s probably quite amusing.) But it’s a way of attacking masculine identity: If you were a “real man”, your wife wouldn’t be sleeping around. We accuse her of infidelity in order to accuse you of inferiority. (And if your spouse is male? Well then obviously you’re even worse than a “cuck”—you’re a “fag”.) There also seems to be some sort of association that the alt-right made between cuckoldry and politics, as though the election of Obama constitutes America “cheating” on them. I’m not sure whether it bothers them more that Obama is liberal, or that he is Black. Both definitely bother them a great deal.

How do we deal with these men? If we shouldn’t attack their masculinity for fear of retrenchment, and we can’t directly engage them on questions of policy because it means nothing to them, what then should we do? I’m honestly not sure. What these men actually need is years of psychotherapy to cope with their deep-seated traumas; but they would never seek it out, because that, too, is considered unmasculine. Of course you can’t be expected to provide the effect of years of psychotherapy in a single conversation with a stranger. Even a trained therapist wouldn’t be able to do that, nor would they be likely to give actual therapy sessions to angry strangers for free.

What I think we can do, however, is to at least try to refrain from making their condition worse. We can rigorously resist the temptation to throw the same insults back at them, accusing them of having small penises, or being cuckolds, or whatever. We should think of this the way we think of using “gay” as an insult (something I all too well remember from middle school): You’re not merely insulting the person you’re aiming it at, you’re also insulting an entire community of innocent people.

We should even be very careful about directly addressing their masculine insecurity; it may sometimes be necessary, but it, too, is sure to provoke a defensive response. And as I mentioned earlier, if you are a man and you are not constantly defending your own masculinity, they can read that as countersignaling your own superiority. This is not an easy game to win.

But the stakes are far too high for us to simply give up. The fate of America and perhaps even the world hinges upon finding a solution.

Do I want to stay in academia?

Apr 5 JDN 2458945

This is a very personal post. You’re not going to learn any new content today; but this is what I needed to write about right now.

I am now nearly finished with my dissertation. It only requires three papers (which, quite honestly, have very little to do with one another). I just got my second paper signed off on, and my third is far enough along that I can probably finish it in a couple of months.

I feel like I ought to be more excited than I am. Mostly what I feel right now is dread.

Yes, some of that dread is the ongoing pandemic—though I am pleased to report that the global number of cases of COVID-19 has substantially undershot the estimates I made last week, suggesting that most places, at least, are getting the virus under control. The numbers of cases and deaths have about doubled in the past week, which is a lot better than doubling every two days as they did at the start of the pandemic. And that’s all I want to say about COVID-19 today, because I’m sure you’re as tired of the wall-to-wall coverage of it as I am.
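
(To put those doubling times in perspective, here’s the back-of-the-envelope arithmetic—nothing deeper than exponents: a quantity that doubles every $d$ days grows by a factor of $2^{1/d}$ each day, so

\[ 2^{1/2} \approx 1.41 \qquad \text{versus} \qquad 2^{1/7} \approx 1.10, \]

that is, roughly 41% growth per day at the start of the pandemic versus roughly 10% per day now.)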

But most of the dread is about my own life, mainly my career path. More and more I’m finding that the world of academic research just isn’t working for me. The actual research part I like, and I’m good at it; but then it comes time to publish, and the journal system is so fundamentally broken, so agonizingly capricious, and has such ludicrous power over the careers of young academics that I’m really not sure I want to stay in this line of work. I honestly think I’d prefer they just flip a coin when you graduate, and you get a tenure-track job if you get heads. Or maybe journals could roll a 20-sided die for each paper submitted and publish the papers that get 19 or 20—a 10% acceptance rate. At least then the powers that be couldn’t convince themselves that their totally arbitrary and fundamentally unjust selection process was actually deep wisdom selecting the most qualified individuals.

In any case I’m fairly sure at this point that I won’t have any publications in peer-reviewed journals by the time I graduate. It’s possible I still could—I actually still have decent odds with two co-authored papers, at least—but I certainly do not expect to. My chances of getting into a top journal at this point are basically negligible.

If I weren’t trying to get into academia, that fact would be basically irrelevant. I think most private businesses and government agencies are fairly well aware of the deep defects in the academic publishing system, and really don’t put a whole lot of weight on its conclusions. But in academia, publication is everything. Specifically, publication in top journals.

For this reason, I am now seriously considering leaving academia once I graduate. The more contact I have with the academic publishing system the more miserable I feel. The idea of spending another six or seven years desperately trying to get published in order to satisfy a tenure committee sounds about as appealing right now as having my fingernails pulled out one by one.

This would mean giving up on a lifelong dream. It would mean wondering why I even bothered with the PhD, when the first MA—let alone the second—would probably have been enough for most government or industry careers. And it means trying to fit myself into a new mold that I may find I hate just as much for different reasons: A steady 9-to-5 work schedule is a lot harder to sustain when waking up before 10 AM consistently gives you migraines. (In theory, there are ways to get special accommodations for that sort of thing; in practice, I’m sure most employers would drag their feet as much as possible, because in our culture a phase-delayed circadian rhythm is tantamount to being lazy and therefore worthless.)

Or maybe I should aim for a lecturer position, perhaps at a smaller college that isn’t so obsessed with research publication. This would still dull my dream, but would not require abandoning it entirely.

I was asked a few months ago what my dream job is, and I realized: It is almost what I actually have. It is so tantalizingly close to what I am actually headed for that it is painful. The reality is a twisted mirror of the dream.

I want to teach. I want to do research. I want to write. And I get to do those things, yes. But I want to do them without the layers of bureaucracy, without the tiers of arbitrary social status called ‘prestige’, without the hyper-competitive and capricious system of journal publication. Honestly I want to do them without grading or dealing with publishers at all—though I can at least understand why some mechanisms for evaluating student progress and disseminating research are useful, even if our current systems for doing so are fundamentally defective.

It feels as though I have been running a marathon, but was only given a vague notion of the route beforehand. There were a series of flags to follow: This way to the bachelor’s, this way to the master’s, that way to advance to candidacy. Then when I come to the last set of flags, the finish line now visible at the horizon, I see that there is an obstacle course placed in my way, with obstacles I was never warned about, much less trained for. A whole new set of skills, maybe even a whole different personality, is necessary to surpass these new obstacles, and I feel utterly unprepared.

It is as if the last mile of my marathon must be done on horseback, and I’ve never learned to ride a horse—no one ever told me I would need to ride a horse. (Or maybe they did and I didn’t listen?) And now every time I try to mount one, I fall off immediately; and the injuries I sustain seem to be worse every time. The bruises I thought would heal only get worse. The horses I must ride are research journals, and the injuries when I fall are psychological—but no less real, all too real. With each attempt I keep hoping that my fear will fade, but instead it only intensifies.

It’s the same pain, the same fear, that pulled me away from fiction writing. I want to go back, I hope to go back—but I am not strong enough now, and cannot be sure I ever will be. I was told that working in a creative profession meant working hard and producing good output; it turns out it doesn’t mean that at all. A successful career in a creative field actually means satisfying the arbitrary desires of a handful of inscrutable gatekeepers. It means rolling the dice over, and over, and over again, each time a little more painful than the last. And it turns out that this just isn’t something I’m good at. It’s not what I’m cut out for. And maybe it never will be.

An incompetent narcissist would surely fare better than I, willing to re-submit whatever refuse they produce a thousand times because they are certain they deserve to succeed. For, deep down, I never feel that I deserve it. Others tell me I do, and I try to believe them; but the only validation that feels like it will be enough is the kind that comes directly from those gatekeepers, the kind that I can never get. And truth be told, maybe if I do finally get that, it still won’t be enough. Maybe nothing ever will be.

If I knew that it would get easier one day, that the pain would, if not go away, at least retreat to a dull roar I could push aside, then maybe I could stay on this path. But this cannot be the rest of my life. If this is really what it means to have an academic career, maybe I don’t want one after all.

Or maybe it’s not academia that’s broken. Maybe it’s just me.