A new direction

Dec 31 JDN 2460311

CW: Spiders [it’ll make sense in context]

My time at the University of Edinburgh is officially over. For me it was a surprisingly gradual transition: Because of the holiday break, I had already turned in my laptop and ID badge over a week ago, and because of my medical leave, I hadn’t really done much actual work for quite some time. But this is still a momentous final deadline; it’s really, truly, finally over.

I now know with some certainty that leaving Edinburgh early was the right choice, and if anything I should have left sooner or never taken the job in the first place. (It seems I am like Randall Munroe after all.) But what I don’t know is where to go next.

We won’t be starving or homeless. My husband still has his freelance work, and my mother has graciously offered to let us stay in her spare room for a while. We have some savings to draw upon. Our income will be low enough that payments on my student loans will be frozen. We’ll be able to get by, even if I can’t find work for a while. But I certainly don’t want to live like that forever.

I’ve been trying to come up with ideas for new career paths, including ones I would never have considered before. Right now I am considering: going back into academia (but being much choosier about the sort of school and position); moving into government or an international aid agency; re-training to work in software development; doing my own freelance writing (which raises its own choices: fiction or nonfiction? commercial publishing or self-publishing?); publishing our own tabletop games (we have one almost ready for crowdfunding, and another that I could probably finish relatively quickly); opening a game shop or escape room; or even just being a stay-at-home parent. That last option is surely the hardest to achieve financially; and while on the one hand it seems like an awful waste of a PhD, on the other hand it would really prove once and for all that I do understand the sunk cost fallacy, and would therefore be a sign of my ultimate devotion to behavioral economics. The one mainstream option for an econ PhD that I’m not seriously considering is the private sector: If academia was this soul-sucking, I’m not sure I could survive corporate America.

Maybe none of these are yet the right answer. Or maybe some combination is.

What I’m really feeling right now is a deep uncertainty.

Also, fear. Fear of the unknown. Fear of failure. Fear of rejection. Almost any path I could take involves rejection—though of different kinds, and surely some more than others.

I’ve always been deeply and intensely affected by rejection. Some of it comes from formative experiences I had as a child and a teenager; some of it may simply be innate, the rejection-sensitive dysphoria that often comes with ADHD (which I now believe I have, perhaps mildly). (Come to think of it, even those formative experiences may have hit so hard because of my innate predisposition.)

But wherever it comes from, my intense fear of rejection is probably my greatest career obstacle. In today’s economy, just applying for a job—any job—requires bearing dozens of rejections. Openings get hundreds of applicants, so even being fully qualified is no guarantee of anything.

This makes it far more debilitating than most other kinds of irrational fear. I am also hematophobic, but that doesn’t really get in my way all that much; in the normal course of life, one generally tries to avoid bleeding anyway. (Now that MSM can donate blood, it does prevent me from doing that; and I do feel a little bad about that, since there have been blood shortages recently.)

But rejection phobia basically feels like this:

Imagine you are severely arachnophobic, just absolutely terrified of spiders. You are afraid to touch them, afraid to look at them, afraid to be near them, afraid to even think about them too much. (Given how common it is, you may not even have to imagine.)

Now, imagine (perhaps not too vividly, if you are genuinely arachnophobic!) that every job, every job, in every industry, regardless of what skills are required or what the work entails, requires you to first walk through a long hallway which is covered from floor to ceiling in live spiders. This is simply a condition of employment in our society: Everyone must be able to walk through the hallway full of spiders. Some jobs have longer hallways than others, some have more or less aggressive spiders, and almost none of the spiders are genuinely dangerous; but every job, everywhere, requires passing through a hallway of spiders.

That’s basically how I feel right now.

Freelance writing is the most obvious example—we could say this is an especially long hallway with especially large and aggressive spiders. To succeed as a freelance writer requires continually submitting work you have put your heart and soul into, and receiving in response curtly-worded form rejection letters over and over and over, every single time. And even once your work is successful, there will always be critics to deal with.

Yet even a more conventional job, say in academia or government, requires submitting dozens of applications and getting rejected dozens of times. Sometimes it’s also a curt form letter; other times, you make it all the way through multiple rounds of in-depth interviews and still get turned down. The latter honestly stings a lot more than the former, even though it’s in some sense a sign of your competence: they wouldn’t have taken you that far if you were unqualified; they just think they found someone better. (Did they actually? Who knows?) But investing all that effort for zero reward feels devastating.

The other extreme might be becoming a stay-at-home parent. There aren’t as many spiders in this hallway. While biological children aren’t really an option for us, foster agencies really can’t afford to be choosy. Since we don’t have any obvious major red flags, we will probably be able to adopt if we choose to—there will be bureaucratic red tape, no doubt, but not repeated rejections. But there is one very big rejection—one single, genuinely dangerous spider that lurks in a dark corner of the hallway: What if I am rejected by the child? What if they don’t want me as their parent?

Another alternative is starting a business—such as selling our own games, or opening an escape room. Even self-publishing has more of this character than traditional freelance writing. The only direct, explicit sort of rejection we’d have to worry about there is being denied a small business loan; and actually, with my PhD and our good credit, we could reasonably expect to be approved sooner or later. But there is a subtler kind of rejection: What if the market doesn’t want us? What if the sort of games or books (or escape experiences, or whatever) we have to offer just aren’t what the world seems to want? Most startup businesses fail quickly; why should ours be any different? (I wonder if I’d be able to get a small business loan on the grounds that I forecasted only a 50% chance of failing in the first year, instead of the baseline 80%. Somehow, I suspect not.)

I keep searching for a career option with no threat of rejection, and it just… doesn’t seem to exist. The best I can come up with is going off the grid and living as hermits in the woods somewhere. (This sounds pretty miserable for totally different reasons—as well as being an awful, frankly unconscionable waste of my talents.) As long as I continue to live within human society and try to contribute to the world, rejection will rear its ugly head.

Ultimately, I think my only real option is to find a way to cope with rejection—or certain forms of rejection. The hallways full of spiders aren’t going away. I have to find a way to walk through them.

Homeschooling and too much freedom

Nov 19 JDN 2460268

Allowing families to homeschool their children increases freedom, quite directly and obviously. This is a large part of the political argument in favor of homeschooling, and likely a large part of why homeschooling is so popular within the United States in particular.

In the US, about 3% of school-age children are homeschooled. This seems like a small proportion, but it’s enough to have some cultural and political impact, and it’s considerably larger than the proportion who are homeschooled in most other countries.

Moreover, homeschooling rates greatly increased as a result of COVID, and it’s anyone’s guess when, or even whether, they will go back down. I certainly hope they do; here’s why.

A lot of criticism of homeschooling involves academic outcomes: Are the students learning enough English and math? This concern is largely unfounded; statistically, academic outcomes of homeschooled students don’t seem to be any worse than those of public school students, and by some measures they are actually better. Nor is there clear evidence that homeschooled kids are any less developed socially; most of them get that social development through other networks, such as churches and sports teams.

No, my concern is not that they won’t learn enough English and math. It’s that they won’t learn enough history and science. Specifically, the parts of history and science that contradict the religious beliefs of the parents who are homeschooling them.

One way to study this would be to compare test scores by homeschooled kids on, say, algebra and chemistry (which do not directly threaten Christian evangelical beliefs) to those on, say, biology and neuroscience (which absolutely, fundamentally do). Lying somewhere in between are physics (F=ma is no threat to Christianity, but the Big Bang is) and history (Christian nationalists happily teach that Thomas Jefferson wrote the Declaration of Independence, but often omit that he owned slaves). If homeschooled kids are indeed indoctrinated, we should see particular lacunae in their knowledge where the facts contradict their ideology. Unfortunately, I wasn’t able to find any such studies.

But even if their academic outcomes are worse in certain domains, so what? What about the freedom of parents to educate their children how they choose? What about the freedom of children to not be subjected to the pain of public school?

It will come as no surprise to most of you that I did well in school. In almost everything, really: math, science, philosophy, English, and Latin were my best subjects, and I earned basically flawless grades in them. But I also did very well in creative writing, history, art, and theater, and fairly well in music. My only poor performance was in gym class (as I’ve written about before).

It may come as some surprise when I tell you that I did not particularly enjoy school. In elementary school I had few friends—and one of my closest ended up being abusive to me. Middle school I mostly enjoyed—despite the onset of my migraines. High school started out utterly miserable, though it got a little better—a little—once I transferred to Community High School. Throughout high school, I was lonely, stressed, anxious, and depressed most of the time, and had migraine headaches of one intensity or another nearly every single day. (Sadly, most of that is true now as well; but I at least had a period of college and grad school where it wasn’t, and hopefully I will again once this job is behind me.)

I was good at school. I enjoyed much of the content of school. But I did not particularly enjoy school.

Thus, I can quite well understand why it is tempting to say that kids should be allowed to be schooled at home, if that is what they and their parents want. (Of course, a problem already arises there: What if child and parent disagree? Whose choice actually matters? In practice, it’s usually the parent’s.)

On the whole, public school is a fairly toxic social environment: Cliquish, hyper-competitive, stressful, often full of conflict between genders, races, classes, sexual orientations, and of course the school-specific one, nerds versus jocks (I’d give you two guesses which team I was on, but you’re only gonna need one). Public school sucks.

Then again, many of these problems and conflicts persist into adult life—so perhaps it’s better preparation than we care to admit. Maybe it’s better to be exposed to bias and conflict so that you can learn to cope with them, rather than sheltered from them.

But there is a more important reason why we may need public school, why it may even be worth coercing parents and children into that system against their will.

Public school forces you to interact with people different from you.

At a public school, you cannot avoid being thrown in the same classroom with students of other races, classes, and religions. This is of course more true if your school system is diverse rather than segregated—and all the more reason that the persistent segregation of many of our schools is horrific—but it’s still somewhat true even in a relatively homogeneous school. I was fortunate enough to go to a public school in Ann Arbor, where there was really quite substantial diversity. But even where there is less diversity, there is still usually some diversity—if not race, then class, or religion.

Certainly any public school has more diversity than homeschooling, where parents have the power to specifically choose precisely which other families their children will interact with, and will almost always choose those of the same race, class, and—above all—religious denomination as themselves.

The result is that homeschooled children often grow up indoctrinated into a dogmatic, narrow-minded worldview, convinced that the particular beliefs they were raised in are the objectively, absolutely correct ones and all others are at best mistaken and at worst outright evil. They are trained to reject conflict and dissent, to not even expose themselves to other people’s ideas, because those are seen as dangerous—corrupting.

Moreover, for most homeschooling parents—not all, but most—this is clearly the express intent. They want to raise their children in a particular set of beliefs. They want to inoculate them against the corrupting influences of other ideas. They are not afraid of their kids being bullied in school; they are afraid of them reading books that contradict the Bible.

This article has the headline “Homeschooled children do not grow up to be more religious”, yet its core finding is exactly the opposite of that:

The Cardus Survey found that homeschooled young adults were not noticeably different in their religious lives from their peers who had attended private religious schools, though they were more religious than peers who had attended public or Catholic schools.

No more religious than private religious schools!? That’s still very religious. No, the fair comparison is to public schools, which clearly show lower rates of religiosity among the same demographics. (The interesting case is Catholic schools; they, it turns out, also churn out atheists with remarkable efficiency; I credit the Jesuit norm of top-quality liberal education.) This is clear evidence that religious homeschooling does make children more religious, and so does most private religious education.

Another finding in that same article sounds good, but is misleading:

Indiana University professor Robert Kunzman, in his careful study of six homeschooling families, found that, at least for his sample, homeschooled children tended to become more tolerant and less dogmatic than their parents as they grew up.


This is probably just regression to the mean. The parents who give their kids religious homeschooling are largely the most dogmatic and intolerant, so we would expect by sheer chance that their kids were less dogmatic and intolerant—but probably still pretty dogmatic and intolerant. (Also, do I have to point out that n=6 barely even constitutes a study!?) This is like the fact that the sons of NBA players are usually shorter than their fathers—but still quite tall.

Homeschooling is directly linked to a lot of terrible things: Young-Earth Creationism, Christian nationalism, homophobia, and shockingly widespread child abuse.

While most right-wing families don’t homeschool, most homeschooling families are right-wing: Between 60% and 70% of homeschooling families vote Republican in most elections. More left-wing voters are homeschooling now with the recent COVID-driven surge in homeschooling, but the right-wing still retains a strong majority for now.

Of course, there are a growing number of left-wing and non-religious families who use homeschooling. Does this mean that the threat of indoctrination is gone? I don’t think so. I once knew someone who was homeschooled by a left-wing non-religious family and still ended up adopting an extremely narrow-minded extremist worldview—simply a left-wing non-religious one. In some sense a left-wing non-religious narrow-minded extremism is better than a right-wing religious narrow-minded extremism, but it’s still narrow-minded extremism. Whatever such a worldview gets right is mainly by the Stopped Clock Principle. It still misses many important nuances, and is still closed to new ideas and new evidence.

Of course this is not a necessary feature of homeschooling. One absolutely could homeschool children into a worldview that is open-minded and tolerant. Indeed, I’m sure some parents do. But statistics suggest that most do not, and this makes sense: When parents want to indoctrinate their children into narrow-minded worldviews, homeschooling allows them to do that far more effectively than if they had sent their children to public school. Whereas if you want to teach your kids open-mindedness and tolerance, exposing them to a diverse environment makes that easier, not harder.

In other words, the problem is that homeschooling gives parents too much control; in a very real sense, this is too much freedom.

When can freedom be too much? It seems absurd at first. But there are at least two cases where it makes sense to say that someone has too much freedom.

The first is paternalism: Sometimes people really don’t know what’s best for them, and giving them more freedom will just allow them to hurt themselves. This notion is easily abused—it has been abused many times, for example against disabled people and colonized populations. For that reason, we are right to be very skeptical of it when applied to adults of sound mind. But what about children? That’s who we are talking about after all. Surely it’s not absurd to suggest that children don’t always know what’s best for them.

The second is the paradox of tolerance: The freedom to take away other people’s freedom is not a freedom we can afford to protect. And homeschooling that indoctrinates children into narrow-minded worldviews is a threat to other people’s freedom—not only those who will be oppressed by a new generation of extremists, but also the children themselves who are never granted the chance to find their own way.

Both reasons apply in this case: paternalism for the children, the paradox of tolerance for the parents. We have a civic responsibility to ensure that children grow up in a rich and diverse environment, so that they learn open-mindedness and tolerance. This is important enough that we should be willing to impose constraints on freedom in order to achieve it. Democracy cannot survive a citizenry who are molded from birth into narrow-minded extremists. There are parents who want to mold their children that way—and we cannot afford to let them.

From where I’m sitting, that means we need to ban homeschooling, or at least very strictly regulate it.

Time and How to Use It

Nov 5 JDN 2460254

A review of Four Thousand Weeks by Oliver Burkeman

The central message of Four Thousand Weeks: Time and How to Use It seems so obvious in hindsight that it’s difficult to understand why it feels so new and unfamiliar. It’s a much-needed reaction to the obsessive culture of “efficiency” and “productivity” that dominates the self-help genre. Its core message is remarkably simple:

You don’t have time to do everything you want, so stop trying.

I actually think Burkeman understands the problem incorrectly. He argues repeatedly that it is our mortality which makes our lives precious—that it is because we only get four thousand weeks of life that we must use our time well. But this strikes me as just yet more making excuses for the dragon.

Our lives would not be less precious if we lived a thousand years or a million. Indeed, our time would hardly be any less scarce! You still can’t read every book ever written if you live a million years—for every one of those million years, another 500,000 books will be published. You could visit every one of the 10,000 cities in the world, surely; but if you spend a week in each one, by the time you get back to Paris for a second visit, centuries will have passed—I must imagine you’ll have missed quite a bit of change in that time. (And this assumes that our population remains the same—do we really think it would, if humans could live a million years?)
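To make the scarcity point concrete, here is a quick back-of-the-envelope check, using the round numbers from the paragraph above (10,000 cities, roughly 500,000 new books per year); these figures are illustrative assumptions, not precise data:

```python
# Illustrative round numbers from the text, not precise data.
cities = 10_000                 # rough count of cities in the world
weeks_per_year = 52

# A grand tour spending one week in each city:
grand_tour_years = cities / weeks_per_year
print(round(grand_tour_years))  # about 192 years, i.e. roughly two centuries

# New books published during a million-year lifetime:
books_per_year = 500_000
print(books_per_year * 1_000_000)  # 500 billion books you will never get to
```

Even with wildly generous assumptions, the backlog grows far faster than any one reader could clear it.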

Even a truly immortal being that will live until the end of time needs to decide where to be at 7 PM this Saturday.

Yet Burkeman does grasp—and I fear that too many of us do not—that our time is precious, and when we try to do everything that seems worth doing, we end up failing to prioritize what really matters most.

What do most of us spend most of our lives doing? Whatever our bosses tell us to do. Aside from sleeping, the activity that human beings spend the largest chunk of their lives on is working.

This has made us tremendously, mind-bogglingly productive—our real GDP per capita is four times what it was in just 1950, and about eight times what it was in the 1920s. Projecting back further than that is a bit dicier, but assuming even 1% annual growth, it should be about twenty times what it was at the dawn of the Industrial Revolution. We could surely live better than medieval peasants did by working only a few hours per week; yet in fact on average we work more hours than they did—by some estimates, nearly twice as much. Rather than getting the same wealth for 5% of the work, or twice the wealth for 10%, we chose to get 40 times the wealth for twice the work.
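The twenty-fold figure is easy to sanity-check with compound growth; this sketch assumes, as the paragraph does, roughly 1% annual growth sustained over the three centuries or so since the Industrial Revolution began:

```python
def growth_factor(annual_rate: float, years: int) -> float:
    """Total multiple of initial output after compounding annually."""
    return (1 + annual_rate) ** years

# Roughly 300 years at an assumed 1% per year:
print(round(growth_factor(0.01, 300)))  # about 20-fold

# For comparison, quadrupling since 1950 (~73 years) implies
# annual growth of roughly 1.9%, since 1.019 ** 73 is close to 4.
```

Small differences in the assumed rate or time span move the multiple around quite a bit, which is why projecting that far back is, as the author says, “a bit dicier.”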

It would be one thing if all this wealth and productivity actually seemed to make us happy. But does it?

Our physical health is excellent: We are tall, we live long lives—we are smarter, even, than people of the not-so-distant past. We have largely conquered disease as the ancients knew it. Even a ‘catastrophic’ global pandemic today kills a smaller share of the population than would die in a typical year from disease in ancient times. Even many of our most common physical ailments, such as obesity, heart disease, and diabetes, are more symptoms of abundance than poverty. Our higher rates of dementia and cancer are largely consequences of living longer lives—most medieval peasants simply didn’t make it long enough to get Alzheimer’s. I wonder sometimes how ancient people dealt with other common ailments such as migraine and sleep apnea; but my guess is that they basically just didn’t—since treatment was impossible, they learned to live with it. Maybe they consoled themselves with whatever placebo treatments the healers of their local culture offered.

Yet our mental health seems to be no better than ever—and depending on how you measure it, may actually be getting worse over time. Some of the measured increase is surely due to more sensitive diagnosis; but some of it may be a genuine increase—especially as a result of the COVID pandemic. I wasn’t able to find any good estimates of rates of depression or anxiety disorders in ancient or medieval times, so I guess I really can’t say whether this is a problem that’s getting worse. But it sure doesn’t seem to be getting better. We clearly have not solved the problem of depression the way we have solved the problem of infectious disease.

Burkeman doesn’t tell us to all quit our jobs and stop working. But he does suggest that if you are particularly unhappy at your current job (as I am), you may want to quit it and begin searching for something else (as I have). He reminds us that we often get stuck in a particular pattern and underestimate the possibilities that may be available to us.

And he has advice for those who want to stay in their current jobs, too: Do less. Don’t take on everything that is asked of you. Don’t work yourself to the bone. The rewards for working harder are far smaller than our society will tell you, and the costs of burning out are far higher. Do the work that is genuinely most important, and let the rest go.

Unlike most self-help books, Four Thousand Weeks offers very little in the way of practical advice. It’s more like a philosophical treatise, exhorting you to adopt a whole new outlook on time and how you use it. But he does offer a little bit of advice, near the end of the book, in “Ten Tools for Embracing Your Finitude” and “Five Questions”.

The ten tools are as follows:


Adopt a ‘fixed volume’ approach to productivity. Limit the number of tasks on your to-do list. Set aside a particular amount of time for productive work, and work only during that time.

I am relatively good at this one; I work only during certain hours on weekdays, and I resist the urge to work other times.

Serialize, serialize, serialize. Do one major project at a time.

I am terrible at this one; I constantly flit between different projects, leaving most of them unfinished indefinitely. But I’m not entirely convinced I’d do better trying to focus on one in particular. I switch projects because I get stalled on the current one, not because I’m anxious about not doing the others. Unless I can find a better way to break those stalls, switching projects still gets more done than staying stuck on the same one.

Decide in advance what to fail at. Prioritize your life and accept that some things will fail.

We all, inevitably, fail to achieve everything we want to. What Burkeman is telling us to do is choose in advance which achievements we will fail at. Ask yourself: How much do you really care about keeping the kitchen clean and the lawn mowed? If you’re doing these things to satisfy other people’s expectations but you don’t truly care about them yourself, maybe you should just accept that people will frown upon you for your messy kitchen and overgrown lawn.

Focus on what you’ve already completed, not just on what’s left to complete. Make a ‘done list’ of tasks you have completed today—even small ones like “brushed teeth” and “made breakfast”—to remind yourself that you do in fact accomplish things.

I may try this one for a while. It feels a bit hokey to congratulate yourself on making breakfast—but when you are severely depressed, even small tasks like that can in fact feel like an ordeal.

Consolidate your caring. Be generous and kind, but pick your battles.

I’m not very good at this one either. Spending less time on social media has helped; I am no longer bombarded quite so constantly by worthy causes and global crises. Yet I still have a vague sense that I am not doing enough, that I should be giving more of myself to help others. For me this is partly colored by a feeling that I have failed to build a career that would have both allowed me to have direct impact on some issues and also made enough money to afford large donations.

Embrace boring and single-purpose technology. Downgrade your technology to reduce distraction.

I don’t do this one, but I also don’t see it as particularly good advice. Maybe taking Facebook and (the-platform-formerly-known-as-) Twitter off your phone home screen is a good idea. But the reason you go to social media isn’t that they are so easy to access. It’s that you are expected to, and that you try to use them to fill some kind of need in your life—though it’s unclear they ever actually fill it.

Seek out novelty in the mundane. Cultivate awareness and appreciation of the ordinary things around you.

This one is basically a stripped-down meditation technique. It does work, but it’s also a lot harder to do than most people seem to think. It is especially hard to do when you are severely depressed. One technique I’ve learned from therapy that is surprisingly helpful is to replace “I have to” with “I get to” whenever you can: You don’t have to scoop cat litter, you get to because you have an adorable cat. You don’t have to catch the bus to work, you get to because you have a job. You don’t have to make breakfast for your family, you get to because you have a loving family.

Be a ‘researcher’ in relationships. Cultivate curiosity rather than anxiety or judgment.

Human beings are tremendously varied and often unpredictable. If you worry about whether or not people will do what you want, you’ll be constantly worried. And I have certainly been there. It can help to try to take a stance of detachment, where you concern yourself less with getting the right outcome and more with learning about the people you are with. I think this can be taken too far—you can become totally detached from relationships, or you could put yourself in danger by failing to pass judgment on obviously harmful behaviors—but in moderation, it’s surprisingly powerful. The first time I ever enjoyed going to a nightclub, (at my therapist’s suggestion) I went as a social scientist, tasked with observing and cataloguing the behavior around me. I still didn’t feel fully integrated into the environment (and the music was still too damn loud!), but for once, I wasn’t anxious and miserable.

Cultivate instantaneous generosity. If you feel like doing something good for someone, just do it.

I’m honestly not sure whether this one is good advice. I used to follow it much more than I do now. Interacting with the Effective Altruism community taught me to temper these impulses, and instead of giving to every random charity or homeless person that asks for money, instead concentrate my donations into a few highly cost-effective charities. Objectively, concentrating donations in this way produces a larger positive impact on the world. But subjectively, it doesn’t feel as good, it makes people sad, and sometimes it can make you feel like a very callous person. Maybe there’s a balance to be had here: Give a little when the impulse strikes, but save up most of it for the really important donations.

Practice doing nothing.

This one is perhaps the most subversive, the most opposed to all standard self-help advice. Do nothing? Just rest? How can you say such a thing, when you just reminded us that we have only four thousand weeks to live? Yet this is in fact the advice most of us need to hear. We burn ourselves out because we forget how to rest.

I am also terrible at this one. I tend to get most anxious when I have between 15 and 45 minutes of free time before an activity, because 45 minutes doesn’t feel long enough to do anything, and 15 minutes feels too long to do nothing. Logically this doesn’t really make sense: Either you have time to do something, or you don’t. But it can be hard to find good ways to fill that sort of interval, because it requires the emotional overhead of starting and stopping a task.

Then, there are the five questions:

Where in your life or work are you currently pursuing comfort, when what’s called for is a little discomfort?

It seems odd to recommend discomfort as a goal, but I think what Burkeman is getting at is that we tend to get stuck in the comfortable and familiar, even when we would be better off reaching out and exploring into the unknown. I know that for me, finally deciding to quit this job was very uncomfortable; it required taking a big risk and going outside the familiar and expected. But I am now convinced it was the right decision.

Are you holding yourself to, and judging yourself by, standards of productivity or performance that are impossible to meet?

In a word? Yes. I’m sure I am. But this one is also slipperier than it may seem—for how do we really know what’s possible? And possible for whom? If you see someone else who seems to be living the life you think you want, is it just an illusion? Are they really suffering as badly as you? Or do they perhaps have advantages you don’t, which made it possible for them, but not for you? When people say they work 60 hours per week and you can barely manage 20, are they lying? Are you truly not investing enough effort? Or do you suffer from ailments they don’t, which make it impossible for you to commit those same hours?

In what ways have you yet to accept the fact that you are who you are, not the person you think you ought to be?

I think most of us have a lot of ways that we fail to accept ourselves: physically, socially, psychologically. We are never the perfect beings we aspire to be. And constantly aspiring to an impossible ideal will surely drain you. But I also fear that self-acceptance could be a dangerous thing: What if it makes us stop striving to improve? What if we could be better than we are, but we don’t bother? Would you want a murderous psychopath to practice self-acceptance? (Then again, do they already, whether we want them to or not?) How are we to know which flaws in ourselves should be accepted, and which repaired?

In which areas of your life are you still holding back until you feel like you know what you’re doing?

This one cut me very deep. I have several areas of my life where this accusation would be apt, and one in particular where I am plainly guilty as charged: Parenting. In a same-sex marriage, offspring don’t emerge automatically without intervention. If we want to have kids, we must do a great deal of work to secure adoption. And it has been much easier—safer, more comfortable—to simply put off that work, avoid the risk. I told myself we’d adopt once I finished grad school; but then I only got a temporary job, so I put it off again, saying we’d adopt once I found stability in my career. But what if I never find that stability? What if the rest of my career is always this precarious? What if I can always find some excuse to delay? The pain of never fulfilling that lifelong dream of parenthood might continue to gnaw at me forever.

How would you spend your days differently if you didn’t care so much about seeing your actions reach fruition?

This one is frankly useless. I hate it. It’s like when people say “What would you do if you knew you’d die tomorrow?” Obviously, you wouldn’t go to work, you wouldn’t pay your bills, you wouldn’t clean your bathroom. You might devote yourself single-mindedly to a single creative task you hoped to make a legacy, or gather your family and friends to share one last day of love, or throw yourself into meaningless hedonistic pleasure. Those might even be things worth doing, on occasion. But you can’t do them every day. If you knew you were about to die, you absolutely would not live in any kind of sustainable way.

Similarly, if I didn’t care about seeing my actions reach fruition, I would continue to write stories and never worry about publishing them. I would make little stabs at research when I got curious, then give up once it got difficult or boring, and never bother writing the paper. I would continue flitting between a dozen random projects at once and never finish any of them. I might well feel happier—at least until it all came crashing down—but I would get absolutely nothing done.

Above all, I would never apply for any jobs, because applying for jobs is absolutely not about enjoying the journey. If you know for a fact that you won’t get an offer, you’re an idiot to bother applying. That is a task that is only worth doing if I believe that it will yield results—and indeed, a big part of why it’s so hard to bring myself to do it is that I have a hard time maintaining that belief.

If you read the surrounding context, Burkeman actually seems to intend something quite different from the question he actually wrote. He suggests devoting more time to big, long-term projects that require whole communities to complete. He likens this to laying bricks in a cathedral that we will never see finished.

I do think there is wisdom in this. But it isn’t a simple matter of not caring about results. Indeed, if you don’t care at all about whether the cathedral will stand, you won’t bother laying the bricks correctly. In some sense Burkeman is actually asking us to do the opposite: To care more about results, but specifically results that we may never live to see. Maybe he really intends to emphasize the word see—you care about your actions reaching fruition, but not whether or not you’ll ever see it.

Yet this, I am quite certain, is not my problem. When a psychiatrist once asked me, “What do you really want most in life?” I gave a very thoughtful answer: “To be remembered in a thousand years for my contribution to humanity.” (His response was glib: “You can’t control that.”) I still stand by that answer: If I could have whatever I want, no limits at all, three wishes from an all-powerful genie, two of them would be to solve some of the world’s greatest problems, and the third would be for the chance to live my life in a way that I knew would be forever remembered.

But I am slowly coming to realize that maybe I should abandon that answer. That psychiatrist’s answer was far too glib (he was in fact not a very good fit for me; I quickly switched to a different psychiatrist), but maybe it wasn’t fundamentally wrong. It may be impossible to predict, let alone control, whether our lives have that kind of lasting impact—and, almost by construction, most lives can’t.

Perhaps, indeed, I am too worried about whether the cathedral will stand. I only have a few bricks to lay myself, and while I can lay them the best I can, that ultimately will not be what decides the fate of the cathedral. A fire, or an earthquake, or simply some other bricklayer’s incompetence, could bring about its destruction—and there is nothing at all I can do to prevent that.

This post is already getting too long, so I should try to bring it to a close.

As the adage goes, perhaps if I had more time, I’d make it shorter.

On Horror

Oct 29 JDN 2460247

Since this post will go live the weekend before Halloween, the genre of horror seemed a fitting topic.

I must confess, I don’t really get horror as a genre. Generally I prefer not to experience fear and disgust? This can’t be unusual; it’s literally a direct consequence of the evolutionary function of fear and disgust. It’s wanting to be afraid and disgusted that’s weird.

Cracked once came out with a list of “Horror Movies for People Who Hate Horror”, and I found some of my favorite films on it, such as Alien (which is as much sci-fi as horror), The Cabin in the Woods (which is as much satire), and Zombieland (which is a comedy). Other such lists have prominently featured Get Out (which is as much political as it is horrific), Young Frankenstein (which is entirely a comedy), and The Silence of the Lambs (which is horror, at least in large part, but which I didn’t so much enjoy as appreciate as a work of artistry; I watch it the way I look at Guernica). Some such lists include Saw, which I can appreciate on some level—it does have a lot of sociopolitical commentary—but still can’t enjoy (it’s just too gory). I note that none of these lists seem to include Event Horizon, which starts out as a really good sci-fi film, but then becomes so very much horror that I ended up hating it.

In trying to explain the appeal of horror to me, people have likened it to the experience of a roller coaster: Isn’t fear exhilarating?

I do enjoy roller coasters. But the analogy falls flat for me, because, well, my experience of riding a roller coaster isn’t fear—the exhilaration comes directly from the experience of moving so fast, a rush of “This is awesome!” that has nothing to do with being afraid. Indeed, should I encounter a roller coaster that actually made me afraid, I would assiduously avoid it, and wonder if it was up to code. My goal is not to feel like I’m dying; it’s to feel like I’m flying.

And speaking of flying: Likewise, the few times I have had the chance to pilot an aircraft were thrilling in a way it is difficult to convey to anyone who hasn’t experienced it. I think it might be something like what religious experiences feel like. The sense of perspective, looking down on the world below, seeing it as most people never see it. The sense of freedom, of, for once in your life, actually having the power to maneuver freely in all three dimensions. The subtle mix of knowing that you are traveling at tremendous speed while feeling as if you are peacefully drifting along. Astronauts also describe this sort of experience, which no doubt is even more intense for them.

Yet in all that, fear was never my primary emotion, and had it been, it would have undermined the experience rather than enhanced it. The brief moment when our engine stalled flying over Scotland certainly raised my heart rate, but not in a pleasant way. In that moment—objectively brief, subjectively interminable—I spent all of my emotional energy struggling to remain calm. It helped to continually remind myself of what I knew about aerodynamics: Wings want to fly. An airplane without an engine isn’t a rock; it’s a glider. It is entirely possible to safely land a small aircraft on literally zero engine power. Still, I’m glad we got the propeller started again and didn’t have to.

I have also enjoyed classic horror novels such as Dracula and Frankenstein; their artistry is also quite apparent, and reading them as books provides an emotional distance that watching them as films often lacks. I particularly notice this with vampire stories, as I can appreciate the romantic allure of immortality and the erotic tension of forbidden carnal desire—but the sight of copious blood on screen tends to trigger my mild hematophobia.

Yet if fear is the goal, surely having a phobia should only make it stronger and thus better? Instead, the opposite seems to be the pattern: People with a genuine phobia of the subject in question don’t actually enjoy horror films on that subject. Arachnophobes don’t often watch films about giant spiders. Cynophobes are rarely werewolf aficionados. And, indeed, rare is the hematophobe who is a connoisseur of vampire movies.

Moreover, we rarely see horror films about genuine dangers in the world. There are movies about rape, murder, war, terrorism, espionage, asteroid impacts, nuclear weapons and climate change, but (with rare exceptions) they aren’t horror films. They don’t wallow in fear the way that films about vampires, ghosts and werewolves do. They are complex thrillers (Argo, Enemy of the State, Tinker Tailor Soldier Spy, Broken Arrow), police procedurals (most films about rape or murder), heroic sagas (just about every war film), or just fun, light-hearted action spectacles (Armageddon, The Day After Tomorrow). Rather than a loosely-knit gang of helpless horny teenagers, they have strong, brave heroes. Even films about alien invasions aren’t usually horror (Alien notwithstanding); they also tend to be heroic war films. Unlike nuclear war or climate change, alien invasion is a quite unlikely event; but it’s surely more likely than zombies or werewolves.

In other words, when something is genuinely scary, the story is always about overcoming it. There is fear involved, but in the end we conquer our fear and defeat our foes. The good guys win in the end.

I think, then, that enjoyment of horror is not about real fear. Feeling genuinely afraid is unpleasant—as by all Darwinian rights it should be.

Horror is about simulating fear. It’s a kind of brinksmanship: You take yourself to the edge of fear and then back again, because what you are seeing would be scary if it were real, but deep down, you know it isn’t. You can sleep at night after watching movies about zombies, werewolves and vampires, because you know that there aren’t really such things as zombies, werewolves and vampires.

What about the exceptions? What about, say, The Silence of the Lambs? Psychopathic murderers absolutely are real. (Not especially common—but real.) But The Silence of the Lambs only works because of truly brilliant writing, directing, and acting; and part of what makes it work is that it isn’t just horror. It has layers of subtlety, and it crosses genres—it also has a good deal of police procedural in it, in fact. And even in The Silence of the Lambs, at least one of the psychopathic murderers is beaten in the end; evil does not entirely prevail.

Slasher films—which I especially dislike (see above: hematophobia)—seem like they might be a counterexample, in that they genuinely are a common subgenre and they mainly involve psychopathic murderers. But in fact almost all slasher films involve some kind of supernatural element: In Friday the 13th, Jason seems to be immortal. In A Nightmare on Elm Street, Freddy Krueger doesn’t just attack you with a knife; he invades your dreams. Slasher films actually seem to go out of their way to make the killer not real. Perhaps this is because showing helpless people murdered by a realistic psychopath would inspire too much genuine fear.

The terrifying truth is that, more or less at any time, a man with a gun could in fact come and shoot you, and while there may be ways to reduce that risk, there’s no way to make it zero. But that isn’t fun for a movie, so let’s make him a ghost or a zombie or something, so that when the movie ends, you can remind yourself it’s not real. Let’s pretend to be afraid, but never really be afraid.

Realizing that makes me at least a little more able to understand why some people enjoy horror.

Then again, I still don’t.

What is anxiety for?

Sep 17 JDN 2460205

As someone who experiences a great deal of anxiety, I have often struggled to understand what it could possibly be useful for. We have this whole complex system of evolved emotions, and yet more often than not it seems to harm us rather than help us. What’s going on here? Why do we even have anxiety? What even is anxiety, really? And what is it for?

There’s actually an extensive body of research on this, though very few firm conclusions. (One of the best accounts I’ve read, sadly, is paywalled.)

For one thing, there seem to be a lot of positive feedback loops involved in anxiety: Panic attacks make you more anxious, triggering more panic attacks; being anxious disrupts your sleep, which makes you more anxious. Positive feedback loops can very easily spiral out of control, resulting in responses that are wildly disproportionate to the stimulus that triggered them.

A certain amount of stress response is useful, even when the stakes are not life-or-death. But beyond a certain point, more stress becomes harmful rather than helpful. This is the Yerkes-Dodson effect, for which I developed my stochastic overload model (which I still don’t know if I’ll ever publish, ironically enough, because of my own excessive anxiety). Realizing that anxiety can have benefits can also take some of the bite out of having chronic anxiety, and, ironically, reduce that anxiety a little. The trick is finding ways to break those positive feedback loops.

I think one of the most useful insights to come out of this research is the smoke-detector principle, which is a fundamentally economic concept. It sounds quite simple: When dealing with an uncertain danger, sound the alarm if the expected benefit of doing so exceeds the expected cost.

This has profound implications when risk is highly asymmetric—as it usually is. Running away from a shadow or a noise that probably isn’t a lion carries some cost; you wouldn’t want to do it all the time. But it is surely nowhere near as bad as failing to run away when there is an actual lion. Indeed, it might be fair to say that failing to run away from an actual lion counts as one of the worst possible things that could ever happen to you, and could easily be 100 times as bad as running away when there is nothing to fear.

With this in mind, if you have a system for detecting whether or not there is a lion, how sensitive should you make it? Extremely sensitive. You should in fact try to calibrate it so that 99% of the time you experience the fear and want to run away, there is not a lion. Because the 1% of the time when there is one, it’ll all be worth it.
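The smoke-detector calibration can be written out as a toy expected-value comparison. This is purely illustrative: the 100:1 cost ratio and the probabilities below are the hypothetical numbers from the lion example, not estimates of anything real.

```python
# Smoke-detector principle: sound the alarm (flee) whenever the expected
# harm of ignoring the cue exceeds the cost of a false alarm.
# All numbers are hypothetical, chosen to mirror the 100x asymmetry above.

def should_flee(p_lion, cost_false_alarm=1.0, cost_ignored_lion=100.0):
    """Flee iff expected harm from staying exceeds the cost of fleeing."""
    return p_lion * cost_ignored_lion > cost_false_alarm

# With a 100:1 cost asymmetry, the break-even probability is just 1%:
break_even = 1.0 / 100.0

print(should_flee(0.02))   # a 2% chance of a lion is already worth fleeing
print(should_flee(0.005))  # a 0.5% chance is not
```

Calibrated near that 1% break-even point, roughly 99 out of 100 alarms will be false—exactly as the principle prescribes.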

Yet this is far from a complete explanation of anxiety as we experience it. For one thing, there has never been, in my entire life, even a 1% chance that I’m going to be attacked by a lion. Even standing in front of a lion enclosure at the zoo, my chances of being attacked are considerably less than that—for a zoo that allowed 1% of its customers to be attacked would not stay in business very long.

But for another thing, it isn’t really lions I’m afraid of. The things that make me anxious are generally not things that would be expected to do me bodily harm. Sure, I generally try to avoid walking down dark alleys at night, and I look both ways before crossing the street, and those are activities directly designed to protect me from bodily harm. But I actually don’t feel especially anxious about those things! Maybe I would if I actually had to walk through dark alleys a lot, but I don’t, and on the rare occasions when I do, I feel afraid at the time but fine afterward, rather than experiencing persistent, pervasive, overwhelming anxiety. (Whereas, if I’m anxious about reading emails, and I do manage to read emails, I’m usually still anxious afterward.) When it comes to crossing the street, I feel very little fear at all, even though perhaps I should—indeed, it has been remarked that when it comes to the perils of motor vehicles, human beings suffer from a very dangerous lack of fear. We should be much more afraid than we are—and our failure to be afraid kills thousands of people.

No, the things that make me anxious are invariably social: Meetings, interviews, emails, applications, rejection letters. Also parties, networking events, and back when I needed them, dates. They involve interacting with other people—and in particular being evaluated by other people. I never felt particularly anxious about exams, except maybe a little before my PhD qualifying exam and my thesis defenses; but I can understand those who do, because it’s the same thing: People are evaluating you.

This suggests that anxiety, at least of the kind that most of us experience, isn’t really about danger; it’s about status. We aren’t worried that we will be murdered or tortured or even run over by a car. We’re worried that we will lose our friends, or get fired; we are worried that we won’t get a job, won’t get published, or won’t graduate.

And yet it is striking to me that it often feels just as bad as if we were afraid that we were going to die. In fact, in the most severe instances where anxiety feeds into depression, it can literally make people want to die. How can that be evolutionarily adaptive?

Here it may be helpful to remember that in our ancestral environment, status and survival were oft one and the same. Humans are the most social organisms on Earth; I even sometimes describe us as hypersocial, a whole new category of social that no other organism seems to have achieved. We cooperate with others of our species on a mind-bogglingly grand scale, and are utterly dependent upon vast interconnected social systems far too large and complex for us to truly understand, let alone control.

At this historical epoch, these social systems are especially vast and incomprehensible; but at least for most of us in First World countries, they are also forgiving in a way that is fundamentally alien to our ancestors’ experience. It was not so long ago that a failed hunt or a bad harvest would let your family starve unless you could successfully beseech your community for aid—which meant that your very survival could depend upon being in the good graces of that community. But now we have food stamps, so even if everyone in your town hates you, you still get to eat. Of course some societies are more forgiving (Sweden) than others (the United States); and virtually all societies could be even more forgiving than they are. But even the relatively cutthroat competition of the US today has far less genuine risk of truly catastrophic failure than what most human beings lived through for most of our existence as a species.

I have found this realization helpful—hardly a cure, but helpful, at least: What are you really afraid of? When you feel anxious, your body often tells you that the stakes are overwhelming, life-or-death; but if you stop and think about it, in the world we live in today, that’s almost never true. Failing at one important task at work probably won’t get you fired—and even getting fired won’t really make you starve.

In fact, we might be less anxious if it were! For our bodies’ fear system seems to be optimized for the following scenario: An immediate threat with high chance of success and life-or-death stakes. Spear that wild animal, or jump over that chasm. It will either work or it won’t, you’ll know immediately; it probably will work; and if it doesn’t, well, that may be it for you. So you’d better not fail. (I think it’s interesting how much of our fiction and media involves these kinds of events: The hero would surely and promptly die if he fails, but he won’t fail, for he’s the hero! We often seem more comfortable in that sort of world than we do in the one we actually live in.)

Whereas the life we live in now is one of delayed consequences with low chance of success and minimal stakes. Send out a dozen job applications. Hear back in a week from three that want to interview you. Do those interviews and maybe one will make you an offer—but honestly, probably not. Next week do another dozen. Keep going like this, week after week, until finally one says yes. Each failure actually costs you very little—but you will fail, over and over and over and over.

In other words, we have transitioned from an environment of immediate return to one of delayed return.

The result is that a system which was optimized to tell us never fail or you will die is being put through situations where failure is constantly repeated. I think deep down there is a part of us that wonders, “How are you still alive after failing this many times?” If you had fallen in as many ravines as I have received rejection letters, you would assuredly be dead many times over.

Yet perhaps our brains are not quite as miscalibrated as they seem. Again I come back to the fact that anxiety always seems to be about people and evaluation; it’s different from immediate life-or-death fear. I actually experience very little life-or-death fear, which makes sense; I live in a very safe environment. But I experience anxiety almost constantly—which also makes a certain amount of sense, seeing as I live in an environment where I am being almost constantly evaluated by other people.

One theory posits that anxiety and depression are a dual mechanism for dealing with social hierarchy: You are anxious when your position in the hierarchy is threatened, and depressed when you have lost it. Primates like us do seem to care an awful lot about hierarchies—and I’ve written before about how this explains some otherwise baffling things about our economy.

But I for one have never felt especially invested in hierarchy. At least, I have very little desire to be on top of the hierarchy. I don’t want to be on the bottom (for I know how such people are treated); and I strongly dislike most of the people who are actually on top (for they’re most responsible for treating the ones on the bottom that way). I also have ‘a problem with authority’; I don’t like other people having power over me. But if I were to somehow find myself ruling the world, one of the first things I’d do is try to figure out a way to transition to a more democratic system. So it’s less like I want power, and more like I want power to not exist. Which means that my anxiety can’t really be about fearing to lose my status in the hierarchy—in some sense, I want that, because I want the whole hierarchy to collapse.

If anxiety involved the fear of losing high status, we’d expect it to be common among those with high status. Quite the opposite is the case. Anxiety is more common among people who are more vulnerable: Women, racial minorities, poor people, people with chronic illness. LGBT people have especially high rates of anxiety. This suggests that it isn’t high status we’re afraid of losing—though it could still be that we’re a few rungs above the bottom and afraid of falling all the way down.

It also suggests that anxiety isn’t entirely pathological. Our brains are genuinely responding to circumstances. Maybe they are over-responding, or responding in a way that is not ultimately useful. But the anxiety is at least in part a product of real vulnerabilities. Some of what we’re worried about may actually be real. If you cannot carry yourself with the confidence of a mediocre White man, it may be simply because his status is fundamentally secure in a way yours is not, and he has been afforded a great many advantages you never will be. He never had a Supreme Court ruling decide his rights.

I cannot offer you a cure for anxiety. I cannot even really offer you a complete explanation of where it comes from. But perhaps I can offer you this: It is not your fault. Your brain evolved for a very different world than this one, and it is doing its best to protect you from the very different risks this new world engenders. Hopefully one day we’ll figure out a way to get it calibrated better.

Knowing When to Quit

Sep 10 JDN 2460198

At the time of writing this post, I have officially submitted my letter of resignation at the University of Edinburgh. I’m giving them an entire semester of notice, so I won’t actually be leaving until December. But I have committed to my decision now, and that feels momentous.

Since my position here was temporary to begin with, I’m actually only leaving a semester early. Part of me wanted to try to stick it out, continue for that one last semester and leave on better terms. Until I sent that letter, I had that option. Now I don’t, and I feel a strange mix of emotions: Relief that I have finally made the decision, regret that it came to this, doubt about what comes next, and—above all—profound ambivalence.

Maybe it’s the very act of quitting—giving up, being a quitter—that feels bad. Even knowing that I need to get out of here, it hurts to have to be the one to quit.

Our society prizes grit and perseverance. Since I was a child I have been taught that these are virtues. And to some extent, they are; there certainly is such a thing as giving up too quickly.

But there is also such a thing as not knowing when to quit. Sometimes things really aren’t going according to plan, and you need to quit before you waste even more time and effort. And I think I am like Randall Munroe in this regard; I am more inclined to stay when I shouldn’t than to quit when I shouldn’t.

Sometimes quitting isn’t even as permanent as it is made out to be. In many cases, you can go back later and try again when you are better prepared.

In my case, I am unlikely to ever work at the University of Edinburgh again, but I haven’t yet given up on ever having a career in academia. Then again, I am by no means as certain as I once was that academia is the right path for me. I will definitely be searching for other options.

There is a reason we are so enthusiastically sold on the virtue of perseverance. Part of how our society sells the false narrative of meritocracy is by claiming that people who succeed did so because they tried harder or kept on trying.

This is not entirely false; all other things equal, you are more likely to succeed if you keep on trying. But in some ways that just makes it more seductive and insidious.

For the real reason most people hit home runs in life is that they were born on third base. The vast majority of success in life is determined by circumstances entirely outside individual control.

Even having the resources to keep trying is not guaranteed for everyone. I remember a great post on social media pointing out that entrepreneurship is like one of those carnival games:

Entrepreneurship is like one of those carnival games where you throw darts or something.

Middle class kids can afford one throw. Most miss. A few hit the target and get a small prize. A very few hit the center bullseye and get a bigger prize. Rags to riches! The American Dream lives on.

Rich kids can afford many throws. If they want to, they can try over and over and over again until they hit something and feel good about themselves. Some keep going until they hit the center bullseye, then they give speeches or write blog posts about ‘meritocracy’ and the salutary effects of hard work.

Poor kids aren’t visiting the carnival. They’re the ones working it.

The odds of succeeding on any given attempt are slim—but you can always pay for more tries. A middle-class person can afford to try once; mostly those attempts will fail, but a few will succeed and then go on to talk about how their brilliant talent and hard work made the difference. A rich person can try as many times as they like, and when they finally succeed, they can credit their success to perseverance and a willingness to take risks. But the truth is, they didn’t have any exceptional reserves of grit or courage; they just had exceptional reserves of money.
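The carnival-game arithmetic is just the probability of at least one success in repeated independent tries. A quick sketch (the 5% per-throw success rate is an assumption for illustration, not a real figure):

```python
# Chance of at least one success in n independent tries, each with
# per-try success probability p: 1 - (1 - p)**n.
# The 5% rate is purely illustrative.

def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

p = 0.05
print(round(p_at_least_one(p, 1), 3))   # one throw: 0.05
print(round(p_at_least_one(p, 30), 3))  # thirty throws: about 0.785
```

The per-throw odds are identical for everyone; the only thing separating the one-throw player from the thirty-throw player is the bankroll.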

In my case, the resource I was depleting was not money (if anything, I’m probably losing out financially by leaving early, though that very much depends on how the job market goes for me). It was something far more valuable. I was whittling away at my own mental health, depleting my energy, draining my motivation. The resource I was exhausting was my very soul.

I still have trouble articulating why it has been so painful for me to work here. It’s so hard to point to anything in particular.

The most obvious downsides were things I knew at the start: The position is temporary, the pay is mediocre, and I had to move across the Atlantic and live thousands of miles from home. And I had already heard plenty about the publish-or-perish system of research publication.

Other things seem like minor annoyances: They never did give me a good office (I have to share it with too many people, and there isn’t enough space, so in fact I rarely use it at all). They were supposed to assign me a faculty mentor and never did. They kept rearranging my class schedule and not telling me things until immediately beforehand.

I think what it really comes down to is I didn’t realize how much it would hurt. I knew that I was moving across the Atlantic—but I didn’t know how isolated and misunderstood I would feel when I did. I knew that publish-or-perish was a problem—but I didn’t know how agonizing it would be for me in particular. I knew I probably wouldn’t get very good mentorship from the other faculty—but I didn’t realize just how bad it would be, or how desperately I would need that support I didn’t get.

I either underestimated the severity of these problems, or overestimated my own resilience. I thought I knew what I was going into, and I thought I could take it. But I was wrong. I couldn’t take it. It was tearing me apart. My only answer was to leave.

So, leave I shall. I have now committed to doing so.

I don’t know what comes next. I don’t even know if I’ve made the right choice. Perhaps I’ll never truly know. But I made the choice, and now I have to live with it.

What am I without you?

Jul 16 JDN 2460142

When this post goes live, it will be my husband’s birthday. He will probably read it before that, as he follows my Patreon. In honor of his birthday, I thought I would make romance the topic of today’s post.

In particular, there’s a certain common sentiment that is usually viewed as romantic, which I in fact think is quite toxic. This is the notion that “Without you, I am nothing”—that in the absence of the one we love, we would be empty or worthless.

Here is this sentiment being expressed by various musicians:

I’m all out of love,
I’m so lost without you.
I know you were right,
Believing for so long.
I’m all out of love,
What am I without you?

– “All Out of Love”, Air Supply


Well what am I, what am I without you?
What am I without you?
Your love makes me burn.
No, no, no
Well what am I, what am I without you?
I’m nothing without you.
So let love burn.

– “What am I without you?”, Suede

Without you, I’m nothing.
Without you, I’m nothing.
Without you, I’m nothing.
Without you, I’m nothing at all.

– “Without you I’m nothing”, Placebo

I’ll be nothin’, nothin’, nothin’, nothin’ without you.
I’ll be nothin’, nothin’, nothin’, nothin’ without you.
Yeah
I was too busy tryna find you with someone else,
The one I couldn’t stand to be with was myself.
‘Cause I’ll be nothin’, nothin’, nothin’, nothin’ without you.

– “Nothing without you”, The Weeknd

You were my strength when I was weak.
You were my voice when I couldn’t speak.
You were my eyes when I couldn’t see.
You saw the best there was in me!
Lifted me up when I couldn’t reach,
You gave me faith ’cause you believed!
I’m everything I am,
Because you loved me.

– “Because You Loved Me”, Celine Dion

Hopefully that’s enough to convince you that this is not a rare sentiment. Moreover, these songs do seem quite romantic, and there are parts of them that still resonate quite strongly for me (particularly “Because You Loved Me”).

Yet there is still something toxic here: Such songs make us lose sight of the self-worth we have independent of our relationships with others. Humans are deeply social creatures, so of course we want to fill our lives with relationships, and as well we should. But you are more than your relationships.

Stranded alone on a deserted island, you would still be a person of worth. You would still have inherent dignity. You would still deserve to live.

It’s also unhealthy even from a romantic perspective. Yes, once you’ve found the love of your life and you really do plan to live together forever, tying your identity so tightly to the relationship may not be disastrous—though it could still be unhealthy and promote a cycle of codependency. But what about before you’ve made that commitment? If you are nothing without the one you love, what happens when you break up? Who are you then?

And even if you are with the love of your life, what happens if they die?

Of course our relationships do change who we are. To some degree, our identity is inextricably tied to those we love, and this would probably still be desirable even if it weren’t inevitable. But there must always be part of you that isn’t bound to anyone in particular other than yourself—and if you can’t find that part, it’s a very bad sign.

Now compare a quite different sentiment:

If I didn’t have you to hold me tight,
If I didn’t have you to lie with at night,
If I didn’t have you to share my sighs,
And to kiss me and dry my tears when I cry…
Well, I…
Really think that I would…
Probably…
Have somebody else.

– “If I Didn’t Have You”, Tim Minchin

Tim Minchin is a comedy musician, and the song is very much written in that vein. He doesn’t want you to take it too seriously.

Another song Tim Minchin wrote for his wife, “Beautiful Head”, reflects upon the inevitable chasm that separates any two minds—he knows all about her, but not what goes on inside that beautiful head. He also has another sort-of love song, called “I’ll Take Lonely Tonight”, about rejecting someone because he wants to remain faithful to his wife. It’s bittersweet despite the humor within, and honestly I think it shows a deeper sense of romance than the vast majority of love songs I’ve heard.

Yet I must keep coming back to one thing: This is a much healthier attitude.

The factual claim is almost certainly objectively true: In all probability, should you find yourself separated from your current partner, you would, sooner or later, find someone else.

None of us began our lives in romantic partnerships—so who were we before then? No doubt our relationships change us, and losing them would change us yet again. But we were something before, and should it end, we will continue to be something after.

And the attitude that our lives would be empty and worthless without the one we love is dangerously close to the sort of self-destructive self-talk I know all too well from years of depression. “I’m worthless without you, I’m nothing without you” is really not so far from “I’m worthless, I’m nothing” simpliciter. If you hollow yourself out for love, you have still hollowed yourself out.

Why, then, do we only see this healthier attitude expressed as comedy? Why can’t we take seriously the idea that love doesn’t define your whole identity? Why does the toxic self-deprecation of “I am nothing without you” sound more romantic to our ears than the honest self-respect of “I would probably have somebody else”? Why is so much of what we view as “romantic” so often unrealistic—or even harmful?

Tim Minchin himself seems to wonder, as the song alternates between serious expressions of love and ironic jabs:

And if I may conjecture a further objection,
Love is nothing to do with destined perfection.
The connection is strengthened,
The affection simply grows over time,

Like a flower,
Or a mushroom,
Or a guinea pig,
Or a vine,
Or a sponge,
Or bigotry…
…or a banana.

And love is made more powerful
By the ongoing drama of shared experience,
And the synergy of a kind of symbiotic empathy, or… something.

I believe that a healthier form of love is possible. I believe that we can unite ourselves with others in a way that does not sacrifice our own identity and self-worth. I believe that love makes us more than we were—but not that we would be nothing without it. I am more than I was because you loved me—but not everything I am.

This is already how most of us view friendship: We care for our friends, we value our relationships with them—but we would recognize it as toxic to declare that we’d be nothing without them. Indeed, there is a contradiction in our usual attitude here: If part of who I am is in my friendships, then how can losing my romantic partner render me nothing? Don’t I still at least have my friends?

I can now answer this question: What am I without you? An unhappier me. But still, me.

So, on your birthday, let me say this to you, my dear husband:

But with all my heart and all my mind,
I know one thing is true:
I have just one life and just one love,
And my love, that love is you.
And if it wasn’t for you,
Darling, you…
I really think that I would…
Possibly…
Have somebody else.

Age, ambition, and social comparison

Jul 2 JDN 2460128

The day I turned 35 years old was one of the worst days of my life, as I wrote about at the time. I think the only times I have felt more depressed than that day were when my father died, when I was hospitalized by an allergic reaction to lamotrigine, and when I was rejected after interviewing for jobs at GiveWell and Wizards of the Coast.

This is notable because… nothing particularly bad happened to me on my 35th birthday. It was basically an ordinary day for me. I felt horrible simply because I was turning 35 and hadn’t accomplished so many of the things I thought I would have by that point in my life. I felt my dreams shattering as the clock ticked away what chance I thought I’d have at achieving my life’s ambitions.

I am slowly coming to realize just how pathological that attitude truly is. It was ingrained in me very deeply from a very young age, not least because I was such a gifted child.

While studying quantum physics in college, I was warned that great physicists do all their best work before they are 30 (some even said 25). Einstein himself said as much (so it must be true, right?). It turns out that was simply untrue. It may have been largely true in the 18th and 19th centuries, and seems to have seen some resurgence during the early years of quantum theory, but today the median age at which a Nobel laureate physicist did their prize-winning work is 48. Less than 20% of eminent scientists made their great discoveries before the age of 40.

Alexander Fleming was 47 when he discovered penicillin—just about average for an eminent scientist of today. Darwin was 22 when he set sail on the Beagle, but didn’t publish On the Origin of Species until he was 50. André-Marie Ampère started his work in electromagnetism in his forties.

In creative arts, age seems to be no barrier at all. Julia Child published her first cookbook at 50. Stan Lee sold his first successful Marvel comic at 40. Toni Morrison was 39 when she published her first novel, and 62 when she won her Nobel. Peter Mark Roget was 73 when he published his famous thesaurus. Tolkien didn’t publish The Hobbit until he was 45.

Alan Rickman didn’t start drama school until he was 26 and didn’t have a major Hollywood role until he was 42. Samuel L. Jackson is now the third-highest-grossing actor of all time (mostly because of the Avengers movies), but he didn’t have any major movie roles until his forties. Anna Moses didn’t start painting until she was 78.

We think of entrepreneurship as a young man’s game, but Ray Kroc didn’t buy McDonald’s until he was 59. Harland Sanders didn’t franchise KFC until he was 62. Eric Yuan wasn’t a vice president until the age of 37 and didn’t become a billionaire until Zoom took off in 2019—he was 49. Sam Walton didn’t found Walmart until he was 44.

Great humanitarian achievements actually seem to be more likely later in life: Gandhi did not see India achieve independence until he was 77. Nelson Mandela was 75 when he became President of South Africa.

It has taken me far too long to realize this, and in fact I don’t think I have yet fully internalized it: Life is not a race. You do not “fall behind” when others achieve things younger than you did. In fact, most child prodigies grow up no more successful as adults than children who were merely gifted or even above-average. (There is another common belief that prodigies grow up miserable and stunted; that, fortunately, isn’t true either.)

Then there is queer time—the fact that, in a hostile heteronormative world, queer people often find ourselves growing up in a very different way than straight people—and crip time—the ways that coping with a disability changes your relationship with time and often forces you to manage your time in ways that others don’t. As someone who came out fairly young and is now married, queer time doesn’t seem to have affected me all that much. But I feel crip time very acutely: I have to very carefully manage when I go to bed and when I wake up, every single day, making sure I get not only enough sleep—much more sleep than most people get or most employers respect—but also that it aligns properly with my circadian rhythm. Failure to do so risks triggering severe, agonizing pain. Factoring that in, I have lost at least a few years of my life to migraines and depression, and will probably lose several more in the future.

But more importantly, we all need to learn to stop measuring ourselves against other people’s timelines. There is no prize in life for being faster. And while there are prizes for particular accomplishments (Oscars, Nobels, and so on), much of what determines whether you win such prizes is entirely beyond your control. Even people who ultimately made eminent contributions to society didn’t know in advance that they were going to, and didn’t behave all that much differently from others who tried but failed.

I do not want to make this sound easy. It is incredibly hard. I believe that I personally am especially terrible at it. Our society seems to be optimized to make us compare ourselves to others in as many ways as possible as often as possible in as biased a manner as possible.

Capitalism has many important upsides, but one of its deepest flaws is that it makes our standard of living directly dependent on what is happening in the rest of a global market we can neither understand nor control. A subsistence farmer is subject to the whims of nature; but in a supermarket, you are subject to the whims of an entire global economy.

And there is reason to think that the harm of social comparison is getting worse rather than better. If some mad villain set out to devise a system that would maximize harmful social comparison and the emotional damage it causes, he would most likely create something resembling social media.

The villain might also tack on some TV news for good measure: Here are some random terrifying events, which we’ll make it sound like could hit you at any moment (even though their actual risk is declining); then our ‘good news’ will be a litany of amazing accomplishments, far beyond anything you could reasonably hope for, which have been achieved by a cherry-picked sample of unimaginably fortunate people you have never met (yet you somehow still form parasocial bonds with because we keep showing them to you). We will make a point not to talk about the actual problems in the world (such as inequality and climate change), certainly not in any way you might be able to constructively learn from; nor will we mention any actual good news which might be relevant to an ordinary person such as yourself (such as economic growth, improved health, or reduced poverty). We will focus entirely on rare, extreme events that by construction aren’t likely to ever happen to you and are not relevant to how you should live your life.

I do not have some simple formula I can give you that will make social comparison disappear. I do not know how to shake the decades of indoctrination into a societal milieu that prizes richer and faster over all other concepts of worth. But perhaps at least recognizing the problem will weaken its power over us.

The mental health crisis in academia

Apr 30 JDN 2460065

Why are so many academics anxious and depressed?

Depression and anxiety are much more prevalent among both students and faculty than they are in the general population. Unsurprisingly, women seem to have it a bit worse than men, and trans people have it worst of all.

Is this the result of systemic failings of the academic system? Before deciding that, one thing we should consider is that very smart people do seem to have a higher risk of depression.

There is a complex relationship between genes linked to depression and genes linked to intelligence, and some evidence that people of especially high IQ are more prone to depression; nearly 27% of Mensa members report mood disorders, compared to 10% of the general population.

(Incidentally, the stereotype of the weird, sickly nerd has a kernel of truth: the correlations between intelligence and autism, ADHD, allergies, and autoimmune disorders are absolutely real—and not at all well understood. It may be a general pattern of neural hyper-activation, not unlike what I posit in my stochastic overload model. The stereotypical nerd wears glasses, and, yes, indeed, myopia is also correlated with intelligence—and this seems to be mostly driven by genetics.)

Most of these figures are at least a few years old. If anything, things are only worse now, as COVID triggered a surge in depression for just about everyone, academics included. It remains to be seen how much of this large increase will abate as things gradually return to normal, and how much will continue to have long-term effects—this may depend in part on how well we manage to genuinely restore a normal way of life and how well we can deal with long COVID.

If we assume that academics are a similar population to Mensa members (admittedly a strong assumption), then this could potentially explain why 26% of academic faculty are depressed—but not why nearly 40% of junior faculty are. At the very least, we junior faculty are about 50% more likely to be depressed than would be explained by our intelligence alone. And grad students have it even worse: Nearly 40% of graduate students report anxiety or depression, and nearly 50% of PhD students meet the criteria for depression. At the very least this sounds like a dual effect of being both high in intelligence and low in status—it’s those of us who have very little power or job security in academia who are the most depressed.

This suggests that, yes, there really is something wrong with academia. It may not be entirely the fault of the system—perhaps even a well-designed academic system would result in more depression than the general population because we are genetically predisposed. But it really does seem like there is a substantial environmental contribution that academic institutions bear some responsibility for.

I think the most obvious explanation is constant evaluation: From the time we are students at least up until we (maybe, hopefully, someday) get tenure, academics are constantly being evaluated on our performance. We know that this sort of evaluation contributes to anxiety and depression.

Don’t other jobs evaluate performance? Sure. But not constantly the way that academia does. This is especially obvious as a student, where everything you do is graded; but it largely continues once you are faculty as well.

For most jobs, you are concerned about doing well enough to keep your job or maybe get a raise. But academia has this continuous forward pressure: if you are a grad student or junior faculty, you can’t possibly keep your job; you must either move upward to the next stage or drop out. And academia has become so hyper-competitive that if you want to continue moving upward—and someday getting that tenure—you must publish in top-ranked journals, which have utterly opaque criteria and ever-declining acceptance rates. And since there are so few jobs available compared to the number of applicants, good enough is never good enough; you must be exceptional, or you will fail. Two thirds of PhD graduates seek a career in academia—but only 30% are actually in one three years later. (And honestly, three years is pretty short; there are plenty of cracks left to fall through between that and a genuinely stable tenured faculty position.)

Moreover, our skills are so hyper-specialized that it’s very hard to imagine finding work anywhere else. This grants academic institutions tremendous monopsony power over us, letting them get away with lower pay and worse working conditions. Even with an economics PhD—relatively transferable, all things considered—I find myself wondering who would actually want to hire me outside this ivory tower, and my feeble attempts at actually seeking out such employment have thus far met with no success.

I also find academia painfully isolating. I’m not an especially extraverted person; I tend to score somewhere near the middle range of extraversion (sometimes called an “ambivert”). But I still find myself craving more meaningful contact with my colleagues. We all seem to work in complete isolation from one another, even when sharing the same office (which is awkward for other reasons). There are very few consistent gatherings or good common spaces. And whenever faculty do try to arrange some sort of purely social event, it always seems to involve drinking at a pub and nobody is interested in providing any serious emotional or professional support.

Some of this may be particular to this university, or to the UK; or perhaps it has more to do with being at a certain stage of my career. In any case I didn’t feel nearly so isolated in graduate school; I had other students in my cohort and adjacent cohorts who were going through the same things. But I’ve been here two years now and so far have been unable to establish any similarly supportive relationships with colleagues.

There may be some opportunities I’m not taking advantage of: I’ve skipped a lot of research seminars, and I stopped going to those pub gatherings. But it wasn’t that I didn’t try them at all; it was that I tried them a few times and quickly found that they were not filling that need. At seminars, people only talked about the particular research project being presented. At the pub, people talked about almost nothing of serious significance—and certainly nothing requiring emotional vulnerability. The closest I think I got to this kind of support from colleagues was a series of lunch meetings designed to improve instruction in “tutorials” (what here in the UK we call discussion sections); there, at least, we could commiserate about feeling overworked and dealing with administrative bureaucracy.

There seem to be deep, structural problems with how academia is run. This whole process of universities outsourcing their hiring decisions to the capricious whims of high-ranked journals basically decides the entire course of our careers. And once you reach the point I have, so disheartened with the process of publishing research that I can’t even engage with it, it’s not at all clear how it’s even possible to recover. I see no way forward, no one to turn to. No one seems to care how well I teach, if I’m not publishing research.

And I’m clearly not the only one who feels this way.

Implications of stochastic overload

Apr 2 JDN 2460037

A couple weeks ago I presented my stochastic overload model, which posits a neurological mechanism for the Yerkes-Dodson effect: Stress increases sympathetic activation, and this increases performance, up to the point where it starts to risk causing neural pathways to overload and shut down.

This week I thought I’d try to get into some of the implications of this model, how it might be applied to make predictions or guide policy.

One thing I often struggle with when it comes to applying theory is what actual benefits we get from a quantitative mathematical model as opposed to simply a basic qualitative idea. In many ways I think these benefits are overrated; people seem to think that putting something into an equation automatically makes it true and useful. I am sometimes tempted to try to take advantage of this, putting things into equations even though I know there is no good reason to, simply because so many people find equations so persuasive. (Studies have even shown that, particularly in disciplines that don’t use a lot of math, inserting a totally irrelevant equation into a paper makes it more likely to be accepted.)

The basic implications of the Yerkes-Dodson effect are already widely known, and utterly ignored in our society. We know that excessive stress is harmful to health and performance, and yet our entire economy seems to be based around maximizing the amount of stress that workers experience. I actually think neoclassical economics bears a lot of the blame for this, as neoclassical economists are constantly talking about “increasing work incentives”—which is to say, making work life more and more stressful. (And let me remind you that there has never been any shortage of people willing to work in my lifetime, except possibly briefly during the COVID pandemic. The shortage has always been employers willing to hire them.)

I don’t know if my model can do anything to change that. Maybe by putting it into an equation I can make people pay more attention to it, precisely because equations have this weird persuasive power over most people.

As far as scientific benefits, I think that the chief advantage of a mathematical model lies in its ability to make quantitative predictions. It’s one thing to say that performance increases with low levels of stress then decreases with high levels; but it would be a lot more useful if we could actually precisely quantify how much stress is optimal for a given person and how they are likely to perform at different levels of stress.

Unfortunately, the stochastic overload model can only make detailed predictions if you have fully specified the probability distribution of innate activation, which requires a lot of free parameters. This is especially problematic if you don’t even know what type of distribution to use, which we really don’t; I picked three classes of distribution because they were plausible and tractable, not because I had any particular evidence for them.

Also, we don’t even have standard units of measurement for stress; we have a vague notion of what more or less stressed looks like, but we don’t have the sort of quantitative measure that could be plugged into a mathematical model. Probably the best units to use would be something like blood cortisol levels, but then we’d need to go measure those all the time, which raises its own issues. And maybe people don’t even respond to cortisol in the same ways? But at least we could measure your baseline cortisol for a while to get a prior distribution, and then see how different incentives increase your cortisol levels; and then the model should give relatively precise predictions about how this will affect your overall performance. (This is a very neuroeconomic approach.)
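To make the point about quantitative predictions concrete, here is a minimal Monte Carlo sketch of the qualitative mechanism. Everything numerical in it is a made-up assumption, not the model’s actual specification: a lognormal distribution of innate activation (one plausible class among several), a hard overload threshold, and performance that rises linearly with activation until the threshold is crossed.

```python
import random

random.seed(0)

# All parameters here are hypothetical placeholders, not estimates;
# in principle they would be calibrated from data such as baseline
# cortisol measurements.
OVERLOAD_THRESHOLD = 10.0   # activation level at which pathways shut down
N_DRAWS = 50_000            # Monte Carlo draws of innate activation

# Innate activation: one plausible (not empirically validated)
# distributional choice.
innate = [random.lognormvariate(1.0, 0.5) for _ in range(N_DRAWS)]

def expected_performance(stress):
    """Expected performance at a given level of added stress.

    Performance rises with total activation, but drops to zero on the
    draws where activation crosses the overload threshold.
    """
    total = 0.0
    for x in innate:
        activation = x + stress
        if activation < OVERLOAD_THRESHOLD:
            total += activation
    return total / N_DRAWS

# Sweeping the stress level traces an inverted U (the Yerkes-Dodson
# curve): expected performance peaks at a moderate, nonzero stress.
stresses = [i * 0.1 for i in range(101)]
curve = [expected_performance(s) for s in stresses]
best = stresses[max(range(len(curve)), key=curve.__getitem__)]
print(f"expected performance peaks near stress level {best:.1f}")
```

Even this toy version shows why the predictions hinge on the distributional assumptions: swap the lognormal for something heavier-tailed and both the height and the location of the peak move.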

So, for now, I’m not really sure how useful the stochastic overload model is. This is honestly something I feel about a lot of the theoretical ideas I have come up with; they often seem too abstract to be usefully applicable to anything.

Maybe that’s how all theory begins, and applications only appear later? But that doesn’t seem to be how people expect me to talk about it whenever I have to present my work or submit it for publication. They seem to want to know what it’s good for, right now, and I never have a good answer to give them. Do other researchers have such answers? Do they simply pretend to?

Along similar lines, I recently had one of my students ask about a theory paper I wrote on international conflict for my dissertation, and after sending him a copy, I re-read the paper. There are so many pages of equations, and while I am confident that the mathematical logic is valid, I honestly don’t know if most of them are really useful for anything. (I don’t think I really believe that GDP is produced by a Cobb-Douglas production function, and we don’t even really know how to measure capital precisely enough to say.) The central insight of the paper, which I think is really important but other people don’t seem to care about, is a qualitative one: International treaties and norms provide an equilibrium selection mechanism in iterated games. The realists are right that this is cheap talk. The liberals are right that it works. Because when there are many equilibria, cheap talk works.
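That qualitative insight can be illustrated with a deliberately tiny example (my own sketch here, not the paper’s actual model): a symmetric coordination game with two pure-strategy Nash equilibria, where a costless, non-binding pre-play message nonetheless determines which equilibrium gets played.

```python
# Symmetric coordination game: both (A, A) and (B, B) are Nash
# equilibria, so the game alone cannot tell us which will be played.
# Payoffs are arbitrary illustrative numbers.
PAYOFFS = {
    ("A", "A"): (2, 2),
    ("B", "B"): (1, 1),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
}

def is_nash(profile):
    """Check whether a pure strategy profile is a Nash equilibrium:
    neither player can gain by unilaterally deviating."""
    a1, a2 = profile
    u1, u2 = PAYOFFS[profile]
    return (u1 >= max(PAYOFFS[(d, a2)][0] for d in "AB")
            and u2 >= max(PAYOFFS[(a1, d)][1] for d in "AB"))

equilibria = [p for p in PAYOFFS if is_nash(p)]
print(equilibria)  # both coordinated profiles qualify

def play(message):
    """Cheap talk: the message costs nothing and binds no one. But if
    both players simply follow it, the result is a Nash equilibrium,
    so neither has any incentive to deviate afterward. The 'treaty'
    selects the equilibrium without ever being enforceable."""
    return (message, message)

assert play("A") in equilibria
assert play("B") in equilibria
```

The message changes nothing about the payoffs, which is exactly why the realists can call it cheap talk and the liberals can observe that it works anyway.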

I know that in truth, science proceeds in tiny steps, building a wall brick by brick, never sure exactly how many bricks it will take to finish the edifice. It’s impossible to see whether your work will be an irrelevant footnote or the linchpin for a major discovery. But that isn’t how the institutions of science are set up. That isn’t how the incentives of academia work. You’re not supposed to say that this may or may not be correct and is probably some small incremental progress the ultimate impact of which no one can possibly foresee. You’re supposed to sell your work—justify how it’s definitely true and why it’s important and how it has impact. You’re supposed to convince other people why they should care about it and not all the dozens of other probably equally-valid projects being done by other researchers.

I don’t know how to do that, and it is agonizing to even try. It feels like lying. It feels like betraying my identity. Being good at selling isn’t just orthogonal to doing good science—I think it’s the opposite. I think the better you are at selling your work, the worse you are at cultivating the intellectual humility necessary to do good science. If you think you know all the answers, you’re just bad at admitting when you don’t know things. It feels like in order to succeed in academia, I have to act like an unscientific charlatan.

Honestly, why do we even need to convince you that our work is more important than someone else’s? Are there only so many science points to go around? Maybe the whole problem is this scarcity mindset. Yes, grant funding is limited; but why does publishing my work prevent you from publishing someone else’s? Why do you have to reject 95% of the papers that get sent to you? Don’t tell me you’re limited by space; the journals are digital and searchable and nobody reads the whole thing anyway. Editorial time isn’t infinite, but most of the work has already been done by the time you get a paper back from peer review. Of course, I know the real reason: Excluding people is the main source of prestige.