What am I without you?

Jul 16 JDN 2460142

When this post goes live, it will be my husband’s birthday. He will probably read it before that, as he follows my Patreon. In honor of his birthday, I thought I would make romance the topic of today’s post.

In particular, there’s a certain common sentiment that is usually viewed as romantic, which I in fact think is quite toxic. This is the notion that “Without you, I am nothing”—that in the absence of the one we love, we would be empty or worthless.

Here is this sentiment being expressed by various musicians:

I’m all out of love,
I’m so lost without you.
I know you were right,
Believing for so long.
I’m all out of love,
What am I without you?

– “All Out of Love”, Air Supply


Well what am I, what am I without you?
What am I without you?
Your love makes me burn.
No, no, no
Well what am I, what am I without you?
I’m nothing without you.
So lеt love burn.

– “What am I without you?”, Suede

Without you, I’m nothing.
Without you, I’m nothing.
Without you, I’m nothing.
Without you, I’m nothing at all.

– “Without you I’m nothing”, Placebo

I’ll be nothin’, nothin’, nothin’, nothin’ without you.
I’ll be nothin’, nothin’, nothin’, nothin’ without you.
Yeah
I was too busy tryna find you with someone else,
The one I couldn’t stand to be with was myself.
‘Cause I’ll be nothin’, nothin’, nothin’, nothin’ without you.

– “Nothing without you”, The Weeknd

You were my strength when I was weak.
You were my voice when I couldn’t speak.
You were my eyes when I couldn’t see.
You saw the best there was in me!
Lifted me up when I couldn’t reach,
You gave me faith ’cause you believed!
I’m everything I am,
Because you loved me.


– “Because You Loved Me”, Celine Dion

Hopefully that’s enough to convince you that this is not a rare sentiment. Moreover, these songs do seem quite romantic, and there are parts of them that still resonate quite strongly for me (particularly “Because You Loved Me”).

Yet there is still something toxic here: They make us lose sight of our own self-worth independently of our relationships with others. Humans are deeply social creatures, so of course we want to fill our lives with relationships with others, and as well we should. But you are more than your relationships.

Stranded alone on a deserted island, you would still be a person of worth. You would still have inherent dignity. You would still deserve to live.

It’s also unhealthy even from a romantic perspective. Yes, once you’ve found the love of your life and you really do plan to live together forever, tying your identity so tightly to the relationship may not be disastrous—though it could still be unhealthy and promote a cycle of codependency. But what about before you’ve made that commitment? If you are nothing without the one you love, what happens when you break up? Who are you then?

And even if you are with the love of your life, what happens if they die?

Of course our relationships do change who we are. To some degree, our identity is inextricably tied to those we love, and this would probably still be desirable even if it weren’t inevitable. But there must always be part of you that isn’t bound to anyone in particular other than yourself—and if you can’t find that part, it’s a very bad sign.

Now compare a quite different sentiment:

If I didn’t have you to hold me tight,

If I didn’t have you to lie with at night,

If I didn’t have you to share my sighs,

And to kiss me and dry my tears when I cry…

Well, I…

Really think that I would…

Probably…

Have somebody else.

– “If I Didn’t Have You”, Tim Minchin

Tim Minchin is a comedy musician, and the song is very much written in that vein. He doesn’t want you to take it too seriously.

Another song Tim Minchin wrote for his wife, “Beautiful Head”, reflects upon the inevitable chasm that separates any two minds—he knows all about her, but not what goes on inside that beautiful head. He also has another sort-of love song, called “I’ll Take Lonely Tonight”, about rejecting someone because he wants to remain faithful to his wife. It’s bittersweet despite the humor within, and honestly I think it shows a deeper sense of romance than the vast majority of love songs I’ve heard.

Yet I must keep coming back to one thing: This is a much healthier attitude.

The factual claim is almost certainly objectively true: In all probability, should you find yourself separated from your current partner, you would, sooner or later, find someone else.

None of us began our lives in romantic partnerships—so who were we before then? No doubt our relationships change us, and losing them would change us yet again. But we were something before, and should it end, we will continue to be something after.

And the attitude that our lives would be empty and worthless without the one we love is dangerously close to the sort of self-destructive self-talk I know all too well from years of depression. “I’m worthless without you, I’m nothing without you” is really not so far from “I’m worthless, I’m nothing” simpliciter. If you hollow yourself out for love, you have still hollowed yourself out.

Why, then, do we only see this healthier attitude expressed as comedy? Why can’t we take seriously the idea that love doesn’t define your whole identity? Why does the toxic self-deprecation of “I am nothing without you” sound more romantic to our ears than the honest self-respect of “I would probably have somebody else”? Why is so much of what we view as “romantic” so often unrealistic—or even harmful?

Tim Minchin himself seems to wonder, as the song alternates between serious expressions of love and ironic jabs:

And if I may conjecture a further objection,
Love is nothing to do with destined perfection.
The connection is strengthened,
The affection simply grows over time,

Like a flower,
Or a mushroom,
Or a guinea pig,
Or a vine,
Or a sponge,
Or bigotry…
…or a banana.

And love is made more powerful
By the ongoing drama of shared experience,
And the synergy of a kind of symbiotic empathy, or… something.

I believe that a healthier form of love is possible. I believe that we can unite ourselves with others in a way that does not sacrifice our own identity and self-worth. I believe that love makes us more than we were—but not that we would be nothing without it. I am more than I was because you loved me—but not everything I am.

This is already how most of us view friendship: We care for our friends, we value our relationships with them—but we would recognize it as toxic to declare that we’d be nothing without them. Indeed, there is a contradiction in our usual attitude here: If part of who I am is in my friendships, then how can losing my romantic partner render me nothing? Don’t I still at least have my friends?

I can now answer this question: What am I without you? An unhappier me. But still, me.

So, on your birthday, let me say this to you, my dear husband:

But with all my heart and all my mind,
I know one thing is true:
I have just one life and just one love,
And my love, that love is you.
And if it wasn’t for you,
Darling, you…
I really think that I would…
Possibly…
Have somebody else.

Age, ambition, and social comparison

Jul 2 JDN 2460128

The day I turned 35 years old was one of the worst days of my life, as I wrote about at the time. I think the only times I have felt more depressed than that day were when my father died, when I was hospitalized by an allergic reaction to lamotrigine, and when I was rejected after interviewing for jobs at GiveWell and Wizards of the Coast.

This is notable because… nothing particularly bad happened to me on my 35th birthday. It was basically an ordinary day for me. I felt horrible simply because I was turning 35 and hadn’t accomplished so many of the things I thought I would have by that point in my life. I felt my dreams shattering as the clock ticked away what chance I thought I’d have at achieving my life’s ambitions.

I am slowly coming to realize just how pathological that attitude truly is. It was ingrained in me very deeply from the very youngest age, not least because I was such a gifted child.

While studying quantum physics in college, I was warned that great physicists do all their best work before they are 30 (some even said 25). Einstein himself said as much (so it must be true, right?). It turns out that was simply untrue. It may have been largely true in the 18th and 19th centuries, and seems to have seen some resurgence during the early years of quantum theory, but today the median age at which a Nobel laureate physicist did their prize-winning work is 48. Less than 20% of eminent scientists made their great discoveries before the age of 40.

Alexander Fleming was 47 when he discovered penicillin—just about average for an eminent scientist of today. Darwin was 22 when he set sail on the Beagle, but didn’t publish On the Origin of Species until he was 50. André-Marie Ampère began his work on electromagnetism in his forties.

In creative arts, age seems to be no barrier at all. Julia Child published her first cookbook at 50. Stan Lee sold his first successful Marvel comic at 40. Toni Morrison was 39 when she published her first novel, and 62 when she won her Nobel. Peter Mark Roget was 73 when he published his famous thesaurus. Tolkien didn’t publish The Hobbit until he was 45.

Alan Rickman didn’t start drama school until he was 26 and didn’t have a major Hollywood role until he was 42. Samuel L. Jackson is now the third-highest-grossing actor of all time (mostly because of the Avengers movies), but he didn’t have any major movie roles until his forties. Anna Moses didn’t start painting until she was 78.

We think of entrepreneurship as a young man’s game, but Ray Kroc didn’t buy McDonald’s until he was 59. Harland Sanders didn’t franchise KFC until he was 62. Eric Yuan wasn’t a vice president until the age of 37 and didn’t become a billionaire until Zoom took off in 2019—he was 49. Sam Walton didn’t found Walmart until he was 44.

Great humanitarian achievements actually seem to be more likely later in life: Gandhi did not see India achieve independence until he was 78. Nelson Mandela was 76 when he became President of South Africa.

It has taken me far too long to realize this, and in fact I don’t think I have yet fully internalized it: Life is not a race. You do not “fall behind” when others achieve things younger than you did. In fact, most child prodigies grow up no more successful as adults than children who were merely gifted or even above-average. (There is another common belief that prodigies grow up miserable and stunted; that, fortunately, isn’t true either.)

Then there is queer time—the fact that, in a hostile heteronormative world, queer people often find ourselves growing up in a very different way than straight people—and crip time—the ways that coping with a disability changes your relationship with time and often forces you to manage your time in ways that others don’t. As someone who came out fairly young and is now married, queer time doesn’t seem to have affected me all that much. But I feel crip time very acutely: I have to very carefully manage when I go to bed and when I wake up, every single day, making sure I get not only enough sleep—much more sleep than most people get or most employers respect—but also that it aligns properly with my circadian rhythm. Failure to do so risks triggering severe, agonizing pain. Factoring that in, I have lost at least a few years of my life to migraines and depression, and will probably lose several more in the future.

But more importantly, we all need to learn to stop measuring ourselves against other people’s timelines. There is no prize in life for being faster. And while there are prizes for particular accomplishments (Oscars, Nobels, and so on), much of what determines whether you win such prizes is entirely beyond your control. Even people who ultimately made eminent contributions to society didn’t know in advance that they were going to, and didn’t behave all that much differently from others who tried but failed.

I do not want to make this sound easy. It is incredibly hard. I believe that I personally am especially terrible at it. Our society seems to be optimized to make us compare ourselves to others in as many ways as possible as often as possible in as biased a manner as possible.

Capitalism has many important upsides, but one of its deepest flaws is that it makes our standard of living directly dependent on what is happening in the rest of a global market we can neither understand nor control. A subsistence farmer is subject to the whims of nature; but in a supermarket, you are subject to the whims of an entire global economy.

And there is reason to think that the harm of social comparison is getting worse rather than better. If some mad villain set out to devise a system that would maximize harmful social comparison and the emotional damage it causes, he would most likely create something resembling social media.

The villain might also tack on some TV news for good measure: Here are some random terrifying events, which we’ll make it sound like could hit you at any moment (even though their actual risk is declining); then our ‘good news’ will be a litany of amazing accomplishments, far beyond anything you could reasonably hope for, which have been achieved by a cherry-picked sample of unimaginably fortunate people you have never met (yet you somehow still form parasocial bonds with because we keep showing them to you). We will make a point not to talk about the actual problems in the world (such as inequality and climate change), certainly not in any way you might be able to constructively learn from; nor will we mention any actual good news which might be relevant to an ordinary person such as yourself (such as economic growth, improved health, or reduced poverty). We will focus entirely on rare, extreme events that by construction aren’t likely to ever happen to you and are not relevant to how you should live your life.

I do not have some simple formula I can give you that will make social comparison disappear. I do not know how to shake the decades of indoctrination into a societal milieu that prizes richer and faster over all other concepts of worth. But perhaps at least recognizing the problem will weaken its power over us.

The mental health crisis in academia

Apr 30 JDN 2460065

Why are so many academics anxious and depressed?

Depression and anxiety are much more prevalent among both students and faculty than they are in the general population. Unsurprisingly, women seem to have it a bit worse than men, and trans people have it worst of all.

Is this the result of systemic failings of the academic system? Before deciding that, one thing we should consider is that very smart people do seem to have a higher risk of depression.

There is a complex relationship between genes linked to depression and genes linked to intelligence, and some evidence that people of especially high IQ are more prone to depression; nearly 27% of Mensa members report mood disorders, compared to 10% of the general population.

(Incidentally, the stereotype of the weird, sickly nerd has a kernel of truth: the correlations between intelligence and autism, ADHD, allergies, and autoimmune disorders are absolutely real—and not at all well understood. It may be a general pattern of neural hyper-activation, not unlike what I posit in my stochastic overload model. The stereotypical nerd wears glasses, and, yes, indeed, myopia is also correlated with intelligence—and this seems to be mostly driven by genetics.)

Most of these figures are at least a few years old. If anything, things are only worse now, as COVID triggered a surge in depression for just about everyone, academics included. It remains to be seen how much of this large increase will abate as things gradually return to normal, and how much will continue to have long-term effects—this may depend in part on how well we manage to genuinely restore a normal way of life and how well we can deal with long COVID.

If we assume that academics are a similar population to Mensa members (admittedly a strong assumption), then this could potentially explain why 26% of academic faculty are depressed—but not why nearly 40% of junior faculty are. At the very least, we junior faculty are about 50% more likely to be depressed than would be explained by our intelligence alone. And grad students have it even worse: Nearly 40% of graduate students report anxiety or depression, and nearly 50% of PhD students meet the criteria for depression. At the very least this sounds like a dual effect of being both high in intelligence and low in status—it’s those of us who have very little power or job security in academia who are the most depressed.

This suggests that, yes, there really is something wrong with academia. It may not be entirely the fault of the system—perhaps even a well-designed academic system would result in more depression than the general population because we are genetically predisposed. But it really does seem like there is a substantial environmental contribution that academic institutions bear some responsibility for.

I think the most obvious explanation is constant evaluation: From the time we are students at least up until we (maybe, hopefully, someday) get tenure, academics are constantly being evaluated on our performance. We know that this sort of evaluation contributes to anxiety and depression.

Don’t other jobs evaluate performance? Sure. But not constantly the way that academia does. This is especially obvious as a student, where everything you do is graded; but it largely continues once you are faculty as well.

For most jobs, you are concerned about doing well enough to keep your job or maybe get a raise. But academia has this continuous forward pressure: if you are a grad student or junior faculty, you can’t possibly keep your job; you must either move upward to the next stage or drop out. And academia has become so hyper-competitive that if you want to continue moving upward—and someday getting that tenure—you must publish in top-ranked journals, which have utterly opaque criteria and ever-declining acceptance rates. And since there are so few jobs available compared to the number of applicants, good enough is never good enough; you must be exceptional, or you will fail. Two thirds of PhD graduates seek a career in academia—but only 30% are actually in one three years later. (And honestly, three years is pretty short; there are plenty of cracks left to fall through between that and a genuinely stable tenured faculty position.)

Moreover, our skills are so hyper-specialized that it’s very hard to imagine finding work anywhere else. This grants academic institutions tremendous monopsony power over us, letting them get away with lower pay and worse working conditions. Even with an economics PhD—relatively transferable, all things considered—I find myself wondering who would actually want to hire me outside this ivory tower, and my feeble attempts at actually seeking out such employment have thus far met with no success.

I also find academia painfully isolating. I’m not an especially extraverted person; I tend to score somewhere near the middle range of extraversion (sometimes called an “ambivert”). But I still find myself craving more meaningful contact with my colleagues. We all seem to work in complete isolation from one another, even when sharing the same office (which is awkward for other reasons). There are very few consistent gatherings or good common spaces. And whenever faculty do try to arrange some sort of purely social event, it always seems to involve drinking at a pub and nobody is interested in providing any serious emotional or professional support.

Some of this may be particular to this university, or to the UK; or perhaps it has more to do with being at a certain stage of my career. In any case I didn’t feel nearly so isolated in graduate school; I had other students in my cohort and adjacent cohorts who were going through the same things. But I’ve been here two years now and so far have been unable to establish any similarly supportive relationships with colleagues.

There may be some opportunities I’m not taking advantage of: I’ve skipped a lot of research seminars, and I stopped going to those pub gatherings. But it wasn’t that I didn’t try them at all; it was that I tried them a few times and quickly found that they were not filling that need. At seminars, people only talked about the particular research project being presented. At the pub, people talked about almost nothing of serious significance—and certainly nothing requiring emotional vulnerability. The closest I think I got to this kind of support from colleagues was a series of lunch meetings designed to improve instruction in “tutorials” (what here in the UK we call discussion sections); there, at least, we could commiserate about feeling overworked and dealing with administrative bureaucracy.

There seem to be deep, structural problems with how academia is run. This whole process of universities outsourcing their hiring decisions to the capricious whims of high-ranked journals basically decides the entire course of our careers. And once you get to the point I have, now so disheartened with the process of publishing research that I can’t even engage with it, it’s not at all clear how it’s even possible to recover. I see no way forward, no one to turn to. No one seems to care how well I teach, if I’m not publishing research.

And I’m clearly not the only one who feels this way.

Implications of stochastic overload

Apr 2 JDN 2460037

A couple weeks ago I presented my stochastic overload model, which posits a neurological mechanism for the Yerkes-Dodson effect: Stress increases sympathetic activation, and this increases performance, up to the point where it starts to risk causing neural pathways to overload and shut down.

This week I thought I’d try to get into some of the implications of this model, how it might be applied to make predictions or guide policy.

One thing I often struggle with when it comes to applying theory is what actual benefits we get from a quantitative mathematical model as opposed to simply a basic qualitative idea. In many ways I think these benefits are overrated; people seem to think that putting something into an equation automatically makes it true and useful. I am sometimes tempted to try to take advantage of this, to put things into equations even though I know there is no good reason to put them into equations, simply because so many people seem to find equations so persuasive for some reason. (Studies have even shown that, particularly in disciplines that don’t use a lot of math, inserting a totally irrelevant equation into a paper makes it more likely to be accepted.)

The basic implications of the Yerkes-Dodson effect are already widely known, and utterly ignored in our society. We know that excessive stress is harmful to health and performance, and yet our entire economy seems to be based around maximizing the amount of stress that workers experience. I actually think neoclassical economics bears a lot of the blame for this, as neoclassical economists are constantly talking about “increasing work incentives”—which is to say, making work life more and more stressful. (And let me remind you that there has never been any shortage of people willing to work in my lifetime, except possibly briefly during the COVID pandemic. The shortage has always been employers willing to hire them.)

I don’t know if my model can do anything to change that. Maybe by putting it into an equation I can make people pay more attention to it, precisely because equations have this weird persuasive power over most people.

As far as scientific benefits, I think that the chief advantage of a mathematical model lies in its ability to make quantitative predictions. It’s one thing to say that performance increases with low levels of stress then decreases with high levels; but it would be a lot more useful if we could actually precisely quantify how much stress is optimal for a given person and how they are likely to perform at different levels of stress.

Unfortunately, the stochastic overload model can only make detailed predictions if you have fully specified the probability distribution of innate activation, which requires a lot of free parameters. This is especially problematic if you don’t even know what type of distribution to use, which we really don’t; I picked three classes of distribution because they were plausible and tractable, not because I had any particular evidence for them.

Also, we don’t even have standard units of measurement for stress; we have a vague notion of what more or less stressed looks like, but we don’t have the sort of quantitative measure that could be plugged into a mathematical model. Probably the best units to use would be something like blood cortisol levels, but then we’d need to go measure those all the time, which raises its own issues. And maybe people don’t even respond to cortisol in the same ways? But at least we could measure your baseline cortisol for a while to get a prior distribution, and then see how different incentives increase your cortisol levels; and then the model should give relatively precise predictions about how this will affect your overall performance. (This is a very neuroeconomic approach.)
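As a purely hypothetical sketch of that calibration step: if innate activation really were exponential (one of the distributions I've solved the model for) and the cortisol-like proxy were already normalized to the model's units, fitting the prior would reduce to estimating a single rate parameter from baseline measurements. Everything here—the function names, the toy numbers—is illustrative, not real data or an established protocol:

```python
import math

def fit_exponential_rate(baseline_samples):
    """Maximum-likelihood estimate of an exponential rate: 1 / sample mean."""
    mean = sum(baseline_samples) / len(baseline_samples)
    return 1.0 / mean

def predicted_output(s, lam):
    """Closed-form expected output under exponential innate activation
    (from the stochastic overload model), for stress activation s in [0, 1]."""
    return (1.0 / lam + s) - (1.0 / lam + 1.0) * math.exp(-lam * (1.0 - s))

# Toy stand-ins for normalized baseline measurements (assumed, not measured).
samples = [0.12, 0.35, 0.08, 0.22, 0.41, 0.15, 0.27, 0.19]
lam_hat = fit_exponential_rate(samples)
```

With the rate fitted, the model then predicts performance at each stress level, which is the kind of precise, testable output a purely qualitative Yerkes-Dodson story can't give you.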

So, for now, I’m not really sure how useful the stochastic overload model is. This is honestly something I feel about a lot of the theoretical ideas I have come up with; they often seem too abstract to be usefully applicable to anything.

Maybe that’s how all theory begins, and applications only appear later? But that doesn’t seem to be how people expect me to talk about it whenever I have to present my work or submit it for publication. They seem to want to know what it’s good for, right now, and I never have a good answer to give them. Do other researchers have such answers? Do they simply pretend to?

Along similar lines, I recently had one of my students ask about a theory paper I wrote on international conflict for my dissertation, and after sending him a copy, I re-read the paper. There are so many pages of equations, and while I am confident that the mathematical logic is valid, I honestly don’t know if most of them are really useful for anything. (I don’t think I really believe that GDP is produced by a Cobb-Douglas production function, and we don’t even really know how to measure capital precisely enough to say.) The central insight of the paper, which I think is really important but other people don’t seem to care about, is a qualitative one: International treaties and norms provide an equilibrium selection mechanism in iterated games. The realists are right that this is cheap talk. The liberals are right that it works. Because when there are many equilibria, cheap talk works.

I know that in truth, science proceeds in tiny steps, building a wall brick by brick, never sure exactly how many bricks it will take to finish the edifice. It’s impossible to see whether your work will be an irrelevant footnote or the linchpin for a major discovery. But that isn’t how the institutions of science are set up. That isn’t how the incentives of academia work. You’re not supposed to say that this may or may not be correct and is probably some small incremental progress the ultimate impact of which no one can possibly foresee. You’re supposed to sell your work—justify how it’s definitely true and why it’s important and how it has impact. You’re supposed to convince other people why they should care about it and not all the dozens of other probably equally-valid projects being done by other researchers.

I don’t know how to do that, and it is agonizing to even try. It feels like lying. It feels like betraying my identity. Being good at selling isn’t just orthogonal to doing good science—I think it’s opposite. I think the better you are at selling your work, the worse you are at cultivating the intellectual humility necessary to do good science. If you think you know all the answers, you’re just bad at admitting when you don’t know things. It feels like in order to succeed in academia, I have to act like an unscientific charlatan.

Honestly, why do we even need to convince you that our work is more important than someone else’s? Are there only so many science points to go around? Maybe the whole problem is this scarcity mindset. Yes, grant funding is limited; but why does publishing my work prevent you from publishing someone else’s? Why do you have to reject 95% of the papers that get sent to you? Don’t tell me you’re limited by space; the journals are digital and searchable and nobody reads the whole thing anyway. Editorial time isn’t infinite, but most of the work has already been done by the time you get a paper back from peer review. Of course, I know the real reason: Excluding people is the main source of prestige.

The role of innate activation in stochastic overload

Mar 26 JDN 2460030

Two posts ago I introduced my stochastic overload model, which offers an explanation for the Yerkes-Dodson effect by positing that additional stress increases sympathetic activation, which is useful up until the point where it starts risking an overload that forces systems to shut down and rest.

The central equation of the model is actually quite simple, expressed either as an expectation or as an integral:

Y = E[x + s | x + s < 1] P[x + s < 1]

Y = \int_{0}^{1-s} (x+s) dF(x)

The amount of output produced is the expected value of innate activation plus stress activation, times the probability that there is no overload. Increased stress raises this expectation value (the incentive effect), but also increases the probability of overload (the overload effect).
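The equation can be sketched directly as a simulation: conditioning on no overload and multiplying by its probability is the same as averaging (x + s) with overloaded trials counted as producing zero. A minimal sketch—the uniform distribution here is purely an illustrative choice of mine, not something the model prescribes:

```python
import random

def expected_output(s, draw_innate, trials=100_000, seed=1):
    """Monte Carlo estimate of Y = E[x + s | x + s < 1] * P[x + s < 1].

    Equivalent form: the mean of (x + s) across trials, where any
    overloaded trial (x + s >= 1) contributes zero output.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        activation = draw_innate(rng) + s
        if activation < 1:          # no overload: output equals activation
            total += activation
    return total / trials

# Illustrative stand-in: innate activation uniform on [0, 1/2].
uniform_innate = lambda rng: rng.uniform(0.0, 0.5)

low = expected_output(0.2, uniform_innate)   # incentive effect only
high = expected_output(0.9, uniform_innate)  # overload effect dominates
```

At low stress, output rises with s; push s high enough that overload becomes likely and the average collapses—which is the whole Yerkes-Dodson shape in two lines of arithmetic.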

The model relies upon assuming that the brain starts with some innate level of activation that is partially random. Exactly what sort of Yerkes-Dodson curve you get from this model depends very much on what distribution this innate activation takes.

I’ve so far solved it for three types of distribution.

The simplest is a uniform distribution, where within a certain range, any level of activation is equally probable. The probability density function looks like this:

Assume the distribution has support between a and b, where a < b.

When b+s < 1, then overload is impossible, and only the incentive effect occurs; productivity increases linearly with stress.

The expected output is simply the expected value of a uniform distribution from a+s to b+s, which is:

E[x + s] = (a+b)/2+s

Then, once b+s > 1, overload risk begins to increase.

In this range, the probability of avoiding overload is:

P[x + s < 1] = F(1-s) = (1-s-a)/(b-a)

(Note that at b+s=1, this is exactly 1.)

Conditional on no overload, x is uniform between a and 1-s, so the expected value of x+s in this range is:

E[x + s | x + s < 1] = (a + 1 - s)/2 + s = (1 + a + s)/2

Multiplying these two together:

Y = (1 + a + s)(1 - a - s)/(2(b - a)) = [1 - (a + s)^2]/(2(b - a))

(Note that at b+s=1, this equals (a+b)/2 + s, matching the incentive-only formula.)

Here is what that looks like for a=0, b=1/2:

It does have the right qualitative features: increasing, then decreasing. But it sure looks weird, doesn’t it? It has this strange kinked shape.
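You can see the kink without trusting any closed form by just integrating numerically. A small sketch (the midpoint rule and the a=0, b=1/2 values are my choices for illustration):

```python
def uniform_Y(s, a=0.0, b=0.5, n=20_000):
    """Expected output under uniform innate activation on [a, b]:
    integrate (x + s) * density over the non-overload region x < 1 - s."""
    upper = min(b, 1.0 - s)   # overload truncates the integration range
    if upper <= a:
        return 0.0            # stress so high that overload is certain
    density = 1.0 / (b - a)
    dx = (upper - a) / n
    # midpoint rule over the surviving range
    return sum((a + (i + 0.5) * dx + s) * density * dx for i in range(n))

# Below the kink (b + s < 1), output rises linearly with s;
# past it, the shrinking integration range drags output down.
rising = uniform_Y(0.1) < uniform_Y(0.3) < uniform_Y(0.5)
falling = uniform_Y(0.5) > uniform_Y(0.7) > uniform_Y(0.9)
```

The kink sits exactly at s = 1 - b, where the overload risk first becomes positive—the integrand is smooth, but the upper limit of integration suddenly starts moving.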

So let’s consider some other distributions.

The next one I was able to solve is an exponential distribution, where the most probable activation is zero, and higher activation is always less probable than lower activation, decaying exponentially:

For this it was actually easiest to do the integral directly (I did it by integrating by parts, but I’m sure you don’t care about all the mathematical steps):

Y = \int_{0}^{1-s} (x+s) dF(x)

Y = (1/λ + s) - (1/λ + 1)e^(-λ(1-s))

The parameter λ determines how steeply your activation probability decays. Someone with low λ is relatively highly activated all the time, while someone with high λ is usually not highly activated; this seems like it might be related to the personality trait neuroticism.
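As a sanity check on that closed form, here is a sketch that compares it against brute-force numerical integration (the midpoint rule is just my verification device, not part of the model):

```python
import math

def exp_Y_closed(s, lam):
    """Closed-form expected output for exponential innate activation
    with rate lam (valid for 0 <= s <= 1)."""
    return (1.0 / lam + s) - (1.0 / lam + 1.0) * math.exp(-lam * (1.0 - s))

def exp_Y_numeric(s, lam, n=50_000):
    """Midpoint-rule check: integrate (x + s) * lam * exp(-lam * x)
    over the non-overload region [0, 1 - s]."""
    upper = 1.0 - s
    dx = upper / n
    return sum(((i + 0.5) * dx + s) * lam * math.exp(-lam * (i + 0.5) * dx) * dx
               for i in range(n))
```

The two agree to several decimal places across the range of λ values graphed below, which is about the cheapest way to catch an algebra slip in a derivation like this.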

Here are graphs of what the resulting Yerkes-Dodson curve looks like for several different values of λ:

λ = 0.5:

λ = 1:

λ = 2:

λ = 4:

λ = 8:

The λ = 0.5 person has high activation a lot of the time. They are actually fairly productive even without stress, but stress quickly overwhelms them. The λ = 8 person has low activation most of the time. They are not very productive without stress, but can also bear relatively high amounts of stress without overloading.

(The low-λ people also have overall lower peak productivity in this model, but that might not be true in reality, if λ is inversely correlated with some other attributes that are related to productivity.)

Neither uniform nor exponential has the nice bell-curve shape for innate activation we might have hoped for. There is another class of distributions, beta distributions, which do have this shape, and they are sort of tractable—you need something called an incomplete beta function, which isn’t an elementary function but it’s useful enough that most statistical packages include it.

Beta distributions have two parameters, α and β. They look like this:

Beta distributions are quite useful in Bayesian statistics; if you’re trying to estimate the probability of a random event that either succeeds or fails with a fixed probability (a Bernoulli process), and so far you have observed a successes and b failures, your best guess of its probability at each trial is a beta distribution with α = a+1 and β = b+1.
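As a small illustration of that updating rule (a minimal sketch; the function name is mine): starting from a uniform prior, observing a successes and b failures yields a Beta(a+1, b+1) posterior, whose mean is Laplace’s rule of succession:

```python
def beta_posterior(successes, failures):
    """Posterior over a Bernoulli probability after the given observations,
    starting from a uniform Beta(1, 1) prior."""
    alpha = successes + 1
    beta = failures + 1
    mean = alpha / (alpha + beta)  # Laplace's rule of succession
    return alpha, beta, mean

# Three successes and one failure: Beta(4, 2), posterior mean 2/3.
```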

For beta distributions with parameters α and β, the result comes out to (here I is the regularized incomplete beta function I mentioned earlier):

Y = [α/(α+β)] I(1-s, α+1, β) + s I(1-s, α, β)

For whole number values of α and β, the incomplete beta function can be computed by hand (though it is more work the larger they are); here’s an example with α = β = 2.

The innate activation probability looks like this:

And the result comes out like this:

Y = 2(1-s)^3 – 3/2(1-s)^4 + 3s(1-s)^2 – 2s(1-s)^3

This person has pretty high innate activation most of the time, so stress very quickly overwhelms them. If I had chosen a much higher β, I could change that, making them less likely to be innately so activated.
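Here is a quick numerical check of that α = β = 2 polynomial against the defining integral (a Python sketch; the names are mine):

```python
def Y_poly(s):
    """The closed-form result for alpha = beta = 2 from above."""
    t = 1 - s
    return 2*t**3 - 1.5*t**4 + 3*s*t**2 - 2*s*t**3

def Y_numeric(s, steps=50_000):
    """Midpoint-rule integral of (x + s) * 6x(1 - x) over [0, 1 - s],
    where 6x(1 - x) is the Beta(2, 2) density."""
    T = 1 - s
    h = T / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += (x + s) * 6 * x * (1 - x) * h
    return total
```

It also confirms the qualitative story: Y rises from 0.5 at s = 0 to a peak near s ≈ 0.2, then collapses as overload takes over.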

These are the cases I’ve found to be relatively tractable so far. They all have the right qualitative pattern: Increasing stress increases productivity for a while, then begins decreasing it once overload risk becomes too high. They also show a general pattern where people who are innately highly activated (neurotic?) are much more likely to overload and thus much more sensitive to stress.

The stochastic overload model

Mar 12 JDN 2460016

The next few posts are going to be a bit different, a bit more advanced and technical than usual. This is because, for the first time in several months at least, I am actually working on what could be reasonably considered something like theoretical research.

I am writing it up in the form of blog posts, because actually writing a paper is still too stressful for me right now. This also forces me to articulate my ideas in a clearer and more readable way, rather than dive directly into a morass of equations. It also means that even if I do never actually get around to finishing a paper, the idea is out there, and maybe someone else could make use of it (and hopefully give me some of the credit).

I’ve written previously about the Yerkes-Dodson effect: On cognitively-demanding tasks, increased stress increases performance, but only to a point, after which it begins decreasing it again. The effect is well-documented, but the mechanism is poorly understood.

I am currently on the wrong side of the Yerkes-Dodson curve, which is why I’m too stressed to write this as a formal paper right now. But that also gave me some ideas about how it may work.

I have come up with a simple but powerful mathematical model that may provide a mechanism for the Yerkes-Dodson effect.

This model is clearly well within the realm of a behavioral economic model, but it is also closely tied to neuroscience and cognitive science.

I call it the stochastic overload model.

First, a metaphor: Consider an engine, which can run faster or slower. If you increase its RPMs, it will output more power, and provide more torque—but only up to a certain point. Eventually it hits a threshold where it will break down, or even break apart. In real engines, we often include safety systems that force the engine to shut down as it approaches such a threshold.

I believe that human brains function on a similar principle. Stress increases arousal, which activates a variety of processes via the sympathetic nervous system. This activation improves performance on both physical and cognitive tasks. But it has a downside; especially on cognitively demanding tasks that require sustained effort, I hypothesize that too much sympathetic activation can result in a kind of system overload, where your brain can no longer handle the stress and processes are forced to shut down.

This shutdown could be brief—a few seconds, or even a fraction of a second—or it could be prolonged—hours or days. That might depend on just how severe the stress is, or how much of your brain it requires, or how prolonged it is. For purposes of the model, this isn’t vital. It’s probably easiest to imagine it being a relatively brief, localized shutdown of a particular neural pathway. Then, your performance in a task is summed up over many such pathways over a longer period of time, and by the law of large numbers your overall performance is essentially the average performance of all your brain systems.

That’s the “overload” part of the model. Now for the “stochastic” part.

Let’s say that, in the absence of stress, your brain has a certain innate level of sympathetic activation, which varies over time in an essentially chaotic, unpredictable—stochastic—sort of way. It is never really completely deactivated, and may even have some chance of randomly overloading itself even without outside input. (Actually, a potential role in the model for the personality trait neuroticism is an innate tendency toward higher levels of sympathetic activation in the absence of outside stress.)

Let’s say that this innate activation is x, which follows some kind of known random distribution F(x).

For simplicity, let’s also say that added stress s adds linearly to your level of sympathetic activation, so your overall level of activation is x + s.

For simplicity, let’s say that activation ranges between 0 and 1, where 0 is no activation at all and 1 is the maximum possible activation and triggers overload.

I’m assuming that if a pathway shuts down from overload, it doesn’t contribute at all to performance on the task. (You can assume it’s only reduced performance, but this adds complexity without any qualitative change.)

Since sympathetic activation improves performance, but can result in overload, your overall expected performance in a given task can be computed as the product of two terms:

[expected value of x + s, provided overload does not occur] * [probability overload does not occur]

E[x + s | x + s < 1] P[x + s < 1]

The first term can be thought of as the incentive effect: Higher stress promotes more activation and thus better performance.

The second term can be thought of as the overload effect: Higher stress also increases the risk that activation will exceed the threshold and force shutdown.

This equation actually turns out to have a remarkably elegant form as an integral (and here’s where I get especially technical and mathematical):

Y = \int_{0}^{1-s} (x+s) dF(x)

The integral subsumes both the incentive effect and the overload effect into one term; you can also think of the +s in the integrand as the incentive effect and the 1-s in the limit of integration as the overload effect.

For the uninitiated, this is probably just Greek. So let me show you some pictures to help with your intuition. These are all freehand sketches, so let me apologize in advance for my limited drawing skills. Think of this as like Arthur Laffer’s famous cocktail napkin.

Suppose that, in the absence of outside stress, your innate activation follows a distribution like this (this could be a normal or logit PDF; as I’ll talk about next week, logit is far more tractable):

As I start adding stress, this shifts the distribution upward, toward increased activation:

Initially, this will improve average performance.

But at some point, increased stress actually becomes harmful, as it increases the probability of overload.

And eventually, the probability of overload becomes so high that performance becomes worse than it was with no stress at all:

The result is that overall performance, as a function of stress, looks like an inverted U-shaped curve—the Yerkes-Dodson curve:
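If you’d rather see that in code than in freehand sketches, here is a minimal numerical sketch in Python. I’m assuming a logistic-shaped density for the innate activation purely for illustration (centered at 0.3 with scale 0.1; those numbers are arbitrary), and all the names are mine:

```python
import math

def logistic_pdf(x, mu=0.3, scale=0.1):
    """Bell-shaped density for innate activation (illustrative parameters)."""
    e = math.exp(-(x - mu) / scale)
    return e / (scale * (1 + e) ** 2)

def expected_output(stress, pdf=logistic_pdf, steps=5_000):
    """Y(s): midpoint-rule integral of (x + s) pdf(x) over [0, 1 - s].
    Pathways whose activation x + s reaches 1 overload and contribute nothing."""
    T = 1 - stress
    h = T / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += (x + stress) * pdf(x) * h
    return total

# Tracing Y over a grid of stress levels gives an inverted U:
curve = [expected_output(s / 20) for s in range(20)]
```

The curve rises while added stress mostly boosts activation, then falls as more and more of the distribution is pushed past the overload threshold.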

The precise shape of this curve depends on the distribution that we use for the innate activation, which I will save for next week’s post.

Where is the money going in academia?

Feb 19 JDN 2459995

A quandary for you:

My salary is £41,000.

Annual tuition for a full-time full-fee student in my department is £23,000.

I teach roughly the equivalent of one full-time course (about 1/2 of one and 1/4 of two others; this is typically counted as “teaching 3 courses”, but if I used that figure, it would underestimate the number of faculty needed).

Each student takes about 5 or 6 courses at a time.

Why do I have 200 students?

If you multiply this out, the 200 students I teach, divided by the 6 instructors they have at one time, times the £23,000 they are paying… I should be bringing in over £760,000 for the university. Why am I paid only 5% of that?

Granted, there are other costs a university must bear aside from paying instructors. There are facilities, and administration, and services. And most of my students are not full-fee paying; that £23,000 figure really only applies to international students.

Students from Scotland pay only £1,820, but there aren’t very many of them, and public funding is supposed to make up that difference. Even students from the rest of the UK pay £9,250. And surely the average tuition paid has got to be close to that? Yet if we multiply that out, £9,000 times 200 divided by 6, we’re still looking at £300,000. So I’m still getting only 14%.
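The arithmetic above is simple enough to sketch in a few lines of Python (the figures are the ones quoted in this post):

```python
def revenue_per_instructor(students=200, courses_per_student=6, tuition=23_000):
    """Tuition revenue attributable to one full-time instructor, splitting
    each student's fees evenly across the courses they take."""
    return students * tuition / courses_per_student

international = revenue_per_instructor()             # about £767,000
rest_of_uk = revenue_per_instructor(tuition=9_250)   # about £308,000
salary_share = 41_000 / rest_of_uk                   # about 13%
```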

Where is the rest going?

This isn’t specific to my university by any means. It seems to be a global phenomenon. The best data on this seems to be from the US.

According to salary.com, the median salary for an adjunct professor in the US is about $63,000. This actually sounds high, given what I’ve heard from other entry-level faculty. But okay, let’s take that as our figure. (My pay is below this average, though how much depends upon the strength of the pound against the dollar. Currently the pound is weak, so quite a bit.)

Yet average tuition for out-of-state students at public college is $23,000 per year.

This means that an adjunct professor in the US with 200 students takes in $760,000 but receives $63,000. Where does that other $700,000 go?

If you think that it’s just a matter of paying for buildings, service staff, and other costs of running a university, consider this: It wasn’t always this way.

Since 1970, inflation-adjusted salaries for US academic faculty at public universities have risen a paltry 3.1%. In other words, basically not at all.

This is considerably slower than the growth of real median household income, which has risen almost 40% in that same time.

Over the same interval, nominal tuition has risen by over 2000%; adjusted for inflation, this is a still-staggering increase of 250%.

In other words, over the last 50 years, college has gotten three times as expensive, but faculty are still paid basically the same. Where is all this extra money going?

Part of the explanation is that public funding for colleges has fallen over time, and higher tuition partly makes up the difference. But private school tuition has risen just as fast, and their faculty salaries haven’t kept up either.

In their annual budget report, the University of Edinburgh proudly declares that their income increased by 9% last year. Let me assure you, my salary did not. (In fact, inflation-adjusted, my salary went down.) And their EBITDA—earnings before interest, taxes, depreciation, and amortization—was £168 million. Of that, £92 million was lost to interest and depreciation, but they don’t pay taxes at all, so their real net income was about £76 million. In the report, they include price changes of their endowment and pension funds to try to make this number look smaller, ending up with only £37 million, but that’s basically fiction; these are just stock market price drops, and they will bounce back.

Using similar financial alchemy, they’ve been trying to cut our pensions lately, because they say they “are too expensive” (because the stock market went down—nevermind that it’ll bounce back in a year or two). Fortunately, the unions are fighting this pretty hard. I wish they’d also fight harder to make them put people like me on the tenure track.

Had that £76 million been distributed evenly between all 5,000 of us faculty, we’d each get an extra £15,200.

Well, then, that solves part of the mystery in perhaps the most obvious, corrupt way possible: They’re literally just hoarding it.

And Edinburgh is far from the worst offender here. No, that would be Harvard, who are sitting on over $50 billion in assets. Since they have 21,000 students, that is over $2 million per student. With even a moderate return on its endowment, Harvard wouldn’t need to charge tuition at all.

But even then, raising my salary to £56,000 wouldn’t explain why I need to teach 200 students. Even that is still only 19% of the £300,000 those students are bringing in. But hey, then at least the primary service those students are here for might actually account for one-fifth of what they’re paying!

Now let’s consider administrators. Median salary for a university administrator in the US is about $138,000—twice what adjunct professors make.

Since 1970, that same time interval when faculty salaries were rising a pitiful 3% and tuition was rising a staggering 250%, how much did chancellors’ salaries increase? Over 60%.

Of course, the number of administrators is not fixed. You might imagine that with technology allowing us to automate a lot of administrative tasks, the number of administrators could be reduced over time. If that’s what you thought happened, you would be very, very wrong. The number of university administrators in the US has more than doubled since the 1980s. This is far faster growth than the number of students—and quite frankly, why should the number of administrators even grow with the number of students? There is a clear economy of scale here, yet it doesn’t seem to matter.

Combine those two facts: 60% higher pay times twice as many administrators means that universities now spend at least 3 times as much on administration as they did 50 years ago. (Why, that’s just about the proportional increase in tuition! Coincidence? I think not.)

Edinburgh isn’t even so bad in this regard. They have 6,000 administrative staff versus 5,000 faculty. If that already sounds crazy—more admins than instructors?—consider that the University of Michigan has 7,000 faculty but 19,000 administrators.

Michigan is hardly exceptional in this regard: Illinois UC has 2,500 faculty but nearly 8,000 administrators, while Ohio State has 7,300 faculty and 27,000 administrators. UCLA is even worse, with only 4,000 faculty but 26,000 administrators—a ratio of more than 6 to 1. It’s not the UC system in general, though: My (other?) alma mater of UC Irvine somehow supports 5,600 faculty with only 6,400 administrators. Yes, that’s right; compared to UCLA, UCI has 40% more faculty but 75% fewer administrators. (As far as students? UCLA has 47,000 while UCI has 36,000.)

At last, I think we’ve solved the mystery! Where is all the money in academia going? Administrators.

They keep hiring more and more of them, and paying them higher and higher salaries. Meanwhile, they stop hiring tenure-track faculty and replace them with adjuncts that they can get away with paying less. And then, whatever they manage to save that way, they just squirrel away into the endowment.

A common right-wing talking point is that more institutions should be “run like a business”. Well, universities seem to have taken that to heart. Overpay your managers, underpay your actual workers, and pocket the savings.

I’m old enough to be President now.

Jan 22 JDN 2459967

When this post goes live, I will have passed my 35th birthday. This is old enough to be President of the United States, at least by law. (In practice, no POTUS has been less than 42.)

Not that I will ever be President. I have neither the wealth nor the charisma to run any kind of national political campaign. I might be able to get elected to some kind of local office at some point, like a school board or a city water authority. But I’ve been eligible to run for such offices for quite a while now, and haven’t done so; nor do I feel particularly inclined at the moment.

No, the reason this birthday feels so significant is the milestone it represents. By this age, most people have spouses, children, careers. I have a spouse. I don’t have kids. I sort of have a career.

I have a job, certainly. I work for relatively decent pay. Not excellent, not what I was hoping for with a PhD in economics, but enough to live on (anywhere but an overpriced coastal metropolis). But I can’t really call that job a career, because I find large portions of it unbearable and I have absolutely no job security. In fact, I have the exact opposite: My job came with an explicit termination date from the start. (Do the people who come up with these short-term postdoc positions understand how that feels? It doesn’t seem like they do.)

I missed the window to apply for academic jobs that start next year. If I were happy here, this would be fine; I still have another year left on my contract. But I’m not happy here, and that is a grievous understatement. Working here is clearly the most important situational factor contributing to my ongoing depression. So I really ought to be applying to every alternative opportunity I can find—but I can’t find the will to try it, or the self-confidence to believe that my attempts could succeed if I did.

Then again, I’m not sure I should be applying to academic positions at all. If I did apply to academic positions, they’d probably be teaching-focused ones, since that’s the one part of my job I’m actually any good at. I’ve more or less written off applying to major research institutions; I don’t think I would get hired anyway, and even if I did, the pressure to publish is so unbearable that I think I’d be just as miserable there as I am here.

On the other hand, I can’t be sure that I would be so miserable even at another research institution; maybe with better mentoring and better administration I could be happy and successful in academic research after all.

The truth is, I really don’t know how much of my misery is due to academia in general, versus the British academic system, versus Edinburgh as an institution, versus starting work during the pandemic, versus the experience of being untenured faculty, versus simply my own particular situation. I don’t know if working at another school would be dramatically better, a little better, or just the same. (If it were somehow worse—which frankly seems hard to arrange—I would literally just quit immediately.)

I guess if the University of Michigan offered me an assistant professor job right now, I would take it. But I’m confident enough that they wouldn’t offer it to me that I can’t see the point in applying. (Besides, I missed the application windows this year.) And I’m not even sure that I would be happy there, despite the fact that just a few years ago I would have called it a dream job.

That’s really what I feel most acutely about turning 35: The shattering of dreams.

I thought I had some idea of how my life would go. I thought I knew what I wanted. I thought I knew what would make me happy.

The weirdest part is that it isn’t even that different from how I’d imagined it. If you’d asked me 10 or even 20 years ago what my career would be like at 35, I probably would have correctly predicted that I would have a PhD and be working at a major research university. 10 years ago I would have correctly expected it to be a PhD in economics; 20, I probably would have guessed physics. In both cases I probably would have thought I’d be tenured by now, or at least on the tenure track. But a postdoc or adjunct position (this is sort of both?) wouldn’t have been utterly shocking, just vaguely disappointing.

The biggest error by my past self was thinking that I’d be happy and successful in this career, instead of barely, desperately hanging on. I thought I’d have published multiple successful papers by now, and be excited to work on a new one. I imagined I’d also have published a book or two. (The fact that I self-published a nonfiction book at 16 but haven’t published any nonfiction ever since would be particularly baffling to my 15-year-old self, and is particularly depressing to me now.) I imagined myself becoming gradually recognized as an authority in my field, not languishing in obscurity; I imagined myself feeling successful and satisfied, not hopeless and depressed.

It’s like the dark Mirror Universe version of my dream job. It’s so close to what I thought I wanted, but it’s also all wrong. I finally get to touch my dreams, and they shatter in my hands.

When you are young, birthdays are a sincere cause for celebration; you look forward to the new opportunities the future will bring you. I seem to be now at the age where it no longer feels that way.

Good enough is perfect, perfect is bad

Jan 8 JDN 2459953

Not too long ago, I read the book How to Keep House While Drowning by KC Davis, which I highly recommend. It offers a great deal of useful and practical advice, especially for someone neurodivergent and depressed living through an interminable pandemic (which I am, but honestly, odds are, you may be too). And to say it is a quick and easy read is actually an unfair understatement; it is explicitly designed to be readable in short bursts by people with ADHD, and it has a level of accessibility that most other books don’t even aspire to and I honestly hadn’t realized was possible. (The extreme contrast between this and academic papers is particularly apparent to me.)

One piece of advice that really stuck with me was this: Good enough is perfect.

At first, it sounded like nonsense; no, perfect is perfect, good enough is just good enough. But in fact there is a deep sense in which it is absolutely true.

Indeed, let me make it a bit stronger: Good enough is perfect; perfect is bad.

I doubt Davis thought of it in these terms, but this is a concise, elegant statement of the principles of bounded rationality. Sometimes it can be optimal not to optimize.

Suppose that you are trying to optimize something, but you have limited computational resources in which to do so. This is actually not a lot for you to suppose—it’s literally true of basically everyone basically every moment of every day.

But let’s make it a bit more concrete, and say that you need to find the solution to the following math problem: “What is the product of 2419 and 1137?” (Pretend you don’t have a calculator, as it would trivialize the exercise. I thought about using a problem you couldn’t do with a standard calculator, but I realized that would also make it much weirder and more obscure for my readers.)

Now, suppose that there are some quick, simple ways to get reasonably close to the correct answer, and some slow, difficult ways to actually get the answer precisely.

In this particular problem, the former is to approximate: What’s 2500 times 1000? 2,500,000. So it’s probably about 2,500,000.

Or we could approximate a bit more closely: Say 2400 times 1100. That’s 24 times 11 (times 10,000), and 24 times 11 is 2 times 12 times 11; 12 times 11 is 110 plus 22, which is 132; so we get 2 times 132, or 264 (times 10,000): that is, 2,640,000.

Or, we could actually go through all the steps to do the full multiplication (remember I’m assuming you have no calculator), multiply, carry the 1s, add all four sums, re-check everything and probably fix it because you messed up somewhere; and then eventually you will get: 2,750,403.

So, our really fast method was only off by about 10%. Our moderately-fast method was only off by 4%. And both of them were a lot faster than getting the exact answer by hand.

Which of these methods you’d actually want to use depends on the context and the tools at hand. If you had a calculator, sure, get the exact answer. Even if you didn’t, but you were balancing the budget for a corporation, I’m pretty sure they’d care about that extra $110,403. (Then again, they might not care about the $403 or at least the $3.) But just as an intellectual exercise, you really didn’t need to do anything; the optimal choice may have been to take my word for it. Or, if you were at all curious, you might be better off choosing the quick approximation rather than the precise answer. Since nothing of any real significance hinged on getting that answer, it may be simply a waste of your time to bother finding it.
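For the record, here is the worked example above in code form (Python):

```python
exact = 2419 * 1137          # the slow, by-hand answer: 2,750,403
rough = 2500 * 1000          # one-step approximation
closer = 2400 * 1100         # two-step approximation

def rel_error(approx):
    """Relative error of an approximation against the exact product."""
    return abs(exact - approx) / exact

# rough is off by about 9%, closer by about 4%.
```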

This is of course a contrived example. But it’s not so far from many choices we make in real life.

Yes, if you are making a big choice—which job to take, what city to move to, whether to get married, which car or house to buy—you should get a precise answer. In fact, I make spreadsheets with formal utility calculations whenever I make a big choice, and I haven’t regretted it yet. (Did I really make a spreadsheet for getting married? You’re damn right I did; there were a lot of big financial decisions to make there—taxes, insurance, the wedding itself! I didn’t decide whom to marry that way, of course; but we always had the option of staying unmarried.)

But most of the choices we make from day to day are small choices: What should I have for lunch today? Should I vacuum the carpet now? What time should I go to bed? In the aggregate they may all add up to important things—but each one of them really won’t matter that much. If you were to construct a formal model to optimize your decision of everything to do each day, you’d spend your whole day doing nothing but constructing formal models. Perfect is bad.

In fact, even for big decisions, you can’t really get a perfect answer. There are just too many unknowns. Sometimes you can spend more effort gathering additional information—but that’s costly too, and sometimes the information you would most want simply isn’t available. (You can look up the weather in a city, visit it, ask people about it—but you can’t really know what it’s like to live there until you do.) Even those spreadsheet models I use to make big decisions contain error bars and robustness checks, and if, even after investing a lot of effort trying to get precise results, I still find two or more choices just can’t be clearly distinguished to within a good margin of error, I go with my gut. And that seems to have been the best choice for me to make. Good enough is perfect.

I think that being gifted as a child trained me to be dangerously perfectionist as an adult. (Many of you may find this familiar.) When it came to solving math problems, or answering quizzes, perfection really was an attainable goal a lot of the time.

As I got older and progressed further in my education, maybe getting every answer right was no longer feasible; but I still could get the best possible grade, and did, in most of my undergraduate classes and all of my graduate classes. To be clear, I’m not trying to brag here; if anything, I’m a little embarrassed. What it mainly shows is that I had learned the wrong priorities. In fact, one of the main reasons why I didn’t get a 4.0 average in undergrad is that I spent a lot more time back then writing novels and nonfiction books, which to this day I still consider my most important accomplishments and grieve that I’ve not (yet?) been able to get them commercially published. I did my best work when I wasn’t trying to be perfect. Good enough is perfect; perfect is bad.

Now here I am on the other side of the academic system, trying to carve out a career, and suddenly, there is no perfection. When my exam is being graded by someone else, there is a way to get the most points. When I’m the one grading the exams, there is no “correct answer” anymore. There is no one scoring me to see if I did the grading the “right way”—and so, no way to be sure I did it right.

Actually, here at Edinburgh, there are other instructors who moderate grades and often require me to revise them, which feels a bit like “getting it wrong”; but it’s really more like we had different ideas of what the grade curve should look like (not to mention US versus UK grading norms). There is no longer an objectively correct answer the way there is for, say, the derivative of x^3, the capital of France, or the definition of comparative advantage. (Or, one question I got wrong on an undergrad exam because I had zoned out of that lecture to write a book on my laptop: Whether cocaine is a dopamine reuptake inhibitor. It is. And the fact that I still remember that because I got it wrong over a decade ago tells you a lot about me.)

And then when it comes to research, it’s even worse: What even constitutes “good” research, let alone “perfect” research? What would be most scientifically rigorous isn’t what journals would be most likely to publish—and without much bigger grants, I can afford neither. I find myself longing for the research paper that will be so spectacular that top journals have to publish it, removing all risk of rejection and failure—in other words, perfect.

Yet such a paper plainly does not exist. Even if I were to do something that would win me a Nobel or a Fields Medal (this is, shall we say, unlikely), it probably wouldn’t be recognized as such immediately—a typical Nobel isn’t awarded until 20 or 30 years after the work that spawned it, and while Fields Medals are faster, they’re by no means instant or guaranteed. In fact, a lot of ground-breaking, paradigm-shifting research was originally relegated to minor journals because the top journals considered it too radical to publish.

Or I could try to do something trendy—feed into DSGE or GTFO—and try to get published that way. But I know my heart wouldn’t be in it, and so I’d be miserable the whole time. In fact, because it is neither my passion nor my expertise, I probably wouldn’t even do as good a job as someone who really buys into the core assumptions. I already have trouble speaking frequentist sometimes: Are we allowed to say “almost significant” for p = 0.06? Maximizing the likelihood is still kosher, right? Just so long as I don’t impose a prior? But speaking DSGE fluently and sincerely? I’d have an easier time speaking in Latin.

What I know—on some level at least—I ought to be doing is finding the research that I think is most worthwhile, given the resources I have available, and then getting it published wherever I can. Or, in fact, I should probably constrain a little by what I know about journals: I should do the most worthwhile research that is feasible for me and has a serious chance of getting published in a peer-reviewed journal. It’s sad that those two things aren’t the same, but they clearly aren’t. This constraint binds, and its Lagrange multiplier is measured in humanity’s future.

But one thing is very clear: By trying to find the perfect paper, I have floundered and, for the last year and a half, not written any papers at all. The right choice would surely have been to write something.

Because good enough is perfect, and perfect is bad.

The case against phys ed

Dec 4 JDN 2459918

If I want to stop someone from engaging in an activity, what should I do? I could tell them it’s wrong, and if they believe me, that would work. But what if they don’t believe me? Or I could punish them for doing it, and as long as I can continue to do that reliably, that should deter them from doing it. But what happens after I remove the punishment?

If I really want to make someone not do something, the best way to accomplish that is to make them not want to do it. Make them dread doing it. Make them hate the very thought of it. And to accomplish that, a very efficient method would be to first force them to do it, but make that experience as miserable and humiliating as possible. Give them a wide variety of painful or outright traumatic experiences that are directly connected with the undesired activity, to carry with them for the rest of their life.

This is precisely what physical education does, with regard to exercise. Phys ed is basically optimized to make people hate exercise.

Oh, sure, some students enjoy phys ed. These are the students who are already athletic and fit, who already engage in regular exercise and enjoy doing so. They may enjoy phys ed, may even benefit a little from it—but they didn’t really need it in the first place.

The kids who need more physical activity are the kids who are obese, or have asthma, or suffer from various other disabilities that make exercising difficult and painful for them. And what does phys ed do to those kids? It makes them compete in front of their peers at various athletic tasks at which they will inevitably fail and be humiliated.

Even the kids who are otherwise healthy but just don’t get enough exercise will go into phys ed class at a disadvantage, and instead of being carefully trained to improve their skills and physical condition at their own level, they will be publicly shamed by their peers for their inferior performance.

I know this, because I was one of those kids. I have exercise-induced bronchoconstriction, a lung condition similar to asthma (actually there’s some debate as to whether it should be considered a form of asthma), in which intense aerobic exercise causes the airways of my lungs to become constricted and inflamed, making me unable to get enough air to continue.

It’s really quite remarkable I wasn’t diagnosed with this as a child; I actually once collapsed while running in gym class, and all they thought to do at the time was give me water and let me rest for the remainder of the class. Nobody thought to call the nurse. I was never put on a beta agonist or an inhaler. (In fact at one point I was put on a beta blocker for my migraines; I now understand why I felt so fatigued when taking it—it was literally the opposite of the drug my lungs needed.)

Actually it’s been a few years since I had an attack. This is of course partly due to me generally avoiding intense aerobic exercise; but even when I do get intense exercise, I rarely seem to get bronchoconstriction attacks. My working hypothesis is that the norepinephrine reuptake inhibition of my antidepressant acts like a beta agonist; both drugs mimic norepinephrine.

But as a child, I got such attacks quite frequently; and even when I didn’t, my overall athletic performance was always worse than most of the other kids. They knew it, I knew it, and while only a few actively tried to bully me for it, none of the others did anything to make me feel better. So gym class was always a humiliating and painful experience that I came to dread.

As a result, as soon as I got out of school and had my own autonomy in how to structure my own life, I basically avoided exercise whenever I could. Even knowing that it was good for me—really, exercise is ridiculously good for you; it honestly doesn’t even make sense to me how good it is for you—I could rarely get myself to actually go out and exercise. I certainly couldn’t do it with anyone else; sometimes, if I was very disciplined, I could manage to maintain an exercise routine by myself, as long as there was no one else there who could watch me, judge me, or compare themselves to me.

In fact, I’d probably have avoided exercise even more, had I not also had some more positive experiences with it outside of school. I trained in martial arts for a few years, getting almost to a black belt in tae kwon do; I quit precisely when it started becoming very competitive and thus began to feel humiliated again when I performed worse than others. Part of me wishes I had stuck with it long enough to actually get the black belt; but the rest of me knows that even if I’d managed it, I would have been miserable the whole time and it probably would have made me dread exercise even more.

The details of my story are of course individual to me; but the general pattern is disturbingly common. A kid does poorly in gym class, or even suffers painful attacks of whatever disabling condition they have, but nobody sees it as a medical problem; they just see the kid as weak and lazy. Or even if the adults are sympathetic, the other kids aren’t; they just see a peer who performed worse than them, and they have learned by various subtle (and not-so-subtle) cultural pressures that anyone who performs worse at a culturally-important task is worthy of being bullied and shunned.

Even outside the directly competitive environment of sports, the very structure of a phys ed class, where a large group of students are all expected to perform the same athletic tasks and can directly compare their performance against each other, invites this kind of competition. Kids can see, right in their faces, who is doing better and who is doing worse. And our culture is astonishingly bad at teaching children (or anyone else, for that matter) how to be sympathetic to others who perform worse. Worse performance is worse character. Being bad at running, jumping and climbing is just being bad.

Part of the problem is that school administrators seem to see physical education as a training and selection regimen for their sports programs. (In fact, some of them seem to see their entire school as existing to serve their sports programs.) Here is a UK government report bemoaning the fact that “only a minority of schools play competitive sport to a high level”, apparently not realizing that this is necessarily true because high-level sports performance is a relative concept. Only one team can win the championship each year. Only 10% of students will ever be in the top 10% of athletes. No matter what. Anything else is literally mathematically impossible. We do not live in Lake Wobegon; not all the children can be above average.

There are good phys ed programs out there. They have highly-trained instructors, and they focus on matching tasks to each student's own skill level, as well as actually educating them—teaching them about anatomy and physiology rather than just making them run laps. Indeed, the one phys ed class I ever enjoyed was an anatomy and physiology class; we didn't do any physical exercise in it at all. But well-taught phys ed classes are clearly the exception, not the norm.

Of course, it could be that some students actually benefit from phys ed, perhaps even enough to offset the harms to people like me. (Though then the question should be asked whether phys ed should be compulsory for all students—if an intervention helps some and hurts others, maybe only give it to the ones it helps?) But I know very few people who actually described their experiences of phys ed class as positive ones. While many students describe their experiences of math class in similarly-negative terms (which is also a problem with how math classes are taught), I definitely do know people who actually enjoyed and did well in math class. Still, my sample is surely biased—it consists mostly of people similar to me, and I hated gym and loved math. So let's look at the actual data.

Or rather, I’d like to, but there isn’t that much out there. The empirical literature on the effects of physical education is surprisingly limited.

A lot of analyses of physical education simply take as axiomatic that more phys ed means more exercise, and so they use the—overwhelming, unassailable—evidence that exercise is good to support an argument for more phys ed classes. But they never seem to stop and take a look at whether phys ed classes are actually making kids exercise more, particularly once those kids grow up and become adults.

In fact, the surprisingly weak correlations between higher physical activity and better mental health among adolescents (despite really strong correlations in adults) could be because exercise among adolescents is largely coerced via phys ed, and the misery of being coerced into physical humiliation counteracts any benefits that might have been obtained from increased exercise.

The best long-term longitudinal study I can find did show positive effects of phys ed on long-term health, though by a rather odd mechanism: Women exercised more as adults if they had phys ed in primary school, but men didn’t; they just smoked less. And this study was back in 1999, studying a cohort of adults who had phys ed quite a long time ago, when it was better funded.

The best experiment I can find actually testing whether phys ed programs work used a very carefully designed phys ed program with features that it would be really nice to have, but that the vast majority of actual gym classes lack: carefully structured activities with specific developmental goals, and, perhaps most importantly, children taught to track and evaluate their own individual progress rather than evaluate themselves in comparison to others.

And even then, the effects are not all that large. The physical activity scores of the treatment group rose from 932 minutes per week to 1108 minutes per week for first-graders, and from 1212 to 1454 for second-graders. But the physical activity scores of the control group rose from 906 to 996 for first-graders, and 1105 to 1211 for second-graders. So of the 176 minutes per week gained by first-graders, 90 would have happened anyway. Likewise, of the 242 minutes per week gained by second-graders, 106 were not attributable to the treatment. Only about half of the gains were due to the intervention, and they amount to about a 10% increase in overall physical activity. It also seems a little odd to me that the control groups both started worse off than the experimental groups and both groups gained; it raises some doubts about the randomization.

The researchers also measured psychological effects, and these effects are even smaller and honestly a little weird. On a scale of “somatic anxiety” (basically, how bad do you feel about your body’s physical condition?), this well-designed phys ed program only reduced scores in the treatment group from 4.95 to 4.55 among first-graders, and from 4.50 to 4.10 among second-graders. Seeing as the scores for second-graders also fell in the control group from 4.63 to 4.45, only about half of the observed reduction—0.2 points on a 10-point scale—is really attributable to the treatment. And the really baffling part is that the measure of social anxiety actually fell more, which makes me wonder if they’re really measuring what they think they are.
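The arithmetic in the last two paragraphs is just a difference-in-differences: subtract the control group's change from the treatment group's change, so that whatever "would have happened anyway" is netted out. A minimal sketch (the `diff_in_diff` helper is my own illustration, not anything from the study; the numbers are the ones quoted above):

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Change in the treatment group minus change in the control group."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Physical activity (minutes per week)
gain_g1 = diff_in_diff(932, 1108, 906, 996)     # first-graders: 86
gain_g2 = diff_in_diff(1212, 1454, 1105, 1211)  # second-graders: 136

# Share of each raw gain attributable to the intervention
share_g1 = gain_g1 / (1108 - 932)   # 86/176, about half
share_g2 = gain_g2 / (1454 - 1242 + 30)  # placeholder avoided; see below

# Somatic anxiety, second-graders (10-point scale)
anx_g2 = diff_in_diff(4.50, 4.10, 4.63, 4.45)   # about -0.22
```

(The `share_g2` line above should of course just be `gain_g2 / (1454 - 1212)`, i.e. 136/242, a bit over half.) The point of the exercise: the raw before-and-after gains overstate the program's effect by roughly a factor of two, which is exactly why the control group matters.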

Clearly, exercise is good. We should be trying to get people to exercise more. Actually, this is more important than almost anything else we could do for public health, with the possible exception of vaccinations. All of these campaigns trying to get kids to lose weight should be removed and replaced with programs to get them to exercise more, because losing weight doesn’t benefit health and exercising more does.

But I am not convinced that physical education as we know it actually makes people exercise more. In the short run, it forces kids to exercise, when there were surely ways to get kids to exercise that didn’t require such coercion; and in the long run, it gives them painful, even traumatic memories of exercise that make them not want to continue it once they get older. It’s too competitive, too one-size-fits-all. It doesn’t account for innate differences in athletic ability or match challenge levels to skill levels. It doesn’t help kids cope with having less ability, or even teach kids to be compassionate toward others with less ability than them.

And it makes kids miserable.