How is the economy doing this well?

Apr 14 JDN 2460416

We are living in a very weird time, economically. The COVID pandemic created huge disruptions throughout our economy, from retail shops closing to shortages in shipping containers. The result was a severe recession with the worst unemployment since the Great Depression.

Now, a few years later, we have fully recovered.

Here’s a graph from FRED showing our unemployment and inflation rates since 1990 [technical note: I’m using the urban CPI; there are a few other inflation measures you could use instead, but they look much the same]:

Inflation fluctuates pretty quickly, while unemployment moves much slower.

There are a lot of things we can learn from this graph:

  1. Before COVID, we had pretty low inflation; from 1990 to 2019, inflation averaged about 2.4%, just over the Fed’s 2% target.
  2. Before COVID, we had moderate to high unemployment; it rarely went below 5%, and for several years after the 2008 crash it was over 7%—which is why we called it the Great Recession.
  3. The only times we actually had negative inflation—deflation—were during recessions, and coincided with high unemployment; so, no, we really don’t want prices to come down.
  4. During COVID, we had a massive spike in unemployment up to almost 15%, but then it came back down much more rapidly than it had in the Great Recession.
  5. After COVID, there was a surge in inflation, peaking at almost 10%.
  6. That inflation surge was short-lived; by the end of 2022 inflation was back down to 4%.
  7. Unemployment now stands at 3.8% while inflation is at 2.7%.

What I really want to emphasize right now is point 7, so let me repeat it:

Unemployment now stands at 3.8% while inflation is at 2.7%.

Yes, technically, 2.7% is above our inflation target. But honestly, I’m not sure it should be. I don’t see any particular reason to think that 2% is optimal, and based on what we’ve learned from the Great Recession, I actually think 3% or even 4% would be perfectly reasonable inflation targets. No, we don’t want to be going into double-digits (and we certainly don’t want true hyperinflation); but 4% inflation really isn’t a disaster, and we should stop treating it like it is.

2.7% inflation is actually pretty close to the 2.4% inflation we’d been averaging from 1990 to 2019. So I think it’s fair to say that inflation is back to normal.

But the really wild thing is that unemployment isn’t back to normal: It’s much better than that.

To get some more perspective on this, let’s extend our graph backward all the way to 1950:

Inflation has been much higher than it is now. In the late 1970s, it was consistently as high as it got during the post-COVID surge. But outside of recessions, it has rarely been substantially lower than it is now; a little above the 2% target really seems to be what stable, normal inflation looks like in the United States.

On the other hand, unemployment is almost never this low. It was for a few years in the early 1950s and the late 1960s; but otherwise, it has always been higher—and sometimes much higher. It did not dip below 5% for the entire period from 1971 to 1994.

Intro macroeconomics courses hammer the Phillips Curve into us: it supposedly says that unemployment is inversely related to inflation, so that it’s impossible to have both low inflation and low unemployment.

But we’re looking at it, right now. It’s here, right in front of us. What wasn’t supposed to be possible has now been achieved. E pur si muove.

There was supposed to be this terrible trade-off between inflation and unemployment, leaving our government with the stark dilemma of either letting prices surge or letting millions remain out of work. I had always been on the “inflation” side: I thought that rising prices were far less of a problem than people out of work.

But we just learned that the entire premise was wrong.

You can have both. You don’t have to choose.

Right here, right now, we have both. All we need to do is keep doing whatever we’re doing.

One response might be: what if we can’t? What if this is unsustainable? (Then again, conservatives never seemed terribly concerned about sustainability before….)

It’s worth considering. One thing that doesn’t look so great now is the federal deficit. It got extremely high during COVID, and it’s still pretty high now. But as a proportion of GDP, it isn’t anywhere near as high as it was during WW2, and we certainly made it through that all right:

So, yeah, we should probably see if we can bring the budget back to balanced—probably by raising taxes. But this isn’t an urgent problem. We have time to sort it out. 15% unemployment was an urgent problem—and we fixed it.

In fact, in some ways the economy is doing even better now than it looks. Unemployment for Black people has never been this low in the entire time we’ve been keeping track of it:

Black people had basically learned to live with 8% or 9% unemployment as if it were normal; but now, for the first time ever—ever—their unemployment rate is down to only 5%.

This isn’t because people are dropping out of the labor force. Broad unemployment, which includes people marginally attached to the labor force, people employed part-time not by choice, and people who gave up looking for work, is also at historic lows, despite surging to almost 23% during COVID:

In fact, overall employment among people 25-54 years old (considered “prime age”—old enough to not be students, young enough to not be retired) is nearly the highest it has ever been, and radically higher than it was before the 1980s (because women entered the workforce):

So this is not an illusion: More Americans really are working now. And employment has become more inclusive of women and minorities.

I really don’t understand why President Biden isn’t more popular. Biden inherited the worst unemployment since the Great Depression, and turned it around into an economic situation so good that most economists thought it was impossible. A 39% approval rating does not seem consistent with that kind of staggering economic improvement.

And yes, there are a lot of other factors involved aside from the President; but for once I think he really does deserve a lot of the credit here. Programs he enacted to respond to COVID brought us back to work quicker than many thought possible. Then, the Inflation Reduction Act made historic progress at fighting climate change—and also, lo and behold, reduced inflation.

He’s not a particularly charismatic figure. He is getting pretty old for this job (or any job, really). But Biden’s economic policy has been amazing, and he deserves more credit for it.

Bundling the stakes to recalibrate ourselves

Mar 31 JDN 2460402

In a previous post I reflected on how our minds evolved for an environment of immediate return: An immediate threat with high chance of success and life-or-death stakes. But the world we live in is one of delayed return: delayed consequences with low chance of success and minimal stakes.

We evolved for a world where you need to either jump that ravine right now or you’ll die; but we live in a world where you’ll submit a hundred job applications before finally getting a good offer.

Thus, our anxiety system is miscalibrated for our modern world, and this miscalibration causes us to have deep, chronic anxiety which is pathological, instead of brief, intense anxiety that would protect us from harm.

I had an idea for how we might try to jury-rig this system and recalibrate ourselves:

Bundle the stakes.

Consider job applications.

The obvious way to think about it is to consider each application, and decide whether it’s worth the effort.

Any particular job application in today’s market probably costs you 30 minutes, but you won’t hear back for 2 weeks, and you have maybe a 2% chance of success. But if you fail, all you lost was that 30 minutes. This is the exact opposite of what our brains evolved to handle.

So now suppose you think of it in terms of sending 100 job applications.

That will cost you 30 minutes times 100 = 3,000 minutes, or 50 hours. You still won’t hear back for weeks, but you’ve spent weeks, so that won’t feel as strange. And your chances of success after 100 applications are something like 1-(0.98)^100 = 87%.

Even losing 50 hours over a few weeks is not the disaster that falling down a ravine is. But it still feels a lot more reasonable to be anxious about that than to be anxious about losing 30 minutes.

More importantly, we have radically changed the chances of success.

Each individual application will almost certainly fail, but all 100 together will probably succeed.

If we were optimally rational, these two methods would lead to the same outcomes, by a rather deep mathematical law, the linearity of expectation:
E[nX] = n E[X]

Thus, the expected utility of doing something n times is precisely n times the expected utility of doing it once (all other things equal); and so, it doesn’t matter which way you look at it.
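
If you want to check the arithmetic yourself, here’s a quick back-of-envelope sketch in Python, using the numbers assumed above (a 2% success chance per application and 30 minutes each):

```python
# Back-of-envelope sketch of stakes bundling.
# Assumed numbers (from the text): 2% success chance per application, 30 minutes each.

def bundle(n, p_success=0.02, minutes_each=30):
    p_at_least_one = 1 - (1 - p_success) ** n   # chance that at least one application succeeds
    expected_offers = n * p_success              # linearity of expectation: E[nX] = n E[X]
    hours = n * minutes_each / 60
    return p_at_least_one, expected_offers, hours

for n in (1, 100, 150, 200):
    p, offers, hours = bundle(n)
    print(f"{n:>3} applications: {hours:5.1f} hours, "
          f"P(at least one offer) = {p:.0%}, expected offers = {offers:.1f}")

# With these assumptions: 1 application is a 2% shot; 100 applications take about
# 50 hours and give an ~87% chance of at least one offer; 200 push it past 98%.
```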

But of course we aren’t perfectly rational. We don’t actually respond to the expected utility. It’s still not entirely clear how we do assess probability in our minds (prospect theory seems to be onto something, but it’s computationally harder than rational probability, which means it makes absolutely no sense to evolve it).

If instead we are trying to match up our decisions with a much simpler heuristic that evolved for things like jumping over ravines, our representation of probability may be very simple indeed, something like “definitely”, “probably”, “maybe”, “probably not”, “definitely not”. (This is essentially my categorical prospect theory, which, like the stochastic overload model, is a half-baked theory that I haven’t published and at this point probably never will.)
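
To make the idea concrete, here’s a toy version of that coarse representation; the cutoffs are invented purely for illustration and aren’t taken from any actual model:

```python
# A toy version of a coarse, categorical representation of probability.
# The cutoffs here are invented for illustration only.

def coarse_probability(p):
    if p >= 0.99:
        return "definitely"
    if p >= 0.7:
        return "probably"
    if p >= 0.3:
        return "maybe"
    if p > 0.01:
        return "probably not"
    return "definitely not"

print(coarse_probability(0.02))             # one application:        "probably not"
print(coarse_probability(1 - 0.98 ** 100))  # a hundred bundled:      "probably"
```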

2% chance of success is solidly “probably not” (or maybe something even stronger, like “almost definitely not”). Then, outcomes that are in that category are presumably weighted pretty low, because they generally don’t happen. Unless they are really good or really bad, it’s probably safest to ignore them—and in this case, they are neither.

But 87% chance of success is a clear “probably”; and outcomes in that category deserve our attention, even if their stakes aren’t especially high. And in fact, by bundling them, we have even made the stakes a bit higher—likely making the outcome a bit more salient.

The goal is to change “this will never work” to “this is going to work”.

For an individual application, there’s really no way to do that (without self-delusion); maybe you can make the odds a little better than 2%, but you surely can’t make them so high they deserve to go all the way up to “probably”. (At best you might manage a “maybe”, if you’ve got the right contacts or something.)

But for the whole set of 100 applications, this is in fact the correct assessment. It will probably work. And if 100 doesn’t, 150 might; if 150 doesn’t, 200 might. At no point do you need to delude yourself into over-estimating the odds, because the actual odds are in your favor.

This isn’t perfect, though.

There’s a glaring problem with this technique that I still can’t resolve: It feels overwhelming.

Doing one job application is really not that big a deal. It accomplishes very little, but also costs very little.

Doing 100 job applications is an enormous undertaking that will take up most of your time for multiple weeks.

So if you are feeling demotivated, asking you to bundle the stakes is asking you to take on a huge, overwhelming task that surely feels utterly beyond you.

Also, when it comes to this particular example, I even managed to do 100 job applications and still get a pretty bad outcome: My only offer was Edinburgh, and I ended up being miserable there. I have reason to believe that these were exceptional circumstances (due to COVID), but it has still been hard to shake the feeling of helplessness I learned from that ordeal.

Maybe there’s some additional reframing that can help here. If so, I haven’t found it yet.

But maybe stakes bundling can help you, or someone out there, even if it can’t help me.

Depression and the War on Drugs

Jan 7 JDN 2460318

There exists, right now, an extremely powerful antidepressant which is extremely cheap and has minimal side effects.

It’s so safe that it has no known lethal dose, and—unlike SSRIs—it is not known to trigger suicide. It is shockingly effective: it works in a matter of hours—not weeks like a typical SSRI—and even a single moderate dose can have benefits lasting months. It isn’t patented, because it comes from a natural source. That natural source is so easy to grow, you can do it by yourself at home for less than $100.

Why in the world aren’t we all using it?

I’ll tell you why: This wonder drug is called psilocybin. It is a Schedule I controlled substance, which means that simply possessing it is a federal crime in the United States. Carrying it across the border is a felony.

It is also illegal in most other countries, including the UK, Australia, Belgium, Finland, Denmark, Sweden, Norway (#ScandinaviaIsNotAlwaysBetter), France, Germany, Hungary, Ireland, Japan, the list goes on….

Actually, it’s faster to list the places it’s not illegal: Austria, the Bahamas, Brazil, the British Virgin Islands, Jamaica, Nepal, the Netherlands, and Samoa. That’s it for true legalization, though it’s also decriminalized or unenforced in some other countries.

The best known antidepressant lies unused, because we made it illegal.

Similar stories hold for other amazingly beneficial drugs:

LSD also has powerful antidepressant effects with minimal side effects, and is likewise so ludicrously safe that we are not aware of a single fatal overdose ever happening in any human being. And it’s also Schedule I banned.

Ayahuasca is the same story: A great antidepressant, very safe, minimal side effects—and highly illegal.

There is also no evidence that psilocybin, LSD, or ayahuasca are addictive; and far from promoting the sort of violent, anti-social behavior that alcohol does, they actually seem to make people more compassionate.

This is pure speculation, but I think we should try psilocybin as a possible treatment for psychopathy. And if that works, maybe having a psilocybin trip should be a prerequisite for eligibility for any major elected office. (I often find it a bit silly how the biggest fans of psychedelics talk about the drugs radically changing the world, bringing peace and prosperity through a shift in consciousness; but if psilocybin could make all the world’s leaders more compassionate, that might actually have that kind of impact.)

Ketamine and MDMA at least do have some overdose risk and major side effects, and are genuinely addictive—but it’s not really clear that they’re any worse than SSRIs, and they certainly aren’t any worse than alcohol.

Alcohol may actually be the most widely used antidepressant, and yet it is utterly ineffective; in fact, alcoholics consistently show depression increasing over time. Alcohol has a fatal dose low enough that accidental overdoses are common; it is also implicated in violent behavior, including half of all rapes—and in the majority of those rape cases, all consumption of alcohol was voluntary.

Yet alcohol can be bought over-the-counter at any grocery store.

The good news is that this is starting to change.

Recent changes in the law have allowed the use of psychedelic drugs in medical research—which is part of how we now know just how shockingly effective they are at treating depression.

Some jurisdictions in the US—notably, the whole state of Colorado—have decriminalized psilocybin, and Oregon has made it outright legal. Yet even this situation is precarious; just as has occurred with cannabis legalization, it’s still difficult to run a business selling psilocybin even in Oregon, because banks don’t want to deal with a business that sells something which is federally illegal.

Fortunately, this, too, is starting to change: A bill passed the US Senate a few months ago that would legalize banking for cannabis businesses in states where it is legal, and President Biden recently pardoned everyone in federal prison for simple cannabis possession. Now, why can’t we just make cannabis legal!?

The War on Drugs hasn’t just been a disaster for all the thousands of people needlessly imprisoned.

(Of course they had it the worst, and we should set them all free immediately—preferably with some form of restitution.)

The War on Drugs has also been a disaster for all the people who couldn’t get the treatment they needed, because we made that medicine illegal.

And for what? What are we even trying to accomplish here?

Prohibition was a failure—and a disaster of its own—but I can at least understand why it was done. When a drug kills nearly a hundred thousand people a year and is implicated in half of all rapes, that seems like a pretty damn good reason to want that drug gone. The question there becomes how we can best reduce alcohol use without the awful consequences that Prohibition caused—and so far, really high taxes seem to be the best method, and they absolutely do reduce crime.

But where was the disaster caused by cannabis, psilocybin, or ayahuasca? These drugs are made by plants and fungi; like alcohol, they have been used by humans for thousands of years. Where are the overdoses? Where is the crime? Psychedelics have none of these problems.

Honestly, it’s kind of amazing that these drugs aren’t more associated with organized crime than they are.

When alcohol was banned, it seemed to immediately trigger a huge expansion of the Mafia, as only they were willing and able to provide for the enormous demand of this highly addictive neurotoxin. But psilocybin has been illegal for decades, and yet there’s no sign of organized crime having anything to do with it. In fact, psilocybin use is associated with lower rates of arrest—which actually makes sense to me, because like I said, it makes you more compassionate.

That’s how idiotic and ridiculous our drug laws are:

We made a drug that causes crime legal, and we made a drug that prevents crime illegal.

Note that this also destroys any conspiracy theory suggesting that the government wants to keep us all docile and obedient: psilocybin is way better at making people docile than alcohol. No, this isn’t the product of some evil conspiracy.

Hanlon’s Razor: Never attribute to malice what can be adequately explained by stupidity.

This isn’t malice; it’s just massive, global, utterly catastrophic stupidity.

I might attribute this to the Puritanical American attitude toward pleasure (Pleasure is suspect, pleasure is dangerous), but I don’t think of Sweden as particularly Puritanical, and they also ban most psychedelics. I guess the most libertine countries—the Netherlands, Brazil—seem to be the ones that have legalized them; but it doesn’t really seem like one should have to be that libertine to want the world’s cheapest, safest, most effective antidepressants to be widely available. I have very mixed feelings about Amsterdam’s (in)famous red light district, but absolutely no hesitation in supporting their legalization of psilocybin truffles.

Honestly, I think patriarchy might be part of this. Alcohol is seen as a very masculine drug—maybe because it can make you angry and violent. Psychedelics seem more feminine; they make you sensitive, compassionate and loving.

Even the way that psychedelics make you feel more connected with your body is sort of feminine; we seem to have a common notion that men are their minds, but women are their bodies.

Here, try it. Someone has said, “I feel really insecure about my body.” Quick: What is that person’s gender? Now suppose someone has said, “I’m very proud of my mind.” What is that person’s gender?

(No, it’s not just because the former is insecure and the latter is proud—though we do also gender those emotions, and there’s statistical evidence that men are generally more confident, though that’s never been my experience of manhood. Try it with the emotions swapped and it still works, just not quite as well.)

I’m not suggesting that this makes sense. Both men and women are precisely as physical and mental as each other—we are all both, and that is a deep truth about our nature. But I know that my mind makes an automatic association between mind/body and male/female, and I suspect yours does as well, because we came from similar cultural norms. (This goes at least back to Classical Rome, where the animus, the rational soul, was masculine, while the anima, the emotional one, was feminine.)

That is, it may be that we banned psychedelics because they were girly. The men in charge were worried about us becoming soft and weak. The drug that’s tied to thousands of rapes and car collisions is manly. The drug that brings you peace, joy, and compassion is not.

Think about the things that the mainstream objected to about Hippies: Men with long hair and makeup, women wearing pants, bright colors, flowery patterns, kindness and peacemongering—all threats to the patriarchal order.

Whatever it is, we need to stop. Millions of people are suffering, and we could so easily help them; all we need to do is stop locking people up for taking medicine.

What is anxiety for?

Sep 17 JDN 2460205

As someone who experiences a great deal of anxiety, I have often struggled to understand what it could possibly be useful for. We have this whole complex system of evolved emotions, and yet more often than not it seems to harm us rather than help us. What’s going on here? Why do we even have anxiety? What even is anxiety, really? And what is it for?

There’s actually an extensive body of research on this, though very few firm conclusions. (One of the best accounts I’ve read, sadly, is paywalled.)

For one thing, there seem to be a lot of positive feedback loops involved in anxiety: Panic attacks make you more anxious, triggering more panic attacks; being anxious disrupts your sleep, which makes you more anxious. Positive feedback loops can very easily spiral out of control, resulting in responses that are wildly disproportionate to the stimulus that triggered them.

A certain amount of stress response is useful, even when the stakes are not life-or-death. But beyond a certain point, more stress becomes harmful rather than helpful. This is the Yerkes-Dodson effect, for which I developed my stochastic overload model (which I still don’t know if I’ll ever publish, ironically enough, because of my own excessive anxiety). Realizing that anxiety can have benefits can also take some of the bite out of having chronic anxiety, and, ironically, reduce that anxiety a little. The trick is finding ways to break those positive feedback loops.

I think one of the most useful insights to come out of this research is the smoke-detector principle, which is a fundamentally economic concept. It sounds quite simple: When dealing with an uncertain danger, sound the alarm if the expected benefit of doing so exceeds the expected cost.

This has profound implications when risk is highly asymmetric—as it usually is. Running away from a shadow or a noise that probably isn’t a lion carries some cost; you wouldn’t want to do it all the time. But it is surely nowhere near as bad as failing to run away when there is an actual lion. Indeed, it might be fair to say that failing to run away from an actual lion counts as one of the worst possible things that could ever happen to you, and could easily be 100 times as bad as running away when there is nothing to fear.

With this in mind, if you have a system for detecting whether or not there is a lion, how sensitive should you make it? Extremely sensitive. You should in fact try to calibrate it so that 99% of the time you experience the fear and want to run away, there is not a lion. Because the 1% of the time when there is one, it’ll all be worth it.
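
Here’s a minimal sketch of that calculation with made-up costs; the only thing doing any work is the asymmetry between them:

```python
# The smoke-detector principle with made-up costs; only the asymmetry matters.

COST_OF_RUNNING = 1        # fleeing from a harmless shadow: a little wasted effort
COST_OF_MISSED_LION = 100  # failing to flee from a real lion: catastrophic

def should_run(p_lion):
    # Sound the alarm whenever the expected cost of staying exceeds the cost of running.
    return p_lion * COST_OF_MISSED_LION > COST_OF_RUNNING

for p in (0.005, 0.02, 0.5):
    print(f"P(lion) = {p:.1%}: run = {should_run(p)}")

# With a 100:1 asymmetry, it pays to run whenever the chance of a lion is above 1%,
# which means that when the alarm goes off, it will usually be a false alarm.
```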

Yet this is far from a complete explanation of anxiety as we experience it. For one thing, there has never been, in my entire life, even a 1% chance that I’m going to be attacked by a lion. Even standing in front of a lion enclosure at the zoo, my chances of being attacked are considerably less than that—for a zoo that allowed 1% of its customers to be attacked would not stay in business very long.

But for another thing, it isn’t really lions I’m afraid of. The things that make me anxious are generally not things that would be expected to do me bodily harm. Sure, I generally try to avoid walking down dark alleys at night, and I look both ways before crossing the street, and those are activities directly designed to protect me from bodily harm. But I actually don’t feel especially anxious about those things! Maybe I would if I actually had to walk through dark alleys a lot, but I don’t, and on the rare occasions when I do, I think I’d feel afraid at the time but fine afterward, rather than experiencing persistent, pervasive, overwhelming anxiety. (Whereas, if I’m anxious about reading emails, and I do manage to read emails, I’m usually still anxious afterward.) When it comes to crossing the street, I feel very little fear at all, even though perhaps I should—indeed, it has been remarked that when it comes to the perils of motor vehicles, human beings suffer from a very dangerous lack of fear. We should be much more afraid than we are—and our failure to be afraid kills thousands of people.

No, the things that make me anxious are invariably social: Meetings, interviews, emails, applications, rejection letters. Also parties, networking events, and back when I needed them, dates. They involve interacting with other people—and in particular being evaluated by other people. I never felt particularly anxious about exams, except maybe a little before my PhD qualifying exam and my thesis defenses; but I can understand those who do, because it’s the same thing: People are evaluating you.

This suggests that anxiety, at least of the kind that most of us experience, isn’t really about danger; it’s about status. We aren’t worried that we will be murdered or tortured or even run over by a car. We’re worried that we will lose our friends, or get fired; we are worried that we won’t get a job, won’t get published, or won’t graduate.

And yet it is striking to me that it often feels just as bad as if we were afraid that we were going to die. In fact, in the most severe instances where anxiety feeds into depression, it can literally make people want to die. How can that be evolutionarily adaptive?

Here it may be helpful to remember that in our ancestral environment, status and survival were oft one and the same. Humans are the most social organisms on Earth; I even sometimes describe us as hypersocial, a whole new category of social that no other organism seems to have achieved. We cooperate with others of our species on a mind-bogglingly grand scale, and are utterly dependent upon vast interconnected social systems far too large and complex for us to truly understand, let alone control.

In this historical epoch, these social systems are especially vast and incomprehensible; but at least for most of us in First World countries, they are also forgiving in a way that is fundamentally alien to our ancestors’ experience. It was not so long ago that a failed hunt or a bad harvest would let your family starve unless you could successfully beseech your community for aid—which meant that your very survival could depend upon being in the good graces of that community. But now we have food stamps, so even if everyone in your town hates you, you still get to eat. Of course some societies are more forgiving (Sweden) than others (the United States); and virtually all societies could be even more forgiving than they are. But even the relatively cutthroat competition of the US today has far less genuine risk of truly catastrophic failure than what most human beings lived through for most of our existence as a species.

I have found this realization helpful—hardly a cure, but helpful, at least: What are you really afraid of? When you feel anxious, your body often tells you that the stakes are overwhelming, life-or-death; but if you stop and think about it, in the world we live in today, that’s almost never true. Failing at one important task at work probably won’t get you fired—and even getting fired won’t really make you starve.

In fact, we might be less anxious if it were! For our bodies’ fear system seems to be optimized for the following scenario: An immediate threat with high chance of success and life-or-death stakes. Spear that wild animal, or jump over that chasm. It will either work or it won’t, you’ll know immediately; it probably will work; and if it doesn’t, well, that may be it for you. So you’d better not fail. (I think it’s interesting how much of our fiction and media involves these kinds of events: The hero would surely and promptly die if he fails, but he won’t fail, for he’s the hero! We often seem more comfortable in that sort of world than we do in the one we actually live in.)

Whereas the life we live in now is one of delayed consequences with low chance of success and minimal stakes. Send out a dozen job applications. Hear back in a week from three that want to interview you. Do those interviews and maybe one will make you an offer—but honestly, probably not. Next week do another dozen. Keep going like this, week after week, until finally one says yes. Each failure actually costs you very little—but you will fail, over and over and over and over.

In other words, we have transitioned from an environment of immediate return to one of delayed return.

The result is that a system which was optimized to tell us never fail or you will die is being put through situations where failure is constantly repeated. I think deep down there is a part of us that wonders, “How are you still alive after failing this many times?” If you had fallen in as many ravines as I have received rejection letters, you would assuredly be dead many times over.

Yet perhaps our brains are not quite as miscalibrated as they seem. Again I come back to the fact that anxiety always seems to be about people and evaluation; it’s different from immediate life-or-death fear. I actually experience very little life-or-death fear, which makes sense; I live in a very safe environment. But I experience anxiety almost constantly—which also makes a certain amount of sense, seeing as I live in an environment where I am being almost constantly evaluated by other people.

One theory posits that anxiety and depression are a dual mechanism for dealing with social hierarchy: You are anxious when your position in the hierarchy is threatened, and depressed when you have lost it. Primates like us do seem to care an awful lot about hierarchies—and I’ve written before about how this explains some otherwise baffling things about our economy.

But I for one have never felt especially invested in hierarchy. At least, I have very little desire to be on top of the hierarchy. I don’t want to be on the bottom (for I know how such people are treated); and I strongly dislike most of the people who are actually on top (for they’re most responsible for treating the ones on the bottom that way). I also have ‘a problem with authority’; I don’t like other people having power over me. But if I were to somehow find myself ruling the world, one of the first things I’d do is try to figure out a way to transition to a more democratic system. So it’s less like I want power, and more like I want power to not exist. Which means that my anxiety can’t really be about fearing to lose my status in the hierarchy—in some sense, I want that, because I want the whole hierarchy to collapse.

If anxiety involved the fear of losing high status, we’d expect it to be common among those with high status. Quite the opposite is the case. Anxiety is more common among people who are more vulnerable: Women, racial minorities, poor people, people with chronic illness. LGBT people have especially high rates of anxiety. This suggests that it isn’t high status we’re afraid of losing—though it could still be that we’re a few rungs above the bottom and afraid of falling all the way down.

It also suggests that anxiety isn’t entirely pathological. Our brains are genuinely responding to circumstances. Maybe they are over-responding, or responding in a way that is not ultimately useful. But the anxiety is at least in part a product of real vulnerabilities. Some of what we’re worried about may actually be real. If you cannot carry yourself with the confidence of a mediocre White man, it may be simply because his status is fundamentally secure in a way yours is not, and he has been afforded a great many advantages you never will be. He never had a Supreme Court ruling decide his rights.

I cannot offer you a cure for anxiety. I cannot even really offer you a complete explanation of where it comes from. But perhaps I can offer you this: It is not your fault. Your brain evolved for a very different world than this one, and it is doing its best to protect you from the very different risks this new world engenders. Hopefully one day we’ll figure out a way to get it calibrated better.

Against average utilitarianism

Jul 30 JDN 2460156

Content warning: Suicide and suicidal ideation

There are two broad strands of utilitarianism, known as average utilitarianism and total utilitarianism. As utilitarianism, both versions concern themselves with maximizing happiness and minimizing suffering. And for many types of ethical question, they yield the same results.

Under average utilitarianism, the goal is to maximize the average level of happiness minus suffering: It doesn’t matter how many people there are in the world, only how happy they are.

Under total utilitarianism, the goal is to maximize the total level of happiness minus suffering: Adding another person is a good thing, as long as their life is worth living.

Mathematically, it’s the difference between taking the sum of net happiness (total utilitarianism), and taking that sum and dividing it by the population (average utilitarianism).
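
To make the difference concrete, here’s a tiny sketch with invented happiness scores; notice what happens when you add one person whose life is clearly worth living but below the existing average:

```python
# Invented happiness scores: each number is one person's net happiness
# (happiness minus suffering), so a positive number is a life worth living.

def total_utility(world):
    return sum(world)

def average_utility(world):
    return sum(world) / len(world)

world = [9, 8, 8, 7]        # four quite happy people
world_plus = world + [4]    # add someone below the average, but clearly worth living

print(total_utility(world), "->", total_utility(world_plus))      # 32 -> 36: better, says total
print(average_utility(world), "->", average_utility(world_plus))  # 8.0 -> 7.2: worse, says average
```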

It would make for too long a post to discuss the validity of utilitarianism in general. Overall I will say briefly that I think utilitarianism is basically correct, but there are some particular issues with it that need to be resolved, and usually end up being resolved by heading slightly in the direction of a more deontological ethics—in short, rule utilitarianism.

But for today, I want to focus on the difference between average and total utilitarianism, because average utilitarianism is a very common ethical view despite having appalling, horrifying implications.

Above all: under average utilitarianism, if you are considering suicide, you should probably do it.

Why? Because anyone who is considering suicide is probably of below-average happiness. And average utilitarianism necessarily implies that anyone who expects to be of below-average happiness should be immediately killed as painlessly as possible.

Note that this does not require that your life be one of endless suffering, so that it isn’t even worth going on living. Even a total utilitarian would be willing to commit suicide, if their life is expected to be so full of suffering that it isn’t worth going on.

Indeed, I suspect that most actual suicidal ideation by depressed people takes this form: My life will always be endless suffering. I will never be happy again. My life is worthless.

The problem with such suicidal ideation is not the ethical logic, which is valid: If indeed your existence from this point forward would be nothing but endless suffering, suicide actually makes sense. (Imagine someone who is being held in a dungeon being continually mercilessly tortured with no hope of escape; it doesn’t seem unreasonable for them to take a cyanide pill.) The problem is the prediction, which says that your life from this point forward will be nothing but endless suffering. Most people with depression do, eventually, feel better. They may never be quite as happy overall as people who aren’t depressed, but they do, in fact, have happy times. And most people who considered suicide but didn’t go through with it end up glad that they went on living.

No, an average utilitarian says you should commit suicide as long as your happiness is below average.

We could be living in a glorious utopia, where almost everyone is happy almost all the time, and people are only occasionally annoyed by minor inconveniences—and average utilitarianism would say that if you expect to suffer a more than average rate of such inconveniences, the world would be better off if you ceased to exist.

Moreover, average utilitarianism says that you should commit suicide if your life is expected to get worse—even if it’s still going to be good, adding more years to your life will just bring your average happiness down. If you had a very happy childhood and adulthood is going just sort of okay, you may as well end it now.

Average utilitarianism also implies that we should bomb Third World countries into oblivion, because their people are less happy than ours and thus their deaths will raise the population average.

Are there ways an average utilitarian can respond to these problems? Perhaps. But every response I’ve seen is far too weak to resolve the real problem.

One approach would be to say that the killing itself is bad, or will cause sufficient grief as to offset the loss of the unhappy person. (An average utilitarian is inherently committed to the claim that losing an unhappy person is itself an inherent good. There is something to be offset.)

This might work for the utopia case: The grief from losing someone you love is much worse than even a very large number of minor inconveniences.

It may even work for the case of declining happiness over your lifespan: Presumably some other people would be sad to lose you, even if they agreed that your overall happiness is expected to gradually decline. Then again, if their happiness is also expected to decline… should they, too, shuffle off this mortal coil?

But does it work for the question of bombing? Would most Americans really be so aggrieved at the injustice of bombing Burundi or Somalia to oblivion? Most of them don’t seem particularly aggrieved at the actual bombings of literally dozens of countries—including, by the way, Somalia. Granted, these bombings were ostensibly justified by various humanitarian or geopolitical objectives, but some of those justifications (e.g. Kosovo) seem a lot stronger than others (e.g. Grenada). And quite frankly, I care more about this sort of thing than most people, and I still can’t muster anything like the same kind of grief for random strangers in a foreign country that I feel when a friend or relative dies. Indeed, I can’t muster the same grief for one million random strangers in a foreign country that I feel for one lost loved one. Human grief just doesn’t seem to work that way. Sometimes I wish it did—but then, I’m not quite sure what our lives would be like in such a radically different world.

Moreover, the whole point is that an average utilitarian should consider it an intrinsically good thing to eliminate the existence of unhappy people, as long as it can be done swiftly and painlessly. So why, then, should people be aggrieved at the deaths of millions of innocent strangers they know are mostly unhappy? Under average utilitarianism, the greatest harm of war is the survivors you leave, because they will feel grief—so your job is to make sure you annihilate them as thoroughly as possible, presumably with nuclear weapons. Killing a soldier is bad as long as his family is left alive to mourn him—but if you kill an entire country, that’s good, because their country was unhappy.

Enough about killing and dying. Let’s talk about something happier: Babies.

At least, total utilitarians are happy about babies. When a new person is brought into the world, a total utilitarian considers this a good thing, as long as the baby is expected to have a life worth living and their existence doesn’t harm the rest of the world too much.

I think that fits with most people’s notions of what is good. Generally the response when someone has a baby is “Congratulations!” rather than “I’m sorry”. We see adding another person to the world as generally a good thing.

But under average utilitarianism, babies must reach a much higher standard in order to be a good thing. Your baby only deserves to exist if they will be happier than average.

Granted, this is the average for the whole world, so perhaps First World people can justify the existence of their children by pointing out that unless things go very badly, they should end up happier than the world average. (Then again, if you have a family history of depression….)

But for Third World families, quite the opposite: The baby may well bring joy to all around them, but unless that joy is enough to bring someone above the global average, it would still be better if the baby did not exist. Adding one more person of moderately-low happiness will just bring the world average down.

So in fact, on a global scale, an average utilitarian should always expect that babies are nearly as likely to be bad as they are good, unless we have some reason to think that the next generation would be substantially happier than this one.

And while I’m not aware of anyone who sincerely believes that we should nuke Third World countries for their own good, I have heard people speak this way about population growth in Third World countries: such discussions of “overpopulation” are usually ostensibly about ecological sustainability, even though the ecological impact of First World countries is dramatically higher—and such talk often shades very quickly into eugenics.

Of course, we wouldn’t want to say that having babies is always good, lest we all be compelled to crank out as many babies as possible and genuinely overpopulate the world. But total utilitarianism can solve this problem: It’s worth adding more people to the world unless the harm of adding those additional people is sufficient to offset the benefit of adding another person whose life is worth living.

Moreover, total utilitarianism can say that it would be good to delay adding another person to the world, until the situation is better. Potentially this delay could be quite long: Perhaps it is best for us not to have too many children until we can colonize the stars. For now, let’s just keep our population sustainable while we develop the technology for interstellar travel. If having more children now would increase the risk that we won’t ever manage to colonize distant stars, total utilitarianism would absolutely say we shouldn’t do it.

There’s also a subtler problem here, which is that it may seem good for any particular individual to have more children, but the net result is that the higher total population is harmful. Then what I think is happening is that we are unaware of, or uncertain about, or simply inattentive to, the small harm to many other people caused by adding one new person to the world. Alternatively, we may not be entirely altruistic, and a benefit that accrues to our own family may be taken as greater than a harm that accrues to many other people far away. If we really knew the actual marginal costs and benefits, and we really agreed on that utility function, we would in fact make the right decision. It’s our ignorance or disagreement that makes us fail, not total utilitarianism in principle. In practice, this means coming up with general rules that seem to result in a fair and reasonable outcome, like “families who want to have kids should aim for two or three”—and again we’re at something like rule utilitarianism.

Another case where average utilitarianism seems tempting is in resolving the mere addition paradox.

Consider three possible worlds, A, B, and C:

In world A, there is a population of 1 billion, and everyone is living an utterly happy, utopian life.

In world B, there is a population of 1 billion living in a utopia, and a population of 2 billion living mediocre lives.

In world C, there is a population of 3 billion living good, but not utopian, lives.

The mere addition paradox is that, to many people, world B seems worse than world A, even though all we’ve done is add 2 billion people whose lives are worth living.

Moreover, many people seem to think that the ordering goes like this:


  1. World B is better than world A, because all we’ve done is add more people whose lives are worth living.
  2. World C is better than world B, because it’s fairer, and overall happiness is higher.
  3. World A is better than world C, because everyone is happier, and all we’ve done is reduce the population.

This is intransitive: We have A > C > B > A. Our preferences over worlds are incoherent.

Average utilitarianism resolves this by saying that A > C is true, and C > B is true—but it says that B > A is false. Since average happiness is higher in world A, A > B.

But of course this results in the conclusion that if we are faced with world B, we should do whatever we can to annihilate the 2 billion extra unhappy people, so that we can get to world A. And the whole point of this post is that this is an utterly appalling conclusion we should immediately reject.

What does total utilitarianism say? It says that indeed C > B and B > A, but it denies that A > C. Rather, since there are more people in world C, it’s okay that people aren’t quite as happy.
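
If you want to see the two verdicts side by side, here’s a quick sketch with invented per-person happiness levels (utopian = 10, good = 7, mediocre = 3; any numbers with that ordering behave the same way):

```python
# The three worlds with invented happiness levels: utopian = 10, good = 7, mediocre = 3.

BILLION = 1_000_000_000
worlds = {
    "A": [(1 * BILLION, 10)],                    # 1 billion utopian
    "B": [(1 * BILLION, 10), (2 * BILLION, 3)],  # 1 billion utopian + 2 billion mediocre
    "C": [(3 * BILLION, 7)],                     # 3 billion good
}

for name, groups in worlds.items():
    population = sum(n for n, _ in groups)
    total = sum(n * h for n, h in groups)
    print(f"World {name}: total = {total // BILLION} billion, average = {total / population:.2f}")

# World A: total = 10 billion, average = 10.00
# World B: total = 16 billion, average = 5.33
# World C: total = 21 billion, average = 7.00
# Average utilitarianism ranks A > C > B; total utilitarianism ranks C > B > A.
```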

Derek Parfit argues that this leads to what he calls the “repugnant conclusion”: If we keep increasing the population by a large amount while decreasing happiness by a small amount, the best possible world ends up being one where population is utterly massive but our lives are only barely worth living.

I do believe that total utilitarianism results in this outcome. I can live with that.

Under average utilitarianism, the best possible world is precisely one person who is immortal and absolutely ecstatic 100% of the time. Adding even one person who is not quite that happy will make things worse.

Under total utilitarianism, adding more people who are still very happy would be good, even if it makes that one ecstatic person a bit less ecstatic. And adding more people would continue to be good, as long as it didn’t bring the average down too quickly.

If you find this conclusion repugnant, as Parfit does, I submit that it is because it is difficult to imagine just how large a population we are talking about. Maybe putting some numbers on it will help.

Let’s say the lifetime happiness of an average person in the world today is 35 quality-adjusted life years (QALY)—our life expectancy of 70, times an average happiness level of 0.5.

So right now we have a world of 8 billion people at 35 QALY each, for a total of 280 billion QALY.

(Note: I’m not addressing inequality here. If you believe that a world where one person has 100 QALY and another has 50 QALY is worse than one where both have 75 QALY, you should adjust your scores accordingly—which mainly serves to make the current world look worse, due to our utterly staggering inequality. In fact I think I do not believe this—in my view, the problem is not that happiness is unequal, but that staggering inequality of wealth creates much greater suffering among the poor in exchange for very little happiness among the rich.)

Average utilitarianism says that we should eliminate the less happy people, so we can raise the average QALY higher, maybe to something like 60. I’ve already said why I find this appalling.

So now consider what total utilitarianism asks of us. If we could raise that figure above 280 billion QALY, we should. Say we could increase our population to 10 billion, at the cost of reducing average happiness to 30 QALY; should we? Yes, we should, because that’s 300 billion QALY.

But notice that in this scenario we’re still 85% as happy as we were. That doesn’t sound so bad. Parfit is worried about a scenario where our lives are barely worth living. So let’s consider what that would require.

“Barely worth living” sounds like maybe 1 QALY. This wouldn’t mean we all live exactly one year; that’s not sustainable, because babies can’t have babies. So it would be more like a life expectancy of 33, with a happiness of 0.03—pretty bad, but still worth living.

In that case, we would need to raise our population to roughly 300 billion in order to come out ahead of our current 280 billion QALY. We must colonize dozens of other planets and fill them as full as we’ve filled Earth.
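
If you want the back-of-envelope arithmetic spelled out, here it is in the same style as before:

```python
# Lifetime QALY totals for the scenarios above.

BILLION = 1_000_000_000

current = 8 * BILLION * 35    # today: 8 billion people at 35 QALY each
bigger  = 10 * BILLION * 30   # 10 billion people at 30 QALY each

print(current // BILLION)     # 280 (billion QALY)
print(bigger // BILLION)      # 300 (billion QALY): a higher total, so total utilitarianism approves

# At 1 QALY per person ("barely worth living"), matching today's total takes
# current / 1 = 280 billion people -- roughly 35 Earths' worth.
people_needed = current // 1
print(people_needed // BILLION)   # 280 (billion people)
```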

In fact, I think this 1 QALY life was something like what human beings had at the dawn of agriculture (which by some estimates was actually worse than ancient hunter-gatherer life; we were sort of forced into early agriculture, rather than choosing it because it was better): Nasty, brutish, and short, but still, worth living.

So, Parfit’s repugnant conclusion is that filling dozens of planets with people who live like the ancient Babylonians would be as good as life on Earth is now? I don’t really see how this is obviously horrible. Certainly not to the same degree that saying we should immediately nuke Somalia is obviously horrible.

Moreover, total utilitarianism absolutely still says that if we can make those 800 billion people happier, we should. A world of 800 billion people each getting 35 QALY is 100 times better than the way things are now—and doesn’t that seem right, at least?


Yet if you indeed believe that copying a good world 100 times gives you a 100 times better world, you are basically committed to total utilitarianism.

There are actually other views that would allow you to escape this conclusion without being an average utilitarian.

One way, naturally, is to not be a utilitarian. You could be a deontologist or something. I don’t have time to go into that in this post, so let’s save it for another time. For now, let me say that, historically, utilitarianism has led the charge in positive moral change, from feminism to gay rights, from labor unions to animal welfare. We tend to drag stodgy deontologists kicking and screaming toward a better world. (I vaguely recall an excellent tweet on this, though not who wrote it: “Yes, historically, almost every positive social change has been spearheaded by utilitarians. But sometimes utilitarianism seems to lead to weird conclusions in bizarre thought experiments, and surely that’s more important!”)

Another way, which has gotten surprisingly little attention, is to use an aggregating function that is neither a sum nor an average. For instance, you could add up all utility and divide by the square root of population, so that larger populations get penalized for being larger, but you aren’t simply trying to maximize average happiness. That does seem to still tell some people to die even though their lives were worth living, but at least it doesn’t require us to exterminate all who are below average. And it may also avoid the conclusion Parfit considers repugnant, by making our galactic civilization span 10,000 worlds. Of course, why square root? Why not a cube root, or a logarithm? Maybe the arbitrariness is why it hasn’t been seriously considered. But honestly, I think dividing by anything is suspicious; how can adding someone else who is happy ever make things worse?
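
To show the shape of that idea, here’s a quick sketch of the square-root version, using the same invented QALY figures as above:

```python
# The square-root compromise: total utility divided by the square root of population.
# Same invented figures as above; this only shows the shape of the idea.

import math

BILLION = 1_000_000_000

def sqrt_penalized(population, qaly_per_person):
    return population * qaly_per_person / math.sqrt(population)

today = sqrt_penalized(8 * BILLION, 35)

# How many people at a "barely worth living" 1 QALY each would match today's score?
# Since N * 1 / sqrt(N) = sqrt(N), we need sqrt(N) >= today, i.e. N >= today ** 2.
needed = today ** 2
print(f"{needed / BILLION:,.0f} billion people")  # ~9,800 billion: over a thousand Earths' worth
```

Under these assumptions, the square-root penalty pushes the break-even population from about 280 billion up to nearly 10 trillion; it softens the conclusion Parfit finds repugnant rather than eliminating it.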

But if I must admit that a sufficiently large galactic civilization would be better than our current lives, even if everyone there is mostly pretty unhappy? That’s a bullet I’m prepared to bite. At least I’m not saying we should annihilate everyone who is unhappy.

What am I without you?

Jul 16 JDN 2460142

When this post goes live, it will be my husband’s birthday. He will probably read it before that, as he follows my Patreon. In honor of his birthday, I thought I would make romance the topic of today’s post.

In particular, there’s a certain common sentiment that is usually viewed as romantic, which I in fact think is quite toxic. This is the notion that “Without you, I am nothing”—that in the absence of the one we love, we would be empty or worthless.

Here is this sentiment being expressed by various musicians:

I’m all out of love,
I’m so lost without you.
I know you were right,
Believing for so long.
I’m all out of love,
What am I without you?

– “All Out of Love”, Air Supply

Well what am I, what am I without you?
What am I without you?
Your love makes me burn.
No, no, no
Well what am I, what am I without you?
I’m nothing without you.
So let love burn.

– “What am I without you?”, Suede

Without you, I’m nothing.
Without you, I’m nothing.
Without you, I’m nothing.
Without you, I’m nothing at all.

– “Without you I’m nothing”, Placebo

I’ll be nothin’, nothin’, nothin’, nothin’ without you.
I’ll be nothin’, nothin’, nothin’, nothin’ without you.
Yeah
I was too busy tryna find you with someone else,
The one I couldn’t stand to be with was myself.
‘Cause I’ll be nothin’, nothin’, nothin’, nothin’ without you.

– “Nothing without you”, The Weeknd

You were my strength when I was weak.
You were my voice when I couldn’t speak.
You were my eyes when I couldn’t see.
You saw the best there was in me!
Lifted me up when I couldn’t reach,
You gave me faith ’cause you believed!
I’m everything I am,
Because you loved me.


– “Because You Loved Me”, Celine Dion

Hopefully that’s enough to convince you that this is not a rare sentiment. Moreover, these songs do seem quite romantic, and there are parts of them that still resonate quite strongly for me (particularly “Because You Loved Me”).

Yet there is still something toxic here: They make us lose sight of our own self-worth independently of our relationships with others. Humans are deeply social creatures, so of course we want to fill our lives with relationships with others, and as well we should. But you are more than your relationships.

Stranded alone on a deserted island, you would still be a person of worth. You would still have inherent dignity. You would still deserve to live.

It’s also unhealthy even from a romantic perspective. Yes, once you’ve found the love of your life and you really do plan to live together forever, tying your identity so tightly to the relationship may not be disastrous—though it could still be unhealthy and promote a cycle of codependency. But what about before you’ve made that commitment? If you are nothing without the one you love, what happens when you break up? Who are you then?

And even if you are with the love of your life, what happens if they die?

Of course our relationships do change who we are. To some degree, our identity is inextricably tied to those we love, and this would probably still be desirable even if it weren’t inevitable. But there must always be part of you that isn’t bound to anyone in particular other than yourself—and if you can’t find that part, it’s a very bad sign.

Now compare a quite different sentiment:

If I didn’t have you to hold me tight,
If I didn’t have you to lie with at night,
If I didn’t have you to share my sighs,
And to kiss me and dry my tears when I cry…

Well, I…
Really think that I would…
Probably…
Have somebody else.

– “If I Didn’t Have You”, Tim Minchin

Tim Minchin is a comedy musician, and the song is very much written in that vein. He doesn’t want you to take it too seriously.

Another song Tim Minchin wrote for his wife, “Beautiful Head”, reflects upon the inevitable chasm that separates any two minds—he knows all about her, but not what goes on inside that beautiful head. He also has another sort-of love song, called “I’ll Take Lonely Tonight”, about rejecting someone because he wants to remain faithful to his wife. It’s bittersweet despite the humor within, and honestly I think it shows a deeper sense of romance than the vast majority of love songs I’ve heard.

Yet I must keep coming back to one thing: This is a much healthier attitude.

The factual claim is almost certainly objectively true: In all probability, should you find yourself separated from your current partner, you would, sooner or later, find someone else.

None of us began our lives in romantic partnerships—so who were we before then? No doubt our relationships change us, and losing them would change us yet again. But we were something before, and should it end, we will continue to be something after.

And the attitude that our lives would be empty and worthless without the one we love is dangerously close to the sort of self-destructive self-talk I know all too well from years of depression. “I’m worthless without you, I’m nothing without you” is really not so far from “I’m worthless, I’m nothing” simpliciter. If you hollow yourself out for love, you have still hollowed yourself out.

Why, then, do we only see this healthier attitude expressed as comedy? Why can’t we take seriously the idea that love doesn’t define your whole identity? Why does the toxic self-deprecation of “I am nothing without you” sound more romantic to our ears than the honest self-respect of “I would probably have somebody else”? Why is so much of what we view as “romantic” so often unrealistic—or even harmful?

Tim Minchin himself seems to wonder, as the song alternates between serious expressions of love and ironic jabs:

And if I may conjecture a further objection,
Love is nothing to do with destined perfection.
The connection is strengthened,
The affection simply grows over time,

Like a flower,
Or a mushroom,
Or a guinea pig,
Or a vine,
Or a sponge,
Or bigotry…
…or a banana.

And love is made more powerful
By the ongoing drama of shared experience,
And the synergy of a kind of symbiotic empathy, or… something.

I believe that a healthier form of love is possible. I believe that we can unite ourselves with others in a way that does not sacrifice our own identity and self-worth. I believe that love makes us more than we were—but not that we would be nothing without it. I am more than I was because you loved me—but not everything I am.

This is already how most of us view friendship: We care for our friends, we value our relationships with them—but we would recognize it as toxic to declare that we’d be nothing without them. Indeed, there is a contradiction in our usual attitude here: If part of who I am is in my friendships, then how can losing my romantic partner render me nothing? Don’t I still at least have my friends?

I can now answer this question: What am I without you? An unhappier me. But still, me.

So, on your birthday, let me say this to you, my dear husband:

But with all my heart and all my mind,
I know one thing is true:
I have just one life and just one love,
And my love, that love is you.
And if it wasn’t for you,
Darling, you…
I really think that I would…
Possibly…
Have somebody else.

The mental health crisis in academia

Apr 30 JDN 2460065

Why are so many academics anxious and depressed?

Depression and anxiety are much more prevalent among both students and faculty than they are in the general population. Unsurprisingly, women seem to have it a bit worse than men, and trans people have it worst of all.

Is this the result of systemic failings of the academic system? Before deciding that, one thing we should consider is that very smart people do seem to have a higher risk of depression.

There is a complex relationship between genes linked to depression and genes linked to intelligence, and some evidence that people of especially high IQ are more prone to depression; nearly 27% of Mensa members report mood disorders, compared to 10% of the general population.

(Incidentally, the stereotype of the weird, sickly nerd has a kernel of truth: the correlations between intelligence and autism, ADHD, allergies, and autoimmune disorders are absolutely real—and not at all well understood. It may be a general pattern of neural hyper-activation, not unlike what I posit in my stochastic overload model. The stereotypical nerd wears glasses, and, yes, indeed, myopia is also correlated with intelligence—and this seems to be mostly driven by genetics.)

Most of these figures are at least a few years old. If anything, things are only worse now, as COVID triggered a surge in depression for just about everyone, academics included. It remains to be seen how much of this large increase will abate as things gradually return to normal, and how much will have lasting effects—this may depend in part on how well we manage to genuinely restore a normal way of life and how well we can deal with long COVID.

If we assume that academics are a similar population to Mensa members (admittedly a strong assumption), then this could potentially explain why 26% of academic faculty are depressed—but not why nearly 40% of junior faculty are. At the very least, we junior faculty are about 50% more likely to be depressed than would be explained by our intelligence alone. And grad students have it even worse: Nearly 40% of graduate students report anxiety or depression, and nearly 50% of PhD students meet the criteria for depression. At the very least this sounds like a dual effect of being both high in intelligence and low in status—it’s those of us who have very little power or job security in academia who are the most depressed.

This suggests that, yes, there really is something wrong with academia. It may not be entirely the fault of the system—perhaps even a well-designed academic system would result in more depression than the general population because we are genetically predisposed. But it really does seem like there is a substantial environmental contribution that academic institutions bear some responsibility for.

I think the most obvious explanation is constant evaluation: From the time we are students at least up until we (maybe, hopefully, someday) get tenure, academics are constantly being evaluated on our performance. We know that this sort of evaluation contributes to anxiety and depression.

Don’t other jobs evaluate performance? Sure. But not constantly the way that academia does. This is especially obvious as a student, where everything you do is graded; but it largely continues once you are faculty as well.

For most jobs, you are concerned about doing well enough to keep your job or maybe get a raise. But academia has this continuous forward pressure: if you are a grad student or junior faculty, you can’t simply stay where you are; you must either move upward to the next stage or drop out. And academia has become so hyper-competitive that if you want to keep moving upward—and someday get that tenure—you must publish in top-ranked journals, which have utterly opaque criteria and ever-declining acceptance rates. And since there are so few jobs available compared to the number of applicants, good enough is never good enough; you must be exceptional, or you will fail. Two-thirds of PhD graduates seek a career in academia—but only 30% are actually in one three years later. (And honestly, three years is pretty short; there are plenty of cracks left to fall through between that and a genuinely stable tenured faculty position.)

Moreover, our skills are so hyper-specialized that it’s very hard to imagine finding work anywhere else. This grants academic institutions tremendous monopsony power over us, letting them get away with lower pay and worse working conditions. Even with an economics PhD—relatively transferable, all things considered—I find myself wondering who would actually want to hire me outside this ivory tower, and my feeble attempts at actually seeking out such employment have thus far met with no success.

I also find academia painfully isolating. I’m not an especially extraverted person; I tend to score somewhere near the middle range of extraversion (sometimes called an “ambivert”). But I still find myself craving more meaningful contact with my colleagues. We all seem to work in complete isolation from one another, even when sharing the same office (which is awkward for other reasons). There are very few consistent gatherings or good common spaces. And whenever faculty do try to arrange some sort of purely social event, it always seems to involve drinking at a pub and nobody is interested in providing any serious emotional or professional support.

Some of this may be particular to this university, or to the UK; or perhaps it has more to do with being at a certain stage of my career. In any case I didn’t feel nearly so isolated in graduate school; I had other students in my cohort and adjacent cohorts who were going through the same things. But I’ve been here two years now and so far have been unable to establish any similarly supportive relationships with colleagues.

There may be some opportunities I’m not taking advantage of: I’ve skipped a lot of research seminars, and I stopped going to those pub gatherings. But it wasn’t that I didn’t try them at all; it was that I tried them a few times and quickly found that they were not filling that need. At seminars, people only talked about the particular research project being presented. At the pub, people talked about almost nothing of serious significance—and certainly nothing requiring emotional vulnerability. The closest I think I got to this kind of support from colleagues was a series of lunch meetings designed to improve instruction in “tutorials” (what here in the UK we call discussion sections); there, at least, we could commiserate about feeling overworked and dealing with administrative bureaucracy.

There seem to be deep, structural problems with how academia is run. This whole process of universities outsourcing their hiring decisions to the capricious whims of high-ranked journals basically decides the entire course of our careers. And once you get to the point I have, now so disheartened with the process of publishing research that I can’t even engage with it, it’s not at all clear how it’s even possible to recover. I see no way forward, no one to turn to. No one seems to care how well I teach, if I’m not publishing research.

And I’m clearly not the only one who feels this way.

The case against phys ed

Dec 4 JDN 2459918

If I want to stop someone from engaging in an activity, what should I do? I could tell them it’s wrong, and if they believe me, that would work. But what if they don’t believe me? Or I could punish them for doing it, and as long as I can continue to do that reliably, that should deter them from doing it. But what happens after I remove the punishment?

If I really want to make someone not do something, the best way to accomplish that is to make them not want to do it. Make them dread doing it. Make them hate the very thought of it. And to accomplish that, a very efficient method would be to first force them to do it, but make that experience as miserable and humiliating as possible. Give them a wide variety of painful or outright traumatic experiences that are directly connected with the undesired activity, to carry with them for the rest of their life.

This is precisely what physical education does, with regard to exercise. Phys ed is basically optimized to make people hate exercise.

Oh, sure, some students enjoy phys ed. These are the students who are already athletic and fit, who already engage in regular exercise and enjoy doing so. They may enjoy phys ed, may even benefit a little from it—but they didn’t really need it in the first place.

The kids who need more physical activity are the kids who are obese, or have asthma, or suffer from various other disabilities that make exercising difficult and painful for them. And what does phys ed do to those kids? It makes them compete in front of their peers at various athletic tasks at which they will inevitably fail and be humiliated.

Even the kids who are otherwise healthy but just don’t get enough exercise will go into phys ed class at a disadvantage, and instead of being carefully trained to improve their skills and physical condition at their own level, they will be publicly shamed by their peers for their inferior performance.

I know this, because I was one of those kids. I have exercise-induced bronchoconstriction, a lung condition similar to asthma (actually there’s some debate as to whether it should be considered a form of asthma), in which intense aerobic exercise causes the airways of my lungs to become constricted and inflamed, making me unable to get enough air to continue.

It’s really quite remarkable I wasn’t diagnosed with this as a child; I actually once collapsed while running in gym class, and all they thought to do at the time was give me water and let me rest for the remainder of the class. Nobody thought to call the nurse. I was never put on a beta agonist or an inhaler. (In fact at one point I was put on a beta blocker for my migraines; I now understand why I felt so fatigued when taking it—it was literally the opposite of the drug my lungs needed.)

Actually it’s been a few years since I had an attack. This is of course partly due to me generally avoiding intense aerobic exercise; but even when I do get intense exercise, I rarely seem to get bronchoconstriction attacks. My working hypothesis is that the norepinephrine reuptake inhibition of my antidepressant acts like a beta agonist; both drugs boost norepinephrine signaling.

But as a child, I got such attacks quite frequently; and even when I didn’t, my overall athletic performance was always worse than most of the other kids. They knew it, I knew it, and while only a few actively tried to bully me for it, none of the others did anything to make me feel better. So gym class was always a humiliating and painful experience that I came to dread.

As a result, as soon as I got out of school and had my own autonomy in how to structure my own life, I basically avoided exercise whenever I could. Even knowing that it was good for me—really, exercise is ridiculously good for you; it honestly doesn’t even make sense to me how good it is for you—I could rarely get myself to actually go out and exercise. I certainly couldn’t do it with anyone else; sometimes, if I was very disciplined, I could manage to maintain an exercise routine by myself, as long as there was no one else there who could watch me, judge me, or compare themselves to me.

In fact, I’d probably have avoided exercise even more, had I not also had some more positive experiences with it outside of school. I trained in martial arts for a few years, getting almost to a black belt in tae kwon do; I quit precisely when it started becoming very competitive and thus began to feel humiliated again when I performed worse than others. Part of me wishes I had stuck with it long enough to actually get the black belt; but the rest of me knows that even if I’d managed it, I would have been miserable the whole time and it probably would have made me dread exercise even more.

The details of my story are of course individual to me; but the general pattern is disturbingly common. A kid does poorly in gym class, or even suffers painful attacks of whatever disabling condition they have, but nobody sees it as a medical problem; they just see the kid as weak and lazy. Or even if the adults are sympathetic, the other kids aren’t; they just see a peer who performed worse than them, and they have learned by various subtle (and not-so-subtle) cultural pressures that anyone who performs worse at a culturally-important task is worthy of being bullied and shunned.

Even outside the directly competitive environment of sports, the very structure of a phys ed class, where a large group of students are all expected to perform the same athletic tasks and can directly compare their performance against each other, invites this kind of competition. Kids can see, right in their faces, who is doing better and who is doing worse. And our culture is astonishingly bad at teaching children (or anyone else, for that matter) how to be sympathetic to others who perform worse. Worse performance is worse character. Being bad at running, jumping and climbing is just being bad.

Part of the problem is that school administrators seem to see physical education as a training and selection regimen for their sports programs. (In fact, some of them seem to see their entire school as existing to serve their sports programs.) Here is a UK government report bemoaning the fact that “only a minority of schools play competitive sport to a high level”, apparently not realizing that this is necessarily true because high-level sports performance is a relative concept. Only one team can win the championship each year. Only 10% of students will ever be in the top 10% of athletes. No matter what. Anything else is literally mathematically impossible. We do not live in Lake Wobegon; not all the children can be above average.

There are good phys ed programs out there. They have highly-trained instructors and they focus on matching tasks to each student’s own skill level, as well as actually educating them—teaching them about anatomy and physiology rather than just making them run laps. In fact, the one phys ed class I genuinely enjoyed was an anatomy and physiology class; we didn’t do any physical exercise in that class at all. But well-taught phys ed classes are clearly the exception, not the norm.

Of course, it could be that some students actually benefit from phys ed, perhaps even enough to offset the harms to people like me. (Though then the question should be asked whether phys ed should be compulsory for all students—if an intervention helps some and hurts others, maybe only give it to the ones it helps?) But I know very few people who actually described their experiences of phys ed class as positive ones. While many students describe their experiences of math class in similarly-negative terms (which is also a problem with how math classes are taught), I definitely do know people who actually enjoyed and did well in math class. Still, my sample is surely biased—it’s comprised of people similar to me, and I hated gym and loved math. So let’s look at the actual data.

Or rather, I’d like to, but there isn’t that much out there. The empirical literature on the effects of physical education is surprisingly limited.

A lot of analyses of physical education simply take as axiomatic that more phys ed means more exercise, and so they use the—overwhelming, unassailable—evidence that exercise is good to support an argument for more phys ed classes. But they never seem to stop and take a look at whether phys ed classes are actually making kids exercise more, particularly once those kids grow up and become adults.

In fact, the surprisingly weak correlations between higher physical activity and better mental health among adolescents (despite really strong correlations in adults) could be because exercise among adolescents is largely coerced via phys ed, and the misery of being coerced into physical humiliation counteracts any benefits that might have been obtained from increased exercise.

The best long-term longitudinal study I can find did show positive effects of phys ed on long-term health, though by a rather odd mechanism: Women exercised more as adults if they had phys ed in primary school, but men didn’t; they just smoked less. And this study was back in 1999, studying a cohort of adults who had phys ed quite a long time ago, when it was better funded.

The best experiment I can find that actually tests whether phys ed programs work used a very carefully designed program with a lot of features that it would be really nice to have, but that the vast majority of actual gym classes lack: carefully structured activities with specific developmental goals, and, perhaps most importantly, children taught to track and evaluate their own individual progress rather than compare themselves to others.

And even then, the effects are not all that large. The physical activity scores of the treatment group rose from 932 minutes per week to 1108 minutes per week for first-graders, and from 1212 to 1454 for second-graders. But the physical activity scores of the control group rose from 906 to 996 for first-graders, and 1105 to 1211 for second-graders. So of the 176 minutes per week gained by first-graders, 90 would have happened anyway. Likewise, of the 242 minutes per week gained by second-graders, 106 were not attributable to the treatment. Only about half of the gains were due to the intervention, and they amount to about a 10% increase in overall physical activity. It also seems a little odd to me that the control groups both started worse off than the experimental groups and both groups gained; it raises some doubts about the randomization.

The researchers also measured psychological effects, and these effects are even smaller and honestly a little weird. On a scale of “somatic anxiety” (basically, how bad do you feel about your body’s physical condition?), this well-designed phys ed program only reduced scores in the treatment group from 4.95 to 4.55 among first-graders, and from 4.50 to 4.10 among second-graders. Seeing as the scores for second-graders also fell in the control group from 4.63 to 4.45, only about half of the observed reduction—0.2 points on a 10-point scale—is really attributable to the treatment. And the really baffling part is that the measure of social anxiety actually fell more, which makes me wonder if they’re really measuring what they think they are.
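To make the arithmetic in the last two paragraphs explicit, here is a quick difference-in-differences sketch in Python (the numbers are the ones quoted above from the study; the helper function is just my own illustration, not anything from the paper):

# Difference-in-differences: the change in the treatment group minus the
# change that happened anyway in the control group.
def attributable_change(treat_pre, treat_post, control_pre, control_post):
    return (treat_post - treat_pre) - (control_post - control_pre)

# Physical activity, in minutes per week
print(attributable_change(932, 1108, 906, 996))     # first-graders: 176 - 90 = 86
print(attributable_change(1212, 1454, 1105, 1211))  # second-graders: 242 - 106 = 136

# Somatic anxiety, on a 10-point scale (lower is better), second-graders
print(attributable_change(4.50, 4.10, 4.63, 4.45))  # about -0.22

Only about half of each raw gain survives the comparison with the control group, which is where the roughly 10% increase in activity and the 0.2-point anxiety reduction come from.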

Clearly, exercise is good. We should be trying to get people to exercise more. Actually, this is more important than almost anything else we could do for public health, with the possible exception of vaccinations. All of these campaigns trying to get kids to lose weight should be removed and replaced with programs to get them to exercise more, because losing weight doesn’t benefit health and exercising more does.

But I am not convinced that physical education as we know it actually makes people exercise more. In the short run, it forces kids to exercise, when there were surely ways to get kids to exercise that didn’t require such coercion; and in the long run, it gives them painful, even traumatic memories of exercise that make them not want to continue it once they get older. It’s too competitive, too one-size-fits-all. It doesn’t account for innate differences in athletic ability or match challenge levels to skill levels. It doesn’t help kids cope with having less ability, or even teach kids to be compassionate toward others with less ability than them.

And it makes kids miserable.

Mindful of mindfulness

Sep 25 JDN 2459848

I have always had trouble with mindfulness meditation.

On the one hand, I find it extremely difficult to do: if there is one thing my mind is good at, it’s wandering. (I think in addition to my autism spectrum disorder, I may also have a smidgen of ADHD. I meet some of the criteria at least.) And it feels a little too close to a lot of practices that are obviously mumbo-jumbo nonsense, like reiki, qigong, and reflexology.

On the other hand, mindfulness meditation has been empirically shown to have large beneficial effects in study after study after study. It helps with not only depression, but also chronic pain. It even seems to improve immune function. The empirical data is really quite clear at this point. The real question is how it does all this.

And I am, above all, an empiricist. I bow before the data. So, when my new therapist directed me to an app that’s supposed to train me to do mindfulness meditation, I resolved that I would in fact give it a try.

Honestly, as of writing this, I’ve been using it less than a week; it’s probably too soon to make a good evaluation. But I did have some prior experience with mindfulness, so this was more like getting back into it rather than starting from scratch. And, well, I think it might actually be working. I feel a bit better than I did when I started.

If it is working, it doesn’t seem to me that the mechanism is greater focus or mental control. I don’t think I’ve really had time to meaningfully improve those skills, and to be honest, I have a long way to go there. The pre-recorded voice samples keep telling me it’s okay if my mind wanders, but I doubt the app developers planned for how much my mind can wander. When they suggest I try to notice each wandering thought, I feel like saying, “Do you want the complete stack trace, or just the final output? Because if I wrote down each terminal branch alone, my list would say something like ‘fusion reactors, ice skating, Napoleon’.”

I think some of the benefit is simply parasympathetic activation, that is, being more relaxed. I am, and have always been, astonishingly bad at relaxing. It’s not that I lack positive emotions: I can enjoy, I can be excited. Nor am I incapable of low-arousal emotions: I can get bored, I can be lethargic. I can also experience emotions that are negative and high-arousal: I can be despondent or outraged. But I have great difficulty reaching emotional states which are simultaneously positive and low-arousal, i.e. states of calm and relaxation. (See here for more on the valence/arousal model of emotional states.) To some extent I think this is due to innate personality: I am high in both Conscientiousness and Neuroticism, which basically amounts to being “high-strung”. But mindfulness has taught me that it’s also trainable, to some extent; I can get better at relaxing, and I already have.

And even more than that, I think the most important effect has been reminding and encouraging me to practice self-compassion. I am an intensely compassionate person, toward other people; but toward myself, I am brutal, demanding, unforgiving, even cruel. My internal monologue says terrible things to me that I would never say to anyone else. (Or at least, not to anyone else who wasn’t a mass murderer or something. I wouldn’t feel particularly bad about saying “You are a failure, you are broken, you are worthless, you are unworthy of love” to, say, Josef Stalin. And yes, these are in fact things my internal monologue has said to me.) Whenever I am unable to master a task I consider important, my automatic reaction is to denigrate myself for failing; I think the greatest benefit I am getting from practicing meditation is being encouraged to fight that impulse. That is, the most important value added by the meditation app has not been in telling me how to focus on my own breathing, but in reminding me to forgive myself when I do it poorly.

If this is right (as I said, it’s probably too soon to say), then we may at last be able to explain why meditation is simultaneously so weird and tied to obvious mumbo-jumbo on the one hand, and also so effective on the other. The actual function of meditation is to be a difficult cognitive task which doesn’t require outside support.

And then the benefit actually comes from doing this task, getting slowly better at it—feeling that sense of progress—and also from learning to forgive yourself when you do it badly. The task probably could have been anything: Find paths through mazes. Fill out Sudoku grids. Solve integrals. But these things are hard to do without outside resources: It’s basically impossible to draw a maze without solving it in the process. Generating a Sudoku grid with a unique solution is at least as hard as solving one (which is NP-complete). By the time you know a given function even has an elementary antiderivative, you’ve basically integrated it. But focusing on your breath? That you can do anywhere, anytime. And the difficulty of controlling all your wandering thoughts may be less a bug than a feature: It’s precisely because the task is so difficult that you will have reason to practice forgiving yourself for failure.

The arbitrariness of the task itself is how you can get a proliferation of different meditation techniques, and a wide variety of mythologies and superstitions surrounding them all, but still have them all be about equally effective in the end. Because it was never really about the task at all. It’s about getting better and failing gracefully.

It probably also helps that meditation is relaxing. Solving integrals might not actually work as well as focusing on your breath, even if you had a textbook handy full of integrals to solve. Breathing deeply is calming; integration by parts isn’t. But lots of things are calming, and some things may be calming to one person but not to another.

It is possible that there is yet some other benefit to be had directly via mindfulness itself. If there is, it will surely have more to do with anterior cingulate activation than realignment of qi. But such a particular benefit isn’t necessary to explain the effectiveness of meditation, and indeed would be hard-pressed to explain why so many different kinds of meditation all seem to work about as well.

Because it was never about what you’re doing—it was always about how.

The injustice of talent

Sep 4 JDN 2459827

Consider the following two principles of distributive justice.

A: People deserve to be rewarded in proportion to what they accomplish.

B: People deserve to be rewarded in proportion to the effort they put in.

Both principles sound pretty reasonable, don’t they? They both seem like sensible notions of fairness, and I think most people would broadly agree with both of them.

This is a problem, because they are mutually contradictory. We cannot possibly follow them both.

For, as much as our society would like to pretend otherwise—and I think this contradiction is precisely why our society would like to pretend otherwise—what you accomplish is not simply a function of the effort you put in.

Don’t get me wrong; it is partly a function of the effort you put in. Hard work does contribute to success. But it is neither sufficient, nor strictly necessary.

Rather, success is a function of three factors: Effort, Environment, and Talent.

Effort is the work you yourself put in, and basically everyone agrees you deserve to be rewarded for that.

Environment includes all the outside factors that affect you—including both natural and social environment. Inheritance, illness, and just plain luck are all in here, and there is general, if not universal, agreement that society should make at least some efforts to minimize inequality created by such causes.

And then, there is talent. Talent includes whatever capacities you innately have. It could be strictly genetic, or it could be acquired in childhood or even in the womb. But by the time you are an adult and responsible for your own life, these factors are largely fixed and immutable. This includes things like intelligence, disability, even height. The trillion-dollar question is: How much should we reward talent?

For talent clearly does matter. I will never swim like Michael Phelps, run like Usain Bolt, or shoot hoops like Steph Curry. It doesn’t matter how much effort I put in, how many hours I spend training—I will never reach their level of capability. Never. It’s impossible. I could certainly improve from my current condition; perhaps it would even be good for me to do so. But there are certain hard fundamental constraints imposed by biology that give them more potential in these skills than I will ever have.

Conversely, there are likely things I can do that they will never be able to do, though this is less obvious. Is it true that Michael Phelps could never be as good a programmer or as skilled a mathematician as I am? He certainly isn’t now. Maybe, with enough time, enough training, he could be; I honestly don’t know. But I can tell you this: I’m sure it would be harder for him than it was for me. He couldn’t breeze through college-level courses in differential equations and quantum mechanics the way I did. There is something I have that he doesn’t, and I’m pretty sure I was born with it. Call it spatial working memory, or mathematical intuition, or just plain IQ. Whatever it is, math comes easy to me in not so different a way from how swimming comes easy to Michael Phelps. I have talent for math; he has talent for swimming.

Moreover, these are not small differences. It’s not like we all come with basically the same capabilities with a little bit of variation that can be easily washed out by effort. We’d like to believe that—we have all sorts of cultural tropes that try to inculcate that belief in us—but it’s obviously not true. The vast majority of quantum physicists are people born with high IQ. The vast majority of pro athletes are people born with physical prowess. The vast majority of movie stars are people born with pretty faces. For many types of jobs, the determining factor seems to be talent.

This isn’t too surprising, actually—even if effort matters a lot, we would still expect talent to show up as the determining factor much of the time.

Let’s go back to that contest function model I used to analyze the job market a while back (the one that suggests we spend way too much time and money in the hiring process). This time let’s focus on the perspective of the employees themselves.

Each employee has a level of talent, h. Employee X has talent h_x and exerts effort x, producing output of a quality that is the product of these: h_x x. Similarly, employee Z has talent h_z and exerts effort z, producing output h_z z.

Then, there’s a certain amount of luck that factors in. The most successful output isn’t necessarily the best, or maybe what should have been the best wasn’t because some random circumstance prevailed. But we’ll say that the probability an individual succeeds is proportional to the quality of their output.

So the probability that employee X succeeds is: h_x x / (h_x x + h_z z)

I’ll skip the algebra this time (if you’re interested you can look back at that previous post), but to make a long story short, in Nash equilibrium the two employees will exert exactly the same amount of effort.

Then, which one succeeds will be entirely determined by talent; because x = z, the probability that X succeeds is h_x / (h_x + h_z).

It’s not that effort doesn’t matter—it absolutely does matter, and in fact in this model, with zero effort you get zero output (which isn’t necessarily the case in real life). It’s that in equilibrium, everyone is exerting the same amount of effort; so what determines who wins is innate talent. And I gotta say, that sounds an awful lot like how professional sports works. It’s less clear whether it applies to quantum physicists.

But maybe we don’t really exert the same amount of effort! This is true. Indeed, it seems that effort actually comes easier to people with higher talent—that the same hour spent running on a track is easier for Usain Bolt than for me, and the same hour studying calculus is easier for me than it would be for Usain Bolt. So in the end our equilibrium effort isn’t the same—but rather than compensating, this effect only serves to exaggerate the difference in innate talent between us.

It’s simple enough to generalize the model to allow for such a thing. For instance, I could say that the cost of producing a unit of effort is inversely proportional to your talent; then instead of h_x / (h_x + h_z), in equilibrium the probability of X succeeding would become h_x^2 / (h_x^2 + h_z^2). The equilibrium effort would also be different, with x > z if h_x > h_z.
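Since I’m skipping the algebra, here is a minimal numerical sketch of the contest model in Python (none of this is from the original post: the prize value V, the talent levels, and all the function names are hypothetical choices for illustration). It finds the Nash equilibrium by iterating best responses and checks the two win probabilities quoted above: h_x / (h_x + h_z) when effort costs are identical, and h_x^2 / (h_x^2 + h_z^2) when the cost of effort is inversely proportional to talent.

from scipy.optimize import minimize_scalar

V = 1.0               # value of succeeding (hypothetical prize)
h_x, h_z = 2.0, 1.0   # innate talents (hypothetical values)

def payoff(effort, other_quality, talent, cost_per_unit):
    # Expected prize minus the cost of effort; quality = talent * effort.
    quality = talent * effort
    return V * quality / (quality + other_quality) - cost_per_unit * effort

def best_response(other_quality, talent, cost_per_unit):
    # Effort that maximizes the payoff, holding the rival's quality fixed.
    res = minimize_scalar(lambda e: -payoff(e, other_quality, talent, cost_per_unit),
                          bounds=(1e-9, 10.0), method="bounded")
    return res.x

def nash_equilibrium(cost_x, cost_z, iters=200):
    # Iterate best responses until the two efforts settle down.
    x, z = 0.1, 0.1
    for _ in range(iters):
        x = best_response(h_z * z, h_x, cost_x)
        z = best_response(h_x * x, h_z, cost_z)
    return x, z

# Case 1: identical effort costs -> equal effort, P(X wins) = h_x / (h_x + h_z)
x, z = nash_equilibrium(cost_x=1.0, cost_z=1.0)
print(x, z, h_x * x / (h_x * x + h_z * z), h_x / (h_x + h_z))

# Case 2: cost inversely proportional to talent -> P(X wins) = h_x**2 / (h_x**2 + h_z**2)
x, z = nash_equilibrium(cost_x=1.0 / h_x, cost_z=1.0 / h_z)
print(x, z, h_x * x / (h_x * x + h_z * z), h_x**2 / (h_x**2 + h_z**2))

With these made-up numbers, the simulated win probabilities should come out very close to 2/3 in the first case and 4/5 in the second, and in the second case X should indeed work harder than Z (x > z), just as claimed above.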

Once we acknowledge that talent is genuinely important, we face an ethical problem. Do we want to reward people for their accomplishment (A), or for their effort (B)? There are good cases to be made for each.

Rewarding for accomplishment, which we might call meritocracy, will tend to, well, maximize accomplishment. We’ll get the best basketball players playing basketball, the best surgeons doing surgery. Moreover, accomplishment is often quite easy to measure, even when effort isn’t.

Rewarding for effort, which we might call egalitarianism, will give people the most control over their lives, and might well feel the most fair. Those who succeed will be precisely those who work hard, even if they do things they are objectively bad at. Even people who are born with very little talent will still be able to make a living by working hard. And it will ensure that people do work hard, which meritocracy can actually fail at: If you are extremely talented, you don’t really need to work hard because you just automatically succeed.

Capitalism, as an economic system, is very good at rewarding accomplishment. I think part of what makes socialism appealing to so many people is that it tries to reward effort instead. (Is it very good at that? Not so clear.)

The more extreme differences are actually in terms of disability. There’s a certain baseline level of activities that most people are capable of, which we think of as “normal”: most people can talk; most people can run, if not necessarily very fast; most people can throw a ball, if not pitch a proper curveball. But some people can’t throw. Some people can’t run. Some people can’t even talk. It’s not that they are bad at it; it’s that they are literally not capable of it. No amount of effort could have made Stephen Hawking into a baseball player—not even a bad one.

It’s these cases when I think egalitarianism becomes most appealing: It just seems deeply unfair that people with severe disabilities should have to suffer in poverty. Even if they really can’t do much productive work on their own, it just seems wrong not to help them, at least enough that they can get by. But capitalism by itself absolutely would not do that—if you aren’t making a profit for the company, they’re not going to keep you employed. So we need some kind of social safety net to help such people. And it turns out that such people are quite numerous, and our current system is really not adequate to help them.

But meritocracy has its pull as well. Especially when the job is really important—like surgery, not so much basketball—we really want the highest quality work. It’s not so important whether the neurosurgeon who removes your tumor worked really hard at it or found it a breeze; what we care about is getting that tumor out.

Where does this leave us?

I think we have no choice but to compromise, on both principles. We will reward both effort and accomplishment, to greater or lesser degree—perhaps varying based on circumstances. We will never be able to entirely reward accomplishment or entirely reward effort.

This is more or less what we already do in practice, so why worry about it? Well, because we don’t like to admit that it’s what we do in practice, and a lot of problems seem to stem from that.

We have people acting like billionaires are such brilliant, hard-working people just because they’re rich—because our society rewards effort, right? So they couldn’t be so successful if they didn’t work so hard, right? Right?

Conversely, we have people who denigrate the poor as lazy and stupid just because they are poor. Because it couldn’t possibly be that their circumstances were worse than yours? Or hey, even if they are genuinely less talented than you—do less talented people deserve to be homeless and starving?

We tell kids from a young age, “You can be whatever you want to be”, and “Work hard and you’ll succeed”; and these things simply aren’t true. There are limitations on what you can achieve through effort—limitations imposed by your environment, and limitations imposed by your innate talents.

I’m not saying we should crush children’s dreams; I’m saying we should help them to build more realistic dreams, dreams that can actually be achieved in the real world. And then, when they grow up, either they will actually succeed, or, when they don’t, at least they won’t hate themselves for failing to live up to what we told them they’d be able to do.

If you were wondering why Millennials are so depressed, that’s clearly a big part of it: We were told we could be and do whatever we wanted if we worked hard enough, and then that didn’t happen; and we had so internalized what we were told that we thought it had to be our fault that we failed. We didn’t try hard enough. We weren’t good enough. I have spent years feeling this way—on some level I do still feel this way—and it was not because adults tried to crush my dreams when I was a child, but on the contrary because they didn’t do anything to temper them. They never told me that life is hard, and people fail, and that I would probably fail at my most ambitious goals—and it wouldn’t be my fault, and it would still turn out okay.

That’s really it, I think: They never told me that it’s okay not to be wildly successful. They never told me that I’d still be good enough even if I never had any great world-class accomplishments. Instead, they kept feeding me the lie that I would have great world-class accomplishments; and then, when I didn’t, I felt like a failure and I hated myself. I think my own experience may be particularly extreme in this regard, but I know a lot of other people in my generation who had similar experiences, especially those who were also considered “gifted” as children. And we are all now suffering from depression, anxiety, and Impostor Syndrome.

All because nobody wanted to admit that talent, effort, and success are not the same thing.