What is anxiety for?

Sep 17 JDN 2460205

As someone who experiences a great deal of anxiety, I have often struggled to understand what it could possibly be useful for. We have this whole complex system of evolved emotions, and yet more often than not it seems to harm us rather than help us. What’s going on here? Why do we even have anxiety? What even is anxiety, really? And what is it for?

There’s actually an extensive body of research on this, though very few firm conclusions. (One of the best accounts I’ve read, sadly, is paywalled.)

For one thing, there seem to be a lot of positive feedback loops involved in anxiety: Panic attacks make you more anxious, triggering more panic attacks; being anxious disrupts your sleep, which makes you more anxious. Positive feedback loops can very easily spiral out of control, resulting in responses that are wildly disproportionate to the stimulus that triggered them.

A certain amount of stress response is useful, even when the stakes are not life-or-death. But beyond a certain point, more stress becomes harmful rather than helpful. This is the Yerkes-Dodson effect, for which I developed my stochastic overload model (which I still don’t know if I’ll ever publish, ironically enough, because of my own excessive anxiety). Realizing that anxiety can have benefits can also take some of the bite out of having chronic anxiety, and, ironically, reduce that anxiety a little. The trick is finding ways to break those positive feedback loops.

I think one of the most useful insights to come out of this research is the smoke-detector principle, which is a fundamentally economic concept. It sounds quite simple: When dealing with an uncertain danger, sound the alarm if the expected benefit of doing so exceeds the expected cost.

This has profound implications when risk is highly asymmetric—as it usually is. Running away from a shadow or a noise that probably isn’t a lion carries some cost; you wouldn’t want to do it all the time. But it is surely nowhere near as bad as failing to run away when there is an actual lion. Indeed, it might be fair to say that failing to run away from an actual lion counts as one of the worst possible things that could ever happen to you, and could easily be 100 times as bad as running away when there is nothing to fear.

With this in mind, if you have a system for detecting whether or not there is a lion, how sensitive should you make it? Extremely sensitive. You should in fact try to calibrate it so that 99% of the time you experience the fear and want to run away, there is not a lion. Because the 1% of the time when there is one, it’ll all be worth it.
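
To make this concrete, here is the smoke-detector calculation as a short back-of-the-envelope Python sketch; the 100-to-1 cost ratio is just the illustrative figure from above, not an empirical estimate.

# Smoke-detector principle: sound the alarm whenever the expected cost of
# staying put exceeds the expected cost of a false alarm.
cost_false_alarm = 1.0    # cost of running away when there is no lion
cost_missed_lion = 100.0  # cost of not running away when there is a lion

# Run away whenever p * cost_missed_lion > (1 - p) * cost_false_alarm,
# which rearranges to p exceeding this threshold:
threshold = cost_false_alarm / (cost_false_alarm + cost_missed_lion)
print(f"Run away whenever P(lion) > {threshold:.3f}")  # about 0.01

# A detector calibrated this way fires on a roughly 1% chance of a lion,
# so about 99% of the alarms it raises turn out to be false alarms.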

Yet this is far from a complete explanation of anxiety as we experience it. For one thing, there has never been, in my entire life, even a 1% chance that I’m going to be attacked by a lion. Even standing in front of a lion enclosure at the zoo, my chances of being attacked are considerably less than that—for a zoo that allowed 1% of its customers to be attacked would not stay in business very long.

But for another thing, it isn’t really lions I’m afraid of. The things that make me anxious are generally not things that would be expected to do me bodily harm. Sure, I generally try to avoid walking down dark alleys at night, and I look both ways before crossing the street, and those are activities directly designed to protect me from bodily harm. But I actually don’t feel especially anxious about those things! Maybe I would if I actually had to walk through dark alleys a lot, but I don’t, and on the rare occasion that I do, I think I’d feel afraid at the time but fine afterward, rather than experiencing persistent, pervasive, overwhelming anxiety. (Whereas, if I’m anxious about reading emails, and I do manage to read emails, I’m usually still anxious afterward.) When it comes to crossing the street, I feel very little fear at all, even though perhaps I should—indeed, it has been remarked that when it comes to the perils of motor vehicles, human beings suffer from a very dangerous lack of fear. We should be much more afraid than we are—and our failure to be afraid kills thousands of people.

No, the things that make me anxious are invariably social: Meetings, interviews, emails, applications, rejection letters. Also parties, networking events, and back when I needed them, dates. They involve interacting with other people—and in particular being evaluated by other people. I never felt particularly anxious about exams, except maybe a little before my PhD qualifying exam and my thesis defenses; but I can understand those who do, because it’s the same thing: People are evaluating you.

This suggests that anxiety, at least of the kind that most of us experience, isn’t really about danger; it’s about status. We aren’t worried that we will be murdered or tortured or even run over by a car. We’re worried that we will lose our friends, or get fired; we are worried that we won’t get a job, won’t get published, or won’t graduate.

And yet it is striking to me that it often feels just as bad as if we were afraid that we were going to die. In fact, in the most severe instances where anxiety feeds into depression, it can literally make people want to die. How can that be evolutionarily adaptive?

Here it may be helpful to remember that in our ancestral environment, status and survival were oft one and the same. Humans are the most social organisms on Earth; I even sometimes describe us as hypersocial, a whole new category of social that no other organism seems to have achieved. We cooperate with others of our species on a mind-bogglingly grand scale, and are utterly dependent upon vast interconnected social systems far too large and complex for us to truly understand, let alone control.

In this historical epoch, these social systems are especially vast and incomprehensible; but at least for most of us in First World countries, they are also forgiving in a way that is fundamentally alien to our ancestors’ experience. It was not so long ago that a failed hunt or a bad harvest would leave your family to starve unless you could successfully beseech your community for aid—which meant that your very survival could depend upon being in the good graces of that community. But now we have food stamps, so even if everyone in your town hates you, you still get to eat. Of course some societies are more forgiving (Sweden) than others (the United States); and virtually all societies could be even more forgiving than they are. But even the relatively cutthroat competition of the US today has far less genuine risk of truly catastrophic failure than what most human beings lived through for most of our existence as a species.

I have found this realization helpful—hardly a cure, but helpful, at least: What are you really afraid of? When you feel anxious, your body often tells you that the stakes are overwhelming, life-or-death; but if you stop and think about it, in the world we live in today, that’s almost never true. Failing at one important task at work probably won’t get you fired—and even getting fired won’t really make you starve.

In fact, we might be less anxious if it were! For our bodies’ fear system seems to be optimized for the following scenario: An immediate threat with high chance of success and life-or-death stakes. Spear that wild animal, or jump over that chasm. It will either work or it won’t, you’ll know immediately; it probably will work; and if it doesn’t, well, that may be it for you. So you’d better not fail. (I think it’s interesting how much of our fiction and media involves these kinds of events: The hero would surely and promptly die if he fails, but he won’t fail, for he’s the hero! We often seem more comfortable in that sort of world than we do in the one we actually live in.)

Whereas the life we live in now is one of delayed consequences with low chance of success and minimal stakes. Send out a dozen job applications. Hear back in a week from three that want to interview you. Do those interviews and maybe one will make you an offer—but honestly, probably not. Next week do another dozen. Keep going like this, week after week, until finally one says yes. Each failure actually costs you very little—but you will fail, over and over and over and over.

In other words, we have transitioned from an environment of immediate return to one of delayed return.

The result is that a system which was optimized to tell us never fail or you will die is being put through situations where failure is constantly repeated. I think deep down there is a part of us that wonders, “How are you still alive after failing this many times?” If you had fallen in as many ravines as I have received rejection letters, you would assuredly be dead many times over.

Yet perhaps our brains are not quite as miscalibrated as they seem. Again I come back to the fact that anxiety always seems to be about people and evaluation; it’s different from immediate life-or-death fear. I actually experience very little life-or-death fear, which makes sense; I live in a very safe environment. But I experience anxiety almost constantly—which also makes a certain amount of sense, seeing as I live in an environment where I am being almost constantly evaluated by other people.

One theory posits that anxiety and depression are a dual mechanism for dealing with social hierarchy: You are anxious when your position in the hierarchy is threatened, and depressed when you have lost it. Primates like us do seem to care an awful lot about hierarchies—and I’ve written before about how this explains some otherwise baffling things about our economy.

But I for one have never felt especially invested in hierarchy. At least, I have very little desire to be on top of the hierarchy. I don’t want to be on the bottom (for I know how such people are treated); and I strongly dislike most of the people who are actually on top (for they’re most responsible for treating the ones on the bottom that way). I also have ‘a problem with authority’; I don’t like other people having power over me. But if I were to somehow find myself ruling the world, one of the first things I’d do is try to figure out a way to transition to a more democratic system. So it’s less like I want power, and more like I want power to not exist. Which means that my anxiety can’t really be about fearing to lose my status in the hierarchy—in some sense, I want that, because I want the whole hierarchy to collapse.

If anxiety involved the fear of losing high status, we’d expect it to be common among those with high status. Quite the opposite is the case. Anxiety is more common among people who are more vulnerable: Women, racial minorities, poor people, people with chronic illness. LGBT people have especially high rates of anxiety. This suggests that it isn’t high status we’re afraid of losing—though it could still be that we’re a few rungs above the bottom and afraid of falling all the way down.

It also suggests that anxiety isn’t entirely pathological. Our brains are genuinely responding to circumstances. Maybe they are over-responding, or responding in a way that is not ultimately useful. But the anxiety is at least in part a product of real vulnerabilities. Some of what we’re worried about may actually be real. If you cannot carry yourself with the confidence of a mediocre White man, it may be simply because his status is fundamentally secure in a way yours is not, and he has been afforded a great many advantages you never will be. He never had a Supreme Court ruling decide his rights.

I cannot offer you a cure for anxiety. I cannot even really offer you a complete explanation of where it comes from. But perhaps I can offer you this: It is not your fault. Your brain evolved for a very different world than this one, and it is doing its best to protect you from the very different risks this new world engenders. Hopefully one day we’ll figure out a way to get it calibrated better.

Implications of stochastic overload

Apr 2 JDN 2460037

A couple weeks ago I presented my stochastic overload model, which posits a neurological mechanism for the Yerkes-Dodson effect: Stress increases sympathetic activation, and this increases performance, up to the point where it starts to risk causing neural pathways to overload and shut down.

This week I thought I’d try to get into some of the implications of this model, how it might be applied to make predictions or guide policy.

One thing I often struggle with when it comes to applying theory is what actual benefits we get from a quantitative mathematical model as opposed to simply a basic qualitative idea. In many ways I think these benefits are overrated; people seem to think that putting something into an equation automatically makes it true and useful. I am sometimes tempted to try to take advantage of this, to put things into equations even though I know there is no good reason to put them into equations, simply because so many people seem to find equations so persuasive for some reason. (Studies have even shown that, particularly in disciplines that don’t use a lot of math, inserting a totally irrelevant equation into a paper makes it more likely to be accepted.)

The basic implications of the Yerkes-Dodson effect are already widely known, and utterly ignored in our society. We know that excessive stress is harmful to health and performance, and yet our entire economy seems to be based around maximizing the amount of stress that workers experience. I actually think neoclassical economics bears a lot of the blame for this, as neoclassical economists are constantly talking about “increasing work incentives”—which is to say, making work life more and more stressful. (And let me remind you that there has never been any shortage of people willing to work in my lifetime, except possibly briefly during the COVID pandemic. The shortage has always been employers willing to hire them.)

I don’t know if my model can do anything to change that. Maybe by putting it into an equation I can make people pay more attention to it, precisely because equations have this weird persuasive power over most people.

As far as scientific benefits, I think that the chief advantage of a mathematical model lies in its ability to make quantitative predictions. It’s one thing to say that performance increases with low levels of stress then decreases with high levels; but it would be a lot more useful if we could actually precisely quantify how much stress is optimal for a given person and how they are likely to perform at different levels of stress.

Unfortunately, the stochastic overload model can only make detailed predictions if you have fully specified the probability distribution of innate activation, which requires a lot of free parameters. This is especially problematic if you don’t even know what type of distribution to use, which we really don’t; I picked three classes of distribution because they were plausible and tractable, not because I had any particular evidence for them.

Also, we don’t even have standard units of measurement for stress; we have a vague notion of what more or less stressed looks like, but we don’t have the sort of quantitative measure that could be plugged into a mathematical model. Probably the best units to use would be something like blood cortisol levels, but then we’d need to go measure those all the time, which raises its own issues. And maybe people don’t even respond to cortisol in the same ways? But at least we could measure your baseline cortisol for a while to get a prior distribution, and then see how different incentives increase your cortisol levels; and then the model should give relatively precise predictions about how this will affect your overall performance. (This is a very neuroeconomic approach.)
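
As a very rough Python sketch of what that approach might look like: the cortisol readings, the overload level used to rescale them onto a 0-1 activation scale, and the choice of an exponential fit are all hypothetical assumptions on my part, not measured values or a settled method.

import numpy as np
from scipy import integrate

# Hypothetical baseline cortisol readings (arbitrary units), standing in
# for a real measurement campaign. These numbers are made up.
cortisol = np.array([8.0, 11.0, 9.5, 14.0, 7.0, 10.5, 12.0, 9.0])

# Assume activation is cortisol rescaled so that overload occurs at 1.0;
# the overload level used here is a placeholder, not an estimate.
overload_level = 30.0
activation = cortisol / overload_level

# Fit an exponential prior to baseline activation (MLE: rate = 1 / mean).
lam = 1.0 / activation.mean()

def predicted_output(s, lam):
    # Expected performance at stress s: integrate (x + s) against the fitted
    # density, but only up to the overload threshold at x + s = 1.
    pdf = lambda x: lam * np.exp(-lam * x)
    y, _ = integrate.quad(lambda x: (x + s) * pdf(x), 0.0, 1.0 - s)
    return y

for s in (0.0, 0.2, 0.4, 0.6):
    print(s, round(predicted_output(s, lam), 3))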

So, for now, I’m not really sure how useful the stochastic overload model is. This is honestly something I feel about a lot of the theoretical ideas I have come up with; they often seem too abstract to be usefully applicable to anything.

Maybe that’s how all theory begins, and applications only appear later? But that doesn’t seem to be how people expect me to talk about it whenever I have to present my work or submit it for publication. They seem to want to know what it’s good for, right now, and I never have a good answer to give them. Do other researchers have such answers? Do they simply pretend to?

Along similar lines, I recently had one of my students ask about a theory paper I wrote on international conflict for my dissertation, and after sending him a copy, I re-read the paper. There are so many pages of equations, and while I am confident that the mathematical logic is valid, I honestly don’t know if most of them are really useful for anything. (I don’t think I really believe that GDP is produced by a Cobb-Douglas production function, and we don’t even really know how to measure capital precisely enough to say.) The central insight of the paper, which I think is really important but other people don’t seem to care about, is a qualitative one: International treaties and norms provide an equilibrium selection mechanism in iterated games. The realists are right that this is cheap talk. The liberals are right that it works. Because when there are many equilibria, cheap talk works.

I know that in truth, science proceeds in tiny steps, building a wall brick by brick, never sure exactly how many bricks it will take to finish the edifice. It’s impossible to see whether your work will be an irrelevant footnote or the linchpin for a major discovery. But that isn’t how the institutions of science are set up. That isn’t how the incentives of academia work. You’re not supposed to say that this may or may not be correct and is probably some small incremental progress the ultimate impact of which no one can possibly foresee. You’re supposed to sell your work—justify how it’s definitely true and why it’s important and how it has impact. You’re supposed to convince other people why they should care about it and not all the dozens of other probably equally-valid projects being done by other researchers.

I don’t know how to do that, and it is agonizing to even try. It feels like lying. It feels like betraying my identity. Being good at selling isn’t just orthogonal to doing good science—I think it’s opposite. I think the better you are at selling your work, the worse you are at cultivating the intellectual humility necessary to do good science. If you think you know all the answers, you’re just bad at admitting when you don’t know things. It feels like in order to succeed in academia, I have to act like an unscientific charlatan.

Honestly, why do we even need to convince you that our work is more important than someone else’s? Are there only so many science points to go around? Maybe the whole problem is this scarcity mindset. Yes, grant funding is limited; but why does publishing my work prevent you from publishing someone else’s? Why do you have to reject 95% of the papers that get sent to you? Don’t tell me you’re limited by space; the journals are digital and searchable and nobody reads the whole thing anyway. Editorial time isn’t infinite, but most of the work has already been done by the time you get a paper back from peer review. Of course, I know the real reason: Excluding people is the main source of prestige.

The role of innate activation in stochastic overload

Mar 26 JDN 2460030

Two posts ago I introduced my stochastic overload model, which offers an explanation for the Yerkes-Dodson effect by positing that additional stress increases sympathetic activation, which is useful up until the point where it starts risking an overload that forces systems to shut down and rest.

The central equation of the model is actually quite simple, expressed either as an expectation or as an integral:

Y = E[x + s | x + s < 1] P[x + s < 1]

Y = \int_{0}^{1-s} (x+s) dF(x)

The amount of output produced is the expected value of innate activation plus stress activation, times the probability that there is no overload. Increased stress raises this expectation value (the incentive effect), but also increases the probability of overload (the overload effect).

The model relies upon assuming that the brain starts with some innate level of activation that is partially random. Exactly what sort of Yerkes-Dodson curve you get from this model depends very much on what distribution this innate activation takes.
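
Before getting to the closed-form cases, here is a minimal Python sketch of how Y could be evaluated numerically for any candidate distribution; the two distributions at the bottom are purely illustrative choices, not claims about real brains.

import numpy as np
from scipy import integrate, stats

def expected_output(s, pdf):
    # Y = integral from 0 to 1-s of (x + s) dF(x): pathways with x + s >= 1
    # overload and contribute nothing.
    y, _ = integrate.quad(lambda x: (x + s) * pdf(x), 0.0, 1.0 - s)
    return y

# Illustrative innate-activation distributions:
uniform_pdf = stats.uniform(loc=0.0, scale=0.5).pdf   # uniform on [0, 1/2]
exponential_pdf = stats.expon(scale=0.5).pdf          # exponential with rate 2

for s in np.linspace(0.0, 0.9, 10):
    print(round(s, 1),
          round(expected_output(s, uniform_pdf), 3),
          round(expected_output(s, exponential_pdf), 3))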

I’ve so far solved it for three types of distribution.

The simplest is a uniform distribution, where within a certain range, any level of activation is equally probable. The probability density function looks like this:

Assume the distribution has support between a and b, where a < b.

When b+s < 1, then overload is impossible, and only the incentive effect occurs; productivity increases linearly with stress.

The expected output is simply the expected value of a uniform distribution from a+s to b+s, which is:

E[x + s] = (a+b)/2 + s

Then, once b+s > 1, overload risk begins to increase.

In this range, the probability of avoiding overload is:

P[x + s < 1] = F(1-s) = (1-s-a)/(b-a)

(Note that at b+s=1, this is exactly 1.)

The expected value of x+s in this range is (conditional on no overload, x is uniform between a and 1-s, so its mean is (a+1-s)/2):

E[x + s | x + s < 1] = (1+s+a)/2

Multiplying these two together:

Y = (1+s+a)(1-s-a)/(2(b-a))

Here is what that looks like for a=0, b=1/2:

It does have the right qualitative features: increasing, then decreasing. But it sure looks weird, doesn’t it? It has this strange kinked shape.
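
Here is the piecewise closed form from this section as a quick Python sketch, evaluated for the a=0, b=1/2 case just described:

def uniform_output(s, a=0.0, b=0.5):
    # Closed form for innate activation uniform on [a, b].
    if b + s < 1.0:
        # No overload possible: only the incentive effect operates.
        return (a + b) / 2.0 + s
    if a + s >= 1.0:
        return 0.0  # every pathway overloads
    # Overload possible: conditional expectation times P(no overload).
    return (1.0 + s + a) * (1.0 - s - a) / (2.0 * (b - a))

for s in (0.0, 0.25, 0.5, 0.6, 0.75, 0.9, 1.0):
    print(s, round(uniform_output(s), 3))
# Output rises linearly up to the kink at s = 1 - b, then falls to zero.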

So let’s consider some other distributions.

The next one I was able to solve it for is an exponential distribution, where the most probable activation is zero, and then higher activation always has lower probability than lower activation in an exponential decay:

For this it was actually easiest to do the integral directly (I did it by integrating by parts, but I’m sure you don’t care about all the mathematical steps):

Y = \int_{0}^{1-s} (x+s) dF(x)

Y = (1/λ + s) – (1/λ + 1)e^(-λ(1-s))

The parameter λ determines how steeply your activation probability decays. Someone with low λ is relatively highly activated all the time, while someone with high λ is usually not highly activated; this seems like it might be related to the personality trait neuroticism.

Here are graphs of what the resulting Yerkes-Dodson curve looks like for several different values of λ:

λ = 0.5:

λ = 1:

λ = 2:

λ = 4:

λ = 8:

The λ = 0.5 person has high activation a lot of the time. They are actually fairly productive even without stress, but stress quickly overwhelms them. The λ = 8 person has low activation most of the time. They are not very productive without stress, but can also bear relatively high amounts of stress without overloading.

(The low-λ people also have overall lower peak productivity in this model, but that might not be true in reality, if λ is inversely correlated with some other attributes that are related to productivity.)
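
If you want to reproduce these curves yourself, here is a short Python sketch of the exponential-case closed form, evaluated on the same grid of λ values; it just locates each peak numerically rather than plotting.

import numpy as np

def exponential_output(s, lam):
    # Closed form for innate activation exponentially distributed with rate lam.
    return (1.0 / lam + s) - (1.0 / lam + 1.0) * np.exp(-lam * (1.0 - s))

stress = np.linspace(0.0, 1.0, 1001)
for lam in (0.5, 1.0, 2.0, 4.0, 8.0):
    output = exponential_output(stress, lam)
    peak = stress[np.argmax(output)]
    print(f"lambda = {lam}: peak output {output.max():.3f} at stress {peak:.2f}")
# Higher-lambda (low innate activation) people peak later: they tolerate more
# stress before the overload effect starts to dominate.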

Neither uniform nor exponential has the nice bell-curve shape for innate activation we might have hoped for. There is another class of distributions, beta distributions, which do have this shape, and they are sort of tractable—you need something called an incomplete beta function, which isn’t an elementary function but it’s useful enough that most statistical packages include it.

Beta distributions have two parameters, α and β. They look like this:

Beta distributions are quite useful in Bayesian statistics: if you’re trying to estimate the probability of a random event that either succeeds or fails with a fixed probability (a Bernoulli process), and so far you have observed m successes and n failures, then (starting from a uniform prior) your posterior belief about that probability is a beta distribution with α = m+1 and β = n+1.
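
(Here is a tiny Python illustration of that updating rule, assuming scipy and the uniform prior just mentioned; the observation counts are made up.)

from scipy import stats

# Bayesian updating for a Bernoulli probability, starting from a uniform prior:
# after m successes and n failures, the posterior is Beta(m + 1, n + 1).
m, n = 3, 1                         # hypothetical observation counts
posterior = stats.beta(m + 1, n + 1)
print(posterior.mean())             # 0.666..., the posterior mean estimate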

For a beta distribution with parameters α and β, the result comes out to (here I(z, α, β) is the regularized incomplete beta function I mentioned earlier):

Y = [α/(α+β)] I(1-s, α+1, β) + s I(1-s, α, β)

For whole number values of α and β, the incomplete beta function can be computed by hand (though it is more work the larger they are); here’s an example with α = β = 2.

The innate activation probability looks like this:

And the result comes out like this:

Y = 2(1-s)^3 – 3/2(1-s)^4 + 3s(1-s)^2 – 2s(1-s)^3

This person has pretty high innate activation most of the time, so stress very quickly overwhelms them. If I had chosen a much higher β, I could change that, making them less likely to be innately so activated.
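
As a sanity check, here is a short Python snippet that evaluates the general formula using scipy’s regularized incomplete beta function and compares it with the α = β = 2 polynomial worked out above:

from scipy.special import betainc   # regularized incomplete beta I(x; a, b)

def beta_output(s, alpha, beta):
    # Y = [alpha/(alpha+beta)] * I(1-s, alpha+1, beta) + s * I(1-s, alpha, beta)
    z = 1.0 - s
    return ((alpha / (alpha + beta)) * betainc(alpha + 1, beta, z)
            + s * betainc(alpha, beta, z))

def hand_computed(s):
    # The alpha = beta = 2 case worked out by hand above.
    t = 1.0 - s
    return 2 * t**3 - 1.5 * t**4 + 3 * s * t**2 - 2 * s * t**3

for s in (0.0, 0.25, 0.5, 0.75):
    print(s, round(beta_output(s, 2, 2), 4), round(hand_computed(s), 4))
# The two columns agree, which is a useful check on the algebra.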

These are the cases I’ve found to be relatively tractable so far. They all have the right qualitative pattern: Increasing stress increases productivity for a while, then begins decreasing it once overload risk becomes too high. They also show a general pattern where people who are innately highly activated (neurotic?) are much more likely to overload and thus much more sensitive to stress.

The stochastic overload model

Mar 12 JDN 2460016

The next few posts are going to be a bit different, a bit more advanced and technical than usual. This is because, for the first time in several months at least, I am actually working on what could be reasonably considered something like theoretical research.

I am writing it up in the form of blog posts, because actually writing a paper is still too stressful for me right now. This also forces me to articulate my ideas in a clearer and more readable way, rather than dive directly into a morass of equations. It also means that even if I never actually get around to finishing a paper, the idea is out there, and maybe someone else could make use of it (and hopefully give me some of the credit).

I’ve written previously about the Yerkes-Dodson effect: On cognitively-demanding tasks, increased stress increases performance, but only to a point, after which it begins decreasing it again. The effect is well-documented, but the mechanism is poorly understood.

I am currently on the wrong side of the Yerkes-Dodson curve, which is why I’m too stressed to write this as a formal paper right now. But that also gave me some ideas about how it may work.

I have come up with a simple but powerful mathematical model that may provide a mechanism for the Yerkes-Dodson effect.

This model is clearly well within the realm of a behavioral economic model, but it is also closely tied to neuroscience and cognitive science.

I call it the stochastic overload model.

First, a metaphor: Consider an engine, which can run faster or slower. If you increase its RPMs, it will output more power, and provide more torque—but only up to a certain point. Eventually it hits a threshold where it will break down, or even break apart. In real engines, we often include safety systems that force the engine to shut down as it approaches such a threshold.

I believe that human brains function on a similar principle. Stress increases arousal, which activates a variety of processes via the sympathetic nervous system. This activation improves performance on both physical and cognitive tasks. But it has a downside: especially on cognitively demanding tasks that require sustained effort, I hypothesize that too much sympathetic activation can result in a kind of system overload, where your brain can no longer handle the stress and processes are forced to shut down.

This shutdown could be brief—a few seconds, or even a fraction of a second—or it could be prolonged—hours or days. That might depend on just how severe the stress is, or how much of your brain it requires, or how prolonged it is. For purposes of the model, this isn’t vital. It’s probably easiest to imagine it being a relatively brief, localized shutdown of a particular neural pathway. Then, your performance in a task is summed up over many such pathways over a longer period of time, and by the law of large numbers your overall performance is essentially the average performance of all your brain systems.

That’s the “overload” part of the model. Now for the “stochastic” part.

Let’s say that, in the absence of stress, your brain has a certain innate level of sympathetic activation, which varies over time in an essentially chaotic, unpredictable—stochastic—sort of way. It is never really completely deactivated, and may even have some chance of randomly overloading itself even without outside input. (Actually, a potential role in the model for the personality trait neuroticism is an innate tendency toward higher levels of sympathetic activation in the absence of outside stress.)

Let’s say that this innate activation is x, which follows some kind of known random distribution F(x).

For simplicity, let’s also say that added stress s adds linearly to your level of sympathetic activation, so your overall level of activation is x + s.

For simplicity, let’s say that activation ranges between 0 and 1, where 0 is no activation at all and 1 is the maximum possible activation and triggers overload.

I’m assuming that if a pathway shuts down from overload, it doesn’t contribute at all to performance on the task. (You can assume it’s only reduced performance, but this adds complexity without any qualitative change.)

Since sympathetic activation improves performance, but can result in overload, your overall expected performance in a given task can be computed as the product of two terms:

[expected value of x + s, provided overload does not occur] * [probability overload does not occur]

E[x + s | x + s < 1] P[x + s < 1]

The first term can be thought of as the incentive effect: Higher stress promotes more activation and thus better performance.

The second term can be thought of as the overload effect: Higher stress also increases the risk that activation will exceed the threshold and force shutdown.

This equation actually turns out to have a remarkably elegant form as an integral (and here’s where I get especially technical and mathematical):

\int_{0}^{1-s} (x+s) dF(x)

The integral subsumes both the incentive effect and the overload effect into one term; you can also think of the +s in the integrand as the incentive effect and the 1-s in the limit of integration as the overload effect.

For the uninitiated, this is probably just Greek. So let me show you some pictures to help with your intuition. These are all freehand sketches, so let me apologize in advance for my limited drawing skills. Think of this as like Arthur Laffer’s famous cocktail napkin.

Suppose that, in the absence of outside stress, your innate activation follows a distribution like this (this could be a normal or logit PDF; as I’ll talk about next week, logit is far more tractable):

As I start adding stress, this shifts the distribution upward, toward increased activation:

Initially, this will improve average performance.

But at some point, increased stress actually becomes harmful, as it increases the probability of overload.

And eventually, the probability of overload becomes so high that performance becomes worse than it was with no stress at all:

The result is that overall performance, as a function of stress, looks like an inverted U-shaped curve—the Yerkes-Dodson curve:

The precise shape of this curve depends on the distribution that we use for the innate activation, which I will save for next week’s post.
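
And if you’d rather see it in code than in sketches, here is a minimal Monte Carlo version in Python; the particular innate-activation distribution (a beta distribution) and the number of simulated pathways are arbitrary illustrative choices, not part of the model itself.

import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)
innate = stats.beta(2, 4)   # illustrative innate-activation distribution

def simulated_performance(s, n_pathways=200_000):
    # Each pathway has random innate activation x; stress shifts it to x + s.
    # Pathways that reach the threshold of 1 overload and contribute nothing.
    activation = innate.rvs(n_pathways, random_state=rng) + s
    return np.where(activation < 1.0, activation, 0.0).mean()

def integral_performance(s):
    # The same quantity via the integral form: Y = int_0^{1-s} (x + s) dF(x).
    y, _ = integrate.quad(lambda x: (x + s) * innate.pdf(x), 0.0, 1.0 - s)
    return y

for s in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(s, round(simulated_performance(s), 3), round(integral_performance(s), 3))
# Both columns rise and then fall with s: the Yerkes-Dodson inverted U.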