Implications of stochastic overload

Apr 2 JDN 2460037

A couple of weeks ago I presented my stochastic overload model, which posits a neurological mechanism for the Yerkes-Dodson effect: Stress increases sympathetic activation, which improves performance, up to the point where it starts to risk causing neural pathways to overload and shut down.

This week I thought I’d try to get into some of the implications of this model, how it might be applied to make predictions or guide policy.

One thing I often struggle with when it comes to applying theory is what actual benefit we get from a quantitative mathematical model as opposed to a simple qualitative idea. In many ways I think these benefits are overrated; people seem to think that putting something into an equation automatically makes it true and useful. I am sometimes tempted to take advantage of this, to put things into equations even when I know there is no good reason to, simply because so many people find equations so persuasive. (Studies have even shown that, particularly in disciplines that don’t use a lot of math, inserting a totally irrelevant equation into a paper makes it more likely to be accepted.)

The basic implications of the Yerkes-Dodson effect are already widely known, and utterly ignored in our society. We know that excessive stress is harmful to health and performance, and yet our entire economy seems organized around maximizing the amount of stress that workers experience. I actually think neoclassical economics bears a lot of the blame for this, as neoclassical economists are constantly talking about “increasing work incentives”—which is to say, making work life more and more stressful. (And let me remind you that there has never been any shortage of people willing to work in my lifetime, except possibly briefly during the COVID pandemic. The shortage has always been of employers willing to hire them.)

I don’t know if my model can do anything to change that. Maybe by putting it into an equation I can make people pay more attention to it, precisely because equations have this weird persuasive power over most people.

As far as scientific benefits go, I think the chief advantage of a mathematical model lies in its ability to make quantitative predictions. It’s one thing to say that performance increases at low levels of stress and then decreases at high levels; it would be a lot more useful if we could precisely quantify how much stress is optimal for a given person and how they are likely to perform at different levels of stress.

Unfortunately, the stochastic overload model can only make detailed predictions if you have fully specified the probability distribution of innate activation, which requires a lot of free parameters. This is especially problematic if you don’t even know what type of distribution to use, which we really don’t; I picked three classes of distribution because they were plausible and tractable, not because I had any particular evidence for them.
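
Just to make that distribution-dependence concrete, here is a minimal sketch in Python. Everything in it, the linear performance function, the overload threshold, the particular parameter values, is a stand-in I made up for illustration, not the actual model from the earlier post:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_performance(stress, draw, threshold=1.0, n=100_000):
    """Monte Carlo estimate of expected performance at a given stress level.

    Performance rises linearly with total activation (innate + stress), but
    collapses to zero whenever activation exceeds the overload threshold.
    These functional forms and numbers are illustrative stand-ins.
    """
    activation = draw(n) + stress
    performance = np.where(activation < threshold,
                           np.maximum(activation, 0.0), 0.0)
    return performance.mean()

# Three plausible-but-arbitrary choices for the innate-activation distribution
# (not necessarily the three classes used in the original post).
distributions = {
    "normal":    lambda n: rng.normal(0.3, 0.2, n),
    "lognormal": lambda n: rng.lognormal(-1.5, 0.6, n),
    "uniform":   lambda n: rng.uniform(0.0, 0.6, n),
}

stresses = np.linspace(0.0, 1.0, 21)
for name, draw in distributions.items():
    curve = [expected_performance(s, draw) for s in stresses]
    print(f"{name:>9}: optimal stress ~ {stresses[int(np.argmax(curve))]:.2f}")
```

Each of these produces an inverted-U (Yerkes-Dodson) curve, but the location of the peak shifts with the assumed distribution, which is exactly the free-parameter problem.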

Also, we don’t even have standard units of measurement for stress; we have a vague notion of what more or less stressed looks like, but we don’t have the sort of quantitative measure that could be plugged into a mathematical model. Probably the best units to use would be something like blood cortisol levels, but then we’d need to go measure those all the time, which raises its own issues. And maybe people don’t even respond to cortisol in the same ways? But at least we could measure your baseline cortisol for a while to get a prior distribution, and then see how different incentives increase your cortisol levels; the model should then give relatively precise predictions about how this will affect your overall performance. (This is a very neuroeconomic approach.)
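
If we did settle on cortisol, the estimation step might look something like this sketch. The lognormal baseline, the additive effect of incentives, and the overload threshold are all assumptions invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for repeated baseline cortisol readings (arbitrary
# units); real data would come from actually measuring a subject over time.
baseline = rng.lognormal(mean=2.3, sigma=0.3, size=60)

# Fit a lognormal prior over innate activation. The lognormal family is one
# plausible assumption, not an established fact about cortisol.
shape, loc, scale = stats.lognorm.fit(baseline, floc=0)
prior = stats.lognorm(shape, loc=loc, scale=scale)

# Hypothetical overload threshold, in the same arbitrary units, and a
# hypothetical additive effect of incentives on cortisol.
THRESHOLD = 25.0
for added_stress in [0.0, 3.0, 6.0, 9.0]:
    p_overload = prior.sf(THRESHOLD - added_stress)
    print(f"+{added_stress:3.0f} units of stress -> P(overload) = {p_overload:.3f}")
```

Once you commit to units, a distribution family, and a threshold, the model does produce concrete numbers; the trouble is that every one of those commitments is currently a guess.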

So, for now, I’m not really sure how useful the stochastic overload model is. This is honestly something I feel about a lot of the theoretical ideas I have come up with; they often seem too abstract to be usefully applicable to anything.

Maybe that’s how all theory begins, and applications only appear later? But that doesn’t seem to be how people expect me to talk about it whenever I have to present my work or submit it for publication. They seem to want to know what it’s good for, right now, and I never have a good answer to give them. Do other researchers have such answers? Do they simply pretend to?

Along similar lines, I recently had one of my students ask about a theory paper I wrote on international conflict for my dissertation, and after sending him a copy, I re-read the paper. There are so many pages of equations, and while I am confident that the mathematical logic is valid, I honestly don’t know if most of them are really useful for anything. (I don’t think I really believe that GDP is produced by a Cobb-Douglas production function, and we don’t even really know how to measure capital precisely enough to say.) The central insight of the paper, which I think is really important but other people don’t seem to care about, is a qualitative one: International treaties and norms provide an equilibrium selection mechanism in iterated games. The realists are right that this is cheap talk. The liberals are right that it works. Because when there are many equilibria, cheap talk works.
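
For what it’s worth, that insight can be shown in a toy example rather than pages of equations. Here is a sketch; the payoffs are arbitrary numbers, not anything from the paper:

```python
import numpy as np

# A pure coordination game: both players want to match, but there are two
# ways to match. Payoffs are arbitrary illustrative numbers.
# Strategies: 0 = "comply with the treaty", 1 = "defect from it".
payoff_row = np.array([[3, 0],
                       [0, 2]])
payoff_col = payoff_row.T  # symmetric game: column player mirrors row player

def pure_nash_equilibria(u_row, u_col):
    """Return all pure-strategy Nash equilibria of a 2x2 game."""
    equilibria = []
    for r in range(2):
        for c in range(2):
            row_best = u_row[r, c] >= u_row[1 - r, c]  # no profitable row deviation
            col_best = u_col[r, c] >= u_col[r, 1 - c]  # no profitable column deviation
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# Both (comply, comply) and (defect, defect) survive: the payoffs alone
# cannot tell the players which outcome to expect.
print(pure_nash_equilibria(payoff_row, payoff_col))  # [(0, 0), (1, 1)]
```

A pre-play announcement (“we will comply”) changes none of those payoffs, so it is cheap talk in the strict sense; but with two equilibria on the table, players who condition on the announcement coordinate on (comply, comply). The treaty costs nothing and binds nothing, yet it still determines which equilibrium gets played.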

I know that in truth, science proceeds in tiny steps, building a wall brick by brick, never sure exactly how many bricks it will take to finish the edifice. It’s impossible to see whether your work will be an irrelevant footnote or the linchpin for a major discovery. But that isn’t how the institutions of science are set up. That isn’t how the incentives of academia work. You’re not supposed to say that this may or may not be correct and is probably some small incremental progress the ultimate impact of which no one can possibly foresee. You’re supposed to sell your work—justify how it’s definitely true and why it’s important and how it has impact. You’re supposed to convince other people why they should care about it and not all the dozens of other probably equally-valid projects being done by other researchers.

I don’t know how to do that, and it is agonizing to even try. It feels like lying. It feels like betraying my identity. Being good at selling isn’t just orthogonal to doing good science—I think it’s the opposite of it. I think the better you are at selling your work, the worse you are at cultivating the intellectual humility necessary to do good science. If you think you know all the answers, you’re just bad at admitting when you don’t know things. It feels like in order to succeed in academia, I have to act like an unscientific charlatan.

Honestly, why do we even need to convince you that our work is more important than someone else’s? Are there only so many science points to go around? Maybe the whole problem is this scarcity mindset. Yes, grant funding is limited; but why does publishing my work prevent you from publishing someone else’s? Why do you have to reject 95% of the papers that get sent to you? Don’t tell me you’re limited by space; the journals are digital and searchable and nobody reads the whole thing anyway. Editorial time isn’t infinite, but most of the work has already been done by the time you get a paper back from peer review. Of course, I know the real reason: Excluding people is the main source of prestige.

Russia has invaded Ukraine.

Mar 6 JDN 2459645

Russia has invaded Ukraine. No doubt you have heard it by now, as it’s all over the news, in dozens of outlets from CNN to NBC to The Guardian to Al-Jazeera. And as well it should be, as this is the first time in history that a nuclear power has invaded another country in order to annex it. Yes, nuclear powers have fought wars before—the US just got out of one in Afghanistan, as you may recall. They have even started wars and led invasions—the US did that in Iraq. And certainly, countries have been annexing and conquering other countries for millennia. But never before—never before, in human history—has a nuclear-armed state invaded another country simply to claim it as part of itself. (Trump said he thought the US should have done something like that, and the world was rightly horrified.)

Ukraine is not a nuclear power—not anymore. The Soviet Union built up a great deal of its nuclear production in Ukraine, and in 1991, when Ukraine became independent, it still had a sizable nuclear arsenal. But starting in 1994 Ukraine began disarming that arsenal, and now it is gone. Now that Russia has invaded, the government of Ukraine has begun publicly reconsidering its agreement to disarm.

Russia’s invasion of Ukraine has just disproved the most optimistic models of international relations, which basically said that major-power wars of territorial conquest ended with WW2. Some attributed that peace to nuclear weapons, others to the United Nations, still others to a general improvement in trade integration and living standards around the world. But they’ve all turned out to be wrong; maybe such wars are rarer now, but they can clearly still happen, because one just did.

I would say that only two major theories of the Long Peace are left standing in light of this invasion: nuclear deterrence and the democratic peace. Ukraine gave up its nuclear arsenal and later got attacked—that’s consistent with nuclear deterrence. Russia under Putin is nearly as authoritarian as the Soviet Union was, and Ukraine is a “hybrid regime” (let’s call it a solid D), so there’s no reason the democratic peace would have stopped this invasion. But any model which posits that trade or the UN prevents war is pretty much off the table now, as Ukraine had very extensive trade with both Russia and the EU, and the UN has been utterly toothless so far. (Maybe we could say the UN prevents wars except those led by permanent Security Council members.)

Well, then, what if the nuclear deterrence theory is right? What would have happened if Ukraine had kept its nuclear weapons? Would that have made this situation better, or worse? It could have made it better, if it acted as a deterrent against Russian aggression. But it could also have made it much, much worse, if it resulted in a nuclear exchange between Russia and Ukraine.

This is the problem with nukes. They are not a guarantee of safety. They are a guarantee of fat tails. To explain what I mean by that, let’s take a brief detour into statistics.

A fat-tailed distribution is one for which very extreme events have non-negligible probability. For some distributions, like a uniform distribution, events are clearly contained within a certain interval and nothing outside is even possible. For others, like a normal distribution or lognormal distribution, extreme events are theoretically possible, but so vanishingly improbable they aren’t worth worrying about. But for fat-tailed distributions like a Cauchy distribution or a Pareto distribution, extreme events are not so improbable. They may be unlikely, but they are not so unlikely they can simply be ignored. Indeed, they can actually dominate the average—most of what happens, happens in a handful of extreme events.
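
To put numbers on “non-negligible”, here is a quick comparison; the particular choices (a standard normal, a standard Cauchy, a Pareto with shape 2) are arbitrary illustrations:

```python
from scipy import stats

# Tail probabilities P(X > k) under a thin-tailed distribution (normal)
# and two fat-tailed ones. The parameter choices are arbitrary illustrations.
normal = stats.norm()        # thin-tailed
cauchy = stats.cauchy()      # fat-tailed, no finite mean
pareto = stats.pareto(b=2)   # fat-tailed power law, support x >= 1

for k in [2, 5, 10, 50]:
    print(f"P(X > {k:2d}):  normal {normal.sf(k):.2e}   "
          f"cauchy {cauchy.sf(k):.2e}   pareto {pareto.sf(k):.2e}")
```

By k = 10 the normal tail is already around 10^-23, while the fat-tailed distributions still assign probabilities around 10^-2; that difference is the entire point.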

Deaths in war seem to be fat-tailed, even in conventional warfare; they appear to follow a Pareto distribution. There are lots of tiny skirmishes, relatively frequent regional conflicts, occasional major wars, and a handful of super-deadly global wars. This kind of pattern tends to emerge when a phenomenon is self-reinforcing by positive feedback—which is why we also see it in distributions of income and wildfire intensity.
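
You can see the “handful of events dominate” property directly by simulation; in this sketch the shape parameter is an arbitrary stand-in, not an empirical estimate of the war-deaths exponent:

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw hypothetical "conflict death tolls" from a Pareto distribution.
# The shape alpha = 1.2 is an arbitrary stand-in, not an empirical estimate.
alpha = 1.2
tolls = rng.pareto(alpha, size=1_000_000) + 1  # shift so support is x >= 1

tolls.sort()
top = tolls[-1000:]  # the most extreme 0.1% of events
print(f"Top 0.1% of events account for {100 * top.sum() / tolls.sum():.0f}% of deaths")
```

With a tail this heavy, a tiny sliver of the most extreme events carries a large share of the total, which is the statistical version of a few world wars dominating cumulative war deaths.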

Fat-tailed distributions typically (though not always—it’s easy to construct counterexamples, like the Cauchy distribution with low values truncated off) have another property as well, which is that minor events are common. More common, in fact, than they would be under a normal distribution. What seems to happen is that the probability mass moves away from the moderate outcomes and shifts to both the extreme outcomes and the minor ones.
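
That mass shift is easy to exhibit numerically; here is a sketch comparing a standard normal to a Student’s t with 3 degrees of freedom, rescaled so both have unit variance:

```python
import numpy as np
from scipy import stats

# Compare a standard normal to a fat-tailed Student's t with 3 degrees of
# freedom, rescaled to unit variance so both have the same mean and spread.
normal = stats.norm()
fat = stats.t(df=3, scale=1 / np.sqrt(3))  # Var(t_3) = 3, so divide by sqrt(3)

def band_mass(dist, lo, hi):
    """Probability that |X| falls in the band [lo, hi)."""
    return 2 * (dist.cdf(hi) - dist.cdf(lo)) if lo > 0 else 2 * dist.cdf(hi) - 1

for lo, hi, label in [(0, 0.5, "minor"), (0.5, 2, "moderate"), (4, np.inf, "extreme")]:
    print(f"{label:>8} (|X| in [{lo}, {hi})): "
          f"normal {band_mass(normal, lo, hi):.4f}   fat {band_mass(fat, lo, hi):.4f}")
```

The fat-tailed distribution puts noticeably more mass on the small outcomes and on the very extreme ones, and less on the moderate middle, which is exactly the shift described above.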

Nuclear weapons fit this pattern perfectly. They may in fact reduce the probability of moderate, regional conflicts, in favor of increasing the probability of tiny skirmishes or peaceful negotiations. But they also increase the probability of utterly catastrophic outcomes—a full-scale nuclear war could kill billions of people. It probably wouldn’t wipe out all of humanity, and more recent analyses suggest that a catastrophic “nuclear winter” is unlikely. But even 2 billion people dead would be literally the worst thing that has ever happened, and nukes could make it happen in hours when such a death toll by conventional weapons would take years.

If we could somehow guarantee that such an outcome would never occur, then the lower rate of moderate conflicts nuclear weapons provide would justify their existence. But we can’t. It hasn’t happened yet, but it doesn’t have to happen often to be terrible. Really, just once would be bad enough.

Let us hope, then, that the democratic peace turns out to be the theory that’s right. Because a more democratic world would clearly be better—while a more nuclearized world could be better, but could also be much, much worse.