The economic impact of chronic illness

Mar 27 JDN 2459666

This topic is quite personal for me, as someone who has suffered from chronic migraines since adolescence. Some days, weeks, and months are better than others. This past month has been the worst I have felt since 2019, when we moved into an apartment that turned out to be full of mold. This time, there is no clear trigger—which also means no easy escape.

The economic impact of chronic illness is enormous. 90% of US healthcare spending is on people with chronic illnesses, including mental illnesses—and the US has the most expensive healthcare system in the world by almost any measure. Over 55% of adult Medicaid beneficiaries have two or more chronic illnesses.

The total annual cost of all chronic illnesses is hard to estimate, but it’s definitely somewhere in the trillions of dollars per year. The World Economic Forum estimated that number at $47 trillion over the next 20 years, which I actually consider conservative. I think this is counting how much we actually spend and some notion of lost productivity, as well as the (fraught) concept of the value of a statistical life—but I don’t think it’s putting a sensible value on the actual suffering. This will effectively undervalue poor people who are suffering severely but can’t get treated—because they spend little and can’t put a large dollar value on their lives. In the US, where the data is the best, the total cost of chronic illness comes to nearly $4 trillion per year—20% of GDP. If other countries are as bad or worse (and I don’t see why they would be better), then we’re looking at something like $17 trillion in real cost every single year; so over the next 20 years that’s not $47 trillion—it’s over $340 trillion.
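
Here’s the back-of-envelope version of that arithmetic. The ~$85 trillion world GDP figure is my own rough assumption; the 20% share is simply carried over from the US numbers above:

```python
# Back-of-envelope sketch of the global cost estimate in the text.
# The ~$85 trillion world GDP figure is a rough assumption on my part;
# the 20%-of-GDP share comes from the US figures cited above.

us_cost = 4e12      # ~$4 trillion per year spent on/lost to chronic illness in the US
us_gdp = 20e12      # ~$20 trillion US GDP (approximate)
share_of_gdp = us_cost / us_gdp                   # ~0.20

world_gdp = 85e12   # assumed rough world GDP
world_cost_per_year = share_of_gdp * world_gdp    # ~$17 trillion per year
world_cost_20_years = 20 * world_cost_per_year    # ~$340 trillion

print(f"US share of GDP: {share_of_gdp:.0%}")
print(f"Implied world cost per year: ${world_cost_per_year / 1e12:.0f} trillion")
print(f"Over 20 years: ${world_cost_20_years / 1e12:.0f} trillion")
```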

Over half of US adults have at least one of the following, and over a quarter have two or more: arthritis, cancer, chronic obstructive pulmonary disease, coronary heart disease, current asthma, diabetes, hepatitis, hypertension, stroke, or kidney disease. (Actually the former very nearly implies the latter, unless chronic conditions somehow prevented one another. Two statistically independent events with 50% probability will jointly occur 25% of the time: Flip two coins.)
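
If you want to check the coin-flip arithmetic in that parenthetical, it takes only a few lines to simulate. (The 50% figure is the stylized number from above, not a real comorbidity model.)

```python
import random

# Two statistically independent "conditions", each with 50% probability,
# as in the two-coin analogy above. Under independence, both should
# co-occur about 25% of the time.
random.seed(0)
trials = 1_000_000
both = sum(
    1 for _ in range(trials)
    if random.random() < 0.5 and random.random() < 0.5
)
print(f"Fraction with both conditions: {both / trials:.3f}")  # ~0.250
```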

Unsurprisingly, age is positively correlated with chronic illness. Income is negatively correlated, both because chronic illnesses reduce job opportunities and because poorer people have more trouble getting good treatment. I am the exception that proves the rule, the upper-middle-class professional with both a PhD and a severe chronic illness.

There seems to be a common perception that chronic illness is largely a “First World problem”, but in fact chronic illnesses are more common—and much less well treated—in countries with low and moderate levels of development than they are in the most highly-developed countries. Over 75% of all deaths by non-communicable disease are in low- and middle-income countries. The proportion of deaths that is caused by non-communicable diseases is higher in high-income countries—but that’s because other diseases have been basically eradicated from high-income countries. People in rich countries actually suffer less from chronic illness than people in poor countries (on average).

It’s always a good idea to be careful of the distinction between incidence and prevalence, but with chronic illness this is particularly important, because (almost by definition) chronic illnesses last longer and so can have very high prevalence even with low incidence. Indeed, the odds of someone getting their first migraine (incidence) are low precisely because the odds of being someone who gets migraines (prevalence) are so high.

Quite high in fact: About 10% of men and 20% of women get migraines at least occasionally—though only about 8% of these (so roughly 1% of men and 2% of women) get chronic migraines. Indeed, because it is both common and can be quite severe, migraine is the second-most disabling condition worldwide as measured by years lived with disability (YLD), after low back pain. Neurologists are particularly likely to get migraines; the paper I linked speculates that they are better at realizing they have migraines, but I think we also need to consider the possibility of self-selection bias where people with migraines may be more likely to become neurologists. (I considered it, and it seems at least as good a reason as becoming a dentist because your name is Denise.)
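
To see how low incidence and high prevalence fit together: in rough steady state, prevalence is approximately incidence times average duration, so a condition that people rarely acquire but rarely get rid of piles up in the population. A minimal sketch, with numbers that are purely hypothetical and chosen only to land near the overall prevalence figures above:

```python
# Steady-state approximation: prevalence ≈ incidence × average duration.
# Both numbers below are hypothetical, purely for illustration.

annual_incidence = 0.004   # 0.4% of people get their first migraine in a given year (assumed)
average_duration = 40      # migraines persist ~40 years once they start (assumed)

prevalence = annual_incidence * average_duration
print(f"Implied prevalence: {prevalence:.0%}")  # ~16% of the population
```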

If you order causes by the number of disability-adjusted life years (DALYs) they cost, chronic conditions rank quite high: while cardiovascular disease and cancer rank by far the highest, diabetes and kidney disease, mental disorders, neurological disorders, and musculoskeletal disorders all rank higher than malaria, HIV, or any other infection except respiratory infections (read: tuberculosis, influenza, and, once these charts are updated for the next few years, COVID). Note also that at the very bottom are “conflict and terrorism”—that’s all organized violence in the world—and natural disasters. Mental disorders alone cost the world 20 times as many DALYs as all conflict and terrorism combined.

Are people basically good?

Mar 20 JDN 2459659

I recently finished reading Humankind by Rutger Bregman. His central thesis is a surprisingly controversial one, yet one I largely agree with: People are basically good. Most people, in most circumstances, try to do the right thing.

Neoclassical economists in particular seem utterly scandalized by any such suggestion. No, they insist, people are selfish! They’ll take any opportunity to exploit each other! On this, Bregman is right and the neoclassical economists are wrong.

One of the best parts of the book is Bregman’s tale of several shipwrecked Tongan boys who were stranded on the remote island of ‘Ata, sometimes called “the real Lord of the Flies”, but with an outcome radically different from that of the novel. There were of course conflicts during their long time stranded, but the boys resolved most of these conflicts peacefully, and by the time help came over a year later they were still healthy and harmonious. Bregman himself was involved in the investigative reporting about these events, and his account of how he came to meet some of the (now elderly) survivors and tell their story is both enlightening and heartwarming.

Bregman spends a lot of time (perhaps too much time) analyzing classic experiments meant to elucidate human nature. He does a good job of analyzing the Milgram experiment—it’s valid, but it says more about our willingness to serve a cause than our blind obedience to authority. He utterly demolishes the Zimbardo experiment; I knew it was bad, but I hadn’t even realized how utterly worthless that so-called “experiment” actually is. Zimbardo basically paid people to act like abusive prison guards—specifically instructing them how to act!—and then claimed that he had discovered something deep in human nature. Bregman calls it a “hoax”, which might be a bit too strong—but it’s about as accurate as calling it an “experiment”. I think it’s more like a form of performance art.

Bregman’s criticism of Steven Pinker I find much less convincing. He cites a few other studies that purported to show the following: (1) the archaeological record is unreliable in assessing death rates in prehistoric societies (fair enough, but what else do we have?), (2) the high death rates in prehistoric cultures could be from predators such as lions rather than other humans (maybe, but that still means civilization is providing vital security!), (3) the Long Peace could be a statistical artifact because data on wars is so sparse (I find this unlikely, but I admit the Russian invasion of Ukraine does support such a notion), or (4) the Long Peace is the result of nuclear weapons, globalized trade, and/or international institutions rather than a change in overall attitudes toward violence (perfectly reasonable, but I’m not even sure Pinker would disagree).

I appreciate that Bregman does not lend credence to the people who want to use absolute death counts instead of relative death rates, who apparently would rather live in a prehistoric village of 100 people that gets wiped out by a plague (or for that matter on a Mars colony of 100 people who all die of suffocation when the life support fails) than remain in a modern city of a million people that has a few dozen murders each year. Zero homicides is better than 40, right? Personally, I care most about the question “How likely am I to die at any given time?”; and for that, relative death rates are the only valid measure. I don’t even see why we should particularly care about homicide versus other causes of death—I don’t see being shot as particularly worse than dying of Alzheimer’s (indeed, quite the contrary, other than the fact that Alzheimer’s is largely limited to old age and being shot isn’t). But all right, if violence is the question, then go ahead and use homicides—but it certainly should be rates and not absolute numbers. A larger human population is not an inherently bad thing.

I even appreciate that Bregman offers a theory (not an especially convincing one, but not an utterly ridiculous one either) of how agriculture and civilization could emerge even if hunter-gatherer life was actually better. It basically involves agriculture being discovered by accident, and then people gradually transitioning to a sedentary mode of life and not realizing their mistake until generations had passed and all the old skills were lost. There are various holes one can poke in this theory (Were the skills really lost? Couldn’t they be recovered from others? Indeed, haven’t people done that, in living memory, by “going native”?), but it’s at least better than simply saying “civilization was a mistake”.

Yet Bregman’s own account, particularly his discussion of how early civilizations all seem to have been slave states, seems to better support what I think is becoming the historical consensus, which is that civilization emerged because a handful of psychopaths gathered armies to conquer and enslave everyone around them. This is bad news for anyone who holds to a naively Whiggish view of history as a continuous march of progress (which I have oft heard accused but rarely heard endorsed), but it’s equally bad news for anyone who believes that all human beings are basically good and we should—or even could—return to a state of blissful anarchism.

Indeed, this is where Bregman’s view and mine part ways. We both agree that most people are mostly good most of the time. He even acknowledges that about 2% of people are psychopaths, which is a very plausible figure. (The figures I find most credible are about 1% of women and about 4% of men, which averages out to 2.5%. The prevalence you get also depends on how severely lacking in empathy someone needs to be in order to qualify. I’ve seen figures as low as 1% and as high as 4%.) What he fails to see is how that 2% of people can have large effects on society, wildly disproportionate to their number.

Consider the few dozen murders that are committed in any given city of a million people each year. Who is committing those murders? By and large, psychopaths. That’s more true of premeditated murder than of crimes of passion, but even the latter are far more likely to be committed by psychopaths than by members of the general population.

Or consider those early civilizations that were nearly all authoritarian slave-states. What kind of person tends to govern an authoritarian slave-state? A psychopath. Sure, probably not every Roman emperor was a psychopath—but I’m quite certain that Commodus and Caligula were, and I suspect that Augustus and several others were as well. And the ones who don’t seem like psychopaths (like Marcus Aurelius) still seem like narcissists. Indeed, I’m not sure it’s possible to be an authoritarian emperor and not be at least a narcissist; should an ordinary person somehow find themselves in the role, I think they’d immediately set out to delegate authority and improve civil liberties.

This suggests that civilization was not so much a mistake as it was a crime—civilization was inflicted upon us by psychopaths and their greed for wealth and power. Like I said, not great for a “march of progress” view of history. Yet a lot has changed in the last few thousand years, and life in the 21st century at least seems overall pretty good—and almost certainly better than life on the African savannah 50,000 years ago.

In essence, what I think happened is that we invented a technology to turn the tables of civilization, using the same tools psychopaths had used to oppress us as a means of containing them. This technology was called democracy. The institutions of democracy allowed us to convert government from a means by which psychopaths oppress and extract wealth from the populace to a means by which the populace could prevent psychopaths from committing wanton acts of violence.

Is it perfect? Certainly not. Indeed, there are many governments today that much better fit the “psychopath oppressing people” model (e.g. Russia, China, North Korea), and even in First World democracies there are substantial abuses of power and violations of human rights. In fact, psychopaths are overrepresented among the police and also among politicians. Perhaps there are superior modes of governance yet to be found that would further reduce the power psychopaths have and thereby make life better for everyone else.

Yet it remains clear that democracy is better than anarchy. This is not so much because anarchy results in everyone behaving badly and causes immediate chaos (as many people seem to erroneously believe), but because it results in enough people behaving badly to be a problem—and because some of those people are psychopaths who will take advantage of the power vacuum to seize control for themselves.

Yes, most people are basically good. But enough people aren’t that it’s a problem.

Bregman seems to think that simply outnumbering the psychopaths is enough to keep them under control, but history clearly shows that it isn’t. We need institutions of governance to protect us. And for the most part, First World democracies do a fairly good job of that.

Indeed, I think Bregman’s perspective may be a bit clouded by being Dutch, as the Netherlands has one of the highest rates of trust in the world. Nearly 90% of people in the Netherlands trust their neighbors. Even the US has high levels of trust by world standards, at about 84%; a more typical country is India or Mexico at 64%, and the least-trusting countries are places like Gabon with 31% or Benin with a dismal 23%. Trust in government varies widely, from an astonishing 94% in Norway (then again, have you seen Norway? Their government is doing a bang-up job!) to 79% in the Netherlands, to closer to 50% in most countries (in this the US is more typical), all the way down to 23% in Nigeria (which seems equally justified). Some mysteries remain, like why more people trust the government in Russia than in Namibia. (Maybe people in Namibia are just more willing to speak their minds? They’re certainly much freer to do so.)

In other words, Dutch people are basically good. Not that the Netherlands has no psychopaths; surely they have a few just like everyone else. But they have strong, effective democratic institutions that provide both liberty and security for the vast majority of the population. And with the psychopaths under control, everyone else can feel free to trust each other and cooperate, even in the absence of obvious government support. It’s precisely because the government of the Netherlands is so unusually effective that someone living there can come to believe that government is unnecessary.

In short, Bregman is right that we should have donation boxes—and a lot of people seem to miss that (especially economists!). But he seems to forget that we need to keep them locked.

Commitment and sophistication

Mar 13 JDN 2459652

One of the central insights of cognitive and behavioral economics is that understanding the limitations of our own rationality can help us devise mechanisms to overcome those limitations—that knowing we are not perfectly rational can make us more rational. The usual term for this is a somewhat vague one: behavioral economists generally call it simply sophistication.

For example, suppose that you are short-sighted and tend to underestimate the importance of the distant future. (This is true of most of us, to a greater or lesser extent.)

It’s rational to consider the distant future less important than the present—things change in the meantime, and if we go far enough you may not even be around to see it. In fact, rationality alone doesn’t even say how much you should discount any given distance in the future. But most of us are inconsistent about our attitudes toward the future: We exhibit dynamic inconsistency.

For instance, suppose I ask you today whether you would like $100 today or $102 tomorrow. It is likely you’ll choose $100 today. But if I ask you whether you would like $100 365 days from now or $102 366 days from now, you’ll almost certainly choose the $102.


This means that if I asked you the second question first, then waited a year and asked you the first question, you’d change your mind—that’s inconsistent. Whichever choice is better shouldn’t systematically change over time. (It might happen to change, if your circumstances changed in some unexpected way. But on average it shouldn’t change.) Indeed, waiting a day for an extra $2 is typically going to be worth it; 2% daily interest is pretty hard to beat.
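
One common way to formalize this inconsistency is quasi-hyperbolic (“beta-delta”) discounting, in which anything that isn’t immediate takes a one-time extra penalty on top of ordinary discounting. A minimal sketch; the beta and delta values are illustrative assumptions, not empirical estimates:

```python
# Quasi-hyperbolic ("beta-delta") discounting: an immediate reward is valued
# at face value, while a reward delayed by t days is worth beta * delta**t
# times its face value. beta and delta here are illustrative assumptions.

def value(amount, delay_days, beta=0.9, delta=0.999):
    """Perceived present value of `amount` received `delay_days` from now."""
    if delay_days == 0:
        return amount
    return beta * (delta ** delay_days) * amount

# Choice made today: $100 now vs. $102 tomorrow.
print(value(100, 0), value(102, 1))      # 100 vs ~91.7 -> take the $100 now

# Same choice viewed a year in advance: $100 in 365 days vs. $102 in 366 days.
print(value(100, 365), value(102, 366))  # ~62.5 vs ~63.7 -> take the $102

# The ranking flips depending on when you are asked: dynamic inconsistency.
```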

Now, suppose you have some option to make a commitment, something that will bind you to your earlier decision. It could be some sort of punishment for deviating from your earlier choice, some sort of reward for keeping to the path, or, in the most extreme example, a mechanism that simply won’t let you change your mind. (The literally classic example of this is Odysseus having his crew tie him to the mast so he can listen to the Sirens.)

If you didn’t know that your behavior was inconsistent, you’d never want to make such a commitment. You don’t expect to change your mind, and if you do change your mind, it would be because your circumstances changed in some unexpected way—in which case changing your mind would be the right thing to do. And if your behavior wasn’t inconsistent, this reasoning would be quite correct: No point in committing when you have less information.

But if you know that your behavior is inconsistent, you can sometimes improve the outcome for yourself by making a commitment. You can force your own behavior into consistency, even though you will later be tempted to deviate from your plan.

Yet there is a piece missing from this account, often not clearly enough stated: Why should we trust the version of you that has a year to plan over the version of you that is making the decision today? What’s the difference between those two versions of you that makes them inconsistent, and why is one more trustworthy than the other?

The biggest difference is emotional. You don’t really feel $100 a year from now, so you can do the math and see that 2% daily interest is pretty darn good. But $100 today makes you feel something—excitement over what you might buy, or relief over a bill you can now pay. (Actually that’s one of the few times when it would be rational to take $100 today: If otherwise you’re going to miss a deadline and pay a late fee.) And that feeling about $102 tomorrow just isn’t as strong.

We tend to think that our emotional selves and our rational selves are in conflict, and so we expect to be more rational when we are less emotional. There is some truth to this—strong emotions can cloud our judgments and make us behave rashly.

Yet this is only one side of the story. We also need emotions to be rational. There is a condition known as flat affect, often a symptom of various neurological disorders, in which emotional reactions are greatly blunted or even non-existent. People with flat affect aren’t more rational—they just do less. In the worst cases, they lose the capacity for motivation entirely and become outright inert, a state known as abulia.

Emotional judgments are often less accurate than thoughtfully reasoned arguments, but they are also much faster—and that’s why we have them. In many contexts, particularly when survival is at stake, doing something pretty well right away is often far better than waiting long enough to be sure you’ll get the right answer. Running away from a loud sound that turns out to be nothing is a lot better than waiting to carefully determine whether that sound was really a tiger—and finding that it was.

With this in mind, the cases where we should expect commitment to be effective are those that are unfamiliar, not only on an individual level, but in an evolutionary sense. I have no doubt that experienced stock traders can develop certain intuitions that make them better at understanding financial markets than randomly chosen people—but they still systematically underperform simple mathematical models, likely because finance is just so weird from an evolutionary perspective. So when deciding whether to accept some amount of money m1 at time t1 or some other amount of money m2 at time t2, your best bet is really to just do the math.
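
For the money case, “doing the math” just means discounting both amounts back to the present at one consistent rate and comparing; with a single exponential discount rate, the ranking never depends on when you ask. A minimal sketch, with the daily rate an assumption chosen purely for illustration:

```python
# With a single consistent (exponential) discount rate, the comparison
# between two dated amounts never flips with the vantage point.
# The daily rate below is an illustrative assumption (~20% per year).

def present_value(amount, delay_days, daily_rate=0.0005):
    return amount / (1 + daily_rate) ** delay_days

def better(m1, t1, m2, t2):
    """Return whichever dated amount has the higher present value."""
    return (m1, t1) if present_value(m1, t1) > present_value(m2, t2) else (m2, t2)

print(better(100, 0, 102, 1))      # $102 tomorrow wins...
print(better(100, 365, 102, 366))  # ...and it still wins a year out.
```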

But this may not be the case for many other types of decisions. Sometimes how you feel in the moment really is the right signal to follow. Committing to work at your job every day may seem responsible, ethical, rational—but if you hate your job when you’re actually doing it, maybe it really isn’t how you should be spending your life. Buying a long-term gym membership to pressure yourself to exercise may seem like a good idea, but if you’re miserable every time you actually go to the gym, maybe you really need to be finding a better way to integrate exercise into your lifestyle.

There are no easy answers here. We can think of ourselves as really being made of two (if not more) individuals: A cold, calculating planner who looks far into the future, and a heated, emotional experiencer who lives in the moment. There’s a tendency to assume that the planner is our “true self”, the one we should always listen to, but this is wrong; we are both of those people, and a life well-lived requires finding the right balance between their conflicting desires.

Russia has invaded Ukraine.

Mar 6 JDN 2459645

Russia has invaded Ukraine. No doubt you have heard it by now, as it’s all over the news in dozens of outlets, from CNN to NBC to The Guardian to Al-Jazeera. And as well it should be, as this is the first time in history that a nuclear power has annexed another country. Yes, nuclear powers have fought wars before—the US just got out of one in Afghanistan as you may recall. They have even started wars and led invasions—the US did that in Iraq. And certainly, countries have been annexing and conquering other countries for millennia. But never before—never before, in human history—has a nuclear-armed state invaded another country simply to claim it as part of itself. (Trump said he thought the US should have done something like that, and the world was rightly horrified.)

Ukraine is not a nuclear power—not anymore. The Soviet Union built up a great deal of its nuclear production in Ukraine, and in 1991 when Ukraine became independent it still had a sizable nuclear arsenal. But starting in 1994 Ukraine began disarming that arsenal, and now it is gone. Now that Russia has invaded them, the government of Ukraine has begun publicly reconsidering their agreements to disarm their nuclear arsenal.

Russia’s invasion of Ukraine has just disproved the most optimistic models of international relations, which basically said that major power wars for territory were over at the end of WW2. Some thought it was nuclear weapons, others the United Nations, still others a general improvement in trade integration and living standards around the world. But they’ve all turned out to be wrong; maybe such wars are rarer, but they can clearly still happen, because one just did.

I would say that only two major theories of the Long Peace are still left standing in light of this invasion: nuclear deterrence and the democratic peace. Ukraine gave up its nuclear arsenal and later got attacked—that’s consistent with nuclear deterrence. Russia under Putin is nearly as authoritarian as the Soviet Union, and Ukraine is a “hybrid regime” (let’s call it a solid D), so there’s no reason the democratic peace would stop this invasion. But any model which posits that trade or the UN prevents war is pretty much off the table now, as Ukraine had very extensive trade with both Russia and the EU, and the UN has been utterly toothless so far. (Maybe we could say the UN prevents wars except those led by permanent Security Council members.)

Well, then, what if the nuclear deterrence theory is right? What would have happened if Ukraine had kept its nuclear weapons? Would that have made this situation better, or worse? It could have made it better, if it acted as a deterrent against Russian aggression. But it could also have made it much, much worse, if it resulted in a nuclear exchange between Russia and Ukraine.

This is the problem with nukes. They are not a guarantee of safety. They are a guarantee of fat tails. To explain what I mean by that, let’s take a brief detour into statistics.

A fat-tailed distribution is one for which very extreme events have non-negligible probability. For some distributions, like a uniform distribution, events are clearly contained within a certain interval and nothing outside is even possible. For others, like a normal distribution or lognormal distribution, extreme events are theoretically possible, but so vanishingly improbable they aren’t worth worrying about. But for fat-tailed distributions like a Cauchy distribution or a Pareto distribution, extreme events are not so improbable. They may be unlikely, but they are not so unlikely they can simply be ignored. Indeed, they can actually dominate the average—most of what happens, happens in a handful of extreme events.
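
To make “non-negligible” concrete, compare the tail probabilities of a thin-tailed (normal) and a fat-tailed (Pareto) distribution. The parameters here are purely illustrative:

```python
import math

# Probability of an outcome more than 10 times the "typical" size under a
# thin-tailed vs. a fat-tailed distribution. Parameters are illustrative.

def normal_tail(x, mu=1.0, sigma=1.0):
    """P(X > x) for a normal distribution."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

def pareto_tail(x, x_min=1.0, alpha=1.5):
    """P(X > x) for a Pareto distribution with scale x_min and shape alpha."""
    return (x_min / x) ** alpha if x >= x_min else 1.0

print(f"Normal: P(X > 10) = {normal_tail(10):.2e}")   # ~1e-19: safely ignorable
print(f"Pareto: P(X > 10) = {pareto_tail(10):.2e}")   # ~3e-2: about 3%
```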

Deaths in war seem to be fat-tailed, even in conventional warfare. They seem to follow a Pareto distribution. There are lots of tiny skirmishes, relatively frequent regional conflicts, occasional major wars, and a handful of super-deadly global wars. This kind of pattern tends to emerge when a phenomenon is self-reinforcing by positive feedback—hence why we also see it in distributions of income and wildfire intensity.
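
And here is the “a handful of extreme events dominate” point as a quick simulation. The Pareto shape parameter is an assumption chosen to show the qualitative effect, not an empirical estimate for war severity:

```python
import random

# Draw many "conflicts" from a Pareto distribution and see how much of the
# total death toll comes from the very largest ones. alpha = 1.5 is an
# illustrative assumption, not an empirical estimate.
random.seed(1)
alpha = 1.5
conflicts = sorted((random.paretovariate(alpha) for _ in range(100_000)),
                   reverse=True)

total = sum(conflicts)
top_one_percent = sum(conflicts[:len(conflicts) // 100])
print(f"Share of the total from the largest 1% of conflicts: "
      f"{top_one_percent / total:.0%}")
```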

Fat-tailed distributions typically (though not always—it’s easy to construct counterexamples, like the Cauchy distribution with low values truncated off) have another property as well, which is that minor events are common. More common, in fact, than they would be under a normal distribution. What seems to happen is that the probability mass moves away from the moderate outcomes and shifts to both the extreme outcomes and the minor ones.

Nuclear weapons fit this pattern perfectly. They may in fact reduce the probability of moderate, regional conflicts, in favor of increasing the probability of tiny skirmishes or peaceful negotiations. But they also increase the probability of utterly catastrophic outcomes—a full-scale nuclear war could kill billions of people. It probably wouldn’t wipe out all of humanity, and more recent analyses suggest that a catastrophic “nuclear winter” is unlikely. But even 2 billion people dead would be literally the worst thing that has ever happened, and nukes could make it happen in hours when such a death toll by conventional weapons would take years.

If we could somehow guarantee that such an outcome would never occur, then the lower rate of moderate conflicts nuclear weapons provide would justify their existence. But we can’t. It hasn’t happened yet, but it doesn’t have to happen often to be terrible. Really, just once would be bad enough.

Let us hope, then, that the democratic peace turns out to be the theory that’s right. Because a more democratic world would clearly be better—while a more nuclearized world could be better, but could also be much, much worse.