What we still have to be thankful for

Nov 30 JDN 2461010

This post was written before, but will go live after, Thanksgiving.

Thanksgiving is honestly a very ambivalent holiday.

The particular event it celebrates doesn’t seem quite so charming in its historical context: Rather than finding peace and harmony with all Native Americans, the Pilgrims in fact allied with the Wampanoag against the Narragansett, though they did later join forces with the Narragansett in order to conquer the Pequot. And of course we all know how things went for most Native American nations in the long run.

Moreover, even the gathering of family comes with some major downsides, especially in a time of extreme political polarization such as this one. I won’t be joining any of my Trump-supporting relatives for dinner this year (and they probably wouldn’t have invited me anyway), but the fact that this means becoming that much more detached from a substantial part of my extended family is itself a tragedy.

This year in particular, US policy has gotten so utterly horrific that it often feels like we have nothing to be thankful for at all, that all we thought was good and just in the world could simply be torn away at a moment’s notice by raving madmen. It isn’t really quite that bad—but it feels that way sometimes.

It also felt a bit uncanny celebrating Thanksgiving a few years ago when we were living in Scotland, for the UK does not celebrate Thanksgiving, but absolutely does celebrate Black Friday: Holidays may be local, but capitalism is global.

But fall feasts of giving thanks are far more ancient than that particular event in 1621 that we have mythologized to oblivion. They appear in numerous cultures across the globe—indeed their very ubiquity may be why the Wampanoag were so willing to share one with the Pilgrims despite their cultures having diverged something like 40,000 years prior.

And I think that it is by seeing ourselves in that context—as part of the whole of humanity—that we can best appreciate what we truly do have to be thankful for, and what we truly do have to look forward to in the future.

Above all, medicine.

We have actual treatments for some diseases, and even actual cures for a few. By no means all, of course—and it often feels like we are fighting an endless battle even against what we can treat.

But it is worth reflecting on the fact that aside from the last few centuries, this has simply not been the case. There were no actual treatments. There was no real medicine.

Oh, sure, there were attempts at medicine; and there was certainly what we would think of as more like “first aid”: bandaging wounds, setting broken bones. Even amputation and surgery were done sometimes. But most medical treatment was useless or even outright harmful—not least because for most of history, most of it was done without anesthetic or even antiseptic!

There were various herbal remedies for various ailments, some of which actually happened to work: Willow bark genuinely helps with pain, St. John’s wort is a real antidepressant, and some traditional burn creams are surprisingly effective.

But there was no system in place for testing medicine, no way of evaluating what remedies worked and what didn’t. And thus, for every remedy that worked as advertised, there were a hundred more that did absolutely nothing, or even made things worse.

Today, it can feel like we are all chronically ill, because so many of us take so many different pills and supplements. But this is not a sign that we are ill—it is a sign that we can be treated. The pills are new, yes—but the illnesses they treat were here all along.

I don’t see any particular reason to think that Roman plebs or Medieval peasants were any less likely to get migraines than we are; but they certainly didn’t have access to sumatriptan or rimegepant. Maybe they were less likely to get diabetes, but mainly because they were much more likely to be malnourished. (Well, okay, also because they got more exercise, which we could surely stand to do more of.) And the only reason they didn’t get Alzheimer’s was that they usually didn’t live long enough.

Looking further back, before civilization, human health actually does seem to have been better: Foragers were rarely malnourished, weren’t exposed to as many infectious pathogens, and certainly got plenty of exercise. But should a pathogen like smallpox or influenza make it to a forager tribe, the results were often utterly catastrophic.

Today, we don’t really have the sort of plague that human beings used to deal with. We have pandemics, which are also horrible, but far less so. We were horrified by losing 0.3% of our population to COVID; a society that suffered only 0.3%—or even ten times that, 3%—losses from the Black Death would have hailed it as a miracle, for a more typical rate was 30%.

At 0.3%, most of us knew somebody, or knew somebody who knew somebody, who died from COVID. At 3%, nearly everyone would know somebody, and most would know several. At 30%, nearly everyone would have close family and friends who died.
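
To see where those intuitions come from, here is a minimal sketch of the arithmetic, assuming each of us has something like 150 acquaintances (a round, Dunbar-style figure I am using purely for illustration) and that deaths strike independently across the population:

```python
# Probability of losing at least one acquaintance at various mortality rates,
# assuming ~150 acquaintances per person (an illustrative assumption) and
# independent, uniformly distributed deaths.

acquaintances = 150

for death_rate in (0.003, 0.03, 0.30):  # COVID-like, ten times that, Black Death-like
    p_at_least_one = 1 - (1 - death_rate) ** acquaintances
    expected_losses = death_rate * acquaintances
    print(f"{death_rate:.1%} mortality: "
          f"P(at least one acquaintance died) = {p_at_least_one:.0%}, "
          f"expected losses among acquaintances = {expected_losses:.1f}")
```

At 0.3%, roughly a third of people lose an acquaintance directly, and nearly everyone does at one remove; at 3%, essentially everyone does, several times over; at 30%, the expected losses within one’s own circle run into the dozens.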

Then there is infant mortality.

As recently as 1950—this is living memory—the global infant mortality rate was 14.6%. This is about half what it had been historically; for most of human history, roughly a third of all children died between birth and the age of 5.

Today, it is 2.5%.

Where our distant ancestors expected two out of three of their children to survive, and our own great-grandparents expected five out of six, we can now safely expect thirty-nine out of forty to live. This is the difference between “nearly every family has lost a child” and “most families have not lost a child”.

And this is worldwide; in highly-developed countries it’s even better. The US has a relatively high infant mortality rate by the standards of highly-developed countries (indeed, are we even highly-developed, or are we becoming like Saudi Arabia, extremely rich but so unequal that it doesn’t really mean anything to most of our people?). Yet even for us, the infant mortality rate is 0.5%—so we can expect one-hundred-ninety-nine out of two-hundred to survive. This is at the level of “most families don’t even know someone who has lost a child.”
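
For anyone who wants to check those fractions, the conversion from a mortality rate to the “X out of N” framing is just a reciprocal and a round; a quick sketch:

```python
# A mortality rate of roughly 1-in-n means about n-1 out of n children survive.

def out_of_n(mortality_rate):
    n = round(1 / mortality_rate)   # "about 1 in n die"
    return f"about {n - 1} out of {n} survive"

print("historical (1 in 3 die):", out_of_n(1 / 3))   # about 2 out of 3 survive
print("world today (2.5%):     ", out_of_n(0.025))   # about 39 out of 40 survive
print("US today (0.5%):        ", out_of_n(0.005))   # about 199 out of 200 survive
```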

Poverty is a bit harder to measure.

I am increasingly dubious of conventional measures of poverty; ever since compiling my Index of Necessary Expenditure, I am convinced that economists in general, and perhaps US economists in particular, are systematically underestimating the cost of living and thereby underestimating the prevalence of poverty. (I don’t think this is intentional, mind you; I just think it’s a result of using convenient but simplistic measures and not looking too closely into the details.) I think not being able to sustainably afford a roof over your head constitutes being poor—and that applies to a lot of people.

Yet even with that caveat in mind, it’s quite clear that global poverty has greatly declined in the long run.

At the “extreme poverty” level, currently defined as consuming $1.90 at purchasing power parity per day—that’s just under $700 per year, less than 2% of the median personal income in the United States—the number of people has fallen from 1.9 billion in 1990 to about 700 million today. That’s from 36% of the world’s population to under 9% today.
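
Here is the back-of-the-envelope arithmetic behind those figures; the population totals (~5.3 billion in 1990, ~8 billion today) and the ~$40,000 figure for US median personal income are rough numbers I am supplying for illustration:

```python
# Rough checks on the extreme-poverty figures quoted above.

line_per_day = 1.90
per_year = line_per_day * 365
print(f"${per_year:,.0f} per year")                        # just under $700

us_median_personal_income = 40_000                         # rough assumption
print(f"{per_year / us_median_personal_income:.1%} of US median personal income")

print(f"1990:  {1.9e9 / 5.3e9:.1%} of world population")   # about 36%
print(f"today: {0.7e9 / 8.0e9:.1%} of world population")   # under 9%
```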

Now, there are good reasons to doubt that “purchasing power parity” really can be estimated as accurately as we would like, and thus it’s not entirely clear that people living on “$2 per day PPP” are really living at less than 2% of the standard of living of a typical American (honestly, to me that just sounds like… dead). But they are definitely living at a much worse standard of living, and there are far fewer people living at such a low standard today than there were not all that long ago. These are people who don’t have reliable food, clean water, or even basic medicine—and that used to include over a third of humanity, but no longer does. (And I would like to note that actually finding such a person and giving them a few hundred dollars absolutely would change their life, and this is the sort of thing GiveDirectly does. We may not know exactly how to evaluate their standard of living, but we do know that the actual amount of money they have access to is very, very small.)

There are many ways in which the world could be better than it is.

Indeed, part of the deep, overwhelming outrage I feel pretty much all the time lies in the fact that it would be so easy to make things so much better for so many people, if there weren’t so many psychopaths in charge of everything.

Increased foreign aid is one avenue by which that could be achieved—so, naturally, Trump cut it tremendously. More progressive taxation is another—so, of course, we get tax cuts for the rich.

Just think about the fact that there are families with starving children for whom a $500 check could change their lives; but nobody is writing that check, because Elon Musk needs to become a literal trillionaire.

There are so many water lines and railroad tracks and bridges and hospitals and schools not being built because the money that would have paid for them is tied up in making already unfathomably-rich people even richer.

But even despite all that, things are getting better. Not every day, not every month, not even every year—this past year was genuinely, on net, a bad one. But nearly every decade, every generation, and certainly every century (for at least the last few), humanity has fared better than it did in the one before.

As long as we can keep that up, we still have much to hope for—and much to be thankful for.

Time and How to Use It

Nov 5 JDN 2460254

A review of Four Thousand Weeks by Oliver Burkeman

The central message of Four Thousand Weeks: Time and How to Use It seems so obvious in hindsight that it’s difficult to understand why it feels so new and unfamiliar. It’s a much-needed reaction to the obsessive culture of “efficiency” and “productivity” that dominates the self-help genre. Its core message is remarkably simple:

You don’t have time to do everything you want, so stop trying.

I actually think Burkeman understands the problem incorrectly. He argues repeatedly that it is our mortality which makes our lives precious—that it is because we only get four thousand weeks of life that we must use our time well. But this strikes me as just yet more making excuses for the dragon.

Our lives would not be less precious if we lived a thousand years or a million. Indeed, our time would hardly be any less scarce! You still can’t read every book ever written if you live a million years—for every one of those million years, another 500,000 books will be published. You could visit every one of the 10,000 cities in the world, surely; but if you spend a week in each one, by the time you get back to Paris for a second visit, centuries will have passed—I must imagine you’ll have missed quite a bit of change in that time. (And this assumes that our population remains the same—do we really think it would, if humans could live a million years?)

Even a truly immortal being that will live until the end of time needs to decide where to be at 7 PM this Saturday.

Yet Burkeman does grasp—and I fear that too many of us do not—that our time is precious, and when we try to do everything that seems worth doing, we end up failing to prioritize what really matters most.

What do most of us spend most of our lives doing? Whatever our bosses tell us to do. Aside from sleeping, the activity that human beings spend the largest chunk of their lives on is working.

This has made us tremendously, mind-bogglingly productive—our real GDP per capita is four times what it was as recently as 1950, and about eight times what it was in the 1920s. Projecting back further than that is a bit dicier, but assuming even 1% annual growth, it should be about twenty times what it was at the dawn of the Industrial Revolution. We could surely live better than medieval peasants did by working only a few hours per week; yet in fact on average we work more hours than they did—by some estimates, nearly twice as much. Rather than getting the same wealth for 5% of the work, or twice the wealth for 10%, we chose to get 40 times the wealth for twice the work.
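
If you want to check those multiples, here is a quick sketch of the compounding arithmetic; the 75- and 100-year horizons are my own round figures:

```python
import math

# Back out the annual growth rates implied by the multiples quoted above,
# and see how long a twenty-fold increase takes at a steady 1% per year.

def implied_rate(multiple, years):
    return multiple ** (1 / years) - 1

print(f"4x over ~75 years implies  {implied_rate(4, 75):.1%} annual growth")
print(f"8x over ~100 years implies {implied_rate(8, 100):.1%} annual growth")

years_to_20x = math.log(20) / math.log(1.01)
print(f"At 1% per year, a twenty-fold increase takes about {years_to_20x:.0f} years")
```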

It would be one thing if all this wealth and productivity actually seemed to make us happy. But does it?

Our physical health is excellent: We are tall, we live long lives—we are smarter, even, than people of the not-so-distant past. We have largely conquered disease as the ancients knew it. Even a ‘catastrophic’ global pandemic today kills a smaller share of the population than would die in a typical year from disease in ancient times. Even many of our most common physical ailments, such as obesity, heart disease, and diabetes, are more symptoms of abundance than poverty. Our higher rates of dementia and cancer are largely consequences of living longer lives—most medieval peasants simply didn’t make it long enough to get Alzheimer’s. I wonder sometimes how ancient people dealt with other common ailments such as migraine and sleep apnea; but my guess is that they basically just didn’t—since treatment was impossible, they learned to live with it. Maybe they consoled themselves with whatever placebo treatments the healers of their local culture offered.

Yet our mental health seems to be no better than ever—and depending on how you measure it, may actually be getting worse over time. Some of the measured increase is surely due to more sensitive diagnosis; but some of it may be a genuine increase—especially as a result of the COVID pandemic. I wasn’t able to find any good estimates of rates of depression or anxiety disorders in ancient or medieval times, so I guess I really can’t say whether this is a problem that’s getting worse. But it sure doesn’t seem to be getting better. We clearly have not solved the problem of depression the way we have solved the problem of infectious disease.

Burkeman doesn’t tell us to all quit our jobs and stop working. But he does suggest that if you are particularly unhappy at your current job (as I am), you may want to quit it and begin searching for something else (as I have). He reminds us that we often get stuck in a particular pattern and underestimate the possibilities that may be available to us.

And he has advice for those who want to stay in their current jobs, too: Do less. Don’t take on everything that is asked of you. Don’t work yourself to the bone. The rewards for working harder are far smaller than our society will tell you, and the costs of burning out are far higher. Do the work that is genuinely most important, and let the rest go.

Unlike most self-help books, Four Thousand Weeks offers very little in the way of practical advice. It’s more like a philosophical treatise, exhorting you to adopt a whole new outlook on time and how you use it. But he does offer a little bit of advice, near the end of the book, in “Ten Tools for Embracing Your Finitude” and “Five Questions”.

The ten tools are as follows:


Adopt a ‘fixed volume’ approach to productivity. Limit the number of tasks on your to-do list. Set aside a particular amount of time for productive work, and work only during that time.

I am relatively good at this one; I work only during certain hours on weekdays, and I resist the urge to work other times.

Serialize, serialize, serialize. Do one major project at a time.

I am terrible at this one; I constantly flit between different projects, leaving most of them unfinished indefinitely. But I’m not entirely convinced I’d do better trying to focus on one in particular. I switch projects because I get stalled on the current one, not because I’m anxious about not doing the others. Unless I can find a better way to break those stalls, switching projects still gets more done than staying stuck on the same one.

Decide in advance what to fail at. Prioritize your life and accept that some things will fail.

We all, inevitably, fail to achieve everything we want to. What Burkeman is telling us to do is choose in advance which achievements we will fail at. Ask yourself: How much do you really care about keeping the kitchen clean and the lawn mowed? If you’re doing these things to satisfy other people’s expectations but you don’t truly care about them yourself, maybe you should just accept that people will frown upon you for your messy kitchen and overgrown lawn.

Focus on what you’ve already completed, not just on what’s left to complete. Make a ‘done list’ of tasks you have completed today—even small ones like “brushed teeth” and “made breakfast”—to remind yourself that you do in fact accomplish things.

I may try this one for a while. It feels a bit hokey to congratulate yourself on making breakfast—but when you are severely depressed, even small tasks like that can in fact feel like an ordeal.

Consolidate your caring. Be generous and kind, but pick your battles.

I’m not very good at this one either. Spending less time on social media has helped; I am no longer bombarded quite so constantly by worthy causes and global crises. Yet I still have a vague sense that I am not doing enough, that I should be giving more of myself to help others. For me this is partly colored by a feeling that I have failed to build a career that would have both allowed me to have direct impact on some issues and also made enough money to afford large donations.

Embrace boring and single-purpose technology. Downgrade your technology to reduce distraction.

I don’t do this one, but I also don’t see it as particularly good advice. Maybe taking Facebook and (the-platform-formerly-known-as-) Twitter off your phone home screen is a good idea. But the reason you go to social media isn’t that they are so easy to access. It’s that you are expected to, and that you try to use them to fill some kind of need in your life—though it’s unclear they ever actually fill it.

Seek out novelty in the mundane. Cultivate awareness and appreciation of the ordinary things around you.

This one is basically a stripped-down meditation technique. It does work, but it’s also a lot harder to do than most people seem to think. It is especially hard to do when you are severely depressed. One technique I’ve learned from therapy that is surprisingly helpful is to replace “I have to” with “I get to” whenever you can: You don’t have to scoop cat litter, you get to because you have an adorable cat. You don’t have to catch the bus to work, you get to because you have a job. You don’t have to make breakfast for your family, you get to because you have a loving family.

Be a ‘researcher’ in relationships. Cultivate curiosity rather than anxiety or judgment.

Human beings are tremendously varied and often unpredictable. If you worry about whether or not people will do what you want, you’ll be constantly worried. And I have certainly been there. It can help to try to take a stance of detachment, where you concern yourself less with getting the right outcome and more with learning about the people you are with. I think this can be taken too far—you can become totally detached from relationships, or you could put yourself in danger by failing to pass judgment on obviously harmful behaviors—but in moderation, it’s surprisingly powerful. The first time I ever enjoyed going to a nightclub, I went (at my therapist’s suggestion) as a social scientist, tasked with observing and cataloguing the behavior around me. I still didn’t feel fully integrated into the environment (and the music was still too damn loud!), but for once, I wasn’t anxious and miserable.

Cultivate instantaneous generosity. If you feel like doing something good for someone, just do it.

I’m honestly not sure whether this one is good advice. I used to follow it much more than I do now. Interacting with the Effective Altruism community taught me to temper these impulses, and instead of giving to every random charity or homeless person that asks for money, instead concentrate my donations into a few highly cost-effective charities. Objectively, concentrating donations in this way produces a larger positive impact on the world. But subjectively, it doesn’t feel as good, it makes people sad, and sometimes it can make you feel like a very callous person. Maybe there’s a balance to be had here: Give a little when the impulse strikes, but save up most of it for the really important donations.

Practice doing nothing.

This one is perhaps the most subversive, the most opposed to all standard self-help advice. Do nothing? Just rest? How can you say such a thing, when you just reminded us that we have only four thousand weeks to live? Yet this is in fact the advice most of us need to hear. We burn ourselves out because we forget how to rest.

I am also terrible at this one. I tend to get most anxious when I have between 15 and 45 minutes of free time before an activity, because 45 minutes doesn’t feel long enough to do anything, and 15 minutes feels too long to do nothing. Logically this doesn’t really make sense: Either you have time to do something, or you don’t. But it can be hard to find good ways to fill that sort of interval, because it requires the emotional overhead of starting and stopping a task.

Then, there are the five questions:

Where in your life or work are you currently pursuing comfort, when what’s called for is a little discomfort?

It seems odd to recommend discomfort as a goal, but I think what Burkeman is getting at is that we tend to get stuck in the comfortable and familiar, even when we would be better off reaching out and exploring into the unknown. I know that for me, finally deciding to quit this job was very uncomfortable; it required taking a big risk and going outside the familiar and expected. But I am now convinced it was the right decision.

Are you holding yourself to, and judging yourself by, standards of productivity or performance that are impossible to meet?

In a word? Yes. I’m sure I am. But this one is also slipperier than it may seem—for how do we really know what’s possible? And possible for whom? If you see someone else who seems to be living the life you think you want, is it just an illusion? Are they really suffering as badly as you? Or do they perhaps have advantages you don’t, which made it possible for them, but not for you? When people say they work 60 hours per week and you can barely manage 20, are they lying? Are you truly not investing enough effort? Or do you suffer from ailments they don’t, which make it impossible for you to commit those same hours?

In what ways have you yet to accept the fact that you are who you are, not the person you think you ought to be?

I think most of us have a lot of ways that we fail to accept ourselves: physically, socially, psychologically. We are never the perfect beings we aspire to be. And constantly aspiring to an impossible ideal will surely drain you. But I also fear that self-acceptance could be a dangerous thing: What if it makes us stop striving to improve? What if we could be better than we are, but we don’t bother? Would you want a murderous psychopath to practice self-acceptance? (Then again, do they already, whether we want them to or not?) How are we to know which flaws in ourselves should be accepted, and which repaired?

In which areas of your life are you still holding back until you feel like you know what you’re doing?

This one cut me very deep. I have several areas of my life where this accusation would be apt, and one in particular where I am plainly guilty as charged: Parenting. In a same-sex marriage, offspring don’t emerge automatically without intervention. If we want to have kids, we must do a great deal of work to secure adoption. And it has been much easier—safer, more comfortable—to simply put off that work, avoid the risk. I told myself we’d adopt once I finished grad school; but then I only got a temporary job, so I put it off again, saying we’d adopt once I found stability in my career. But what if I never find that stability? What if the rest of my career is always this precarious? What if I can always find some excuse to delay? The pain of never fulfilling that lifelong dream of parenthood might continue to gnaw at me forever.

How would you spend your days differently if you didn’t care so much about seeing your actions reach fruition?

This one is frankly useless. I hate it. It’s like when people say “What would you do if you knew you’d die tomorrow?” Obviously, you wouldn’t go to work, you wouldn’t pay your bills, you wouldn’t clean your bathroom. You might devote yourself single-mindedly to a single creative task you hoped to make a legacy, or gather your family and friends to share one last day of love, or throw yourself into meaningless hedonistic pleasure. Those might even be things worth doing, on occasion. But you can’t do them every day. If you knew you were about to die, you absolutely would not live in any kind of sustainable way.

Similarly, if I didn’t care about seeing my actions reach fruition, I would continue to write stories and never worry about publishing them. I would make little stabs at research when I got curious, then give up once it got difficult or boring, and never bother writing the paper. I would continue flitting between a dozen random projects at once and never finish any of them. I might well feel happier—at least until it all came crashing down—but I would get absolutely nothing done.

Above all, I would never apply for any jobs, because applying for jobs is absolutely not about enjoying the journey. If you know for a fact that you won’t get an offer, you’re an idiot to bother applying. That is a task that is only worth doing if I believe that it will yield results—and indeed, a big part of why it’s so hard to bring myself to do it is that I have a hard time maintaining that belief.

If you read the surrounding context, Burkeman actually seems to intend something quite different than the actual question he wrote. He suggests devoting more time to big, long-term projects that require whole communities to complete. He likens this to laying bricks in a cathedral that we will never see finished.

I do think there is wisdom in this. But it isn’t a simple matter of not caring about results. Indeed, if you don’t care at all about whether the cathedral will stand, you won’t bother laying the bricks correctly. In some sense Burkeman is actually asking us to do the opposite: To care more about results, but specifically results that we may never live to see. Maybe he really intends to emphasize the word see—you care about your actions reaching fruition, but not whether or not you’ll ever see it.

Yet this, I am quite certain, is not my problem. When a psychiatrist once asked me, “What do you really want most in life?” I gave a very thoughtful answer: “To be remembered in a thousand years for my contribution to humanity.” (His response was glib: “You can’t control that.”) I still stand by that answer: If I could have whatever I want, no limits at all, three wishes from an all-powerful genie, two of them would be to solve some of the world’s greatest problems, and the third would be for the chance to live my life in a way that I knew would be forever remembered.

But I am slowly coming to realize that maybe I should abandon that answer. That psychiatrist’s answer was far too glib (he was in fact not a very good fit for me; I quickly switched to a different psychiatrist), but maybe it wasn’t fundamentally wrong. It may be impossible to predict, let alone control, whether our lives have that kind of lasting impact—and, almost by construction, most lives can’t.

Perhaps, indeed, I am too worried about whether the cathedral will stand. I only have a few bricks to lay myself, and while I can lay them the best I can, that ultimately will not be what decides the fate of the cathedral. A fire, or an earthquake, or simply some other bricklayer’s incompetence, could bring about its destruction—and there is nothing at all I can do to prevent that.

This post is already getting too long, so I should try to bring it to a close.

As the adage goes, perhaps if I had more time, I’d make it shorter.

The case against phys ed

Dec 4 JDN 2459918

If I want to stop someone from engaging in an activity, what should I do? I could tell them it’s wrong, and if they believe me, that would work. But what if they don’t believe me? Or I could punish them for doing it, and as long as I can continue to do that reliably, that should deter them from doing it. But what happens after I remove the punishment?

If I really want to make someone not do something, the best way to accomplish that is to make them not want to do it. Make them dread doing it. Make them hate the very thought of it. And to accomplish that, a very efficient method would be to first force them to do it, but make that experience as miserable and humiliating as possible. Give them a wide variety of painful or outright traumatic experiences that are directly connected with the undesired activity, to carry with them for the rest of their life.

This is precisely what physical education does, with regard to exercise. Phys ed is basically optimized to make people hate exercise.

Oh, sure, some students enjoy phys ed. These are the students who are already athletic and fit, who already engage in regular exercise and enjoy doing so. They may enjoy phys ed, may even benefit a little from it—but they didn’t really need it in the first place.

The kids who need more physical activity are the kids who are obese, or have asthma, or suffer from various other disabilities that make exercising difficult and painful for them. And what does phys ed do to those kids? It makes them compete in front of their peers at various athletic tasks at which they will inevitably fail and be humiliated.

Even the kids who are otherwise healthy but just don’t get enough exercise will go into phys ed class at a disadvantage, and instead of being carefully trained to improve their skills and physical condition at their own level, they will be publicly shamed by their peers for their inferior performance.

I know this, because I was one of those kids. I have exercise-induced bronchoconstriction, a lung condition similar to asthma (actually there’s some debate as to whether it should be considered a form of asthma), in which intense aerobic exercise causes the airways of my lungs to become constricted and inflamed, making me unable to get enough air to continue.

It’s really quite remarkable I wasn’t diagnosed with this as a child; I actually once collapsed while running in gym class, and all they thought to do at the time was give me water and let me rest for the remainder of the class. Nobody thought to call the nurse. I was never put on a beta agonist or an inhaler. (In fact at one point I was put on a beta blocker for my migraines; I now understand why I felt so fatigued when taking it—it was literally the opposite of the drug my lungs needed.)

Actually it’s been a few years since I had an attack. This is of course partly due to me generally avoiding intense aerobic exercise; but even when I do get intense exercise, I rarely seem to get bronchoconstriction attacks. My working hypothesis is that the norepinephrine reuptake inhibition of my antidepressant acts something like a beta agonist: one drug increases norepinephrine, while the other mimics it.

But as a child, I got such attacks quite frequently; and even when I didn’t, my overall athletic performance was always worse than most of the other kids. They knew it, I knew it, and while only a few actively tried to bully me for it, none of the others did anything to make me feel better. So gym class was always a humiliating and painful experience that I came to dread.

As a result, as soon as I got out of school and had my own autonomy in how to structure my own life, I basically avoided exercise whenever I could. Even knowing that it was good for me—really, exercise is ridiculously good for you; it honestly doesn’t even make sense to me how good it is for you—I could rarely get myself to actually go out and exercise. I certainly couldn’t do it with anyone else; sometimes, if I was very disciplined, I could manage to maintain an exercise routine by myself, as long as there was no one else there who could watch me, judge me, or compare themselves to me.

In fact, I’d probably have avoided exercise even more, had I not also had some more positive experiences with it outside of school. I trained in martial arts for a few years, getting almost to a black belt in tae kwon do; I quit precisely when it started becoming very competitive and thus began to feel humiliated again when I performed worse than others. Part of me wishes I had stuck with it long enough to actually get the black belt; but the rest of me knows that even if I’d managed it, I would have been miserable the whole time and it probably would have made me dread exercise even more.

The details of my story are of course individual to me; but the general pattern is disturbingly common. A kid does poorly in gym class, or even suffers painful attacks of whatever disabling condition they have, but nobody sees it as a medical problem; they just see the kid as weak and lazy. Or even if the adults are sympathetic, the other kids aren’t; they just see a peer who performed worse than them, and they have learned by various subtle (and not-so-subtle) cultural pressures that anyone who performs worse at a culturally-important task is worthy of being bullied and shunned.

Even outside the directly competitive environment of sports, the very structure of a phys ed class, where a large group of students are all expected to perform the same athletic tasks and can directly compare their performance against each other, invites this kind of competition. Kids can see, right in their faces, who is doing better and who is doing worse. And our culture is astonishingly bad at teaching children (or anyone else, for that matter) how to be sympathetic to others who perform worse. Worse performance is worse character. Being bad at running, jumping and climbing is just being bad.

Part of the problem is that school administrators seem to see physical education as a training and selection regimen for their sports programs. (In fact, some of them seem to see their entire school as existing to serve their sports programs.) Here is a UK government report bemoaning the fact that “only a minority of schools play competitive sport to a high level”, apparently not realizing that this is necessarily true because high-level sports performance is a relative concept. Only one team can win the championship each year. Only 10% of students will ever be in the top 10% of athletes. No matter what. Anything else is literally mathematically impossible. We do not live in Lake Wobegon; not all the children can be above average.

There are good phys ed programs out there. They have highly-trained instructors and they focus on matching tasks to a student’s own skill level, as well as actually educating them—teaching them about anatomy and physiology rather than just making them run laps. In fact, the one phys ed class I took that I actually enjoyed was an anatomy and physiology class; we didn’t do any physical exercise in that class at all. But well-taught phys ed classes are clearly the exception, not the norm.

Of course, it could be that some students actually benefit from phys ed, perhaps even enough to offset the harms to people like me. (Though then the question should be asked whether phys ed should be compulsory for all students—if an intervention helps some and hurts others, maybe only give it to the ones it helps?) But I know very few people who actually described their experiences of phys ed class as positive ones. While many students describe their experiences of math class in similarly-negative terms (which is also a problem with how math classes are taught), I definitely do know people who actually enjoyed and did well in math class. Still, my sample is surely biased—it’s comprised of people similar to me, and I hated gym and loved math. So let’s look at the actual data.

Or rather, I’d like to, but there isn’t that much out there. The empirical literature on the effects of physical education is surprisingly limited.

A lot of analyses of physical education simply take as axiomatic that more phys ed means more exercise, and so they use the—overwhelming, unassailable—evidence that exercise is good to support an argument for more phys ed classes. But they never seem to stop and take a look at whether phys ed classes are actually making kids exercise more, particularly once those kids grow up and become adults.

In fact, the surprisingly weak correlations between higher physical activity and better mental health among adolescents (despite really strong correlations in adults) could be because exercise among adolescents is largely coerced via phys ed, and the misery of being coerced into physical humiliation counteracts any benefits that might have been obtained from increased exercise.

The best long-term longitudinal study I can find did show positive effects of phys ed on long-term health, though by a rather odd mechanism: Women exercised more as adults if they had phys ed in primary school, but men didn’t; they just smoked less. And this study was back in 1999, studying a cohort of adults who had phys ed quite a long time ago, when it was better funded.

The best experiment I can find actually testing whether phys ed programs work used a very carefully designed phys ed program with a lot of features that it would be really nice to have but that the vast majority of actual gym classes lack: carefully structured activities with specific developmental goals, and, perhaps most importantly, teaching children to track and evaluate their own individual progress rather than comparing themselves to others.

And even then, the effects are not all that large. The physical activity scores of the treatment group rose from 932 minutes per week to 1108 minutes per week for first-graders, and from 1212 to 1454 for second-graders. But the physical activity scores of the control group rose from 906 to 996 for first-graders, and 1105 to 1211 for second-graders. So of the 176 minutes per week gained by first-graders, 90 would have happened anyway. Likewise, of the 242 minutes per week gained by second-graders, 106 were not attributable to the treatment. Only about half of the gains were due to the intervention, and they amount to about a 10% increase in overall physical activity. It also seems a little odd to me that the control groups both started worse off than the experimental groups and both groups gained; it raises some doubts about the randomization.

The researchers also measured psychological effects, and these effects are even smaller and honestly a little weird. On a scale of “somatic anxiety” (basically, how bad do you feel about your body’s physical condition?), this well-designed phys ed program only reduced scores in the treatment group from 4.95 to 4.55 among first-graders, and from 4.50 to 4.10 among second-graders. Seeing as the scores for second-graders also fell in the control group from 4.63 to 4.45, only about half of the observed reduction—0.2 points on a 10-point scale—is really attributable to the treatment. And the really baffling part is that the measure of social anxiety actually fell more, which makes me wonder if they’re really measuring what they think they are.
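
The arithmetic in the last two paragraphs is just a difference-in-differences calculation; here it is spelled out with the numbers quoted above:

```python
# Difference-in-differences: (treatment gain) minus (control gain) is the
# change attributable to the program.

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    treat_gain = treat_post - treat_pre
    ctrl_gain = ctrl_post - ctrl_pre
    return treat_gain, ctrl_gain, treat_gain - ctrl_gain

# Weekly physical activity (minutes per week)
for grade, values in [("1st grade", (932, 1108, 906, 996)),
                      ("2nd grade", (1212, 1454, 1105, 1211))]:
    t, c, net = diff_in_diff(*values)
    print(f"{grade}: treatment +{t}, control +{c}, net effect +{net} min/week "
          f"(about {net / values[0]:.0%} of baseline)")

# Somatic anxiety among second-graders (10-point scale)
t, c, net = diff_in_diff(4.50, 4.10, 4.63, 4.45)
print(f"Somatic anxiety: treatment {t:+.2f}, control {c:+.2f}, net {net:+.2f} points")
```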

Clearly, exercise is good. We should be trying to get people to exercise more. Actually, this is more important than almost anything else we could do for public health, with the possible exception of vaccinations. All of these campaigns trying to get kids to lose weight should be removed and replaced with programs to get them to exercise more, because losing weight doesn’t benefit health and exercising more does.

But I am not convinced that physical education as we know it actually makes people exercise more. In the short run, it forces kids to exercise, when there were surely ways to get kids to exercise that didn’t require such coercion; and in the long run, it gives them painful, even traumatic memories of exercise that make them not want to continue it once they get older. It’s too competitive, too one-size-fits-all. It doesn’t account for innate differences in athletic ability or match challenge levels to skill levels. It doesn’t help kids cope with having less ability, or even teach kids to be compassionate toward others with less ability than them.

And it makes kids miserable.

The economic impact of chronic illness

Mar 27 JDN 2459666

This topic is quite personal for me, as someone who has suffered from chronic migraines since adolescence. Some days, weeks, and months are better than others. This past month has been the worst I have felt since 2019, when we moved into an apartment that turned out to be full of mold. This time, there is no clear trigger—which also means no easy escape.

The economic impact of chronic illness is enormous. 90% of US healthcare spending is on people with chronic illnesses, including mental illnesses—and the US has the most expensive healthcare system in the world by almost any measure. Over 55% of adult Medicaid beneficiaries have two or more chronic illnesses.

The total annual cost of all chronic illnesses is hard to estimate, but it’s definitely somewhere in the trillions of dollars per year. The World Economic Forum estimated that number at $47 trillion over the next 20 years, which I actually consider conservative. I think this is counting how much we actually spend and some notion of lost productivity, as well as the (fraught) concept of the value of a statistical life—but I don’t think it’s putting a sensible value on the actual suffering. This will effectively undervalue poor people who are suffering severely but can’t get treated—because they spend little and can’t put a large dollar value on their lives. In the US, where the data is the best, the total cost of chronic illness comes to nearly $4 trillion per year—20% of GDP. If other countries are as bad or worse (and I don’t see why they would be better), then we’re looking at something like $17 trillion in real cost every single year; so over the next 20 years that’s not $47 trillion—it’s over $340 trillion.
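
Here is the back-of-the-envelope scaling behind that last figure; the ~$85 trillion figure for gross world product is a round number I am supplying for illustration:

```python
# Scale the US chronic-illness burden (about 20% of GDP, per the figures above)
# to the whole world economy.

share_of_gdp = 0.20
world_gdp = 85e12                     # rough gross world product, ~$85 trillion

world_cost_per_year = share_of_gdp * world_gdp
print(f"~${world_cost_per_year / 1e12:.0f} trillion per year")           # ~$17T
print(f"~${20 * world_cost_per_year / 1e12:.0f} trillion over 20 years") # ~$340T
```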

Over half of US adults have at least one of the following, and over a quarter have two or more: arthritis, cancer, chronic obstructive pulmonary disease, coronary heart disease, current asthma, diabetes, hepatitis, hypertension, stroke, or kidney disease. (Actually the former very nearly implies the latter, unless chronic conditions somehow prevented one another. Two statistically independent events with 50% probability will jointly occur 25% of the time: Flip two coins.)

Unsurprisingly, age is positively correlated with chronic illness. Income is negatively correlated, both because chronic illnesses reduce job opportunities and because poorer people have more trouble getting good treatment. I am the exception that proves the rule, the upper-middle-class professional with both a PhD and a severe chronic illness.

There seems to be a common perception that chronic illness is largely a “First World problem”, but in fact chronic illnesses are more common—and much more poorly treated—in countries with low and moderate levels of development than they are in the most highly-developed countries. Over 75% of all deaths by non-communicable disease are in low- and middle-income countries. The proportion of deaths that is caused by non-communicable diseases is higher in high-income countries—but that’s because other diseases have been basically eradicated from high-income countries. People in rich countries actually suffer less from chronic illness than people in poor countries (on average).

It’s always a good idea to be careful of the distinction between incidence and prevalence, but with chronic illness this is particularly important, because (almost by definition) chronic illnesses last longer and so can have very high prevalence even with low incidence. Indeed, the odds of someone getting their first migraine (incidence) are low precisely because the odds of being someone who gets migraines (prevalence) are so high.

Quite high in fact: About 10% of men and 20% of women get migraines at least occasionally—though only about 8% of these (so 1% of men and 2% of women) get chronic migraines. Indeed, because it is both common and can be quite severe, migraine is the second-most disabling condition worldwide as measured by years lived with disability (YLD), after low back pain. Neurologists are particularly likely to get migraines; the paper I linked speculates that they are better at realizing they have migraines, but I think we also need to consider the possibility of self-selection bias where people with migraines may be more likely to become neurologists. (I considered it, and it seems at least as good a reason as becoming a dentist because your name is Denise.)

If you order causes by the number of disability-adjusted life years (DALYs) they cost, chronic conditions rank quite high: while cardiovascular disease and cancer rank by far the highest, diabetes and kidney disease, mental disorders, neurological disorders, and musculoskeletal disorders all rank higher than malaria, HIV, or any other infection except respiratory infections (read: tuberculosis, influenza, and, once these charts are updated for the next few years, COVID). Note also that at the very bottom is “conflict and terrorism”—that’s all organized violence in the world—and natural disasters. Mental disorders alone cost the world 20 times as many DALYs as all conflict and terrorism combined.

Ancient plagues, modern pandemics

Mar 1 JDN 2458917

The coronavirus epidemic continues; though it originated in Wuhan, in China’s Hubei province, the virus has now been confirmed in places as far-flung as Italy, Brazil, and Mexico. So far, about 90,000 people have caught it, and about 3,000 have died, mostly in China.

There are legitimate reasons to be concerned about this epidemic: Like influenza, coronavirus spreads quickly, and can be carried without symptoms, yet unlike influenza, it has a very high rate of complications, causing hospitalization as often as 10% of the time and death as often as 2%. There’s a lot of uncertainty about these numbers, because it’s difficult to know exactly how many people are infected but either have no symptoms or have symptoms that can be confused with other diseases. But we do have reason to believe that coronavirus is much deadlier for those infected than influenza: Influenza spreads so widely that it kills about 300,000 people every year, but this is only 0.1% of the people infected.
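
For a sense of scale, here is the arithmetic implied by those figures; these are the rough early estimates quoted above, not a forecast:

```python
# Implied number of influenza infections per year, and how the early
# coronavirus fatality estimate compares per infection.

flu_deaths_per_year = 300_000
flu_fatality_rate = 0.001                 # ~0.1% of those infected
flu_infections = flu_deaths_per_year / flu_fatality_rate
print(f"Implied influenza infections per year: {flu_infections:,.0f}")   # ~300 million

covid_fatality_rate = 0.02                # "as often as 2%", per early estimates
print(f"Fatality per infection, coronavirus vs. influenza: "
      f"{covid_fatality_rate / flu_fatality_rate:.0f}x")                 # ~20x
```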

And yet, despite our complex interwoven network of international trade that sends people and goods all around the world, our era is probably the safest in history in terms of the risk of infectious disease.

Partly this is technology: Especially for bacterial infections, we have highly effective treatments that our forebears lacked. But for most viral infections we actually don’t have very effective treatments—which means that technology per se is not the real hero here.

Vaccination is a major part of the answer: Vaccines have effectively eradicated polio and smallpox, and would probably be on track to eliminate measles and rubella if not for dangerous anti-vaccination ideology. But even with no vaccine against coronavirus (yet) and not very effective vaccines against influenza, still the death rates from these viruses are nowhere near those of ancient plagues.

The Black Death killed something like 40% of Europe’s entire population. The Plague of Justinian killed as many as 20% of the entire world’s population. This is a staggeringly large death rate compared to a modern pandemic, in which even a 2% death rate would be considered a total catastrophe.

Even the 1918 influenza pandemic, which killed more than all the battle deaths in World War I combined, wasn’t as terrible as an ancient plague; it killed about 2% of the infected population. And when a very similar influenza virus appeared in 2009, how many people did it kill? About 400,000 people, roughly 0.1% of those infected (slightly worse than the average flu season). That’s how much better our public health has gotten in the last century alone.

Remember SARS, a previous viral pandemic that also emerged in China? It only killed 774 people, in a year in which over 300,000 died of influenza.

Sanitation is probably the most important factor: Certainly sanitation was far worse in ancient times. Today almost everyone routinely showers and washes their hands, which makes a big difference—but it’s notable that widespread bathing didn’t save the Romans from the Plague of Justinian.

I think it’s underappreciated just how much better our communication and quarantine procedures are today than they once were. In ancient times, the only way you heard about a plague was a live messenger carrying the news—and that messenger might well be already carrying the virus. Today, an epidemic in China becomes immediate news around the world. This means that people prepare—they avoid travel, they stock up on food, they become more diligent about keeping clean. And perhaps even more important than the preparation by individual people is the preparation by institutions: Governments, hospitals, research labs. We can see the pandemic coming and be ready to respond weeks or even months before it hits us.

So yes, do wash your hands regularly. Wash for at least 20 seconds, which will definitely feel like a long time if you haven’t made it a habit—but it does make a difference. Try to avoid travel for a while. Stock up on food and water in case you need to be quarantined. Follow whatever instructions public health officials give as the pandemic progresses. But you don’t need to panic: We’ve got this under control. That Horseman of the Apocalypse is dead; and fear not, Famine and War are next. I’m afraid Death himself will probably be a while, though.

The cost of illness

Feb 2 JDN 2458882

As I write this I am suffering from some sort of sinus infection, most likely some strain of rhinovirus. So far it has just been basically a bad cold, so there isn’t much to do aside from resting and waiting it out. But it did get me thinking about healthcare—we’re so focused on the costs of providing it that we often forget the costs of not providing it.

The United States is the only First World country without a universal healthcare system. It is not a coincidence that we also have some of the highest rates of preventable mortality and burden of disease.

We in the United States spend about $3.5 trillion per year on healthcare, the most of any country in the world, even as a proportion of GDP. Yet this is not the cost of disease; this is how much we were willing to pay to avoid the cost of disease. Whatever harm would have been caused without all that treatment must actually be worth more than $3.5 trillion to us—because we paid that much to avoid it.

Globally, the disease burden is about 30,000 disability-adjusted life-years (DALY) per 100,000 people per year—that is to say, the average person is about 30% disabled by disease. I’ve spoken previously about quality-adjusted life years (QALY); the two measures take slightly different approaches to the same overall goal, and are largely interchangeable for most purposes.

Of course this result relies upon the disability weights; it’s not so obvious how we should be comparing across different conditions. How many years of normal life would you be willing to trade to avoid ten years of Alzheimer’s? But it’s probably not too far off to say that if we could somehow wave a magic wand and cure all disease, we would really increase our GDP by something like 30%. This would be over $6 trillion in the US, and over $26 trillion worldwide.
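
For concreteness, here is that arithmetic spelled out; the GDP levels (~$21 trillion for the US, ~$87 trillion for the world) are rough round figures:

```python
# The 30% disease burden above, converted into rough dollar terms.

daly_per_100k = 30_000
burden_share = daly_per_100k / 100_000
print(f"Average disability burden: {burden_share:.0%}")            # 30%

us_gdp, world_gdp = 21e12, 87e12                                    # rough figures
print(f"US:    ~${burden_share * us_gdp / 1e12:.1f} trillion")      # over $6 trillion
print(f"World: ~${burden_share * world_gdp / 1e12:.0f} trillion")   # over $26 trillion
```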

Of course, we can’t actually do that. But we can ask what kinds of policies are most likely to promote health in a cost-effective way.

Unsurprisingly, the biggest improvements to be made are in the poorest countries, where it can be astonishingly cheap to improve health. Malaria prevention has a cost of around $30 per DALY—by donating to the Against Malaria Foundation you can buy a year of life for less than the price of a new video game. Compare this to the standard threshold in the US of $50,000 per QALY: Targeting healthcare in the poorest countries can increase cost-effectiveness a thousandfold. In humanitarian terms, it would be well worth diverting spending from our own healthcare to provide public health interventions in poor countries. (Fortunately, we have even better options than that, like raising taxes on billionaires or diverting military spending instead.)

We in the United States spend about twice as much (per person per year) on healthcare as other First World countries. Are our health outcomes twice as good? Clearly not. Are they any better at all? That really isn’t clear. We certainly don’t have a particularly high life expectancy. We spend more on administrative costs than we do on preventative care—unlike every other First World country except Australia. Almost all of our drugs and therapies are more expensive here than they are everywhere else in the world.

The obvious answer here is to make our own healthcare system more like those of other First World countries. There are a variety of universal health care systems in the world that we could model ourselves on, ranging from the single-payer government-run system in the UK to the universal mandate system of Switzerland. The amazing thing is that it almost doesn’t matter which one we choose: We could copy basically any other First World country and get better healthcare for less spending. Obamacare was in many ways similar to the Swiss system, but we never fully implemented it and the Republicans have been undermining it every way they can. Under President Trump, they have made significant progress in undermining it, and as a result, there are now 3 million more Americans without health insurance than there were before Trump took office. The Republican Party is intentionally increasing the harm of disease.

What do we mean by “obesity”?

Nov 25 JDN 2458448

I thought this topic would be particularly appropriate for the week of Thanksgiving, since as a matter of public ritual, this time every year, we eat too much and don’t get enough exercise.

No doubt you have heard the term “obesity epidemic”: It’s not just used by WebMD or mainstream news; it’s also used by the American Heart Association, the Center for Disease Control, the World Health Organization, and sometimes even published in peer-reviewed journal articles.

This is kind of weird, because the formal meaning of the term “epidemic” clearly does not apply here. I feel uncomfortable going against public health officials in what is clearly their area of expertise rather than my own, but everything I’ve ever read about the official definition of the word “epidemic” requires it to be an infectious disease. You can’t “catch” obesity. Hanging out with people who are obese may slightly raise your risk of obesity, but not in the way that hanging out with people with influenza gives you influenza. It’s not caused by bacteria or viruses. Eating food touched by a fat person won’t cause you to catch the fat. Therefore, whatever else it is, this is not an epidemic. (I guess sometimes we use the term more metaphorically, “an epidemic of bankruptcies” or an “epidemic of video game consumption”; but I feel like the WHO and CDC of all people should be more careful.)

Indeed, before we decide what exactly this is, I think we should first ask ourselves a deeper question: What do we mean by “obesity”?

The standard definition of “obesity” relies upon the body mass index (BMI), a very crude measure that simply takes your body mass and divides by the square of your height. It’s easy to measure, but that’s basically its only redeeming quality.

Anyone who has studied dimensional analysis should immediately see a problem here: That isn’t a unit of density. It’s a unit of… density-length? If you take the exact same individual and scale them up by 10%, their BMI will increase by 10%. Do we really intend to say that simply being larger makes you obese, for the exact same ratios of muscle, fat, and bone?

Because of this, the taller you are, the more likely your BMI is going to register as “obese”, holding constant your actual level of health and fitness. And worldwide, average height has been increasing. This isn’t enough to account for the entire trend in rising BMI, but it reduces it substantially; average height has increased by about 10% since the 1950s, which is enough to raise our average BMI by about 2 points of the 5-point observed increase.
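
Here is a minimal sketch of that scaling argument, assuming a body scales isometrically (an idealization) and using an illustrative baseline BMI of about 21.5:

```python
# Under isometric scaling, mass grows with the cube of height, so
# BMI = mass / height**2 grows linearly with height.

def bmi(mass_kg, height_m):
    return mass_kg / height_m ** 2

height, mass = 1.65, 58.5                 # illustrative baseline, BMI ~21.5
print(f"baseline BMI: {bmi(mass, height):.1f}")

scale = 1.10                              # same body, 10% larger in every dimension
taller, heavier = height * scale, mass * scale ** 3
print(f"scaled BMI:   {bmi(heavier, taller):.1f}")   # ~10% higher, about +2 points
```

On that baseline, a 10% increase in height alone moves BMI up by about two points, which is the adjustment described above.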

And of course BMI doesn’t say anything about your actual ratios of fat and muscle; all it measures is your total mass relative to your height. As a result, there is a systematic bias against athletes in the calculation of BMI—and any health measure that is biased against athletes is clearly doing something wrong. All those doctors telling us to exercise more may not realize it, but if we actually took their advice, our BMIs would very likely get higher, not lower—especially for men, and especially for strength-building exercise.

It’s also quite clear that our standards for “healthy weight” are distorted by social norms. Feminists have been talking about this for years; most women will never look like supermodels no matter how much weight they lose—and eating disorders are much more dangerous than being even 50 pounds overweight. We’re starting to figure out that similar principles hold for men: A six-pack of abs doesn’t actually mean you’re healthy; it means you are dangerously depleted of fatty acids.

To compensate for this, it seems like the most sensible methodology would be to figure out empirically what sort of weight is most strongly correlated with good health and long lifespan—what BMI maximizes your expected QALY.

You might think that this is what public health officials did when defining what is currently categorized as “normal weight”—but you would be wrong. They used social norms and general intuition, and as a result, our standards for “normal weight” are systematically miscalibrated.

In fact, the empirical evidence is quite clear: The people with the highest expected QALY are those who are classified as “overweight”, with BMI between 25 and 30. Those of “normal weight” (20 to 25) fare slightly worse, followed by those classified as “obese class I” (30 to 35)—but we don’t actually see large effects until either “underweight” (18.5-20) or “obese class II” (35 to 40). And the really severe drops in life and health expectancy don’t happen until “obese class III” (>40); and we see the same severe drops at “very underweight” (<18.5).

With that in mind, consider that the global average BMI increased from 21.7 in men and 21.4 in women in 1975 to 24.2 in men and 24.4 in women in 2014. That is, the world average moved from the low end of “normal weight”, which is actually too light, to the high end of “normal weight”, which is probably optimal. The global prevalence of “morbid obesity”, the kind that actually has severely detrimental effects on health, is only 0.64% in men and 1.6% in women. Even including “severe obesity”, the kind that has a noticeable but not dramatic effect on health, the figures are only 2.3% in men and 5.0% in women. That’s your epidemic? Reporting often says things like “2/3 of American adults are overweight or obese”; but that “overweight” proportion should be disregarded entirely, since it is beneficial to health. The actual prevalence of obesity in the US—even including class I obesity, which is not very harmful—is less than 40%.
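
For reference, here is a small sketch of those category cut points in Python. How the exact boundary values are assigned (whether 25.0 counts as “normal” or “overweight”, for instance) is my own arbitrary choice for illustration, not an official convention.

```python
def bmi_category(bmi):
    """Categories as discussed above; boundary handling is an arbitrary choice."""
    if bmi < 18.5:
        return "very underweight"
    elif bmi < 20:
        return "underweight"
    elif bmi < 25:
        return "normal weight"
    elif bmi < 30:
        return "overweight"        # empirically the highest expected QALY
    elif bmi < 35:
        return "obese class I"
    elif bmi < 40:
        return "obese class II"
    else:
        return "obese class III"   # where the severe health effects appear

# The global averages quoted above:
for year, men, women in [(1975, 21.7, 21.4), (2014, 24.2, 24.4)]:
    print(year, bmi_category(men), "/", bmi_category(women))
# Both years land in "normal weight": the low end in 1975, the high end in 2014.
```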

If obesity were the health crisis it is made out to be, we should expect global life expectancy to be decreasing, or at the very least not increasing. On the contrary, it is rapidly increasing: In 1955, global life expectancy was only 55 years, while it is now over 70.

Worldwide, the countries with the highest obesity rates are those with the longest life expectancy, because both of these things are strongly correlated with high levels of economic development. But it may not just be that: Smoking reduces obesity while also reducing lifespan, and a lot of those countries with very high obesity (including the US) have very low rates of smoking.

There’s some evidence that, within the set of rich, highly-developed countries, higher obesity rates are correlated with lower life expectancy, but these effects are much smaller than the effects of high development itself. Going from the highest obesity rate in the world (the US, of course) to the lowest among all highly-developed countries (Japan) means reducing the obesity rate by 34 percentage points, yet it only buys about 5 years of life expectancy. You’d get the same increase by raising overall economic development from the level of Turkey to the level of Greece, about 10 points on the 100-point HDI scale.


Now, am I saying that we should all be 400 pounds? No, there does come a point where excess weight is clearly detrimental to health. But this threshold is considerably higher than you have probably been led to believe. If you are 15 or 20 pounds “overweight” by what our society (or even your doctor!) tells you, you are probably actually at the optimal weight for your body type. If you are 30 or 40 pounds “overweight”, you may want to try to lose some weight, but don’t make yourself suffer to achieve it. Only if you are 50 pounds or more “overweight” should you really be considering drastic action. If you do try to lose weight, be realistic about your goal: Losing 5% to 10% of your initial weight is a roaring success.

There are also reasons to be particularly concerned about obesity and lack of exercise in children, which is why Michelle Obama’s “Let’s Move!” campaign was a good thing.

And yes, exercise more! Don’t do it to try to lose weight (exercise does not actually cause much weight loss). Just do it. Exercise has so many health benefits it’s honestly kind of ridiculous.

But why am I complaining about this, anyway? Even if we make some people worry about their eating a bit more than is strictly necessary, what’s the harm in that? At least we’re getting people to exercise, and Thanksgiving was already ruined by politics anyway.

Well, here’s the thing: I don’t think this obesity panic is actually making us any less obese.

The United States is the most obese country in the world—and you can’t so much as call up Facebook or step into a subway car in the US without someone telling you that you’re too fat and you need to lose weight. The people who really are obese and may need medical help losing weight are the ones most likely to be publicly shamed and harassed for their weight—and there’s no evidence that this actually does anything to reduce their weight. People who experience shaming and harassment for their weight are actually less likely to achieve sustained weight loss.

Teenagers—both boys and girls—who are perceived to be “overweight” are at substantially elevated risk of depression and suicide. People who more fully internalize feelings of shame about their weight have higher blood pressure and higher triglycerides, though once you control for other factors the effect is not huge. There’s even evidence that fat shaming by medical professionals leads to worse treatment outcomes among obese patients.

If we want to actually reduce obesity—and this makes sense, at least for the upper-tail obesity of BMI above 35—then we should be looking at what sort of interventions are actually effective at doing that. Medicine has an important role to play of course, but I actually think economics might be stronger here (though I suppose I would, wouldn’t I?).

Number 1: Stop subsidizing meat and feed grains. There is now quite clear evidence that direct and indirect government subsidies for meat production are a contributing factor in our high fat consumption and thus high obesity rate, though obviously other factors matter too. If you’re worried about farmers, subsidize vegetables instead, or pay for active labor market programs that will train those farmers to work in new industries. This thing we do where we try to save the job instead of the worker is fundamentally idiotic and destructive. Jobs are supposed to be destroyed; that’s what technological improvement is. If you stop destroying jobs, you will stop economic growth.

Number 2: Restrict advertising of high-sugar, high-fat foods, especially to children. Food advertising is particularly effective because it draws on such primal impulses, and children are particularly vulnerable (as the APA has publicly reported, including specifically on food advertising). Corporations like McDonald’s and Kellogg’s know quite well what they’re doing when they advertise high-fat, high-sugar foods to kids and get them into the habit of eating them early.

Number 3: Find policies to promote exercise. Despite its small effects on weight loss, exercise has enormous effects on health. Indeed, the fact that people who successfully lose weight show long-term benefits even if they put the weight back on suggests to me that what they really gained was a habit of exercise. We need to find ways to integrate exercise into our daily lives more. The one big thing that our ancestors did do better than we do is constant exercise—be it hunting, gathering, or farming. Standing desks and treadmill desks may seem weird, but there is evidence that they actually improve health. Right now they are quite expensive, so most people don’t buy them. If we subsidized them, they would be cheaper; if they were cheaper, more people would buy them; if more people bought them, they would seem less weird. Eventually, it could become the norm to walk on a treadmill while you work, and sitting might seem weird instead. Even a quite large subsidy could be worthwhile: Say we had to spend $500 per adult per year on treadmill desks. That comes to about $80 billion per year, which is less than one fourth of what we currently spend on diabetes or heart disease, so we’d break even if we simply managed to reduce spending on those two conditions by 13%. Add in all the other benefits for depression, chronic pain, sleep, sexual function, and so on, and the quality of life improvement could be quite substantial.
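
As a rough back-of-the-envelope check on that last claim: the $80 billion subsidy figure comes from the paragraph above, while the roughly $600 billion used here for combined diabetes and heart disease spending is an assumption consistent with the “less than one fourth” comparison, not a verified statistic.

```python
subsidy_total = 80e9          # dollars per year, from the figure above
disease_spending = 600e9      # assumed combined annual diabetes + heart disease spending

break_even_reduction = subsidy_total / disease_spending
print(f"{break_even_reduction:.0%}")   # ~13%, matching the break-even figure above
```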

How do we measure happiness?

JDN 2457028 EST 20:33.

No, really, I’m asking. I strongly encourage my readers to offer in the comments any ideas they have about the measurement of happiness in the real world; this has been a stumbling block in one of my ongoing research projects.

In one sense the measurement of happiness—or more formally utility—is absolutely fundamental to economics; in another it’s something most economists are astonishingly afraid of even trying to do.

The basic question of economics has nothing to do with money, and is really only incidentally related to “scarce resources” or “the production of goods” (though many textbooks will define economics in this way—apparently implying that a post-scarcity economy is not an economy). The basic question of economics is really this: How do we make people happy?

This must always be the goal in any economic decision, and if we lose sight of that fact we can make some truly awful decisions. Other goals may work sometimes, but they inevitably fail: If you conceive of the goal as “maximize GDP”, then you’ll try to do any policy that will increase the amount of production, even if that production comes at the expense of stress, injury, disease, or pollution. (And doesn’t that sound awfully familiar, particularly here in the US? 40% of Americans report their jobs as “very stressful” or “extremely stressful”.) If you were to conceive of the goal as “maximize the amount of money”, you’d print money as fast as possible and end up with hyperinflation and total economic collapse à la Zimbabwe. If you were to conceive of the goal as “maximize human life”, you’d support methods of increasing population to the point where we had a hundred billion people whose lives were barely worth living. Even if you were to conceive of the goal as “save as many lives as possible”, you’d find yourself investing in whatever would extend lifespan even if it meant enormous pain and suffering—which is a major problem in end-of-life care around the world. No, there is one goal and one goal only: Maximize happiness.

I suppose technically it should be “maximize utility”, but those are in fact basically the same thing as long as “happiness” is broadly conceived as eudaimonia, the joy of a life well-lived, and not a narrow concept of just adding up pleasure and subtracting out pain. The goal is not to maximize the quantity of dopamine and endorphins in your brain; the goal is to achieve a world where people are safe from danger, free to express themselves, with friends and family who love them, who participate in a world that is just and peaceful. We do not want merely the illusion of these things—we want to actually have them. So let me be clear that this is what I mean when I say “maximize happiness”.

The challenge, therefore, is how we figure out if we are doing that. Things like money and GDP are easy to measure; but how do you measure happiness?

Early economists like Adam Smith and John Stuart Mill tried to deal with this question, and while they were not very successful I think they deserve credit for recognizing its importance and trying to resolve it. But sometime around the rise of modern neoclassical economics, economists gave up on the project and instead sought a narrower task, to measure preferences.

This is often called technically ordinal utility, as opposed to cardinal utility; but this terminology obscures the fundamental distinction. Cardinal utility is actual utility; ordinal utility is just preferences.

(The notion that cardinal utility is defined “up to a linear transformation” is really an eminently trivial observation, and it shows just how little physics the physics-envious economists really understand. All we’re talking about here is units of measurement—the same distance is 10.0 inches or 25.4 centimeters, so is distance only defined “up to a linear transformation”? It’s sometimes argued that there is no clear zero—like Fahrenheit and Celsius—but actually it’s pretty clear to me that there is: Zero utility is not existing. So there you go, now you have Kelvin.)

Preferences are a bit easier to measure than happiness, but not by as much as most economists seem to think. If you imagine a small number of options, you can just put them in order from most to least preferred and there you go; and we could imagine asking someone to do that, or, via the technique of revealed preference, inferring their preferences from the choices they make, assuming that when given the choice of X and Y, choosing X means you prefer X to Y.

Like much of neoclassical theory, this sounds good in principle and utterly collapses when applied to the real world. Above all: How many options do you have? It’s not easy to say, but the number is definitely huge—and both of those facts pose serious problems for a theory of preferences.

The fact that it’s not easy to say means that we don’t have a well-defined set of choices; even if Y is theoretically on the table, people might not realize it, or they might not see that it’s better even though it actually is. Much of our cognitive effort in any decision is actually spent narrowing the decision space—when deciding who to date or where to go to college or even what groceries to buy, simply generating a list of viable options involves a great deal of effort and extremely complex computation. If you have a true utility function, you can satisfice (choose the first option that is above a certain threshold) or engage in constrained optimization (decide whether to continue searching or accept your current choice based on how good it is). Under preference theory, there is no such “how good it is” and there are no such thresholds. You either search forever or choose a cutoff arbitrarily.
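
Here is a minimal sketch of that difference in Python, with purely hypothetical options and utility values. The point is only that “good enough” is meaningful when utilities are, and undefined when all you have is an ordering.

```python
def satisfice(options, utility, threshold):
    """Take the first option whose utility clears the threshold."""
    for option in options:
        if utility(option) >= threshold:
            return option
    return None  # nothing was good enough; keep searching or settle later

# Hypothetical apartments and utility values, in the order we happen to see them.
apartments = ["A", "B", "C", "D"]
utility = {"A": 0.3, "B": 0.7, "C": 0.9, "D": 0.5}.get

print(satisfice(apartments, utility, threshold=0.6))   # "B": good enough, stop looking
# With only the ordering C > B > D > A, a threshold of 0.6 has no meaning:
# there is no way to say where "good enough" lies without examining everything.
```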

Even if we could decide how many options there are in any given choice, in order for this to form a complete guide for human behavior we would need an enormous amount of information. Suppose there are 10 different items I could have or not have; then there are 10! = 3.6 million possible preference orderings. If there were 100 items, there would be 100! = 9e157 possible orderings. It won’t do simply to decide on each item whether I’d like to have it or not. Some things are complements: I prefer to have shoes, but I probably prefer to have $100 and no shoes at all rather than $50 and just a left shoe. Other things are substitutes: I generally prefer eating either a bowl of spaghetti or a pizza, rather than both at the same time. No, the combinations matter, and that means that we have an exponentially increasing decision space every time we add a new option. If there really is no more structure to preferences than this, we have an absurd computational task to make even the most basic decisions.
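
The counting itself is easy to verify; the bundle counts in the last two lines are my own addition, just to make the “exponentially increasing decision space” point explicit.

```python
import math

# Orderings of n items, as computed above:
print(math.factorial(10))    # 3,628,800 -- the "3.6 million"
print(math.factorial(100))   # an integer on the order of 9e157

# Since combinations matter, each new item also doubles the number of
# possible bundles you would, in principle, need to rank:
print(2 ** 10)               # 1,024 bundles of 10 items
print(2 ** 11)               # 2,048 -- one more item, twice the decision space
```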

This is in fact most likely why we have happiness in the first place. Happiness did not emerge from a vacuum; it evolved by natural selection. Why make an organism have feelings? Why make it care about things? Wouldn’t it be easier to just hard-code a list of decisions it should make? No, on the contrary, it would be exponentially more complex. Utility exists precisely because it is more efficient for an organism to like or dislike things by certain amounts rather than trying to define arbitrary preference orderings. Adding a new item means assigning it an emotional value and then slotting it in, instead of comparing it to every single other possibility.

To illustrate this: I like Coke more than I like Pepsi. (Let the flame wars begin?) I also like getting massages more than I like being stabbed. (I imagine less controversy on this point.) But the difference in my mind between massages and stabbings is an awful lot larger than the difference between Coke and Pepsi. Yet according to preference theory (“ordinal utility”), that difference is not meaningful; instead I have to say that I prefer the pair “drink Pepsi and get a massage” to the pair “drink Coke and get stabbed”. There’s no such thing as “a little better” or “a lot worse”; there is only what I prefer over what I do not prefer, and since these can be assigned arbitrarily there is an impossible computational task before me to make even the most basic decisions.

Real utility also allows you to make decisions under risk, to decide when it’s worth taking a chance. Is a 50% chance of $100 worth giving up a guaranteed $50? Probably. Is a 50% chance of $10 million worth giving up a guaranteed $5 million? Not for me. Maybe for Bill Gates. How do I make that decision? It’s not about what I prefer—I do in fact prefer $10 million to $5 million. It’s about how much difference there is in terms of my real happiness—$5 million is almost as good as $10 million, but $100 is a lot better than $50. My marginal utility of wealth—as I discussed in my post on progressive taxation—is a lot steeper at $50 than it is at $5 million. There’s actually a way to use revealed preferences under risk to estimate true (“cardinal”) utility, developed by Von Neumann and Morgenstern. In fact they proved a remarkably strong theorem: If you don’t have a cardinal utility function that you’re maximizing, you can’t make rational decisions under risk. (In fact many of our risk decisions clearly aren’t rational, because we aren’t actually maximizing an expected utility; what we’re actually doing is something more like cumulative prospect theory, the leading cognitive economic theory of risk decisions. We overrespond to extreme but improbable events—like lightning strikes and terrorist attacks—and underrespond to moderate but probable events—like heart attacks and car crashes. We play the lottery but still buy health insurance. We fear Ebola—which has never killed a single American—but not influenza—which kills 10,000 Americans every year.)
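
To see how a cardinal utility function settles those two gambles, here is a sketch using logarithmic utility of wealth and a baseline wealth of $20,000; both the functional form and the baseline are my own illustrative assumptions, not anything established above.

```python
import math

def certainty_equivalent(wealth, prize, p):
    """Sure gain with the same expected log-utility as winning `prize` with probability p."""
    expected_utility = p * math.log(wealth + prize) + (1 - p) * math.log(wealth)
    return math.exp(expected_utility) - wealth

wealth = 20_000   # assumed baseline wealth, purely for illustration

print(certainty_equivalent(wealth, 100, 0.5))         # ~49.94: nearly indifferent to a sure $50
print(certainty_equivalent(wealth, 10_000_000, 0.5))  # ~427,700: far below a sure $5 million
```

With stakes that are small relative to your wealth, the gamble and the sure thing are nearly equivalent; with stakes that dwarf your wealth, the concavity of utility makes the sure $5 million overwhelmingly better.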

A lot of economists would argue that it’s “unscientific”—Kenneth Arrow said “impossible”—to assign this sort of cardinal distance between our choices. But assigning distances between preferences is something we do all the time. Amazon.com lets us vote on a 5-star scale, and very few people send in error reports saying that cardinal utility is meaningless and only preference orderings exist. In 2000 I would have said “I like Gore best, Nader is almost as good, and Bush is pretty awful; but of course they’re all a lot better than the Fascist Party.” If we had simply been able to express those feelings on the 2000 ballot according to a range vote, either Nader would have won and the United States would now have a three-party system (and possibly a nationalized banking system!), or Gore would have won and we would be a decade ahead of where we currently are in preventing and mitigating global warming. Either one of these things would benefit millions of people.

This is extremely important because of another thing that Arrow said was “impossible”—namely, “Arrow’s Impossibility Theorem”. It should be called Arrow’s Range Voting Theorem, because the theorem only applies to ranked voting systems: simply by working with a well-defined utility and letting people cast range votes according to that utility, we can fulfill all the requirements that are supposedly “impossible”. The theorem doesn’t say—as it is commonly paraphrased—that there is no fair voting system; if anything, it points to range voting as the fair voting system that escapes it. A better claim is that there is no perfect voting system, which is true if you mean that there is no system in which voting honestly is always your best strategy. The Myerson-Satterthwaite Theorem is then the proper theorem to use; if you could design a voting system that forced everyone to reveal their true beliefs, you could design a market auction that forced everyone to reveal their true valuations. But the least expressive way to vote in a range vote is to pick your favorite and give them 100% while giving everyone else 0%—which is identical to our current plurality vote system. The worst-case scenario in range voting is our current system.

But the fact that utility exists and matters, unfortunately doesn’t tell us how to measure it. The current state-of-the-art in economics is what’s called “willingness-to-pay”, where we arrange (or observe) decisions people make involving money and try to assign dollar values to each of their choices. This is how you get disturbing calculations like “the lives lost due to air pollution are worth $10.2 billion.”

Why are these calculations disturbing? Because they have the whole thing backwards—people aren’t valuable because they are worth money; money is valuable because it helps people. It’s also really bizarre because it has to be adjusted for inflation. Finally—and this is the point that far too few people appreciate—the value of a dollar is not constant across people. Because different people have different marginal utilities of wealth, something that I would only be willing to pay $1000 for, Bill Gates might be willing to pay $1 million for—and a child in Africa might only be willing to pay $10, because that is all he has to spend. This makes “willingness-to-pay” a basically meaningless concept unless we keep track of whose wealth is being spent.

Utility, on the other hand, might differ between people—but, at least in principle, it can still be added up between them on the same scale. The problem is that “in principle” part: How do we actually measure it?

So far, the best I’ve come up with is to borrow from public health policy and use the QALY, or quality-adjusted life year. By asking people macabre questions like “What is the maximum number of years of your life you would give up to not have a severe migraine every day?” (I’d say about 20—that’s where I feel ambivalent. At 10 I definitely would; at 30 I definitely wouldn’t.) or “What chance of total paralysis would you take in order to avoid being paralyzed from the waist down?” (I’d say about 20%.) we assign utility values: 80 years of migraines is worth giving up 20 years to avoid, so chronic migraine carries a quality-of-life factor of 0.75. Total paralysis is 5 times as bad as paralysis from the waist down, so if waist-down paralysis has a quality-of-life factor of 0.90, then total paralysis has a factor of 0.50.
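
Worked out explicitly, using the answers I gave above:

```python
# Time trade-off: giving up 20 of 80 remaining years to escape daily migraines
# means 60 healthy years are judged equal to 80 migraine years.
years_remaining = 80
years_given_up = 20
migraine_factor = (years_remaining - years_given_up) / years_remaining
print(migraine_factor)         # 0.75

# Risk trade-off: accepting a 20% chance of total paralysis to avoid waist-down
# paralysis means total paralysis is 1 / 0.2 = 5 times as bad.
waist_down_factor = 0.90
accepted_risk = 0.20
total_paralysis_factor = 1 - (1 - waist_down_factor) / accepted_risk
print(total_paralysis_factor)  # 0.5 (up to floating-point rounding)
```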

You can probably already see that there are lots of problems: What if people don’t agree? What if due to framing effects the same person gives different answers to slightly different phrasing? Some conditions will directly bias our judgments—depression being the obvious example. How many years of your life would you give up to not be depressed? Suicide means some people say all of them. How well do we really know our preferences on these sorts of decisions, given that most of them are decisions we will never have to make? It’s difficult enough to make the actual decisions in our lives, let alone hypothetical decisions we’ve never encountered.

Another problem is often suggested as well: How do we apply this methodology outside questions of health? Does it really make sense to ask you how many years of your life drinking Coke or driving your car is worth?

Well, actually… it had better, because you make that sort of decision all the time. You drive instead of staying home, because you value where you’re going more than the risk of dying in a car accident. You drive instead of walking because getting there on time is worth that additional risk as well. You eat foods you know aren’t good for you because you think the taste is worth the cost. Indeed, most of us aren’t making most of these decisions very well—maybe you shouldn’t actually drive or drink that Coke. But in order to know that, we need to know how many years of your life a Coke is worth.

As a very rough estimate, I figure you can convert from willingness-to-pay to QALY by dividing by your annual consumption spending. Say you spend about $20,000 a year—pretty typical for a First World individual. Then $1 is worth about 50 microQALY, or about 26 quality-adjusted life-minutes. Now suppose you are in Third World poverty; your consumption might be only $200 a year, so $1 becomes worth 5 milliQALY, or 1.8 quality-adjusted life-days. The very richest individuals might spend as much as $10 million a year on consumption, so $1 to them is only worth 100 nanoQALY, or about 3 quality-adjusted life-seconds.
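
The conversion itself is just division; here is a sketch with the three consumption levels used above (the strong assumptions behind it are spelled out in the next paragraph).

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

def qaly_per_dollar(annual_consumption):
    """One dollar is worth (1 / annual consumption) QALY under this rough conversion."""
    return 1 / annual_consumption

for label, spending in [("typical First World", 20_000),
                        ("Third World poverty", 200),
                        ("very richest", 10_000_000)]:
    q = qaly_per_dollar(spending)
    print(f"{label}: $1 = {q:.1e} QALY = {q * MINUTES_PER_YEAR:.3g} quality-adjusted minutes")

# And the hourly figure used in the next paragraph: annual consumption spread over a year of hours.
print(20_000 / (365.25 * 24))   # about $2.28 per hour
```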

That’s an extremely rough estimate, of course; it assumes you are in perfect health, that all your time is equally valuable, and that all your purchasing decisions are optimized at the margin. Don’t take it too literally: Based on the above estimate, an hour to you is worth about $2.30, so it would be worth your while to work for even $3 an hour. Here’s a simple correction we should probably make: If only a third of your time is really usable for work, you should expect at least $6.90 an hour—and hey, that’s a little less than the US minimum wage. So I think we’re in the right order of magnitude, but the details have a long way to go.

So let’s hear it, readers: How do you think we can best measure happiness?