# When maximizing utility doesn’t

Jun 4 JDN 2460100

Expected utility theory behaves quite strangely when you consider questions involving mortality.

Nick Beckstead and Teruji Thomas recently published a paper on this: All well-defined utility functions are either reckless in that they make you take crazy risks, or timid in that they tell you not to take even very small risks. It’s starting to make me wonder if utility theory is even the right way to make decisions after all.

Consider a game of Russian roulette where the prize is \$1 million. The revolver has 6 chambers, 3 with a bullet. So that’s a 1/2 chance of \$1 million, and a 1/2 chance of dying. Should you play?

I think it’s probably a bad idea to play. But the prize does matter; if it were \$100 million, or \$1 billion, maybe you should play after all. And if it were \$10,000, you clearly shouldn’t.

And lest you think that no chance of dying, however small, could ever be worth accepting for money, consider this: Do you drive a car? Do you cross the street? Do you do anything that could ever have any risk of shortening your lifespan in exchange for some other gain? I don’t see how you could live a remotely normal life without doing so. It might be a very small risk, but it’s still there.

This raises the question: Suppose we have some utility function over wealth; ln(x) is a quite plausible one. What utility should we assign to dying?

The fact that the prize matters means that we can’t assign death a utility of negative infinity. It must be some finite value.

But suppose we choose some finite value −V (so V is positive) for the utility of dying. Then there is some prize large enough to make you willing to play: with log utility and starting wealth w, you should play whenever (1/2) ln(w + x) − (1/2) V ≥ ln(w), and for any finite V, a big enough prize x satisfies that inequality.
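
The break-even prize implied by that inequality is easy to compute. Here is a minimal sketch in Python; the starting wealth of \$10,000 and death disutility V = 20 are hypothetical numbers I chose for illustration, not values from the argument above:

```python
import math

def breakeven_prize(w0: float, V: float, p_death: float = 0.5) -> float:
    """Smallest prize x making play at least as good as not playing:
    (1 - p_death) * ln(w0 + x) - p_death * V >= ln(w0)."""
    # Solve (1 - p) * ln(w0 + x) - p * V = ln(w0) for x.
    target = (math.log(w0) + p_death * V) / (1 - p_death)
    return math.exp(target) - w0

# Example: starting wealth $10,000 and death disutility V = 20
# (both numbers hypothetical, chosen only for illustration).
x = breakeven_prize(10_000, 20)
```

The point survives any particular choice of V: the function always returns a finite prize, which is exactly why death can’t have finite disutility without implying some acceptable price.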

Now, suppose that you have the chance to play this game over and over again. Your marginal utility of wealth will change each time you win, so we may need to increase the prize to keep you playing; but we could do that. The prizes could keep scaling up as needed to make you willing to play. So then, you will keep playing, over and over—and then, sooner or later, you’ll die. So, at each step you maximized utility—but at the end, you didn’t get any utility.

Well, at that point your heirs will be rich, right? So maybe you’re actually okay with that. Maybe there is some amount of money (\$1 billion?) that you’d be willing to die in order to ensure your heirs have.

But what if you don’t have any heirs? Or, what if we consider making such a decision as a civilization? What if death means not only the destruction of you, but also the destruction of everything you care about?

As a civilization, are there choices before us that would result in some chance of a glorious, wonderful future, but also some chance of total annihilation? I think it’s pretty clear that there are. Nuclear technology, biotechnology, artificial intelligence. For about the last century, humanity has been at a unique epoch: We are being forced to make this kind of existential decision, to face this kind of existential risk.

It’s not that we were immune to being wiped out before; an asteroid could have taken us out at any time (as happened to the dinosaurs), and a volcanic eruption nearly did. But this is the first time in humanity’s existence that we have had the power to destroy ourselves. This is the first time we have a decision to make about it.

One possible answer would be to say we should never be willing to take any kind of existential risk. Unlike the case of an individual, when we are speaking about an entire civilization, it no longer seems obvious that we shouldn’t set the utility of death at negative infinity. But if we really did this, it would require shutting down whole industries—definitely halting all research in AI and biotechnology, probably disarming all nuclear weapons and destroying all their blueprints, and quite possibly even shutting down the coal and oil industries. It would be an utterly radical change, and it would require bearing great costs.

On the other hand, if we should decide that it is sometimes worth the risk, we will need to know when it is worth the risk. We currently don’t know that.

Even worse, we will need some mechanism for ensuring that we don’t take the risk when it isn’t worth it. And we have nothing like such a mechanism. In fact, most of our process of research in AI and biotechnology is widely dispersed, with no central governing authority and regulations that are inconsistent between countries. I think it’s quite apparent that right now, there are research projects going on somewhere in the world that aren’t worth the existential risk they pose for humanity—but the people doing them are convinced that they are worth it because they so greatly advance their national interest—or simply because they could be so very profitable.

In other words, humanity finally has the power to make a decision about our survival, and we’re not doing it. We aren’t making a decision at all. We’re letting that responsibility fall upon more or less randomly-chosen individuals in government and corporate labs around the world. We may be careening toward an abyss, and we don’t even know who has the steering wheel.

# Vote Your Dollars

May 28 JDN 2460093

It’s no secret that Americans don’t like to pay taxes. It’s almost a founding principle of our country, really, going all the way back to the Boston Tea Party. This is likely part of why the US has one of the lowest tax-to-GDP ratios in the First World; our taxes are barely half what they pay in Scandinavia. And this in turn surely contributes to our ongoing budget issues and our stingy social welfare spending. (Speaking of budget issues: As of this writing, the debt ceiling debacle is still unresolved.)

Why don’t Americans like to pay taxes? Why does no one really like to pay taxes (though some seem more willing than others)?

It surely has something to do with the fact that taxes are so coercive: You have to pay them, you get no choice. And you also have very little choice as to how that money is used; yes, you can vote for politicians who will in theory at some point enact budgets that might possibly reflect the priorities they expressed in their campaigns—but the actual budget invariably ends up quite far removed from the campaign promises you could vote based on.

What if we could give you more choice? We can’t let people choose how much to pay—then most people would choose to pay less and we’d be in even more trouble. (If you want to pay more than you’re required to, the IRS will actually let you right now. You can just refuse your refund.) But perhaps we could let people choose where the money goes?

I call this program Vote Your Dollars. I would initially limit it to a small fraction of the budget, tied to a tax increase: Say, raise taxes enough to increase revenue by 5% and use that 5% for the program.

Under Vote Your Dollars, on your tax return, you are given a survey, asking you how you want to divide up your additional money toward various categories. I think they should be fairly broad categories, such as ‘healthcare’, ‘social security’, ‘anti-poverty programs’, ‘defense’, ‘foreign aid’. If we make them too specific, it would be more work for the voters and also more likely to lead to foolish allocations. We want them to basically reflect a voter’s priorities, rather than ask them to make detailed economic management decisions. Most voters are not qualified to properly allocate a budget; the goal here is to get people to weigh how much they care about different programs.
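
The aggregation step could be as simple as normalizing each ballot and averaging. This is my own sketch of how it might work, not a spec from the proposal; the category names come from the examples above, but the function name and averaging rule are hypothetical:

```python
from collections import defaultdict

CATEGORIES = ["healthcare", "social security", "anti-poverty", "defense", "foreign aid"]

def aggregate_allocations(ballots):
    """Average each voter's allocation shares, normalizing each ballot to 1."""
    totals = defaultdict(float)
    valid = 0
    for ballot in ballots:
        total = sum(ballot.values())
        if total <= 0:
            continue  # ignore blank ballots
        valid += 1
        for category, amount in ballot.items():
            totals[category] += amount / total
    return {c: totals[c] / valid for c in CATEGORIES}

ballots = [
    {"healthcare": 50, "defense": 25, "anti-poverty": 25},
    {"healthcare": 30, "social security": 40, "foreign aid": 30},
]
shares = aggregate_allocations(ballots)  # healthcare: (0.5 + 0.3) / 2 = 0.4
```

Normalizing per ballot means every voter gets equal weight regardless of how much tax they paid; a dollar-weighted version would be a one-line change, and choosing between the two is itself a political decision.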

As only a small portion of the budget, Vote Your Dollars would initially have very little real fiscal impact. Money is fungible, so any funds that were expected to go somewhere else than where voters put them could easily be reallocated as needed. But I suspect that most voters would fail to appreciate this effect, and thus actually feel like they have more control than they really do. (If voters understood fungibility and inframarginal transfers, they’d never have supported food stamps over just giving poor people cash.)

Moreover, it would still provide useful information, namely: What happens when voters are given this power? Do they make decisions that seem to make sense and reflect their interests and beliefs? Does the resulting budget actually seem like one that could be viable? Could it even be better than what we currently have in some ways?

I suspect that the result would be better than most economists and political scientists imagine. There seems to be a general sense that voters are too foolish or apathetic to usefully participate in politics, which of course would raise the very big question: Why does democracy work?

I don’t think that most voters would choose a perfect budget; indeed, I already said I wouldn’t trust them with the fine details of how to allocate the funds. But I do think most people have at least some reasonable idea of how important they think healthcare is relative to defense, and it would be good to at least gather that information in a more direct way.

If it goes well and Vote Your Dollars seems to result in reasonable budgets even for that extra 5%, we could start expanding it to a larger portion of the overall budget. Try 10% for the next election, then 15% for the next. There should always be some part that remains outside direct voter control, because voters would almost certainly underspend on certain categories (such as administration and national debt payments) and likely overspend on others.

This would allow us to increase taxes—which we clearly must do, because we need to improve government services, but we don’t want to go further into debt—while giving voters more choice, and thus making taxes feel less coercive. Being forced to pay a certain amount each year might not sting as much if you get to say where a significant portion of that money goes.

To give voters even more control over their money, I think I would also include a provision whereby you can deduct the full amount of your charitable contributions to certain high-impact charities (we would need to come up with a good list, but clear examples include UNICEF, Oxfam, and GiveWell) from your tax payment. Currently, you deduct charitable contributions from your income, which means you don’t pay taxes on those donations; but you still end up with less money after donating than you did before. If we let you deduct the full amount, then you would have the same amount after donating, and effectively the government would pay the full cost of your donation. Presumably this would lead to people donating a great deal; this might hurt tax revenues, but its overall positive impact on the world would be so large that it is obviously worth it. By the time we have given enough to UNICEF to meaningfully impact the US federal budget, we have ended world hunger.
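
The difference between the two tax treatments is simple arithmetic. A sketch, with illustrative numbers (the \$1,000 donation and 24% marginal rate are hypothetical):

```python
def cost_with_income_deduction(donation: float, marginal_rate: float) -> float:
    """Current treatment: the donation reduces taxable income, so you
    save donation * marginal_rate in tax but pay the rest yourself."""
    return donation * (1 - marginal_rate)

def cost_with_full_credit(donation: float, tax_owed: float) -> float:
    """Proposed treatment: subtract the full donation from your tax bill;
    up to the amount of tax owed, the donation costs you nothing."""
    return max(0.0, donation - tax_owed)

# A $1,000 donation at a 24% marginal rate (illustrative numbers):
cost_now = cost_with_income_deduction(1_000, 0.24)   # $760 out of pocket
cost_proposed = cost_with_full_credit(1_000, 5_000)  # $0 out of pocket
```

Under the current deduction you are still out most of the donation; under a full credit the government effectively pays all of it, which is the whole point of the proposal.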

Of course, it’s very unlikely that anything like Vote Your Dollars would ever be implemented. There are already ways we could make paying taxes less painful that we haven’t done—such as sending you a bill, as they do in Denmark, rather than making you file a tax form. And we could already increase revenue with very little real cost by simply expanding the IRS and auditing rich people more. These simple, obvious reforms have been repeatedly obstructed by powerful lobbies, who personally benefit from the current system even though it’s obviously a bad system. I guess I can’t think of anyone in particular who would want to lobby against Vote Your Dollars, but I feel like Republicans might just because they want taxes to hurt as much as possible so that they have an excuse to cut spending.

But still, I thought I’d put the idea out there.

# We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

> The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.
>
> E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we will soon have the power to modify ourselves or we’ll all be replaced by machines?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien from those of our forebears, and we have reason to suspect that our descendants’ values will be no more alien to us than ours are to theirs.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the world catch up to what the best of us already believe and strive for—hardly something to fear. The values I believe in are surely not yet the ones we act upon as a civilization, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as a collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that it becomes a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come where stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

# Why does democracy work?

May 14 JDN 2460079

A review of Democracy for Realists

I don’t think it can be seriously doubted that democracy does, in fact, work. Not perfectly, by any means; but the evidence is absolutely overwhelming that more democratic societies are better than more authoritarian societies by just about any measure you could care to use.

When I first started reading Democracy for Realists and saw their scathing, at times frothing criticism of mainstream ideas of democracy, I thought they were going to try to disagree with that; but in the end they don’t. Achen and Bartels do agree that democracy works; they simply think that why and how it works is radically different from what most people think.

It is, however, a very long-winded book, and in dire need of better editing. Most of the middle section of the book is taken up by a deluge of empirical analysis, most of which amounts to over-interpreting the highly ambiguous results of underpowered linear regressions on extremely noisy data. The sheer quantity of them seems intended to overwhelm any realization that no particular one is especially compelling. But a hundred weak arguments don’t add up to a single strong one.

To their credit, the authors often include the actual scatter plots; but when you look at those scatter plots, you find yourself wondering how anyone could be so convinced these effects are real and important. Many of them look more like fodder for new constellations than evidence of real effects.

Their econometric techniques are a bit dubious, as well; at one point they said they “removed outliers”, but then the examples they gave as “outliers” were simply the observations most distant from their regression line, rather than from the rest of the data. Removing the points furthest from your regression line will always—always—make your regression seem stronger. But that’s not what outliers are. Other times, they add weird controls or exclude parts of the sample for dubious reasons, and I get the impression that these are the cherry-picked results of a much larger exploration. (Why in the world would you exclude Catholics from a study of abortion attitudes? And this study on shark attacks seems awfully specific….) And of course if you run 20 regressions at random, you can expect that at least 1 of them will probably show up with p < 0.05. I think they are mainly just following the norms of their discipline—but those norms are quite questionable.
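
That last point is easy to verify: under the null, each test has a 5% false-positive rate, so the chance that at least one of 20 independent tests clears p < 0.05 is 1 − 0.95²⁰, about 64%. A quick check in Python (the Monte Carlo part just draws uniform p-values, which is what null p-values are):

```python
import random

def false_positive_chance(n_tests: int, alpha: float = 0.05) -> float:
    """Chance that at least one of n independent null tests clears alpha."""
    return 1 - (1 - alpha) ** n_tests

analytic = false_positive_chance(20)  # 1 - 0.95**20, about 0.64

# Monte Carlo check: under the null, each test's p-value is uniform on [0, 1].
random.seed(0)
trials = 100_000
hits = sum(
    any(random.random() < 0.05 for _ in range(20)) for _ in range(trials)
)
simulated = hits / trials  # should be close to the analytic value
```

This is the simplest version of the multiple-comparisons problem; real regressions on the same data are correlated, which changes the exact number but not the basic worry.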

They don’t ever get into much detail as to what sort of practical institutional changes they would recommend, so it’s hard to know whether I would agree with those. Some of their suggestions, such as more stringent rules on campaign spending, I largely agree with. Others, such as their opposition to popular referenda and recommendation for longer term limits, I have more mixed feelings about. But none seem totally ridiculous or even particularly radical, and they really don’t offer much detail about any of them. I thought they were going to tell me that appointment of judges is better than election (a position many experts agree with), or that the Electoral College is a good system (which far fewer experts would assent to, at least since George W. Bush and Donald Trump). In fact they didn’t do that; they remain eerily silent on substantive questions like this.

Honestly, what little they have to say about institutional policy feels a bit tacked on at the end, as if they suddenly realized that they ought to say something useful rather than just spend the whole time tearing down another theory.

In fact, I came to wonder if they really were tearing down anyone’s actual theory, or if this whole book was really just battering a strawman. Does anyone really think that voters are completely rational? At one point they speak of an image of the ‘sovereign omnicompetent voter’; is that something anyone really believes in?

It does seem like many people believe in making government more responsive to the people, whereas Achen and Bartels seem to have the rather distinct goal of making government make better decisions. They were able to find at least a few examples—though I know not how far and wide they had to search—where it seemed like more popular control resulted in worse outcomes, such as water fluoridation and funding for fire departments. So maybe the real substantive disagreement here is over whether more or less direct democracy is a good idea. And that is indeed a reasonable question. But one need not believe that voters are superhuman geniuses to think that referenda are better than legislation. Simply showing that voters are limited in their capacity and bound to group identity is not enough to answer that question.

In fact, I think that Achen and Bartels seriously overestimate the irrationality of voters, because they don’t seem to appreciate that group identity is often a good proxy for policy—in fact, they don’t even really seem to see social policy as policy at all. Consider this section (p. 238):

> “In this pre-Hitlerian age it must have seemed to most Jews that there were no crucial issues dividing the major parties” (Fuchs 1956, 63). Yet by 1923, a very substantial majority of Jews had abandoned their Republican loyalties and begun voting for the Democrats. What had changed was not foreign policy, but rather the social status of Jews within one of America’s major political parties. In a very visible way, the Democrats had become fully accepting and incorporating of religious minorities, both Catholics and Jews. The result was a durable Jewish partisan realignment grounded in “ethnic solidarity”, in Gamm’s characterization.

Gee, I wonder why Jews would suddenly care a great deal which party was more respectful toward people like them? Okay, the Holocaust hadn’t happened yet, but anti-Semitism is very old indeed, and it was visibly creeping upward during that era. And just in general, if one party is clearly more anti-Semitic than the other, why wouldn’t Jews prefer the one that is less hateful toward them? How utterly blinded by privilege do you need to be to not see that this is an important policy difference?

Perhaps because they are both upper-middle-class straight White cisgender men (I would also venture a guess: nominally but not devoutly Protestant), Achen and Bartels seem to have no concept that social policy directly affects people of minority identity, that knowing that one party accepts people like you and the other doesn’t is a damn good reason to prefer one over the other. This is not a game where we are rooting for our home team. This directly affects our lives.

I know quite a few transgender people, and not a single one is a Republican. It’s not because all trans people hate low taxes. It’s because the Republican Party has declared war on trans people.

This may also lead to trans people being more left-wing generally, as once you’re in a group you tend to absorb some views from others in that group (and, I’ll admit, Marxists and anarcho-communists seem overrepresented among LGBT people). But I absolutely know some LGBT people who would like to vote conservative for economic policy reasons, but realize they can’t, because it means voting for bigots who hate them and want to actively discriminate against them. There is nothing irrational or even particularly surprising about this choice. It would take a very powerful overriding reason for anyone to want to vote for someone who publicly announces hatred toward them.

Indeed, for me the really baffling thing is that there are political parties that publicly announce hatred toward particular groups. It seems like a really weird strategy for winning elections. That is the thing that needs to be explained here; why isn’t inclusiveness—at least a smarmy lip-service toward inclusiveness, like ‘Diversity, Equity, and Inclusion’ offices at universities—the default behavior of all successful politicians? Why don’t they all hug a Latina trans woman after kissing a baby and taking a selfie with the giant butter cow? Why is not being an obvious bigot considered a left-wing position?

Since it obviously is the case that many voters don’t want this hatred (at the very least, its targets!), in order for it not to damage electoral chances, it must be that some other voters do want this hatred. Perhaps they themselves define their own identity in opposition to other people’s identities. They certainly talk that way a lot: We hear White people fearing ‘replacement’ by shifting racial demographics, when no sane forecaster thinks that European haplotypes are in any danger of disappearing any time soon. The central argument against gay marriage was always that it would somehow destroy straight marriage, by some mechanism never explained.

Indeed, perhaps it is this very blindness toward social policy that makes Achen and Bartels unable to see the benefits of more direct democracy. When you are laser-focused on economic policy, as they are, then it seems to you as though policy questions are mainly technical matters of fact, and thus what we need are qualified experts. (Though even then, it is not purely a matter of fact whether we should care more about inequality than growth, or more about unemployment than inflation.)

But once you include social policy, you see that politics often involves very real, direct struggles between conflicting interests and differing moral views, and that by the time you’ve decided which view is the correct one, you already have your answer for what must be done. There is no technical question of gay marriage; there is only a moral one. We don’t need expertise on such questions; we need representation. (Then again, it’s worth noting that courts have sometimes advanced rights more effectively than direct democratic votes; so having your interests represented isn’t as simple as getting an equal vote.)

Achen and Bartels even include a model in the appendix where politicians are modeled as either varying in competence or controlled by incentives; never once does it consider that they might differ in whose interests they represent. Yet I don’t vote for a particular politician just because I think they are more intelligent, or as part of some kind of deterrence mechanism to keep them from misbehaving (I certainly hope the courts do a better job of that!); I vote for them because I think they represent the goals and interests I care about. We aren’t asking who is smarter, we are asking who is on our side.

The central question that I think the book raises is one that the authors don’t seem to have much to offer on: If voters are so irrational, why does democracy work? I do think there is strong evidence that voters are irrational, though maybe not as irrational as Achen and Bartels seem to think. Honestly, I don’t see how anyone can watch Donald Trump get elected President of the United States and not think that voters are irrational. (The book was written before that; apparently there’s a new edition with a preface about Trump, but my copy doesn’t have that.) But it isn’t at all obvious to me what to do with that information, because even if so-called elites are in fact more competent than average citizens—which may or may not be true—the fact remains that their interests are never completely aligned. Thus far, representative democracy of one stripe or another seems to be the best mechanism we have for finding people who have sufficient competence while also keeping them on a short enough leash.

And perhaps that’s why democracy works as well as it does; it gives our leaders enough autonomy to let them generally advance their goals, but also places limits on how badly misaligned our leaders’ goals can be from our own.

# Reckoning costs in money distorts them

May 7 JDN 2460072

Consider for a moment what it means when an economic news article reports “rising labor costs”. What are they actually saying?

They’re saying that wages are rising—perhaps in some industry, perhaps in the economy as a whole. But this is not a cost. It’s a price. As I’ve written about before, the two are fundamentally distinct.

The cost of labor is measured in effort, toil, and time. It’s the pain of having to work instead of whatever else you’d like to do with your time.

The price of labor is a monetary amount, which is delivered in a transaction.

This may seem perfectly obvious, but it has important and oft-neglected implications. A cost, once paid, is gone. That value has been destroyed. We hope that it was worth it for some benefit we gained. A price, when paid, is simply transferred: One person had that money before, now someone else has it. Nothing was gained or lost.

So in fact when reports say that “labor costs have risen”, what they are really saying is that income is being transferred from owners to workers without any change in real value taking place. They are framing as a loss what is fundamentally a zero-sum redistribution.

In fact, it is disturbingly common to see a fundamentally good redistribution of income framed in the press as a bad outcome because of its expression as “costs”; the “cost” of chocolate is feared to go up if we insist upon enforcing bans on forced labor—when in fact it is only the price that goes up, and the cost actually goes down: chocolate would no longer include complicity in an atrocity. The real suffering of making chocolate would be thereby reduced, not increased. Even when they aren’t literally enslaved, those workers are astonishingly poor, and giving them even a few more cents per hour would make a real difference in their lives. But God forbid we pay a few cents more for a candy bar!

If labor costs were to rise, that would mean that work had suddenly gotten harder, or more painful; or else, that some outside circumstance had made it more difficult to work. Having a child increases your labor costs—you now have the opportunity cost of not caring for the child. COVID increased the cost of labor, by making it suddenly dangerous just to go outside in public. That could also increase prices—you may demand a higher wage, and people do seem to have demanded higher wages after COVID. But these are two separate effects, and you can have one without the other. In fact, women typically see wage stagnation or even reduction after having kids (but men largely don’t), despite their real opportunity cost of labor having obviously greatly increased.

On an individual level, it’s not such a big mistake to equate price and cost. If you are buying something, its cost to you basically just is its price, plus a little bit of transaction cost for actually finding and buying it. But on a societal level, it makes an enormous difference. It distorts our policy priorities and can even lead to actively trying to suppress things that are beneficial—such as rising wages.

This false equivalence between price and cost seems to be at least as common among economists as it is among laypeople. Economists will often justify it on the grounds that in an ideal perfect competitive market the two would be in some sense equated. But of course we don’t live in that ideal perfect market, and even if we did, they would only be proportional at the margin, not fundamentally equal across the board. It would still be obviously wrong to characterize the total value or cost of work by the price paid for it; only the last unit of effort would be priced so that marginal value equals price equals marginal cost. The first 39 hours of your work would cost you less than what you were paid, and produce more than you were paid; only that 40th hour would set the three equal.
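The marginal point is easy to make concrete. Here is a toy sketch in Python (the \$20 wage and the linear effort-cost schedule are my own illustrative assumptions, not anything from the text): price equals marginal cost only at the 40th hour, while every earlier hour costs the worker less than it pays.

```python
# Toy numbers: a worker paid a flat $20/hour whose marginal cost of effort
# rises linearly with fatigue, hitting exactly $20 at the 40th hour.
WAGE = 20.0

def marginal_cost(hour):
    """Effort cost of the hour-th hour worked, in dollars ($0.50 per hour of cumulative fatigue)."""
    return 0.5 * hour

total_price = WAGE * 40                                  # what the week is *priced* at
total_cost = sum(marginal_cost(h) for h in range(1, 41)) # what it *costs* in effort

print(total_price)               # 800.0
print(total_cost)                # 410.0
print(total_price - total_cost)  # 390.0 of worker surplus, hidden if you equate price and cost
```

Only at the margin does `marginal_cost(40) == WAGE`; summing over all hours, the price paid badly overstates the true cost.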

Once you account for all the various market distortions in the world, there’s no particular relationship between what something costs—in terms of real effort and suffering—and its price—in monetary terms. Things can be expensive and easy, or cheap and awful. In fact, they often seem to be; for some reason, there seems to be a pattern where the most terrible, miserable jobs (e.g. coal mining) actually pay the least, and the easiest, most pleasant jobs (e.g. stock trading) pay the most. Some jobs that benefit society pay well (e.g. doctors) and others pay terribly or not at all (e.g. climate activists). Some actions that harm the world get punished (e.g. armed robbery) and others get rewarded with riches (e.g. oil drilling). In the real world, whether a job is good or bad and whether it is paid well or poorly seem to be almost unrelated.

In fact, sometimes they seem even negatively related, where we often feel tempted to “sell out” and do something destructive in order to get higher pay. This is likely due to Berkson’s paradox: If people are willing to do jobs if they are either high-paying or beneficial to humanity, then we should expect that, on average, most of the high-paying jobs people do won’t be beneficial to humanity. Even if there were inherently no correlation or a small positive one, people’s refusal to do harmful low-paying work removes those jobs from our sample and results in a negative correlation in what remains.
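The selection effect behind Berkson’s paradox is easy to see in a small simulation (the score distributions and the “willing to do the job” threshold here are illustrative assumptions): start with pay and benefit completely unrelated, discard the jobs nobody will take, and a negative correlation appears in what remains.

```python
import random

random.seed(0)

# Draw hypothetical jobs with *independent* "pay" and "benefit to humanity"
# scores, then keep only the jobs people are willing to do: those that are
# either high-paying or beneficial (here, above average on at least one).
jobs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
taken = [(pay, benefit) for pay, benefit in jobs if pay > 0 or benefit > 0]

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

print(correlation(jobs))   # near zero: pay and benefit start out unrelated
print(correlation(taken))  # clearly negative, purely from the selection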

I think that the best solution, ultimately, is to stop reckoning costs in money entirely. We should reckon them in happiness.

This is of course much more difficult than simply using prices; it’s not easy to say exactly how many QALY are sacrificed in the extraction of cocoa beans or the drilling of offshore oil wells. But if we actually did find a way to count them, I strongly suspect we’d find that it was far more than we ought to be willing to pay.

A very rough approximation, surely flawed but at least a start, would be to simply convert all payments into proportions of their recipient’s income: For full-time wages, this would result in basically everyone being counted the same, as 1 hour of work if you work 40 hours per week, 50 weeks per year is precisely 0.05% of your annual income. So we could say that whatever is equivalent to your hourly wage constitutes 50 microQALY.

This automatically implies that every time a rich person pays a poor person, QALY increase, while every time a poor person pays a rich person, QALY decrease. This is not an error in the calculation. It is a fact of the universe. We ignore it only at out own peril. All wealth redistributed downward is a benefit, while all wealth redistributed upward is a harm. That benefit may cause some other harm, or that harm may be compensated by some other benefit; but they are still there.

This would also put some things in perspective. When HSBC was fined £70 million for its crimes, that can be compared against its £1.5 billion in net income; if it were an individual, it would have been hurt about 50 milliQALY, which is about what I would feel if I lost \$2000. Of course, it’s not a person, and it’s not clear exactly how this loss was passed through to employees or shareholders; but that should give us at least some sense of how small that loss was for them. They probably felt it… a little.

When Trump was ordered to pay a \$1.3 million settlement, based on his \$2.5 billion net wealth (corresponding to roughly \$125 million in annual investment income), that cost him about 10 milliQALY; for me that would be about \$500.

At the other extreme, if someone goes from making \$1 per day to making \$1.50 per day, that’s a 50% increase in their income—500 milliQALY per year.

For those who have no income at all, this becomes even trickier; for them I think we should probably use their annual consumption, since everyone needs to eat and that costs something, though likely not very much. Or we could try to measure their happiness directly, trying to determine how much it hurts to not eat enough and work all day in sweltering heat.

Properly shifting this whole cultural norm will take a long time. For now, I leave you with this: Any time you see a monetary figure, ask yourself: How much is that worth to them?” The world will seem quite different once you get in the habit of that.

# The mental health crisis in academia

Apr 30 JDN 2460065

Why are so many academics anxious and depressed?

Depression and anxiety are much more prevalent among both students and faculty than they are in the general population. Unsurprisingly, women seem to have it a bit worse than men, and trans people have it worst of all.

Is this the result of systemic failings of the academic system? Before deciding that, one thing we should consider is that very smart people do seem to have a higher risk of depression.

There is a complex relationship between genes linked to depression and genes linked to intelligence, and some evidence that people of especially high IQ are more prone to depression; nearly 27% of Mensa members report mood disorders, compared to 10% of the general population.

(Incidentally, the stereotype of the weird, sickly nerd has a kernel of truth: the correlations between intelligence and autism, ADHD, allergies, and autoimmune disorders are absolutely real—and not at all well understood. It may be a general pattern of neural hyper-activation, not unlike what I posit in my stochastic overload model. The stereotypical nerd wears glasses, and, yes, indeed, myopia is also correlated with intelligence—and this seems to be mostly driven by genetics.)

Most of these figures are at least a few years old. If anything things are only worse now, as COVID triggered a surge in depression for just about everyone, academics included. It remains to be seen how much of this large increase will abate as things gradually return to normal, and how much will continue to have long-term effects—this may depend in part on how well we manage to genuinely restore a normal way of life and how well we can deal with long COVID.

If we assume that academics are a similar population to Mensa members (admittedly a strong assumption), then this could potentially explain why 26% of academic faculty are depressed—but not why nearly 40% of junior faculty are. At the very least, we junior faculty are about 50% more likely to be depressed than would be explained by our intelligence alone. And grad students have it even worse: Nearly 40% of graduate students report anxiety or depression, and nearly 50% of PhD students meet the criteria for depression. At the very least this sounds like a dual effect of being both high in intelligence and low in status—it’s those of us who have very little power or job security in academia who are the most depressed.

This suggests that, yes, there really is something wrong with academia. It may not be entirely the fault of the system—perhaps even a well-designed academic system would result in more depression than the general population because we are genetically predisposed. But it really does seem like there is a substantial environmental contribution that academic institutions bear some responsibility for.

I think the most obvious explanation is constant evaluation: From the time we are students at least up until we (maybe, hopefully, someday) get tenure, academics are constantly being evaluated on our performance. We know that this sort of evaluation contributes to anxiety and depression.

Don’t other jobs evaluate performance? Sure. But not constantly the way that academia does. This is especially obvious as a student, where everything you do is graded; but it largely continues once you are faculty as well.

For most jobs, you are concerned about doing well enough to keep your job or maybe get a raise. But academia has this continuous forward pressure: if you are a grad student or junior faculty, you can’t possibly keep your job; you must either move upward to the next stage or drop out. And academia has become so hyper-competitive that if you want to continue moving upward—and someday getting that tenure—you must publish in top-ranked journals, which have utterly opaque criteria and ever-declining acceptance rates. And since there are so few jobs available compared to the number of applicants, good enough is never good enough; you must be exceptional, or you will fail. Two thirds of PhD graduates seek a career in academia—but only 30% are actually in one three years later. (And honestly, three years is pretty short; there are plenty of cracks left to fall through between that and a genuinely stable tenured faculty position.)

Moreover, our skills are so hyper-specialized that it’s very hard to imagine finding work anywhere else. This grants academic institutions tremendous monopsony power over us, letting them get away with lower pay and worse working conditions. Even with an economics PhD—relatively transferable, all things considered—I find myself wondering who would actually want to hire me outside this ivory tower, and my feeble attempts at actually seeking out such employment have thus far met with no success.

I also find academia painfully isolating. I’m not an especially extraverted person; I tend to score somewhere near the middle range of extraversion (sometimes called an “ambivert”). But I still find myself craving more meaningful contact with my colleagues. We all seem to work in complete isolation from one another, even when sharing the same office (which is awkward for other reasons). There are very few consistent gatherings or good common spaces. And whenever faculty do try to arrange some sort of purely social event, it always seems to involve drinking at a pub and nobody is interested in providing any serious emotional or professional support.

Some of this may be particular to this university, or to the UK; or perhaps it has more to do with being at a certain stage of my career. In any case I didn’t feel nearly so isolated in graduate school; I had other students in my cohort and adjacent cohorts who were going through the same things. But I’ve been here two years now and so far have been unable to establish any similarly supportive relationships with colleagues.

There may be some opportunities I’m not taking advantage of: I’ve skipped a lot of research seminars, and I stopped going to those pub gatherings. But it wasn’t that I didn’t try them at all; it was that I tried them a few times and quickly found that they were not filling that need. At seminars, people only talked about the particular research project being presented. At the pub, people talked about almost nothing of serious significance—and certainly nothing requiring emotional vulnerability. The closest I think I got to this kind of support from colleagues was a series of lunch meetings designed to improve instruction in “tutorials” (what here in the UK we call discussion sections); there, at least, we could commiserate about feeling overworked and dealing with administrative bureaucracy.

There seem to be deep, structural problems with how academia is run. This whole process of universities outsourcing their hiring decisions to the capricious whims of high-ranked journals basically decides the entire course of our careers. And once you get to the point I have, now so disheartened with the process of publishing research that I can’t even engage with it, it’s not at all clear how it’s even possible to recover. I see no way forward, no one to turn to. No one seems to care how well I teach, if I’m not publishing research.

And I’m clearly not the only one who feels this way.

# The idiocy of the debt ceiling

Apr 23 JDN 2460058

I thought we had put this behind us. I guess I didn’t think the Republicans would stop using the tactic once they saw it worked, but I had hoped that the Democrats would come up with a better permanent solution so that it couldn’t be used again. But they did not, and here we are again: Republicans are refusing to raise the debt ceiling, we have now hit that ceiling, and we are running out of time before we have to start shutting down services or defaulting on debt. There are talks ongoing that may yet get the ceiling raised in time, but we’re now cutting it very close. Already the risk that we might default or do something crazy is causing turmoil in financial markets.

Because US Treasury bonds are widely regarded as one of the world’s most secure assets, and the US dollar is the most important global reserve currency, the entire world’s financial markets get disrupted every time there is an issue with the US national debt, and the debt ceiling creates such disruptions on the regular for no good reason.

I will try to offer some of my own suggestions for what to do here, but first, I want to make something very clear: The debt ceiling should not exist. I don’t think most people understand just how truly idiotic the entire concept of a debt ceiling is. It seems practically designed to make our government dysfunctional.

This is not like a credit card limit, where your bank imposes a limit on how much you can borrow based on how much they think you are likely to be able to repay. A lot of people have been making that analogy, and I can see why it’s tempting; but as usual, it’s important to remember that government debt is not like personal debt.

As I said some years ago, US government debt is about as close as the world is ever likely to come to a perfect credit market: with no effort at all, borrow as much as you want at low, steady interest rates, and everyone will always be sure that you will pay it back on time. The debt ceiling is a limit imposed by the government itself—it is not imposed by our creditors, who would be more than happy to lend us more.

Also, I’d like to remind you that some of the US national debt is owned by the US government itself (is that really even “debt”?) and most of what’s left is owned by US individuals or corporations—only about a third is owed to foreign powers. Here is a detailed breakdown of who owns US national debt.

There is no reason to put an arbitrary cap on the amount the US government can borrow. The only reason anyone is at all worried about a default on the US national debt is because of this stupid arbitrary cap. If it didn’t exist, they would simply roll over more Treasury bonds to make the payments and everything would run smoothly. And this is normally what happens, when the Republicans aren’t playing ridiculous brinkmanship games.

As it is, they could simply print money to pay it—and at this point, maybe that’s what needs to happen. Mint the Coin already: Mint a \$1 trillion platinum coin and deposit it in the Federal Reserve, and there you go, you’ve paid off a chunk of the debt. Sometimes stupid problems require stupid solutions.

Aren’t there reasons to be worried about the government borrowing too much? Yes, a little. The amount of concern most people have about this is wildly disproportionate to the actual problem, but yes, there are legitimate concerns about high national debt resulting in high interest rates and eventually forcing us to raise taxes or cut services. This is a slow-burn, long-term problem that by its very nature would never require a sudden, immediate solution; but it is a genuine concern we should be aware of.

But here’s the thing: That’s a conversation we should be having when we vote on the budget. Whenever we pass a government budget, it already includes detailed projections of tax revenue and spending that yield precise, accurate forecasts of the deficit and the debt. If Republicans are genuinely concerned that we are overspending on certain programs, they should propose budget cuts to those programs and get those cuts passed as part of the budget.

Once a budget is already passed, we have committed to spend that money. It has literally been signed into law that \$X will be spend on program Y. At that point, you can’t simply cut the spending. If you think we’re spending too much, you needed to say that before we signed it into law. It’s too late now.

I’m always dubious of analogies between household spending and government spending, but if you really want one, think of it this way: Say your credit card company is offering to raise your credit limit, and you just signed a contract for some home improvements that would force you to run up your credit card past your current limit. Do you call the credit card company and accept the higher limit, or not? If you don’t, why don’t you? And what’s your plan for paying those home contractors? Even if you later decide that the home improvements were a bad idea, you already signed the contract! You can’t just back out!

This is why the debt ceiling is so absurd: It is a self-imposed limit on what you’re allowed to spend after you have already committed to spending it. The only sensible thing to do is to raise the debt ceiling high enough to account for the spending you’ve already committed to—or better yet, eliminate the ceiling entirely.

I think that when they last had a majority in both houses, the Democrats should have voted to make the debt ceiling ludicrously high—say \$100 trillion. Then, at least for the foreseeable future, we wouldn’t have to worry about raising it, and could just pass budgets normally like a sane government. But they didn’t do that; they only raised it as much as was strictly necessary, thus giving the Republicans an opening now to refuse to raise it again.

And that is what the debt ceiling actually seems to accomplish: It gives whichever political party is least concerned about the public welfare a lever they can pull to disrupt the entire system whenever they don’t get things the way they want. If you absolutely do not care about the public good—and it’s quite clear at this point that most of the Republican leadership does not—then whenever you don’t get your way, you can throw a tantrum that threatens to destabilize the entire global financial system.

We need to stop playing their game. Do what you have to do to keep things running for now—but then get rid of the damn debt ceiling before they can use it to do even more damage.

# What behavioral economics needs

Apr 16 JDN 2460049

The transition from neoclassical to behavioral economics has been a vital step forward in science. But lately we seem to have reached a plateau, with no major advances in the paradigm in quite some time.

It could be that there is work already being done which will, in hindsight, turn out to be significant enough to make that next step forward. But my fear is that we are getting bogged down by our own methodological limitations.

Neoclassical economics shared with us its obsession with mathematical sophistication. To some extent this was inevitable; in order to impress neoclassical economists enough to convert some of them, we had to use fancy math. We had to show that we could do it their way in order to convince them why we shouldn’t—otherwise, they’d just have dismissed us the way they had dismissed psychologists for decades, as too “fuzzy-headed” to do the “hard work” of putting everything into equations.

But the truth is, putting everything into equations was never the right approach. Because human beings clearly don’t think in equations. Once we write down a utility function and get ready to take its derivative and set it equal to zero, we have already distanced ourselves from how human thought actually works.

When dealing with a simple physical system, like an atom, equations make sense. Nobody thinks that the electron knows the equation and is following it intentionally. That equation simply describes how the forces of the universe operate, and the electron is subject to those forces.

But human beings do actually know things and do things intentionally. And while an equation could be useful for analyzing human behavior in the aggregate—I’m certainly not objecting to statistical analysis—it really never made sense to say that people make their decisions by optimizing the value of some function. Most people barely even know what a function is, much less remember calculus well enough to optimize one.

Yet right now, behavioral economics is still all based in that utility-maximization paradigm. We don’t use the same simplistic utility functions as neoclassical economists; we make them more sophisticated and realistic. Yet in that very sophistication we make things more complicated, more difficult—and thus in at least that respect, even further removed from how actual human thought must operate.

The worst offender here is surely Prospect Theory. I recognize that Prospect Theory predicts human behavior better than conventional expected utility theory; nevertheless, it makes absolutely no sense to suppose that human beings actually do some kind of probability-weighting calculation in their heads when they make judgments. Most of my students—who are well-trained in mathematics and economics—can’t even do that probability-weighting calculation on paper, with a calculator, on an exam. (There’s also absolutely no reason to do it! All it does it make your decisions worse!) This is a totally unrealistic model of human thought.

This is not to say that human beings are stupid. We are still smarter than any other entity in the known universe—computers are rapidly catching up, but they haven’t caught up yet. It is just that whatever makes us smart must not be easily expressible as an equation that maximizes a function. Our thoughts are bundles of heuristics, each of which may be individually quite simple, but all of which together make us capable of not only intelligence, but something computers still sorely, pathetically lack: wisdom. Computers optimize functions better than we ever will, but we still make better decisions than they do.

I think that what behavioral economics needs now is a new unifying theory of these heuristics, which accounts for not only how they work, but how we select which one to use in a given situation, and perhaps even where they come from in the first place. This new theory will of course be complex; there’s a lot of things to explain, and human behavior is a very complex phenomenon. But it shouldn’t be—mustn’t be—reliant on sophisticated advanced mathematics, because most people can’t do advanced mathematics (almost by construction—we would call it something different otherwise). If your model assumes that people are taking derivatives in their heads, your model is already broken. 90% of the world’s people can’t take a derivative.

I guess it could be that our cognitive processes in some sense operate as if they are optimizing some function. This is commonly posited for the human motor system, for instance; clearly baseball players aren’t actually solving differential equations when they throw and catch balls, but the trajectories that balls follow do in fact obey such equations, and the reliability with which baseball players can catch and throw suggests that they are in some sense acting as if they can solve them.

But I think that a careful analysis of even this classic example reveals some deeper insights that should call this whole notion into question. How do baseball players actually do what they do? They don’t seem to be calculating at all—in fact, if you asked them to try to calculate while they were playing, it would destroy their ability to play. They learn. They engage in practiced motions, acquire skills, and notice patterns. I don’t think there is anywhere in their brains that is actually doing anything like solving a differential equation. It’s all a process of throwing and catching, throwing and catching, over and over again, watching and remembering and subtly adjusting.

One thing that is particularly interesting to me about that process is that is astonishingly flexible. It doesn’t really seem to matter what physical process you are interacting with; as long as it is sufficiently orderly, such a method will allow you to predict and ultimately control that process. You don’t need to know anything about differential equations in order to learn in this way—and, indeed, I really can’t emphasize this enough, baseball players typically don’t.

In fact, learning is so flexible that it can even perform better than calculation. The usual differential equations most people would think to use to predict the throw of a ball would assume ballistic motion in a vacuum, which is absolutely not what a curveball is. In order to throw a curveball, the ball must interact with the air, and it must be launched with spin; curving a baseball relies very heavily on the Magnus Effect. I think it’s probably possible to construct an equation that would fully predict the motion of a curveball, but it would be a tremendously complicated one, and might not even have an exact closed-form solution. In fact, I think it would require solving the Navier-Stokes equations, for which there is an outstanding Millennium Prize. Since the viscosity of air is very low, maybe you could get away with approximating using the Euler fluid equations.
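To make the point concrete, here is a minimal numerical sketch of a spinning ball versus the vacuum ballistic arc. The drag and lift coefficients are made-up illustrative values, and the Magnus term is crudely simplified; this is an illustration of how quickly the clean textbook equation stops applying, not a physical model of a curveball.

```python
# Minimal 2-D sketch: a spinning ball's trajectory with drag and a Magnus
# (lift) force, integrated by forward Euler, versus the vacuum ballistic arc.
# All coefficients are rough illustrative assumptions, not measured values.
import math

G = 9.81    # gravity, m/s^2
DT = 0.001  # time step, s

def simulate(v0, angle_deg, k_drag=0.0, k_magnus=0.0):
    """Euler-integrate until the ball returns to launch height; return range in meters."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while True:
        speed = math.hypot(vx, vy)
        # Drag opposes velocity; the Magnus term acts perpendicular to it
        # (sign chosen here to mimic topspin, which pushes the ball down).
        ax = -k_drag * speed * vx + k_magnus * speed * vy
        ay = -G - k_drag * speed * vy - k_magnus * speed * vx
        vx += ax * DT
        vy += ay * DT
        x += vx * DT
        y += vy * DT
        if y <= 0.0 and vy < 0.0:
            return x

vacuum = simulate(40.0, 30.0)                                  # textbook arc, ~141 m
curve = simulate(40.0, 30.0, k_drag=0.005, k_magnus=0.002)     # noticeably shorter
print(f"vacuum range: {vacuum:.1f} m, with drag and spin: {curve:.1f} m")
```

The vacuum case matches the closed-form range formula; the moment air and spin enter, there is no closed form to match, only numbers. A player learns the second trajectory directly, without ever writing down either.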

To be fair, a learning process that is adapting to a system that obeys an equation will yield results that become an ever-closer approximation of that equation. And it is in that sense that a baseball player can be said to be acting as if solving a differential equation. But this relies heavily on the system in question being one that obeys an equation—and when it comes to economic systems, is that even true?

What if the reason we can’t find a simple set of equations that accurately describe the economy (as opposed to equations of ever-escalating complexity that still utterly fail to describe the economy) is that there isn’t one? What if the reason we can’t find the utility function people are maximizing is that they aren’t maximizing anything?

What behavioral economics needs now is a new approach, something less constrained by the norms of neoclassical economics and more aligned with psychology and cognitive science. We should be modeling human beings based on how they actually think, not some weird mathematical construct that bears no resemblance to human reasoning but is designed to impress people who are obsessed with math.

I’m of course not the first person to have suggested this. I probably won’t be the last, or even the one who most gets listened to. But I hope that I might get at least a few more people to listen to it, because I have gone through the mathematical gauntlet and earned my bona fides. It is too easy to dismiss this kind of reasoning from people who don’t actually understand advanced mathematics. But I do understand differential equations—and I’m telling you, that’s not how people think.

# Will hydrogen make air travel sustainable?

Apr 9 JDN 2460042

Air travel is currently one of the most carbon-intensive activities anyone can engage in. Per passenger kilometer, airplanes emit about 8 times as much carbon as ships, 4 times as much as trains, and 1.5 times as much as cars. Living in a relatively eco-friendly city without a car and eating a vegetarian diet, I produce much less carbon than most First World citizens—except when I fly across the Atlantic a couple of times a year.

Until quite recently, most climate scientists believed that this was basically unavoidable, that simply sustaining the kind of power output required to keep an airliner in the air would always require carbon-intensive jet fuel. But in just the past few years, major breakthroughs have been made in using hydrogen propulsion.

The beautiful thing about hydrogen is that burning it simply produces water—no harmful pollution at all. It’s basically the cleanest possible fuel.

The simplest approach, which is actually quite old, but until recently didn’t seem viable, is the use of liquid hydrogen as airplane fuel.

We’ve been using liquid hydrogen as a rocket fuel for decades, so we knew it had enough energy density. (Actually, per unit mass its energy density is nearly three times that of conventional jet fuel; per unit volume it is much lower, which is part of why the tanks are such a challenge.)

The problem with liquid hydrogen is that it must be kept extremely cold—it boils at 20 Kelvin. And once liquid hydrogen boils into gas, it builds up pressure very fast and easily permeates through most materials, so it’s extremely hard to contain. This makes it very difficult and expensive to handle.

But this isn’t the only way to use hydrogen, and it may turn out not to be the best one.

There are now prototype aircraft that have flown using hydrogen fuel cells. These fuel cells can be fed with hydrogen gas—so no need to cool below 20 Kelvin. But then they can’t directly run the turbines; instead, these planes use electric turbines which are powered by the fuel cell.

Basically these are really electric aircraft. But whereas a lithium battery would be far too heavy, a hydrogen fuel cell is light enough for aviation use. In fact, hydrogen gas up to a certain pressure is lighter than air (it was often used for zeppelins, though, uh, occasionally catastrophically), so potentially the planes could use their own fuel tanks for buoyancy, landing “heavier” than they took off. (On the other hand it might make more sense to pressurize the hydrogen beyond that point, so that it will still be heavier than air—but perhaps still lighter than jet fuel!)
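The “lighter than air up to a certain pressure” crossover follows directly from the ideal gas law: at a given temperature, gas density scales with pressure times molar mass, so hydrogen stays lighter than 1 atm air until it is pressurized to roughly the ratio of the two molar masses. A quick sketch using textbook constants:

```python
# Rough ideal-gas estimate of when pressurized hydrogen stops being
# lighter than ambient air. All values are standard textbook constants.
R = 8.314        # gas constant, J/(mol K)
T = 288.0        # ambient temperature, K (about 15 C)
M_AIR = 0.02897  # kg/mol, mean molar mass of air
M_H2 = 0.002016  # kg/mol, molecular hydrogen
ATM = 101325.0   # Pa

def density(p_pa, molar_mass):
    """Ideal-gas density in kg/m^3."""
    return p_pa * molar_mass / (R * T)

rho_air = density(ATM, M_AIR)    # about 1.2 kg/m^3
crossover_atm = M_AIR / M_H2     # pressure ratio at which the densities match
print(f"air: {rho_air:.2f} kg/m^3; H2 matches it at about {crossover_atm:.1f} atm")
```

So hydrogen stored below roughly 14 atmospheres is buoyant; above that, it is denser than the surrounding air, though still far lighter per unit energy than jet fuel.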

Of course, the technology is currently too untested and too expensive to be used on a wide scale. But this is how all technologies begin. It’s of course possible that we won’t be able to solve the engineering problems that currently make hydrogen-powered aircraft unaffordable; but several aircraft manufacturers are now investing in hydrogen research—suggesting that they at least believe there is a good chance we will.

There’s also the issue of where we get all the hydrogen. Hydrogen is extremely abundant—literally the most abundant baryonic matter in the universe—but most of what’s on Earth is locked up in water or hydrocarbons. Most of the hydrogen we currently make is produced by processing hydrocarbons (particularly methane), but that produces carbon emissions, so it wouldn’t solve the problem.

A better option is electrolysis: Using electricity to separate water into hydrogen and oxygen. But this requires a lot of energy—and necessarily, more energy than you can get out of burning the hydrogen later, since burning it basically is just putting the hydrogen and oxygen back together to make water.
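The energy loss can be made concrete with a back-of-envelope round trip. The efficiency figures below are illustrative assumptions (real electrolyzers and fuel cells vary widely), not measurements; the point is only that each conversion step loses energy, so the round trip must come out below 100%.

```python
# Back-of-envelope round-trip accounting for hydrogen as energy storage.
# The efficiency figures are illustrative assumptions, not vendor specs.
LHV_H2 = 120.0           # MJ/kg, lower heating value of hydrogen
ELECTROLYZER_EFF = 0.70  # fraction of input electricity stored as H2 energy
FUEL_CELL_EFF = 0.55     # fraction of H2 energy recovered as electricity

def electricity_to_make(kg_h2):
    """MJ of electricity needed to electrolyze kg_h2 of hydrogen."""
    return kg_h2 * LHV_H2 / ELECTROLYZER_EFF

def electricity_back(kg_h2):
    """MJ of electricity a fuel cell recovers from kg_h2."""
    return kg_h2 * LHV_H2 * FUEL_CELL_EFF

round_trip = electricity_back(1.0) / electricity_to_make(1.0)
print(f"round-trip efficiency: {round_trip:.1%}")  # 38.5% under these assumptions
```

Losing well over half the energy would be disqualifying for a primary power source, but is tolerable for storing surplus renewable electricity that would otherwise be curtailed entirely.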

Yet all is not lost, for while energy density is absolutely vital for an aircraft fuel, it’s not so important for a ground-based power plant. As an ultimate fuel source, hydrogen is a non-starter. But as an energy storage medium, it could be ideal.

The idea is this: We take the excess energy from wind and solar power plants, and use that energy to electrolyze water into hydrogen and oxygen. We then store that hydrogen and use it for fuel cells to run aircraft (and potentially other things as well). This ensures that the extra energy that renewable sources can generate in peak times doesn’t go to waste, and also provides us with what we need to produce clean-burning hydrogen fuel.

The basic technology for doing all this already exists. The current problem is cost. Under current conditions, it’s far more expensive to make hydrogen fuel than to make conventional jet fuel. Since fuel is one of the largest costs for airlines, even small increases in fuel prices matter a lot for the price of air travel; and these are not even small differences. Currently hydrogen costs over 10 times as much per kilogram, and its higher energy density isn’t enough to make up for that. For hydrogen aviation to be viable, that ratio needs to drop to more like 2 or 3—maybe even all the way to 1, since hydrogen is also more expensive to store than jet fuel (the gas needs high-pressure tanks, the liquid needs cryogenic cooling systems).

This means that, for the time being, it’s still environmentally responsible to reduce your air travel. Fly less often, always fly economy (more people on the plane means less carbon per passenger), and buy carbon offsets (they’re cheaper than you may think).

But in the long run, we may be able to have our cake and eat it too: If hydrogen aviation does become viable, we may not need to give up the benefits of routine air travel in order to reduce our carbon emissions.

Apr 2 JDN 2460037

A couple weeks ago I presented my stochastic overload model, which posits a neurological mechanism for the Yerkes-Dodson effect: Stress increases sympathetic activation, and this increases performance, up to the point where it starts to risk causing neural pathways to overload and shut down.

This week I thought I’d try to get into some of the implications of this model, how it might be applied to make predictions or guide policy.

One thing I often struggle with when it comes to applying theory is what actual benefits we get from a quantitative mathematical model as opposed to simply a basic qualitative idea. In many ways I think these benefits are overrated; people seem to think that putting something into an equation automatically makes it true and useful. I am sometimes tempted to try to take advantage of this, to put things into equations even though I know there is no good reason to put them into equations, simply because so many people seem to find equations so persuasive for some reason. (Studies have even shown that, particularly in disciplines that don’t use a lot of math, inserting a totally irrelevant equation into a paper makes it more likely to be accepted.)

The basic implications of the Yerkes-Dodson effect are already widely known, and utterly ignored in our society. We know that excessive stress is harmful to health and performance, and yet our entire economy seems to be based around maximizing the amount of stress that workers experience. I actually think neoclassical economics bears a lot of the blame for this, as neoclassical economists are constantly talking about “increasing work incentives”—which is to say, making work life more and more stressful. (And let me remind you that there has never been any shortage of people willing to work in my lifetime, except possibly briefly during the COVID pandemic. The shortage has always been employers willing to hire them.)

I don’t know if my model can do anything to change that. Maybe by putting it into an equation I can make people pay more attention to it, precisely because equations have this weird persuasive power over most people.

As far as scientific benefits, I think that the chief advantage of a mathematical model lies in its ability to make quantitative predictions. It’s one thing to say that performance increases with low levels of stress then decreases with high levels; but it would be a lot more useful if we could actually precisely quantify how much stress is optimal for a given person and how they are likely to perform at different levels of stress.

Unfortunately, the stochastic overload model can only make detailed predictions if you have fully specified the probability distribution of innate activation, which requires a lot of free parameters. This is especially problematic if you don’t even know what type of distribution to use, which we really don’t; I picked three classes of distribution because they were plausible and tractable, not because I had any particular evidence for them.

Also, we don’t even have standard units of measurement for stress; we have a vague notion of what more or less stressed looks like, but we don’t have the sort of quantitative measure that could be plugged into a mathematical model. Probably the best units to use would be something like blood cortisol levels, but then we’d need to go measure those all the time, which raises its own issues. And maybe people don’t even respond to cortisol in the same ways? But at least we could measure your baseline cortisol for a while to get a prior distribution, and then see how different incentives increase your cortisol levels; and then the model should give relatively precise predictions about how this will affect your overall performance. (This is a very neuroeconomic approach.)

So, for now, I’m not really sure how useful the stochastic overload model is. This is honestly something I feel about a lot of the theoretical ideas I have come up with; they often seem too abstract to be usefully applicable to anything.

Maybe that’s how all theory begins, and applications only appear later? But that doesn’t seem to be how people expect me to talk about it whenever I have to present my work or submit it for publication. They seem to want to know what it’s good for, right now, and I never have a good answer to give them. Do other researchers have such answers? Do they simply pretend to?

Along similar lines, I recently had one of my students ask about a theory paper I wrote on international conflict for my dissertation, and after sending him a copy, I re-read the paper. There are so many pages of equations, and while I am confident that the mathematical logic is valid, I honestly don’t know if most of them are really useful for anything. (I don’t think I really believe that GDP is produced by a Cobb-Douglas production function, and we don’t even really know how to measure capital precisely enough to say.) The central insight of the paper, which I think is really important but other people don’t seem to care about, is a qualitative one: International treaties and norms provide an equilibrium selection mechanism in iterated games. The realists are right that this is cheap talk. The liberals are right that it works. Because when there are many equilibria, cheap talk works.
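That equilibrium-selection point can be shown with a toy example of my own construction (it is not the game from the paper): a pure coordination game with two equally good Nash equilibria, where a costless, nonbinding announcement changes no payoffs yet determines which equilibrium gets played.

```python
# Toy illustration: cheap talk as an equilibrium-selection device.
# Two players each choose "A" or "B"; coordinating pays 1, miscoordinating 0.
# Both (A, A) and (B, B) are Nash equilibria; a costless announcement
# changes no payoffs, yet selects which equilibrium is actually played.
def payoff(my_move, their_move):
    return 1 if my_move == their_move else 0

def best_response(their_move):
    return max(("A", "B"), key=lambda m: payoff(m, their_move))

def is_nash(profile):
    p1, p2 = profile
    return best_response(p2) == p1 and best_response(p1) == p2

profiles = [("A", "A"), ("A", "B"), ("B", "A"), ("B", "B")]
equilibria = [pr for pr in profiles if is_nash(pr)]
print("Nash equilibria:", equilibria)  # both coordination outcomes qualify

# The cheap-talk message: nonbinding and free, it alters nothing in the
# payoff matrix, but once both players expect "A", playing "A" is optimal.
message = "A"
played = (best_response(message), best_response(message))
print("after announcing", message, "-> both play:", played)
```

Nothing forces either player to honor the message, which is the realist point; but since each expects the other to follow it, following it is a best response, which is the liberal point.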

I know that in truth, science proceeds in tiny steps, building a wall brick by brick, never sure exactly how many bricks it will take to finish the edifice. It’s impossible to see whether your work will be an irrelevant footnote or the linchpin for a major discovery. But that isn’t how the institutions of science are set up. That isn’t how the incentives of academia work. You’re not supposed to say that this may or may not be correct and is probably some small incremental progress the ultimate impact of which no one can possibly foresee. You’re supposed to sell your work—justify how it’s definitely true and why it’s important and how it has impact. You’re supposed to convince other people why they should care about it and not all the dozens of other probably equally-valid projects being done by other researchers.

I don’t know how to do that, and it is agonizing to even try. It feels like lying. It feels like betraying my identity. Being good at selling isn’t just orthogonal to doing good science—I think it’s opposite. I think the better you are at selling your work, the worse you are at cultivating the intellectual humility necessary to do good science. If you think you know all the answers, you’re just bad at admitting when you don’t know things. It feels like in order to succeed in academia, I have to act like an unscientific charlatan.

Honestly, why do we even need to convince you that our work is more important than someone else’s? Are there only so many science points to go around? Maybe the whole problem is this scarcity mindset. Yes, grant funding is limited; but why does publishing my work prevent you from publishing someone else’s? Why do you have to reject 95% of the papers that get sent to you? Don’t tell me you’re limited by space; the journals are digital and searchable and nobody reads the whole thing anyway. Editorial time isn’t infinite, but most of the work has already been done by the time you get a paper back from peer review. Of course, I know the real reason: Excluding people is the main source of prestige.