The game theory of holidays

Dec 25, JDN 2457748

When this post goes live, it will be Christmas; so I felt I should make the topic somehow involve the subject of Christmas, or holidays in general.

I decided I would pull back for as much perspective as possible, and ask this question: Why do we have holidays in the first place?

All human cultures have holidays, but not the same ones. Cultures with a lot of mutual contact will tend to synchronize their holidays temporally, but still often preserve wildly different rituals on those same holidays. Yes, we celebrate “Christmas” in both the US and in Austria; but I think they are baffled by the Elf on the Shelf and I know that I find the Krampus bizarre and terrifying.

Most cultures from temperate climates have some sort of celebration around the winter solstice, probably because this is an ecologically important time for us. Our food production is about to get much, much lower, so we’d better make sure we have sufficient quantities stored. (In an era of globalization and processed food that lasts for months, this is less important, of course.) But they aren’t the same celebration, and they generally aren’t exactly on the solstice.

What is a holiday, anyway? We all get off work, we visit our families, and we go through a series of ritualized actions with some sort of symbolic cultural meaning. Why do we do this?

First, why not work all year round? Wouldn’t that be more efficient? Well, no, because human beings are subject to exhaustion. We need to rest at least sometimes.

Well, why not simply have each person rest whenever they need to? Well, how do we know they need to? Do we just take their word for it? People might exaggerate their need for rest in order to shirk their duties and free-ride on the work of others.

It would help if we could have pre-scheduled rest times, to remove individual discretion.

Should we have these at the same time for everyone, or at different times for each person?

Well, from the perspective of efficiency, different times for each person would probably make the most sense. We could trade off work in shifts that way, and ensure production keeps moving. So why don’t we do that?

Well, now we get to the game theory part. Do you want to be the only one who gets today off? Or do you want other people to get today off as well?

You probably want other people to be off work today as well, at least your family and friends so that you can spend time with them. In fact, this is probably more important to you than having any particular day off.

We can write this as a normal-form game. Suppose we have four days to choose from, 1 through 4, and two people, who can each decide which day to take off, or they can not take a day off at all. They each get a payoff of 1 if they take the same day off, 0 if they take different days off, and -1 if they don’t take a day off at all. This is our resulting payoff matrix:

        1      2      3      4      None
1       1/1    0/0    0/0    0/0    0/-1
2       0/0    1/1    0/0    0/0    0/-1
3       0/0    0/0    1/1    0/0    0/-1
4       0/0    0/0    0/0    1/1    0/-1
None   -1/0   -1/0   -1/0   -1/0   -1/-1

It’s pretty obvious that each person will take some day off. But which day? How do they decide that?

This is what we call a coordination game; there are many possible equilibria to choose from, and the payoffs are highest if people can somehow coordinate their behavior.

If they can actually coordinate directly, it’s simple; one person should just suggest a day, and since the other one is indifferent, they have no reason not to agree to that day. From that point forward, they have coordinated on an equilibrium (a Nash equilibrium, in point of fact).
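For concreteness, here is a quick sketch (my own illustration, using the payoffs from the matrix above) that enumerates the pure-strategy Nash equilibria of this game:

```python
# Enumerate pure-strategy Nash equilibria of the holiday game above.
# Strategies 0-3 are days 1-4; strategy 4 is "None" (no day off).
days = ["1", "2", "3", "4", "None"]

def payoff(me, other):
    """Payoffs from the matrix: -1 for taking no day off, 1 for taking
    the same day off as the other player, 0 for different days."""
    if me == 4:
        return -1
    return 1 if me == other else 0

equilibria = []
for a in range(5):
    for b in range(5):
        # (a, b) is a Nash equilibrium if neither player can gain by
        # unilaterally deviating to some other strategy.
        a_best = all(payoff(a, b) >= payoff(d, b) for d in range(5))
        b_best = all(payoff(b, a) >= payoff(d, a) for d in range(5))
        if a_best and b_best:
            equilibria.append((days[a], days[b]))

print(equilibria)
# [('1', '1'), ('2', '2'), ('3', '3'), ('4', '4')]
```

Every day works as an equilibrium, and nothing in the payoffs singles one out. That is exactly the problem.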

But suppose they can’t talk to each other, or suppose there aren’t two people to coordinate but dozens, or hundreds—or even thousands, once you include all the interlocking social networks. How could they find a way to coordinate on the same day?

They need something more intuitive, some “obvious” choice that they can call upon and that they hope everyone else will call upon as well. Even if they can’t communicate, as long as they can observe whether their coordination has succeeded or failed, they can try to establish these “obvious” choices by successive trial and error.

The result is what we call a Schelling point: players converge on this equilibrium not because there’s actually anything better about it, but because it seems obvious, and they expect it to seem obvious to everyone else as well.
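Here is a toy simulation of that trial-and-error process (my own construction, under deliberately simplified assumptions): agents who failed to coordinate simply imitate whatever choice was most popular last round, and the whole population quickly locks onto one arbitrary day.

```python
import random
from collections import Counter

random.seed(2016)
N, DAYS = 100, 4  # 100 agents, four candidate days

# Everyone starts with a random guess about which day to take off.
choices = [random.randrange(DAYS) for _ in range(N)]

for round_num in range(1, 51):
    # Each agent observes last round's most popular day...
    popular = Counter(choices).most_common(1)[0][0]
    # ...and imitates it (with some inertia: 20% keep their old choice).
    choices = [popular if random.random() < 0.8 else c for c in choices]
    if all(c == popular for c in choices):
        print(f"All {N} agents converged on day {popular + 1} "
              f"after {round_num} rounds")
        break
```

Which day wins is an accident of the initial guesses; once the population locks in, nobody has any incentive to deviate.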

This is what I think is happening with holidays. Yes, we make up stories to justify them, or sometimes even have genuine reasons for them (Independence Day actually makes sense being on July 4, for instance), but the ultimate reason why we have a holiday on one day rather than another is that we had to have it some time, and this was a way of breaking the deadlock and finally settling on a date.

In fact, weekends are probably a better solution to this coordination problem than holidays, because human beings need rest on a fairly regular basis, not just every few months. Holiday seasons now serve more as an opportunity for long vacations that allow travel, rather than as a rest between work days. But even weekends we originally had to justify as a matter of religion: Jews would not work on Saturday, Christians would not work on Sunday, so together we will not work on Saturday or Sunday. The logic here is hardly impeccable (why not make the days religion-specific, for example?), but it was enough to give us a Schelling point.

This makes me wonder about what it would take to create a new holiday. How could we actually get people to celebrate Darwin Day or Sagan Day on a large scale, for example? Darwin and Sagan are both a lot more worth celebrating than most of the people who get holidays—Columbus especially leaps to mind. But even among those of us who really love Darwin and Sagan, these are sort of half-hearted celebrations that never attain the same status as Easter, much less Thanksgiving or Christmas.

I’d also like to secularize—or at least ecumenicalize—the winter solstice celebration. Christianity shouldn’t have a monopoly on what is really something like a human universal, or at least a “humans who live in temperate climates” universal. It really isn’t Christmas anyway; most of what we do is celebrating Yule, overlaid with a modern expression of mass consumption that is thoroughly born of modern capitalism. We have no reason to think Jesus was actually born in December, much less on the 25th. But that’s around the time when lots of other celebrations were going on anyway, and it’s much easier to convince people to change the name of their holiday than to convince them to stop celebrating it and start celebrating something else—I think precisely because the former still preserves the Schelling point.

Creating holidays has obviously been done before—indeed, it is literally the only way holidays ever come into existence. But part of their structure seems to be that the more transparent the reasons for choosing that date and those rituals, the more empty and insincere the holiday seems. Once you admit that a choice is arbitrary, made merely to converge on an equilibrium, it stops seeming like a good choice.

Now, if we could find dates and rituals that really had good reasons behind them, we could probably escape that; but I’m not entirely sure we can. We can use Darwin’s birthday—but why not the publication date of the first edition of On the Origin of Species? And even granting that Darwin really is that important, why Sagan Day and not Einstein Day or Niels Bohr Day… and so on? The winter solstice itself is a very powerful choice; its deep astronomical and ecological significance might actually make it a strong enough attractor to defeat all contenders. But what do we do on the winter solstice celebration? What rituals best capture the feelings we are trying to express, and how do we defend those rituals against criticism and competition?

In the long run, I think what usually happens is that people just sort of start doing something, and eventually enough people are doing it that it becomes a tradition. Maybe it always feels awkward and insincere at first. Maybe you have to be prepared for it to change into something radically different as the decades roll on.

This year the winter solstice is on December 21st. I think I’ll be lighting a candle and gazing into the night sky, reflecting on our place in the universe. Unless you’re reading this on Patreon, by the time this goes live, you’ll have missed it; but you can try later, or maybe next year.

In fifty years all the cool kids will be doing it, I’m sure.

Student debt crisis? What student debt crisis?

Dec 18, JDN 2457741

As of this writing, I have over $99,000 in student loans. This is a good thing. It means that I was able to pay for my four years of college, and two years of a master’s program, in order to be able to start this coming five years of a PhD. When I have concluded these eleven years of postgraduate education and incurred six times the world per-capita income in debt, what then will become of me? Will I be left to live on the streets, destitute and overwhelmed by debt?

No. I’ll have a PhD. The average lifetime income of individuals with PhDs in the United States is $3.4 million. Indeed, the median annual income for economists in the US is almost exactly what I currently owe in debt—so if I save well, I could very well pay it off in just a few years. With an advanced degree in economics like mine, or in similarly high-paying fields such as physics, medicine, and law, one can expect the higher end of that scale, $4 million or more; with a degree in a less-lucrative field such as art, literature, history, or philosophy, one would have to settle for “only”, say, $3 million. The average lifetime income in the US for someone without any college education is only $1.2 million. So even in literature or history, a PhD is worth about $2 million in future income.

On average, an additional year of college results in a gain in lifetime future earnings of about 15% to 20%. Even when you adjust for interest rates and temporal discounting, this is a rate of return that would make any stock trader envious.

In keeping with the law of diminishing returns, the rates of return on education in poor countries are even larger, often mind-bogglingly huge; the increase in lifetime income from a year of college education in Botswana was estimated at 38%. This implies that someone who graduates from college in Botswana earns four times as much money as someone who only finished high school.

We who pay $100,000 to receive an additional $2 to $3 million can hardly be called unfortunate.
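You can check that this holds up even with discounting. Here is a back-of-the-envelope sketch; all the specific figures (a 5% discount rate, six years of school costing $100,000, a $2 million earnings gain spread evenly over a 40-year career) are my own rough assumptions, not data:

```python
# Rough NPV of a college education, discounted at 5% per year.
r = 0.05
cost_per_year, school_years = 100_000 / 6, 6      # $100k over 6 years
gain_per_year, career_years = 2_000_000 / 40, 40  # $2M over 40 years

npv_cost = sum(cost_per_year / (1 + r) ** t for t in range(school_years))
npv_gain = sum(gain_per_year / (1 + r) ** (school_years + t)
               for t in range(career_years))

print(f"NPV of cost: ${npv_cost:,.0f}")  # roughly $89,000
print(f"NPV of gain: ${npv_gain:,.0f}")  # roughly $670,000
```

Even after discounting every future dollar at 5% per year, the gain is still seven or eight times the cost.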

Indeed, we are mind-bogglingly fortunate; we have been given an opportunity to better ourselves and the society we live in that is all but unprecedented in human history, and that is granted only to a privileged few even today. Right now, only about half of adults in the most educated countries in the world (Canada, Russia, Israel, Japan, Luxembourg, South Korea, and the United States) ever go to college. Only 30% of Americans ever earn a bachelor’s degree, and as recently as 1975 that figure was only 20%. Worldwide, the majority of people never graduate from high school. The average length of schooling in developing countries today is six years—that is, sixth grade—and this is an enormous improvement over the two years of average schooling found in developing countries in 1950.

If we look a bit further back in history, the improvements in education are even more staggering. In the United States in 1910, only 13.5% of people graduated high school, and only 2.7% completed a bachelor’s degree. There was no student debt crisis then, to be sure—because there were hardly any college students.

Indeed, I have been underestimating the benefits of education thus far, because education is both a public and private good. The figures I’ve just given have been only the private financial return on education—the additional income received by an individual because they went to college. But there is also a non-financial return, such as the benefits of working in a more appealing or exciting career and the benefits of learning for its own sake. The reason so many people do go into history and literature instead of economics and physics very likely has to do with valuing these other aspects of education as highly as or even more highly than financial income, and it is entirely rational for people to do so. (An interesting survey question I’ve alas never seen asked: “How much money would we have to give you right now to convince you to quit working in philosophy for the rest of your life?”)

Yet even more important is the public return on education, the increased productivity and prosperity of our society as a result of greater education—and these returns are enormous. For every $1 spent on education in the US, the economy grows by an estimated $1.50. Public returns on college education worldwide are on the order of 10%-20% per year of education. This is over and above the 15-20% return already being made by the individuals going to school. This means that raising the average level of education in a country by just one year raises that country’s income by between 25% and 40%.

Indeed, perhaps the simplest way to understand the enormous social benefits of education is to note the strong correlation between education level and income level. This graph comes from the UN Human Development Report Data Explorer; it plots the HDI education index (which ranges from 0, least educated, to 1, most educated) and the per-capita GDP at purchasing power parity (on a log scale, so that each increment corresponds to a proportional increase in GDP); as you can see, educated countries tend to be rich countries, and vice-versa.

[Figure: scatter plot of the HDI education index versus log per-capita GDP (PPP) by country, showing a strong positive correlation.]

Of course, income drives education just as education drives income. But more detailed econometric studies generally (though not without some controversy) show the same basic result: The more educated a country’s people become, the richer that country becomes.

And indeed, the United States is a spectacularly rich country. The figure of “$1 trillion in college debt” sounds alarming (and has been used to such effect in many a news article, ranging from the New York Daily News, Slate, and POLITICO to USA Today and CNN all the way to Bloomberg, MarketWatch, and Business Insider, and even getting support from the Consumer Financial Protection Bureau and the Federal Reserve Bank of New York).

But the United States has a total GDP of over $18.6 trillion, and total net wealth somewhere around $84 trillion. Is it really so alarming that our nation’s most important investment would result in debt of less than two percent of our nation’s total wealth? Democracy Now asks who is getting rich off of $1.3 trillion in student debt. The answer: all of us—the students especially.

In fact, the probability of defaulting on student loans is inversely proportional to the amount of loans a student has. Students with over $100,000 in student debt default only 18% of the time, while students with less than $5,000 in student debt default 34% of the time. This should be shocking to those who think that we have a crisis of too much student debt; if student debt were an excess burden that is imposed upon us for little gain, default rates should rise as borrowing amounts increase, as we observe, for example, with credit cards: there is a positive correlation between carrying higher balances and being more likely to default. (This also raises doubts about the argument that higher debt loads should carry higher interest rates—why, if the default rate doesn’t go up?) But it makes perfect sense if you realize that college is an investment—indeed, almost certainly both the most profitable and the most socially responsible investment most people will ever have the opportunity to make. More debt means you had access to more credit to make a larger investment—and therefore your payoff was greater and you were more likely to be able to repay the debt.

Yes, job prospects were bad for college graduates right after the Great Recession—because it was right after the Great Recession, and job prospects were bad for everyone. Indeed, the unemployment rate for people with college degrees was substantially lower than for those without college degrees, all the way through the Second Depression. The New York Times has a nice little gadget where you can estimate the unemployment rate for college graduates; my hint for you is that I just said it’s lower, and I still guessed too high. There was variation across fields, of course; unsurprisingly, computer science majors did extremely well and humanities majors did rather poorly. Underemployment was a big problem, but again, clearly because of the recession, not because going to college was a mistake. In fact, unemployment for college graduates has always been so much lower that the maximum unemployment rate for young college graduates since the year 2000 (about 9%) is less than the minimum unemployment rate for young high school graduates (10%) over the same period. Young high school dropouts have fared even worse; their minimum unemployment rate since 2000 was 18%, while their maximum was a terrifying Great Depression-level 32%. Education isn’t just a good investment—it’s an astonishingly good investment.

There are a lot of things worth panicking about, now that Trump has been elected President. But student debt isn’t one of them. This is a very smart investment, made with a reasonable portion of our nation’s wealth. If you have student debt like I do, make sure you have enough—or otherwise you might not be able to pay it back.

What good are macroeconomic models? How could they be better?

Dec 11, JDN 2457734

One thing that I don’t think most people know, but which is immediately obvious to any student of economics at the college level or above, is that there is a veritable cornucopia of different macroeconomic models. There are growth models (the Solow model, the Harrod-Domar model, the Ramsey model), monetary policy models (IS-LM, aggregate demand-aggregate supply), trade models (the Mundell-Fleming model, the Heckscher-Ohlin model), large-scale computational models (dynamic stochastic general equilibrium, agent-based computational economics), and I could go on.
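Just to give a flavor of what these look like, here is a minimal sketch of the simplest of them, the textbook Solow model; the parameter values are standard-ish but assumed for illustration, not estimates:

```python
# Textbook Solow growth model: capital per effective worker k evolves as
#   k_next = k + s * k**alpha - (n + g + delta) * k
s, alpha = 0.25, 0.33           # savings rate, capital share
n, g, delta = 0.01, 0.02, 0.05  # population growth, tech growth, depreciation

k = 1.0  # initial capital per effective worker
for year in range(200):
    k += s * k ** alpha - (n + g + delta) * k

# Analytic steady state: s * k^alpha = (n + g + delta) * k
k_star = (s / (n + g + delta)) ** (1 / (1 - alpha))
print(f"k after 200 years: {k:.3f}; steady state: {k_star:.3f}")
```

The simulation converges to the same steady state the algebra predicts; that is about all the model does, and yet it remains a workhorse of growth theory.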

This immediately raises the question: What are all these models for? What good are they?

A cynical view might be that they aren’t useful at all, that this is all false mathematical precision which makes economics persuasive without making it accurate or useful. And with such a proliferation of models and contradictory conclusions, I can see why such a view would be tempting.

But many of these models are useful, at least in certain circumstances. They aren’t completely arbitrary. Indeed, one of the litmus tests of the last decade has been how well the models held up against the events of the Great Recession and following Second Depression. The Keynesian and cognitive/behavioral models did rather well, albeit with significant gaps and flaws. The Monetarist, Real Business Cycle, and most other neoclassical models failed miserably, as did Austrian and Marxist notions so fluid and ill-defined that I’m not sure they deserve to even be called “models”. So there is at least some empirical basis for deciding what assumptions we should be willing to use in our models. Yet even if we restrict ourselves to Keynesian and cognitive/behavioral models, there are still a great many to choose from, which often yield inconsistent results.

So let’s compare with a science that is uncontroversially successful: Physics. How do mathematical models in physics compare with mathematical models in economics?

Well, there are still a lot of models, first of all. There’s the Bohr model, the Schrödinger equation, the Dirac equation, Newtonian mechanics, Lagrangian mechanics, Bohmian mechanics, Maxwell’s equations, Faraday’s law, Coulomb’s law, the Einstein field equations, the Minkowski metric, the Schwarzschild metric, the Rindler metric, Feynman-Wheeler theory, the Navier-Stokes equations, and so on. So a cornucopia of models is not inherently a bad thing.

Yet, there is something about physics models that makes them more reliable than economics models.

Partly it is that the systems physicists study are literally two dozen orders of magnitude or more smaller and simpler than the systems economists study. Their task is inherently easier than ours.

But it’s not just that; nor is it that their models are simpler—in fact, they often aren’t. The Navier-Stokes equations are a lot more complicated than the Solow model. They’re also clearly a lot more accurate.

The feature that models in physics seem to have that models in economics do not is something we might call nesting, or maybe consistency. Models in physics don’t come out of nowhere; you can’t just make up your own new model based on whatever assumptions you like and then start using it—which you very much can do in economics. Models in physics are required to fit consistently with one another, and usually inside one another, in the following sense:

The Dirac equation strictly generalizes the Schrödinger equation, which strictly generalizes the Bohr model. Bohmian mechanics is consistent with quantum mechanics, which strictly generalizes Lagrangian mechanics, which generalizes Newtonian mechanics. The Einstein field equations are consistent with Maxwell’s equations and strictly generalize the Minkowski, Schwarzschild, and Rindler metrics. Maxwell’s equations strictly generalize Faraday’s law and Coulomb’s law.

In other words, there are a small number of canonical models—the Dirac equation, Maxwell’s equations, and the Einstein field equations, essentially—inside which all other models are nested. The simpler models like Coulomb’s law and Newtonian mechanics do not contradict these canonical models; they are contained within them, subject to certain constraints (such as macroscopic systems far below the speed of light).

This is something I wish more people understood (I blame Kuhn for confusing everyone about what paradigm shifts really entail); Einstein did not overturn Newton’s laws, he extended them to domains where they previously had failed to apply.

This is why it is sensible to say that certain theories in physics are true; they are the canonical models that underlie all known phenomena. Other models can be useful, but not because we are relativists about truth or anything like that; Newtonian physics is a very good approximation of the Einstein field equations at the scale of many phenomena we care about, and is also much more mathematically tractable. If we ever find ourselves in situations where Newton’s equations no longer apply—near a black hole, traveling near the speed of light—then we know we can fall back on the more complex canonical model; but when the simpler model works, there’s no reason not to use it.
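You can see the nesting numerically. This is a quick illustration (standard physics, nothing original): relativistic kinetic energy, (gamma − 1)mc², is indistinguishable from the Newtonian ½mv² at everyday speeds, and only diverges as you approach the speed of light.

```python
import math

c = 299_792_458.0  # speed of light, m/s
m = 1.0            # mass, kg

# Orbital speed, 1% of lightspeed, half of lightspeed.
for v in (3.0e4, 3.0e6, 0.5 * c):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    ke_einstein = (gamma - 1.0) * m * c ** 2  # relativistic kinetic energy
    ke_newton = 0.5 * m * v ** 2              # Newtonian kinetic energy
    print(f"v = {v:10.3e} m/s: Einstein/Newton = {ke_einstein / ke_newton:.6f}")
```

At orbital speeds the ratio is 1 to six decimal places; at half the speed of light it is about 1.24, and the simpler model has left its domain of validity.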

There are still very serious gaps in the knowledge of physics; in particular, there is a fundamental gulf between quantum mechanics and the Einstein field equations that has been unresolved for decades. A solution to this “quantum gravity problem” would be essentially a guaranteed Nobel Prize. So even a canonical model can be flawed, and can be extended or improved upon; the result is then a new canonical model which we now regard as our best approximation to truth.

Yet the contrast with economics is still quite clear. We don’t have one or two or even ten canonical models to refer back to. We can’t say that the Solow model is an approximation of some greater canonical model that works for these purposes—because we don’t have that greater canonical model. We can’t say that agent-based computational economics is approximately right, because we have nothing to approximate it to.

I went into economics thinking that neoclassical economics needed a new paradigm. I have now realized something much more alarming: Neoclassical economics doesn’t really have a paradigm. Or if it does, it’s a very informal paradigm, one that is expressed by the arbitrary judgments of journal editors, not one that can be written down as a series of equations. We assume perfect rationality, except when we don’t. We assume constant returns to scale, except when that doesn’t work. We assume perfect competition, except when that doesn’t get the results we wanted. The agents in our models are infinite identical psychopaths, and they are exactly as rational as needed for the conclusion I want.

This is quite likely why there is so much disagreement within economics. When you can permute the parameters however you like with no regard to a canonical model, you can more or less draw whatever conclusion you want, especially if you aren’t tightly bound to empirical evidence. I know a great many economists who are sure that raising minimum wage results in large disemployment effects, because the models they believe in say that it must, even though the empirical evidence has been quite clear that these effects are small if they are present at all. If we had a canonical model of employment that we could calibrate to the empirical evidence, that couldn’t happen anymore; there would be a coefficient I could point to that would refute their argument. But when every new paper comes with a new model, there’s no way to do that; one set of assumptions is as good as another.

Indeed, as I mentioned in an earlier post, a remarkable number of economists seem to embrace this relativism. “There is no true model,” they say; “we do what is useful.” Recently I encountered a book by the eminent economist Deirdre McCloskey which, though I confess I haven’t read it in its entirety, appears to be trying to argue that economics is just a meaningless language game that doesn’t have—or need to have—any connection with actual reality. (If any of you have read it and think I’m misunderstanding it, please explain. As it is, I haven’t bought it for a reason any economist should respect: I am disinclined to incentivize such writing.)

Creating such a canonical model would no doubt be extremely difficult. Indeed, it is a task that would require the combined efforts of hundreds of researchers and could take generations to achieve. The true equations that underlie the economy could be totally intractable even for our best computers. But quantum mechanics wasn’t built in a day, either. The key challenge here lies in convincing economists that this is something worth doing—that if we really want to be taken seriously as scientists we need to start acting like them. Scientists believe in truth, and they are trying to find it out. While not immune to tribalism or ideology or other human limitations, they resist them as fiercely as possible, always turning back to the evidence above all else. And in their combined strivings, they attempt to build a grand edifice, a universal theory to stand the test of time—a canonical model.

Experimentally testing categorical prospect theory

Dec 4, JDN 2457727

In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.

The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)

If that’s the case, then switching from 0% to 0.0001% should have no more effect on your behavior in reality than a switch from 0% to 0.001% would have on a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.

Indeed, these games are already quite profitable for the majority of US states, and they are called lotteries.

Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or about 10^-9 (you bought a ticket), even when the prize is $100 million. This is because your utility of $100 million is nowhere near 100 million times as large as your marginal utility of $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic, the units of utility are hectoQALY, and the baseline income level is about $100,000.

I apologize for the extremely large number of decimals, but I had to do that in order to show any difference at all. I have bolded where the decimals first deviate from the baseline.

Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.

Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.9957322775 hQALY.

You gain a whopping 0.4 microQALY over your whole lifetime. I highly doubt you could even perceive such a difference.

And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.9917399955 hQALY.

You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.

Which shouldn’t be surprising, as (with 99.9996% probability) you have given up four months of your lifetime income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
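If you want to check this arithmetic yourself, here is the whole calculation in a few lines, using the same assumptions as above (log utility, $2 million lifetime income, $100,000 baseline, 10^-9 odds per ticket, about 4,000 tickets over a lifetime):

```python
import math

BASELINE = 100_000

def u(income):
    """Log utility relative to the baseline income, in hectoQALY."""
    return math.log(income / BASELINE)

no_ticket = u(2_000_000)
one_ticket = (1 - 1e-9) * u(2_000_000) + 1e-9 * u(102_000_000)
lifetime_player = (1 - 4e-6) * u(2_000_000 - 8_000) + 4e-6 * u(102_000_000)

print(f"no ticket:       {no_ticket:.10f} hQALY")        # 2.9957322736
print(f"one ticket:      {one_ticket:.10f} hQALY")       # 2.9957322775
print(f"lifetime player: {lifetime_player:.10f} hQALY")  # 2.9917399955
print(f"sacrifice: {no_ticket - lifetime_player:.4f} hQALY")  # ~0.0040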

Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or be given the winnings bequeathed by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.

Of course, these are not experimental conditions. So I need to devise a similar game, with smaller stakes but still large enough for people’s brains to care about the “almost impossible” category; maybe thousands? It’s not uncommon for an economics experiment to cost thousands, it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.

But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X to get a probability P of winning $Y*X, and we vary these numbers.

The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”

So I think the first step is to find some dilemmas, cases where people feel ambivalent, and different people differ in their choices. That’s a good role for a pilot study.

Then we take these dilemmas and start varying their probabilities slightly.

In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts.

But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.

Based solely on my own intuition, I guessed that the categories roughly follow this pattern:

Impossible: 0%

Almost impossible: 0.1%

Very unlikely: 1%

Unlikely: 10%

Fairly unlikely: 20%

Roughly even odds: 50%

Fairly likely: 80%

Likely: 90%

Very likely: 99%

Almost certain: 99.9%

Certain: 100%

So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.
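As a sketch of what the categorical model looks like, here it is in code, with my guessed anchors from the list above; the snapping rule (nearest anchor, with 0 and 1 kept sharp) is of course itself an assumption:

```python
# Guessed category anchors; the real ones may be fuzzy and person-specific.
CATEGORIES = [
    (0.0, "impossible"), (0.001, "almost impossible"),
    (0.01, "very unlikely"), (0.10, "unlikely"),
    (0.20, "fairly unlikely"), (0.50, "roughly even odds"),
    (0.80, "fairly likely"), (0.90, "likely"),
    (0.99, "very likely"), (0.999, "almost certain"),
    (1.0, "certain"),
]

def categorize(p):
    """Snap a true probability to the nearest anchor, keeping the endpoints
    sharp: nothing strictly between 0 and 1 is impossible or certain."""
    pool = CATEGORIES[1:-1] if 0.0 < p < 1.0 else CATEGORIES
    return min(pool, key=lambda cat: abs(cat[0] - p))[1]

print(categorize(0.0))                     # impossible
print(categorize(0.0001))                  # almost impossible (a big jump!)
print(categorize(0.01), categorize(0.02))  # very unlikely, both times
```

The tiny shift from 0% to 0.01% crosses a category edge, while the hundredfold-larger shift from 1% to 2% does not; that is precisely the discontinuity the experiment needs to detect.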

Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found them. But the process of both finding the categories and determining whether their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.

One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning. Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.

Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.

How do people think about probability?

Nov 27, JDN 2457690

(This topic was chosen by vote of my Patreons.)

In neoclassical theory, it is assumed (explicitly or implicitly) that human beings judge probability in something like the optimal Bayesian way: We assign prior probabilities to events, and then when confronted with evidence we use the observed data to update our prior probabilities into posterior probabilities. Then, when we have to make decisions, we maximize our expected utility subject to our posterior probabilities.

This, of course, is nothing like how human beings actually think. Even very intelligent, rational, numerate people only engage in a vague approximation of this behavior, and only when dealing with major decisions likely to affect the course of their lives. (Yes, I literally decide which universities to attend based upon formal expected utility models. Thus far, I’ve never been dissatisfied with a decision made that way.) No one decides what to eat for lunch or what to do this weekend based on formal expected utility models—or at least I hope they don’t, because at that point the computational cost far exceeds the expected benefit.

So how do human beings actually think about probability? Well, a good place to start is to look at ways in which we systematically deviate from expected utility theory.

A classic example is the Allais paradox. See if it applies to you.

In game A, you get $1 million, guaranteed.

In game B, you have a 10% chance of getting $5 million, an 89% chance of getting $1 million, but now you have a 1% chance of getting nothing.

Which do you prefer, game A or game B?

In game C, you have an 11% chance of getting $1 million, and an 89% chance of getting nothing.

In game D, you have a 10% chance of getting $5 million, and a 90% chance of getting nothing.

Which do you prefer, game C or game D?

I have to think about it for a little while and do some calculations, and it’s still very hard, because it depends crucially on my projected lifetime income (which could easily exceed $3 million with a PhD, especially in economics) and the precise form of my marginal utility (I think I have constant relative risk aversion, but I’m not sure what parameter to use precisely). In general I think I want to choose game A and game C, but I actually feel really ambivalent, because it’s not hard to find plausible parameters for my utility function where I should go for the gamble.

But if you’re like most people, you choose game A and game D.

There is no coherent expected utility function by which you would do this.

Why? Either a 10% chance of $5 million instead of $1 million is worth risking a 1% chance of nothing, or it isn’t. If it is, you should play B and D. If it’s not, you should play A and C. I can’t tell you for sure whether it is worth it—I can’t even fully decide for myself—but it either is or it isn’t.
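It’s easy to verify that this holds for any utility function whatsoever. In this little check (the utility values are arbitrary placeholders, only required to be increasing), the expected-utility gap between B and A always exactly equals the gap between D and C:

```python
# u0, u1, u5 = utilities of $0, $1 million, $5 million.
for u0, u1, u5 in [(0, 1, 1.2), (0, 1, 2), (0, 1, 5)]:
    eu_A = 1.00 * u1
    eu_B = 0.10 * u5 + 0.89 * u1 + 0.01 * u0
    eu_C = 0.11 * u1 + 0.89 * u0
    eu_D = 0.10 * u5 + 0.90 * u0
    print(f"u(5M)={u5}: B-A = {eu_B - eu_A:+.4f}, D-C = {eu_D - eu_C:+.4f}")
```

Algebraically, both differences reduce to 0.10 u($5M) − 0.11 u($1M) + 0.01 u($0); so preferring A but also D means contradicting yourself.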

Yet most people have a strong intuition that they should take game A but game D. Why? What does this say about how we judge probability?

The leading theory in behavioral economics right now is cumulative prospect theory, developed by the great Kahneman and Tversky, who essentially founded the field of behavioral economics. It’s quite intimidating to try to go up against them—which is probably why we should force ourselves to do it. Fear of challenging the favorite theories of the great scientists before us is how science stagnates.

I wrote about it more in a previous post, but as a brief review, cumulative prospect theory says that instead of judging based on a well-defined utility function, we instead consider gains and losses as fundamentally different sorts of thing, and in three specific ways:

First, we are loss-averse; we feel a loss about twice as intensely as a gain of the same amount.

Second, we are risk-averse for gains, but risk-seeking for losses; we assume that gaining twice as much isn’t actually twice as good (which is almost certainly true), but we also assume that losing twice as much isn’t actually twice as bad (which is almost certainly false and indeed contradictory with the previous).

Third, we judge probabilities as more important when they are close to certainty. We make a large distinction between a 0% probability and a 0.0000001% probability, but almost no distinction at all between a 41% probability and a 43% probability.

That last part is what I want to focus on for today. In Kahneman’s model, this is a continuous, monotonic function that maps 0 to 0 and 1 to 1, but systematically overestimates probabilities below but near 1/2 and systematically underestimates probabilities above but near 1/2.

It looks something like this, where red is true probability and blue is subjective probability:

[Figure: the cumulative prospect theory probability weighting function; red is true probability, blue is subjective probability.]

I don’t believe this is actually how humans think, for two reasons:

  1. It’s too hard. Humans are astonishingly innumerate creatures, given the enormous processing power of our brains. It’s true that we have some intuitive capacity for “solving” very complex equations, but that’s almost all within our motor system—we can “solve a differential equation” when we catch a ball, but we have no idea how we’re doing it. But probability judgments are often made consciously, especially in experiments like the Allais paradox; and the conscious brain is terrible at math. It’s actually really amazing how bad we are at math. Any model of normal human judgment should assume from the start that we will not do complicated math at any point in the process. Maybe you can hypothesize that we do so subconsciously, but you’d better have a good reason for assuming that.
  2. There is no reason to do this. Why in the world would any kind of optimization system function this way? You start with perfectly good probabilities, and then instead of using them, you subject them to some bizarre, unmotivated transformation that makes them less accurate and costs computing power? You may as well hit yourself in the head with a brick.

So, why might it look like we are doing this? Well, my proposal, admittedly still rather half-baked, is that human beings don’t assign probabilities numerically at all; we assign them categorically.

You may call this, for lack of a better term, categorical prospect theory.

My theory is that people don’t actually have in their head “there is an 11% chance of rain today” (unless they specifically heard that from a weather report this morning); they have in their head “it’s fairly unlikely that it will rain today”.

That is, we assign some small number of discrete categories of probability, and fit things into them. I’m not sure what exactly the categories are, and part of what makes my job difficult here is that they may be fuzzy-edged and vary from person to person, but roughly speaking, I think they correspond to the sort of things psychologists usually put on Likert scales in surveys: Impossible, almost impossible, very unlikely, unlikely, fairly unlikely, roughly even odds, fairly likely, likely, very likely, almost certain, certain. If I’m putting numbers on these probability categories, they go something like this: 0, 0.001, 0.01, 0.10, 0.20, 0.50, 0.8, 0.9, 0.99, 0.999, 1.

Notice that this would preserve the same basic effect as cumulative prospect theory: You care a lot more about differences in probability when they are near 0 or 1, because those are much more likely to actually shift your category. Indeed, as written, you wouldn’t care about a shift from 0.4 to 0.6 at all, despite caring a great deal about a shift from 0.001 to 0.01.

How does this solve the above problems?

  1. It’s easy. Not only do you avoid computing a probability and then recomputing it for no reason; you never even have to compute it precisely. Just get it within some vague error bounds and that will tell you which box it goes in. Instead of computing an approximation to a continuous function, you just slot things into a small number of discrete boxes, a dozen at the most.
  2. That explains why we would do it: It’s easy. Our brains need to conserve their capacity, and they did especially in our ancestral environment when we struggled to survive. Rather than having to iterate your approximation to arbitrary precision, you just get within 0.1 or so and call it a day. That saves time and computing power, which saves energy, which could save your life.

What new problems have I introduced?

  1. It’s very hard to know exactly where people’s categories are, if they vary between individuals or even between situations, and whether they are fuzzy-edged.
  2. If you take the model I just gave literally, even quite large probability changes will have absolutely no effect as long as they remain within a category such as “roughly even odds”.

With regard to 2, I think Kahneman may himself be able to save me, with his dual process theory concept of System 1 and System 2. What I’m really asserting is that System 1, the fast, intuitive judgment system, operates on these categories. System 2, on the other hand, the careful, rational thought system, can actually make use of proper numerical probabilities; it’s just very costly to boot up System 2 in the first place, much less ensure that it actually gets the right answer.

How might we test this? Well, I think that people are more likely to use System 1 when any of the following are true:

  1. They are under harsh time-pressure
  2. The decision isn’t very important
  3. The intuitive judgment is fast and obvious

And conversely they are likely to use System 2 when the following are true:

  1. They have plenty of time to think
  2. The decision is very important
  3. The intuitive judgment is difficult or unclear

So, it should be possible to arrange an experiment varying these parameters, such that in one treatment people almost always use System 1, and in another they almost always use System 2. And then, my prediction is that in the System 1 treatment, people will in fact not change their behavior at all when you change the probability from 15% to 25% (fairly unlikely) or 40% to 60% (roughly even odds).

To be clear, you can’t just present people with this choice between game E and game F:

Game E: You get a 60% chance of $50, and a 40% chance of nothing.

Game F: You get a 40% chance of $50, and a 60% chance of nothing.

People will obviously choose game E. If you can directly compare the numbers and one game is strictly better in every way, I think even without much effort people will be able to choose correctly.

Instead, what I’m saying is that if you make the following offers to two completely different sets of people, you will observe little difference in their choices, even though under expected utility theory you should.

Group I receives a choice between game E and game G:

Game E: You get a 60% chance of $50, and a 40% chance of nothing.

Game G: You get a 100% chance of $20.

Group II receives a choice between game F and game G:

Game F: You get a 40% chance of $50, and a 60% chance of nothing.

Game G: You get a 100% chance of $20.

Under two very plausible assumptions about marginal utility of wealth, I can fix what the rational judgment should be in each game.

The first assumption is that marginal utility of wealth is decreasing, so people are risk-averse (at least for gains, which these are). The second assumption is that most people’s lifetime income is at least two orders of magnitude higher than $50.

By the first assumption, group II should choose game G. The expected income is precisely the same, and being even ever so slightly risk-averse should make you go for the guaranteed $20.

By the second assumption, group I should choose game E. Yes, there is some risk, but because $50 should not be a huge sum to you, your risk aversion should be small and the higher expected income of $30 should sway you.

But I predict that most people will choose game G in both cases, and (within statistical error) the same proportion will choose F as chose E—thus showing that the difference between a 40% chance and a 60% chance was in fact negligible to their intuitive judgments.
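Here is that rational benchmark worked out explicitly, a sketch under the two assumptions above (I’ve assumed a lifetime wealth of $100,000, comfortably more than two orders of magnitude above the stakes, and log utility):

```python
import math

W = 100_000  # assumed lifetime wealth; the stakes are tiny relative to this

eu_E = 0.6 * math.log(W + 50) + 0.4 * math.log(W)  # game E
eu_F = 0.4 * math.log(W + 50) + 0.6 * math.log(W)  # game F
eu_G = math.log(W + 20)                            # game G

print(f"E - G: {eu_E - eu_G:+.8f}")  # positive: group I should gamble
print(f"F - G: {eu_F - eu_G:+.8f}")  # negative (barely): group II should not
```

With log utility, group II’s margin is tiny but negative, exactly as equal expected incomes plus slight risk aversion imply; group I’s margin is positive and much larger.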

However, this doesn’t actually disprove Kahneman’s theory; perhaps that part of the subjective probability function is just that flat. For that, I need to set up an experiment where I show discontinuity. I need to find the edge of a category and get people to switch categories sharply. Next week I’ll talk about how we might pull that off.

Bigotry is more powerful than the market

Nov 20, JDN 2457683

If there’s one message we can take from the election of Donald Trump, it is that bigotry remains a powerful force in our society. A lot of autoflagellating liberals have been trying to explain how this election result really reflects our failure to help people displaced by technology and globalization (despite the fact that personal income and local unemployment had negligible correlation with voting for Trump), or Hillary Clinton’s “bad campaign” that nonetheless managed the same proportion of Democrat turnout that re-elected her husband in 1996.

No, overwhelmingly, the strongest predictor of voting for Trump was being White, and living in an area where most people are White. (Well, actually, that’s if you exclude authoritarianism as an explanatory variable—but really I think that’s part of what we’re trying to explain.) Trump voters were actually concentrated in areas less affected by immigration and globalization. Indeed, there is evidence that these people aren’t racist because they have anxiety about the economy—they are anxious about the economy because they are racist. How does that work? Obama. They can’t believe that the economy is doing well when a Black man is in charge. So all the statistics and even personal experiences mean nothing to them. They know in their hearts that unemployment is rising, even as the BLS data clearly shows it’s falling.

The wide prevalence and enormous power of bigotry should be obvious. But economists rarely talk about it, and I think I know why: Their models say it shouldn’t exist. The free market is supposed to automatically eliminate all forms of bigotry, because they are inefficient.

The argument for why this is supposed to happen actually makes a great deal of sense: If a company has the choice of hiring a White man or a Black woman to do the same job, but they know that the market wage for Black women is lower than the market wage for White men (which it most certainly is), and they will do the same quality and quantity of work, why wouldn’t they hire the Black woman? And indeed, if human beings were rational profit-maximizers, this is probably how they would think.

More recently some neoclassical models have been developed to try to “explain” this behavior, but always without daring to give up the precious assumption of perfect rationality. So instead we get the two leading neoclassical theories of discrimination, which are statistical discrimination and taste-based discrimination.

Statistical discrimination is the idea that under asymmetric information (and we surely have that), features such as race and gender can act as signals of quality because they are correlated with actual quality for various reasons (usually left unspecified), so it is not irrational after all to choose based upon them, since they’re the best you have.

Taste-based discrimination is the idea that people are rationally maximizing preferences that simply aren’t oriented toward maximizing profit or well-being. Instead, they have this extra term in their utility function that says they should also treat White men better than women or Black people. It’s just this extra thing they have.

A small number of studies have been done trying to discern which of these is at work.

The correct answer, of course, is neither.

Statistical discrimination, at least, could be part of what’s going on. Knowing that Black people are less likely to be highly educated than Asians (as they definitely are) might actually be useful information in some circumstances… then again, you list your degree on your resume, don’t you? Knowing that women are more likely to drop out of the workforce after having a child could rationally (if coldly) affect your assessment of future productivity. But shouldn’t the fact that women CEOs outperform men CEOs be incentivizing shareholders to elect women CEOs? Yet that doesn’t seem to happen. Also, in general, people seem to be pretty bad at statistics.

The bigger problem with statistical discrimination as a theory is that it’s really only part of a theory. It explains why not all of the discrimination has to be irrational, but some of it still does. You need to explain why there are these huge disparities between groups in the first place, and statistical discrimination is unable to do that. In order for the statistics to differ this much, you need a past history of discrimination that wasn’t purely statistical.

Taste-based discrimination, on the other hand, is not a theory at all. It’s special pleading. Rather than admit that people are failing to rationally maximize their utility, we just redefine their utility so that whatever they happen to be doing now “maximizes” it.

This is really what makes the Axiom of Revealed Preference so insidious; if you really take it seriously, it says that whatever you do, must by definition be what you preferred. You can’t possibly be irrational, you can’t possibly be making mistakes of judgment, because by definition whatever you did must be what you wanted. Maybe you enjoy bashing your head into a wall, who am I to judge?

I mean, on some level taste-based discrimination is what’s happening; people think that the world is a better place if they put women and Black people in their place. So in that sense, they are trying to “maximize” some “utility function”. (By the way, most human beings behave in ways that are provably inconsistent with maximizing any well-defined utility function—the Allais Paradox is a classic example.) But the whole framework of calling it “taste-based” is a way of running away from the real explanation. If it’s just “taste”, well, it’s an unexplainable brute fact of the universe, and we just need to accept it. If people are happier being racist, what can you do, eh?

So I think it’s high time to start calling it what it is. This is not a question of taste. This is a question of tribal instinct. This is the product of millions of years of evolution optimizing the human brain to act in the perceived interest of whatever it defines as its “tribe”. It could be yourself, your family, your village, your town, your religion, your nation, your race, your gender, or even the whole of humanity or beyond into all sentient beings. But whatever it is, the fundamental tribe is the one thing you care most about. It is what you would sacrifice anything else for.

And what we learned on November 9 this year is that an awful lot of Americans define their tribe in very narrow terms. Nationalistic and xenophobic at best, racist and misogynistic at worst.

But I suppose this really isn’t so surprising, if you look at the history of our nation and the world. Segregation was not outlawed in US schools until 1955, and there are women who voted in this election who were born before American women got the right to vote in 1920. The nationalistic backlash against sending jobs to China (which was one of the chief ways that we reduced global poverty to its lowest level ever, by the way) really shouldn’t seem so strange when we remember that over 100,000 Japanese-Americans were literally forcibly relocated into camps as recently as 1942. The fact that so many White Americans seem all right with the biases against Black people in our justice system may not seem so strange when we recall that systemic lynching of Black people in the US didn’t end until the 1960s.

The wonder, in fact, is that we have made as much progress as we have. Tribal instinct is not a strange aberration of human behavior; it is our evolutionary default setting.

Indeed, perhaps it is unreasonable of me to ask humanity to change its ways so fast! We had millions of years to learn how to live the wrong way, and I’m giving you only a few centuries to learn the right way?

The problem, of course, is that the pace of technological change leaves us with no choice. It might be better if we could wait a thousand years for people to gradually adjust to globalization and become cosmopolitan; but climate change won’t wait a hundred, and nuclear weapons won’t wait at all. We are thrust into a world that is changing very fast indeed, and I understand that it is hard to keep up; but there is no way to turn back that tide of change.

Yet “turn back the tide” does seem to be part of the core message of the Trump voter, once you get past the racial slurs and sexist slogans. People are afraid of what the world is becoming. They feel that it is leaving them behind. Coal miners fret that we are leaving them behind by cutting coal consumption. Factory workers fear that we are leaving them behind by moving the factory to China or inventing robots to do the work in half the time for half the price.

And truth be told, they are not wrong about this. We are leaving them behind. Because we have to. Because coal is polluting our air and destroying our climate, we must stop using it. Moving the factories to China has raised them out of the most dire poverty, and given us a fighting chance toward ending world hunger. Inventing the robots is only the next logical step in the process that has carried humanity forward from the squalor and suffering of primitive life to the security and prosperity of modern society—and it is a step we must take, for the progress of civilization is not yet complete.

They wouldn’t have to let themselves be left behind, if they were willing to accept our help and learn to adapt. That carbon tax that closes your coal mine could also pay for your basic income and your job-matching program. The increased efficiency from the automated factories could provide an abundance of wealth that we could redistribute and share with you.

But this would require them to rethink their view of the world. They would have to accept that climate change is a real threat, and not a hoax created by… uh… never was clear on that point actually… the Chinese maybe? But 45% of Trump supporters don’t believe in climate change (and that’s actually not as bad as I’d have thought). They would have to accept that what they call “socialism” (which really is more precisely described as social democracy, or tax-and-transfer redistribution of wealth) is actually something they themselves need, and will need even more in the future. But despite rising inequality, redistribution of wealth remains fairly unpopular in the US, especially among Republicans.

Above all, it would require them to redefine their tribe, and start listening to—and valuing the lives of—people that they currently do not.

Perhaps we need to redefine our tribe as well; many liberals have argued that we mistakenly—and dangerously—failed to include people like Trump voters in our tribe. But to be honest, that rings a little hollow to me: We aren’t the ones threatening to deport people or ban them from entering our borders. We aren’t the ones who want to build a wall (and though some have in fact joked about building a wall to separate the West Coast from the rest of the country, I don’t think many people really want to do that). Perhaps we live in a bubble of liberal media? But I make a point of reading outlets like The American Conservative and the National Review for other perspectives (I usually disagree, but I do at least read them); how many Trump voters do you think have ever read the New York Times, let alone the Huffington Post? Cosmopolitans almost by definition have the more inclusive tribe, the more open perspective on the world (in fact, do I even need the “almost”?).

Nor do I think we are actually ignoring their interests. We want to help them. We offer to help them. In fact, I want to give these people free money—that’s what a basic income would do: it would take money from people like me and give it to people like them—and they won’t let us, because that’s “socialism”! Rather, we are simply refusing to accept their offered solutions, because those so-called “solutions” are beyond unworkable; they are absurd, immoral, and insane. We can’t bring back the coal mining jobs, unless we want Florida underwater in 50 years. We can’t reinstate the trade tariffs, unless we want millions of people in China to starve. We can’t tear down all the robots and force factories to use manual labor, unless we want to trigger a national—and then global—economic collapse. We can’t do it their way. So we’re trying to offer them another way, a better way, and they’re refusing to take it. So who here is ignoring the concerns of whom?

Of course, the fact that it’s really their fault doesn’t solve the problem. We do need to take it upon ourselves to do whatever we can, because, regardless of whose fault it is, the world will still suffer if we fail. And that presents us with our most difficult task of all, a task that I fully expect to spend my career attempting and yet probably still fail at: We must understand the human tribal instinct well enough that we can finally begin to change it. We must know enough about how human beings form their mental tribes that we can actually begin to shift those parameters. We must, in other words, cure bigotry—and we must do it now, for we are running out of time.

Congratulations, America.

Nov 13, JDN 2457676

Congratulations, you elected Donald Trump.

Instead of the candidate with decades of experience as Secretary of State and US Senator, not to mention an internationally renowned philanthropist, you chose the first President in history with no experience whatsoever in government or the military.

Instead of the candidate with the most comprehensive, evidence-based plan for action against climate change (that is, the only candidate who supports nuclear energy), you elected the one who is planning to appoint a climate-change denier as head of the EPA.

Perhaps to punish the candidate who carried out a longstanding custom of using private email servers because the public servers were so defective, you accepted the candidate who was facing lawsuits alleging not only mass fraud but also sexual assault.

Perhaps based on the Russian propaganda—not kidding, read the URL—saying that one candidate could trigger a Third World War, you chose the candidate who has no idea how international diplomacy works and wants to convert NATO into a mercantilist empire (and by the way has no apparent qualms about deploying nuclear weapons).

Because one candidate was “too close to Wall Street” in some vague, ill-defined sense (oh my god, she gave speeches! And accepted donations!), you elected the other one, who has already vowed to roll back the financial regulations that are currently protecting us from a repeat of the Great Recession.

Because you didn’t trust the candidate with one of the highest honesty ratings ever recorded, you elected the one who is surrounded by hundreds of scandals and never even released his tax returns.

Even if you didn’t outright agree with it, you were willing to look past his promise to deport 11 million people and his long history of bigotry toward a wide variety of ethnic groups.

Even his Vice President, who seems like a great statesman simply by comparison, is one of the most fanatical right-wing Vice Presidents we’ve had in decades. He opposes not just abortion, but birth control. He supports—and has signed as governor—“religious freedom” bills designed to legalize discrimination against LGBT people.

Congratulations, America. You literally elected the candidate that was supported by Vladimir Putin, Kim Jong-un, the American Nazi Party, and the Ku Klux Klan. Now, reversed stupidity is not intelligence; being endorsed by someone horrible doesn’t necessarily mean you are horrible. But when this many horrible people endorse you, and they all give the same reasons, and those reasons are things you genuinely have in common with them, like bigotry and authoritarianism… yeah, I think it does say something about you.

Now, to be fair, much of the blame here goes to the Electoral College.

By current counts, Hillary Clinton has won the popular vote by at least 500,000 votes. It is projected that she may eventually win by as much as 2 million. This will be the fourth time in US history that the Electoral College winner was definitely not the popular vote winner.

But even that is only possible because Hillary Clinton did not win the overwhelming landslide she deserved. The Electoral College should have been irrelevant, because she should have won at least 60% of every demographic in every state. Our whole nation should have declared together in one voice that we will not tolerate bigotry and authoritarianism. The fact that that didn’t happen is reason enough to be ashamed; even if Clinton narrowly wins the popular vote, that still says something truly terrible about our country.

Indeed, this is what it says:

We slightly preferred democracy over fascism.

We slightly preferred liberty over tyranny.

We slightly preferred justice over oppression.

We slightly preferred feminism over misogyny.

We slightly preferred equality over racism.

We slightly preferred reason over instinct.

We slightly preferred honesty over fraud.

We slightly preferred sustainability over ecological devastation.

We slightly preferred competence over incompetence.

We slightly preferred diplomacy over impulsiveness.

We slightly preferred humility over narcissism.

We were faced with the easiest choice ever given to us in any election, and just a narrow plurality of us got the answer right—and under the way our system works, even that wasn’t enough.

I sincerely hope that Donald Trump is not as bad as I believe he is. The feeling of vindication at being able to tell so many right-wing family members “I told you so” pales in comparison to the fear and despair for the millions of people who will die from his belligerent war policy, his incompetent economic policy, and his insane (anti-)environmental policy. Even the working-class White people who voted for him will surely suffer greatly under his regime.

Yes, I sincerely hope that he is not as bad as we think he is, though I remember saying that George W. Bush was not as bad as we thought when he was elected—and he was. He was. His Iraq War killed hundreds of thousands of people based on lies. His economic policy triggered the worst economic collapse since the Great Depression. So now I have to ask: What if he is as bad as we think?

Fortunately, I do not believe that Trump will literally trigger a global nuclear war.

Then again, I didn’t believe he would win, either.