The game theory of holidays

Dec 25, JDN 2457748

When this post goes live, it will be Christmas; so I felt I should make the topic somehow involve the subject of Christmas, or holidays in general.

I decided I would pull back for as much perspective as possible, and ask this question: Why do we have holidays in the first place?

All human cultures have holidays, but not the same ones. Cultures with a lot of mutual contact will tend to synchronize their holidays temporally, but still often preserve wildly different rituals on those same holidays. Yes, we celebrate “Christmas” in both the US and in Austria; but I think they are baffled by the Elf on the Shelf and I know that I find the Krampus bizarre and terrifying.

Most cultures from temperate climates have some sort of celebration around the winter solstice, probably because this is an ecologically important time for us. Our food production is about to get much, much lower, so we’d better make sure we have sufficient quantities stored. (In an era of globalization and processed food that lasts for months, this is less important, of course.) But they aren’t the same celebration, and they generally aren’t exactly on the solstice.

What is a holiday, anyway? We all get off work, we visit our families, and we go through a series of ritualized actions with some sort of symbolic cultural meaning. Why do we do this?

First, why not work all year round? Wouldn’t that be more efficient? Well, no, because human beings are subject to exhaustion. We need to rest at least sometimes.

Well, why not simply have each person rest whenever they need to? Well, how do we know they need to? Do we just take their word for it? People might exaggerate their need for rest in order to shirk their duties and free-ride on the work of others.

It would help if we could have pre-scheduled rest times, to remove individual discretion.

Should we have these at the same time for everyone, or at different times for each person?

Well, from the perspective of efficiency, different times for each person would probably make the most sense. We could trade off work in shifts that way, and ensure production keeps moving. So why don’t we do that?

Well, now we get to the game theory part. Do you want to be the only one who gets today off? Or do you want other people to get today off as well?

You probably want other people to be off work today as well, at least your family and friends so that you can spend time with them. In fact, this is probably more important to you than having any particular day off.

We can write this as a normal-form game. Suppose we have four days to choose from, 1 through 4, and two people, who can each decide which day to take off, or they can not take a day off at all. They each get a payoff of 1 if they take the same day off, 0 if they take different days off, and -1 if they don’t take a day off at all. This is our resulting payoff matrix:

        1       2       3       4       None
1       1/1     0/0     0/0     0/0     0/-1
2       0/0     1/1     0/0     0/0     0/-1
3       0/0     0/0     1/1     0/0     0/-1
4       0/0     0/0     0/0     1/1     0/-1
None    -1/0    -1/0    -1/0    -1/0    -1/-1

It’s pretty obvious that each person will take some day off. But which day? How do they decide that?

This is what we call a coordination game; there are many possible equilibria to choose from, and the payoffs are highest if people can somehow coordinate their behavior.

If they can actually coordinate directly, it’s simple; one person should just suggest a day, and since the other one is indifferent, they have no reason not to agree to that day. From that point forward, they have coordinated on an equilibrium (a Nash equilibrium, in point of fact).
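
To make the coordination structure concrete, here is a minimal sketch (in Python, purely for illustration) that brute-forces the pure-strategy Nash equilibria of the payoff matrix above; the labels and function names are my own choices, not anything standard.

```python
# A minimal sketch of the holiday coordination game above, checking which
# strategy pairs are pure-strategy Nash equilibria by brute force.
# The days and payoffs match the matrix in the text.

days = ["1", "2", "3", "4", "None"]

def payoff(my_choice, their_choice):
    """Payoff to 'me': 1 if we share a day off, 0 if we take different
    days off, -1 if I take no day off at all."""
    if my_choice == "None":
        return -1
    return 1 if my_choice == their_choice else 0

def is_nash(a, b):
    """Neither player can gain by unilaterally switching strategies."""
    best_a = max(payoff(alt, b) for alt in days)
    best_b = max(payoff(alt, a) for alt in days)
    return payoff(a, b) == best_a and payoff(b, a) == best_b

equilibria = [(a, b) for a in days for b in days if is_nash(a, b)]
print(equilibria)  # [('1', '1'), ('2', '2'), ('3', '3'), ('4', '4')]
```

The four coordinated outcomes are all equally good equilibria, and nothing in the payoffs themselves picks one over another; that is exactly the problem.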

But suppose they can’t talk to each other, or suppose there aren’t two people to coordinate but dozens, or hundreds—or even thousands, once you include all the interlocking social networks. How could they find a way to coordinate on the same day?

They need something more intuitive, some “obvious” choice that they can call upon that they hope everyone else will as well. Even if they can’t communicate, as long as they can observe whether their coordination has succeeded or failed they can try to set these “obvious” choices by successive trial and error.

The result is what we call a Schelling point; players converge on this equilibrium not because there’s actually anything better about it, but because it seems obvious and they expect everyone else to think it will also seem obvious.

This is what I think is happening with holidays. Yes, we make up stories to justify them, or sometimes even have genuine reasons for them (Independence Day actually makes sense being on July 4, for instance), but the ultimate reason why we have a holiday on one day rather than another is that we had to have it some time, and this was a way of breaking the deadlock and finally setting a date.

In fact, weekends are probably a better solution to this coordination problem than holidays, because human beings need rest on a fairly regular basis, not just every few months. Holiday seasons now serve more as an opportunity to have long vacations that allow travel, rather than as a rest between work days. But even those we had to originally justify as a matter of religion: Jews would not work on Saturday, Christians would not work on Sunday, so together we will not work on Saturday or Sunday. The logic here is hardly impeccable (why not make it religion-specific, for example?), but it was enough to give us a Schelling point.

This makes me wonder about what it would take to create a new holiday. How could we actually get people to celebrate Darwin Day or Sagan Day on a large scale, for example? Darwin and Sagan are both a lot more worth celebrating than most of the people who get holidays—Columbus especially leaps to mind. But even among those of us who really love Darwin and Sagan, these are sort of half-hearted celebrations that never attain the same status as Easter, much less Thanksgiving or Christmas.

I’d also like to secularize—or at least ecumenicalize—the winter solstice celebration. Christianity shouldn’t have a monopoly on what is really something like a human universal, or at least a “humans who live in temperate climates” universal. It really isn’t Christmas anyway; most of what we do is celebrating Yule, overlaid with a modern expression of mass consumption that is thoroughly born of modern capitalism. We have no reason to think Jesus was actually born in December, much less on the 25th. But that’s around the time when lots of other celebrations were going on anyway, and it’s much easier to convince people that they should change the name of their holiday than that they should stop celebrating it and start celebrating something else—I think precisely because that still preserves the Schelling point.

Creating holidays has obviously been done before—indeed it is literally the only way holidays ever come into existence. But part of their structure seems to be that the more transparent the reasons for choosing that date and those rituals, the more empty and insincere the holiday seems. Once you admit that this is an arbitrary choice meant to converge on an equilibrium, it stops seeming like a good choice anymore.

Now, if we could find dates and rituals that really had good reasons behind them, we could probably escape that; but I’m not entirely sure we can. We can use Darwin’s birthday—but why not the first edition publication of On the Origin of Species? And Darwin himself really is that important; but why Sagan Day and not Einstein Day or Niels Bohr Day… and so on? The winter solstice itself is a very powerful choice; its deep astronomical and ecological significance might actually make it a strong enough attractor to defeat all contenders. But what do we do on the winter solstice celebration? What rituals best capture the feelings we are trying to express, and how do we defend those rituals against criticism and competition?

In the long run, I think what usually happens is that people just sort of start doing something, and eventually enough people are doing it that it becomes a tradition. Maybe it always feels awkward and insincere at first. Maybe you have to be prepared for it to change into something radically different as the decades roll on.

This year the winter solstice is on December 21st. I think I’ll be lighting a candle and gazing into the night sky, reflecting on our place in the universe. Unless you’re reading this on Patreon, by the time this goes live, you’ll have missed it; but you can try later, or maybe next year.

In fifty years all the cool kids will be doing it, I’m sure.

Student debt crisis? What student debt crisis?

Dec 18, JDN 2457741
As of this writing, I have over $99,000 in student loans. This is a good thing. It means that I was able to pay for my four years of college, and two years of a master’s program, in order to be able to start this coming five years of a PhD. When I have concluded these eleven years of higher education and incurred six times the world per-capita income in debt, what then will become of me? Will I be left to live on the streets, destitute and overwhelmed by debt?

No. I’ll have a PhD. The average lifetime income of individuals with PhDs in the United States is $3.4 million. Indeed, the median annual income for economists in the US is almost exactly what I currently owe in debt—so if I save well, I could very well pay it off in just a few years. With an advanced degree in economics like mine, or in a similarly high-paying field such as physics, medicine, or law, one can expect the higher end of that scale, $4 million or more; with a degree in a less-lucrative field such as art, literature, history, or philosophy, one would have to settle for “only,” say, $3 million. The average lifetime income in the US for someone without any college education is only $1.2 million. So even in literature or history, a PhD is worth about $2 million in future income.

On average, an additional year of college results in a gain in lifetime future earnings of about 15% to 20%. Even when you adjust for interest rates and temporal discounting, this is a rate of return that would make any stock trader envious.
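
To see why, here is a rough back-of-the-envelope sketch; every number in it (salary, tuition, discount rate, career length) is an illustrative assumption of mine, not a figure from the studies above.

```python
# A back-of-the-envelope sketch (all numbers are illustrative assumptions):
# compare the present value of a 15% boost to a $50,000 baseline salary
# over a 40-year career against a year of tuition plus forgone earnings,
# discounted at 5% per year.

baseline_salary = 50_000      # assumed annual earnings without the extra year
boost = 0.15                  # low end of the 15-20% earnings gain
cost_today = 30_000 + 50_000  # assumed tuition plus a year of forgone earnings
discount_rate = 0.05
career_years = 40

extra_income = boost * baseline_salary
present_value = sum(extra_income / (1 + discount_rate) ** t
                    for t in range(1, career_years + 1))

print(round(present_value))                  # ~128,700 in present-value terms
print(round(present_value / cost_today, 2))  # ~1.6x the up-front cost
```

Even at the low end of the estimates and with a healthy discount rate, the present value of the extra earnings comfortably exceeds the up-front cost.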

In keeping with the law of diminishing returns, the rates of return on education in poor countries are even larger, often mind-bogglingly huge; the increase in lifetime income from a year of college education in Botswana was estimated at 38%. This implies that someone who graduates from college in Botswana earns roughly four times as much money as someone who only finished high school (compounding 38% over four years of college gives 1.38^4 ≈ 3.6).

We who pay $100,000 to receive an additional $2 to $3 million can hardly be called unfortunate.

Indeed, we are mind-bogglingly fortunate; we have been given an opportunity to better ourselves and the society we live in that is all but unprecedented in human history, and that is granted only to a privileged few even today. Right now, only about half of adults in the most educated countries in the world (Canada, Russia, Israel, Japan, Luxembourg, South Korea, and the United States) ever go to college. Only 30% of Americans ever earn a bachelor’s degree, and as recently as 1975 that figure was only 20%. Worldwide, the majority of people never graduate from high school. The average length of schooling in developing countries today is six years (that is, sixth grade), and this is an enormous improvement from the two years of average schooling found in developing countries in 1950.

If we look a bit further back in history, the improvements in education are even more staggering. In the United States in 1910, only 13.5% of people graduated high school, and only 2.7% completed a bachelor’s degree. There was no student debt crisis then, to be sure—because there were no college students.

Indeed, I have been underestimating the benefits of education thus far, because education is both a public and private good. The figures I’ve just given have been only the private financial return on education—the additional income received by an individual because they went to college. But there is also a non-financial return, such as the benefits of working in a more appealing or exciting career and the benefits of learning for its own sake. The reason so many people do go into history and literature instead of economics and physics very likely has to do with valuing these other aspects of education as highly as or even more highly than financial income, and it is entirely rational for people to do so. (An interesting survey question I’ve alas never seen asked: “How much money would we have to give you right now to convince you to quit working in philosophy for the rest of your life?”)

Yet even more important is the public return on education, the increased productivity and prosperity of our society as a result of greater education—and these returns are enormous. For every $1 spent on education in the US, the economy grows by an estimated $1.50. Public returns on college education worldwide are on the order of 10%-20% per year of education. This is over and above the 15-20% return already being made by the individuals going to school. This means that raising the average level of education in a country by just one year raises that country’s income by between 25% and 40%.

Indeed, perhaps the simplest way to understand the enormous social benefits of education is to note the strong correlation between education level and income level. This graph comes from the UN Human Development Report Data Explorer; it plots the HDI education index (which ranges from 0, least educated, to 1, most educated) and the per-capita GDP at purchasing power parity (on a log scale, so that each increment corresponds to a proportional increase in GDP); as you can see, educated countries tend to be rich countries, and vice-versa.

[Figure: HDI education index plotted against per-capita GDP at purchasing power parity (log scale), by country]

Of course, income drives education just as education drives income. But more detailed econometric studies generally (though not without some controversy) show the same basic result: The more educated a country’s people become, the richer that country becomes.

And indeed, the United States is a spectacularly rich country. The figure of “$1 trillion in college debt” sounds alarming (and has been used to such effect in many a news article, ranging from the New York Daily News, Slate, and POLITICO to USA Today and CNN all the way to Bloomberg, MarketWatch, and Business Insider, and even getting support from the Consumer Financial Protection Bureau and The Federal Reserve Bank of New York).

But the United States has a total GDP of over $18.6 trillion, and total net wealth somewhere around $84 trillion. Is it really so alarming that our nation’s most important investment would result in debt of less than two percent of our total nation’s wealth? Democracy Now asks who is getting rich off of $1.3 trillion in student debt? All of us—the students especially.

In fact, the probability of defaulting on student loans is inversely proportional to the amount of loans a student has. Students with over $100,000 in student debt default only 18% of the time, while students with less than $5,000 in student debt default 34% of the time. This should be shocking to those who think that we have a crisis of too much student debt; if student debt were an excess burden that is imposed upon us for little gain, default rates should rise as borrowing amounts increase, as we observe, for example, with credit cards: there is a positive correlation between carrying higher balances and being more likely to default. (This also raises doubts about the argument that higher debt loads should carry higher interest rates—why, if the default rate doesn’t go up?) But it makes perfect sense if you realize that college is an investment—indeed, almost certainly both the most profitable and the most socially responsible investment most people will ever have the opportunity to make. More debt means you had access to more credit to make a larger investment—and therefore your payoff was greater and you were more likely to be able to repay the debt.

Yes, job prospects were bad for college graduates right after the Great Recession—because it was right after the Great Recession, and job prospects were bad for everyone. Indeed, the unemployment rate for people with college degrees was substantially lower than for those without college degrees, all the way through the Second Depression. The New York Times has a nice little gadget where you can estimate the unemployment rate for college graduates; my hint for you is that I just said it’s lower, and I still guessed too high. There was variation across fields, of course; unsurprisingly computer science majors did extremely well and humanities majors did rather poorly. Underemployment was a big problem, but again, clearly because of the recession, not because going to college was a mistake. In fact, unemployment for young college graduates has always been so much lower than for young high school graduates that the maximum unemployment rate for young college graduates since the year 2000 (about 9%) is less than the minimum unemployment rate for young high school graduates over that same period (10%). Young high school dropouts have fared even worse; their minimum unemployment rate since 2000 was 18%, while their maximum was a terrifying Great Depression-level of 32%. Education isn’t just a good investment—it’s an astonishingly good investment.

There are a lot of things worth panicking about, now that Trump has been elected President. But student debt isn’t one of them. This is a very smart investment, made with a reasonable portion of our nation’s wealth. If you have student debt like I do, make sure you have enough—otherwise you might not be able to pay it back.

What good are macroeconomic models? How could they be better?

Dec 11, JDN 2457734

One thing that I don’t think most people know, but which is immediately obvious to any student of economics at the college level or above, is that there is a veritable cornucopia of different macroeconomic models. There are growth models (the Solow model, the Harrod-Domar model, the Ramsey model), monetary policy models (IS-LM, aggregate demand-aggregate supply), trade models (the Mundell-Fleming model, the Heckscher-Ohlin model), large-scale computational models (dynamic stochastic general equilibrium, agent-based computational economics), and I could go on.

This immediately raises the question: What are all these models for? What good are they?

A cynical view might be that they aren’t useful at all, that this is all false mathematical precision which makes economics persuasive without making it accurate or useful. And with such a proliferation of models and contradictory conclusions, I can see why such a view would be tempting.

But many of these models are useful, at least in certain circumstances. They aren’t completely arbitrary. Indeed, one of the litmus tests of the last decade has been how well the models held up against the events of the Great Recession and following Second Depression. The Keynesian and cognitive/behavioral models did rather well, albeit with significant gaps and flaws. The Monetarist, Real Business Cycle, and most other neoclassical models failed miserably, as did Austrian and Marxist notions so fluid and ill-defined that I’m not sure they deserve to even be called “models”. So there is at least some empirical basis for deciding what assumptions we should be willing to use in our models. Yet even if we restrict ourselves to Keynesian and cognitive/behavioral models, there are still a great many to choose from, which often yield inconsistent results.

So let’s compare with a science that is uncontroversially successful: Physics. How do mathematical models in physics compare with mathematical models in economics?

Well, there are still a lot of models, first of all. There’s the Bohr model, the Schrodinger equation, the Dirac equation, Newtonian mechanics, Lagrangian mechanics, Bohmian mechanics, Maxwell’s equations, Faraday’s law, Coulomb’s law, the Einstein field equations, the Minkowski metric, the Schwarzschild metric, the Rindler metric, Feynman-Wheeler theory, the Navier-Stokes equations, and so on. So a cornucopia of models is not inherently a bad thing.

Yet, there is something about physics models that makes them more reliable than economics models.

Partly it is that the systems physicists study are literally two dozen orders of magnitude or more smaller and simpler than the systems economists study. Their task is inherently easier than ours.

But it’s not just that; their models aren’t just simpler—actually they often aren’t. The Navier-Stokes equations are a lot more complicated than the Solow model. They’re also clearly a lot more accurate.

The feature that models in physics seem to have that models in economics do not is something we might call nesting, or maybe consistency. Models in physics don’t come out of nowhere; you can’t just make up your own new model based on whatever assumptions you like and then start using it—which you very much can do in economics. Models in physics are required to fit consistently with one another, and usually inside one another, in the following sense:

The Dirac equation strictly generalizes the Schrodinger equation, which strictly generalizes the Bohr model. Bohmian mechanics is consistent with quantum mechanics, which strictly generalizes Lagrangian mechanics, which generalizes Newtonian mechanics. The Einstein field equations are consistent with Maxwell’s equations and strictly generalize the Minkowski, Schwarzschild, and Rindler metrics. Maxwell’s equations strictly generalize Faraday’s law and Coulomb’s law.

In other words, there are a small number of canonical models—the Dirac equation, Maxwell’s equations, and the Einstein field equations, essentially—inside which all other models are nested. The simpler models like Coulomb’s law and Newtonian mechanics are not contradictory with these canonical models; they are contained within them, subject to certain constraints (such as macroscopic systems far below the speed of light).

This is something I wish more people understood (I blame Kuhn for confusing everyone about what paradigm shifts really entail); Einstein did not overturn Newton’s laws, he extended them to domains where they previously had failed to apply.

This is why it is sensible to say that certain theories in physics are true; they are the canonical models that underlie all known phenomena. Other models can be useful, but not because we are relativists about truth or anything like that; Newtonian physics is a very good approximation of the Einstein field equations at the scale of many phenomena we care about, and is also much more mathematically tractable. If we ever find ourselves in situations where Newton’s equations no longer apply—near a black hole, traveling near the speed of light—then we know we can fall back on the more complex canonical model; but when the simpler model works, there’s no reason not to use it.

There are still very serious gaps in the knowledge of physics; in particular, there is a fundamental gulf between quantum mechanics and the Einstein field equations that has been unresolved for decades. A solution to this “quantum gravity problem” would be essentially a guaranteed Nobel Prize. So even a canonical model can be flawed, and can be extended or improved upon; the result is then a new canonical model which we now regard as our best approximation to truth.

Yet the contrast with economics is still quite clear. We don’t have one or two or even ten canonical models to refer back to. We can’t say that the Solow model is an approximation of some greater canonical model that works for these purposes—because we don’t have that greater canonical model. We can’t say that agent-based computational economics is approximately right, because we have nothing to approximate it to.

I went into economics thinking that neoclassical economics needed a new paradigm. I have now realized something much more alarming: Neoclassical economics doesn’t really have a paradigm. Or if it does, it’s a very informal paradigm, one that is expressed by the arbitrary judgments of journal editors, not one that can be written down as a series of equations. We assume perfect rationality, except when we don’t. We assume constant returns to scale, except when that doesn’t work. We assume perfect competition, except when that doesn’t get the results we wanted. The agents in our models are infinite identical psychopaths, and they are exactly as rational as needed for the conclusion I want.

This is quite likely why there is so much disagreement within economics. When you can permute the parameters however you like with no regard to a canonical model, you can more or less draw whatever conclusion you want, especially if you aren’t tightly bound to empirical evidence. I know a great many economists who are sure that raising minimum wage results in large disemployment effects, because the models they believe in say that it must, even though the empirical evidence has been quite clear that these effects are small if they are present at all. If we had a canonical model of employment that we could calibrate to the empirical evidence, that couldn’t happen anymore; there would be a coefficient I could point to that would refute their argument. But when every new paper comes with a new model, there’s no way to do that; one set of assumptions is as good as another.

Indeed, as I mentioned in an earlier post, a remarkable number of economists seem to embrace this relativism. “There is no true model,” they say; “we do what is useful.” Recently I encountered a book by the eminent economist Deirdre McCloskey which, though I confess I haven’t read it in its entirety, appears to be trying to argue that economics is just a meaningless language game that doesn’t have or need to have any connection with actual reality. (If any of you have read it and think I’m misunderstanding it, please explain. As it is, I haven’t bought it for a reason any economist should respect: I am disinclined to incentivize such writing.)

Creating such a canonical model would no doubt be extremely difficult. Indeed, it is a task that would require the combined efforts of hundreds of researchers and could take generations to achieve. The true equations that underlie the economy could be totally intractable even for our best computers. But quantum mechanics wasn’t built in a day, either. The key challenge here lies in convincing economists that this is something worth doing—that if we really want to be taken seriously as scientists we need to start acting like them. Scientists believe in truth, and they are trying to find it out. While not immune to tribalism or ideology or other human limitations, they resist them as fiercely as possible, always turning back to the evidence above all else. And in their combined strivings, they attempt to build a grand edifice, a universal theory to stand the test of time—a canonical model.

Experimentally testing categorical prospect theory

Dec 4, JDN 2457727

In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.

The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)

If that’s the case, then switching from 0% to 0.0001% should have no more effect in reality than a switch from 0% to 0.001% would have for a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.

Indeed, these games are already quite profitable for the majority of US states, and they are called lotteries.

Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or about 10^-9 (you bought a ticket), even when the prize is $100 million. This is because your utility of $100 million is nowhere near 100 million times as large as your marginal utility of $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic, the units of utility are hectoQALY, and the baseline level is about $100,000.

I apologize for the extremely large number of decimals, but I had to do that in order to show any difference at all. I have bolded where the decimals first deviate from the baseline.

Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.

Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.9957322775 hQALY.

You gain a whopping 0.4 microQALY over your whole lifetime. I highly doubt you could even perceive such a difference.

And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.9917399955 hQALY.

You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.

Which shouldn’t be surprising, as (with 99.9996% probability) you have given up four months of your lifetime income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
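
If you want to check the arithmetic, here is a short sketch that reproduces the numbers above: log utility measured in hectoQALY with a $100,000 baseline, $2 million lifetime income, a 10^-9 chance of a $100 million jackpot, and roughly 4,000 lifetime tickets costing $8,000 in total.

```python
import math

# Reproduces the lottery arithmetic above.

def utility(lifetime_income):
    """Log utility in hectoQALY relative to a $100,000 baseline."""
    return math.log(lifetime_income / 100_000)

p_win_one_ticket = 1e-9
jackpot = 100_000_000
income = 2_000_000

no_ticket  = utility(income)
one_ticket = (1 - p_win_one_ticket) * utility(income) \
             + p_win_one_ticket * utility(income + jackpot)

p_win_lifetime = 4e-6   # roughly 4,000 tickets at 1e-9 each
lifetime_play  = (1 - p_win_lifetime) * utility(income - 8_000) \
                 + p_win_lifetime * utility(income + jackpot)

print(no_ticket)       # 2.9957322736...
print(one_ticket)      # 2.9957322775...
print(lifetime_play)   # 2.9917399955...
print(no_ticket - lifetime_play)  # ~0.004 hQALY, i.e. ~0.4 QALY
```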

Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or be given the winnings bequeathed by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.

Of course, these are not experimental conditions. So I need to devise a similar game, with smaller stakes but still large enough for people’s brains to care about the “almost impossible” category; maybe thousands? It’s not uncommon for an economics experiment to cost thousands, it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.

But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X to get a P probability of $Y*X, and we vary these numbers.

The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”

So I think the first step is to find some dilemmas, cases where people feel ambivalent, and different people differ in their choices. That’s a good role for a pilot study.

Then we take these dilemmas and start varying their probabilities slightly.

In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts.

But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.

Based solely on my own intuition, I guessed that the categories roughly follow this pattern:

Impossible: 0%

Almost impossible: 0.1%

Very unlikely: 1%

Unlikely: 10%

Fairly unlikely: 20%

Roughly even odds: 50%

Fairly likely: 80%

Likely: 90%

Very likely: 99%

Almost certain: 99.9%

Certain: 100%

So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.
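
Here is a minimal sketch of what that categorical mapping might look like, using the category values I guessed above; the edges between categories (here just nearest-value snapping, with a perfectly sharp edge at zero) are illustrative assumptions of mine, since pinning them down is precisely what the experiment is for.

```python
# A minimal sketch of the categorical hypothesis, using the guessed
# category values listed above. The category edges are illustrative;
# the theory leaves them to be determined experimentally.

CATEGORIES = [
    (0.0,    "impossible"),
    (0.001,  "almost impossible"),
    (0.01,   "very unlikely"),
    (0.10,   "unlikely"),
    (0.20,   "fairly unlikely"),
    (0.50,   "roughly even odds"),
    (0.80,   "fairly likely"),
    (0.90,   "likely"),
    (0.99,   "very likely"),
    (0.999,  "almost certain"),
    (1.0,    "certain"),
]

def subjective_category(p):
    """Snap a true probability to the nearest category value."""
    if p == 0.0:
        return "impossible"   # assume the 'impossible' edge is perfectly sharp
    return min(CATEGORIES[1:], key=lambda c: abs(c[0] - p))[1]

# The discontinuity the experiment is looking for:
print(subjective_category(0.0))      # impossible
print(subjective_category(0.0001))   # almost impossible -- category jumps
print(subjective_category(0.01))     # very unlikely
print(subjective_category(0.02))     # very unlikely -- 100x larger change, no jump
```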

Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found them. But the process of both finding the categories and determining if their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.

One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning. Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.

Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.

How do people think about probability?

Nov 27, JDN 2457690

(This topic was chosen by vote of my Patreons.)

In neoclassical theory, it is assumed (explicitly or implicitly) that human beings judge probability in something like the optimal Bayesian way: We assign prior probabilities to events, and then when confronted with evidence we use the observed data to update our prior probabilities into posterior probabilities. Then, when we have to make decisions, we maximize our expected utility subject to our posterior probabilities.
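
For concreteness, that whole “optimal Bayesian” picture boils down to Bayes’ rule; here is a toy example with entirely made-up numbers.

```python
# A toy Bayesian update with made-up numbers: prior belief that it will
# rain is 30%; dark clouds appear on 80% of rainy mornings and 20% of dry
# ones; Bayes' rule gives the posterior after seeing dark clouds.

prior_rain = 0.30
p_clouds_given_rain = 0.80
p_clouds_given_dry  = 0.20

p_clouds = prior_rain * p_clouds_given_rain + (1 - prior_rain) * p_clouds_given_dry
posterior_rain = prior_rain * p_clouds_given_rain / p_clouds

print(round(posterior_rain, 3))  # 0.632 -- the updated (posterior) probability
```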

This, of course, is nothing like how human beings actually think. Even very intelligent, rational, numerate people only engage in a vague approximation of this behavior, and only when dealing with major decisions likely to affect the course of their lives. (Yes, I literally decide which universities to attend based upon formal expected utility models. Thus far, I’ve never been dissatisfied with a decision made that way.) No one decides what to eat for lunch or what to do this weekend based on formal expected utility models—or at least I hope they don’t, because at that point the computational cost far exceeds the expected benefit.

So how do human beings actually think about probability? Well, a good place to start is to look at ways in which we systematically deviate from expected utility theory.

A classic example is the Allais paradox. See if it applies to you.

In game A, you get $1 million, guaranteed.

In game B, you have a 10% chance of getting $5 million, an 89% chance of getting $1 million, but now you have a 1% chance of getting nothing.

Which do you prefer, game A or game B?

In game C, you have an 11% chance of getting $1 million, and an 89% chance of getting nothing.

In game D, you have a 10% chance of getting $5 million, and a 90% chance of getting nothing.

Which do you prefer, game C or game D?

I have to think about it for a little bit and do some calculations, and it’s still very hard because it depends crucially on my projected lifetime income (which could easily exceed $3 million with a PhD, especially in economics) and the precise form of my marginal utility (I think I have constant relative risk aversion, but I’m not sure what parameter to use precisely), but in general I think I want to choose game A and game C, but I actually feel really ambivalent, because it’s not hard to find plausible parameters for my utility where I should go for the gamble.

But if you’re like most people, you choose game A and game D.

There is no coherent expected utility by which you would do this.

Why? Either a 10% chance of $5 million instead of $1 million is worth risking a 1% chance of nothing, or it isn’t. If it is, you should play B and D. If it’s not, you should play A and C. I can’t tell you for sure whether it is worth it—I can’t even fully decide for myself—but it either is or it isn’t.
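
You can verify this mechanically: for any utility function whatsoever, the expected-utility gap between games A and B is identical to the gap between games C and D, so no consistent expected utility maximizer can prefer A and D. Here is a quick sketch; the particular (logarithmic) utility function is just an illustrative choice.

```python
import math

# For ANY utility function u, EU(A) - EU(B) equals EU(C) - EU(D), so
# preferring A over B while preferring D over C is inconsistent with
# expected utility theory. The log utility with a $100,000 baseline is
# just one illustrative choice of u.

def u(dollars, baseline=100_000):
    return math.log(1 + dollars / baseline)

def expected_utility(lottery):
    return sum(p * u(x) for p, x in lottery)

A = [(1.00, 1_000_000)]
B = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
C = [(0.11, 1_000_000), (0.89, 0)]
D = [(0.10, 5_000_000), (0.90, 0)]

gap_AB = expected_utility(A) - expected_utility(B)
gap_CD = expected_utility(C) - expected_utility(D)

# The signs just reflect this particular utility function; the point is
# that the two gaps are the same (up to floating-point rounding).
print(gap_AB, gap_CD)
```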

Yet most people have a strong intuition that they should take game A but game D. Why? What does this say about how we judge probability?

The leading theory in behavioral economics right now is cumulative prospect theory, developed by the great Kahneman and Tversky, who essentially founded the field of behavioral economics. It’s quite intimidating to try to go up against them—which is probably why we should force ourselves to do it. Fear of challenging the favorite theories of the great scientists before us is how science stagnates.

I wrote about it more in a previous post, but as a brief review, cumulative prospect theory says that instead of judging based on a well-defined utility function, we instead consider gains and losses as fundamentally different sorts of thing, and in three specific ways:

First, we are loss-averse; we feel a loss about twice as intensely as a gain of the same amount.

Second, we are risk-averse for gains, but risk-seeking for losses; we assume that gaining twice as much isn’t actually twice as good (which is almost certainly true), but we also assume that losing twice as much isn’t actually twice as bad (which is almost certainly false and indeed contradictory with the previous).

Third, we judge probabilities as more important when they are close to certainty. We make a large distinction between a 0% probability and a 0.0000001% probability, but almost no distinction at all between a 41% probability and a 43% probability.

That last part is what I want to focus on for today. In Kahneman’s model, this is a continuous, monotonic function that maps 0 to 0 and 1 to 1, but systematically overestimates probabilities below but near 1/2 and systematically underestimates probabilities above but near 1/2.

It looks something like this, where red is true probability and blue is subjective probability:

[Figure: the cumulative prospect theory probability weighting function; true probability in red, subjective probability in blue]
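
For concreteness, this is the sort of curve being described; below is the standard parameterization from Tversky and Kahneman’s 1992 paper, though I should stress that this particular formula and parameter value are my illustration, not necessarily the exact function plotted in the figure.

```python
# The probability weighting function from Tversky & Kahneman (1992):
# w(p) = p^g / (p^g + (1-p)^g)^(1/g), with g ~ 0.61 as their estimate
# for gains. Treat this as an illustration of the shape being described.

def w(p, g=0.61):
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

for p in [0.001, 0.01, 0.10, 0.41, 0.43, 0.90, 0.99]:
    print(p, round(w(p), 3))

# Small probabilities are overweighted (w(0.01) ~ 0.055), mid-range
# differences are compressed (w(0.41) ~ 0.375 vs w(0.43) ~ 0.385),
# and near-certainties are underweighted (w(0.99) ~ 0.91).
```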
I don’t believe this is actually how humans think, for two reasons:

  1. It’s too hard. Humans are astonishingly innumerate creatures, given the enormous processing power of our brains. It’s true that we have some intuitive capacity for “solving” very complex equations, but that’s almost all within our motor system—we can “solve a differential equation” when we catch a ball, but we have no idea how we’re doing it. But probability judgments are often made consciously, especially in experiments like the Allais paradox; and the conscious brain is terrible at math. It’s actually really amazing how bad we are at math. Any model of normal human judgment should assume from the start that we will not do complicated math at any point in the process. Maybe you can hypothesize that we do so subconsciously, but you’d better have a good reason for assuming that.
  2. There is no reason to do this. Why in the world would any kind of optimization system function this way? You start with perfectly good probabilities, and then instead of using them, you subject them to some bizarre, unmotivated transformation that makes them less accurate and costs computing power? You may as well hit yourself in the head with a brick.

So, why might it look like we are doing this? Well, my proposal, admittedly still rather half-baked, is that human beings don’t assign probabilities numerically at all; we assign them categorically.

You may call this, for lack of a better term, categorical prospect theory.

My theory is that people don’t actually have in their head “there is an 11% chance of rain today” (unless they specifically heard that from a weather report this morning); they have in their head “it’s fairly unlikely that it will rain today”.

That is, we assign some small number of discrete categories of probability, and fit things into them. I’m not sure what exactly the categories are, and part of what makes my job difficult here is that they may be fuzzy-edged and vary from person to person, but roughly speaking, I think they correspond to the sort of things psychologists usually put on Likert scales in surveys: Impossible, almost impossible, very unlikely, unlikely, fairly unlikely, roughly even odds, fairly likely, likely, very likely, almost certain, certain. If I’m putting numbers on these probability categories, they go something like this: 0, 0.001, 0.01, 0.10, 0.20, 0.50, 0.8, 0.9, 0.99, 0.999, 1.

Notice that this would preserve the same basic effect as cumulative prospect theory: You care a lot more about differences in probability when they are near 0 or 1, because those are much more likely to actually shift your category. Indeed, as written, you wouldn’t care about a shift from 0.4 to 0.6 at all, despite caring a great deal about a shift from 0.001 to 0.01.

How does this solve the above problems?

  1. It’s easy. Not only don’t you compute a probability and then recompute it for no reason; you never even have to compute it precisely. Just get it within some vague error bounds and that will tell you what box it goes in. Instead of computing an approximation to a continuous function, you just slot things into a small number of discrete boxes, a dozen at the most.
  2. That explains why we would do it: It’s easy. Our brains need to conserve their capacity, and they did especially in our ancestral environment when we struggled to survive. Rather than having to iterate your approximation to arbitrary precision, you just get within 0.1 or so and call it a day. That saves time and computing power, which saves energy, which could save your life.

What new problems have I introduced?

  1. It’s very hard to know exactly where people’s categories are, if they vary between individuals or even between situations, and whether they are fuzzy-edged.
  2. If you take the model I just gave literally, even quite large probability changes will have absolutely no effect as long as they remain within a category such as “roughly even odds”.

With regard to 2, I think Kahneman may himself be able to save me, with his dual process theory concept of System 1 and System 2. What I’m really asserting is that System 1, the fast, intuitive judgment system, operates on these categories. System 2, on the other hand, the careful, rational thought system, can actually make use of proper numerical probabilities; it’s just very costly to boot up System 2 in the first place, much less ensure that it actually gets the right answer.

How might we test this? Well, I think that people are more likely to use System 1 when any of the following are true:

  1. They are under harsh time-pressure
  2. The decision isn’t very important
  3. The intuitive judgment is fast and obvious

And conversely they are likely to use System 2 when the following are true:

  1. They have plenty of time to think
  2. The decision is very important
  3. The intuitive judgment is difficult or unclear

So, it should be possible to arrange an experiment varying these parameters, such that in one treatment people almost always use System 1, and in another they almost always use System 2. And then, my prediction is that in the System 1 treatment, people will in fact not change their behavior at all when you change the probability from 15% to 25% (fairly unlikely) or 40% to 60% (roughly even odds).

To be clear, you can’t just present people with this choice between game E and game F:

Game E: You get a 60% chance of $50, and a 40% chance of nothing.

Game F: You get a 40% chance of $50, and a 60% chance of nothing.

People will obviously choose game E. If you can directly compare the numbers and one game is strictly better in every way, I think even without much effort people will be able to choose correctly.

Instead, what I’m saying is that if you make the following offers to two completely different sets of people, you will observe little difference in their choices, even though under expected utility theory you should.

Group I receives a choice between game E and game G:

Game E: You get a 60% chance of $50, and a 40% chance of nothing.

Game G: You get a 100% chance of $20.

Group II receives a choice between game F and game G:

Game F: You get a 40% chance of $50, and a 60% chance of nothing.

Game G: You get a 100% chance of $20.

Under two very plausible assumptions about marginal utility of wealth, I can fix what the rational judgment should be in each game.

The first assumption is that marginal utility of wealth is decreasing, so people are risk-averse (at least for gains, which these are). The second assumption is that most people’s lifetime income is at least two orders of magnitude higher than $50.

By the first assumption, group II should choose game G. The expected income is precisely the same, and being even ever so slightly risk-averse should make you go for the guaranteed $20.

By the second assumption, group I should choose game E. Yes, there is some risk, but because $50 should not be a huge sum to you, your risk aversion should be small and the higher expected income of $30 should sway you.
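
Here is a sketch of those two “rational” benchmarks; the logarithmic utility function and the $2 million lifetime wealth figure are illustrative assumptions, but the conclusion is the same for any concave utility as long as the stakes are small relative to lifetime wealth.

```python
import math

# A sketch of the "rational" benchmarks under the two assumptions above:
# concave (log) utility over total wealth, with prizes of $20 or $50 that
# are tiny relative to an assumed $2 million of lifetime wealth.

wealth = 2_000_000  # illustrative assumption

def eu(lottery):
    """Expected log utility of lifetime wealth plus the prize."""
    return sum(p * math.log(wealth + x) for p, x in lottery)

E = [(0.60, 50), (0.40, 0)]   # 60% chance of $50
F = [(0.40, 50), (0.60, 0)]   # 40% chance of $50
G = [(1.00, 20)]              # $20 for sure

# Group I: E vs G. E has higher expected value ($30 vs $20) and the risk
# is negligible at this scale, so E should win.
print(eu(E) > eu(G))   # True

# Group II: F vs G. Same expected value ($20), so any risk aversion at
# all tips the choice toward the sure $20.
print(eu(G) > eu(F))   # True
```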

But I predict that most people will choose game G in both cases, and (within statistical error) the same proportion will choose F as chose E—thus showing that the difference between a 40% chance and a 60% chance was in fact negligible to their intuitive judgments.

However, this doesn’t actually disprove Kahneman’s theory; perhaps that part of the subjective probability function is just that flat. For that, I need to set up an experiment where I show discontinuity. I need to find the edge of a category and get people to switch categories sharply. Next week I’ll talk about how we might pull that off.

Bigotry is more powerful than the market

Nov 20, JDN 2457683

If there’s one message we can take from the election of Donald Trump, it is that bigotry remains a powerful force in our society. A lot of autoflagellating liberals have been trying to explain how this election result really reflects our failure to help people displaced by technology and globalization (despite the fact that personal income and local unemployment had negligible correlation with voting for Trump), or Hillary Clinton’s “bad campaign” that nonetheless managed the same proportion of Democrat turnout that re-elected her husband in 1996.

No, overwhelmingly, the strongest predictor of voting for Trump was being White, and living in an area where most people are White. (Well, actually, that’s if you exclude authoritarianism as an explanatory variable—but really I think that’s part of what we’re trying to explain.) Trump voters were actually concentrated in areas less affected by immigration and globalization. Indeed, there is evidence that these people aren’t racist because they have anxiety about the economy—they are anxious about the economy because they are racist. How does that work? Obama. They can’t believe that the economy is doing well when a Black man is in charge. So all the statistics and even personal experiences mean nothing to them. They know in their hearts that unemployment is rising, even as the BLS data clearly shows it’s falling.

The wide prevalence and enormous power of bigotry should be obvious. But economists rarely talk about it, and I think I know why: Their models say it shouldn’t exist. The free market is supposed to automatically eliminate all forms of bigotry, because they are inefficient.

The argument for why this is supposed to happen actually makes a great deal of sense: If a company has the choice of hiring a White man or a Black woman to do the same job, but they know that the market wage for Black women is lower than the market wage for White men (which it most certainly is), and they will do the same quality and quantity of work, why wouldn’t they hire the Black woman? And indeed, if human beings were rational profit-maximizers, this is probably how they would think.

More recently some neoclassical models have been developed to try to “explain” this behavior, but always without daring to give up the precious assumption of perfect rationality. So instead we get the two leading neoclassical theories of discrimination, which are statistical discrimination and taste-based discrimination.

Statistical discrimination is the idea that under asymmetric information (and we surely have that), features such as race and gender can act as signals of quality because they are correlated with actual quality for various reasons (usually left unspecified), so it is not irrational after all to choose based upon them, since they’re the best you have.

Taste-based discrimination is the idea that people are rationally maximizing preferences that simply aren’t oriented toward maximizing profit or well-being. Instead, they have this extra term in their utility function that says they should also treat White men better than women or Black people. It’s just this extra thing they have.

A small number of studies have been done trying to discern which of these is at work.

The correct answer, of course, is neither.

Statistical discrimination, at least, could be part of what’s going on. Knowing that Black people are less likely to be highly educated than Asians (as they definitely are) might actually be useful information in some circumstances… then again, you list your degree on your resume, don’t you? Knowing that women are more likely to drop out of the workforce after having a child could rationally (if coldly) affect your assessment of future productivity. But shouldn’t the fact that women CEOs outperform men CEOs be incentivizing shareholders to elect women CEOs? Yet that doesn’t seem to happen. Also, in general, people seem to be pretty bad at statistics.

The bigger problem with statistical discrimination as a theory is that it’s really only part of a theory. It explains why not all of the discrimination has to be irrational, but some of it still does. You need to explain why there are these huge disparities between groups in the first place, and statistical discrimination is unable to do that. In order for the statistics to differ this much, you need a past history of discrimination that wasn’t purely statistical.

Taste-based discrimination, on the other hand, is not a theory at all. It’s special pleading. Rather than admit that people are failing to rationally maximize their utility, we just redefine their utility so that whatever they happen to be doing now “maximizes” it.

This is really what makes the Axiom of Revealed Preference so insidious; if you really take it seriously, it says that whatever you do, must by definition be what you preferred. You can’t possibly be irrational, you can’t possibly be making mistakes of judgment, because by definition whatever you did must be what you wanted. Maybe you enjoy bashing your head into a wall, who am I to judge?

I mean, on some level taste-based discrimination is what’s happening; people think that the world is a better place if they put women and Black people in their place. So in that sense, they are trying to “maximize” some “utility function”. (By the way, most human beings behave in ways that are provably inconsistent with maximizing any well-defined utility function—the Allais Paradox is a classic example.) But the whole framework of calling it “taste-based” is a way of running away from the real explanation. If it’s just “taste”, well, it’s an unexplainable brute fact of the universe, and we just need to accept it. If people are happier being racist, what can you do, eh?

So I think it’s high time to start calling it what it is. This is not a question of taste. This is a question of tribal instinct. This is the product of millions of years of evolution optimizing the human brain to act in the perceived interest of whatever it defines as its “tribe”. It could be yourself, your family, your village, your town, your religion, your nation, your race, your gender, or even the whole of humanity or beyond into all sentient beings. But whatever it is, the fundamental tribe is the one thing you care most about. It is what you would sacrifice anything else for.

And what we learned on November 9 this year is that an awful lot of Americans define their tribe in very narrow terms. Nationalistic and xenophobic at best, racist and misogynistic at worst.

But I suppose this really isn’t so surprising, if you look at the history of our nation and the world. Segregation was not outlawed in US schools until 1955, and there are women who voted in this election who were born before American women got the right to vote in 1920. The nationalistic backlash against sending jobs to China (which was one of the chief ways that we reduced global poverty to its lowest level ever, by the way) really shouldn’t seem so strange when we remember that over 100,000 Japanese-Americans were literally forcibly relocated into camps as recently as 1942. The fact that so many White Americans seem all right with the biases against Black people in our justice system may not seem so strange when we recall that systemic lynching of Black people in the US didn’t end until the 1960s.

The wonder, in fact, is that we have made as much progress as we have. Tribal instinct is not a strange aberration of human behavior; it is our evolutionary default setting.

Indeed, perhaps it is unreasonable of me to ask humanity to change its ways so fast! We had millions of years to learn how to live the wrong way, and I’m giving you only a few centuries to learn the right way?

The problem, of course, is that the pace of technological change leaves us with no choice. It might be better if we could wait a thousand years for people to gradually adjust to globalization and become cosmopolitan; but climate change won’t wait a hundred, and nuclear weapons won’t wait at all. We are thrust into a world that is changing very fast indeed, and I understand that it is hard to keep up; but there is no way to turn back that tide of change.

Yet “turn back the tide” does seem to be part of the core message of the Trump voter, once you get past the racial slurs and sexist slogans. People are afraid of what the world is becoming. They feel that it is leaving them behind. Coal miners fret that we are leaving them behind by cutting coal consumption. Factory workers fear that we are leaving them behind by moving the factory to China or inventing robots to do the work in half the time for half the price.

And truth be told, they are not wrong about this. We are leaving them behind. Because we have to. Because coal is polluting our air and destroying our climate, we must stop using it. Moving the factories to China has raised them out of the most dire poverty, and given us a fighting chance toward ending world hunger. Inventing the robots is only the next logical step in the process that has carried humanity forward from the squalor and suffering of primitive life to the security and prosperity of modern society—and it is a step we must take, for the progress of civilization is not yet complete.

They wouldn’t have to let themselves be left behind, if they were willing to accept our help and learn to adapt. That carbon tax that closes your coal mine could also pay for your basic income and your job-matching program. The increased efficiency from the automated factories could provide an abundance of wealth that we could redistribute and share with you.

But this would require them to rethink their view of the world. They would have to accept that climate change is a real threat, and not a hoax created by… uh… never was clear on that point actually… the Chinese maybe? But 45% of Trump supporters don’t believe in climate change (and that’s actually not as bad as I’d have thought). They would have to accept that what they call “socialism” (which really is more precisely described as social democracy, or tax-and-transfer redistribution of wealth) is actually something they themselves need, and will need even more in the future. But despite rising inequality, redistribution of wealth remains fairly unpopular in the US, especially among Republicans.

Above all, it would require them to redefine their tribe, and start listening to—and valuing the lives of—people that they currently do not.

Perhaps we need to redefine our tribe as well; many liberals have argued that we mistakenly—and dangerously—did not include people like Trump voters in our tribe. But to be honest, that rings a little hollow to me: We aren’t the ones threatening to deport people or ban them from entering our borders. We aren’t the ones who want to build a wall (some have in fact joked about building a wall to separate the West Coast from the rest of the country, but I don’t think many people really want to do that). Perhaps we live in a bubble of liberal media? But I make a point of reading outlets like The American Conservative and The National Review for other perspectives (I usually disagree, but I do at least read them); how many Trump voters do you think have ever read the New York Times, let alone Huffington Post? Cosmopolitans almost by definition have the more inclusive tribe, the more open perspective on the world (in fact, do I even need the “almost”?).

Nor do I think we are actually ignoring their interests. We want to help them. We offer to help them. In fact, I want to give these people free money—that’s what a basic income would do, it would take money from people like me and give it to people like them—and they won’t let us, because that’s “socialism”! Rather, we are simply refusing to accept their offered solutions, because those so-called “solutions” are beyond unworkable; they are absurd, immoral and insane. We can’t bring back the coal mining jobs, unless we want Florida underwater in 50 years. We can’t reinstate the trade tariffs, unless we want millions of people in China to starve. We can’t tear down all the robots and force factories to use manual labor, unless we want to trigger a national—and then global—economic collapse. We can’t do it their way. So we’re trying to offer them another way, a better way, and they’re refusing to take it. So who here is ignoring the concerns of whom?

Of course, the fact that it’s really their fault doesn’t solve the problem. We do need to take it upon ourselves to do whatever we can, because, regardless of whose fault it is, the world will still suffer if we fail. And that presents us with our most difficult task of all, a task that I fully expect to spend a career trying to do and yet still probably failing: We must understand the human tribal instinct well enough that we can finally begin to change it. We must know enough about how human beings form their mental tribes that we can actually begin to shift those parameters. We must, in other words, cure bigotry—and we must do it now, for we are running out of time.

Congratulations, America.

Nov 13, JDN 2457706

Congratulations, you elected Donald Trump.

Instead of the candidate with decades of experience as Secretary of State, US Senator, and an internationally renowned philanthropist, you chose the first President in history to not have any experience whatsoever in government or the military.

Instead of the candidate with the most comprehensive, evidence-based plan for action against climate change (that is, the only candidate who supports nuclear energy), you elected the one who is planning to appoint a climate-change denier as head of the EPA.

Perhaps to punish the candidate who carried out a longstanding custom of using private email servers because the public servers were so defective, you accepted the candidate who is being sued for mass fraud and accused of sexual assault by multiple women.

Perhaps based on the Russian propaganda—not kidding, read the URL—saying that one candidate could trigger a Third World War, you chose the candidate who has no idea how international diplomacy works and wants to convert NATO into a mercantilist empire (and by the way has no apparent qualms about deploying nuclear weapons).

Because one candidate was “too close to Wall Street” in some vague ill-defined sense (oh my god, she gave speeches! And accepted donations!), you elected the other one who has already vowed to turn back the financial regulations that are currently protecting us from a repeat of the Great Recession.

Because you didn’t trust the candidate with one of the highest honesty ratings ever recorded, you elected the one who is surrounded by hundreds of scandals and never even released his tax returns.

Even if you didn’t outright agree with it, you were willing to look past his promise to deport 11 million people and his long history of bigotry toward a wide variety of ethnic groups.

Even his Vice President, who seems like a great statesman simply by comparison, is one of the most fanatical right-wing Vice Presidents we’ve had in decades. He opposes not just abortion, but birth control. He supports—and has signed as governor—“religious freedom” bills designed to legalize discrimination against LGBT people.

Congratulations, America. You literally elected the candidate who was supported by Vladimir Putin, Kim Jong-un, the American Nazi Party, and the Ku Klux Klan. Now, reversed stupidity is not intelligence; being endorsed by someone horrible doesn’t necessarily mean you are horrible. But when this many horrible people endorse you, and start giving the same reasons, and those reasons are based on things you particularly have in common with those horrible people like bigotry and authoritarianism… yeah, I think it does say something about you.

Now, to be fair, much of the blame here goes to the Electoral College.

By current counts, Hillary Clinton won the popular vote by at least 500,000 votes. It is projected that she may even win by as much as 2 million. This will be the fourth time in US history that the Electoral College winner was definitely not the popular vote winner.

But even that is only possible because Hillary Clinton did not win the overwhelming landslide she deserved. The Electoral College should have been irrelevant, because she should have won at least 60% of every demographic in every state. Our whole nation should have declared together in one voice that we will not tolerate bigotry and authoritarianism. The fact that that didn’t happen is reason enough to be ashamed; even if Clinton does narrowly win the popular vote, that still says something truly terrible about our country.

Indeed, this is what it says:

We slightly preferred democracy over fascism.

We slightly preferred liberty over tyranny.

We slightly preferred justice over oppression.

We slightly preferred feminism over misogyny.

We slightly preferred equality over racism.

We slightly preferred reason over instinct.

We slightly preferred honesty over fraud.

We slightly preferred sustainability over ecological devastation.

We slightly preferred competence over incompetence.

We slightly preferred diplomacy over impulsiveness.

We slightly preferred humility over narcissism.

We were faced with the easiest choice ever given to us in any election, and just a narrow majority got the answer right—and under the way our system works, even that wasn’t enough.

I sincerely hope that Donald Trump is not as bad as I believe he is. The feeling of vindication at being able to tell so many right-wing family members “I told you so” pales in comparison to the fear and despair for the millions of people who will die from his belligerent war policy, his incompetent economic policy, and his insane (anti-)environmental policy. Even the working-class White people who voted for him will surely suffer greatly under his regime.

Yes, I sincerely hope that he is not as bad as we think he is, though I remember saying that George W. Bush was not as bad as we thought when he was elected—and he was. He was. His Iraq War killed hundreds of thousands of people based on lies. His economic policy triggered the worst economic collapse since the Great Depression. So now I have to ask: What if he is as bad as we think?

Fortunately, I do not believe that Trump will literally trigger a global nuclear war.

Then again, I didn’t believe he would win, either.

Wrong answers are better than no answer

Nov 6, JDN 2457699

I’ve been hearing some disturbing sentiments from some surprising places lately, things like “Economics is not a science, it’s just an extension of politics” and “There’s no such thing as a true model”. I’ve now met multiple economists who speak this way, who seem to be some sort of “subjectivists” or “anti-realists” (those links are to explanations of moral subjectivism and anti-realism, which are also mistaken, but in a much less obvious way, and are far more common views to express). It is possible to read most of the individual statements in a non-subjectivist way, but in the context of all of them together, it really gives me the general impression that many of these economists… don’t believe in economics. (Nor do they even believe in believing it, or they’d put up a better show.)

I think what has happened is that in the wake of the Second Depression, economists have had a sort of “crisis of faith”. The models we thought were right were wrong, so we may as well give up; there’s no such thing as a true model. The science of economics failed, so maybe economics was never a science at all.

I never really thought I’d be in this position, but in such circumstances I actually feel strongly inclined to defend neoclassical economics. Neoclassical economics is wrong; but subjectivism is not even wrong.

If a model is wrong, you can fix it. You can make it right, or at least less wrong. But if you give up on modeling altogether, your theory avoids being disproven only by making itself totally detached from reality. I can’t prove you wrong, but only because you’ve given up on the whole idea of being right or wrong.

As Isaac Asimov wrote, “when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

What we might call “folk economics”, what most people seem to believe about economics, is like thinking the Earth is flat—it’s fundamentally wrong, but not so obviously inaccurate on an individual scale that it can’t be a useful approximation for your daily life. Neoclassical economics is like thinking the Earth is spherical—it’s almost right, but still wrong in some subtle but important ways. Thinking that economics isn’t a science is wronger than both of them put together.

The sense in which “there’s no such thing as a true model” is true is a trivial one: There’s no such thing as a perfect model, because by the time you included everything you’d just get back the world itself. But there are better and worse models, and some of our very best models (quantum mechanics, Darwinian evolution) are really good enough that I think it’s quite perverse not to call them simply true. Economics doesn’t have such models yet for more than a handful of phenomena—but we’re working on it (at least, I thought that’s what we were doing!).

Indeed, a key point I like to make about realism—in science, morality, or whatever—is that if you think something can be wrong, you must be a realist. In order for an idea to be wrong, there must be some objective reality to compare it to that it can fail to match. If everything is just subjective beliefs and sociopolitical pressures, there is no such thing as “wrong”, only “unpopular”. I’ve heard many people say things like “Well, that’s just your opinion; you could be wrong.” No, if it’s just my opinion, then I cannot possibly be wrong. So choose a lane! Either you think I’m wrong, or you think it’s just my opinion—but you can’t have it both ways.

Now, it’s clearly true in the real world that there is a lot of very bad and unscientific economics going on. The worst is surely the stuff that comes out of right-wing think-tanks that are paid almost explicitly to come up with particular results that are convenient for their right-wing funders. (As Krugman puts it, “there are liberal professional economists, conservative professional economists, and professional conservative economists.”) But there’s also a lot of really unscientific economics done without such direct and obvious financial incentives. Economists get blinded by their own ideology, they choose what topics to work on based on what will garner the most prestige, they use fundamentally defective statistical techniques because journals won’t publish them if they don’t.

But of course, the same is true of many other fields, particularly in social science. Sociologists also get blinded by their pet theories; psychologists also abuse statistics because the journals make them do it; political scientists are influenced by their funding sources; anthropologists also choose what to work on based on what’s prestigious in the field.

Moreover, natural sciences do this too. String theorists are (almost by definition) blinded by their favorite theory. Biochemists are manipulated by the financial pressures of the pharmaceutical industry. Neuroscientists publish all sorts of statistically nonsensical research. I’d be very surprised if even geologists were immune to the social norms of academia telling them to work on the most prestigious problems. If this is enough reason to abandon a field as a science, it is a reason to abandon science, full stop. That is what you are arguing for here.

And really, this should be fairly obvious. Are workers and factories and televisions actual things that are actually here? Obviously they are. Therefore you can be right or wrong about how they interact. There is an obvious objective reality here that one can have more or less accurate beliefs about.

For socially-constructed phenomena like money, markets, and prices, this isn’t as obvious; if everyone stopped believing in the US Dollar, like Tinkerbell the US Dollar would cease to exist. But there does remain some objective reality (or if you like, intersubjective reality) here: I can be right or wrong about the price of a dishwasher or the exchange rate from dollars to pounds.

So, in order to abandon the possibility of scientifically accurate economics, you have to say that even though there is this obvious physical reality of workers and factories and televisions, we can’t actually study that scientifically, even when it sure looks like we’re studying it scientifically by performing careful observations, rigorous statistics, and even randomized controlled experiments. Even when I perform my detailed Bayesian analysis of my randomized controlled experiment, nope, that’s not science. It doesn’t count, for some reason.

The only at all principled way I can see to justify such a thing is to say that once you start studying other humans you lose all possibility of scientific objectivity—but notice that by making such a claim you haven’t just thrown out psychology and economics, you’ve also thrown out anthropology and neuroscience. The statements “DNA evidence shows that all modern human beings descend from a common migration out of Africa” and “Human nerve conduction speed is up to approximately 120 meters per second” aren’t scientific? Then what in the world are they?

Or is it specifically behavioral sciences that bother you? Now perhaps you can leave out biological anthropology and basic neuroscience; there’s some cultural anthropology and behavioral neuroscience you have to still include, but maybe that’s a bullet you’re willing to bite. There is perhaps something intuitively appealing here: Since science is a human behavior, you can’t use science to study human behavior without an unresolvable infinite regress.

But there are still two very big problems with this idea.

First, you’ve got to explain how there can be this obvious objective reality of human behavior that is nonetheless somehow forever beyond our understanding. Even though people actually do things, and we can study those things using the usual tools of science, somehow we’re not really doing science, and we can never actually learn anything about how human beings behave.

Second, you’ve got to explain why we’ve done as well as we have. For some reason, people seem to have this impression that psychology and especially economics have been dismal failures, they’ve brought us nothing but nonsense and misery.

But where exactly do you think we got the lowest poverty rate in the history of the world? That just happened by magic, or by accident while we were doing other things? No, economists did that, on purpose—the UN Millennium Development Goals were designed, implemented, and evaluated by economists. Against staunch opposition from both ends of the political spectrum, we have managed to bring free trade to the world, and with it, some measure of prosperity.

The only other science I can think of that has been more successful at its core mission is biology; as xkcd pointed out, the biologists killed a Horseman of the Apocalypse while the physicists were busy making a new one. Congratulations on beating Pestilence, biologists; we economists think we finally have Famine on the ropes now. Hey political scientists, how is War going? Oh, not bad, actually? War deaths per capita are near their lowest levels in history? But clearly it would be foolhardy to think that economics and political science are actually sciences!

I can at least see why people might think psychology is a failure, because rates of diagnosis of mental illness keep rising higher and higher; but the key word there is diagnosis. People were already suffering from anxiety and depression across the globe; it’s just that nobody was giving them therapy or medication for it. Some people argue that all we’ve done is pathologize normal human experience—but this wildly underestimates the severity of many mental disorders. Wanting to end your own life for reasons you yourself cannot understand is not normal human experience being pathologized. (And the fact that 40,000 Americans commit suicide every year may make it common, but it does not make it normal. Is trying to keep people from dying of influenza “pathologizing normal human experience”? Well, suicide kills almost as many.) It’s possible there is some overdiagnosis; but there is also an awful lot of real mental illness that previously went untreated—and yes, meta-analysis shows that treatment can and does work.

Of course, we’ve made a lot of mistakes. We will continue to make mistakes. Many of our existing models are seriously flawed in very important ways, and many economists continue to use those models incautiously, blind to their defects. The Second Depression was largely the fault of economists, because it was economists who told everyone that markets are efficient, banks will regulate themselves, leave it alone, don’t worry about it.

But we can do better. We will do better. And we can only do that because economics is a science, it does reflect reality, and therefore we make ourselves less wrong.

Belief in belief, and why it’s important

Oct 30, JDN 2457692

In my previous post on ridiculous beliefs, I passed briefly over this sentence:

“People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.”

Today I’d like to talk about the fact that “to profess” is a very important phrase in that sentence. Part of understanding ridiculous beliefs, I think, is understanding that many, if not most, of them are not actually proper beliefs. They are what Daniel Dennett calls “belief in belief”, and what has elsewhere been referred to as “anomalous belief”. They are not beliefs in the ordinary sense that we would line up with the other beliefs in our worldview and use them to anticipate experiences and motivate actions. They are something else, lone islands of belief that are not woven into our worldview. But all the same they are invested with importance, often moral or even ultimate importance; this one belief may not make any sense with everything else you believe, but you must believe it, because it is a vital part of your identity and your tribe. To abandon it would not simply be mistaken; it would be heresy, it would be treason.

How do I know this? Mainly because nobody has tried to stone me to death lately.

The Bible is quite explicit about at least a dozen reasons I am supposed to be executed forthwith; you likely share many of them: Heresy, apostasy, blasphemy, nonbelief, sodomy, fornication, covetousness, taking God’s name in vain, eating shellfish (though I don’t anymore!), wearing mixed fiber, shaving, working on the Sabbath, making images of things, and my personal favorite, not stoning other people for committing such crimes (as we call it in game theory, a second-order punishment).

Yet I have met many people who profess to be “Bible-believing Christians”, and even may oppose some of these activities (chiefly sodomy, blasphemy, and nonbelief) on the grounds that they are against what the Bible says—and yet not one has tried to arrange my execution, nor have I ever seriously feared that they might.

Is this because we live in a secular society? Well, yes—but not simply that. It isn’t just that these people are afraid of being punished by our secular government should they murder me for my sins; they believe that it is morally wrong to murder me, and would rarely even consider the option. Someone could point them to the passage in Leviticus (20:16, as it turns out) that explicitly says I should be executed, and it would not change their behavior toward me.

On first glance this is quite baffling. If I thought you were about to drink a glass of water that contained cyanide, I would stop you, by force if necessary. So if they truly believe that I am going to be sent to Hell—infinitely worse than cyanide—then shouldn’t they be willing to use any means necessary to stop that from happening? And wouldn’t this be all the more true if they believe that they themselves will go to Hell should they fail to punish me?

If these “Bible-believing Christians” truly believed in Hell the way that I believe in cyanide—that is, as proper beliefs which anticipate experience and motivate action—then they would in fact try to force my conversion or execute me, and in doing so would believe that they are doing right. This used to be quite common in many Christian societies (most infamously in the Salem Witch Trials), and still is disturbingly common in many Muslim societies—ISIS doesn’t just throw gay men off rooftops and stone them as a weird idiosyncrasy; it is written in the Hadith that they’re supposed to. Nor is this sort of thing confined to terrorist groups; the “legitimate” government of Saudi Arabia routinely beheads atheists or imprisons homosexuals (though has a very capricious enforcement system, likely so that the monarchy can trump up charges to justify executing whomever they choose). Beheading people because the book said so is what your behavior would look like if you honestly believed, as a proper belief, that the Qur’an or the Bible or whatever holy book actually contained the ultimate truth of the universe. The great irony of calling religion people’s “deeply-held belief” is that it is in almost all circumstances the exact opposite—it is their most weakly held belief, the one that they could most easily sacrifice without changing their behavior.

Yet perhaps we can’t even say that to people, because they will get equally defensive and insist that they really do hold this very important anomalous belief, and how dare you accuse them otherwise. Because one of the beliefs they really do hold, as a proper belief, and a rather deeply-held one, is that you must always profess to believe your religion and defend your belief in it, and if anyone catches you not believing it that’s a horrible, horrible thing. So even though it’s obvious to everyone—probably even to you—that your behavior looks nothing like what it would if you actually believed in this book, you must say that you do, scream that you do if necessary, for no one must ever, ever find out that it is not a proper belief.

Another common trick is to try to convince people that their beliefs do affect their behavior, even when they plainly don’t. We typically use the words “religious” and “moral” almost interchangeably, when they are at best orthogonal and arguably even opposed. Part of why so many people seem to hold so rigidly to their belief-in-belief is that they think that morality cannot be justified without recourse to religion; so even though on some level they know religion doesn’t make sense, they are afraid to admit it, because they think that means admitting that morality doesn’t make sense. If you are even tempted by this inference, I present to you the entire history of ethical philosophy. Divine Command theory has been a minority view among philosophers for centuries.

Indeed, it is precisely because your moral beliefs are not based on your religion that you feel a need to resort to that defense of your religion. If you simply believed religion as a proper belief, you would base your moral beliefs on your religion, sure enough; but you’d also defend your religion in a fundamentally different way, not as something you’re supposed to believe, not as a belief that makes you a good person, but as something that is just actually true. (And indeed, many fanatics actually do defend their beliefs in those terms.) No one ever uses the argument that if we stop believing in chairs we’ll all become murderers, because chairs are actually there. We don’t believe in belief in chairs; we believe in chairs.

And really, if such a belief were completely isolated, it would not be a problem; it would just be this weird thing you say you believe that everyone really knows you don’t and it doesn’t affect how you behave, but okay, whatever. The problem is that it’s never quite isolated from your proper beliefs; it does affect some things—and in particular it can offer a kind of “support” for other real, proper beliefs that you do have, support which is now immune to rational criticism.

For example, as I already mentioned: Most of these “Bible-believing Christians” do, in fact, morally oppose homosexuality, and say that their reason for doing so is based on the Bible. This cannot literally be true, because if they actually believed the Bible they wouldn’t just want gay marriage taken off the books; they’d want a mass pogrom of 4-10% of the population (depending on how you count), on a par with the Holocaust. Fortunately their proper belief that genocide is wrong is overriding. But they have no such overriding belief supporting the moral permissibility of homosexuality or the personal liberty of marriage rights, so the very tenuous link to their belief-in-belief in the Bible is sufficient to tilt their actual behavior.

Similarly, if the people I meet who say they think maybe 9/11 was an inside job by our government really believed that, they would most likely be trying to organize a violent revolution; any government willing to murder 3,000 of its own citizens in a false flag operation is one that must be overturned and can probably only be overturned by force. At the very least, they would flee the country. If they lived in a country where the government is actually like that, like Zimbabwe or North Korea, they wouldn’t fear being dismissed as conspiracy theorists, they’d fear being captured and executed. The very fact that you live within the United States and exercise your free speech rights here says pretty strongly that you don’t actually believe our government is that evil. But they wouldn’t be so outspoken about their conspiracy theories if they didn’t at least believe in believing them.

I also have to wonder how many of our politicians who lean on the Constitution as their source of authority have actually read the Constitution, as it says a number of rather explicit things against, oh, say, the establishment of religion (First Amendment) or searches and arrests without warrants (Fourth Amendment) that they don’t much seem to care about. Some are better about this than others; Rand Paul, for instance, actually takes the Constitution pretty seriously (and is frequently found arguing against things like warrantless searches as a result!), but Ted Cruz for example says he has spent decades “defending the Constitution”, despite saying things like “America is a Christian nation” that directly violate the First Amendment. Cruz doesn’t really seem to believe in the Constitution; but maybe he believes in believing the Constitution. (It’s also quite possible he’s just lying to manipulate voters.)

 

Debunking the Simulation Argument

Oct 23, JDN 2457685

Every subculture of humans has words, attitudes, and ideas that hold it together. The obvious example is religions, but the same is true of sports fandoms, towns, and even scientific disciplines. (I would estimate that 40-60% of scientific jargon, depending on discipline, is not actually useful, but simply a way of exhibiting membership in the tribe. Even physicists do this: “quantum entanglement” is useful jargon, but “p-brane” surely isn’t. Statisticians too: Why say the clear and understandable “unequal variance” when you could show off by saying “heteroskedasticity”? In certain disciplines of the humanities this figure can rise as high as 90%: “imaginary” as a noun leaps to mind.)

One particularly odd idea that seems to define certain subcultures of very intelligent and rational people is the Simulation Argument, originally (and probably best) propounded by Nick Bostrom:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.

In this original formulation by Bostrom, the argument actually makes some sense. It can be escaped, because it makes some subtle anthropic assumptions that need to be considered more carefully (in short, there could be ancestor-simulations but we could still know we aren’t in one); but it deserves to be taken seriously. Indeed, I think proposition (2) is almost certainly true, and proposition (1) might be as well; thus I have no problem accepting the disjunction.

Of course, the typical form of the argument isn’t nearly so cogent. In popular outlets as prestigious as the New York Times, Scientific American and the New Yorker, the idea is simply presented as “We are living in a simulation.” The only major outlet I could find that properly presented Bostrom’s disjunction was PBS. Indeed, there are now some Silicon Valley billionaires who believe the argument, or at least think it merits enough attention to be worth funding research into how we might escape the simulation we are in. (Frankly, even if we were inside a simulation, it’s not clear that “escaping” would be something worthwhile or even possible.)

Yet most people, when presented with this idea, think it is profoundly silly and a waste of time.

I believe this is the correct response. I am 99.9% sure we are not living in a simulation.

But it’s one thing to know that an argument is wrong, and quite another to actually show why; in that respect the Simulation Argument is a lot like the Ontological Argument for God:

However, as Bertrand Russell observed, it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.

To resolve this problem, I am writing this post (at the behest of my Patreons) to provide you now with a concise and persuasive argument directly against the Simulation Argument. No longer will you have to rely on your intuition that it can’t be right; you actually will have compelling logical reasons to reject it.

Note that I will not deny the core principle of cognitive science that minds are computational and therefore in principle could be simulated in such a way that the “simulations” would be actual minds. That’s usually what defenders of the Simulation Argument assume you’re denying, and perhaps in many cases it is; but that’s not what I’m denying. Yeah, sure, minds are computational (probably). There’s still no reason to think we’re living in a simulation.

To make this refutation, I should definitely address the strongest form of the argument, which is Nick Bostrom’s original disjunction. As I already noted, I believe that the disjunction is in fact true; at least one of those propositions is almost certainly correct, and perhaps two of them.

Indeed, I can tell you which one: Proposition (2). That is, I see no reason whatsoever why an advanced “posthuman” species would want to create simulated universes remotely resembling our own.


First of all, let’s assume that we do make it that far and posthumans do come into existence. I really don’t have sufficient evidence to say this is so, and the combination of millions of racists and thousands of nuclear weapons does not bode particularly well for that probability. But I think there is at least some good chance that this will happen—perhaps 10%?—so, let’s concede that point for now, and say that yes, posthumans will one day exist.

To be fair, I am not a posthuman, and cannot say for certain what beings of vastly greater intelligence and knowledge than I might choose to do. But since we are assuming that they exist as the result of our descendants more or less achieving everything we ever hoped for—peace, prosperity, immortality, vast knowledge—one thing I think I can safely extrapolate is that they will be moral. They will have a sense of ethics and morality not too dissimilar from our own. It will probably not agree in every detail—certainly not with what ordinary people believe, and very likely not even with what our greatest philosophers believe. It will most likely be better than our current best morality—closer to the objective moral truth that underlies reality.

I say this because this is the pattern that has emerged throughout the advancement of civilization thus far, and the whole reason we’re assuming posthumans might exist is that we are projecting this advancement further into the future. Humans have, on average, in the long run, become more intelligent, more rational, more compassionate. We have given up entirely on ancient moral concepts that we now recognize to be fundamentally defective, such as “witchcraft” and “heresy”; we are in the process of abandoning others for which some of us see the flaws but others don’t, such as “blasphemy” and “apostasy”. We have dramatically expanded the rights of women and various minority groups. Indeed, we have expanded our concept of which beings are morally relevant, our “circle of concern”, from only those in our tribe on outward to whole nations, whole races of people—and for some of us, as far as all humans or even all vertebrates. Therefore I expect us to continue to expand this moral circle, until it encompasses all sentient beings in the universe. Indeed, on some level I already believe that, though I know I don’t actually live in accordance with that theory—blame me if you will for my weakness of will, but can you really doubt the theory? Does it not seem likely that this is the theory to which our posthuman descendants will ultimately converge?

If that is the case, then posthumans would never make a simulation remotely resembling the universe I live in.

Maybe not me in particular, for I live relatively well—though I must ask why the migraines were really necessary. But among humans in general, there are many millions who live in conditions of such abject squalor and suffering that to create a universe containing them can only be counted as the gravest of crimes, morally akin to the Holocaust.

Indeed, creating this universe must, by construction, literally include the Holocaust. Because the Holocaust happened in this universe, you know.

So unless you think that our posthuman descendants are monsters (demons really), immortal beings of vast knowledge and power who thrive on the death and suffering of other sentient beings, you cannot think that they would create our universe. They might create a universe of some sort—but they would not create this one. You may consider this a corollary of the Problem of Evil, which has always been one of the (many) knockdown arguments against the existence of God as depicted in any major religion.

To deny this, you must twist the simulation argument quite substantially, and say that only some of us are actual people, sentient beings instantiated by the simulation, while the vast majority are, for lack of a better word, NPCs. The millions of children starving in southeast Asia and central Africa aren’t real, they’re just simulated, so that the handful of us who are real have a convincing environment for the purposes of this experiment. Even then, it seems monstrous to deceive us in this way, to make us think that millions of children are starving just to see if we’ll try to save them.

Bostrom presents it as obvious that any species of posthumans would want to create ancestor-simulations, and to make this seem plausible he compares to the many simulations we already create with our current technology, which we call “video games”. But this is such a severe equivocation on the word “simulation” that it frankly seems disingenuous (or for the pun perhaps I should say dissimulation).

This universe can’t possibly be a simulation in the sense that Halo 4 is a simulation. Indeed, this is something that I know with near-perfect certainty, for I am a sentient being (“Cogito ergo sum” and all that). There is at least one actual sentient person here—me—and based on my observations of your behavior, I know with quite high probability that there are many others as well—all of you.

Whereas, if I thought for even a moment there was even a slight probability that Halo 4 contains actual sentient beings that I am murdering, I would never play the game again; indeed I think I would smash the machine, and launch upon a global argumentative crusade to convince everyone to stop playing violent video games forevermore. If I thought that these video game characters that I explode with virtual plasma grenades were actual sentient people—or even had a non-negligible chance of being such—then what I am doing would be literally murder.

So whatever else the posthumans would be doing by creating our universe inside some vast computer, it is not “simulation” in the sense of a video game. If they are doing this for amusement, they are monsters. Even if they are doing it for some higher purpose such as scientific research, I strongly doubt that it can be justified; and I even more strongly doubt that it could be justified frequently. Perhaps once or twice in the whole history of the civilization, as a last resort to achieve some vital scientific objective when all other methods have been thoroughly exhausted. Furthermore it would have to be toward some truly cosmic objective, such as forestalling the heat death of the universe. Anything less would not justify literally replicating thousands of genocides.

But the way Bostrom generates a nontrivial probability of us living in a simulation is by assuming that each posthuman civilization will create many simulations similar to our own, so that the prior probability of being in a simulation is so high that it overwhelms the much higher likelihood that we are in the real universe. (This is a deeply Bayesian argument; of that part, I approve. In Bayesian reasoning, the likelihood is the probability that we would observe the evidence we do given that the theory is true, while the prior is the probability that the theory is true, before we’ve seen any evidence. The probability of the theory actually being true is proportional to the likelihood multiplied by the prior.) But if the Foundation IRB will only approve the construction of a Synthetic Universe in order to achieve some cosmic objective, then the prior probability that we are in a simulation is something like 2/3, or 9/10; and thus it is no match whatsoever for the roughly 10^12-to-1 evidence in favor of this being actual reality.
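To make that bookkeeping concrete, here is a minimal sketch in Python (the 9/10 prior and the 10^12 likelihood ratio are the figures from the paragraph above; the code itself is just my illustration):

def posterior_real(prior_real, likelihood_ratio):
    # likelihood_ratio = P(evidence | real) / P(evidence | simulation)
    prior_sim = 1 - prior_real
    odds_real = (prior_real / prior_sim) * likelihood_ratio
    return odds_real / (1 + odds_real)

# Even if the prior that we are simulated is 9/10 (so prior_real = 0.1),
# a trillion-to-one likelihood ratio swamps it:
print(posterior_real(prior_real=0.1, likelihood_ratio=1e12))
# ~0.999999999991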

Just what is this so compelling likelihood? That brings me to my next point, which is a bit more technical, but important because it’s really where the Simulation Argument truly collapses.

How do I know we aren’t in a simulation?

The fundamental equations of the laws of nature do not have closed-form solutions.

Take a look at the Schrödinger equation, the Einstein field equations, the Navier-Stokes equations, even Maxwell’s equations (which are relatively well-behaved all things considered). These are all partial differential equations, most of them second-order, and extremely complex to solve. They are all defined over continuous time and space, which has uncountably many points in every interval (though there are some physicists who believe that spacetime may be discrete on the order of 10^-44 seconds.) Not one of them has a general closed-form solution, by which I mean a formula that you could just plug in numbers for the parameters on one side of the equation and output an answer on the other. (x^3 + y^3 = 3 is not a closed-form solution, but y = (3 – x^3)^(1/3) is.) They have such exact solutions in certain special cases, but in general we can only solve them approximately, if at all.
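To see the distinction concretely with that toy equation (my own illustration, nothing deep): the implicit equation only states a condition that y must satisfy, while the closed form lets you plug in x and read off y; lacking a closed form, you have to grind out an approximation, for instance by bisection:

def y_closed_form(x):
    return (3 - x**3) ** (1 / 3)          # valid when 3 - x^3 > 0

def y_numerical(x, lo=-10.0, hi=10.0, tol=1e-12):
    # solve x^3 + y^3 - 3 = 0 for y by bisection; the left side is increasing in y
    f = lambda y: x**3 + y**3 - 3
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(y_closed_form(1.0), y_numerical(1.0))   # both ~1.2599

Real physical equations are vastly harder than this toy one, of course; the point is only that “no closed form” means every answer has to be ground out numerically, step by step, like the second function.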

This is not particularly surprising if you assume we’re in the actual universe. I have no particular reason to think that the fundamental laws underlying reality should be of a form that is exactly solvable to minds like my own, or even solvable at all in any but a trivial sense. (They must be “solvable” in the sense of actually resulting in something in particular happening at any given time, but that’s all.)

But it is extremely surprising if you assume we’re in a universe that is simulated by posthumans. If posthumans are similar to us, but… more so I guess, then when they set about to simulate a universe, they should do so in a fashion not too dissimilar from how we would do it. And how would we do it? We’d code in a bunch of laws into a computer in discrete time (and definitely not with time-steps of 10^-44 seconds either!), and those laws would have to be encoded as functions, not equations. There could be many inputs in many different forms, perhaps even involving mathematical operations we haven’t invented yet—but each configuration of inputs would have to yield precisely one output, if the computer program is to run at all.
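In other words, whatever the laws turned out to be, the simulator’s core loop would have to look something like this sketch (in Python; the state and update rule are placeholders of my own, not a guess at what posthumans would actually compute):

def step(state, dt):
    # one deterministic update: each input configuration yields exactly one output
    position, velocity = state
    return (position + velocity * dt, velocity)

state = (0.0, 1.0)
for _ in range(1000):
    state = step(state, dt=1e-3)       # discrete time, not continuous
print(state)                           # roughly (1.0, 1.0) after one simulated second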

Indeed, if they are really like us, then their computers will probably only be capable of one core operation—conditional bit flipping, 1 to 0 or 0 to 1 depending on some state—and the rest will be successive applications of that operation. Bit shifts are many bit flips, addition is many bit shifts, multiplication is many additions, exponentiation is many multiplications. We would therefore expect the fundamental equations of the simulated universe to have an extremely simple functional form, literally something that can be written out as many successive steps of “if A, flip X to 1” and “if B, flip Y to 0”. It could be a lot of such steps mind you—existing programs require billions or trillions of such operations—but one thing it could never be is a partial differential equation that cannot be solved exactly.
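Here is a toy version of that tower, in Python, for non-negative integers: addition built out of bitwise operations, multiplication out of additions, exponentiation out of multiplications. (This is purely illustrative; real hardware, let alone a posthuman simulator, would be enormously more optimized.)

def add(a, b):
    while b:                       # propagate carries until none remain
        carry = (a & b) << 1       # positions that generate a carry
        a = a ^ b                  # bitwise sum ignoring carries
        b = carry
    return a

def multiply(a, b):
    total = 0
    for _ in range(b):             # multiplication as repeated addition
        total = add(total, a)
    return total

def power(a, n):
    result = 1
    for _ in range(n):             # exponentiation as repeated multiplication
        result = multiply(result, a)
    return result

print(add(17, 25), multiply(6, 7), power(2, 10))   # 42 42 1024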

What fans of the Simulation Argument seem to forget is that while this simple set of operations is extremely general, capable of generating quite literally any possible computable function (Turing proved that), it is not capable of generating any function that isn’t computable, much less any equation that can’t be solved into a function. So unless the laws of the universe can actually be reduced to computable functions, it’s not even possible for us to be inside a computer simulation.

What is the probability that all the fundamental equations of the universe can be reduced to computable functions? Well, it’s difficult to assign a precise figure of course. I have no idea what new discoveries might be made in science or mathematics in the next thousand years (if I did, I would make a few and win the Nobel Prize). But given that we have been trying to get closed-form solutions for the fundamental equations of the universe and failing miserably since at least Isaac Newton, I think that probability is quite small.

Then there’s the fact that (again unless you believe some humans in our universe are NPCs) there are 7.3 billion minds (and counting) that you have to simulate at once, even assuming that the simulation only includes this planet and yet somehow perfectly generates an apparent cosmos that even behaves as we would expect under things like parallax and redshift. There’s the fact that whenever we try to study the fundamental laws of our universe, we are able to do so, and never run into any problems of insufficient resolution; so apparently at least this planet and its environs are being simulated at the scale of nanometers and femtoseconds. This is a ludicrously huge amount of data, and while I cannot rule out the possibility of some larger universe existing that would allow a computer large enough to contain it, you have a very steep uphill battle if you want to argue that this is somehow what our posthuman descendants will consider the best use of their time and resources. Bostrom uses the video game comparison to make it sound like they are just cranking out copies of Halo 917 (“Plasma rifles? How quaint!”) when in fact it amounts to assuming that our descendants will just casually create universes of 10^50 particles running over space intervals of 10^-9 meters and time-steps of 10^-15 seconds that contain billions of actual sentient beings and thousands of genocides, and furthermore do so in a way that somehow manages to make the apparent fundamental equations inside those universes unsolvable.
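To put rough numbers on “ludicrously huge” (my own back-of-the-envelope arithmetic, using the illustrative figures above):

particles = 1e50
timestep = 1e-15                              # seconds of simulated time per update
age_of_universe = 1.38e10 * 3.15e7            # ~4.3e17 seconds of simulated history

updates_per_simulated_second = particles / timestep
total_updates = updates_per_simulated_second * age_of_universe
print(f"{updates_per_simulated_second:.0e} updates per simulated second")   # ~1e65
print(f"{total_updates:.0e} updates over the history of the universe")      # ~4e82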

Indeed, I think it’s conservative to say that the likelihood ratio is 10^12—observing what we do is a trillion times more likely if this is the real universe than if it’s a simulation. Therefore, unless you believe that our posthuman descendants would have reason to create at least a billion simulations of universes like our own, you can assign a probability that we are in the actual universe of at least 99.9%.
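Plugging those numbers into the little posterior_real sketch from earlier (a billion simulations per real universe means prior odds of a billion to one against this being the real universe):

print(posterior_real(prior_real=1 / (1 + 1e9), likelihood_ratio=1e12))
# posterior odds of 10^12 / 10^9 = 1000 to 1 in favor of the real universe, i.e. ~0.999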

As indeed I do.