Krugman and rockets and feathers

Jul 17 JDN 2459797

Well, this feels like a milestone: Paul Krugman just wrote a column about a topic I’ve published research on. He didn’t actually cite our paper—in fact the literature review he links to is from 2014—but the topic is very much what we were studying: Asymmetric price transmission, ‘rockets and feathers’. He’s even talking about it from the perspective of industrial organization and market power, which is right in line with our results (and a bit different from the mainstream consensus among economic policy pundits).

The phenomenon is a well-documented one: When the price of an input (say, crude oil) rises, the price of outputs made from that input (say, gasoline) rises immediately, and basically one to one, sometimes even more than one to one. But when the price of an input falls, the price of outputs only falls slowly and gradually, taking a long time to converge to the same level as the input prices. Prices go up like a rocket, but down like a feather.

Many different explanations have been proposed for this phenomenon, and they aren’t all mutually exclusive. They include various aspects of market structure, substitution between inputs, and the use of inventories to smooth out the effects of price changes.

One that I find particularly unpersuasive is the notion of menu costs: That it requires costly effort to actually change your prices, and this somehow results in the asymmetry. Most gas stations have digital price boards; it requires almost zero effort for them to change prices whenever they want. Moreover, there’s no clear reason this would result in asymmetry between raising and lowering prices. Some models extend the notion of “menu cost” to include expected customer responses, which is a much better explanation; but I think that’s far beyond the original meaning of the concept. If you fear to change your price because of how customers may respond, finding a cheaper way to print price labels won’t do a thing to change that.

But our paper—and Krugman’s article—is about one factor in particular: market power. We don’t see prices behave this way in highly competitive markets. We see it the most in oligopolies: Markets where there are only a small number of sellers, who thus have some control over how they set their prices.

Krugman explains it as follows:

When oil prices shoot up, owners of gas stations feel empowered not just to pass on the cost but also to raise their markups, because consumers can’t easily tell whether they’re being gouged when prices are going up everywhere. And gas stations may hang on to these extra markups for a while even when oil prices fall.

That’s actually a somewhat different mechanism from the one we found in our experiment, which is that asymmetric price transmission can be driven by tacit collusion. Explicit collusion is illegal: You can’t just call up the other gas stations and say, “Let’s all set the price at $5 per gallon.” But you can tacitly collude by responding to how they set their prices, and not trying to undercut them even when you could get a short-run benefit from doing so. It’s actually very similar to an Iterated Prisoner’s Dilemma: Cooperation is better for everyone, but worse for you as an individual; to get everyone to cooperate, it’s vital to severely punish those who don’t.

In our experiment, the participants acted as businesses setting their prices. The customers were fully automated, so there was no opportunity to “fool” them in this way. We also excluded any kind of menu costs or product inventories. But we still saw prices go up like rockets and down like feathers. Moreover, prices were always substantially higher than costs, especially during the phase when they were drifting down like feathers.

Our explanation goes something like this: Businesses are trying to use their market power to maintain higher prices and thereby make higher profits, but they have to worry about other businesses undercutting their prices and taking all the business. Moreover, they also have to worry about others thinking that they are trying to undercut prices—they want to be perceived as cooperating, not defecting, in order to preserve the collusion and avoid being punished.

Consider how this affects their decisions when input prices change. If the price of oil goes up, then there’s no reason not to raise the price of gasoline immediately, because that isn’t violating the collusion. If anything, it’s being nice to your fellow colluders; they want prices as high as possible. You’ll want to raise the prices as high and fast as you can get away with, and you know they’ll do the same. But if the price of oil goes down, now gas stations are faced with a dilemma: You could lower prices to get more customers and make more profits, but the other gas stations might consider that a violation of your tacit collusion and could punish you by cutting their prices even more. Your best option is to lower prices very slowly, so that you can take advantage of the change in the input market, but also maintain the collusion with other gas stations. By slowly cutting prices, you can ensure that you are doing it together, and not trying to undercut other businesses.
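
To make that asymmetry concrete, here is a toy simulation in Python. To be clear, this is just my own illustration, not the model from our paper or from Krugman’s column: the cost path, the 20% markup, and the 10%-per-period downward adjustment are arbitrary assumptions chosen only to produce the ‘rocket and feather’ shape.

# Toy illustration only: a seller passes cost increases through at once,
# but closes only a fraction of the gap each period when costs fall.
# The markup and adjustment speed are made-up numbers, not estimates.
def simulate_prices(costs, markup=1.20, down_speed=0.10):
    prices = []
    price = costs[0] * markup
    for c in costs:
        target = c * markup
        if target >= price:
            price = target                          # rocket: jump up immediately
        else:
            price -= down_speed * (price - target)  # feather: drift down slowly
        prices.append(round(price, 3))
    return prices

# A cost spike followed by an equal-sized cost decline:
costs = [1.00] * 5 + [1.50] * 5 + [1.00] * 10
print(simulate_prices(costs))
# The price hits 1.80 the moment costs rise, but takes many periods
# to drift back toward 1.20 after costs fall again.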

Krugman’s explanation and ours are not mutually exclusive; in fact I think both are probably happening. They have one important feature in common, which fits the empirical data: Markets with less competition show greater degrees of asymmetric price transmission. The more concentrated the oligopoly, the more we see rockets and feathers.

They also share an important policy implication: Market power can make inflation worse. Contrary to what a lot of economic policy pundits have been saying, it isn’t ridiculous to think that breaking up monopolies or putting pressure on oligopolies to lower their prices could help reduce inflation. It probably won’t be as reliably effective as the Fed’s buying and selling of bonds to adjust interest rates—but we’re also doing that, and the two are not mutually exclusive. Besides, breaking up monopolies is a generally good thing to do anyway.

It’s not that unusual that I find myself agreeing with Krugman. I think what makes this one feel weird is that I have more expertise on the subject than he does.

How to pack the court

Jul 10 JDN 2459790

By now you have no doubt heard the news that Roe v. Wade was overturned. The New York Times has an annotated version of the full opinion.

My own views on abortion are like those of about 2/3 of Americans: More nuanced than can be neatly expressed by ‘pro-choice’ or ‘pro-life’, much more comfortable with first-trimester abortion (which is what 90% of abortions are, by the way) than later, and opposed to overturning Roe v. Wade in its entirety. I also find great appeal in Clinton’s motto on the issue: “safe, legal, and rare”.

Several years ago I moderated an online discussion group that reached what we called the Twelve Week Compromise: Abortion would be legal for any reason up to 12 weeks of pregnancy, after which it would only be legal for extenuating circumstances including rape, incest, fetal nonviability, and severe health risk to the mother. This would render the vast majority of abortions legal without simply saying that abortion should be permitted without question. Roe v. Wade was actually slightly more permissive than this, but it was itself a very sound compromise.

But even if you didn’t like Roe v. Wade, you should be outraged at the manner in which it was overturned. If the Supreme Court can simply change its mind on rights that have been established for nearly 50 years, then none of our rights are safe. And in chilling comments, Clarence Thomas has declared that this is his precise intention: “In future cases, we should reconsider all of this Court’s substantive due process precedents, including Griswold, Lawrence, and Obergefell.” That is to say, Thomas wants to remove our rights to use contraception and have same-sex relationships. (If Lawrence were overturned, sodomy could be criminalized in several states!)

The good news here is that even the other conservative justices seem much less inclined to overturn these other precedents. Kavanaugh’s concurring opinion explicitly states he has no intention of overturning “Griswold v. Connecticut, 381 U. S. 479 (1965); Eisenstadt v. Baird, 405 U. S. 438 (1972); Loving v. Virginia, 388 U. S. 1 (1967); and Obergefell v. Hodges, 576 U. S. 644 (2015)”. It seems quite notable that Thomas did not mention Loving v. Virginia, seeing as it was decided around the same time as Roe v. Wade, based on very similar principles—and it affects him personally. And even if these precedents are unlikely to be overturned immediately, this ruling shows that the security of all of our rights can depend on the particular inclinations of individual justices.

The Supreme Court is honestly a terrible institution. Courts should not be more powerful than legislatures, lifetime appointments reek of monarchism, and the claim of being ‘apolitical’ that was dubious from the start is now obviously ludicrous. But precisely because it is so powerful, reforming it will be extremely difficult.

The first step is to pack the court. The question is no longer whether we should pack the court, but how, and why we didn’t do it sooner.

What does it mean to pack the court? Increase the number of justices, appointing new ones who are better than the current ones. (Since almost any randomly-selected American would be better than Clarence Thomas, Samuel Alito, or Brett Kavanaugh, this wouldn’t be hard.) This is 100% Constitutional, as the Constitution does not in any way restrict the number of justices. It can simply be done by an act of Congress.

But of course we can’t stop there. President Biden could appoint four more justices, and then whoever comes after him could appoint another three, and before we know it the Supreme Court has twenty-seven justices and each new President is expected to add a few more.

No, we need to fix the number of justices so that it can’t be increased any further. Ideally this would be done by Constitutional Amendment, though the odds of getting such a thing passed seem rather slim. But there is in fact a sensible way to add new justices now and then justify not adding any more later, and that is to tie justices to federal circuits.

There are currently 13 US federal circuit courts. If we added 4 more Supreme Court justices, there would be 13 Supreme Court justices. Each could even be assigned as the nominal head of one federal circuit, responsible for being the first to read appeals coming from that circuit.

Which justice goes where? Well, what if we let the circuits themselves choose? The selection could be made by a popular vote among the people who live in that circuit. Since the Federal Circuit’s jurisdiction is nationwide, its justice could be chosen by a national popular vote, and that justice could also serve as the Chief Justice.

That would also require a Constitutional Amendment, but it would, at a stroke, fundamentally reform what the Supreme Court is and how its justices are chosen. For now, we could simply add four new justices, bringing the total to 13. Then they could decide amongst themselves who will get which circuit until we implement the full system of letting circuits choose their justices.

I’m well aware that electing judges is problematic—but at this point I don’t think we have a choice. (I would also prefer to re-arrange the circuits: it’s weird that DC gets its own circuit instead of being part of circuit 4, and circuit 9 has way more people than circuit 1.) We can’t simply trust each new President to appoint a new justice whenever one happens to retire or die and then leave that justice in place for decades to come. Not in a world where someone like Donald Trump can be elected President.

A lot of centrist people are uncomfortable with such a move, seeing it as ‘playing dirty’. But it’s not. It’s playing hardball—taking seriously the threat that the current Republican Party poses to the future of American government and society, and taking substantive steps to fight that threat. (After its authoritarian shift that started in the mid 2000s but really took off under Trump, the Republican Party now has more in common with far-right extremist parties like Fidesz in Hungary than with mainstream center-right parties like the Tories.) But there is absolutely nothing un-Constitutional about this plan. It’s doing everything possible within the law.

We should have done this before they started overturning landmark precedents. But it’s not too late to do it before they overturn any more.

Why copyrights should be shorter

Jul 3 JDN 2459783

The copyright protection for Mickey Mouse is set to expire in 2024, though a recently-proposed bill that specifically targets large corporations would cause it to end immediately. Steamboat Willie was released in 1928.

This means that Mickey Mouse has been under copyright protection for 94 years, and is scheduled to last 96. Let me remind you that Walt Disney has been dead since 1966. This is, quite frankly, ridiculous. Mickey Mouse should have been released into the public domain decades ago.

Copyright in general has quite a shaky justification, and there are those who argue that it should be eliminated entirely. There’s something profoundly weird—and fundamentally monopolistic—about banning people from copying things.

But clearly we do need some way of ensuring that creators of artistic works can be fairly compensated for their efforts. Copyright is not the only way to do that: A few alternatives that I think are worth considering are expanded crowdfunding (Patreon and Kickstarter already support quite a few artists, though most of them not by very much), a large basic income (artists would still create even if they weren’t paid; they really just need money to live on), government grants directly to artists (we have the National Endowment for the Arts, but it doesn’t support very many artists), and some kind of central clearinghouse that surveys consumers about the art they enjoy and then compensates artists according to how much their work is appreciated. But all of these would require substantial changes, and suffer from their own flaws, so for the time being, let’s say we stick with copyright.

Even so, it’s utterly ludicrous that Disney has managed to hold onto the copyright on Mickey Mouse for this long. It makes absolutely no sense from the perspective of supporting artists—indeed, in this case the artist has been dead for over 50 years.

In fact, it wouldn’t even make sense if Walt Disney were still alive. (Not many people live 96 years past their first highly-successful creative work, but it’s at least possible, if you, say, published as a child and then lived to be a centenarian.) If the goal is to incentivize new creative art, the first few decades—indeed, the first few years—are clearly the most important for doing so.

To show why this is, I need to take a brief detour into finance, and the concept of a net present value.

As the saying goes: Time is money. $1 today is worth more than $1 a year from now. (And if you doubt this, let me remind you of the old joke: “I’ll give you $1 million dollars if you give me $100! Such a deal! Give me the $100 today, and I’ll give you $1 per year for the next million years.”)

The idea of a net present value is to precisely quantify the monetary value of time (or the time value of money), so that we can compare cashflows over time in a directly comparable way.

To compute a net present value, you need a discount rate. At a discount rate of r, an amount of money X that you get 1 year from now is worth X/(1+r). The discount rate should be positive, because money later is worth less than money now; this means that we want X/(1+r) < X, and therefore r > 0.

This is surprisingly hard to get precisely, but relatively easy to ballpark. A good guess is that it’s somewhere close to the prevailing interest rate, or maybe the average return on the stock market. It should definitely be at least the inflation rate. Right now inflation is running a little high (around 8%), so we’d want to use a relatively high discount rate currently, maybe 10% or 12%. But I think in a more typical scenario, something more like 5-6% would be a reasonable guess.

Once you have a discount rate, it’s pretty simple to figure out the net present value: Just add up all the future cashflows, each discounted by that discount rate for the time you have to wait for it.

So for instance if you get $100 per year for the next 5 years, this would be your net present value:

100/(1+r) + 100/(1+r)^2 + 100/(1+r)^3 + 100/(1+r)^4 + 100/(1+r)^5

If you get $50 this year, $60 next year, $70 the year after that, this would be your net present value:

50 + 60/(1+r) + 70/(1+r)^2

If the cashflow is the same X over time for some fixed amount of time T this can be collapsed into a single formula using a geometric series:

X (1 – (1+r)^(-T))/r

This is really just a more compact way of adding up X/(1+r) + X/(1+r)^2 + … + X/(1+r)^T; here, let’s do that example of $100 per year for 5 years, with r = 10%.

100/1.1 + 100/1.1^2 + 100/1.1^3 + 100/1.1^4 + 100/1.1^5 = $379

100 (1 – 1.1^(-5))/0.1 = $379

See, we get the same answer either way. Notice that this is less than $100 * 5 = $500, which is what we’d get if we had assumed that $1 a year from now is worth the same as $1 today. But it’s not too much less, because it’s only 5 years.
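
If you want to check that arithmetic yourself, here is a quick Python sketch (my own, purely for verification) that computes the value both ways:

def npv_sum(payment, r, years):
    # Sum of discounted payments, with the first payment one year from now.
    return sum(payment / (1 + r) ** t for t in range(1, years + 1))

def npv_formula(payment, r, years):
    # Closed-form annuity value: X (1 - (1+r)^(-T)) / r.
    return payment * (1 - (1 + r) ** (-years)) / r

print(round(npv_sum(100, 0.10, 5)))      # 379
print(round(npv_formula(100, 0.10, 5)))  # 379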

This formula allows us to consider what happens when the time interval becomes extremely long—even infinite. It gives us the power to ask the question, “What is the value of this perpetual cashflow?”

This feels a bit weird for individuals, since of course we die. We can have heirs, but rare indeed is the thousand-year dynasty. (The Imperial House of Japan does appear to have an unbroken hereditary line for the last 2000 years, but they’re basically alone in that.) But governments and corporations don’t have a lifespan, so it makes more sense for them. The US government was here 200 years ago, and may still be here 200 years from now. Oxford was here 900 years ago, and I see no particular reason to think it won’t still be here 900 years from now.

Since r > 0, (1+r)^(-T) gets smaller as T increases. As T approaches infinity, (1+r)^(-T) approaches zero. So for a perpetual cashflow, we can just make this term zero.

Thus, we can actually assess the value of $1 per year for the next million years! It is this:

1 (1-(1+r)^(-10^6))/r

which is basically the same as this:

1/r

So if your discount rate is 10%, then $1 per year for 1 million years is worth about as much to you as $1/0.1 = $10 today. If your discount rate is 5%, it would be worth about $1/0.05 = $20 today. And suddenly it makes sense that you’re not willing to pay $100 for this deal.
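
The same closed-form function confirms those two numbers (again, just a quick numerical check):

def npv_formula(payment, r, years):
    # Same annuity formula as above: X (1 - (1+r)^(-T)) / r.
    return payment * (1 - (1 + r) ** (-years)) / r

print(round(npv_formula(1, 0.10, 10**6), 2))  # 10.0, i.e. 1/0.10
print(round(npv_formula(1, 0.05, 10**6), 2))  # 20.0, i.e. 1/0.05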

What if the cashflow is changing? Then this formula won’t work. But if it’s simply a constant rate of growth, we can adjust for that. If the growth rate of the cashflow is g, so that you get X, then X (1+g), then X (1+g)^2, and so on, the formula becomes just a bit more complicated:

X (1-(1+r-g)^(-T))/(r-g)

So for instance if your cashflow grows at 6% per year and your discount rate is 10%, then it’s basically the same as if it didn’t grow at all but your discount rate is 4%. [This is actually an approximation, but it’s a pretty good one.] Let’s call this the effective discount rate.

For a perpetual cashflow, as long as r > g, this becomes:

X/(r-g)

With this in mind, let’s return to the question of copyright. How long should copyright protection last?

We want it to last long enough for artists to be fairly compensated for their work; but what does “fairly compensated” mean? Well, with the concept of a perpetual net present value in mind, we could quantify this as the majority of all revenue that would be expected to be earned by a perpetual copyright.

I think this is actually quite generous: We’re saying that you should get to keep the copyright long enough to get most of what you’d probably get if we allowed you to own it forever. In some cases this might actually result in a copyright that’s too long; but I don’t see how it could result in it being too short.

Mickey Mouse today earns about $3 million per year. That’s honestly amazing, to continue to rake in that much money after such a long period. But, adjusted for inflation, that’s actually quite a bit less than what he took in just a few years after his first films were released: nominally $1 million per year, which comes to more like $19 million per year in today’s money.

This means that our discount rate is larger than our growth rate (r > g) even if r is just inflation; but in fact we should use a discount rate higher than inflation. Let’s use a plausible but slightly conservative discount rate of 5%.

To grow from nominally $1 million to nominally $3 million per year in 94 years means a growth rate of about 1% per year.

So, our effective discount rate is 4%.

Then, a perpetual copyright for Mickey Mouse should be worth approximately:

X/(r-g) = 10^6/(0.04) = $25 million

Yes, that’s right; an unending stream of over $1 million per year ends up being worth about the same as a single payment of $25 million way back in 1928.

But isn’t Mickey Mouse a “fictional billionaire”, meaning his total income over his existence has been more than $1 billion? Sure. And indeed, at a discount rate of 5%, $1 billion today is worth about $10 million in 1928. So Mickey is indeed well above that. Even if I use Forbes’s higher estimate that Mickey Mouse has taken in $5.8 billion, that would still only be a net present value of $59 million in 1928.
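
Here are those present-value checks in code (the 5% discount rate, the 94-year gap, and the revenue figures are just the assumptions above):

def pv_in_1928(amount_today, r=0.05, years=94):
    # Present value, back in 1928, of a lump sum received 94 years later.
    return amount_today / (1 + r) ** years

print(round(pv_in_1928(1.0e9) / 1e6, 1))  # ~10.2 million 1928 dollars
print(round(pv_in_1928(5.8e9) / 1e6, 1))  # ~59.1 million 1928 dollars

# Perpetual-copyright value with X = $1 million/year and r - g = 4%:
print(1e6 / 0.04)                         # 25000000.0, i.e. $25 million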

Remember, time is money. When it takes this long to get a cashflow, it ends up worth substantially less.

So, if we were aiming to let Mickey earn half of his perpetual earnings in net present value, when should we have ended his copyright? By my estimate, when the net present value of earnings exceeded $12.5 million. If we use Forbes’s more generous estimate, when it exceeded $30 million.

So now let’s go back to the formula for a finite time horizon, and try to solve it for T, the time horizon. We want the net present value of the finite horizon to be half that of the infinite horizon:

X (1-(1+r-g)^(-T))/(r-g) = (X/2)/(r-g)

(1+r-g)^(-T) = 1/2

To solve this for T, I’ll need to use a logarithm, the inverse of an exponent.

T = ln(2)/ln(1+r – g)

This is a doubling time, very analogous to a half-life in physics. Since logarithms are very difficult to do by hand, if you don’t have a scientific calculator handy, you can also approximate it by dividing the percentage into 69:

T = 69/(r-g)%

This is because ln(2) = 0.69…, and when r-g is a small percentage, ln(1+r-g) is about the same as r-g.

For an effective discount rate of 4%, this becomes:

T = ln(2)/ln(1.04) ≈ 69/4 ≈ 17

That is, only seventeen years. Even for a hugely successful long-running property like Mickey Mouse (in fact, is there really anything on a par with Mickey Mouse?), the majority of the net present value was earned in less than 20 years.

Indeed, it seems especially sensible in this case, because back then, Walt Disney was still alive! He could actually enjoy the fruits of his labors for that period. Now it’s all going to some faceless shareholders of a massive megacorporation, only a few of whom are even Walt Disney’s heirs. Only about 3% of Disney shares are owned by anyone actually in the Disney family.

This gives us an answer to the question, “How long should copyrights last?”: About 20 years.

If we’d used a higher discount rate, it would be even shorter: at an effective rate of 10%, you get only about 7 years.

And a lower discount rate simply isn’t plausible; inflation and stock market growth are both too fast for net present value to be discounted much less than 4% or 5%. Maybe you could go as low as 3%, which would be 23 years.
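
Here is the duration calculation for a few effective discount rates (these are just the rates discussed above; the exact logarithm gives slightly different answers than the divide-into-69 shortcut):

from math import log

def copyright_years(effective_rate):
    # Years until the NPV reaches half the perpetual value: T = ln(2)/ln(1+r-g).
    return log(2) / log(1 + effective_rate)

for rate in (0.03, 0.04, 0.10):
    print(f"{rate:.0%}: {copyright_years(rate):.1f} years")
# 3%: 23.4 years
# 4%: 17.7 years
# 10%: 7.3 years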

Does this accomplish the goal of copyrights—which, remember, was to fairly compensate artists and incentivize the creation of artistic works? I’d say so. They get half of what they would have gotten if we never released their work into the public domain, and I don’t think I’ve ever met an artist who could honestly say that they’d create something if they could hold onto the rights for 96 years, but not if they could for only 20 years. (Maybe they exist, but if so, they are rare.) Most artists really just want to be credited—not paid, credited—for their work and to make a decent living. 20 years is enough for that.

This means that our current copyright system keeps works out of public domain nearly five times as long as there is any real economic justification for.

Small deviations can have large consequences.

Jun 26 JDN 2459787

A common rejoinder that behavioral economists get from neoclassical economists is that most people are mostly rational most of the time, so what’s the big deal? If humans are 90% rational, why worry so much about the other 10%?

Well, it turns out that small deviations from rationality can have surprisingly large consequences. Let’s consider an example.

Suppose we have a market for some asset. Without even trying to veil my ulterior motive, let’s make that asset Bitcoin. Its fundamental value is of course $0; it’s not backed by anything (not even taxes or a central bank), it has no particular uses that aren’t already better served by existing methods, and it’s not even scalable.

Now, suppose that 99% of the population rationally recognizes that the fundamental value of the asset is indeed $0. But 1% of the population doesn’t; they irrationally believe that the asset is worth $20,000. What will the price of that asset be, in equilibrium?

If you assume that the majority will prevail, it should be $0. If you did some kind of weighted average, you’d think maybe its price will be something positive but relatively small, like $200. But is this actually the price it will take on?

Consider someone who currently owns 1 unit of the asset, and recognizes that it is fundamentally worthless. What should they do? Well, if they also know that there are people out there who believe it is worth $20,000, the answer is obvious: They should sell it to those people. Indeed, they should sell it for something quite close to $20,000 if they can.

Now, suppose they don’t already own the asset, but are considering whether or not to buy it. They know it’s worthless, but they also know that there are people who will buy it for close to $20,000. Here’s the kicker: This is a reason for them to buy it at anything meaningfully less than $20,000.

Suppose, for instance, they could buy it for $10,000. Spending $10,000 to buy something you know is worthless seems like a terribly irrational thing to do. But it isn’t irrational, if you also know that somewhere out there is someone who will pay $20,000 for that same asset and you have a reasonable chance of finding that person and selling it to them.

The equilibrium outcome, then, is that the price of the asset will be almost $20,000! Even though 99% of the population recognizes that this asset is worthless, the fact that 1% of people believe it’s worth as much as a car will result in it selling at that price. Thus, even a slight deviation from a perfectly-rational population can result in a market that is radically at odds with reality.
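
Here is a rough way to see how close to $20,000 the price should get. A risk-neutral seller’s willingness to pay is roughly the chance of eventually finding one of the believers, times what that believer will pay. The 1-in-100 matching probability and the 500 trading opportunities below are made-up parameters, purely for illustration:

def resale_value(believer_price=20_000, p_believer=0.01, opportunities=500):
    # Chance of meeting at least one believer across random matches,
    # times the price that believer will pay. (No discounting; the
    # number of opportunities is an arbitrary assumption.)
    p_find = 1 - (1 - p_believer) ** opportunities
    return p_find * believer_price

print(round(resale_value()))  # ~19869: nearly the full $20,000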

And it gets worse! Suppose that in fact everyone knows that the asset is worthless, but most people think that there is some small portion of the population who believes the asset has value. Then, it will still be priced at that value in equilibrium, as people trade it back and forth searching in vain for the person who really wants it! (This is called the Greater Fool theory.)

That is, the price of an asset in a free market—even in a market where most people are mostly rational most of the time—will in fact be determined by the highest price anyone believes that anyone else thinks it has. And this is true of essentially any asset market—any market where people are buying something, not to use it, but to sell it to someone else.

Of course, beliefs—and particularly beliefs about beliefs—can very easily change, so that equilibrium price could move in any direction basically without warning.

Suddenly, the cycle of bubble and crash, boom and bust, doesn’t seem so surprising, does it? The wonder is that prices ever become stable at all.


Then again, do they? Last I checked, the only prices that were remotely stable were for goods like apples and cars and televisions, goods that are bought and sold to be consumed. (Or national currencies managed by competent central banks, whose entire job involves doing whatever it takes to keep those prices stable.) For pretty much everything else—and certainly any purely financial asset that isn’t a national currency—prices are indeed precisely as wildly unpredictable and utterly irrational as this model would predict.

So much for the Efficient Market Hypothesis? Sadly I doubt that the people who still believe this nonsense will be convinced.

Multilevel selection: A tale of three tribes

Jun 19 JDN 2459780

There’s something odd about the debate in evolutionary theory about multilevel selection (sometimes called “group selection”). On one side are the mainstream theorists who insist that selection only happens at the individual level (or is it the gene level?); and on the other are devout group-selectionists who insist that group selection is everywhere and the only possible explanation of altruism.

Both of these sides are wrong. Selection does happen at multiple levels, but it’s entirely possible for altruism to emerge without it.

The usual argument by the mainstream is that group selection would require the implausible assumption that groups live and die on the same timescale as individuals. The usual argument by group-selectionists is that there’s no other explanation for why humans are so altruistic. But neither of these things is true.

There is plenty of discussion out there about why group selection isn’t necessary for altruism: Kin selection is probably the clearest example. So I’m going to focus on showing that group selection can work even when groups live and die much slower than individuals.

To do this, I would like to present you a model. It’s a very pared-down, simplified version, but it is nevertheless a valid evolutionary game theory model.

Consider a world where the only kind of interaction is the Iterated Prisoner’s Dilemma. For the uninitiated, an Iterated Prisoner’s Dilemma works as follows.

Time goes on forever. At each point in time, some people are born, and some people die; people have a limited lifespan and some general idea of how long it is, but nobody can predict for sure when they will die. (So far, this isn’t even a model; all of this is literally true.)

In this world, people are randomly matched with others one on one, and they play a game together, where each person can choose either “Cooperate” or “Defect”. They choose in secret and reveal simultaneously. If both choose “Cooperate”, everyone gets 3 points. If both choose “Defect”, everyone gets 2 points. If one chooses “Cooperate” and the other chooses “Defect”, the “Cooperate” person gets only 1 point while the “Defect” person gets 4 points.

What are these points? Since this is evolution, let’s call them offspring. An average lifetime score of 4 points means 4 offspring per couple per generation—you get rapid population growth. 1 point means 1 offspring per couple per generation—your genes will gradually die out.

That makes the payoffs follow this table:


       C       D
C     3, 3    1, 4
D     4, 1    2, 2

There are two very notable properties of this game; together they seem paradoxical, which is probably why the game has such broad applicability and such enduring popularity.

  1. Everyone, as a group, is always better off if more people choose “Cooperate”.
  2. Each person, as an individual, regardless of what the others do, is always better off choosing “Defect”.

Thus, Iterated Prisoner’s Dilemmas are ideal for understanding altruism, as they directly model a conflict between individual self-interest and group welfare. (They didn’t do a good job of explaining it in A Beautiful Mind, but that one line in particular was correct: the Prisoner’s Dilemma is precisely what proves “Adam Smith was wrong.”)

Each person is matched with someone else at random for a few rounds, and then re-matched with someone else; and nobody knows how long they will be with any particular person. (For technical reasons, with these particular payoffs, the chance of going to another round needs to be at least 50%; but that’s not too important for what I have to say here.)

Now, suppose there are three tribes of people, who are related by family ties but also still occasionally intermingle with one another.

In the Hobbes tribe, people always play “Defect”.

In the Rousseau tribe, people always play “Cooperate”.

In the Axelrod tribe, people play “Cooperate” the first time they meet someone, then copy whatever the other person did in the previous round. (This is called “tit for tat”.)

How will these tribes evolve? In the long run, will all tribes survive, or will some prevail over others?

The Rousseau tribe seems quite nice; everyone always gets along! Unfortunately, the Rousseau tribe will inevitably and catastrophically collapse. As soon as a single Hobbes gets in, or a mutation arises to make someone behave like a Hobbes, that individual will become far more successful than everyone else, have vastly more offspring, and ultimately take over the entire population.

The Hobbes tribe seems pretty bad, but it’ll be stable. If a Rousseau should come visit, they’ll just be ruthlessly exploited, which makes the Hobbeses better off. If an Axelrod arrives, they’ll learn not to be exploited (after the first encounter), but they won’t do any better than the Hobbeses do.

What about the Axelrod tribe? They seem similar to the Rousseau tribe, because everyone is choosing “Cooperate” all the time—will they suffer the same fate? No, they won’t! They’ll do just fine, it turns out. Should a Rousseau come to visit, nobody will even notice; they’ll just keep on choosing “Cooperate” and everything will be fine. And what if a Hobbes comes? They’ll try to exploit the Axelrods, and succeed at first—but soon enough they will be punished for their sins, and in the long run they’ll be worse off (this is why the probability of continuing needs to be sufficiently high).

The net result, then, will be that the Rousseau tribe dies out and only the Hobbes and Axelrod tribes remain. But that’s not the end of the story.

Look back at that payoff table. Both tribes are stable, but each round the Hobbeses are getting 2, while the Axelrods are getting 3. Remember that these are offspring per couple per generation. This means that the Hobbes tribe will have a roughly constant population, while the Axelrod tribe will have an increasing population.

If the two tribes then come into conflict, perhaps competing over resources, the larger population will most likely prevail. This means that, in the long run, the Axelrod tribe will come to dominate. In the end, all the world will be ruled by Axelrods.
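
If you want to watch this happen, here is a minimal simulation sketch of the model above. The payoffs are the ones from the table; the 90% continuation probability and the number of matches are my own assumptions.

import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
          ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

def hobbes(my_moves, their_moves):    # always defect
    return "D"

def rousseau(my_moves, their_moves):  # always cooperate
    return "C"

def axelrod(my_moves, their_moves):   # tit for tat
    return their_moves[-1] if their_moves else "C"

def avg_payoffs(strat_a, strat_b, cont_prob=0.9, matches=2000, seed=0):
    # Average payoff per round for each player over many iterated matches
    # of random length (the continuation probability is an assumption).
    rng = random.Random(seed)
    total_a = total_b = rounds = 0
    for _ in range(matches):
        hist_a, hist_b = [], []
        while True:
            a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
            pay_a, pay_b = PAYOFF[(a, b)]
            total_a += pay_a
            total_b += pay_b
            rounds += 1
            hist_a.append(a)
            hist_b.append(b)
            if rng.random() > cont_prob:
                break
    return total_a / rounds, total_b / rounds

print(avg_payoffs(axelrod, axelrod))  # (3.0, 3.0): Axelrods grow fastest
print(avg_payoffs(hobbes, hobbes))    # (2.0, 2.0): Hobbeses merely hold steady
print(avg_payoffs(hobbes, rousseau))  # (4.0, 1.0): Rousseaus get exploited
print(avg_payoffs(hobbes, axelrod))   # ~(2.2, 1.9): a lone Hobbes among Axelrods
                                      # earns far less than the 3.0 that Axelrods
                                      # earn with each other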

And indeed, most human beings behave like Axelrods: We’re nice to most people most of the time, but we’re no chumps. Betray our trust, and you will be punished severely. (It seems we also have a small incursion of Hobbeses: We call them psychopaths. Perhaps there are a few Rousseaus among us as well, whom the Hobbeses exploit.)

What is this? It’s multilevel selection. It’s group selection, if you like that term. There’s clearly no better way to describe it.

Moreover, we can’t simply stop at reciprocal altruism, as most mainstream theorists do; yes, Axelrods exhibit reciprocal altruism. But that’s not the only equilibrium! Why is reciprocal altruism so common? Why in the real world are there fifty Axelrods for every Hobbes? Multilevel selection.

And at no point did I assume either (1) that individual selection wasn’t operating, or (2) that the timescales of groups and individuals were the same. Indeed, I’m explicitly assuming the opposite: Individual selection continues to work at every generation, and groups only live or die over many generations.

The key insight that makes this possible is that the game is iterated—it happens over many rounds, and nobody knows exactly how many. This results in multiple Nash equilibria for individual selection, and then group selection can occur over equilibria.

This is by no means restricted to the Prisoner’s Dilemma. In fact, any nontrivial game will result in multiple equilibria when it is iterated, and group selection should always favor the groups that choose a relatively cooperative, efficient outcome. As long as such a strategy emerges by mutation, and gets some chance to get a foothold, it will be successful in the long run.

Indeed, since these conditions don’t seem all that difficult to meet, we would expect that group selection should actually occur quite frequently, and should be a major explanation for a lot of important forms of altruism.

And in fact this seems to be the case. Humans look awfully group-selected. (Like I said, we behave very much like Axelrods.) Many other social species, such as apes, dolphins, and wolves, do as well. There is altruism in nature that doesn’t look group-selected, for instance among eusocial insects; but much of the really impressive altruism seems more like equilibrium selection at the group level than it does like direct selection at the individual level.

Even multicellular life can be considered group selection: A bunch of cells “agree” to set aside some of their own interest in self-replication in favor of supporting a common, unified whole. (And should any mutated cells try to defect and multiply out of control, what happens? We call that cancer.) This can only work when there are multiple equilibria to select from at the individual level—but there nearly always are.

I finally have a published paper.

Jun 12 JDN 2459773

Here it is, my first peer-reviewed publication: “Imperfect Tacit Collusion and Asymmetric Price Transmission”, in the Journal of Economic Behavior and Organization.

Due to the convention in economics that authors are displayed alphabetically, I am listed third of four, and will typically be collapsed into “Bulutay et al.”. I don’t actually think it should be “Julius et al.”; I think Dave Hales did the most important work, and I wanted it to be “Hales et al.”; but anything non-alphabetical is unusual in economics, and it would have taken a strong justification to convince the others to go along with it. This is a very stupid norm (and I attribute approximately 20% of Daron Acemoglu’s superstar status to it), but like any norm, it is difficult to dislodge.

I thought I would feel different when this day finally came. I thought I would feel joy, or at least satisfaction. I had been hoping that satisfaction would finally spur me forward in resubmitting my single-author paper, “Experimental Public Goods Games with Progressive Taxation”, so I could finally get a publication that actually does have “Julius (2022)” (or, at this rate, 2023, 2024…?). But that motivating satisfaction never came.

I did feel some vague sense of relief: Thank goodness, this ordeal is finally over and I can move on. But that doesn’t have the same motivating force; it doesn’t make me want to go back to the other papers I can now hardly bear to look at.

This reaction (or lack thereof?) could be attributed to circumstances: I have been through a lot lately. I was already overwhelmed by finishing my dissertation and going on the job market, and then there was the pandemic, and I had to postpone my wedding, and then when I finally got a job we had to suddenly move abroad, and then it was awful finding a place to live, and then we actually got married (which was lovely, but still stressful), and it took months to get my medications sorted with the NHS, and then I had a sudden resurgence of migraines which kept me from doing most of my work for weeks, and then I actually caught COVID and had to deal with that for a few weeks too. So it really isn’t too surprising that I’d be exhausted and depressed after all that.

Then again, it could be something deeper. I didn’t feel this way about my wedding. That genuinely gave me the joy and satisfaction that I had been expecting; I think it really was the best day of my life so far. So it isn’t as if I’m incapable of these feelings under my current state.

Rather, I fear that I am becoming more permanently disillusioned with academia. Now that I see how the sausage is made, I am no longer so sure I want to be one of the people making it. Publishing that paper didn’t feel like I had accomplished something, or even made some significant contribution to human knowledge. In fact, the actual work of publication was mostly done by my co-authors, because I was too overwhelmed by the job market at the time. But what I did have to do—and what I’ve tried to do with my own paper—felt like a miserable, exhausting ordeal.

More and more, I’m becoming convinced that a single experiment tells us very little, and we are being asked to present each one as if it were a major achievement when it’s more like a single brick in a wall.

But whatever new knowledge our experiments may have gleaned, that part was done years ago. We could have simply posted the draft as a working paper on the web and moved on, and the world would know just as much and our lives would have been a lot easier.

Oh, but then it would not have the imprimatur of peer review! And for our careers, that means absolutely everything. (Literally, when they’re deciding tenure, nothing else seems to matter.) But for human knowledge, does it really mean much? The more referee reports I’ve read, the more arbitrary they feel to me. This isn’t an objective assessment of scientific merit; it’s the half-baked opinion of a single randomly chosen researcher who may know next to nothing about the topic—or worse, have a vested interest in defending a contrary paradigm.

Yes, of course, what gets through peer review is of considerably higher quality than any randomly-selected content on the Internet. (The latter can be horrifically bad.) But is this not also true of what gets submitted for peer review? In fact, aren’t many blogs written by esteemed economists (say, Krugman? Romer? Nate Silver?) of considerably higher quality as well, despite having virtually none of the gatekeepers? I think Krugman’s blog is nominally edited by the New York Times, and Silver has a whole staff at FiveThirtyEight (they’re hiring, in fact!), but I’m fairly certain Romer just posts whatever he wants like I do. Of course, they had to establish their reputations (Krugman and Romer each won a Nobel). But still, it seems like maybe peer-review isn’t doing the most important work here.

Even blogs by far less famous economists (e.g. Miles Kimball, Brad DeLong) are also very good, and probably contribute more to advancing the knowledge of the average person than any given peer-reviewed paper, simply because they are more readable and more widely read. What we call “research” means going from zero people knowing a thing to maybe a dozen people knowing it; “publishing” means going from a dozen to at most a thousand; to go from a thousand to a billion, we call that “education”.

They all matter, of course; but I think we tend to overvalue research relative to education. A world where a few people know something is really not much better than a world where nobody does, while a world where almost everyone knows something can be radically superior. And the more I see just how far behind the cutting edge of research most economists are—let alone most average people—the more apparent it becomes to me that we are investing far too much in expanding that cutting edge (and far, far too much in gatekeeping who gets to do that!) and not nearly enough in disseminating that knowledge to humanity.

I think maybe that’s why finally publishing a paper felt so anticlimactic for me. I know that hardly anyone will ever actually read the damn thing. Just getting to this point took far more effort than it should have; dozens if not hundreds of hours of work, months of stress and frustration, all to satisfy whatever arbitrary criteria the particular reviewers happened to use so that we could all clear this stupid hurdle and finally get that line on our CVs. (And we wonder why academics are so depressed?) Far from being inspired to do the whole process again, I feel as if I have finally emerged from the torture chamber and may at last get some chance for my wounds to heal.

Even publishing fiction was not this miserable. Don’t get me wrong; it was miserable, especially for me, as I hate and fear rejection to the very core of my being in a way most people do not seem to understand. But there at least the subjectivity and arbitrariness of the process is almost universally acknowledged. Agents and editors don’t speak of your work being “flawed” or “wrong”; they don’t even say it’s “unimportant” or “uninteresting”. They say it’s “not a good fit” or “not what we’re looking for right now”. (Journal editors sometimes make noises like that too, but there’s always a subtext of “If this were better science, we’d have taken it.”) Unlike peer reviewers, they don’t come back with suggestions for “improvements” that are often pointless or utterly infeasible.

And unlike peer reviewers, fiction publishers acknowledge their own subjectivity and that of the market they serve. Nobody really thinks that Fifty Shades of Grey was good in any deep sense; but it was popular and successful, and that’s all the publisher really cares about. As a result, failing to be the next Fifty Shades of Grey ends up stinging a lot less than failing to be the next article in American Economic Review. Indeed, I’ve never had any illusions that my work would be popular among mainstream economists. But I once labored under the belief that it would be more important that it is true; and I guess I now consider that an illusion.

Moreover, fiction writers understand that rejection hurts; I’ve been shocked how few academics actually seem to. Nearly every writing conference I’ve ever been to has at least one seminar on dealing with rejection, often several; at academic conferences, I’ve literally never seen one. There seems to be a completely different mindset among academics—at least, the successful, tenured ones—about the process of peer review, what it means, even how it feels. When I try to talk with my mentors about the pain of getting rejected, they just… don’t get it. They offer me guidance on how to deal with anger at rejection, when that is not at all what I feel—what I feel is utter, hopeless, crushing despair.

There is a type of person who reacts to rejection with anger: Narcissists. (Look no further than the textbook example, Donald Trump.) I am coming to fear that I’m just not narcissistic enough to be a successful academic. I’m not even utterly lacking in narcissism: I am almost exactly average for a Millennial on the Narcissistic Personality Inventory. I score fairly high on Authority and Superiority (I consider myself a good leader and a highly competent individual) but very low on Exploitativeness and Self-Sufficiency (I don’t like hurting people and I know no man is an island). Then again, maybe I’m just narcissistic in the wrong way: I score quite low on “grandiose narcissism”, but relatively high on “vulnerable narcissism”. I hate to promote myself, but I find rejection devastating. This combination seems to be exactly what doesn’t work in academia. But it seems to be par for the course among writers and poets. Perhaps I have the mind of a scientist, but I have the soul of a poet. (Send me through the wormhole! Please? Please!?)

Why do poor people dislike inflation?

Jun 5 JDN 2459736

The United States and United Kingdom are both very unaccustomed to inflation. Neither has seen double-digit inflation since the 1980s.

Here’s US inflation since 1990:

And here is the same graph for the UK:

While a return to double-digits remains possible, at this point it likely won’t happen, and if it does, it will occur only briefly.

This is no doubt a major reason why the dollar and the pound are widely used as reserve currencies (especially the dollar), and is likely due to the fact that they are managed by the world’s most competent central banks. Brexit would almost have made sense if the UK had been pressured to join the Euro; but they weren’t, because everyone knew the pound was better managed.

The Euro also doesn’t have much inflation, but if anything they err on the side of too low, mainly because Germany appears to believe that inflation is literally Hitler. In fact, the rise of the Nazis didn’t have much to do with the Weimar hyperinflation. The Great Depression was by far a greater factor—unemployment is much, much worse than inflation. (By the way, it’s weird that you can put that graph back to the 1980s. It, uh, wasn’t the Euro then. Euros didn’t start circulating until 1999. Is that an aggregate of the franc and the deutsche mark and whatever else? The Euro itself has never had double-digit inflation—ever.)

But it’s always a little surreal for me to see how panicked people in the US and UK get when our inflation rises a couple of percentage points. There seems to be an entire subgenre of economics news that basically consists of rich people saying the sky is falling because inflation has risen—or will, or may rise—by two points. (Hey, anybody got any ideas how we can get them to panic like this over rises in sea level or aggregate temperature?)

Compare this to some other countries that have real inflation: In Brazil, 10% inflation is a pretty typical year. In Argentina, 10% is a really good year—they’re currently pushing 60%. Kenya’s inflation is pretty well under control now, but it went over 30% during the crisis in 2008. Botswana was doing a nice job of bringing down their inflation until the COVID pandemic threw them out of whack, and now they’re hitting double-digits too. And of course there’s always Zimbabwe, which seemed to look at Weimar Germany and think, “We can beat that.” (80,000,000,000% in one month!? Any time you find yourself talking about billions of percent, something has gone terribly, terribly wrong.)

Hyperinflation is a real problem—it isn’t what put Hitler into power, but it has led to real crises in Germany, Zimbabwe, and elsewhere. Once you start getting over 100% per year, and especially when it starts rapidly accelerating, that’s a genuine crisis. Moreover, even though they clearly don’t constitute hyperinflation, I can see why people might legitimately worry about price increases of 20% or 30% per year. (Let alone 60% like Argentina is dealing with right now.) But why is going from 2% to 6% any cause for alarm? Yet alarmed we seem to be.

I can even understand why rich people would be upset about inflation (though the magnitude of their concern does still seem disproportionate). Inflation erodes the value of financial assets, because most bonds, options, etc. are denominated in nominal, not inflation-adjusted terms. (Though there are such things as inflation-indexed bonds.) So high inflation can in fact make rich people slightly less rich.

But why in the world are so many poor people upset about inflation?

Inflation doesn’t just erode the value of financial assets; it also erodes the value of financial debts. And most poor people have more debts than they have assets—indeed, it’s not uncommon for poor people to have substantial debt and no financial assets to speak of (what little wealth they have being non-financial, e.g. a car or a home). Thus, their net wealth position improves as prices rise.

The interest rate response can compensate for this to some extent, but most people’s debts are fixed-rate. Moreover, if it’s the higher interest rates you’re worried about, you should want the Federal Reserve and the Bank of England not to fight inflation too hard, because the way they fight it is chiefly by raising interest rates.

In surveys, almost everyone thinks that inflation is very bad: 92% think that controlling inflation should be a high priority, and 90% think that if inflation gets too high, something very bad will happen. This is greater agreement among Americans than is found for statements like “I like apple pie” or “kittens are nice”, and comparable to “fair elections are important”!

I admit, I question the survey design here: I would answer ‘yes’ to both questions if we’re talking about a theoretical 10,000% hyperinflation, but ‘no’ if we’re talking about a realistic 10% inflation. So I would like to see, but could not find, a survey asking people what level of inflation is sufficient cause for concern. But since most of these people seemed concerned about actual, realistic inflation (85% reported anger at seeing actual higher prices), it still suggests a lot of strong feelings that even mild inflation is bad.

So it does seem to be the case that a lot of poor and middle-class people really strongly dislike inflation even in the actual, mild levels in which it occurs in the US and UK.

The main fear seems to be that inflation will erode people’s purchasing power—that as the price of gasoline and groceries rise, people won’t be able to eat as well or drive as much. And that, indeed, would be a real loss of utility worth worrying about.

But in fact this makes very little sense: Most forms of income—particularly labor income, which is the only real income for some 80%-90% of the population—actually increase with inflation, more or less one-to-one. Yes, there’s some delay—you won’t get your annual cost-of-living raise immediately, but several months down the road. But this could have at most a small effect on your real consumption.

To see this, suppose that inflation has risen from 2% to 6%. (Really, you need not suppose; it has.) Now consider your cost-of-living raise, which nearly everyone gets. It will presumably rise the same way: So if it was 3% before, it will now be 7%. Now consider how much your purchasing power is affected over the course of the year.

For concreteness, let’s say your initial income was $3,000 per month at the start of the year (a fairly typical amount for a middle-class American, indeed almost exactly the median personal income). Let’s compare the case of no inflation with a 1% raise, 2% inflation with a 3% raise, and 5% inflation with a 6% raise.

If there was no inflation, your real income would remain simply $3,000 per month, until the end of the year when it would become $3,030 per month. That’s the baseline to compare against.

If inflation is 2%, your real income would gradually fall, by about 0.16% per month, before being bumped up 3% at the end of the year. So in January you’d have $3,000, in February $2,995, in March $2,990. Come December, your real income has fallen to $2,941. But then next January it will immediately be bumped up 3% to $3,029, almost the same as it would have been with no inflation at all. The total lost income over the entire year is about $380, or about 1% of your total income.

If inflation instead rises to 6%, your real income will fall by 0.49% per month, reaching a minimum of $2,830 in December before being bumped back up to $3,028 next January. Your total loss for the whole year will be about $1110, or about 3% of your total income.

Indeed, it’s a pretty good heuristic to say that for an inflation rate of x% with annual cost-of-living raises, your loss of real income relative to having no inflation at all is about (x/2)%. (This breaks down for really high levels of inflation, at which point it becomes a wild over-estimate, since even 200% inflation doesn’t make your real income go to zero.)
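
Here is the same accounting in a few lines of Python. (This is a rough sketch; exactly which month the erosion starts counting from is a convention I’ve assumed, which is why my totals differ from the figures above by a few dollars.)

def real_income_loss(monthly_income=3000, inflation=0.06, months=12):
    # Real income lost over the year, relative to zero inflation, when the
    # nominal paycheck stays fixed until the annual raise. Each month's pay
    # is deflated by that many months of inflation (a convention I assume).
    monthly_factor = (1 + inflation) ** (1 / 12)
    return sum(monthly_income * (1 - monthly_factor ** -m)
               for m in range(1, months + 1))

print(round(real_income_loss(inflation=0.02)))  # ~384, about 1% of $36,000
print(round(real_income_loss(inflation=0.06)))  # ~1114, about 3% of $36,000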

This isn’t nothing, of course. You’d feel it. Going from 2% to 6% inflation at an income of $3000 per month is like losing $700 over the course of a year, which could be a month of groceries for a family of four. (Not that anyone can really raise a family of four on a single middle-class income these days. When did The Simpsons begin to seem aspirational?)

But this isn’t the whole story. Suppose that this same family of four had a mortgage payment of $1000 per month; that is also decreasing in real value by the same proportion. And let’s assume it’s a fixed-rate mortgage, as most are, so we don’t have to factor in any changes in interest rates.

With no inflation, their mortgage payment remains $1000. It’s 33.3% of their income this year, and it will be 33.0% of their income next year after they get that 1% raise.

With 2% inflation, their mortgage payment will also fall by 0.16% per month; $998 in February, $996 in March, and so on, down to $980 in December. This amounts to an increase in real income of about $130—taking away a third of the loss that was introduced by the inflation.

With 6% inflation, their mortgage payment will also fall by 0.49% per month; $995 in February, $990 in March, and so on, until it’s only $943 in December. This amounts to an increase in real income of over $370—again taking away a third of the loss.

Indeed, it’s no coincidence that it’s one third; the proportion of lost real income you’ll get back through cheaper mortgage payments is precisely the proportion of your income that was spent on the mortgage at the start—so if, like too many Americans, you are paying more than a third of your income on your mortgage, your real loss of income from inflation will be even lower.
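To see why, note that the same erosion formula applies to any fixed nominal amount, so the fraction of the wage loss you recover is just the payment divided by the income. Here is another short, illustrative sketch along the same lines (again assuming the $3,000 income and $1,000 payment from the example):

```python
# The same erosion applies to any fixed nominal amount, so the fraction of
# the wage loss recovered through a fixed $1,000 payment is just 1000/3000.
def eroded_value(amount, inflation, months=12):
    """Total real value lost over a year from a fixed nominal monthly amount."""
    monthly = (1 + inflation) ** (1 / 12)
    return sum(amount * (1 - monthly ** -(m + 1)) for m in range(months))

income, payment = 3000.0, 1000.0
wage_loss = eroded_value(income, 0.06)     # roughly $1,110 of real income lost
payment_gain = eroded_value(payment, 0.06) # roughly $370 of real payments saved
print(payment_gain / wage_loss)            # 0.333..., the payment's share of income
```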

And what if they are renting instead? They’re probably on an annual lease, so that payment won’t increase in nominal terms either—and hence will decrease in real terms, in just the same way as a mortgage payment. Likewise car payments, credit card payments, any debt that has a fixed interest rate. If they’re still paying back student loans, their financial situation is almost certainly improved by inflation.

This means that the real loss from an increase of inflation from 2% to 6% is something like 1.5% of total income, or about $500 for a typical American adult. That’s clearly not nearly as bad as a similar increase in unemployment, which would translate one-to-one into lost income on average; moreover, this loss would be concentrated among people who lost their jobs, so it’s actually worse than that once you account for risk aversion. It’s clearly better to lose 1% of your income than to have a 1% chance of losing nearly all your income—and inflation is the former while unemployment is the latter.

Indeed, the only reason you lost purchasing power at all was that your cost-of-living increases didn’t occur often enough. If instead you had a labor contract that instituted cost-of-living raises every month, or even every paycheck, instead of every year, you would get all the benefits of a cheaper mortgage and virtually none of the costs of a weaker paycheck. Convince your employer to make this adjustment, and you will actually benefit from higher inflation.

So if poor and middle-class people are upset about eroding purchasing power, they should be mad at their employers for not implementing more frequent cost-of-living adjustments; the inflation itself really isn’t the problem.

If I had a trillion dollars…

May 29 JDN 2459729

(To the tune of “If I had a million dollars” by Barenaked Ladies; by the way, he does now)

[Inspired by the book How to Spend a Trillion Dollars]

If I had a trillion dollars… if I had a trillion dollars!

I’d buy everyone a house—and yes, I mean, every homeless American.

[500,000 homeless households * $300,000 median home price = $150 billion]

If I had a trillion dollars… if I had a trillion dollars!

I’d give to the extreme poor—and then there would be no extreme poor!

[Global poverty gap: $160 billion]

If I had a trillion dollars… if I had a trillion dollars!

I’d send people to Mars—hey, maybe we’d find some alien life!

[Estimated cost of manned Mars mission: $100 billion]

If I had a trillion dollars… if I had a trillion dollars!

I’d build us a Moon base—haven’t you always wanted a Moon base?

[Estimated cost of a permanent Lunar base: $35 billion. NASA is bad at forecasting cost, so let’s allow cost overruns to take us to $100 billion.]

If I had a trillion dollars… if I had a trillion dollars!

I’d build a new particle accelerator—let’s finally figure out dark matter!

[Cost of planned new accelerator at CERN: $24 billion. Let’s do 4 times bigger and make it $100 billion.]

If I had a trillion dollars… if I had a trillion dollars!

I’d save the Amazon—pay all the ranchers to do something else!

[Brazil, where 90% of Amazon cattle ranching is, produces about 10 million tons of beef per year, which at an average price of $5000 per ton is $50 billion. So I could pay all the ranchers two years of revenue to protect the Amazon instead of destroying it, for $100 billion.]

If I had a trillion dollars…

We wouldn’t have to drive anymore!

If I had a trillion dollars…

We’d build high-speed rail—it won’t cost more!

[Cost of proposed high-speed rail system: $240 billion]

If I had a trillion dollars… if I had a trillion dollars!

Hey wait, I could get it from a carbon tax!

[Even a moderate carbon tax could raise $1 trillion in 10 years.]

If I had a trillion dollars… I’d save the world….

All of the above really could be done for under $1 trillion. (Some of them would need to be repeated, so we could call it $1 trillion per year.)
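If you want to tally it up, here’s a quick back-of-the-envelope sum of the bracketed estimates above, using the rounded figures as given:

```python
# Back-of-the-envelope tally of the bracketed estimates above (in billions).
items = {
    "houses for homeless households": 150,
    "global extreme poverty gap": 160,
    "crewed Mars mission": 100,
    "permanent Moon base (with overruns)": 100,
    "new particle accelerator (4x the CERN plan)": 100,
    "paying Amazon ranchers for two years": 100,
    "high-speed rail system": 240,
}
total = sum(items.values())
print(f"Total: ${total} billion")  # $950 billion, under the $1 trillion mark
```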

I, of course, do not, and will almost certainly never have, anything approaching $1 trillion.

But here’s the thing: There are people who do.

Elon Musk and Jeff Bezos together have a staggering $350 billion. That’s two people with enough money to end world hunger. And don’t give me that old excuse that it’s not in cash: UNICEF gladly accepts donations in stock. They could, right now, give their stocks to UNICEF and thereby end world hunger. They are choosing not to do that. In fact, the goodwill generated by giving, say, half their stocks to UNICEF might actually result in enough people buying into their companies that their stock prices would rise enough to make up the difference—thus costing them literally nothing.

The total net wealth of all the world’s billionaires is a mind-boggling $12.7 trillion. That’s more than half a year of US GDP. Held by just over 2600 people—a small town.

The US government spends $4 trillion in a normal year—and $5 trillion the last couple of years due to the pandemic. Nearly $1 trillion of that is military spending, which could be cut in half and still be the highest in the world. After seeing how pathetic Russia’s army actually is in battle (they paint Zs on their tanks because apparently their IFF system is useless!), are we really still scared of them? Do we really need eleven carrier battle groups?

Yes, the total cost of mitigating climate change is probably in the tens of trillions—but the cost of not mitigating climate change could be over $100 trillion. And it’s not as if the world can’t come up with tens of trillions; we already do. World GDP is now over $100 trillion per year; just 2% of that for 10 years is $20 trillion.

Do these sound like good ideas to you? Would you want to do them? I think most people would want most of them. So now the question becomes: Why aren’t we doing them?

Will we ever have the space opera future?

May 22 JDN 2459722

Space opera has long been a staple of science fiction. Like many natural categories, it’s not that easy to define; it has something to do with interstellar travel, a variety of alien species, grand events, and a big, complicated world that stretches far beyond any particular story we might tell about it.

Star Trek is the paradigmatic example, and Star Wars also largely fits, but there are numerous other examples, including most of my favorite science fiction worlds: Dune, the Culture, Mass Effect, Revelation Space, the Liaden, Farscape, Babylon 5, the Zones of Thought.

I think space opera is really the sort of science fiction I most enjoy. Even when it is dark, there is still something aspirational about it. Even a corrupt feudal transplanetary empire or a terrible interstellar war still means a universe where people get to travel the stars.

How likely is it that we—and I mean ‘we’ in the broad sense, humanity and its descendants—will actually get the chance to live in such a universe?

First, let’s consider the most traditional kind of space opera, the Star Trek world, where FTL is commonplace and humans interact as equals with a wide variety of alien species that are different enough to be interesting, but similar enough to be relatable.

This, sad to say, is extremely unlikely. FTL is probably impossible, or if not literally impossible then utterly infeasible by any foreseeable technology. Yes, the Alcubierre drive works in theory… all you need is tons of something that has negative mass.

And while, by sheer probability, there almost have to be other sapient lifeforms somewhere out there in this vast universe, our failure to contact or even find clear evidence of any of them for such a long period suggests that they are either short-lived or few and far between. Moreover, any who do exist are likely to be radically different from us and difficult to interact with at all, much less relate to on a personal level. Maybe they don’t have eyes or ears; maybe they live only in liquid hydrogen or molten lead; maybe they communicate entirely by pheromones that are toxic to us.

Does this mean that the aspirations of space opera are ultimately illusory? Is it just a pure fantasy that will forever be beyond us? Not necessarily.

I can see two other ways to create a very space-opera-like world, one of which is definitely feasible, and the other is very likely to be. Let’s start with the one that’s definitely feasible—indeed so feasible we will very likely get to experience it in our own lifetimes.

That is to make it a simulation. An MMO video game, in a way, but something much grander than any MMO that has yet been made. Not just EVE and No Man’s Sky, not just World of Warcraft and Minecraft and Second Life, but also Facebook and Instagram and Zoom and so much more. Oz from Summer Wars; OASIS from Ready Player One. A complete, multifaceted virtual reality in which we can spend most if not all of our lives. One complete with not just sight and sound, but also touch, smell, even taste.

Since it’s a simulation, we can make our own rules. If we want FTL and teleportation, we can have them. (And I would like to note that in fact teleportation is available in EVE, No Man’s Sky, World of Warcraft, Minecraft, and even Second Life. It’s easy to implement in a simulation, and it really seems to be something people want to have.) If we want to meet—or even be—people from a hundred different sapient species, some more humanoid than others, we can. Each of us could rule entire planets, command entire starfleets.

And we could do this, if not right now, today, then very, very soon—the VR hardware is finally maturing and the software capability already exists if there is a development team with the will and the skills (and the budget) to do it. We almost certainly will do this—in fact, we’ll do it hundreds or thousands of different ways. You need not be content with any particular space opera world, when you can choose from a cornucopia of them; and fantasy worlds too, and plenty of other kinds of worlds besides.

Yet, I admit, there is something missing from that future. While such a virtual-reality simulation might reach the point where it would be fair to say it’s no longer simply a “video game”, it still won’t be real. We won’t actually be Vulcans or Delvians or Gek or Asari. We will merely pretend to be. When we take off the VR suit at the end of the day, we will still be humans, and still be stuck here on Earth. And even if most of the toil of maintaining this society and economy can be automated, there will still be some time we have to spend living ordinary lives in ordinary human bodies.

So, is there some chance that we might really live in a space-opera future? Where we will meet actual, flesh-and-blood people who have blue skin, antennae, or six limbs? Where we will actually, physically move between planets, feeling the different gravity beneath our feet and looking up at the alien sky?

Yes. There is a way this could happen. Not now, not for a while yet. We ourselves probably won’t live to see it. But if humanity manages to continue thriving for a few more centuries, and technology continues to improve at anything like its current pace, then that day may come.

We won’t have FTL, so we’ll be bounded by the speed of light. But the speed of light is still quite fast. It can get you to Mars in minutes, to Jupiter in hours, and even to Alpha Centauri in a voyage that wouldn’t shock Magellan or Zheng He. Leaving this arm of the Milky Way, let alone traveling to another galaxy, is out of the question (at least if you ever want to come back while anyone you know is still alive—actually as a one-way trip it’s surprisingly feasible thanks to time dilation).

This means that if we manage to invent a truly superior kind of spacecraft engine, one which combines the high thrust of a hydrolox rocket with the high specific impulse of an ion thruster—and that is physically possible, because it’s well within what nuclear rockets ought to be capable of—then we could travel between planets in our solar system, and maybe even to nearby solar systems, in reasonable amounts of time. The world of The Expanse could therefore be in reach (well, the early seasons anyway), where human colonies have settled on Mars and Ceres and Ganymede and formed their own new societies with their own unique cultures.

We may yet run into some kind of extraterrestrial life—bacteria probably, insects maybe, jellyfish if we’re lucky—but we probably won’t ever actually encounter any alien sapients. If there are any, they are probably too primitive to interact with us, or they died out millennia ago, or they’re simply too far away to reach.

But if we cannot find Vulcans and Delvians and Asari, then we can become them. We can modify ourselves with cybernetics, biotechnology, or even nanotechnology, until we remake ourselves into whatever sort of beings we want to be. We may never find a whole interplanetary empire ruled by a race of sapient felinoids, but if furry conventions are any indication, there are plenty of people who would make themselves into sapient felinoids if given the opportunity.

Such a universe would actually be more diverse than a typical space opera. There would be no “planets of hats”, no entire societies of people acting—or perhaps even looking—the same. The hybridization of different species is almost by definition impossible, but when the ‘species’ are cosmetic body mods, we can combine them however we like. A Klingon and a human could have a child—and for that matter the child could grow up and decide to be a Turian.

Honestly there are only two reasons I’m not certain we’ll go this route:

One, we’re still far too able and willing to kill each other, so who knows if we’ll even make it that long. There’s also still plenty of room for some sort of ecological catastrophe to wipe us out.

And two, most people are remarkably boring. We already live in a world where you could, in principle, go to work every day wearing a cape, a fursuit, a pirate outfit, or a Starfleet uniform, and yet in practice people won’t let you. There’s nothing infeasible about me delivering a lecture dressed as a Kzin Starfleet science officer, nor would it particularly impair my ability to deliver the lecture well; and yet I’m quite certain it would be greatly frowned upon if I were to do so, and could even jeopardize my career (especially since I don’t have tenure).

Would it be distracting to the students if I were to do something like that? Probably, at least at first. But once they got used to it, it might actually make them feel at ease. If it were a social norm that lecturers—and students—can dress however they like (perhaps limited by local decency regulations, though those, too, often seem overly strict), students might show up to class in bunny pajamas or pirate outfits or full-body fursuits, but would that really be a bad thing? It could in fact be a good thing, if it helps them express their own identity and makes them more comfortable in their own skin.

But no, we live in a world where the mainstream view is that every man should wear exactly the same thing at every formal occasion. I felt awkward at the AEA conference because my shirt had color.

This means that there is really one major obstacle to building the space opera future: Social norms. If we don’t get to live in this world one day, it will be because the world is ruled by the sort of person who thinks that everyone should be the same.

Scalability and inequality

May 15 JDN 2459715

Why are some molecules (e.g. DNA) billions of times larger than others (e.g. H2O), while all atoms fall within a much narrower range of sizes (a factor of only a few hundred)?

Why are some animals (e.g. elephants) millions of times as heavy as others (e.g. mice), but their cells are basically the same size?

Why does capital income vary so much more (factors of thousands or millions) than wages (factors of tens or hundreds)?

These three questions turn out to have much the same answer: Scalability.

Atoms are not very scalable: Adding another proton to a nucleus causes interactions with all the other protons, which makes the whole atom unstable after a hundred protons or so. But molecules, particularly organic polymers such as DNA, are tremendously scalable: You can add another piece to one end without affecting anything else in the molecule, and keep on doing that more or less forever.

Cells are not very scalable: Even with the aid of active transport mechanisms and complex cellular machinery, a cell’s functionality is still very much limited by its surface area. But animals are tremendously scalable: The same exponential growth that got you from a zygote to a mouse only needs to continue a couple years longer and it’ll get you all the way to an elephant. (A baby elephant, anyway; an adult will require a dozen or so years—remarkably comparable to humans, in fact.)

Labor income is not very scalable: There are only so many hours in a day, and the more hours you work the less productive you’ll be in each additional hour. But capital income is perfectly scalable: We can add another digit to that brokerage account with nothing more than a few milliseconds of electronic pulses, and keep doing that basically forever (due to the way integer storage works, above 2^63 it would require special coding, but it can be done; and seeing as that’s over 9 quintillion, it’s not likely to be a problem any time soon—though I am vaguely tempted to write a short story about an interplanetary corporation that gets thrown into turmoil by an integer overflow error).

This isn’t just an effect of our accounting either. Capital is scalable in a way that labor is not. When your contribution to production is owning a factory, there’s really nothing to stop you from owning another factory, and then another, and another. But when your contribution is working at a factory, you can only work so hard for so many hours.

When a phenomenon is highly scalable, it can take on a wide range of outcomes—as we see in molecules, animals, and capital income. When it’s not, it will only take on a narrow range of outcomes—as we see in atoms, cells, and labor income.

Exponential growth is also part of the story here: Animals certainly grow exponentially, and so can capital when invested; even some polymers function that way (e.g. under polymerase chain reaction). But I think the scalability is actually more important: Growing rapidly isn’t so useful if you’re going to immediately be blocked by a scalability constraint. (This actually relates to the difference between r- and K- evolutionary strategies, and offers further insight into the differences between mice and elephants.) Conversely, even if you grow slowly, given enough time, you’ll reach whatever constraint you’re up against.

Indeed, we can even say something about the probability distribution we are likely to get from random processes that are scalable or non-scalable.

A non-scalable random process will generally converge toward the familiar normal distribution, a “bell curve”:

[Image from Wikipedia: By Inductiveload – self-made, Mathematica, Inkscape, Public Domain, https://commons.wikimedia.org/w/index.php?curid=3817954]

The normal distribution has most of its weight near the middle; most of the population ends up near there. This is clearly the case for labor income: Most people are middle class, while some are poor and a few are rich.

But a scalable random process will typically converge toward quite a different distribution, a Pareto distribution:

[Image from Wikipedia: By Danvildanvil – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=31096324]

A Pareto distribution has most of its weight near zero, but covers an extremely wide range. Indeed, it is what we call fat-tailed, meaning that really extreme events occur often enough to have a meaningful effect on the average. A Pareto distribution has most of the people at the bottom, but the ones at the top are really on top.
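To make the contrast concrete, here’s a small toy simulation in Python (my own illustration, with made-up parameters): a process that adds a bounded amount each step stays tightly clustered like a bell curve, while one that compounds by a random growth factor spreads out into a fat-tailed distribution.

```python
# Toy simulation: a non-scalable (additive) process versus a scalable
# (multiplicative) process. The first clusters like a normal distribution;
# the second spreads out into a fat-tailed, Pareto-like distribution.
import random
import statistics

random.seed(0)

def additive(steps=100):
    # Non-scalable: each step contributes a small, bounded increment.
    return sum(random.uniform(0.0, 1.0) for _ in range(steps))

def multiplicative(steps=100):
    # Scalable: each step compounds on everything accumulated so far.
    x = 1.0
    for _ in range(steps):
        x *= random.uniform(0.9, 1.12)
    return x

add_sample = [additive() for _ in range(10_000)]
mul_sample = [multiplicative() for _ in range(10_000)]

def spread(sample):
    # Ratio of the 99th percentile to the median: close to 1 for a
    # bell curve, much larger for a fat-tailed distribution.
    return statistics.quantiles(sample, n=100)[98] / statistics.median(sample)

print(f"additive spread:       {spread(add_sample):.2f}")
print(f"multiplicative spread: {spread(mul_sample):.2f}")
```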

And indeed, that’s exactly how capital income works: Most people have little or no capital income (indeed only about half of Americans and only a third(!) of Brits own any stocks at all), while a handful of hectobillionaires make utterly ludicrous amounts of money literally in their sleep.

Indeed, it turns out that income in general is pretty close to normally (or maybe lognormally) distributed for most of the income range, and then becomes very much Pareto at the top—where nearly all the income is capital income.

This fundamental difference in scalability between capital and labor underlies much of what makes income inequality so difficult to fight. Capital is scalable, and begets more capital. Labor is non-scalable, and we only have so much to give.

It would require a radically different system of capital ownership to really eliminate this gap—and, well, that’s been tried, and so far, it hasn’t worked out so well. Our best option is probably to let people continue to own whatever amounts of capital, and then tax the proceeds in order to redistribute the resulting income. That certainly has its own downsides, but they seem to be a lot more manageable than either unfettered anarcho-capitalism or totalitarian communism.