Good enough is perfect, perfect is bad

Jan 8 JDN 2459953

Not too long ago, I read the book How to Keep House While Drowning by KC Davis, which I highly recommend. It offers a great deal of useful and practical advice, especially for someone neurodivergent and depressed living through an interminable pandemic (which I am, but honestly, odds are, you may be too). And to say it is a quick and easy read is actually an unfair understatement; it is explicitly designed to be readable in short bursts by people with ADHD, and it has a level of accessibility that most other books don’t even aspire to and I honestly hadn’t realized was possible. (The extreme contrast between this and academic papers is particularly apparent to me.)

One piece of advice that really stuck with me was this: Good enough is perfect.

At first, it sounded like nonsense; no, perfect is perfect, good enough is just good enough. But in fact there is a deep sense in which it is absolutely true.

Indeed, let me make it a bit stronger: Good enough is perfect; perfect is bad.

I doubt Davis thought of it in these terms, but this is a concise, elegant statement of the principles of bounded rationality. Sometimes it can be optimal not to optimize.

Suppose that you are trying to optimize something, but you have limited computational resources in which to do so. This is actually not a lot for you to suppose—it’s literally true of basically everyone basically every moment of every day.

But let’s make it a bit more concrete, and say that you need to find the solution to the following math problem: “What is the product of 2419 and 1137?” (Pretend you don’t have a calculator, as it would trivialize the exercise. I thought about using a problem you couldn’t do with a standard calculator, but I realized that would also make it much weirder and more obscure for my readers.)

Now, suppose that there are some quick, simple ways to get reasonably close to the correct answer, and some slow, difficult ways to actually get the answer precisely.

In this particular problem, the former is to approximate: What’s 2500 times 1000? 2,500,000. So it’s probably about 2,500,000.

Or we could approximate a bit more closely: Say 2400 times 1100. That’s 24 times 11, times 100 times 100. Now 24 times 11 is 2 times 12 times 11, and 12 times 11 is 110 plus 22, which is 132; so 24 times 11 is 264. Times 10,000, that comes to 2,640,000.

Or, we could actually go through all the steps to do the full multiplication (remember I’m assuming you have no calculator), multiply, carry the 1s, add all four sums, re-check everything and probably fix it because you messed up somewhere; and then eventually you will get: 2,750,403.

So, our really fast method was only off by about 10%. Our moderately-fast method was only off by 4%. And both of them were a lot faster than getting the exact answer by hand.
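If you want to see that comparison spelled out, here is a minimal sketch in Python; it uses nothing beyond the numbers above.

```python
# Compare the two quick approximations of 2419 * 1137 against the exact product.
exact = 2419 * 1137          # 2,750,403

estimates = {
    "round both a lot":    2500 * 1000,   # 2,500,000
    "round both a little": 2400 * 1100,   # 2,640,000
}

for label, estimate in estimates.items():
    error = abs(exact - estimate) / exact
    print(f"{label}: {estimate:,} (off by {error:.1%})")

# round both a lot:    2,500,000 (off by 9.1%)
# round both a little: 2,640,000 (off by 4.0%)
```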

Which of these methods you’d actually want to use depends on the context and the tools at hand. If you had a calculator, sure, get the exact answer. Even if you didn’t, but you were balancing the budget for a corporation, I’m pretty sure they’d care about that extra $110,403. (Then again, they might not care about the $403 or at least the $3.) But just as an intellectual exercise, you really didn’t need to do anything; the optimal choice may have been to take my word for it. Or, if you were at all curious, you might be better off choosing the quick approximation rather than the precise answer. Since nothing of any real significance hinged on getting that answer, it may be simply a waste of your time to bother finding it.

This is of course a contrived example. But it’s not so far from many choices we make in real life.

Yes, if you are making a big choice—which job to take, what city to move to, whether to get married, which car or house to buy—you should get a precise answer. In fact, I make spreadsheets with formal utility calculations whenever I make a big choice, and I haven’t regretted it yet. (Did I really make a spreadsheet for getting married? You’re damn right I did; there were a lot of big financial decisions to make there—taxes, insurance, the wedding itself! I didn’t decide whom to marry that way, of course; but we always had the option of staying unmarried.)

But most of the choices we make from day to day are small choices: What should I have for lunch today? Should I vacuum the carpet now? What time should I go to bed? In the aggregate they may all add up to important things—but each one of them really won’t matter that much. If you were to construct a formal model to optimize your decision of everything to do each day, you’d spend your whole day doing nothing but constructing formal models. Perfect is bad.

In fact, even for big decisions, you can’t really get a perfect answer. There are just too many unknowns. Sometimes you can spend more effort gathering additional information—but that’s costly too, and sometimes the information you would most want simply isn’t available. (You can look up the weather in a city, visit it, ask people about it—but you can’t really know what it’s like to live there until you do.) Even those spreadsheet models I use to make big decisions contain error bars and robustness checks, and if, even after investing a lot of effort trying to get precise results, I still find two or more choices just can’t be clearly distinguished to within a good margin of error, I go with my gut. And that seems to have been the best choice for me to make. Good enough is perfect.

I think that being gifted as a child trained me to be dangerously perfectionist as an adult. (Many of you may find this familiar.) When it came to solving math problems, or answering quizzes, perfection really was an attainable goal a lot of the time.

As I got older and progressed further in my education, maybe getting every answer right was no longer feasible; but I still could get the best possible grade, and did, in most of my undergraduate classes and all of my graduate classes. To be clear, I’m not trying to brag here; if anything, I’m a little embarrassed. What it mainly shows is that I had learned the wrong priorities. In fact, one of the main reasons why I didn’t get a 4.0 average in undergrad is that I spent a lot more time back then writing novels and nonfiction books, which to this day I still consider my most important accomplishments and grieve that I’ve not (yet?) been able to get them commercially published. I did my best work when I wasn’t trying to be perfect. Good enough is perfect; perfect is bad.

Now here I am on the other side of the academic system, trying to carve out a career, and suddenly, there is no perfection. When my exams were being graded by someone else, there was a way to get the most points. Now that I’m the one grading the exams, there is no “correct answer” anymore. There is no one scoring me to see if I did the grading the “right way”—and so, no way to be sure I did it right.

Actually, here at Edinburgh, there are other instructors who moderate grades and often require me to revise them, which feels a bit like “getting it wrong”; but it’s really more like we had different ideas of what the grade curve should look like (not to mention US versus UK grading norms). There is no longer an objectively correct answer the way there is for, say, the derivative of x^3, the capital of France, or the definition of comparative advantage. (Or, one question I got wrong on an undergrad exam because I had zoned out of that lecture to write a book on my laptop: Whether cocaine is a dopamine reuptake inhibitor. It is. And the fact that I still remember that because I got it wrong over a decade ago tells you a lot about me.)

And then when it comes to research, it’s even worse: What even constitutes “good” research, let alone “perfect” research? What would be most scientifically rigorous isn’t what journals would be most likely to publish—and without much bigger grants, I can afford neither. I find myself longing for the research paper that will be so spectacular that top journals have to publish it, removing all risk of rejection and failure—in other words, perfect.

Yet such a paper plainly does not exist. Even if I were to do something that would win me a Nobel or a Fields Medal (this is, shall we say, unlikely), it probably wouldn’t be recognized as such immediately—a typical Nobel isn’t awarded until 20 or 30 years after the work that spawned it, and while Fields Medals are faster, they’re by no means instant or guaranteed. In fact, a lot of ground-breaking, paradigm-shifting research was originally relegated to minor journals because the top journals considered it too radical to publish.

Or I could try to do something trendy—feed into DSGE or GTFO—and try to get published that way. But I know my heart wouldn’t be in it, and so I’d be miserable the whole time. In fact, because it is neither my passion nor my expertise, I probably wouldn’t even do as good a job as someone who really buys into the core assumptions. I already have trouble speaking frequentist sometimes: Are we allowed to say “almost significant” for p = 0.06? Maximizing the likelihood is still kosher, right? Just so long as I don’t impose a prior? But speaking DSGE fluently and sincerely? I’d have an easier time speaking in Latin.

What I know—on some level at least—I ought to be doing is finding the research that I think is most worthwhile, given the resources I have available, and then getting it published wherever I can. Or, in fact, I should probably constrain a little by what I know about journals: I should do the most worthwhile research that is feasible for me and has a serious chance of getting published in a peer-reviewed journal. It’s sad that those two things aren’t the same, but they clearly aren’t. This constraint binds, and its Lagrange multiplier is measured in humanity’s future.

But one thing is very clear: By trying to find the perfect paper, I have floundered and, for the last year and a half, not written any papers at all. The right choice would surely have been to write something.

Because good enough is perfect, and perfect is bad.

Inequality-adjusted GDP and median income

Dec 11 JDN 2459925

There are many problems with GDP as a measure of a nation’s prosperity. For one, GDP ignores natural resources and ecological degradation; so a tree is only counted in GDP once it is cut down. For another, it doesn’t value unpaid work, so caring for a child only increases GDP if you are a paid nanny rather than the child’s parents.

But one of the most obvious problems is the use of an average to evaluate overall prosperity, without considering the level of inequality.

Consider two countries. In Alphania, everyone has an income of about $50,000. In Betavia, 99% of people have an income of $1,000 and 1% have an income of $10 million. What is the per-capita GDP of each country? Alphania’s is $50,000 of course; but Betavia’s is $100,990. Does it really make sense to say that Betavia is a more prosperous country? Maybe it has more wealth overall, but its huge inequality means that it is really not at a high level of development. It honestly sounds like an awful place to live.

A much more sensible measure would be something like median income: How much does a typical person have? In Alphania this is still $50,000; but in Betavia it is only $1,000.

Yet even this leaves out most of the actual distribution; by definition, the median is determined only by the 50th percentile. We could vary all the other incomes a great deal without changing the median.

A better measure would be some sort of inequality-adjusted per-capita GDP, which rescales GDP based on the level of inequality in a country. But we would need a good way of making that adjustment.

I contend that the most sensible way would be to adopt some kind of model of marginal utility of income, and then figure out what income would correspond to the overall average level of utility.

In other words, average over the level of happiness that people in a country get from their income, and then figure out what level of income would correspond to that level of happiness. If we magically gave everyone the same amount of money, how much would they need to get in order for the average happiness in the country to remain the same?

This is clearly going to be less than the average level of income, because marginal utility of income is decreasing; a dollar is not worth as much in real terms to a rich person as it is to a poor person. So if we could somehow redistribute all income evenly while keeping the average the same, that would actually increase overall happiness (though, for many reasons, we can’t simply do that).

For example, suppose that utility of income is logarithmic: U = ln(I).

This means that the marginal utility of an additional dollar is inversely proportional to how many dollars you already have: U'(I) = 1/I.

It also means that a 1% gain or loss in your income feels about the same regardless of how much income you have: ln((1+r)I) = ln(I) + ln(1+r). This seems like a quite reasonable, maybe even a bit conservative, assumption; I suspect that losing 1% of your income actually hurts more when you are poor than when you are rich.

Then the inequality-adjusted GDP Y is the value such that ln(Y) is equal to the overall average level of utility: E[U] = E[ln(I)] = ln(Y), so Y = exp(E[ln(I)]).

This sounds like a very difficult thing to calculate. But fortunately, the distribution of actual income seems to follow a log-normal distribution quite closely. This means that when we take the logarithm of income to get utility, we just get back a very nice, convenient normal distribution!

In fact, it turns out that for a log-normal distribution, the following holds: exp(E[ln(I)]) = median(I)
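(The derivation is short, and uses nothing but standard properties of the log-normal distribution: if ln(I) is normally distributed with mean μ, then E[ln(I)] = μ; and since the logarithm is an increasing function, exactly half of all incomes lie below e^μ, so median(I) = e^μ = exp(E[ln(I)]).)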

The income which corresponds to the average utility turns out to simply be the median income! We went looking for a better measure than median income, and ended up finding out that median income was the right measure all along.
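If you’d like a quick numerical check, here is a minimal sketch in Python; the parameters are arbitrary, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a log-normal income distribution (illustrative parameters only).
mu, sigma = np.log(30_000), 0.8       # median income around $30,000
incomes = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)

mean_income = incomes.mean()                         # the "per-capita GDP"
median_income = np.median(incomes)
utility_equivalent = np.exp(np.log(incomes).mean())  # exp(E[ln(I)])

print(f"mean:         {mean_income:,.0f}")   # noticeably higher than the median
print(f"median:       {median_income:,.0f}")
print(f"exp(E[ln I]): {utility_equivalent:,.0f}")  # essentially equal to the median
```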

This wouldn’t hold for most other distributions; and since real-world economies don’t perfectly follow a log-normal distribution, a more precise estimate would need to be adjusted accordingly. But the approximation is quite good for most countries we have good data on, so even for the ones we don’t, median income is likely a very good estimate.

The ranking of countries by median income isn’t radically different from the ranking by per-capita GDP; rich countries are still rich and poor countries are still poor. But it is different enough to matter.

Luxembourg is in 1st place on both lists. Scandinavian countries and the US are in the top 10 in both cases. So it’s fair to say that #ScandinaviaIsBetter for real, and the US really is so rich that our higher inequality doesn’t make our median income lower than the rest of the First World.

But some countries are quite different. Ireland looks quite good in per-capita GDP, but quite bad in median income. This is because a lot of the GDP in Ireland is actually profits by corporations that are only nominally headquartered in Ireland and don’t actually employ very many people there.

The comparison between the US, the UK, and Canada seems particularly instructive. If you look at per-capita GDP PPP, the US looks much richer at $75,000 compared to Canada’s $57,800 (a difference of 29% or 26 log points). But if you look at median personal income, they are nearly equal: $19,300 in the US and $18,600 in Canada (3.7% or 3.7 log points).

On the other hand, in per-capita GDP PPP, the UK looks close to Canada at $55,800 (3.6% or 3.6 lp); but in median income it is dramatically worse, at only $14,800 (26% or 23 lp). So Canada and the UK have similar overall levels of wealth, but life for a typical Canadian is much better than life for a typical Briton because of the higher inequality in Britain. And the US has more wealth than Canada, but it doesn’t meaningfully improve the lifestyle of a typical American relative to a typical Canadian.
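(A difference in “log points” is simply 100 times the natural log of the ratio. If you want to check the figures above, here is a minimal sketch using the numbers quoted in this post:)

```python
import math

def compare(a, b):
    """Percent difference and log-point difference of a relative to b."""
    pct = (a / b - 1) * 100
    log_points = math.log(a / b) * 100
    return round(pct, 1), round(log_points, 1)

# Per-capita GDP (PPP) and median personal income, as quoted above.
print(compare(75_000, 57_800))   # US vs Canada, GDP:    (29.8, 26.1)
print(compare(19_300, 18_600))   # US vs Canada, median: (3.8, 3.7)
print(compare(57_800, 55_800))   # Canada vs UK, GDP:    (3.6, 3.5)
print(compare(18_600, 14_800))   # Canada vs UK, median: (25.7, 22.9)
```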

The Efficient Roulette Hypothesis

Nov 27 JDN 2459911

The efficient market hypothesis is often stated in several different ways, and these are often treated as equivalent. There are at least three very different definitions of it that people seem to use interchangeably:

  1. Market prices are optimal and efficient.
  2. Market prices aggregate and reflect all publicly-available relevant information.
  3. Market prices are difficult or impossible to predict.

The first reading, I will call the efficiency hypothesis, because, well, it is what we would expect a phrase like “efficient market hypothesis” to mean. The ordinary meaning of those words would imply that we are asserting that market prices are in some way optimal or near-optimal, that markets get prices “right” in some sense at least the vast majority of the time.

The second reading I’ll call the information hypothesis; it implies that market prices are an information aggregation mechanism which automatically incorporates all publicly-available information. This already seems quite different from efficiency, but it seems at least tangentially related, since information aggregation could be one useful function that markets serve.

The third reading I will call the unpredictability hypothesis; it says simply that market prices are very difficult to predict, and so you can’t reasonably expect to make money by anticipating market price changes far in advance of everyone else. But as I’ll get to in more detail shortly, that doesn’t have the slightest thing to do with efficiency.

The empirical data in favor of the unpredictability hypothesis is quite overwhelming. It’s exceedingly hard to beat the market, and for most people, most of the time, the smartest way to invest is just to buy a diversified portfolio and let it sit.

The empirical data in favor of the information hypothesis is mixed, but it’s at least plausible; most prices do seem to respond to public announcements of information in ways we would expect, and prediction markets can be surprisingly accurate at forecasting the future.

The empirical data in favor of the efficiency hypothesis, on the other hand, is basically nonexistent. On the one hand this is a difficult hypothesis to test directly, since it isn’t clear what sort of benchmark we should be comparing against—so it risks being not even wrong. But if you consider basically any plausible standard one could try to set for how an efficient market would run, our actual financial markets in no way resemble it. They are erratic, jumping up and down for stupid reasons or no reason at all. They are prone to bubbles, wildly overvaluing worthless assets. They have collapsed governments and ruined millions of lives without cause. They have resulted in the highest-paying people in the world doing jobs that accomplish basically nothing of genuine value. They are, in short, a paradigmatic example of what inefficiency looks like.

Yet, we still have economists who insist that “the efficient market hypothesis” is a proven fact, because the unpredictability hypothesis is clearly correct.

I do not think this is an accident. It’s not a mistake, or an awkwardly-chosen technical term that people are misinterpreting.

This is a motte and bailey doctrine.

Motte-and-bailey was a strategy in medieval warfare. Defending an entire region is very difficult, so instead people often constructed a small, highly defensible fortification—the motte—while accepting that the land surrounding it—the bailey—would not be well-defended. Most of the time, the people stayed in the bailey, where the land was fertile and it was relatively pleasant to live. But should they be attacked, they could retreat to the motte and defend themselves until the danger had passed.

A motte-and-bailey doctrine is an analogous strategy used in argumentation. You use the same words for two different versions of an idea: The motte is a narrow, defensible core of your idea that you can provide strong evidence for, but it isn’t very strong and may not even be interesting or controversial. The bailey is a broad, expansive version of your idea that is interesting and controversial and leads to lots of significant conclusions, but can’t be well-supported by evidence.

The bailey is the efficiency hypothesis: That market prices are optimal and we are fools to try to intervene or even regulate them because the almighty Invisible Hand is superior to us.

The motte is the unpredictability hypothesis: Market prices are very hard to predict, and most people who try to make money by beating the market fail.

By referring to both of these very different ideas as “the efficient market hypothesis”, economists can act as if they are defending the bailey, and prescribe policies that deregulate financial markets on the grounds that they are so optimal and efficient; but then when pressed for evidence to support their beliefs, they can pivot to the motte, and merely show that markets are unpredictable. As long as people don’t catch on and recognize that these are two very different meanings of “the efficient market hypothesis”, then they can use the evidence for unpredictability to support their goal of deregulation.

Yet when you look closely at this argument, it collapses. Unpredictability is not evidence of efficiency; if anything, it’s the opposite. Since the world doesn’t really change on a minute-by-minute basis, an efficient system should actually be relatively predictable in the short term. If prices reflected the real value of companies, they would change only very gradually, as the fortunes of the company change as a result of real-world events. An earthquake or a discovery of a new mine would change stock prices in relevant industries; but most of the time, they’d be basically flat. The occurrence of minute-by-minute or even second-by-second changes in prices basically proves that we are not tracking any genuine changes in value.

Roulette wheels are extremely unpredictable by design—by law, even—and yet no one would accuse them of being an efficient way of allocating resources. If you bet on roulette wheels and try to beat the house, you will almost surely fail, just as you would if you try to beat the stock market—and dare I say, for much the same reasons?

So if we’re going to insist that “efficiency” just means unpredictability, rather than actual, you know, efficiency, then we should all speak of the Efficient Roulette Hypothesis. Anything we can’t predict is now automatically “efficient” and should therefore be left unregulated.

Krugman and rockets and feathers

Jul 17 JDN 2459797

Well, this feels like a milestone: Paul Krugman just wrote a column about a topic I’ve published research on. He didn’t actually cite our paper—in fact the literature review he links to is from 2014—but the topic is very much what we were studying: Asymmetric price transmission, ‘rockets and feathers’. He’s even talking about it from the perspective of industrial organization and market power, which is right in line with our results (and a bit different from the mainstream consensus among economic policy pundits).

The phenomenon is a well-documented one: When the price of an input (say, crude oil) rises, the price of outputs made from that input (say, gasoline) rise immediately, and basically one to one, sometimes even more than one to one. But when the price of an input falls, the price of outputs only falls slowly and gradually, taking a long time to converge to the same level as the input prices. Prices go up like a rocket, but down like a feather.

Many different explanations have been proposed to explain this phenomenon, and they aren’t all mutually exclusive. They include various aspects of market structure, substitution of inputs, and use of inventories to smooth the effects of prices.

One that I find particularly unpersuasive is the notion of menu costs: That it requires costly effort to actually change your prices, and this somehow results in the asymmetry. Most gas stations have digital price boards; it requires almost zero effort for them to change prices whenever they want. Moreover, there’s no clear reason this would result in asymmetry between raising and lowering prices. Some models extend the notion of “menu cost” to include expected customer responses, which is a much better explanation; but I think that’s far beyond the original meaning of the concept. If you fear to change your price because of how customers may respond, finding a cheaper way to print price labels won’t do a thing to change that.

But our paper—and Krugman’s article—is about one factor in particular: market power. We don’t see prices behave this way in highly competitive markets. We see it the most in oligopolies: Markets where there are only a small number of sellers, who thus have some control over how they set their prices.

Krugman explains it as follows:

When oil prices shoot up, owners of gas stations feel empowered not just to pass on the cost but also to raise their markups, because consumers can’t easily tell whether they’re being gouged when prices are going up everywhere. And gas stations may hang on to these extra markups for a while even when oil prices fall.

That’s actually a somewhat different mechanism from the one we found in our experiment, which is that asymmetric price transmission can be driven by tacit collusion. Explicit collusion is illegal: You can’t just call up the other gas stations and say, “Let’s all set the price at $5 per gallon.” But you can tacitly collude by responding to how they set their prices, and not trying to undercut them even when you could get a short-run benefit from doing so. It’s actually very similar to an Iterated Prisoner’s Dilemma: Cooperation is better for everyone, but worse for you as an individual; to get everyone to cooperate, it’s vital to severely punish those who don’t.

In our experiment, the participants were acting as businesses setting their prices. The customers were fully automated, so there was no opportunity to “fool” them in this way. We also excluded any kind of menu costs or product inventories. But we still saw prices go up like rockets and down like feathers. Moreover, prices were always substantially higher than costs, especially during the phase when they were drifting down like feathers.

Our explanation goes something like this: Businesses are trying to use their market power to maintain higher prices and thereby make higher profits, but they have to worry about other businesses undercutting their prices and taking all the business. Moreover, they also have to worry about others thinking that they are trying to undercut prices—they want to be perceived as cooperating, not defecting, in order to preserve the collusion and avoid being punished.

Consider how this affects their decisions when input prices change. If the price of oil goes up, then there’s no reason not to raise the price of gasoline immediately, because that isn’t violating the collusion. If anything, it’s being nice to your fellow colluders; they want prices as high as possible. You’ll want to raise the prices as high and fast as you can get away with, and you know they’ll do the same. But if the price of oil goes down, now gas stations are faced with a dilemma: You could lower prices to get more customers and make more profits, but the other gas stations might consider that a violation of your tacit collusion and could punish you by cutting their prices even more. Your best option is to lower prices very slowly, so that you can take advantage of the change in the input market, but also maintain the collusion with other gas stations. By slowly cutting prices, you can ensure that you are doing it together, and not trying to undercut other businesses.
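To make that adjustment rule concrete, here is a toy sketch in Python. To be clear, this is not our experimental design or the model from the paper; the markup, the adjustment speed, and the cost path are all invented purely for illustration.

```python
# Toy illustration of the asymmetric adjustment described above (not the
# actual experiment or model). A tacitly colluding seller passes cost
# increases through immediately, but unwinds cost decreases only slowly,
# so as not to look like it is undercutting its rivals.

def next_price(price, cost, markup=0.5, decay=0.1):
    target = cost * (1 + markup)               # collusive target price
    if target >= price:
        return target                          # rocket: jump up at once
    return price - decay * (price - target)    # feather: drift down slowly

price = 1.5
for cost in [1.0, 1.2, 1.2, 0.8, 0.8, 0.8, 0.8]:
    price = next_price(price, cost)
    print(f"cost {cost:.2f} -> price {price:.2f}")

# Prices jump to the new collusive target the moment costs rise, but take
# many periods to come back down after costs fall.
```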

Krugman’s explanation and ours are not mutually exclusive; in fact I think both are probably happening. They have one important feature in common, which fits the empirical data: Markets with less competition show greater degrees of asymmetric price transmission. The more concentrated the oligopoly, the more we see rockets and feathers.

They also share an important policy implication: Market power can make inflation worse. Contrary to what a lot of economic policy pundits have been saying, it isn’t ridiculous to think that breaking up monopolies or putting pressure on oligopolies to lower their prices could help reduce inflation. It probably won’t be as reliably effective as the Fed’s buying and selling of bonds to adjust interest rates—but we’re also doing that, and the two are not mutually exclusive. Besides, breaking up monopolies is a generally good thing to do anyway.

It’s not that unusual that I find myself agreeing with Krugman. I think what makes this one feel weird is that I have more expertise on the subject than he does.

Small deviations can have large consequences.

Jun 26 JDN 2459787

A common rejoinder that behavioral economists get from neoclassical economists is that most people are mostly rational most of the time, so what’s the big deal? If humans are 90% rational, why worry so much about the other 10%?

Well, it turns out that small deviations from rationality can have surprisingly large consequences. Let’s consider an example.

Suppose we have a market for some asset. Without even trying to veil my ulterior motive, let’s make that asset Bitcoin. Its fundamental value is of course $0; it’s not backed by anything (not even taxes or a central bank), it has no particular uses that aren’t already better served by existing methods, and it’s not even scalable.

Now, suppose that 99% of the population rationally recognizes that the fundamental value of the asset is indeed $0. But 1% of the population doesn’t; they irrationally believe that the asset is worth $20,000. What will the price of that asset be, in equilibrium?

If you assume that the majority will prevail, it should be $0. If you did some kind of weighted average, you’d think maybe its price will be something positive but relatively small, like $200. But is this actually the price it will take on?

Consider someone who currently owns 1 unit of the asset, and recognizes that it is fundamentally worthless. What should they do? Well, if they also know that there are people out there who believe it is worth $20,000, the answer is obvious: They should sell it to those people. Indeed, they should sell it for something quite close to $20,000 if they can.

Now, suppose they don’t already own the asset, but are considering whether or not to buy it. They know it’s worthless, but they also know that there are people who will buy it for close to $20,000. Here’s the kicker: This is a reason for them to buy it at anything meaningfully less than $20,000.

Suppose, for instance, they could buy it for $10,000. Spending $10,000 to buy something you know is worthless seems like a terribly irrational thing to do. But it isn’t irrational, if you also know that somewhere out there is someone who will pay $20,000 for that same asset and you have a reasonable chance of finding that person and selling it to them.

The equilibrium outcome, then, is that the price of the asset will be almost $20,000! Even though 99% of the population recognizes that this asset is worthless, the fact that 1% of people believe it’s worth as much as a car will result in it selling at that price. Thus, even a slight deviation from a perfectly-rational population can result in a market that is radically at odds with reality.
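Here is a minimal sketch of that logic in Python. The $20,000 figure comes from the example above; the 1% “haircut” a rational trader demands for the trouble of flipping the asset is an invented, illustrative number.

```python
# Toy illustration of the argument above (not a serious market model).
believer_value = 20_000   # what the irrational 1% will pay
haircut = 0.99            # a rational trader pays a bit less than the expected resale price

# A rational trader who knows the asset is worthless will still pay up to
# haircut * (best expected resale price). Since a believer will pay $20,000,
# the expected resale price is at least $20,000, so:
rational_bid = haircut * believer_value
print(f"a fully rational trader will bid up to ${rational_bid:,.0f}")
# => $19,800, even though 99% of the market knows the asset is worth $0.
```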

And it gets worse! Suppose that in fact everyone knows that the asset is worthless, but most people think that there is some small portion of the population who believes the asset has value. Then, it will still be priced at that value in equilibrium, as people trade it back and forth searching in vain for the person who really wants it! (This is called the Greater Fool theory.)

That is, the price of an asset in a free market—even in a market where most people are mostly rational most of the time—will in fact be determined by the highest price anyone believes that anyone else thinks it has. And this is true of essentially any asset market—any market where people are buying something, not to use it, but to sell it to someone else.

Of course, beliefs—and particularly beliefs about beliefs—can very easily change, so that equilibrium price could move in any direction basically without warning.

Suddenly, the cycle of bubble and crash, boom and bust, doesn’t seem so surprising, does it? The wonder is that prices ever become stable at all.


Then again, do they? Last I checked, the only prices that were remotely stable were for goods like apples and cars and televisions, goods that are bought and sold to be consumed. (Or national currencies managed by competent central banks, whose entire job involves doing whatever it takes to keep those prices stable.) For pretty much everything else—and certainly any purely financial asset that isn’t a national currency—prices are indeed precisely as wildly unpredictable and utterly irrational as this model would predict.

So much for the Efficient Market Hypothesis? Sadly I doubt that the people who still believe this nonsense will be convinced.

Maybe we should forgive student debt after all.

May 8 JDN 2459708

President Biden has been promising some form of student debt relief since the start of his campaign, though so far all he has actually implemented is a series of no-interest deferments and some improvements to the existing forgiveness programs. (This is still significant—it has definitely helped a lot of people with cashflow during the pandemic.) Actual forgiveness for a large segment of the population remains elusive, and if it does happen, it’s unclear how extensive it will be in either intensity (amount forgiven) or scope (who is eligible).

I personally had been fine with this; while I have a substantial loan balance myself, I also have a PhD in economics, which—theoretically—should at some point entitle me to sufficient income to repay those loans.

Moreover, until recently I had been one of the few left-wing people I know not to be terribly enthusiastic about loan forgiveness. It struck me as a poor use of those government funds, because $1.75 trillion is an awful lot of money, and college graduates are a relatively privileged population. (And yes, it is valid to consider this a question of “spending”, because the US government is the least liquidity-constrained entity on Earth. In lieu of forgiving $1.75 trillion in debt, they could borrow $1.75 trillion and use it to pay for whatever they want, and their ultimate budget balance would be basically the same in each case.)

But I say all this in the past tense because Krugman’s recent column has caused me to reconsider. He gives two strong reasons why debt forgiveness may actually be a good idea.

The first is that Congress is useless. Thanks to gerrymandering and the 40% or so of our population who keeps electing Republicans no matter how crazy they get, it’s all but impossible to pass useful legislation. The pandemic relief programs were the exception that proves the rule: Somehow those managed to get through, even though in any other context it’s clear that Congress would never have approved any kind of (non-military) program that spent that much money or helped that many poor people.

Student loans are the purview of the Department of Education, which is entirely under control of the Executive Branch, and therefore, ultimately, the President of the United States. So Biden could forgive student loans by executive order and there’s very little Congress could do to stop him. Even if that $1.75 trillion could be better spent, if it wasn’t going to be anyway, we may as well use it for this.

The second is that “college graduates” is too broad a category. Usually I’m on guard for this sort of thing, but in this case I faltered, and did not notice the fallacy of composition so many labor economists were making by lumping all college grads into the same economic category. Yes, some of us are doing well, but many are not. Within-group inequality matters.

A key insight here comes from carefully analyzing the college wage premium, which is the median income of college graduates, divided by the median income of high school graduates. This is an estimate of the overall value of a college education. It’s pretty large, as a matter of fact: It amounts to something like a doubling of your income, or about $1 million over a lifetime.

From about 1980 to 2000, wage inequality grew about as fast as it does today, and the college wage premium grew even faster. So it was plausible—if not necessarily correct—to believe that the wage inequality reflected the higher income and higher productivity of college grads. But since 2000, wage inequality has continued to grow, while the college wage premium has been utterly stagnant. Thus, higher inequality can no longer (if it ever could) be explained by the effects of college education.

Now some college graduates are definitely making a lot more money—such as those who went into finance. But it turns out that most are not. As Krugman points out, the 95th percentile of male college grads has seen a 25% increase in real (inflation-adjusted) income in the last 20 years, while the median male college grad has actually seen a slight decrease. (I’m not sure why Krugman restricted to males, so I’m curious how it looks if you include women. But probably not radically different?)

I still don’t think student loan forgiveness would be the best use of that (enormous sum of) money. But if it’s what’s politically feasible, it definitely could help a lot of people. And it would be easy enough to make it more progressive, by phasing out forgiveness for graduates with higher incomes.

And hey, it would certainly help me, so maybe I shouldn’t argue too strongly against it?

Rethinking progressive taxation

Apr 17 JDN 2459687

There is an extremely common and quite bizarre result in the standard theory of taxation, which is that the optimal marginal tax rate for the highest incomes should be zero. Ever since that result came out, economists have basically divided into two camps.

The more left-leaning have said, “This is obviously wrong; so why is it wrong? What are we missing?”; the more right-leaning have said, “The model says so, so it must be right! Cut taxes on the rich!”

I probably don’t need to tell you that I’m very much in the first camp. But more recently I’ve come to realize that even the answers left-leaning economists have been giving for why this result is wrong are also missing something vital.

There have been papers explaining that “the zero top rate only applies at extreme incomes” (uh, $50 billion sounds pretty extreme to me!) or “the optimal tax system can be U-shaped” (I don’t want U-shaped—we’re not supposed to be taxing the poor!).


And many economists still seem to find it reasonable to say that marginal tax rates should decline over some significant part of the distribution.

In my view, there are really two reasons why taxes should be progressive, and they are sufficiently general reasons that they should almost always override other considerations.

The first is diminishing marginal utility of wealth. The real value of a dollar is much less to someone who already has $1 million than to someone who has only $100. Thus, if we want to raise the most revenue while causing the least pain, we typically want to tax people who have a lot of money rather than people who have very little.

But the right-wing economists have an answer to this one, based on these fancy models: Yes, taking a given amount from the rich would be better (a lump-sum tax), but you can’t do that; you can only tax their income at a certain rate. (So far, that seems right. Lump-sum taxes are silly and economists talk about them too much.) But the rich are rich because they are more productive! If you tax them more, they will work less, and that will harm society as a whole due to their lost productivity.

This is the fundamental intuition behind the “top rate should be zero” result: The rich are so fantastically productive that it isn’t worth it to tax them. We simply can’t risk them working less.

But are the rich actually so fantastically productive? Are they really that smart? Do they really work that hard?

If Tony Stark were real, okay, don’t tax him. He is a one-man Singularity: He invented the perfect power source on his own, “in a cave, with a box of scraps!”; he created a true AI basically by himself; he single-handedly discovered a new stable island element and used it to make his already perfect power source even better.

But despite what his fanboys may tell you, Elon Musk is not Tony Stark. Tesla and SpaceX have done a lot of very good things, but in order to do them, they really didn’t need Elon Musk for much. Mainly, they needed his money. Give me $270 billion and I could make companies that build electric cars and launch rockets into space too. (Indeed, I probably would—though I’d also set up some charitable foundations as well, more like what Bill Gates did with his similarly mind-boggling wealth.)

Don’t get me wrong; Elon Musk is a very intelligent man, and he works, if anything, obsessively. (He makes his employees work excessively too—and that’s a problem.) But if he were to suddenly die, as long as a reasonably competent CEO replaced him, Tesla and SpaceX would go on working more or less as they already do. The spectacular productivity of these companies is not due to Musk alone, but thousands of highly-skilled employees. These people would be productive if Musk had not existed, and they will continue to be productive once Musk is gone.

And they aren’t particularly rich. They aren’t poor either, mind you—a typical engineer at Tesla or SpaceX is quite well-paid, and rightly so. (Median salary at SpaceX is over $115,000.) These people are brilliant, tremendously hard-working, and highly productive, and they are compensated accordingly. But very few of these people are in the top 1%, and basically none of them will ever be billionaires—let alone reach the truly staggering wealth of a hectobillionaire like Musk himself.

How, then, does one become a billionaire? Not by being brilliant, hard-working, or productive—at least that is not sufficient, and the existence of, say, Donald Trump suggests that it is not necessary either. No, the really quintessential feature every billionaire has is remarkably simple and consistent across the board: They own a monopoly.

You can pretty much go down the list, finding what monopoly each billionaire owned: Bill Gates owned software patents on (what is still) the most widely-used OS and office suite in the world. J.K. Rowling owns copyrights on the most successful novels in history. Elon Musk owns technology patents on various innovations in energy storage and spaceflight technology—very few of which he himself invented, I might add. Andrew Carnegie owned the steel industry. John D. Rockefeller owned the oil industry. And so on.

I honestly can’t find any real exceptions: Basically every billionaire either owned a monopoly or inherited from ancestors who did. The closest things to exceptions are billionaires who did something even worse, like defrauding thousands of people, enslaving an indigenous population, or running a nation with an iron fist. (And even then, Leopold II and Vladimir Putin both exerted a lot of monopoly power as part of their murderous tyranny.)

In other words, billionaire wealth is almost entirely rent. You don’t earn a billion dollars. You don’t get it by working. You get it by owning—and by using that ownership to exert monopoly power.

This means that taxing billionaire wealth wouldn’t incentivize them to work less; they already don’t work for their money. It would just incentivize them to fight less hard at extracting wealth from everyone else using their monopoly power—which hardly seems like a downside.

Since virtually all of the wealth at the top is simply rent, we have no reason not to tax it away. It isn’t genuine productivity at all; it’s just extracting wealth that other people produced.

Thus, my second, and ultimately most decisive reason for wanting strongly progressive taxes: rent-seeking. The very rich don’t actually deserve the vast majority of what they have, and we should take it back so that we can give it to people who really need and deserve it.

Now, there is a somewhat more charitable version of the view that high taxes even on the top 0.01% would hurt productivity, and it is worth addressing. That is based on the idea that entrepreneurship is valuable, and part of the incentive for becoming an entrepreneur is the chance at one day striking it fabulously rich, so taxing the fabulously rich might result in a world of fewer entrepreneurs.

This isn’t nearly as ridiculous as the idea that Elon Musk somehow works a million times as hard as the rest of us, but it’s still pretty easy to find flaws in it.

Suppose you were considering starting a business. Indeed, perhaps you already have considered it. What are your main deciding factors in whether or not you will?

Surely they do not include the difference between a 0.0001% chance of making $200 billion and a 0.0001% chance of making $50 billion. Indeed, that probably doesn’t factor in at all; you know you’ll almost certainly never get there, and even if you did, there’s basically no real difference in your way of life between $50 billion and $200 billion.

No, more likely they include things like this: (1) How likely are you to turn a profit at all? Even a profit of $50,000 per year would probably be enough to be worth it, but how sure are you that you can manage that? (2) How much funding can you get to start it in the first place? Depending on what sort of business you’re hoping to found, it could be as little as thousands or as much as millions of dollars to get it set up, well before it starts taking in any revenue. And even a few thousand is a lot for most middle-class people to come up with in one chunk and be willing to risk losing.

This means that there is a very simple policy we could implement which would dramatically increase entrepreneurship while taxing only billionaires more, and it goes like this: Add an extra 1% marginal tax to capital gains for billionaires, and plow it into a fund that gives grants of $10,000 to $100,000 to promising new startups.

That 1% tax could raise several billion dollars a year—yes, really; US billionaires gained some $2 trillion in capital gains last year, so we’d raise $20 billion—and thereby fund many, many startups. Say the average grant is $20,000 and the total revenue is $20 billion; that’s one million new startups funded every single year. Every single year! Currently, about 4 million new businesses are founded each year in the US (leading the world by a wide margin); this could raise that to 5 million.
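The back-of-the-envelope arithmetic, using the round numbers above:

```python
capital_gains = 2_000_000_000_000   # ~$2 trillion in annual billionaire capital gains
tax_rate = 0.01                     # the extra 1% marginal tax
average_grant = 20_000              # average startup grant

revenue = capital_gains * tax_rate           # $20 billion per year
startups_funded = revenue / average_grant    # 1,000,000 grants per year
print(f"${revenue:,.0f} per year -> {startups_funded:,.0f} startups funded")
```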

So don’t tell me this is about incentivizing entrepreneurship. We could do that far better than we currently do, with some very simple policy changes.

Meanwhile, the economics literature on optimal taxation seems to be completely missing the point. Most of it is still mired in the assumption that the rich are rich because they are productive, and thus terribly concerned about the “trade-off” between efficiency and equity involved in higher taxes. But when you realize that the vast, vast majority—easily 99.9%—of billionaire wealth is unearned rents, then it becomes obvious that this trade-off is an illusion. We can improve efficiency and equity simultaneously, by taking some of this ludicrous hoard of unearned wealth and putting it back into productive activities, or giving it to the people who need it most. The only people who will be harmed by this are billionaires themselves, and by diminishing marginal utility of wealth, they won’t be harmed very much.

Fortunately, the tide is turning, and more economists are starting to see the light. One of the best examples comes from Piketty, Saez, and Stantcheva in their paper on how CEO “pay for luck” (e.g. stock options) responds to top tax rates. There are a few other papers that touch on similar issues, such as Lockwood, Nathanson, and Weyl and Rothschild and Scheuer. But there’s clearly a lot of space left for new work to be done. The old results that told us not to raise taxes were wrong on a deep, fundamental level, and we need to replace them with something better.

The alienation of labor

Apr 10 JDN 2459680

Marx famously wrote that capitalism “alienates labor”. Much ink has been spilled over interpreting exactly what he meant by that, but I think the most useful and charitable reading goes something like the following:

When you make something for yourself, it feels fully yours. The effort you put into it feels valuable and meaningful. Whether you’re building a house to live in it or just cooking an omelet to eat it, your labor is directly reflected in your rewards, and you have a clear sense of purpose and value in what you are doing.

But when you make something for an employer, it feels like theirs, not yours. You have been instructed by your superiors to make a certain thing a certain way, for reasons you may or may not understand (and may or may not even agree with). Once you deliver the product—which may be as concrete as a carburetor or as abstract as an accounting report—you will likely never see it again; it will be used or not by someone else somewhere else whom you may not even ever get the chance to meet. Such labor feels tedious, effortful, exhausting—and also often empty, pointless, and meaningless.

On that reading, Marx isn’t wrong. There really is something to this. (I don’t know if this is really Marx’s intended meaning or not, and really I don’t much care—this is a valid thing and we should be addressing it, whether Marx meant to or not.)

There is a little parable about this, though I can’t quite remember where I heard it:

Three men are moving heavy stones from one place to another. A traveler passes by and asks them, “What are you doing?”

The first man sighs and says, “We do whatever the boss tells us to do.”

The second man shrugs and says, “We pick up the rocks here, we move them over there.”

The third man smiles and says, “We’re building a cathedral.”

The three answers are quite different—yet all three men may be telling the truth as they see it.

The first man is fully alienated from his labor: he does whatever the boss says, following instructions that he considers arbitrary and mechanical. The second man is partially alienated: he knows the mechanics of what he is trying to accomplish, which may allow him to improve efficiency in some way (e.g. devise better ways to transport the rocks faster or with less effort), but he doesn’t understand the purpose behind it all, so ultimately his work still feels meaningless. But the third man is not alienated: he understands the purpose of his work, and he values that purpose. He sees that what he is doing is contributing to a greater whole that he considers worthwhile. It’s not hard to imagine that the third man will be the happiest, and the first will be the unhappiest.

There really is something about the capitalist wage-labor structure that can easily feed into this sort of alienation. You get a job because you need money to live, not because you necessarily value whatever the job does. You do as you are told so that you can keep your job and continue to get paid.

Some jobs are much more alienating than others. Most teachers and nurses see their work as a vocation, even a calling—their work has deep meaning for them and they value its purpose. At the other extreme there are corporate lawyers and derivatives traders, who must on some level understand that their work contributes almost nothing to the world (and may in fact actively cause harm), but they continue to do the work because it pays them very well.

But there are many jobs in between which can be experienced both ways. Working in retail can be an agonizing grind where you must face a grueling gauntlet of ungrateful customers day in and day out—or it can be a way to participate in your local community and help your neighbors get the things they need. Working in manufacturing can be a mechanical process of inserting tab A into slot B and screwing it into place over, and over, and over again—or it can be a chance to create something, convert raw materials into something useful and valuable that other people can cherish.

And while individual perspective and framing surely matter here—those three men were all working in the same quarry, building the same cathedral—there is also an important objective component. Working as an artisan is not as alienating as working on an assembly line. Hosting a tent at a farmer’s market is not as alienating as working the register at Walmart. Tutoring an individual student is more purposeful than recording video lectures for a MOOC. Running a quirky local book store is more fulfilling than stocking shelves at Barnes & Noble.

Moreover, capitalism really does seem to push us more toward the alienating side of the spectrum. Assembly lines are far more efficient than artisans, so we make most of our products on assembly lines. Buying food at Walmart is cheaper and more convenient than at farmer’s markets, so more people shop there. Hiring one video lecturer for 10,000 students is a lot cheaper than paying 100 in-person lecturers, let alone 1,000 private tutors. And Barnes & Noble doesn’t drive out local book stores by some nefarious means: It just provides better service at lower prices. If you want a specific book for a good price right now, you’re much more likely to find it at Barnes & Noble. (And even more likely to find it on Amazon.)

Finding meaning in your work is very important for human happiness. Indeed, along with health and social relationships, it’s one of the biggest determinants of happiness. For most people in First World countries, it seems to be more important than income (though income certainly does matter).

Yet the increased efficiency and productivity upon which our modern standard of living depends seems to be based upon a system of production—in a word, capitalism—that systematically alienates us from meaning in our work.

This puts us in a dilemma: Do we keep things as they are, accepting that we will feel an increasing sense of alienation and ennui as our wealth continues to grow and we get ever-fancier toys to occupy our meaningless lives? Or do we turn back the clock, returning to a world where work once again has meaning, but at the cost of making everyone poorer—and some people desperately so?

Well, first of all, to some extent this is a false dichotomy. There are jobs that are highly meaningful but also highly productive, such as teaching and engineering. (Even recording a video lecture is a lot more fulfilling than plenty of jobs out there.) We could try to direct more people into jobs like these. There are jobs that are neither particularly fulfilling nor especially productive, like driving trucks, washing floors and waiting tables. We could redouble our efforts into automating such jobs out of existence. There are meaningless jobs that are lucrative only by rent-seeking, producing little or no genuine value, like the aforementioned corporate lawyers and derivatives traders. These, quite frankly, could simply be banned—or if there is some need for them in particular circumstances (I guess someone should defend corporations when they get sued; but they far more often go unjustly unpunished than unjustly punished!), strictly regulated and their numbers and pay rates curtailed.

Nevertheless, we still have decisions to make, as a society, about what we value most. Do we want a world of cheap, mostly adequate education, that feels alienating even to the people producing it? Then MOOCs are clearly the way to go; pennies on the dollar for education that could well be half as good! Or do we want a world of high-quality, personalized teaching, by highly-qualified academics, that will help students learn better and feel more fulfilling for the teachers? More pointedly—are we willing to pay for that higher-quality education, knowing it will be more expensive?

Moreover, in the First World at least, our standard of living is… pretty high already? Like seriously, what do we really need that we don’t already have? We could always imagine more, of course—a bigger house, a nicer car, dining at fancier restaurants, and so on. But most of us have roofs over our heads, clothes on our backs, and food on our tables.

Economic growth has done amazing things for us—but maybe we’re kind of… done? Maybe we don’t need to keep growing like this, and should start redirecting our efforts away from greater efficiency and toward greater fulfillment. Maybe there are economic possibilities we haven’t been considering.

Note that I specifically mean First World countries here. In Third World countries it’s totally different—they need growth, lots of it, as fast as possible. Fulfillment at work ends up being a pretty low priority when your children are starving and dying of malaria.

But then, you may wonder: If we stop buying cheap plastic toys to fill the emptiness in our hearts, won’t that throw all those Chinese factory workers back into poverty?

In the system as it stands? Yes, that’s a real concern. A sudden drop in consumption spending in general, or even imports in particular, in First World countries could be economically devastating for millions of people in Third World countries.

But there’s nothing inherent about this arrangement. There are less-alienating ways of working that can still provide a decent standard of living, and there’s no fundamental reason why people around the world couldn’t all be doing them. If they aren’t, it’s in the short run because they don’t have the education or the physical machinery—and in the long run it’s usually because their government is corrupt and authoritarian. A functional democratic government can get you capital and education remarkably fast—it certainly did in South Korea, Taiwan, and Japan.

Automation is clearly a big part of the answer here. Many people in the First World seem to suspect that our way of life depends upon the exploited labor of impoverished people in Third World countries, but this is largely untrue. Most of that work could be done by robots and highly-skilled technicians and engineers; it just isn’t because that would cost more. Yes, that higher cost would mean some reduction in standard of living—but it wouldn’t be nearly as dramatic as many people seem to think. We would have slightly smaller houses and slightly older cars and slightly slower laptops, but we’d still have houses and cars and laptops.

So I don’t think we should all cast off our worldly possessions just yet. Whether or not it would make us better off, it would cause great harm to countries that depend on their exports to us. But in the long run, I do think we should be working to achieve a future for humanity that isn’t so obsessed with efficiency and growth, and instead tries to provide both a decent standard of living and a life of meaning and purpose.

Reversals in progress against poverty

Jan 16 JDN 2459606

I don’t need to tell you that the COVID pandemic has been very bad for the world. Yet perhaps the worst outcome of the pandemic is one that most people don’t recognize: It has reversed years of progress against global poverty.

Estimates of the number of people who will be thrown into extreme poverty as a result of the pandemic are consistently around 100 million, though some forecasts have predicted this will rise to 150 million, or, in the most pessimistic scenarios, even as high as 500 million.

Pre-COVID projections showed the global poverty rate falling steadily from 8.4% in 2019 to 6.3% by 2030. But COVID resulted in the first upward surge in global poverty in decades, and updated models now suggest that the global poverty rate in 2030 will be as high as 7.0%. That difference of 0.7 percentage points, applied to a forecasted population of 8.5 billion, works out to about 59 million people.
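Just to make that arithmetic explicit, here is a minimal sketch in Python; the rates and the population figure are simply the projections cited above, not new data:

```python
pre_covid_rate = 0.063      # projected 2030 global poverty rate before COVID
post_covid_rate = 0.070     # updated post-COVID projection for 2030
population_2030 = 8.5e9     # forecast world population in 2030

extra_poor = (post_covid_rate - pre_covid_rate) * population_2030
print(extra_poor / 1e6)     # ~59.5 million additional people in extreme poverty
```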

This is a terrible reversal of fortune, and a global tragedy. Tens or perhaps even hundreds of millions of people will suffer the pain of poverty because of this global pandemic and the numerous missteps by many of the world’s governments—not least the United States—in response to it.

Yet it’s important to keep in mind that this is a short-term reversal in a long-term trend toward reduced poverty. Yes, the most optimistic predictions are turning out to be wrong—but the general pattern of dramatic reductions in global poverty over the late 20th and early 21st centuries is still holding up.

That post-COVID estimate of a 7.0% global poverty rate needs to be compared against the fact that as recently as 1980, the global poverty rate at the same income level (adjusted for inflation and purchasing power, of course) was a whopping 44%.

This pattern makes me feel deeply ambivalent about the effects of globalization on inequality. While it now seems clear that globalization has exacerbated inequality within First World countries—and triggered a terrible backlash of right-wing populism as a result—it also seems clear that globalization was a major reason for the dramatic reductions in global poverty in the past few decades.

I think the best answer I’ve been able to come up with is that globalization is overall a good thing, and we must continue it—but we also need to be much more mindful of its costs, and we must make policy that mitigates those costs. Expanded trade has winners and losers, and we should be taxing the winners to compensate the losers. To make good economic policy, it simply isn’t enough to increase aggregate GDP; you actually have to make life better for everyone (or at least as many people as you can).

Unfortunately, knowing what policies to make is only half the battle. We must actually implement those policies, which means winning elections, which means restoring the public’s faith in the authority of economic experts.

Some of the people voting for Donald Trump were just what Hillary Clinton correctly (if tone-deafly) referred to as “deplorables”: racists, misogynists, xenophobes. But I think that many others weren’t voting for Trump but against Clinton; they weren’t embracing far-right populism but rather rejecting center-left technocratic globalization. They were tired of being told what to do by experts who didn’t seem to care about them or their interests.

And the thing is, they were right about that. Not about voting for Trump—that’s unforgivable—but about the fact that expert elites had been ignoring their interests and needed a wake-up call. There were a hundred better ways of making that wake-up call that didn’t involve putting a narcissistic, incompetent maniac in charge of the world’s largest economy, military and nuclear arsenal, and millions of people should be ashamed of themselves for not taking those better options. Yet the fact remains: The wake-up call was necessary, and we should be responding to it.

We expert elites (I think I can officially carry that card, now that I have a PhD and a faculty position at a leading research university) need to do a much better job of two things: First, articulating the case for our policy recommendations in a way that ordinary people can understand, so that they feel justified and not simply rammed down people’s throats; and second, recognizing the costs and downsides of these policies and taking action to mitigate them whenever possible.

For instance: Yes, we need to destroy all the coal jobs. They are killing workers and the planet. Coal companies need to be transitioned to new industries or else shut down. This is not optional. It must be done. But we also need to explain to those coal miners why it’s necessary to move on from coal to solar and nuclear, and we need to be implementing various policies to help those workers move on to better, safer jobs that pay as well and don’t involve filling their lungs with soot and the atmosphere with carbon dioxide. We need to articulate, emphasize—and loudly repeat—that this isn’t about hurting coal miners to help everyone else, but about helping everyone, coal miners included, and that if anyone gets hurt it will only be a handful of psychopathic billionaires who already have more money than any human being could possibly need or deserve.

Another example: We cannot stop trading with India and China. Hundreds of millions of innocent people would suddenly be thrown out of work and into poverty if we did. We need the products they make for us, and they need the money we pay for those products. But we must also acknowledge that trading with poor countries does put downward pressure on wages back home, and take action to help First World workers who are now forced to compete with global labor markets. Maybe this takes the form of better unemployment benefits, or job-matching programs, or government-sponsored job training. But we cannot simply shrug and let people lose their jobs and their homes because the factories they worked in were moved to China.

The economics of interstellar travel

Dec 19 JDN 2459568

Since these are rather dark times—the Omicron strain means that COVID is still very much with us, after nearly two years—I thought we could all use something a bit more light-hearted and optimistic.

In 1978 Paul Krugman wrote a paper entitled “The Theory of Interstellar Trade”, which has what is surely one of the greatest abstracts of all time:

This paper extends interplanetary trade theory to an interstellar setting. It is chiefly concerned with the following question: how should interest charges on goods in transit be computed when the goods travel at close to the speed of light? This is a problem because the time taken in transit will appear less to an observer travelling with the goods than to a stationary observer. A solution is derived from economic theory, and two useless but true theorems are proved.

The rest of the paper is equally delightful, and well worth a read. Of particular note are these two sentences, which should give you a feel: “The rest of the paper is, will be, or has been, depending on the reader’s inertial frame, divided into three sections.” and “This extension is left as an exercise for interested readers because the author does not understand general relativity, and therefore cannot do it himself.”

As someone with training in both economics and relativistic physics, I can tell you that Krugman’s analysis is entirely valid, given its assumptions. (Really, this is unsurprising: He’s a Nobel Laureate. One could imagine he got his physics wrong, but he didn’t—and of course he didn’t get his economics wrong.) But, like much highfalutin economic theory, it relies upon assumptions that are unlikely to be true.

Set aside the assumptions of perfect competition and unlimited arbitrage that yield Krugman’s key result of equalized interest rates. These are indeed implausible, but they’re also so standard in economics as to be pedestrian.

No, what really concerns me is this: Why bother with interstellar trade at all?

Don’t get me wrong: I’m all in favor of interstellar travel and interstellar colonization. I want humanity to expand and explore the galaxy (or rather, I want that to be done by whatever humanity becomes, likely some kind of cybernetically and biogenetically enhanced transhumans in endless varieties we can scarcely imagine). But once we’ve gone through all the effort to spread ourselves to distant stars, it’s not clear to me that we’d ever have much reason to trade across interstellar distances.

If we ever manage to invent efficient, reliable, affordable faster-than-light (FTL) travel à la Star Trek, sure. In that case, there’s no fundamental difference between interstellar trade and any other kind of trade. But that’s not what Krugman’s paper is about, as its key theorems are actually about interest rates and prices in different inertial reference frames, which is only relevant if you’re limited to relativistic—that is, slower-than-light—velocities.

Moreover, as far as we can tell, that’s impossible. Yes, there are still some vague slivers of hope left with the Alcubierre Drive, wormholes, etc.; but by far the most likely scenario is that FTL travel is simply impossible and always will be.

FTL communication is much more plausible, as it merely requires the exploitation of nonlocal quantum entanglement outside quantum equilibrium; if the Bohm Interpretation is correct (as I strongly believe it is), then this is a technological problem rather than a theoretical one. At best this might one day lead to some form of nonlocal teleportation—but definitely not FTL starships. Since our souls are made of software, sending information can, in principle, send a person; but we almost surely won’t be sending mass faster than light.

So let’s assume, as Krugman did, that we will be limited to travel close to, but less than, the speed of light. (I recently picked up a term for this from Ursula K. Le Guin: “NAFAL”, “nearly-as-fast-as-light”.)

This means that any transfer of material from one star system to another will take, at minimum, years. It could even be decades or centuries, depending on how close to the speed of light we are able to get.

Assuming we have abundant antimatter or some similarly extremely energy-dense propulsion, it would be reasonable to expect that we could build interstellar spacecraft capable of accelerating at approximately Earth gravity (i.e. 1 g) for several years at a time. This would be quite comfortable for the crew of the ship—it would just feel like standing on Earth. And it turns out that this is sufficient to attain velocities quite close to the speed of light over the distances to nearby stars.

I will spare you the complicated derivation, but there are well-known equations which allow us to convert from proper acceleration (the acceleration felt on a spacecraft, i.e. 1 g in this case) to maximum velocity and total travel time, and they imply that a vessel which was constantly accelerating at 1 g (speeding up for the first half, then slowing down for the second half) could reach most nearby stars within about 50 to 100 years Earth time, or as little as 10 to 20 years ship time.
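For readers who want to see the machinery, here is a minimal sketch of those standard constant-proper-acceleration (“relativistic rocket”) formulas in Python; the 1 g value and the example distances are purely illustrative assumptions:

```python
import math

C = 1.0          # speed of light, in light-years per year
G_ACCEL = 1.03   # roughly 1 g, expressed in light-years per year squared

def travel_times(distance_ly, accel=G_ACCEL):
    """Earth-frame and ship-frame travel times for a trip that accelerates
    at constant proper acceleration for the first half of the distance and
    decelerates for the second half (the standard relativistic rocket equations)."""
    x = 1 + accel * (distance_ly / 2) / C**2
    earth_half = (C / accel) * math.sqrt(x**2 - 1)
    ship_half = (C / accel) * math.acosh(x)
    return 2 * earth_half, 2 * ship_half

for d in (4.2, 10.5, 50.0, 100.0):   # Proxima Centauri, Epsilon Eridani, farther stars
    t_earth, t_ship = travel_times(d)
    print(f"{d:6.1f} ly: {t_earth:6.1f} yr Earth time, {t_ship:5.1f} yr ship time")
```

Note that the ship time grows only logarithmically with distance, which is why the crew’s experience of even a very long voyage stays comparatively manageable.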

With higher levels of acceleration, you can shorten the trip; but that would require designing ships (or engineering crews?) in such a way as to sustain these high levels of acceleration for years at a time. Humans can sustain 3 g’s for hours, but not for years.

Even with only 1-g acceleration, the fuel costs for such a trip are staggering: Even with antimatter fuel you need dozens or hundreds of times as much mass in fuel as you have in payload—and with anything less than antimatter it’s basically just not possible. Yet there is nothing in the laws of physics saying you can’t do it, and I believe that someday we will.
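To get a rough feel for where those fuel numbers come from, here is the idealized photon-rocket mass ratio, sketched under the most generous assumptions possible: perfectly efficient antimatter annihilation, perfectly collimated exhaust, and no engineering losses.

```python
import math

C = 1.0          # speed of light, in light-years per year
G_ACCEL = 1.03   # roughly 1 g, in light-years per year squared

def photon_rocket_mass_ratio(ship_time_yr, accel=G_ACCEL):
    """Initial-to-final mass ratio for an ideal photon (antimatter) rocket
    thrusting continuously for the given proper (ship) time: the total
    rapidity change is accel * time / c, and the ideal mass ratio is its exponential."""
    return math.exp(accel * ship_time_yr / C)

# About five years of ship time under thrust (roughly a 10-light-year trip at 1 g):
print(photon_rocket_mass_ratio(5.0))   # ~170, i.e. on the order of a hundred units of fuel per unit of payload
```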

Yet I sincerely doubt we would want to make such trips often. It’s one thing to send occasional waves of colonists, perhaps one each generation. It’s quite another to establish real two-way trade in goods.

Imagine placing an order for something—anything—and not receiving it for another 50 years. Even if, as I hope and believe, our descendants have attained far longer lifespans than we have, asymptotically approaching immortality, it seems unlikely that they’d be willing to wait decades for their shipments to arrive. In the same amount of time you could establish an entire industry in your own star system, built from the ground up, fully scaled to service entire planets.

In order to justify such a transit, you need to be carrying something truly impossible to produce locally. And there just won’t be very many such things.

People, yes. Definitely in the first wave of colonization, but likely in later waves as well, people will want to move themselves and their families across star systems, and will be willing to wait (especially since the time they experience on the ship won’t be nearly as daunting).

And there will be knowledge and experiences that are unique to particular star systems—but we’ll be sending that by radio signal and it will only take as many years as there are light-years between us; or we may even manage to figure out FTL ansibles and send it even faster than that.

It’s difficult for me to imagine what sort of goods could ever be so precious, so irreplaceable, that it would actually make sense to trade them across an interstellar distance. All habitable planets are likely to be made of essentially the same elements, in approximately the same proportions; whatever you may want, it’s almost certainly going to be easier to get it locally than it would be to buy it from another star system.

This is also why I think alien invasion is unlikely: There’s nothing they would particularly want from us that they couldn’t get more easily. Their most likely reason for invading would be specifically to conquer and rule us.

Certainly if you want gold or neodymium or deuterium, it’ll be thousands of times easier to get it at home. But even if you want something hard to make, like antimatter, or something organic and unique, like oregano, building up the industry to manufacture a product or the agriculture to grow a living organism is almost certainly going to be faster and easier than buying it from another solar system.

This is why I believe that for the first generation of interstellar colonists, imports will be textbooks, blueprints, and schematics to help build, and films, games, and songs to stay entertained and tied to home; exports will consist of scientific data about the new planet as well as artistic depictions of life on an alien world. For later generations, it won’t be so lopsided: The colonies will have new ideas in science and engineering as well as new art forms to share. Billions of people on Earth and thousands or millions on each colony world will await each new transmission of knowledge and art with bated breath.

Long-distance trade historically was mainly conducted via precious metals such as gold; but if interstellar travel is feasible, gold is going to be dirt cheap. Any civilization capable of even sending a small intrepid crew of colonists to Epsilon Eridani is going to consider mining asteroids an utterly trivial task.

Will such transactions involve money? Will we sell these ideas, or simply give them away? Unlike my previous post where I focused on the local economy, here I find myself agreeing with Star Trek: Money isn’t going to make sense for interstellar travel. Unless we have very fast communication, the time lag between paying money out and then seeing it circulate back will be so long that the money returned to you will be basically worthless. And that’s assuming you figure out a way to make transactions clear that doesn’t require real-time authentication—because you won’t have it.

Consider Epsilon Eridani, a plausible choice for one of the first star systems we will colonize. That’s 10.5 light-years away, so a round-trip signal will take 21 years. If inflation is a steady 2%, that means that $100 today will need to come back as $151 to have the same value by the time you hear back from your transaction. If you had the option to invest in a 5% bond instead, you’d have $279 by then. And this is a nearby star.
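If you want to check those figures, the compound-interest arithmetic is only a couple of lines; the 2% inflation rate and 5% bond rate are just the illustrative assumptions above:

```python
lag_years = 2 * 10.5             # round-trip light lag to Epsilon Eridani, in years
print(100 * 1.02 ** lag_years)   # ~151.6: what $100 must grow to just to keep pace with 2% inflation
print(100 * 1.05 ** lag_years)   # ~278.6: what a 5% bond would have returned over the same wait
```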

It would be much easier to simply trade data for data, maybe just gigabyte for gigabyte or maybe by some more sophisticated notion of relative prices. You don’t need to worry about what your dollar will be worth 20 years from now; you know how much effort went into designing that blueprint for an antimatter processor and you know how much you’ll appreciate seeing that VR documentary on the rings of Aegir. You may even have in mind how much it cost you to pay people to design prototypes and how much you can sell the documentary for; but those monetary transactions will be conducted within your own star system, independently of whatever monetary system prevails on other stars.

Indeed, it’s likely that we wouldn’t even bother trying to negotiate how much to send—because that itself would have such overhead and face the same time-lags—and would instead simply make a habit of sending everything we possibly can. Such interchanges could be managed by governments at each end, supported by public endowments. “This year’s content from Epsilon Eridani, brought to you by the Smithsonian Institution.”

We probably won’t ever have—or need, or want—huge freighter ships carrying containers of goods from star to star. But with any luck, we will one day have art and ideas from across the galaxy shared by all of the endless variety of beings humanity has become.