# Maybe we should forgive student debt after all.

May 8 JDN 2459708

President Biden has been promising some form of student debt relief since the start of his campaign, though so far all he has actually implemented is a series of no-interest deferments and some improvements to the existing forgiveness programs. (This is still significant—it has definitely helped a lot of people with cash flow during the pandemic.) Actual forgiveness for a large segment of the population remains elusive, and if it does happen, it’s unclear how extensive it will be in either intensity (amount forgiven) or scope (who is eligible).

I personally had been fine with this; while I have a substantial loan balance myself, I also have a PhD in economics, which—theoretically—should at some point entitle me to sufficient income to repay those loans.

Moreover, until recently I had been one of the few left-wing people I know who was not terribly enthusiastic about loan forgiveness. It struck me as a poor use of those government funds, because $1.75 trillion is an awful lot of money, and college graduates are a relatively privileged population. (And yes, it is valid to consider this a question of “spending”, because the US government is the least liquidity-constrained entity on Earth. In lieu of forgiving $1.75 trillion in debt, it could borrow $1.75 trillion and use it to pay for whatever it wants, and its ultimate budget balance would be basically the same in either case.)

But I say all this in the past tense because Krugman’s recent column has caused me to reconsider. He gives two strong reasons why debt forgiveness may actually be a good idea.

The first is that Congress is useless. Thanks to gerrymandering and the 40% or so of our population who keep electing Republicans no matter how crazy they get, it’s all but impossible to pass useful legislation. The pandemic relief programs were the exception that proves the rule: Somehow those managed to get through, even though in any other context it’s clear that Congress would never have approved any kind of (non-military) program that spent that much money or helped that many poor people.

Student loans are the purview of the Department of Education, which is entirely under the control of the Executive Branch, and therefore, ultimately, the President of the United States. So Biden could forgive student loans by executive order, and there’s very little Congress could do to stop him. Even if that $1.75 trillion could be better spent, if it wasn’t going to be anyway, we may as well use it for this.

The second is that “college graduates” is too broad a category. Usually I’m on guard for this sort of thing, but in this case I faltered, and did not notice the fallacy of composition so many labor economists were making by lumping all college grads into the same economic category. Yes, some of us are doing well, but many are not. Within-group inequality matters.

A key insight here comes from carefully analyzing the college wage premium: the median income of college graduates divided by the median income of high school graduates. This is an estimate of the overall value of a college education, and it’s pretty large, as a matter of fact: It amounts to something like a doubling of your income, or about $1 million over one’s whole lifespan.

From about 1980 to 2000, wage inequality grew about as fast as it does today, and the college wage premium grew even faster. So it was plausible—if not necessarily correct—to believe that the wage inequality reflected the higher income and higher productivity of college grads. But since 2000, wage inequality has continued to grow, while the college wage premium has been utterly stagnant. Thus, higher inequality can no longer (if it ever could) be explained by the effects of college education.

Now, some college graduates are definitely making a lot more money—such as those who went into finance. But it turns out that most are not. As Krugman points out, the 95th percentile of male college grads has seen a 25% increase in real (inflation-adjusted) income in the last 20 years, while the median male college grad has actually seen a slight decrease. (I’m not sure why Krugman restricted the comparison to men, so I’m curious how it looks if you include women. But probably not radically different?)

I still don’t think student loan forgiveness would be the best use of that (enormous sum of) money. But if it’s what’s politically feasible, it definitely could help a lot of people. And it would be easy enough to make it more progressive, by phasing out forgiveness for graduates with higher incomes.

And hey, it would certainly help me, so maybe I shouldn’t argue too strongly against it?

# Rethinking progressive taxation

Apr 17 JDN 2459687

There is an extremely common and quite bizarre result in the standard theory of taxation, which is that the optimal marginal tax rate for the highest incomes should be zero.
Ever since that result came out, economists have basically divided into two camps. The more left-leaning have said, “This is obviously wrong; so why is it wrong? What are we missing?”; the more right-leaning have said, “The model says so, so it must be right! Cut taxes on the rich!”

I probably don’t need to tell you that I’m very much in the first camp. But more recently I’ve come to realize that even the answers left-leaning economists have been giving for why this result is wrong are missing something vital. There have been papers explaining that “the zero top rate only applies at extreme incomes” (uh, $50 billion sounds pretty extreme to me!) or that “the optimal tax system can be U-shaped” (I don’t want U-shaped—we’re not supposed to be taxing the poor!).

And many economists still seem to find it reasonable to say that marginal tax rates should decline over some significant part of the distribution.

In my view, there are really two reasons why taxes should be progressive, and they are sufficiently general reasons that they should almost always override other considerations.

The first is diminishing marginal utility of wealth. The real value of a dollar is much less to someone who already has $1 million than to someone who has only $100. Thus, if we want to raise the most revenue while causing the least pain, we typically want to tax people who have a lot of money rather than people who have very little.
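To put rough numbers on that intuition, here is a minimal sketch under the standard (though by no means unique) assumption of logarithmic utility; the two wealth levels are just the ones from the example above:

```python
import math

def marginal_utility(wealth, delta=1.0):
    """Utility gained from one extra dollar, assuming log utility u(w) = ln(w)."""
    return math.log(wealth + delta) - math.log(wealth)

poor = marginal_utility(100)         # value of a dollar to someone with $100
rich = marginal_utility(1_000_000)   # value of a dollar to someone with $1 million
ratio = poor / rich                  # how much more that dollar matters to the poor
```

Under this assumption, the same dollar is worth roughly ten thousand times more to the person with $100, which is the whole case for taxing where the money is.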

But the right-wing economists have an answer to this one, based on their fancy models: Yes, taking a given amount from the rich would be better (a lump-sum tax), but you can’t do that; you can only tax their income at a certain rate. (So far, that seems right. Lump-sum taxes are silly and economists talk about them too much.) But the rich are rich because they are more productive! If you tax them more, they will work less, and that will harm society as a whole due to their lost productivity.

This is the fundamental intuition behind the “top rate should be zero” result: The rich are so fantastically productive that it isn’t worth it to tax them. We simply can’t risk them working less.

But are the rich actually so fantastically productive? Are they really that smart? Do they really work that hard?

If Tony Stark were real, okay, don’t tax him. He is a one-man Singularity: He invented the perfect power source on his own, “in a cave, with a box of scraps!”; he created a true AI basically by himself; he single-handedly discovered a new stable island element and used it to make his already perfect power source even better.

But despite what his fanboys may tell you, Elon Musk is not Tony Stark. Tesla and SpaceX have done a lot of very good things, but to do them they really didn’t need Elon Musk for much. Mainly, they needed his money. Give me $270 billion and I could make companies that build electric cars and launch rockets into space too. (Indeed, I probably would—though I’d also set up some charitable foundations as well, more like what Bill Gates did with his similarly mind-boggling wealth.)

Don’t get me wrong; Elon Musk is a very intelligent man, and he works, if anything, obsessively. (He makes his employees work excessively too—and that’s a problem.) But if he were to suddenly die, as long as a reasonably competent CEO replaced him, Tesla and SpaceX would go on working more or less as they already do. The spectacular productivity of these companies is due not to Musk alone, but to thousands of highly-skilled employees. These people would have been productive if Musk had never existed, and they will continue to be productive once Musk is gone.

And they aren’t particularly rich. They aren’t poor either, mind you—a typical engineer at Tesla or SpaceX is quite well-paid, and rightly so. (Median salary at SpaceX is over $115,000.) These people are brilliant, tremendously hard-working, and highly productive. But very few of them are in the top 1%, and basically none of them will ever be billionaires—let alone reach the truly staggering wealth of a hectobillionaire like Musk himself.

How, then, does one become a billionaire? Not by being brilliant, hard-working, or productive—at least that is not sufficient, and the existence of, say, Donald Trump suggests that it is not necessary either. No, the really quintessential feature every billionaire has is remarkably simple and consistent across the board: They own a monopoly.

You can pretty much go down the list, finding what monopoly each billionaire owned: Bill Gates owned software patents on (what is still) the most widely-used OS and office suite in the world. J.K. Rowling owns copyrights on the most successful novels in history. Elon Musk owns technology patents on various innovations in energy storage and spaceflight technology—very few of which he himself invented, I might add. Andrew Carnegie owned the steel industry. John D. Rockefeller owned the oil industry. And so on.

I honestly can’t find any real exceptions: Basically every billionaire either owned a monopoly or inherited from ancestors who did. The closest things to exceptions are billionaires who did something even worse, like defrauding thousands of people, enslaving an indigenous population, or running a nation with an iron fist. (And even then, Leopold II and Vladimir Putin both exerted a lot of monopoly power as part of their murderous tyranny.)

In other words, billionaire wealth is almost entirely rent. You don’t earn a billion dollars. You don’t get it by working. You get it by owning—and by using that ownership to exert monopoly power.

This means that taxing billionaire wealth wouldn’t incentivize them to work less; they already don’t work for their money. It would just incentivize them to fight less hard at extracting wealth from everyone else using their monopoly power—which hardly seems like a downside.

Since virtually all of the wealth at the top is simply rent, we have no reason not to tax it away. It isn’t genuine productivity at all; it’s just extracting wealth that other people produced.

Thus, my second, and ultimately more decisive, reason for wanting strongly progressive taxes: rent-seeking. The very rich don’t actually deserve the vast majority of what they have, and we should take it back so that we can give it to people who really need and deserve it.

Now, there is a somewhat more charitable version of the view that high taxes even on the top 0.01% would hurt productivity, and it is worth addressing. It is based on the idea that entrepreneurship is valuable, and that part of the incentive for becoming an entrepreneur is the chance of one day striking it fabulously rich; so taxing the fabulously rich might result in a world of fewer entrepreneurs.

This isn’t nearly as ridiculous as the idea that Elon Musk somehow works a million times as hard as the rest of us, but it’s still pretty easy to find flaws in it.

Suppose you were considering starting a business. Indeed, perhaps you already have considered it. What are your main deciding factors in whether or not you will?

Surely they do not include the difference between a 0.0001% chance of making $200 billion and a 0.0001% chance of making $50 billion. Indeed, that probably doesn’t factor in at all; you know you’ll almost certainly never get there, and even if you did, there’s basically no real difference in your way of life between $50 billion and $200 billion.

No, more likely they include things like this: (1) How likely are you to turn a profit at all? Even a profit of $50,000 per year would probably be enough to be worth it, but how sure are you that you can manage that? (2) How much funding can you get to start it in the first place? Depending on what sort of business you’re hoping to found, it could take as little as thousands or as much as millions of dollars to get set up, well before it starts taking in any revenue. And even a few thousand is a lot for most middle-class people to come up with in one chunk and be willing to risk losing.

This means that there is a very simple policy we could implement which would dramatically increase entrepreneurship while taxing only billionaires more, and it goes like this: Add an extra 1% marginal tax on capital gains for billionaires, and plow the revenue into a fund that gives grants of $10,000 to $100,000 to promising new startups.

That 1% tax could raise several billion dollars a year—yes, really; US billionaires gained some $2 trillion in capital gains last year, so we’d raise $20 billion—and thereby fund many, many startups. Say the average grant is $20,000 and the total revenue is $20 billion; that’s one million new startups funded every single year. Every single year! Currently, about 4 million new businesses are founded each year in the US (leading the world by a wide margin); this policy could raise that to 5 million.

So don’t tell me this is about incentivizing entrepreneurship. We could do that far better than we currently do, with some very simple policy changes.

Meanwhile, the economics literature on optimal taxation seems to be completely missing the point. Most of it is still mired in the assumption that the rich are rich because they are productive, and thus terribly concerned about the “trade-off” between efficiency and equity involved in higher taxes.
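(As a quick sanity check, the back-of-envelope arithmetic behind the grant-fund proposal a few paragraphs up is just this, using only the figures already given:)

```python
# Billionaire-surtax startup fund, back of the envelope.
billionaire_gains = 2e12        # ~$2 trillion in annual US billionaire capital gains
surtax_rate = 0.01              # the proposed extra 1% marginal tax
avg_grant = 20_000              # assumed average grant size, in dollars

revenue = billionaire_gains * surtax_rate   # annual fund revenue: $20 billion
startups_funded = revenue / avg_grant       # grants per year: one million
```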
But when you realize that the vast, vast majority—easily 99.9%—of billionaire wealth is unearned rents, then it becomes obvious that this trade-off is an illusion. We can improve efficiency and equity simultaneously, by taking some of this ludicrous hoard of unearned wealth and putting it back into productive activities, or giving it to the people who need it most. The only people who will be harmed by this are billionaires themselves, and by diminishing marginal utility of wealth, they won’t be harmed very much.

Fortunately, the tide is turning, and more economists are starting to see the light. One of the best examples comes from Piketty, Saez, and Stantcheva in their paper on how CEO “pay for luck” (e.g. stock options) responds to top tax rates. There are a few other papers that touch on similar issues, such as Lockwood, Nathanson, and Weyl, and Rothschild and Scheuer. But there’s clearly a lot of space left for new work to be done. The old results that told us not to raise taxes were wrong on a deep, fundamental level, and we need to replace them with something better.

# The alienation of labor

Apr 10 JDN 2459680

Marx famously wrote that capitalism “alienates labor”. Much ink has been spilled over interpreting exactly what he meant by that, but I think the most useful and charitable reading goes something like the following:

When you make something for yourself, it feels fully yours. The effort you put into it feels valuable and meaningful. Whether you’re building a house to live in it or just cooking an omelet to eat it, your labor is directly reflected in your rewards, and you have a clear sense of purpose and value in what you are doing.

But when you make something for an employer, it feels like theirs, not yours. You have been instructed by your superiors to make a certain thing a certain way, for reasons you may or may not understand (and may or may not even agree with).
Once you deliver the product—which may be as concrete as a carburetor or as abstract as an accounting report—you will likely never see it again; it will be used (or not) by someone else, somewhere else, whom you may never even get the chance to meet. Such labor feels tedious, effortful, exhausting—and also often empty, pointless, and meaningless.

On that reading, Marx isn’t wrong. There really is something to this. (I don’t know if this is really Marx’s intended meaning or not, and really I don’t much care—this is a valid concern and we should be addressing it, whether Marx meant to raise it or not.)

There is a little parable about this, though I can’t quite remember where I heard it: Three men are moving heavy stones from one place to another. A traveler passes by and asks them, “What are you doing?” The first man sighs and says, “We do whatever the boss tells us to do.” The second man shrugs and says, “We pick up the rocks here, we move them over there.” The third man smiles and says, “We’re building a cathedral.”

The three answers are quite different—yet all three men may be telling the truth as they see it. The first man is fully alienated from his labor: he does whatever the boss says, following instructions that he considers arbitrary and mechanical. The second man is partially alienated: he knows the mechanics of what he is trying to accomplish, which may allow him to improve efficiency in some way (e.g. devising better ways to transport the rocks faster or with less effort), but he doesn’t understand the purpose behind it all, so ultimately his work still feels meaningless. But the third man is not alienated: he understands the purpose of his work, and he values that purpose. He sees that what he is doing contributes to a greater whole that he considers worthwhile.

It’s not hard to imagine that the third man will be the happiest, and the first will be the unhappiest.
There really is something about the capitalist wage-labor structure that can easily feed into this sort of alienation. You get a job because you need money to live, not because you necessarily value whatever the job does. You do as you are told so that you can keep your job and continue to get paid.

Some jobs are much more alienating than others. Most teachers and nurses see their work as a vocation, even a calling—their work has deep meaning for them and they value its purpose. At the other extreme are corporate lawyers and derivatives traders, who must on some level understand that their work contributes almost nothing to the world (and may in fact actively cause harm), but who continue to do it because it pays them very well.

But there are many jobs in between, which can be experienced both ways. Working in retail can be an agonizing grind where you must face a grueling gauntlet of ungrateful customers day in and day out—or it can be a way to participate in your local community and help your neighbors get the things they need. Working in manufacturing can be a mechanical process of inserting tab A into slot B and screwing it into place over, and over, and over again—or it can be a chance to create something: to convert raw materials into something useful and valuable that other people can cherish.

And while individual perspective and framing surely matter here—those three men were all working in the same quarry, building the same cathedral—there is an important objective component as well. Working as an artisan is not as alienating as working on an assembly line. Hosting a tent at a farmer’s market is not as alienating as working the register at Walmart. Tutoring an individual student is more purposeful than recording video lectures for a MOOC. Running a quirky local book store is more fulfilling than stocking shelves at Barnes & Noble.

Moreover, capitalism really does seem to push us toward the alienating side of the spectrum.
Assembly lines are far more efficient than artisans, so we make most of our products on assembly lines. Buying food at Walmart is cheaper and more convenient than at farmer’s markets, so more people shop there. Hiring one video lecturer for 10,000 students is a lot cheaper than paying 100 in-person lecturers, let alone 1,000 private tutors. And Barnes & Noble doesn’t drive out local book stores by some nefarious means: It just provides better service at lower prices. If you want a specific book for a good price right now, you’re much more likely to find it at Barnes & Noble. (And even more likely to find it on Amazon.)

Finding meaning in your work is very important for human happiness. Indeed, along with health and social relationships, it’s one of the biggest determinants of happiness. For most people in First World countries, it seems to be more important than income (though income certainly does matter).

Yet the increased efficiency and productivity upon which our modern standard of living depends seems to be based upon a system of production—in a word, capitalism—that systematically alienates us from meaning in our work.

This puts us in a dilemma: Do we keep things as they are, accepting that we will feel an increasing sense of alienation and ennui as our wealth continues to grow and we get ever-fancier toys to occupy our meaningless lives? Or do we turn back the clock, returning to a world where work once again has meaning, but at the cost of making everyone poorer—and some people desperately so?

Well, first of all, to some extent this is a false dichotomy. There are jobs that are highly meaningful but also highly productive, such as teaching and engineering. (Even recording a video lecture is a lot more fulfilling than plenty of jobs out there.) We could try to direct more people into jobs like these. There are jobs that are neither particularly fulfilling nor especially productive, like driving trucks, washing floors and waiting tables.
We could redouble our efforts to automate such jobs out of existence. And there are meaningless jobs that are lucrative only through rent-seeking, producing little or no genuine value, like the aforementioned corporate lawyers and derivatives traders. These, quite frankly, could simply be banned—or, if there is some need for them in particular circumstances (I suppose someone should defend corporations when they get sued; but they far more often go unjustly unpunished than unjustly punished!), strictly regulated, with their numbers and pay rates curtailed.

Nevertheless, we still have decisions to make, as a society, about what we value most. Do we want a world of cheap, mostly adequate education that feels alienating even to the people producing it? Then MOOCs are clearly the way to go: pennies on the dollar for education that could well be half as good! Or do we want a world of high-quality, personalized teaching by highly-qualified academics, which will help students learn better and feel more fulfilling for the teachers? More pointedly—are we willing to pay for that higher-quality education, knowing it will be more expensive?

Moreover, in the First World at least, our standard of living is… pretty high already? Like, seriously, what do we really need that we don’t already have? We could always imagine more, of course—a bigger house, a nicer car, dining at fancier restaurants, and so on. But most of us have roofs over our heads, clothes on our backs, and food on our tables. Economic growth has done amazing things for us—but maybe we’re kind of… done? Maybe we don’t need to keep growing like this, and should start redirecting our efforts away from greater efficiency and toward greater fulfillment. Maybe there are economic possibilities we haven’t been considering.

Note that I specifically mean First World countries here. In Third World countries it’s totally different—they need growth, lots of it, as fast as possible.
Fulfillment at work ends up being a pretty low priority when your children are starving and dying of malaria.

But then, you may wonder: If we stop buying cheap plastic toys to fill the emptiness in our hearts, won’t that throw all those Chinese factory workers back into poverty? In the system as it stands? Yes, that’s a real concern. A sudden drop in consumption spending in general, or even imports in particular, in First World countries could be economically devastating for millions of people in Third World countries.

But there’s nothing inherent about this arrangement. There are less-alienating ways of working that can still provide a decent standard of living, and there’s no fundamental reason why people around the world couldn’t all be doing them. If they aren’t, in the short run it’s because they don’t have the education or the physical machinery—and in the long run it’s usually because their government is corrupt and authoritarian. A functional democratic government can get you capital and education remarkably fast—it certainly did in South Korea, Taiwan, and Japan.

Automation is clearly a big part of the answer here. Many people in the First World seem to suspect that our way of life depends upon the exploited labor of impoverished people in Third World countries, but this is largely untrue. Most of that work could be done by robots and highly-skilled technicians and engineers; it just isn’t, because that would cost more. Yes, that higher cost would mean some reduction in our standard of living—but it wouldn’t be nearly as dramatic as many people seem to think. We would have slightly smaller houses and slightly older cars and slightly slower laptops, but we’d still have houses and cars and laptops.

So I don’t think we should all cast off our worldly possessions just yet. Whether or not it would make us better off, it would cause great harm to countries that depend on their exports to us.
But in the long run, I do think we should be working to achieve a future for humanity that isn’t so obsessed with efficiency and growth, and instead tries to provide both a decent standard of living and a life of meaning and purpose.

# Reversals in progress against poverty

Jan 16 JDN 2459606

I don’t need to tell you that the COVID pandemic has been very bad for the world. Yet perhaps the worst outcome of the pandemic is one that most people don’t recognize: It has reversed years of progress against global poverty.

Estimates of the number of people who will be thrown into extreme poverty as a result of the pandemic are consistently around 100 million, though some forecasts have predicted this will rise to 150 million, or, in the most pessimistic scenarios, even as high as 500 million.

Pre-COVID projections showed the global poverty rate falling steadily from 8.4% in 2019 to 6.3% by 2030. But COVID produced the first upward surge in global poverty in decades, and updated models now suggest that the global poverty rate in 2030 will be as high as 7.0%. That difference is 0.7% of a forecasted population of 8.5 billion—so that’s a difference of some 59 million people.

This is a terrible reversal of fortune, and a global tragedy. Tens or perhaps even hundreds of millions of people will suffer the pain of poverty because of this global pandemic and the numerous missteps by many of the world’s governments—not least the United States—in response to it.

Yet it’s important to keep in mind that this is a short-term reversal in a long-term trend toward reduced poverty. Yes, the most optimistic predictions are turning out to be wrong—but the general pattern of dramatic reductions in global poverty over the late 20th and early 21st centuries is still holding up.
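(For concreteness, that 59-million figure is just the gap between the two poverty-rate forecasts applied to the projected 2030 population:)

```python
# Poverty-gap arithmetic: post-COVID vs pre-COVID forecasts for 2030.
population_2030 = 8.5e9          # projected world population
pre_covid_rate = 0.063           # 6.3% poverty-rate forecast before the pandemic
post_covid_rate = 0.070          # 7.0% forecast after

extra_poor = (post_covid_rate - pre_covid_rate) * population_2030
# ≈ 59.5 million additional people in extreme poverty
```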
That post-COVID estimate of a global poverty rate of 7.0% needs to be compared against the fact that as recently as 1980 the global poverty rate at the same income threshold (adjusted for inflation and purchasing power, of course) was a whopping 44%.

This pattern makes me feel deeply ambivalent about the effects of globalization on inequality. While it now seems clear that globalization has exacerbated inequality within First World countries—and triggered a terrible backlash of right-wing populism as a result—it also seems clear that globalization was a major reason for the dramatic reductions in global poverty of the past few decades.

I think the best answer I’ve been able to come up with is that globalization is overall a good thing, and we must continue it—but we also need to be much more mindful of its costs, and we must make policy that mitigates those costs. Expanded trade has winners and losers, and we should be taxing the winners to compensate the losers. To make good economic policy, it simply isn’t enough to increase aggregate GDP; you actually have to make life better for everyone (or at least as many people as you can).

Unfortunately, knowing what policies to make is only half the battle. We must actually implement those policies, which means winning elections, which means restoring the public’s faith in the authority of economic experts.

Some of the people voting for Donald Trump were just what Hillary Clinton correctly (if tone-deafly) referred to as “deplorables”: racists, misogynists, xenophobes. But I think that many others weren’t so much voting for Trump as against Clinton; they weren’t embracing far-right populism but rejecting center-left technocratic globalization. They were tired of being told what to do by experts who didn’t seem to care about them or their interests.

And the thing is, they were right about that.
Not about voting for Trump—that’s unforgivable—but about the fact that expert elites had been ignoring their interests and needed a wake-up call. There were a hundred better ways of making that wake-up call that didn’t involve putting a narcissistic, incompetent maniac in charge of the world’s largest economy, military, and nuclear arsenal, and millions of people should be ashamed of themselves for not taking those better options. Yet the fact remains: The wake-up call was necessary, and we should be responding to it.

We expert elites (I think I can officially carry that card, now that I have a PhD and a faculty position at a leading research university) need to do a much better job of two things: First, articulating the case for our policy recommendations in a way that ordinary people can understand, so that they feel justified and not simply rammed down people’s throats; and second, recognizing the costs and downsides of these policies and taking action to mitigate them whenever possible.

For instance: Yes, we need to destroy all the coal jobs. They are killing workers and the planet. Coal companies need to be transitioned to new industries or else shut down. This is not optional. It must be done. But we also need to explain to those coal miners why it’s necessary to move on from coal to solar and nuclear, and we need to be implementing various policies to help those workers move on to better, safer jobs that pay as well and don’t involve filling their lungs with soot and the atmosphere with carbon dioxide. We need to articulate, emphasize—and loudly repeat—that this isn’t about hurting coal miners to help everyone else, but about helping everyone, coal miners included; and that if anyone gets hurt, it will only be a handful of psychopathic billionaires who already have more money than any human being could possibly need or deserve.

Another example: We cannot stop trading with India and China.
Hundreds of millions of innocent people would suddenly be thrown out of work and into poverty if we did. We need the products they make for us, and they need the money we pay for those products. But we must also acknowledge that trading with poor countries does put downward pressure on wages back home, and take action to help First World workers who are now forced to compete in global labor markets. Maybe this takes the form of better unemployment benefits, or job-matching programs, or government-sponsored job training. But we cannot simply shrug and let people lose their jobs and their homes because the factories they worked in were moved to China.

# The economics of interstellar travel

Dec 19 JDN 2459568

Since these are rather dark times—the Omicron strain means that COVID is still very much with us, after nearly two years—I thought we could all use something a bit more light-hearted and optimistic.

In 1978 Paul Krugman wrote a paper entitled “The Theory of Interstellar Trade”, which has what is surely one of the greatest abstracts of all time:

> This paper extends interplanetary trade theory to an interstellar setting. It is chiefly concerned with the following question: how should interest charges on goods in transit be computed when the goods travel at close to the speed of light? This is a problem because the time taken in transit will appear less to an observer travelling with the goods than to a stationary observer. A solution is derived from economic theory, and two useless but true theorems are proved.

The rest of the paper is equally delightful, and well worth a read.
Of particular note are these two sentences, which should give you a feel:

“The rest of the paper is, will be, or has been, depending on the reader’s inertial frame, divided into three sections.”

and

“This extension is left as an exercise for interested readers because the author does not understand general relativity, and therefore cannot do it himself.”

As someone with training in both economics and relativistic physics, I can tell you that Krugman’s analysis is entirely valid, given its assumptions. (Really, this is unsurprising: He’s a Nobel Laureate. One could imagine he got his physics wrong, but he didn’t—and of course he didn’t get his economics wrong.)

But, like much high-falutin economic theory, it relies upon assumptions that are unlikely to be true. Set aside the assumptions of perfect competition and unlimited arbitrage that yield Krugman’s key result of equalized interest rates. These are indeed implausible, but they’re also so standard in economics as to be pedestrian. No, what really concerns me is this: Why bother with interstellar trade at all?

Don’t get me wrong: I’m all in favor of interstellar travel and interstellar colonization. I want humanity to expand and explore the galaxy (or rather, I want that to be done by whatever humanity becomes, likely some kind of cybernetically and biogenetically enhanced transhumans in endless varieties we can scarcely imagine). But once we’ve gone through all the effort to spread ourselves to distant stars, it’s not clear to me that we’d ever have much reason to trade across interstellar distances.

If we ever manage to invent efficient, reliable, affordable faster-than-light (FTL) travel à la Star Trek, sure. In that case, there’s no fundamental difference between interstellar trade and any other kind of trade.

But that’s not what Krugman’s paper is about, as its key theorems are actually about interest rates and prices in different inertial reference frames, which is only relevant if you’re limited to relativistic—that is, slower-than-light—velocities.

Moreover, as far as we can tell, that’s impossible. Yes, there are still some vague slivers of hope left with the Alcubierre Drive, wormholes, etc.; but by far the most likely scenario is that FTL travel is simply impossible and always will be.

FTL communication is much more plausible, as it merely requires the exploitation of nonlocal quantum entanglement outside quantum equilibrium; if the Bohm Interpretation is correct (as I strongly believe it is), then this is a technological problem rather than a theoretical one. At best this might one day lead to some form of nonlocal teleportation—but definitely not FTL starships. Since our souls are made of software, sending information can, in principle, send a person; but we almost surely won’t be sending mass faster than light.

So let’s assume, as Krugman did, that we will be limited to travel close to, but less than, the speed of light. (I recently picked up a term for this from Ursula K. Le Guin: “NAFAL”, “nearly-as-fast-as-light”.) This means that any transfer of material from one star system to another will take, at minimum, years. It could even be decades or centuries, depending on how close to the speed of light we are able to get.

Assuming we have abundant antimatter or some similarly extremely energy-dense propulsion, it would be reasonable to expect that we could build interstellar spacecraft that would be capable of accelerating at approximately Earth gravity (i.e. 1 g) for several years at a time. This would be quite comfortable for the crew of the ship—it would just feel like standing on Earth. And it turns out that this is sufficient to attain velocities quite close to the speed of light over the distances to nearby stars.

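As a check on that claim, here is a minimal numeric sketch (in Python, which is my addition, not part of the original post) using the standard special-relativity results for constant proper acceleration, applied to a flip-and-burn trajectory: accelerate at 1 g for the first half of the trip, decelerate for the second half.

```python
import math

G = 1.03  # 1 g expressed in ly/yr^2 with c = 1 (9.8 m/s^2 * 1 yr / c is about 1.03)

def flip_and_burn(distance_ly, a=G):
    """Travel times under constant proper acceleration `a`: speed up for
    the first half of the trip, slow down for the second half.
    Returns (Earth-frame years, ship-frame years)."""
    half = distance_ly / 2
    # For constant proper acceleration from rest (units with c = 1):
    #   cosh(a * tau) = 1 + a * x   (tau = proper time to cover distance x)
    #   t = sinh(a * tau) / a       (t = coordinate time)
    tau_half = math.acosh(1 + a * half) / a
    t_half = math.sinh(a * tau_half) / a
    return 2 * t_half, 2 * tau_half

for d in (4.2, 10.5, 50.0, 100.0):  # Proxima, Epsilon Eridani, farther out
    earth, ship = flip_and_burn(d)
    print(f"{d:6.1f} ly: {earth:6.1f} yr (Earth), {ship:5.1f} yr (ship)")
```

Under these assumptions, Epsilon Eridani at 10.5 light-years takes roughly 12 years of Earth time but only about 5 years of ship time; even a 100 light-year trip costs the crew less than a decade of subjective time, though a century passes back home.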
I will spare you the complicated derivation, but there are well-known equations which allow us to convert from proper acceleration (the acceleration felt on a spacecraft, i.e. 1 g in this case) to maximum velocity and total travel time, and they imply that a vessel which was constantly accelerating at 1 g (speeding up for the first half, then slowing down for the second half) could reach most nearby stars within about 50 to 100 years Earth time, or as little as 10 to 20 years ship time.

With higher levels of acceleration, you can shorten the trip; but that would require designing ships (or engineering crews?) in such a way as to sustain these high levels of acceleration for years at a time. Humans can sustain 3 g’s for hours, but not for years.

Even with only 1-g acceleration, the fuel costs for such a trip are staggering: Even with antimatter fuel you need dozens or hundreds of times as much mass in fuel as you have in payload—and with anything less than antimatter it’s basically just not possible. Yet there is nothing in the laws of physics saying you can’t do it, and I believe that someday we will.

Yet I sincerely doubt we would want to make such trips often. It’s one thing to send occasional waves of colonists, perhaps one each generation. It’s quite another to establish real two-way trade in goods.

Imagine placing an order for something—anything—and not receiving it for another 50 years. Even if, as I hope and believe, our descendants have attained far longer lifespans than we have, asymptotically approaching immortality, it seems unlikely that they’d be willing to wait decades for their shipments to arrive. In the same amount of time you could establish an entire industry in your own star system, built from the ground up, fully scaled to service entire planets.

In order to justify such a transit, you need to be carrying something truly impossible to produce locally. And there just won’t be very many such things.

People, yes.
Definitely in the first wave of colonization, but likely in later waves as well, people will want to move themselves and their families across star systems, and will be willing to wait (especially since the time they experience on the ship won’t be nearly as daunting).

And there will be knowledge and experiences that are unique to particular star systems—but we’ll be sending that by radio signal and it will only take as many years as there are light-years between us; or we may even manage to figure out FTL ansibles and send it even faster than that.

It’s difficult for me to imagine what sort of goods could ever be so precious, so irreplaceable, that it would actually make sense to trade them across an interstellar distance. All habitable planets are likely to be made of essentially the same elements, in approximately the same proportions; whatever you may want, it’s almost certainly going to be easier to get it locally than it would be to buy it from another star system.

This is also why I think alien invasion is unlikely: There’s nothing they would particularly want from us that they couldn’t get more easily. Their most likely reason for invading would be specifically to conquer and rule us.

Certainly if you want gold or neodymium or deuterium, it’ll be thousands of times easier to get it at home. But even if you want something hard to make, like antimatter, or something organic and unique, like oregano, building up the industry to manufacture a product or the agriculture to grow a living organism is almost certainly going to be faster and easier than buying it from another solar system.

This is why I believe that for the first generation of interstellar colonists, imports will be textbooks, blueprints, and schematics to help build, and films, games, and songs to stay entertained and tied to home; exports will consist of scientific data about the new planet as well as artistic depictions of life on an alien world.

For later generations, it won’t be so lopsided: The colonies will have new ideas in science and engineering as well as new art forms to share. Billions of people on Earth and thousands or millions on each colony world will await each new transmission of knowledge and art with bated breath.

Long-distance trade historically was mainly conducted via precious metals such as gold; but if interstellar travel is feasible, gold is going to be dirt cheap. Any civilization capable of even sending a small intrepid crew of colonists to Epsilon Eridani is going to consider mining asteroids an utterly trivial task.

Will such transactions involve money? Will we sell these ideas, or simply give them away? Unlike my previous post where I focused on the local economy, here I find myself agreeing with Star Trek: Money isn’t going to make sense for interstellar travel.

Unless we have very fast communication, the time lag between paying money out and then seeing it circulate back will be so long that the money returned to you will be basically worthless. And that’s assuming you figure out a way to make transactions clear that doesn’t require real-time authentication—because you won’t have it.

Consider Epsilon Eridani, a plausible choice for one of the first star systems we will colonize. That’s 10.5 light-years away, so a round-trip signal will take 21 years. If inflation is a steady 2%, that means that $100 today will need to come back as $151 to have the same value by the time you hear back from your transaction. If you had the option to invest in a 5% bond instead, you’d have $279 by then. And this is a nearby star.
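That calculation is just compound growth; here is the arithmetic spelled out (a quick Python sketch of mine, not from the original post):

```python
def future_value(present, rate, years):
    """What `present` dollars must grow to after `years` at a given annual
    rate, just to hold the same real value (or to match a bond's payout)."""
    return present * (1 + rate) ** years

ROUND_TRIP = 21  # years for a signal to reach Epsilon Eridani (10.5 ly) and return

print(future_value(100, 0.02, ROUND_TRIP))  # break-even under 2% inflation
print(future_value(100, 0.05, ROUND_TRIP))  # what a 5% bond pays over the same span
```

The first figure comes out just under $152 and the second just under $279, matching the numbers in the post (give or take rounding).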

It would be much easier to simply trade data for data, maybe just gigabyte for gigabyte or maybe by some more sophisticated notion of relative prices. You don’t need to worry about what your dollar will be worth 20 years from now; you know how much effort went into designing that blueprint for an antimatter processor and you know how much you’ll appreciate seeing that VR documentary on the rings of Aegir. You may even have in mind how much it cost you to pay people to design prototypes and how much you can sell the documentary for; but those monetary transactions will be conducted within your own star system, independently of whatever monetary system prevails on other stars.

Indeed, it’s likely that we wouldn’t even bother trying to negotiate how much to send—because that itself would have such overhead and face the same time-lags—and would instead simply make a habit of sending everything we possibly can. Such interchanges could be managed by governments at each end, supported by public endowments. “This year’s content from Epsilon Eridani, brought to you by the Smithsonian Institution.”

We probably won’t ever have—or need, or want—huge freighter ships carrying containers of goods from star to star. But with any luck, we will one day have art and ideas from across the galaxy shared by all of the endless variety of beings humanity has become.

# Are unions collusion?

Oct 31 JDN 2459519

The standard argument from center-right economists against labor unions is that they are a form of collusion: Producers are coordinating and intentionally holding back from what would be in their individual self-interest in order to gain a collective advantage. And this is basically true: In the broadest sense of the term, labor unions are a form of collusion. Since collusion is generally regarded as bad, therefore (this argument goes), unions are bad.

What this argument misses out on is why collusion is generally regarded as bad. The typical case for collusion is between large corporations, each of which already controls a large share of the market—collusion then allows them to act as if they control an even larger share, potentially even acting as a monopoly.

Labor unions are not like this. Literally no individual laborer controls a large segment of the market. (Some very specialized laborers, like professional athletes, or, say, economists, might control a not completely trivial segment of their particular job market—but we’re still talking something like 1% at most. Even Tiger Woods or Paul Krugman is not literally irreplaceable.) Moreover, even the largest unions can rarely achieve anything like a monopoly over a particular labor market.

Thus whereas typical collusion involves going from a large market share to an even larger—often even dominant—market share, labor unions involve going from a tiny market share to a moderate—and usually not dominant—market share.

But that, by itself, wouldn’t be enough to justify unions. While small family businesses banding together in collusion is surely less harmful than large corporations doing the same, it would probably still be a bad thing, insofar as it would raise prices and reduce the quantity or quality of products sold. It would just be less bad.

Yet unions differ from even this milder collusion in another important respect: They do not exist to increase bargaining power versus consumers. They exist to increase bargaining power versus corporations.

And corporations, it turns out, already have a great deal of bargaining power. While a labor union acts as something like a monopoly (or at least oligopoly), corporations act like the opposite: oligopsony or even monopsony.

While monopoly or monopsony on its own is highly unfair and inefficient, the combination of the two—bilateral monopoly—is actually relatively fair and efficient. Bilateral monopoly is probably not as good as a truly competitive market, but it is definitely better than either a monopoly or monopsony alone. Whereas a monopoly has too much bargaining power for the seller (resulting in prices that are too high), and a monopsony has too much bargaining power for the buyer (resulting in prices that are too low), a bilateral monopoly has relatively balanced bargaining power, and thus gets an outcome that’s not too much different from fair competition in a free market.
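To make the intuition concrete, here is a stylized numeric example (my own illustration in Python; the specific supply and demand curves are invented for the purpose) comparing wage outcomes under competition, monopsony, union monopoly, and bilateral monopoly:

```python
# Stylized linear labor market (all numbers hypothetical):
#   labor demand (marginal revenue product): w = 100 - L
#   labor supply (workers' reservation wage): w = 20 + L

def competitive():
    # Supply meets demand: 100 - L = 20 + L  =>  L = 40, w = 60
    L = 40.0
    return L, 100 - L

def monopsony():
    # A single employer equates the *marginal* cost of labor (20 + 2L)
    # to the marginal revenue product (100 - L): employment falls,
    # and the wage is read off the supply curve.
    L = 80 / 3
    return L, 20 + L

def union_monopoly():
    # A union facing many employers equates its marginal wage revenue
    # (100 - 2L) to the supply price (20 + L): employment also falls,
    # but the wage is read off the demand curve instead.
    L = 80 / 3
    return L, 100 - L

w_comp = competitive()[1]        # 60
w_low = monopsony()[1]           # about 46.7
w_high = union_monopoly()[1]     # about 73.3
print(f"competitive wage:  {w_comp:.1f}")
print(f"monopsony wage:    {w_low:.1f}")
print(f"union wage:        {w_high:.1f}")
# Under bilateral monopoly the bargained wage lands between the monopsony
# and union wages; a symmetric split recovers the competitive wage.
print(f"symmetric bargain: {(w_low + w_high) / 2:.1f}")
```

The point of the example: the union wage alone overshoots the competitive benchmark, but pitted against a monopsonist, bargaining pulls the wage back toward what a competitive market would have produced.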

Thus, unions really exist as a correction mechanism for the excessive bargaining power of corporations. Most unions are between workers in large industries who work for a relatively small number of employers, such as miners, truckers, and factory workers. (Teachers are also an interesting example, because they work for the government, which effectively has a monopsony on public education services.) In isolation they may seem inefficient; but in context they really exist to compensate for other, worse inefficiencies.

We could imagine a world where this was not so: Say there is a market with many independent buyers who are unwilling or unable to reliably collude, and they are served by a small number of powerful unions that use their bargaining power to raise prices and reduce output.

We have some markets that already look a bit like that: Consider the licensing systems for doctors and lawyers. These are basically guilds, which are collusive in the same way as labor unions.

Note that unlike, say, miners, truckers, or factory workers, doctors and lawyers are not a large segment of the population; they are bargaining against consumers just as much as corporations; and they are extremely well-paid and very likely undersupplied. (Doctors are definitely undersupplied; with lawyers it’s a bit more complicated, but given how often corporations get away with terrible things and don’t get sued for it, I think it’s fair to say that in the current system, lawyers are undersupplied.) So I think it is fair to be concerned that the guild systems for doctors and lawyers are too powerful. We want some system for certifying the quality of doctors and lawyers, but the existing standards are so demanding that they result in a shortage of much-needed labor.

One way to tell that unions aren’t inefficient is to look at how unionization relates to unemployment. If unions were acting as a harmful monopoly on labor, unemployment should be higher in places with greater unionization rates. The empirical data suggests that if there is any such effect, it’s a small one. There are far more important determinants of unemployment than unionization. (Wages, on the other hand, show a strong positive link with unionization.) Much like the standard prediction that raising minimum wage would reduce employment, the prediction that unions raise unemployment has largely not been borne out by the data. And for much the same reason: We had ignored the bargaining power of employers, which minimum wage and unions both reduce.

Thus, the justifiability of unions isn’t something that we could infer a priori without looking at the actual structure of the labor market. Unions aren’t always or inherently good—but they are usually good in the system as it stands. (Actually there’s one particular class of unions that do not seem to be good, and that’s police unions: But this is a topic for another time.)

My ultimate conclusion? Yes, unions are a form of collusion. But to infer from this that they must be bad is to commit a Noncentral Fallacy. Unions are the good kind of collusion.

# When to give up

Jun 6 JDN 2459372

Perseverance is widely regarded as a virtue, and for good reason. Often one of the most important deciding factors in success is the capacity to keep trying after repeated failure. I think this has been a major barrier for me personally; many things came easily to me when I was young, and I internalized the sense that if something doesn’t come easily, it must be beyond my reach.

Yet it’s also worth noting that this is not the only deciding factor—some things really are beyond our capabilities. Indeed, some things are outright impossible. And we often don’t know what is possible and what isn’t.

This raises the question: When should we persevere, and when should we give up?

There is actually reason to think that people often don’t give up when they should. Steven Levitt (of Freakonomics fame) recently published a study that asked people who were on the verge of a difficult decision to flip a coin, and then base their decision on the coin flip: Heads, make a change; tails, keep things as they are. Many didn’t actually follow the coin flip—but enough did that there was a statistical difference between those who saw heads and those who saw tails. The study found that the people who flipped heads and made a change were on average happier a couple of years later than the people who flipped tails and kept things as they were.

This question is particularly salient for me lately, because the academic job market has gone so poorly for me. I’ve spent most of my life believing that academia is where I belong; my intellect and my passion for teaching and research have convinced me and many others that this is the right path for me. But now that I have a taste of what it is actually like to apply for tenure-track jobs and submit papers to journals, I am utterly miserable. I hate every minute of it. I’ve spent the entire past year depressed and feeling like I have accomplished absolutely nothing.

In theory, once one actually gets tenure it’s supposed to get easier. But that could be a long way away—or it might never happen at all. As it is, there’s basically no chance I’ll get a tenure track position this year, and it’s unclear what my chances would be if I tried again next year.

If I could actually get a paper published, that would no doubt improve my odds of landing a better job next year. But I haven’t been able to do that, and each new rejection cuts so deep that I can barely stand to look at my papers anymore, much less actually continue submitting them. And apparently even tenured professors still get their papers rejected repeatedly, which means that this pain will never go away. I simply cannot imagine being happy if this is what I am expected to do for the rest of my life.

I found this list of criteria for when you should give up something—and most of them fit me. I’m not sure I know in my heart it can’t work out, but I increasingly suspect that. I’m not sure I want it anymore, now that I have a better idea of what it’s really like. Pursuing it is definitely making me utterly miserable. I wouldn’t say it’s the only reason, but I definitely do worry what other people will think if I quit; I feel like I’d be letting a lot of people down. I also wonder who I am without it, where I belong if not here. I don’t know what other paths are out there, but maybe there is something better. This constant stream of failure and rejection has definitely made me feel like I hate myself. And above all, when I imagine quitting, I absolutely feel an enormous sense of relief.

Publishing in journals seems to be the thing that successful academics care about most, and it means almost nothing to me anymore. I only want it because of all the pressure to have it, because of all the rewards that come from having it. It has become fully instrumental to me, with no intrinsic meaning or value. I have no particular desire to be lauded by the same system that lauded Fischer Black or Kenneth Rogoff—both of whose egregious and easily-avoidable mistakes are responsible for the suffering of millions of people around the world.

I want people to read my ideas. But people don’t actually read journals. They skim them. They read the abstracts. They look at the graphs and regression tables. (You have the meeting that should have been an email? I raise you the paper that should have been a regression table.) They see if there’s something in there that they should be citing for their own work, and if there is, maybe then they actually read the paper—but everyone is so hyper-specialized that only a handful of people will ever actually want to cite any given paper. The vast majority of research papers are incredibly tedious to read and very few people actually bother. As a method for disseminating ideas, this is perhaps slightly better than standing on a street corner and shouting into a megaphone.

I would much rather write books; people sometimes actually read books, especially when they are written for a wide audience and hence not forced into the straitjacket of standard ‘scientific writing’ that no human being actually gets any enjoyment out of writing or reading. I’ve seen a pretty clear improvement in writing quality of papers written by Nobel laureates—after they get their Nobels or similar accolades. Once they establish themselves, they are free to actually write in ways that are compelling and interesting, rather than having to present everything in the most dry, tedious way possible. If your paper reads like something that a normal person would actually find interesting or enjoyable to read, you will be—as I have been—immediately told that you must remove all such dangerous flavor until the result is as tasteless as possible.

No, the purpose of research journals is not to share ideas. Their function is not to share, but to evaluate. And it isn’t even really to evaluate research—it’s to evaluate researchers. It’s to outsource the efforts of academic hiring to an utterly unaccountable and arbitrary system run mostly by for-profit corporations. It may have some secondary effect of evaluating ideas for validity; at least the really awful ideas are usually excluded. But its primary function is to decide the academic pecking order.

I had thought that scientific peer review was supposed to select for truth. Perhaps sometimes it does. It seems to do so reasonably well in the natural sciences, at least. But in the social sciences? That’s far less clear. Peer-reviewed papers are much more likely to be accurate than any randomly-selected content; but there are still a disturbingly large number of peer-reviewed published papers that are utterly wrong, and some unknown but undoubtedly vast number of good papers that have never seen the light of day.

Then again, when I imagine giving up on an academic career, I don’t just feel relief—I also feel regret and loss. I feel like I’ve wasted years of my life putting together a dream that has now crumbled in my hands. I even feel some anger, some sense that I was betrayed by those who told me that this was about doing good research when it turns out it’s actually about being thick-skinned enough that you can take an endless assault of rejections. It feels like I’ve been running a marathon, and I just rounded a curve to discover that the last five miles must be ridden on horseback, when I don’t have a horse, I have no equestrian training, and in fact I’m allergic to horses.

I wish someone had told me it would be like this. Maybe they tried and I didn’t listen. They did say that papers would get rejected. They did say that the tenure track was high-pressure and publish-or-perish was a major source of anxiety. But they never said that it would tear at my soul like this. They never said that I would have to go through multiple rounds of agony, self-doubt, and despair in order to get even the slightest recognition for my years of work. They never said that the whole field would treat me like I’m worthless because I can’t satisfy the arbitrary demands of a handful of anonymous reviewers. They never said that I would begin to feel worthless after several rounds of this.

That’s really what I want to give up on. I want to give up on hitching my financial security, my career, my future, my self-worth to a system as capricious as peer review.

I don’t want to give up on research. I don’t want to give up on teaching. I still believe strongly in discovering new truths and sharing them with others. I’m just increasingly realizing that academia isn’t nearly as good at that as I thought it was.

It isn’t even that I think it’s impossible for me to succeed in academia. I think that if I continued trying to get a tenure-track job, I would land one eventually. Maybe next year. Or maybe I’d spend a few years at a postdoc first. And I’d probably manage to publish some paper in some reasonably respectable journal at some point in the future. But I don’t know how long it would take, or how good a journal it would be—and I’m already past the point where I really don’t care anymore, where I can’t afford to care, where if I really allowed myself to care it would only devastate me when I inevitably fail again. Now that I see what is really involved in the process, how arduous and arbitrary it is, publishing in a journal means almost nothing to me. I want to be validated; I want to be appreciated; I want to be recognized. But the system is set up to provide nothing but rejection, rejection, rejection. If even the best work won’t be recognized immediately and even the worst work can make it with enough tries, then the whole system begins to seem meaningless. It’s just rolls of the dice. And I didn’t sign up to be a gambler.

The job market will probably be better next year than it was this year. But how much better? Yes, there will be more openings, but there will also be more applicants: Everyone who would normally be on the market, plus everyone like me who didn’t make it this year, plus everyone who decided to hold back this year because they knew they wouldn’t make it (as I probably should have done). Yes, in a normal year, I could be fairly confident of getting some reasonably decent position—but this wasn’t a normal year, and next year won’t be one either, and the one after that might still not be. If I can’t get a paper published in a good journal between now and then—and I’m increasingly convinced that I can’t—then I really can’t expect my odds to be greatly improved from what they were this time around. And if I don’t know that this terrible gauntlet is going to lead to something good, I’d really much rather avoid it altogether. It was miserable enough when I went into it being (over)confident that it would work out all right.

Perhaps the most important question when deciding whether to give up is this: What will happen if you do? What alternatives do you have? If giving up means dying, then don’t give up. (“Learn to let go” is very bad advice to someone hanging from the edge of a cliff.) But while it may feel that way sometimes, rarely does giving up on a career or a relationship or a project yield such catastrophic results.

When people are on the fence about making a change and then do so, even based on the flip of a coin, it usually makes them better off. Note that this is different from saying you should make all your decisions randomly; if you are confident that you don’t want to make a change, don’t make a change. This advice is for people who feel like they want a change but are afraid to take the chance, people who find themselves ambivalent about what direction to go next—people like me.

I don’t know where I should go next. I don’t know where I belong. I know it isn’t Wall Street. I’m pretty sure it’s not consulting. Maybe it’s nonprofits. Maybe it’s government. Maybe it’s freelance writing. Maybe it’s starting my own business. I guess I’d still consider working in academia; if Purdue called me back to say they made a terrible mistake and they want me after all, I’d probably take the offer. But since such an outcome is now vanishingly unlikely, perhaps it’s time, after all, to give up.

# Social science is broken. Can we fix it?

May 16 JDN 2459349

Social science is broken. I am of course not the first to say so. The Atlantic recently published an article outlining the sorry state of scientific publishing, and several years ago Slate Star Codex published a lengthy post (with somewhat harsher language than I generally use on this blog) showing how parapsychology, despite being obviously false, can still meet the standards that most social science is expected to meet. I myself discussed the replication crisis in social science on this very blog a few years back.
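The parapsychology point is worth making concrete: under the conventional p < 0.05 threshold, studies of effects that do not exist at all will still come out “significant” about 5% of the time, so a field that runs enough studies (and publishes only the hits) will never lack for positive results. A quick simulation (mine, in Python; not from any of the posts linked above):

```python
import random

random.seed(12345)  # arbitrary seed, for reproducibility

def null_experiment(n=100):
    """Compare two groups drawn from the SAME distribution, so the true
    effect is exactly zero. Returns True if the difference in means is
    'significant' at the conventional p < 0.05 level."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5  # standard error of the difference (sigma = 1, known)
    return abs(diff / se) > 1.96

trials = 10_000
hits = sum(null_experiment() for _ in range(trials))
print(f"{hits / trials:.1%} of null studies came out 'statistically significant'")
# At a ~5% false-positive rate, a researcher who quietly runs twenty null
# studies and reports only the one that "worked" has a publishable finding
# built from pure noise.
```

The simulated rate hovers around 5%, exactly as the threshold guarantees; the breakage is not in the statistics but in the incentives around which results get run, reported, and published.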

I was pessimistic then about the incentives of scientific publishing being fixed any time soon, and I am even more pessimistic now.

Back then I noted that journals are often run by for-profit corporations that care more about getting attention than getting the facts right, university administrations are incompetent and top-heavy, and publish-or-perish creates cutthroat competition without providing incentives for genuinely rigorous research. But these are widely known facts, even if so few in the scientific community seem willing to face up to them.

Now I am increasingly concerned that the reason we aren’t fixing this system is that the people with the most power to fix it don’t want to. (Indeed, as I have learned more about political economy I have come to believe this more and more about all the broken institutions in the world. American democracy has its deep flaws because politicians like it that way. China’s government is corrupt because that corruption is profitable for many of China’s leaders. Et cetera.)

I know economics best, so that is where I will focus; but most of what I’m saying here would also apply to other social sciences such as sociology and psychology. (Indeed it was psychology that published Daryl Bem.)

Reinhart and Rogoff’s 2010 article “Growth in a Time of Debt”, which was a weak correlation-based argument to begin with, was later revealed (by an intrepid grad student! His name is Thomas Herndon.) to be based upon deep, fundamental errors. Yet the article remains published, without any notice of retraction or correction, in the American Economic Review, probably the most prestigious journal in economics (and undeniably in the vaunted “Top Five”). And the paper itself was widely used by governments around the world to justify massive austerity policies—which backfired with catastrophic consequences.

Why wouldn’t the AER remove the article from their website? Or issue a retraction? Or at least add a note on the page explaining the errors? If their primary concern were scientific truth, they would have done something like this. Their failure to do so is a silence that speaks volumes, a hound that didn’t bark in the night.

It’s rational, if incredibly selfish, for Rogoff and Reinhart themselves to not want a retraction. It was one of their most widely-cited papers. But why wouldn’t AER’s editors want to retract a paper that had been so embarrassingly debunked?

And so I came to realize: These are all people who have succeeded in the current system. Their work is valued, respected, and supported by the system of scientific publishing as it stands. If we were to radically change that system, as we would necessarily have to do in order to re-align incentives toward scientific truth, they would stand to lose, because they would suddenly be competing against other people who are not as good at satisfying the magical 0.05, but are in fact at least as good—perhaps even better—actual scientists than they are.

I know how they would respond to this criticism: I’m someone who hasn’t succeeded in the current system, so I’m biased against it. This is true, to some extent. Indeed, I take it quite seriously, because while tenured professors stand to lose prestige, they can’t really lose their jobs even if there is a sudden flood of far superior research. So in direct economic terms, we would expect the bias against the current system among grad students, adjuncts, and assistant professors to be larger than the bias in favor of the current system among tenured professors and prestigious researchers.

Yet there are other motives aside from money: Norms and social status are among the most powerful motivations human beings have, and these biases are far stronger in favor of the current system—even among grad students and junior faculty. Grad school is many things, some good, some bad; but one of them is a ritual gauntlet that indoctrinates you into the belief that working in academia is the One True Path, without which your life is a failure. If your claim is that grad students are upset at the current system because we overestimate our own qualifications and are feeling sour grapes, you need to explain the prevalence of Impostor Syndrome among us. By and large, grad students don’t overestimate our abilities—we underestimate them. If we think we’re as good at this as you are, that probably means we’re better. Indeed I have little doubt that Thomas Herndon is a better economist than Kenneth Rogoff will ever be.

I have additional evidence that insider bias is important here: When Paul Romer—Nobel laureate—left academia, he published an utterly scathing criticism of the state of academic macroeconomics. That is, once he had escaped the incentives toward insider bias, he turned against the entire field.

Romer pulls absolutely no punches: He literally compares the standard methods of DSGE models to “phlogiston” and “gremlins”. And the paper is worth reading, because it’s obviously entirely correct: every single punch lands on target. It’s also a pretty fun read, at least if you have the background knowledge to appreciate the dry in-jokes. (Much like “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity.” I still laugh out loud every time I read the phrase “hegemonic Zermelo-Fraenkel axioms”, though I realize most people would be utterly nonplussed. For the uninitiated, these are the Zermelo-Fraenkel axioms. Can’t you just see the colonialist imperialism in sentences like “\forall x \, \forall y \, ((\forall z \, (z \in x \iff z \in y)) \implies x = y)”?)

In other words, the Upton Sinclair Principle seems to be applying here: “It is difficult to get a man to understand something when his salary depends upon not understanding it.” The people with the most power to change the system of scientific publishing are journal editors and prestigious researchers, and they are the people for whom the current system is running quite swimmingly.

It’s not that good science can’t succeed in the current system—it often does. In fact, I’m willing to grant that it almost always does, eventually. When the evidence has mounted for long enough and the most adamant of the ancien régime finally retire or die, then, at last, the paradigm will shift. But this process takes literally decades longer than it should. In principle, a wrong theory can be invalidated by a single rigorous experiment. In practice, it generally takes about 30 years of experiments, most of which don’t get published, until the powers that be finally give in.

This delay has serious consequences. It means that many of the researchers working on the forefront of a new paradigm—precisely the people that the scientific community ought to be supporting most—will suffer from being unable to publish their work, get grant funding, or even get hired in the first place. It means that not only will good science take too long to win, but that much good science will never get done at all, because the people who wanted to do it couldn’t find the support they needed to do so. This means that the delay is in fact much longer than it appears: Because it took 30 years for one good idea to take hold, all the other good ideas that would have sprung from it in that time will be lost, at least until someone in the future comes up with them.

I don’t think I’ll ever forget it: At the AEA conference a few years back, I went to a luncheon celebrating Richard Thaler, one of the founders of behavioral economics, whom I regard as one of the top 5 greatest economists of the 20th century (I’m thinking something like, “Keynes > Nash > Thaler > Ramsey > Schelling”). Yes, now he is being rightfully recognized for his seminal work; he won a Nobel, and he has an endowed chair at Chicago, and he got an AEA luncheon in his honor among many other accolades. But it was not always so. Someone speaking at the luncheon offhandedly remarked something like, “Did we think Richard would win a Nobel? Honestly most of us weren’t sure he’d get tenure.” Most of the room laughed; I had to resist the urge to scream. If Richard Thaler wasn’t certain to get tenure, then the entire system is broken. This would be like finding out that Erwin Schrödinger or Niels Bohr wasn’t sure he would get tenure in physics.

A. Gary Shilling, a renowned Wall Street economist (read: One Who Has Turned to the Dark Side), once remarked (the quote is often falsely attributed to Keynes): “Markets can remain irrational a lot longer than you and I can remain solvent.” In the same spirit, I would say this: the scientific community can remain wrong a lot longer than you and I can extend our graduate fellowships and tenure clocks.

# Economic Possibilities for Ourselves

May 2 JDN 2459335

In 1930, John Maynard Keynes wrote one of the greatest essays ever written on economics, “Economic Possibilities for our Grandchildren.” You can read it here.

In that essay he wrote:

“I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is.”

US population in 1930: 122 million; US real GDP in 1930: $1.1 trillion. Per-capita GDP: $9,000.

US population in 2020: 329 million; US real GDP in 2020: $18.4 trillion. Per-capita GDP: $56,000.

That’s a factor of 6. Keynes said 4 to 8; that makes his estimate almost perfect. We aren’t just inside his error bar, we’re in the center of it. If anything he was under-confident. Of course we still have 10 years left before a full century has passed: At a growth rate of 1% in per-capita GDP, that will make the ratio closer to 7—still well within his confidence interval.
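Keynes’s arithmetic is easy to verify from the rounded figures above; here is a minimal sketch (the 90-year window and the rounded population and GDP figures are the ones quoted in the text):

```python
# A quick check of Keynes's prediction against the rounded figures above.
gdp_pc_1930 = 1.1e12 / 122e6   # about $9,000 per person
gdp_pc_2020 = 18.4e12 / 329e6  # about $56,000 per person

ratio = gdp_pc_2020 / gdp_pc_1930   # growth factor: inside Keynes's 4-to-8 range
growth = ratio ** (1 / 90) - 1      # implied average annual per-capita growth

print(round(ratio, 1))         # 6.2
print(round(100 * growth, 2))  # 2.05 (percent per year)
```

A sustained 2% annual growth rate in per-capita terms is all it takes to land squarely in the middle of Keynes’s predicted range.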

I’d like to take a moment to marvel at how good this estimate is. Keynes predicted the growth rate of the entire US economy one hundred years in the future to within plus or minus 30%, and got it right.

With this in mind, it’s quite astonishing what Keynes got wrong in his essay.

The point of the essay is that what Keynes calls “the economic problem” will soon be solved. By “the economic problem”, he means the scarcity of resources that makes it impossible for everyone in the world to make a decent living. Keynes predicts that by 2030—so just a few years from now—humanity will have effectively solved this problem, and we will live in a world where everyone can live comfortably with adequate basic necessities like shelter, food, water, clothing, and medicine.

He laments that with the dramatically higher productivity that technological advancement brings, we will be thrust into a life of leisure that we are unprepared to handle. Evolved for a world of scarcity, we built our culture around scarcity, and we may not know what to do with ourselves in a world of abundance.

Keynes sounds his most naive when he imagines that we would spread out our work over more workers each with fewer hours:

“For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich today, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter—to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!”

Plainly that is nothing like what happened. Americans do on average work fewer hours today than we did in the past, but not by anything like this much: average annual hours fell from about 1,900 in 1950 to about 1,700 today. Where Keynes was predicting a drop of 60%, the actual drop was only about 10%.

Here’s another change Keynes predicted that I wish we’d made, but we certainly haven’t:

“When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession—as distinguished from the love of money as a means to the enjoyments and realities of life—will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease.”

Sadly, people still idolize Jeff Bezos and Elon Musk just as much as their forebears idolized Henry Ford or Andrew Carnegie. And really there’s nothing semi- about it: The acquisition of billions of dollars by exploiting others is clearly indicative of narcissism if not psychopathy.

It’s not that we couldn’t have made the world that Keynes imagined. There’s plenty of stuff—his forecast for our per-capita GDP was impeccable. But when we automated away all of the most important work, Keynes thought we would turn to lives of leisure, exploring art, music, literature, film, games, sports. But instead we did something he did not anticipate: We invented new kinds of work.

This would be fine if the new work we invented were genuinely productive; and some of it is, no doubt. Keynes could not have anticipated the emergence of 3D graphics designers, smartphone engineers, or web developers, but these jobs do genuinely productive and beneficial work that makes use of our extraordinary new technologies.

But think for a moment about Facebook and Google, now two of the world’s largest and most powerful corporations. What do they sell? Think carefully! Facebook doesn’t sell social media. Google doesn’t sell search algorithms. Those are services they provide as platforms for what they actually sell: Advertising.

That is, some of the most profitable, powerful corporations in the world today make essentially all of their revenue from trying to persuade people to buy things they don’t actually need. The actual benefits they provide to humanity are sort of incidental; they exist to provide an incentive to look at the ads.

Paul Krugman often talks about Solow’s famous remark that “you can see the computer age everywhere but in the productivity statistics”; aggregate productivity growth has, if anything, been slower in the last 40 years than in the previous 40.

But this aggregate is a very foolish measure. It’s averaging together all sorts of work into one big lump.

If you look specifically at manufacturing output per worker—the sort of thing you’d actually expect to increase due to automation—it has in fact increased, at breakneck speed: The average American worker produced four times as much output per hour in 2000 as in 1950.

The problem is that instead of splitting up the manufacturing work to give people free time, we moved them all into services—which have not meaningfully increased their productivity in the same period. The average growth rate in multifactor productivity in the service industries since the 1970s has been a measly 0.2% per year, meaning that our total output per worker in service industries is only 10% higher than it was in 1970.

While our population is more than double what it was in 1950, our total manufacturing employment is now less than it was in 1950. Our employment in services is four times what it was in 1950. We moved everyone out of the sector that actually got more productive and stuffed them into the sector that didn’t.

This is why the productivity statistics are misleading. Suppose we had 100 workers, and 2 industries.

Initially, in manufacturing, each worker can produce goods worth $20 per hour. In services, each worker can only produce services worth $10 per hour. 50 workers work in each industry, so average productivity is (50 * $20 + 50 * $10)/100 = $15 per hour.

Then, after new technological advances, productivity in manufacturing increases to $80 per hour, but people don’t actually want to spend that much on manufactured goods. So 30 workers from manufacturing move over to services, which still only produce $10 per hour. Now total productivity is (20 * $80 + 80 * $10)/100 = $24 per hour.

Overall productivity now appears to only have risen 60% over that time period (in 50 years this would be 0.9% per year), but in fact it rose 300% in manufacturing (2.8% per year) but 0% in services. What looks like anemic growth in productivity is actually a shift of workers out of the productive sectors into the unproductive sectors.
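The arithmetic of this two-sector example can be laid out explicitly (this is just the illustrative example above, not real data):

```python
# 100 workers, two sectors; productivity measured in dollars of output per hour.

# Before: 50 workers each in manufacturing ($20/hr) and services ($10/hr).
avg_before = (50 * 20 + 50 * 10) / 100        # $15/hour

# After: manufacturing productivity quadruples to $80/hr, but demand for
# manufactured goods doesn't keep pace, so 30 workers shift into services,
# whose productivity is unchanged at $10/hr.
avg_after = (20 * 80 + 80 * 10) / 100         # $24/hour

aggregate_growth = avg_after / avg_before - 1  # 0.6 -> "only" 60% growth
manufacturing_growth = 80 / 20 - 1             # 3.0 -> actually 300% growth
services_growth = 10 / 10 - 1                  # 0.0 -> and no growth at all

print(avg_before, avg_after)  # 15.0 24.0
```

The aggregate number understates the manufacturing boom precisely because the workforce migrated into the stagnant sector.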

Keynes imagined that once we had made manufacturing so efficient that everyone could have whatever appliances they like, we’d give them the chance to live their lives without having to work. Instead, we found jobs for them—in large part, jobs that didn’t need doing.

Advertising is the clearest example: It’s almost pure rent-seeking, and if it were suddenly deleted from the universe almost everyone would actually be better off.

But there are plenty of other jobs, what the late David Graeber called “bullshit jobs”, that have the same character: Sales, consulting, brokering, lobbying, public relations, and most of what goes on in management, law and finance. Graeber had a silly theory that we did this on purpose either to make the rich feel important or to keep people working so they wouldn’t question the existing system. The real explanation is much simpler: These jobs are rent-seeking. They do make profits for the corporations that employ them, but they contribute little or nothing to human society as a whole.

I’m not sure how surprised Keynes would be by this outcome. In parts of the essay he acknowledges that the attitude which considers work a virtue and idleness a vice is well-entrenched in our society, and seems to recognize that the transition to a world where most people work very little is one that would be widely resisted. But his vision of what the world would be like in the early 21st century does now seem to be overly optimistic, not in its forecasts of our productivity and output—which, I really cannot stress enough, were absolutely spot on—but in its predictions of how society would adapt to that abundance.

It seems that most people still aren’t quite ready to give up on a world built around jobs. Most people still think of a job as the primary purpose of an adult’s life, that someone who isn’t working for an employer is somehow wasting their life and free-riding on everyone else.

In some sense this is perhaps true; but why is it more true of someone living on unemployment than of someone who works in marketing, or stock brokering, or lobbying, or corporate law? At least people living on unemployment aren’t actively making the world worse. And since unemployment pays less than all but the lowest-paying jobs, the amount of resources that are taken up by people on unemployment is considerably less than the rents which are appropriated by industries like consulting and finance.

Indeed, whenever you encounter a billionaire, there’s one thing you know for certain: They are very good at rent-seeking. Whether by monopoly power, or exploitation, or outright corruption, all the ways it’s possible to make a billion dollars are forms of rent-seeking. And this is for a very simple and obvious reason: No one can possibly work so hard and be so productive as to actually earn a billion dollars. No one’s real opportunity cost is actually that high—and the difference between income and real opportunity cost is by definition economic rent.

If we’re truly concerned about free-riding on other people’s work, we should really be thinking in terms of the generations of scientists and engineers before us who made all of this technology possible, as well as the institutions and infrastructure that have bequeathed us a secure stock of capital. “You didn’t build that” applies to all of us: Even if all the necessary raw materials were present, none of us could build a smartphone by hand alone on a desert island. Most of us couldn’t even sew a pair of pants or build a house—though that is at least the sort of thing that it’s possible to do by hand.

But in fact I think free-riding on our forebears is a perfectly acceptable activity. I am glad we do it, and I hope our descendants do it to us. I want to build a future where life is better than it is now; I want to leave the world better than we found it. If there were some way to inter-temporally transfer income back to the past, I suppose maybe we ought to do so—but as far as we know, there isn’t. Nothing can change the fact that most people were desperately poor for most of human history.

What we now have the power to decide is what will happen to people in the future: Will we continue to maintain this system where our wealth is decided by our willingness to work for corporations, at jobs that may be utterly unnecessary or even actively detrimental? Or will we build a new system, one where everyone gets the chance to share in the abundance that our ancestors have given us and each person gets the chance to live their life in the way that they find most meaningful?

Keynes imagined a bright future for the generation of his grandchildren. We now live in that generation, and we have precisely the abundance of resources he predicted we would. Can we now find a way to build that bright future?

# What happened with GameStop?

Feb 7 JDN 2459253

No doubt by now you’ve heard about the recent bubble in GameStop stock that triggered several trading stops, nearly destroyed a hedge fund, and launched a thousand memes. What really strikes me about this whole thing is how ordinary it is: This is basically the sort of thing that happens in our financial markets all the time. So why are so many people suddenly paying so much attention to it?

There are a few important ways this is unusual: Most importantly, the bubble was triggered by a large number of middle-class people investing small amounts, rather than by a handful of billionaires or hedge funds. It’s also more explicitly collusive than usual, with public statements in writing about what stocks are being manipulated rather than hushed whispers between executives at golf courses. Partly as a consequence of these, the response from the government and the financial industry has been quite different as well, trying to halt trading and block transactions in a way that they would never do if the crisis had been caused by large financial institutions.

If you’re interested in the technical details of what happened, what a short squeeze is and how it can make a hedge fund lose enormous amounts of money unexpectedly, I recommend this summary by KQED. But the gist of it is simple enough: Melvin Capital placed huge bets that GameStop stock would fall in price, and a coalition of middle-class traders coordinated on Reddit to screw them over by buying a bunch of GameStop stock and driving up the price. It worked, and Melvin Capital lost something on the order of $3-5 billion in just a few days.

The particular kind of bet they placed is called a short, and it’s a completely routine practice on Wall Street, despite the fact that I could never quite understand why it should be allowed. The essence of a short is quite simple: When you short, you are selling something you don’t own. You “borrow” it (it isn’t really even borrowing), and then sell it to someone else, promising to buy it back and return it to where you borrowed it from at some point in the future. This amounts to a bet that the price will decline, so that the price at which you buy it back is lower than the price at which you sold it.

Doesn’t that seem like an odd thing to be allowed to do? Normally you can’t sell something you have merely borrowed. I can’t borrow a car and then sell it; car title in fact exists precisely to prevent this from happening. If I were to borrow your coat and then sell it to a thrift store, I’d have committed larceny. It’s really quite immaterial whether I plan to buy it back afterward; in general we do not allow people to sell things that they do not own.

Now perhaps the problem is that when I borrow your coat or your car, you expect me to return that precise object—not a similar coat or a car of equivalent Blue Book value, but your coat or your car. When I borrow a share of GameStop stock, no one really cares whether it is that specific share which I return—indeed, it would be almost impossible to even know whether it was. So in that way it’s a bit like borrowing money: If I borrow $20 from you, you don’t expect me to pay back that precise $20 bill. Indeed you’d be shocked if I did, since presumably I borrowed it in order to spend it or invest it, so how would I ever get it back?
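The payoff structure of a short reduces to a single line of arithmetic; here is a minimal sketch (the function name and the prices are invented purely for illustration):

```python
def short_profit(shares: int, sell_price: float, buyback_price: float) -> float:
    """Profit from selling borrowed shares now and buying them back later.

    Positive if the price fell; in a short squeeze the buyback price is
    driven far above the original sale price, and the loss is unbounded.
    """
    return shares * (sell_price - buyback_price)

# The bet pays off if the price declines...
print(short_profit(100, sell_price=20.0, buyback_price=5.0))    # 1500.0
# ...but there is no ceiling on the loss if the price is driven up instead:
print(short_profit(100, sell_price=20.0, buyback_price=320.0))  # -30000.0
```

The asymmetry is the key to what happened to Melvin Capital: a buyer’s loss is capped at the purchase price, but a short seller’s loss grows without bound as the price rises.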

But you also don’t sell money, generally speaking. Yes, there are currency exchanges and money-market accounts; but these are rather exceptional cases. In general, money is not bought and sold the way coats or cars are.

What about consumable commodities? You probably don’t care too much about any particular banana, sandwich, or gallon of gasoline. Perhaps in some circumstances we might “loan” someone a gallon of gasoline, intending them to repay us at some later time with a different gallon of gasoline. But far more likely, I think, would be simply giving a friend a gallon of gasoline and then not expecting any particular repayment except perhaps a vague offer of providing a similar favor in the future. I have in fact heard someone say the sentence “Can I borrow your sandwich?”, but it felt very odd when I heard it. (Indeed, I responded something like, “No, you can keep it.”)

And in order to actually be shorting gasoline (which is a thing that you, too, can do, perhaps even right now, if you have a margin account on a commodities exchange), it isn’t enough to borrow a gallon with the expectation of repaying a different gallon; you must also sell that gallon you borrowed. And now it seems very odd indeed to say to a friend, “Hey, can I borrow a gallon of gasoline so that I can sell it to someone for a profit?”

The usual arguments for why shorting should be allowed are much like the arguments for exotic financial instruments in general: “Increase liquidity”, “promote efficient markets”. These arguments are so general and so ubiquitous that they essentially amount to the strongest form of laissez-faire: Whatever Wall Street bankers feel like doing is fine and good and part of what makes American capitalism great.

In fact, I was never quite clear why margin accounts are something we decided to allow; margin trading is inherently high-leverage and thus inherently high-risk. Borrowing money in order to arbitrage financial assets doesn’t just seem like a very risky thing to do; it has been one way or another implicated in virtually every financial crisis that has ever occurred. It would be an exaggeration to say that leveraged arbitrage is the one single cause of financial crises, but it would be a shockingly small exaggeration. I think it absolutely is fair to say that if leveraged arbitrage did not exist, financial crises would be far fewer and farther between.

Indeed, I am increasingly dubious of the whole idea of allowing arbitrage in general. Some amount of arbitrage may be unavoidable; there may always be people who see that prices are different for the same item in two different markets, and then exploit that difference before anyone can stop them. But this is a bit like saying that theft is probably inevitable: Yes, every human society that has had a system of property ownership (which is most of them—even communal hunter-gatherers have rules about personal property), has had some amount of theft. That doesn’t mean there is nothing we can do to reduce theft, or that we should simply allow theft wherever it occurs.

The moral argument against arbitrage is straightforward enough: You’re not doing anything. No good is produced; no service is provided. You are making money without actually contributing any real value to anyone. You just make money by having money. This is what people in the Middle Ages found suspicious about lending money at interest; but lending money actually is doing something—sometimes people need more money than they have, and lending it to them is providing a useful service for which you deserve some compensation.

A common argument economists make is that arbitrage will make prices more “efficient”, but when you ask them what they mean by “efficient”, the answer they give is that it removes arbitrage opportunities! So the good thing about arbitrage is that it stops you from doing more arbitrage?

And what if it doesn’t stop you? Many of the ways to exploit price gaps (particularly the simplest ones like “where it’s cheap, buy it; where it’s expensive, sell it”) will automatically close those gaps, but it’s not at all clear to me that all the ways to exploit price gaps will necessarily do so. And even if it’s a small minority of market manipulation strategies that exploit gaps without closing them, those are precisely the strategies that will be most profitable in the long run, because they don’t undermine their own success. Then, left to their own devices, markets will evolve to use such strategies more and more, because those are the strategies that work.
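The simplest strategy, buying where it’s cheap and selling where it’s expensive, really is self-extinguishing, which a toy simulation can illustrate (the linear price-impact model here is purely an assumption made for illustration):

```python
# Two markets quote different prices for the same asset. Each round-trip
# trade (buy in the cheap market, sell in the expensive one) pockets the
# gap but also pushes the two prices toward each other, so the strategy
# eventually extinguishes its own profit opportunity.
cheap, expensive = 90.0, 110.0
impact = 0.5   # assumed price impact of one trade, in dollars

profit = 0.0
while expensive - cheap > 2 * impact:
    profit += expensive - cheap   # pocket the current gap
    cheap += impact               # buying pushes the cheap market up
    expensive -= impact           # selling pushes the expensive market down

print(round(expensive - cheap, 1), round(profit, 1))  # 1.0 209.0
```

A strategy that exploited the gap without this feedback, on the other hand, could run forever, which is exactly the worry raised above.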

That is, in order for arbitrage to be beneficial, it must always be beneficial; there must be no way to exploit price gaps without inevitably closing those price gaps. If that is not the case, then evolutionary pressure will push more and more of the financial system toward using methods of arbitrage that don’t close gaps—or even exacerbate them. And indeed, when you look at how ludicrously volatile and crisis-prone our financial system has become, it sure looks an awful lot like an evolutionary equilibrium where harmful arbitrage strategies have evolved to dominate.

A world where arbitrage actually led to efficient pricing would be a world where the S&P 500 rises a steady 0.02% per day, each and every day. Maybe you’d see a big move when there was actually a major event, like the start of a war or the invention of a vaccine for a pandemic. You’d probably see a jump up or down of a percentage point or two with each quarterly Fed announcement. But daily moves of even five or six percentage points would be a very rare occurrence—because the S&P 500 is supposed to represent the real expected long-run aggregate value of the 500 largest publicly-traded corporations in America, and that is not a number that should change very much very often. The fact that I couldn’t really tell you what that number is without multi-trillion-dollar error bars is so much the worse for anyone who thinks that financial markets can somehow get it exactly right every minute of every day.

Moreover, it’s not hard to imagine how we might close price gaps without simply allowing people to exploit them. There could be a bunch of economists at the Federal Reserve whose job it is to locate markets where there are arbitrage opportunities, and then a bundle of government funds that they can allocate to buying and selling assets in order to close those price gaps. Any profits made are received by the treasury; any losses taken are borne by the treasury. The economists would get paid a comfortable salary, and perhaps get bonuses based on doing a good job in closing large or important price gaps; but there is no need to give them even a substantial fraction of the proceeds, much less all of it. This is already how our money supply is managed, and it works quite well, indeed obviously much better than an alternative with “skin in the game”: Can you imagine the dystopian nightmare we’d live in if the Chair of the Federal Reserve actually received even a 1% share of the US money supply? (Actually I think that’s basically what happened in Zimbabwe: The people who decided how much money to print got to keep a chunk of the money that was printed.)

I don’t actually think this GameStop bubble is all that important in itself. A decade from now, it may be no more memorable than Left Shark or the Macarena. But what is really striking about it is how little it differs from business-as-usual on Wall Street. The fact that a few million Redditors can gather together to buy a stock “for the lulz” or to “stick it to the Man” and thereby bring hedge funds to their knees is not such a big deal in itself, but it is symptomatic of much deeper structural flaws in our financial system.