Krugman and rockets and feathers

Jul 17 JDN 2459797

Well, this feels like a milestone: Paul Krugman just wrote a column about a topic I’ve published research on. He didn’t actually cite our paper—in fact the literature review he links to is from 2014—but the topic is very much what we were studying: Asymmetric price transmission, ‘rockets and feathers’. He’s even talking about it from the perspective of industrial organization and market power, which is right in line with our results (and a bit different from the mainstream consensus among economic policy pundits).

The phenomenon is a well-documented one: When the price of an input (say, crude oil) rises, the price of outputs made from that input (say, gasoline) rises immediately, and basically one to one, sometimes even more than one to one. But when the price of an input falls, the price of outputs only falls slowly and gradually, taking a long time to fully reflect the lower input prices. Prices go up like a rocket, but down like a feather.

Many different explanations have been proposed for this phenomenon, and they aren’t all mutually exclusive. They include various aspects of market structure, substitution of inputs, and the use of inventories to smooth out the effects of price changes.

One that I find particularly unpersuasive is the notion of menu costs: That it requires costly effort to actually change your prices, and this somehow results in the asymmetry. Most gas stations have digital price boards; it requires almost zero effort for them to change prices whenever they want. Moreover, there’s no clear reason this would result in asymmetry between raising and lowering prices. Some models extend the notion of “menu cost” to include expected customer responses, which is a much better explanation; but I think that’s far beyond the original meaning of the concept. If you fear to change your price because of how customers may respond, finding a cheaper way to print price labels won’t do a thing to change that.

But our paper—and Krugman’s article—is about one factor in particular: market power. We don’t see prices behave this way in highly competitive markets. We see it the most in oligopolies: Markets where there are only a small number of sellers, who thus have some control over how they set their prices.

Krugman explains it as follows:

When oil prices shoot up, owners of gas stations feel empowered not just to pass on the cost but also to raise their markups, because consumers can’t easily tell whether they’re being gouged when prices are going up everywhere. And gas stations may hang on to these extra markups for a while even when oil prices fall.

That’s actually a somewhat different mechanism from the one we found in our experiment, which is that asymmetric price transmission can be driven by tacit collusion. Explicit collusion is illegal: You can’t just call up the other gas stations and say, “Let’s all set the price at $5 per gallon.” But you can tacitly collude by responding to how they set their prices, and not trying to undercut them even when you could get a short-run benefit from doing so. It’s actually very similar to an Iterated Prisoner’s Dilemma: Cooperation is better for everyone collectively, but defecting is better for you individually; to get everyone to cooperate, it’s vital to severely punish those who don’t.

In our experiment, the participants were acting as businesses setting their prices. The customers were fully automated, so there was no opportunity to “fool” them in this way. We also excluded any kind of menu costs or product inventories. But we still saw prices go up like rockets and down like feathers. Moreover, prices were always substantially higher than costs, especially during the phase when they were falling like feathers.

Our explanation goes something like this: Businesses are trying to use their market power to maintain higher prices and thereby make higher profits, but they have to worry about other businesses undercutting their prices and taking all the business. Moreover, they also have to worry about others thinking that they are trying to undercut prices—they want to be perceived as cooperating, not defecting, in order to preserve the collusion and avoid being punished.

Consider how this affects their decisions when input prices change. If the price of oil goes up, then there’s no reason not to raise the price of gasoline immediately, because that isn’t violating the collusion. If anything, it’s being nice to your fellow colluders; they want prices as high as possible. You’ll want to raise the prices as high and fast as you can get away with, and you know they’ll do the same. But if the price of oil goes down, now gas stations are faced with a dilemma: You could lower prices to get more customers and make more profits, but the other gas stations might consider that a violation of your tacit collusion and could punish you by cutting their prices even more. Your best option is to lower prices very slowly, so that you can take advantage of the change in the input market, but also maintain the collusion with other gas stations. By slowly cutting prices, you can ensure that you are doing it together, and not trying to undercut other businesses.
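
To see how this plays out, here is a minimal toy simulation in Python (not our actual experimental design; the cost path, target markup, and adjustment step are all illustrative assumptions). Firms pass cost increases through immediately, but when costs fall they only shave prices by a small step each period, so that no one looks like they are undercutting the tacit agreement:

```python
# Toy model of "rockets and feathers" under tacit collusion.
# Illustrative only: the cost path, markup, and adjustment step are
# assumptions, not parameters from the actual experiment.

TARGET_MARKUP = 0.50   # colluding firms aim for cost + $0.50 per gallon
DOWN_STEP = 0.05       # when costs fall, cut price by only $0.05 per period

def price_path(costs):
    prices = []
    price = costs[0] + TARGET_MARKUP
    for cost in costs:
        target = cost + TARGET_MARKUP
        if target >= price:
            # Cost increase: pass it through immediately (the rocket).
            price = target
        else:
            # Cost decrease: drift down a little at a time (the feather),
            # so no one looks like they are undercutting the others.
            price = max(target, price - DOWN_STEP)
        prices.append(round(price, 2))
    return prices

# A cost path that spikes upward and then falls back down.
costs = [1.00] * 5 + [1.50] * 10 + [1.00] * 15
print(price_path(costs))
# The price jumps to 2.00 the moment costs hit 1.50, but takes ten
# periods to drift back to 1.50 after costs return to 1.00.
```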

Krugman’s explanation and ours are not mutually exclusive; in fact I think both are probably happening. They have one important feature in common, which fits the empirical data: Markets with less competition show greater degrees of asymmetric price transmission. The more concentrated the oligopoly, the more we see rockets and feathers.

They also share an important policy implication: Market power can make inflation worse. Contrary to what a lot of economic policy pundits have been saying, it isn’t ridiculous to think that breaking up monopolies or putting pressure on oligopolies to lower their prices could help reduce inflation. It probably won’t be as reliably effective as the Fed’s buying and selling of bonds to adjust interest rates—but we’re also doing that, and the two are not mutually exclusive. Besides, breaking up monopolies is a generally good thing to do anyway.

It’s not that unusual that I find myself agreeing with Krugman. I think what makes this one feel weird is that I have more expertise on the subject than he does.

Small deviations can have large consequences.

Jun 26 JDN 2459787

A common rejoinder that behavioral economists get from neoclassical economists is that most people are mostly rational most of the time, so what’s the big deal? If humans are 90% rational, why worry so much about the other 10%?

Well, it turns out that small deviations from rationality can have surprisingly large consequences. Let’s consider an example.

Suppose we have a market for some asset. Without even trying to veil my ulterior motive, let’s make that asset Bitcoin. Its fundamental value is of course $0; it’s not backed by anything (not even taxes or a central bank), it has no particular uses that aren’t already better served by existing methods, and it’s not even scalable.

Now, suppose that 99% of the population rationally recognizes that the fundamental value of the asset is indeed $0. But 1% of the population doesn’t; they irrationally believe that the asset is worth $20,000. What will the price of that asset be, in equilibrium?

If you assume that the majority will prevail, it should be $0. If you did some kind of weighted average, you might think its price would be something positive but relatively small, like $200 (that is, 99% times $0 plus 1% times $20,000). But is this actually the price it will take on?

Consider someone who currently owns 1 unit of the asset, and recognizes that it is fundamentally worthless. What should they do? Well, if they also know that there are people out there who believe it is worth $20,000, the answer is obvious: They should sell it to those people. Indeed, they should sell it for something quite close to $20,000 if they can.

Now, suppose they don’t already own the asset, but are considering whether or not to buy it. They know it’s worthless, but they also know that there are people who will buy it for close to $20,000. Here’s the kicker: This is a reason for them to buy it at anything meaningfully less than $20,000.

Suppose, for instance, they could buy it for $10,000. Spending $10,000 to buy something you know is worthless seems like a terribly irrational thing to do. But it isn’t irrational, if you also know that somewhere out there is someone who will pay $20,000 for that same asset and you have a reasonable chance of finding that person and selling it to them.

The equilibrium outcome, then, is that the price of the asset will be almost $20,000! Even though 99% of the population recognizes that this asset is worthless, the fact that 1% of people believe it’s worth as much as a car will result in it selling at that price. Thus, even a slight deviation from a perfectly-rational population can result in a market that is radically at odds with reality.
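
Here is a bare-bones sketch of that logic (the matching probability and the discount factor are illustrative assumptions, not estimates). A rational trader who can keep holding the asset until a believer comes along values it at nearly the believer’s price, provided they are patient enough:

```python
# Toy valuation of a fundamentally worthless asset when a small share of
# traders believe it is worth $20,000. The matching probability and the
# discount factor are illustrative assumptions.

def holding_value(believer_share=0.01, believer_price=20_000.0, discount=0.999):
    """Value of holding one unit for a rational trader who, each period,
    meets a believer with probability believer_share and sells at
    believer_price, and otherwise keeps holding (discounted).

    Solves V = p*P + (1 - p)*d*V, i.e. V = p*P / (1 - (1 - p)*d).
    """
    p, P, d = believer_share, believer_price, discount
    return p * P / (1 - (1 - p) * d)

for d in (0.90, 0.999, 0.9999):
    print(f"discount {d}: holding value ~ ${holding_value(discount=d):,.0f}")
# As traders get more patient, the value of simply holding out for a
# believer approaches $20,000, so rational buyers will pay nearly that much.
```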

And it gets worse! Suppose that in fact everyone knows that the asset is worthless, but most people think that there is some small portion of the population who believes the asset has value. Then, it will still be priced at that value in equilibrium, as people trade it back and forth searching in vain for the person who really wants it! (This is called the Greater Fool theory.)

That is, the price of an asset in a free market—even in a market where most people are mostly rational most of the time—will in fact be determined by the highest price anyone believes that anyone else thinks it has. And this is true of essentially any asset market—any market where people are buying something, not to use it, but to sell it to someone else.

Of course, beliefs—and particularly beliefs about beliefs—can very easily change, so that equilibrium price could move in any direction basically without warning.

Suddenly, the cycle of bubble and crash, boom and bust, doesn’t seem so surprising, does it? The wonder is that prices ever become stable at all.


Then again, do they? Last I checked, the only prices that were remotely stable were for goods like apples and cars and televisions, goods that are bought and sold to be consumed. (Or national currencies managed by competent central banks, whose entire job involves doing whatever it takes to keep those prices stable.) For pretty much everything else—and certainly any purely financial asset that isn’t a national currency—prices are indeed precisely as wildly unpredictable and utterly irrational as this model would predict.

So much for the Efficient Market Hypothesis? Sadly I doubt that the people who still believe this nonsense will be convinced.

Maybe we should forgive student debt after all.

May 8 JDN 2459708

President Biden has been promising some form of student debt relief since the start of his campaign, though so far all he has actually implemented is a series of no-interest deferments and some improvements to the existing forgiveness programs. (This is still significant—it has definitely helped a lot of people with cashflow during the pandemic.) Actual forgiveness for a large segment of the population remains elusive, and if it does happen, it’s unclear how extensive it will be in either intensity (amount forgiven) or scope (who is eligible).

I personally had been fine with this; while I have a substantial loan balance myself, I also have a PhD in economics, which—theoretically—should at some point entitle me to sufficient income to repay those loans.

Moreover, until recently I had been one of the few left-wing people I know to not be terribly enthusiastic about loan forgiveness. It struck me as a poor use of those government funds, because $1.75 trillion is an awful lot of money, and college graduates are a relatively privileged population. (And yes, it is valid to consider this a question of “spending”, because the US government is the least liquidity-constrained entity on Earth. In lieu of forgiving $1.75 trillion in debt, they could borrow $1.75 trillion and use it to pay for whatever they want, and their ultimate budget balance would be basically the same in either case.)

But I say all this in the past tense because Krugman’s recent column has caused me to reconsider. He gives two strong reasons why debt forgiveness may actually be a good idea.

The first is that Congress is useless. Thanks to gerrymandering and the 40% or so of our population who keep electing Republicans no matter how crazy they get, it’s all but impossible to pass useful legislation. The pandemic relief programs were the exception that proves the rule: Somehow those managed to get through, even though in any other context it’s clear that Congress would never have approved any kind of (non-military) program that spent that much money or helped that many poor people.

Student loans are the purview of the Department of Education, which is entirely under the control of the Executive Branch, and therefore, ultimately, the President of the United States. So Biden could forgive student loans by executive order, and there’s very little Congress could do to stop him. Even if that $1.75 trillion could be better spent, it wasn’t going to be spent better anyway, so we may as well use it for this.

The second is that “college graduates” is too broad a category. Usually I’m on guard for this sort of thing, but in this case I faltered, and did not notice the fallacy of composition so many labor economists were making by lumping all college grads into the same economic category. Yes, some of us are doing well, but many are not. Within-group inequality matters.

A key insight here comes from carefully analyzing the college wage premium, which is the median income of college graduates, divided by the median income of high school graduates. This is an estimate of the overall value of a college education. It’s pretty large, as a matter of fact: It amounts to something like a doubling of your income, or about $1 million over one’s whole lifespan.
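
As a rough sanity check, a premium of about two really does work out to roughly a million dollars over a working lifetime. The median salaries below are hypothetical round numbers chosen only to make the arithmetic transparent, not Krugman’s data:

```python
# Back-of-the-envelope check on the college wage premium.
# These median salaries are hypothetical round numbers, chosen only to
# make the arithmetic easy to follow.

median_college = 50_000       # hypothetical median income, college graduates
median_high_school = 25_000   # hypothetical median income, high school graduates
working_years = 40

premium = median_college / median_high_school
lifetime_gain = (median_college - median_high_school) * working_years

print(f"College wage premium: {premium:.1f}x")              # 2.0x, i.e. a doubling
print(f"Lifetime earnings difference: ${lifetime_gain:,}")  # $1,000,000
```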

From about 1980 to 2000, wage inequality grew about as fast as it is growing today, and the college wage premium grew even faster. So it was plausible—if not necessarily correct—to believe that the wage inequality reflected the higher income and higher productivity of college grads. But since 2000, wage inequality has continued to grow, while the college wage premium has been utterly stagnant. Thus, higher inequality can no longer (if it ever could) be explained by the effects of college education.

Now some college graduates are definitely making a lot more money—such as those who went into finance. But it turns out that most are not. As Krugman points out, the 95th percentile of male college grads has seen a 25% increase in real (inflation-adjusted) income in the last 20 years, while the median male college grad has actually seen a slight decrease. (I’m not sure why Krugman restricted to males, so I’m curious how it looks if you include women. But probably not radically different?)

I still don’t think student loan forgiveness would be the best use of that (enormous sum of) money. But if it’s what’s politically feasible, it definitely could help a lot of people. And it would be easy enough to make it more progressive, by phasing out forgiveness for graduates with higher incomes.

And hey, it would certainly help me, so maybe I shouldn’t argue too strongly against it?

Rethinking progressive taxation

Apr 17 JDN 2459687

There is an extremely common and quite bizarre result in the standard theory of taxation, which is that the optimal marginal tax rate for the highest incomes should be zero. Ever since that result came out, economists have basically divided into two camps.

The more left-leaning have said, “This is obviously wrong; so why is it wrong? What are we missing?”; the more right-leaning have said, “The model says so, so it must be right! Cut taxes on the rich!”

I probably don’t need to tell you that I’m very much in the first camp. But more recently I’ve come to realize that even the answers left-leaning economists have been giving for why this result is wrong are also missing something vital.

There have been papers explaining that “the zero top rate only applies at extreme incomes” (uh, $50 billion sounds pretty extreme to me!) or “the optimal tax system can be U-shaped” (I don’t want U-shaped—we’re not supposed to be taxing the poor!).


And many economists still seem to find it reasonable to say that marginal tax rates should decline over some significant part of the distribution.

In my view, there are really two reasons why taxes should be progressive, and they are sufficiently general reasons that they should almost always override other considerations.

The first is diminishing marginal utility of wealth. The real value of a dollar is much less to someone who already has $1 million than to someone who has only $100. Thus, if we want to raise the most revenue while causing the least pain, we typically want to tax people who have a lot of money rather than people who have very little.
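
To put a rough number on that intuition, here is a quick sketch assuming logarithmic utility, which is a standard illustrative choice rather than the only defensible one:

```python
# Diminishing marginal utility of wealth, assuming log utility u(w) = ln(w).
# Log utility is a standard illustrative assumption, not the only option.

def marginal_utility(wealth):
    # d/dw ln(w) = 1/w: each extra dollar matters less the richer you are.
    return 1.0 / wealth

mu_poor = marginal_utility(100)         # someone with only $100
mu_rich = marginal_utility(1_000_000)   # someone with $1 million

# A dollar is worth about 10,000 times more to the poorer person.
print(mu_poor / mu_rich)  # 10000.0
```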

But the right-wing economists have an answer to this one, based on these fancy models: Yes, taking a given amount from the rich would be better (a lump-sum tax), but you can’t do that; you can only tax their income at a certain rate. (So far, that seems right. Lump-sum taxes are silly and economists talk about them too much.) But the rich are rich because they are more productive! If you tax them more, they will work less, and that will harm society as a whole due to their lost productivity.

This is the fundamental intuition behind the “top rate should be zero” result: The rich are so fantastically productive that it isn’t worth it to tax them. We simply can’t risk them working less.

But are the rich actually so fantastically productive? Are they really that smart? Do they really work that hard?

If Tony Stark were real, okay, don’t tax him. He is a one-man Singularity: He invented the perfect power source on his own, “in a cave, with a box of scraps!”; he created a true AI basically by himself; he single-handedly discovered a new stable island element and used it to make his already perfect power source even better.

But despite what his fanboys may tell you, Elon Musk is not Tony Stark. Tesla and SpaceX have done a lot of very good things, but in order to do them, they really didn’t need Elon Musk for much. Mainly, they needed his money. Give me $270 billion and I could make companies that build electric cars and launch rockets into space too. (Indeed, I probably would—though I’d also set up some charitable foundations as well, more like what Bill Gates did with his similarly mind-boggling wealth.)

Don’t get me wrong; Elon Musk is a very intelligent man, and he works, if anything, obsessively. (He makes his employees work excessively too—and that’s a problem.) But if he were to suddenly die, as long as a reasonably competent CEO replaced him, Tesla and SpaceX would go on working more or less as they already do. The spectacular productivity of these companies is not due to Musk alone, but thousands of highly-skilled employees. These people would be productive if Musk had not existed, and they will continue to be productive once Musk is gone.

And they aren’t particularly rich. They aren’t poor either, mind you—a typical engineer at Tesla or SpaceX is quite well-paid, and rightly so. (Median salary at SpaceX is over $115,000.) These people are brilliant, tremendously hard-working, and highly productive; and they are paid accordingly. But very few of these people are in the top 1%, and basically none of them will ever be billionaires—let alone reach the truly staggering wealth of a hectobillionaire like Musk himself.

How, then, does one become a billionaire? Not by being brilliant, hard-working, or productive—at least that is not sufficient, and the existence of, say, Donald Trump suggests that it is not necessary either. No, the really quintessential feature every billionaire has is remarkably simple and consistent across the board: They own a monopoly.

You can pretty much go down the list, finding what monopoly each billionaire owned: Bill Gates owned software patents on (what is still) the most widely-used OS and office suite in the world. J.K. Rowling owns copyrights on the most successful novels in history. Elon Musk owns technology patents on various innovations in energy storage and spaceflight technology—very few of which he himself invented, I might add. Andrew Carnegie owned the steel industry. John D. Rockefeller owned the oil industry. And so on.

I honestly can’t find any real exceptions: Basically every billionaire either owned a monopoly or inherited from ancestors who did. The closest things to exceptions are billionaires who did something even worse, like defrauding thousands of people, enslaving an indigenous population, or running a nation with an iron fist. (And even then, Leopold II and Vladimir Putin both exerted a lot of monopoly power as part of their murderous tyranny.)

In other words, billionaire wealth is almost entirely rent. You don’t earn a billion dollars. You don’t get it by working. You get it by owning—and by using that ownership to exert monopoly power.

This means that taxing billionaire wealth wouldn’t incentivize them to work less; they already don’t work for their money. It would just incentivize them to fight less hard at extracting wealth from everyone else using their monopoly power—which hardly seems like a downside.

Since virtually all of the wealth at the top is simply rent, we have no reason not to tax it away. It isn’t genuine productivity at all; it’s just extracting wealth that other people produced.

Thus, my second, and ultimately most decisive reason for wanting strongly progressive taxes: rent-seeking. The very rich don’t actually deserve the vast majority of what they have, and we should take it back so that we can give it to people who really need and deserve it.

Now, there is a somewhat more charitable version of the view that high taxes even on the top 0.01% would hurt productivity, and it is worth addressing. That is based on the idea that entrepreneurship is valuable, and part of the incentive for becoming an entrepreneur is the chance at one day striking it fabulously rich, so taxing the fabulously rich might result in a world of fewer entrepreneurs.

This isn’t nearly as ridiculous as the idea that Elon Musk somehow works a million times as hard as the rest of us, but it’s still pretty easy to find flaws in it.

Suppose you were considering starting a business. Indeed, perhaps you already have considered it. What are your main deciding factors in whether or not you will?

Surely they do not include the difference between a 0.0001% chance of making $200 billion and a 0.0001% chance of making $50 billion. Indeed, that probably doesn’t factor in at all; you know you’ll almost certainly never get there, and even if you did, there’s basically no real difference in your way of life between $50 billion and $200 billion.

No, more likely they include things like this: (1) How likely are you to turn a profit at all? Even a profit of $50,000 per year would probably be enough to be worth it, but how sure are you that you can manage that? (2) How much funding can you get to start it in the first place? Depending on what sort of business you’re hoping to found, it could be as little as thousands or as much as millions of dollars to get it set up, well before it starts taking in any revenue. And even a few thousand is a lot for most middle-class people to come up with in one chunk and be willing to risk losing.

This means that there is a very simple policy we could implement which would dramatically increase entrepreneurship while taxing only billionaires more, and it goes like this: Add an extra 1% marginal tax to capital gains for billionaires, and plow it into a fund that gives grants of $10,000 to $100,000 to promising new startups.

That 1% tax could raise several billion dollars a year—yes, really; US billionaires gained some $2 trillion in capital gains last year, so we’d raise $20 billion—and thereby fund many, many startups. Say the average grant is $20,000 and the total revenue is $20 billion; that’s one million new startups funded every single year. Every single year! Currently, about 4 million new businesses are founded each year in the US (leading the world by a wide margin); this could raise that to 5 million.
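
That arithmetic is simple enough to check directly; the capital gains figure and the average grant size below are just the ballpark assumptions from the paragraph above:

```python
# Rough arithmetic for the billionaire capital gains startup fund.
# These are the ballpark figures assumed in the text, not precise estimates.

billionaire_capital_gains = 2_000_000_000_000  # ~$2 trillion in gains last year
surtax_rate = 0.01                             # an extra 1% marginal tax
average_grant = 20_000                         # average startup grant

revenue = billionaire_capital_gains * surtax_rate
startups_funded = revenue / average_grant

print(f"Annual revenue: ${revenue:,.0f}")                    # $20,000,000,000
print(f"Startups funded per year: {startups_funded:,.0f}")   # 1,000,000
```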

So don’t tell me this is about incentivizing entrepreneurship. We could do that far better than we currently do, with some very simple policy changes.

Meanwhile, the economics literature on optimal taxation seems to be completely missing the point. Most of it is still mired in the assumption that the rich are rich because they are productive, and thus terribly concerned about the “trade-off” between efficiency and equity involved in higher taxes. But when you realize that the vast, vast majority—easily 99.9%—of billionaire wealth is unearned rents, then it becomes obvious that this trade-off is an illusion. We can improve efficiency and equity simultaneously, by taking some of this ludicrous hoard of unearned wealth and putting it back into productive activities, or giving it to the people who need it most. The only people who will be harmed by this are billionaires themselves, and by diminishing marginal utility of wealth, they won’t be harmed very much.

Fortunately, the tide is turning, and more economists are starting to see the light. One of the best examples comes from Piketty, Saez, and Stantcheva in their paper on how CEO “pay for luck” (e.g. stock options) responds to top tax rates. There are a few other papers that touch on similar issues, such as Lockwood, Nathanson, and Weyl and Rothschild and Scheuer. But there’s clearly a lot of space left for new work to be done. The old results that told us not to raise taxes were wrong on a deep, fundamental level, and we need to replace them with something better.

The alienation of labor

Apr 10 JDN 2459680

Marx famously wrote that capitalism “alienates labor”. Much ink has been spilled over interpreting exactly what he meant by that, but I think the most useful and charitable reading goes something like the following:

When you make something for yourself, it feels fully yours. The effort you put into it feels valuable and meaningful. Whether you’re building a house to live in it or just cooking an omelet to eat it, your labor is directly reflected in your rewards, and you have a clear sense of purpose and value in what you are doing.

But when you make something for an employer, it feels like theirs, not yours. You have been instructed by your superiors to make a certain thing a certain way, for reasons you may or may not understand (and may or may not even agree with). Once you deliver the product—which may be as concrete as a carburetor or as abstract as an accounting report—you will likely never see it again; it will be used or not by someone else somewhere else whom you may not even ever get the chance to meet. Such labor feels tedious, effortful, exhausting—and also often empty, pointless, and meaningless.

On that reading, Marx isn’t wrong. There really is something to this. (I don’t know if this is really Marx’s intended meaning or not, and really I don’t much care—this is a valid thing and we should be addressing it, whether Marx meant to or not.)

There is a little parable about this, though I can’t quite remember where I first heard it:

Three men are moving heavy stones from one place to another. A traveler passes by and asks them, “What are you doing?”

The first man sighs and says, “We do whatever the boss tells us to do.”

The second man shrugs and says, “We pick up the rocks here, we move them over there.”

The third man smiles and says, “We’re building a cathedral.”

The three answers are quite different—yet all three men may be telling the truth as they see it.

The first man is fully alienated from his labor: he does whatever the boss says, following instructions that he considers arbitrary and mechanical. The second man is partially alienated: he knows the mechanics of what he is trying to accomplish, which may allow him to improve efficiency in some way (e.g. devise better ways to transport the rocks faster or with less effort), but he doesn’t understand the purpose behind it all, so ultimately his work still feels meaningless. But the third man is not alienated: he understands the purpose of his work, and he values that purpose. He sees that what he is doing is contributing to a greater whole that he considers worthwhile. It’s not hard to imagine that the third man will be the happiest, and the first will be the unhappiest.

There really is something about the capitalist wage-labor structure that can easily feed into this sort of alienation. You get a job because you need money to live, not because you necessarily value whatever the job does. You do as you are told so that you can keep your job and continue to get paid.

Some jobs are much more alienating than others. Most teachers and nurses see their work as a vocation, even a calling—their work has deep meaning for them and they value its purpose. At the other extreme there are corporate lawyers and derivatives traders, who must on some level understand that their work contributes almost nothing to the world (and may in fact actively cause harm), but they continue to do the work because it pays them very well.

But there are many jobs in between which can be experienced both ways. Working in retail can be an agonizing grind where you must face a grueling gauntlet of ungrateful customers day in and day out—or it can be a way to participate in your local community and help your neighbors get the things they need. Working in manufacturing can be a mechanical process of inserting tab A into slot B and screwing it into place over, and over, and over again—or it can be a chance to create something, convert raw materials into something useful and valuable that other people can cherish.

And while individual perspective and framing surely matter here—those three men were all working in the same quarry, building the same cathedral—there is also an important objective component as well. Working as an artisan is not as alienating as working on an assembly line. Hosting a tent at a farmer’s market is not as alienating as working the register at Walmart. Tutoring an individual student is more purposeful than recording video lectures for a MOOC. Running a quirky local book store is more fulfilling than stocking shelves at Barnes & Noble.

Moreover, capitalism really does seem to push us more toward the alienating side of the spectrum. Assembly lines are far more efficient than artisans, so we make most of our products on assembly lines. Buying food at Walmart is cheaper and more convenient than at farmer’s markets, so more people shop there. Hiring one video lecturer for 10,000 students is a lot cheaper than paying 100 in-person lecturers, let alone 1,000 private tutors. And Barnes & Noble doesn’t drive out local book stores by some nefarious means: It just provides better service at lower prices. If you want a specific book for a good price right now, you’re much more likely to find it at Barnes & Noble. (And even more likely to find it on Amazon.)

Finding meaning in your work is very important for human happiness. Indeed, along with health and social relationships, it’s one of the biggest determinants of happiness. For most people in First World countries, it seems to be more important than income (though income certainly does matter).

Yet the increased efficiency and productivity upon which our modern standard of living depends seems to be based upon a system of production—in a word, capitalism—that systematically alienates us from meaning in our work.

This puts us in a dilemma: Do we keep things as they are, accepting that we will feel an increasing sense of alienation and ennui as our wealth continues to grow and we get ever-fancier toys to occupy our meaningless lives? Or do we turn back the clock, returning to a world where work once again has meaning, but at the cost of making everyone poorer—and some people desperately so?

Well, first of all, to some extent this is a false dichotomy. There are jobs that are highly meaningful but also highly productive, such as teaching and engineering. (Even recording a video lecture is a lot more fulfilling than plenty of jobs out there.) We could try to direct more people into jobs like these. There are jobs that are neither particularly fulfilling nor especially productive, like driving trucks, washing floors and waiting tables. We could redouble our efforts into automating such jobs out of existence. There are meaningless jobs that are lucrative only by rent-seeking, producing little or no genuine value, like the aforementioned corporate lawyers and derivatives traders. These, quite frankly, could simply be banned—or if there is some need for them in particular circumstances (I guess someone should defend corporations when they get sued; but they far more often go unjustly unpunished than unjustly punished!), strictly regulated and their numbers and pay rates curtailed.

Nevertheless, we still have decisions to make, as a society, about what we value most. Do we want a world of cheap, mostly adequate education, that feels alienating even to the people producing it? Then MOOCs are clearly the way to go; pennies on the dollar for education that could well be half as good! Or do we want a world of high-quality, personalized teaching, by highly-qualified academics, that will help students learn better and feel more fulfilling for the teachers? More pointedly—are we willing to pay for that higher-quality education, knowing it will be more expensive?

Moreover, in the First World at least, our standard of living is… pretty high already? Like seriously, what do we really need that we don’t already have? We could always imagine more, of course—a bigger house, a nicer car, dining at fancier restaurants, and so on. But most of us have roofs over our heads, clothes on our backs, and food on our tables.

Economic growth has done amazing things for us—but maybe we’re kind of… done? Maybe we don’t need to keep growing like this, and should start redirecting our efforts away from greater efficiency and toward greater fulfillment. Maybe there are economic possibilities we haven’t been considering.

Note that I specifically mean First World countries here. In Third World countries it’s totally different—they need growth, lots of it, as fast as possible. Fulfillment at work ends up being a pretty low priority when your children are starving and dying of malaria.

But then, you may wonder: If we stop buying cheap plastic toys to fill the emptiness in our hearts, won’t that throw all those Chinese factory workers back into poverty?

In the system as it stands? Yes, that’s a real concern. A sudden drop in consumption spending in general, or even imports in particular, in First World countries could be economically devastating for millions of people in Third World countries.

But there’s nothing inherent about this arrangement. There are less-alienating ways of working that can still provide a decent standard of living, and there’s no fundamental reason why people around the world couldn’t all be doing them. If they aren’t, it’s in the short run because they don’t have the education or the physical machinery—and in the long run it’s usually because their government is corrupt and authoritarian. A functional democratic government can get you capital and education remarkably fast—it certainly did in South Korea, Taiwan, and Japan.

Automation is clearly a big part of the answer here. Many people in the First World seem to suspect that our way of life depends upon the exploited labor of impoverished people in Third World countries, but this is largely untrue. Most of that work could be done by robots and highly-skilled technicians and engineers; it just isn’t because that would cost more. Yes, that higher cost would mean some reduction in standard of living—but it wouldn’t be nearly as dramatic as many people seem to think. We would have slightly smaller houses and slightly older cars and slightly slower laptops, but we’d still have houses and cars and laptops.

So I don’t think we should all cast off our worldly possessions just yet. Whether or not it would make us better off, it would cause great harm to countries that depend on their exports to us. But in the long run, I do think we should be working to achieve a future for humanity that isn’t so obsessed with efficiency and growth, and instead tries to provide both a decent standard of living and a life of meaning and purpose.

Reversals in progress against poverty

Jan 16 JDN 2459606

I don’t need to tell you that the COVID pandemic has been very bad for the world. Yet perhaps the worst outcome of the pandemic is one that most people don’t recognize: It has reversed years of progress against global poverty.

Estimates of the number of people who will be thrown into extreme poverty as a result of the pandemic are consistently around 100 million, though some forecasts have predicted this will rise to 150 million, or, in the most pessimistic scenarios, even as high as 500 million.

Pre-COVID projections showed the global poverty rate falling steadily from 8.4% in 2019 to 6.3% by 2030. But COVID resulted in the first upward surge in global poverty in decades, and updated models now suggest that the global poverty rate in 2030 will be as high as 7.0%. That difference is 0.7% of a forecasted population of 8.5 billion—so that’s a difference of 59 million people.

This is a terrible reversal of fortune, and a global tragedy. Tens or perhaps even hundreds of millions of people will suffer the pain of poverty because of this global pandemic and the numerous missteps by many of the world’s governments—not least the United States—in response to it.

Yet it’s important to keep in mind that this is a short-term reversal in a long-term trend toward reduced poverty. Yes, the most optimistic predictions are turning out to be wrong—but the general pattern of dramatic reductions in global poverty over the late 20th and early 21st centuries is still holding up.

That post-COVID estimate of a 7.0% global poverty rate needs to be compared against the fact that as recently as 1980, the global poverty rate at the same income level (adjusted for inflation and purchasing power, of course) was a whopping 44%.

This pattern makes me feel deeply ambivalent about the effects of globalization on inequality. While it now seems clear that globalization has exacerbated inequality within First World countries—and triggered a terrible backlash of right-wing populism as a result—it also seems clear that globalization was a major reason for the dramatic reductions in global poverty in the past few decades.

I think the best answer I’ve been able to come up with is that globalization is overall a good thing, and we must continue it—but we also need to be much more mindful of its costs, and we must make policy that mitigates those costs. Expanded trade has winners and losers, and we should be taxing the winners to compensate the losers. To make good economic policy, it simply isn’t enough to increase aggregate GDP; you actually have to make life better for everyone (or at least as many people as you can).

Unfortunately, knowing what policies to make is only half the battle. We must actually implement those policies, which means winning elections, which means restoring the public’s faith in the authority of economic experts.

Some of the people voting for Donald Trump were just what Hillary Clinton correctly (if tone-deafly) referred to as “deplorables”: racists, misogynists, xenophobes. But I think that many others weren’t voting for Trump but against Clinton; they weren’t embracing far-right populism but rather rejecting center-left technocratic globalization. They were tired of being told what to do by experts who didn’t seem to care about them or their interests.

And the thing is, they were right about that. Not about voting for Trump—that’s unforgivable—but about the fact that expert elites had been ignoring their interests and needed a wake-up call. There were a hundred better ways of making that wake-up call that didn’t involve putting a narcissistic, incompetent maniac in charge of the world’s largest economy, military and nuclear arsenal, and millions of people should be ashamed of themselves for not taking those better options. Yet the fact remains: The wake-up call was necessary, and we should be responding to it.

We expert elites (I think I can officially carry that card, now that I have a PhD and a faculty position at a leading research university) need to do a much better job of two things: First, articulating the case for our policy recommendations in a way that ordinary people can understand, so that they feel justified and not simply rammed down people’s throats; and second, recognizing the costs and downsides of these policies and taking action to mitigate them whenever possible.

For instance: Yes, we need to destroy all the coal jobs. They are killing workers and the planet. Coal companies need to be transitioned to new industries or else shut down. This is not optional. It must be done. But we also need to explain to those coal miners why it’s necessary to move on from coal to solar and nuclear, and we need to be implementing various policies to help those workers move on to better, safer jobs that pay as well and don’t involve filling their lungs with soot and the atmosphere with carbon dioxide. We need to articulate, emphasize—and loudly repeat—that this isn’t about hurting coal miners to help everyone else, but about helping everyone, coal miners included, and that if anyone gets hurt it will only be a handful of psychopathic billionaires who already have more money than any human being could possibly need or deserve.

Another example: We cannot stop trading with India and China. Hundreds of millions of innocent people would suddenly be thrown out of work and into poverty if we did. We need the products they make for us, and they need the money we pay for those products. But we must also acknowledge that trading with poor countries does put downward pressure on wages back home, and take action to help First World workers who are now forced to compete with global labor markets. Maybe this takes the form of better unemployment benefits, or job-matching programs, or government-sponsored job training. But we cannot simply shrug and let people lose their jobs and their homes because the factories they worked in were moved to China.

The economics of interstellar travel

Dec 19 JDN 2459568

Since these are rather dark times—the Omicron strain means that COVID is still very much with us, after nearly two years—I thought we could all use something a bit more light-hearted and optimistic.

In 1978 Paul Krugman wrote a paper entitled “The Theory of Interstellar Trade”, which has what is surely one of the greatest abstracts of all time:

This paper extends interplanetary trade theory to an interstellar setting. It is chiefly concerned with the following question: how should interest charges on goods in transit be computed when the goods travel at close to the speed of light? This is a problem because the time taken in transit will appear less to an observer travelling with the goods than to a stationary observer. A solution is derived from economic theory, and two useless but true theorems are proved.

The rest of the paper is equally delightful, and well worth a read. Of particular note are these two sentences, which should give you a feel: “The rest of the paper is, will be, or has been, depending on the reader’s inertial frame, divided into three sections.” and “This extension is left as an exercise for interested readers because the author does not understand general relativity, and therefore cannot do it himself.”

As someone with training in both economics and relativistic physics, I can tell you that Krugman’s analysis is entirely valid, given its assumptions. (Really, this is unsurprising: He’s a Nobel Laureate. One could imagine he got his physics wrong, but he didn’t—and of course he didn’t get his economics wrong.) But, like much high-falutin economic theory, it relies upon assumptions that are unlikely to be true.

Set aside the assumptions of perfect competition and unlimited arbitrage that yield Krugman’s key result of equalized interest rates. These are indeed implausible, but they’re also so standard in economics as to be pedestrian.

No, what really concerns me is this: Why bother with interstellar trade at all?

Don’t get me wrong: I’m all in favor of interstellar travel and interstellar colonization. I want humanity to expand and explore the galaxy (or rather, I want that to be done by whatever humanity becomes, likely some kind of cybernetically and biogenetically enhanced transhumans in endless varieties we can scarcely imagine). But once we’ve gone through all the effort to spread ourselves to distant stars, it’s not clear to me that we’d ever have much reason to trade across interstellar distances.

If we ever manage to invent efficient, reliable, affordable faster-than-light (FTL) travel à la Star Trek, sure. In that case, there’s no fundamental difference between interstellar trade and any other kind of trade. But that’s not what Krugman’s paper is about, as its key theorems are actually about interest rates and prices in different inertial reference frames, which is only relevant if you’re limited to relativistic—that is, slower-than-light—velocities.

Moreover, as far as we can tell, that’s impossible. Yes, there are still some vague slivers of hope left with the Alcubierre Drive, wormholes, etc.; but by far the most likely scenario is that FTL travel is simply impossible and always will be.

FTL communication is much more plausible, as it merely requires the exploitation of nonlocal quantum entanglement outside quantum equilibrium; if the Bohm Interpretation is correct (as I strongly believe it is), then this is a technological problem rather than a theoretical one. At best this might one day lead to some form of nonlocal teleportation—but definitely not FTL starships. Since our souls are made of software, sending information can, in principle, send a person; but we almost surely won’t be sending mass faster than light.

So let’s assume, as Krugman did, that we will be limited to travel close to, but less than, the speed of light. (I recently picked up a term for this from Ursula K. Le Guin: “NAFAL”, “nearly-as-fast-as-light”.)

This means that any transfer of material from one star system to another will take, at minimum, years. It could even be decades or centuries, depending on how close to the speed of light we are able to get.

Assuming we have abundant antimatter or some similarly extremely energy-dense propulsion, it would be reasonable to expect that we could build interstellar spacecraft that would be capable of accelerating at approximately Earth gravity (i.e. 1 g) for several years at a time. This would be quite comfortable for the crew of the ship—it would just feel like standing on Earth. And it turns out that this is sufficient to attain velocities quite close to the speed of light over the distances to nearby stars.

I will spare you the complicated derivation, but there are well-known equations which allow us to convert from proper acceleration (the acceleration felt on a spacecraft, i.e. 1 g in this case) to maximum velocity and total travel time, and they imply that a vessel which was constantly accelerating at 1 g (speeding up for the first half, then slowing down for the second half) could reach most nearby stars within about 50 to 100 years Earth time, or as little as 10 to 20 years ship time.
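
For the curious, here is a short sketch of those standard relativistic rocket formulas in code, using units where c = 1 light-year per year (so 1 g is about 1.03 in these units); the particular destination distances are just illustrative:

```python
import math

# Travel times under constant proper acceleration, speeding up for the first
# half of the trip and slowing down for the second half. Units: light-years
# and years, with c = 1, so 1 g is roughly 1.03 ly/yr^2. The destination
# distances below are illustrative.

G = 1.03  # 1 g in light-years per year squared (approximately)

def travel_times(distance_ly, accel=G):
    half = distance_ly / 2
    # Standard relativistic rocket relations for each half of the trip:
    #   x = (cosh(a * tau) - 1) / a   and   t = sinh(a * tau) / a
    tau_half = math.acosh(1 + accel * half) / accel   # proper (ship) time
    t_half = math.sinh(accel * tau_half) / accel      # coordinate (Earth) time
    return 2 * t_half, 2 * tau_half

for d in (4.2, 10.5, 50.0, 100.0):
    earth, ship = travel_times(d)
    print(f"{d:6.1f} ly: {earth:6.1f} yr Earth time, {ship:5.1f} yr ship time")
```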

With higher levels of acceleration, you can shorten the trip; but that would require designing ships (or engineering crews?) in such a way as to sustain these high levels of acceleration for years at a time. Humans can sustain 3 g’s for hours, but not for years.

Even with only 1-g acceleration, the fuel costs for such a trip are staggering: Even with antimatter fuel you need dozens or hundreds of times as much mass in fuel as you have in payload—and with anything less than antimatter it’s basically just not possible. Yet there is nothing in the laws of physics saying you can’t do it, and I believe that someday we will.

Yet I sincerely doubt we would want to make such trips often. It’s one thing to send occasional waves of colonists, perhaps one each generation. It’s quite another to establish real two-way trade in goods.

Imagine placing an order for something—anything—and not receiving it for another 50 years. Even if, as I hope and believe, our descendants have attained far longer lifespans than we have, asymptotically approaching immortality, it seems unlikely that they’d be willing to wait decades for their shipments to arrive. In the same amount of time you could establish an entire industry in your own star system, built from the ground up, fully scaled to service entire planets.

In order to justify such a transit, you need to be carrying something truly impossible to produce locally. And there just won’t be very many such things.

People, yes. Definitely in the first wave of colonization, but likely in later waves as well, people will want to move themselves and their families across star systems, and will be willing to wait (especially since the time they experience on the ship won’t be nearly as daunting).

And there will be knowledge and experiences that are unique to particular star systems—but we’ll be sending that by radio signal and it will only take as many years as there are light-years between us; or we may even manage to figure out FTL ansibles and send it even faster than that.

It’s difficult for me to imagine what sort of goods could ever be so precious, so irreplaceable, that it would actually make sense to trade them across an interstellar distance. All habitable planets are likely to be made of essentially the same elements, in approximately the same proportions; whatever you may want, it’s almost certainly going to be easier to get it locally than it would be to buy it from another star system.

This is also why I think alien invasion is unlikely: There’s nothing they would particularly want from us that they couldn’t get more easily. Their most likely reason for invading would be specifically to conquer and rule us.

Certainly if you want gold or neodymium or deuterium, it’ll be thousands of times easier to get it at home. But even if you want something hard to make, like antimatter, or something organic and unique, like oregano, building up the industry to manufacture a product or the agriculture to grow a living organism is almost certainly going to be faster and easier than buying it from another solar system.

This is why I believe that for the first generation of interstellar colonists, imports will be textbooks, blueprints, and schematics to help build, and films, games, and songs to stay entertained and tied to home; exports will consist of scientific data about the new planet as well as artistic depictions of life on an alien world. For later generations, it won’t be so lopsided: The colonies will have new ideas in science and engineering as well as new art forms to share. Billions of people on Earth and thousands or millions on each colony world will await each new transmission of knowledge and art with bated breath.

Long-distance trade historically was mainly conducted via precious metals such as gold; but if interstellar travel is feasible, gold is going to be dirt cheap. Any civilization capable of even sending a small intrepid crew of colonists to Epsilon Eridani is going to consider mining asteroids an utterly trivial task.

Will such transactions involve money? Will we sell these ideas, or simply give them away? Unlike my previous post where I focused on the local economy, here I find myself agreeing with Star Trek: Money isn’t going to make sense for interstellar travel. Unless we have very fast communication, the time lag between paying money out and then seeing it circulate back will be so long that the money returned to you will be basically worthless. And that’s assuming you figure out a way to make transactions clear that doesn’t require real-time authentication—because you won’t have it.

Consider Epsilon Eridani, a plausible choice for one of the first star systems we will colonize. That’s 10.5 light-years away, so a round-trip signal will take 21 years. If inflation is a steady 2%, that means that $100 today will need to come back as $151 to have the same value by the time you hear back from your transaction. If you had the option to invest in a 5% bond instead, you’d have $279 by then. And this is a nearby star.
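
Those figures are easy to verify; the 2% inflation rate and the 5% bond yield are of course just the illustrative rates assumed above:

```python
# Compounding over the 21-year round-trip signal delay to Epsilon Eridani.
# The 2% inflation rate and 5% bond yield are the illustrative rates from
# the example above.

years = 21         # 10.5 light-years each way, signaling at light speed
principal = 100.0

keeps_pace_with_inflation = principal * 1.02 ** years  # what $100 must become just to break even
five_percent_bond = principal * 1.05 ** years          # what a 5% bond would have returned

print(round(keeps_pace_with_inflation, 2))  # 151.57, i.e. about $151
print(round(five_percent_bond, 2))          # 278.6, i.e. about $279
```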

It would be much easier to simply trade data for data, maybe just gigabyte for gigabyte or maybe by some more sophisticated notion of relative prices. You don’t need to worry about what your dollar will be worth 20 years from now; you know how much effort went into designing that blueprint for an antimatter processor and you know how much you’ll appreciate seeing that VR documentary on the rings of Aegir. You may even have in mind how much it cost you to pay people to design prototypes and how much you can sell the documentary for; but those monetary transactions will be conducted within your own star system, independently of whatever monetary system prevails on other stars.

Indeed, it’s likely that we wouldn’t even bother trying to negotiate how much to send—because that itself would have such overhead and face the same time-lags—and would instead simply make a habit of sending everything we possibly can. Such interchanges could be managed by governments at each end, supported by public endowments. “This year’s content from Epsilon Eridani, brought to you by the Smithsonian Institution.”

We probably won’t ever have—or need, or want—huge freighter ships carrying containers of goods from star to star. But with any luck, we will one day have art and ideas from across the galaxy shared by all of the endless variety of beings humanity has become.

Are unions collusion?

Oct 31 JDN 2459519

The standard argument from center-right economists against labor unions is that they are a form of collusion: Producers are coordinating and intentionally holding back from what would be in their individual self-interest in order to gain a collective advantage. And this is basically true: In the broadest sense of the term, labor unions are a form of collusion. Since collusion is generally regarded as bad, therefore (this argument goes), unions are bad.

What this argument misses out on is why collusion is generally regarded as bad. The typical case for collusion is between large corporations, each of which already controls a large share of the market—collusion then allows them to act as if they control an even larger share, potentially even acting as a monopoly.

Labor unions are not like this. Literally no individual laborer controls a large segment of the market. (Some very specialized laborers, like professional athletes, or, say, economists, might control a not completely trivial segment of their particular job market—but we’re still talking something like 1% at most. Even Tiger Woods or Paul Krugman is not literally irreplaceable.) Moreover, even the largest unions can rarely achieve anything like a monopoly over a particular labor market.

Thus whereas typical collusion involves going from a large market share to an even larger—often even dominant—market share, labor unions involve going from a tiny market share to a moderate—and usually not dominant—market share.

But that, by itself, wouldn’t be enough to justify unions. While small family businesses banding together in collusion is surely less harmful than large corporations doing the same, it would probably still be a bad thing, insofar as it would raise prices and reduce the quantity or quality of products sold. It would just be less bad.

Yet unions differ from even this milder collusion in another important respect: They do not exist to increase bargaining power versus consumers. They exist to increase bargaining power versus corporations.

And corporations, it turns out, already have a great deal of bargaining power. While a labor union acts as something like a monopoly (or at least oligopoly), corporations act like the opposite: oligopsony or even monopsony.

While monopoly or monopsony on its own is highly unfair and inefficient, the combination of the two—bilateral monopoly—is actually relatively fair and efficient. Bilateral monopoly is probably not as good as a truly competitive market, but it is definitely better than either a monopoly or monopsony alone. Whereas a monopoly has too much bargaining power for the seller (resulting in prices that are too high), and a monopsony has too much bargaining power for the buyer (resulting in prices that are too low), a bilateral monopoly has relatively balanced bargaining power, and thus gets an outcome that’s not too much different from fair competition in a free market.
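
Here is a deliberately stripped-down numerical sketch of that logic; the linear supply and demand curves and the split-the-difference bargaining rule are illustrative assumptions, not a general model. A lone employer (monopsony) pushes the wage below the competitive level, a lone union (monopoly) pushes it above, and bargaining between the two lands back near the competitive wage:

```python
# A stripped-down labor market: workers' reservation wages (inverse labor
# supply) w = L, and labor demand (marginal revenue product) w = 100 - L.
# The linear curves and the split-the-difference bargaining rule are
# illustrative assumptions, not a general model.

def competitive():
    # Supply meets demand: L = 100 - L, so L = 50 and w = 50.
    L = 50.0
    return L, L

def monopsony():
    # A single employer faces marginal labor cost 2L (total cost is L * L),
    # hires where 100 - L = 2L, and pays only the supply wage w = L.
    L = 100 / 3
    return L, L

def union_monopoly():
    # A single union "sells" labor where marginal revenue 100 - 2L equals
    # the reservation wage L, and charges the demand wage w = 100 - L.
    L = 100 / 3
    return L, 100 - L

def bilateral_monopoly():
    # With both sides organized, the wage is set by bargaining; as a simple
    # benchmark, split the difference between the two one-sided wages.
    _, w_low = monopsony()
    _, w_high = union_monopoly()
    return (w_low + w_high) / 2

print("competitive wage:  ", competitive()[1])               # 50.0
print("monopsony wage:    ", round(monopsony()[1], 1))       # 33.3 (too low)
print("union-only wage:   ", round(union_monopoly()[1], 1))  # 66.7 (too high)
print("bilateral bargain: ", bilateral_monopoly())           # 50.0 (near competitive)
```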

Thus, unions really exist as a correction mechanism for the excessive bargaining power of corporations. Most unions are made up of workers in large industries who work for a relatively small number of employers, such as miners, truckers, and factory workers. (Teachers are also an interesting example, because they work for the government, which effectively has a monopsony on public education services.) In isolation unions may seem inefficient; but in context they exist to compensate for other, worse inefficiencies.

We could imagine a world where this was not so: Say there is a market with many independent buyers who are unwilling or unable to reliably collude, and they are served by a small number of powerful unions that use their bargaining power to raise prices and reduce output.

We have some markets that already look a bit like that: Consider the licensing systems for doctors and lawyers. These are basically guilds, which are collusive in the same way as labor unions.

Note that unlike, say, miners, truckers, or factory workers, doctors and lawyers are not a large segment of the population; they are bargaining against consumers just as much as against corporations; and they are extremely well-paid and very likely undersupplied. (Doctors are definitely undersupplied; with lawyers it’s a bit more complicated, but given how often corporations get away with terrible things and don’t get sued for it, I think it’s fair to say that in the current system, lawyers are undersupplied.) So I think it is fair to be concerned that the guild systems for doctors and lawyers are too powerful. We want some system for certifying the quality of doctors and lawyers, but the existing standards are so demanding that they result in a shortage of much-needed labor.

One way to tell that unions aren’t inefficient is to look at how unionization relates to unemployment. If unions were acting as a harmful monopoly on labor, unemployment should be higher in places with greater unionization rates. The empirical data suggests that if there is any such effect, it’s a small one. There are far more important determinants of unemployment than unionization. (Wages, on the other hand, show a strong positive link with unionization.) Much like the standard prediction that raising the minimum wage would reduce employment, the prediction that unions raise unemployment has largely not been borne out by the data. And for much the same reason: We had ignored the bargaining power of employers, which minimum wages and unions both reduce.

Thus, the justifiability of unions isn’t something that we could infer a priori without looking at the actual structure of the labor market. Unions aren’t always or inherently good—but they are usually good in the system as it stands. (Actually there’s one particular class of unions that does not seem to be good, and that’s police unions: But this is a topic for another time.)

My ultimate conclusion? Yes, unions are a form of collusion. But to infer from this that they must be bad is to commit the Noncentral Fallacy. Unions are the good kind of collusion.

When to give up

Jun 6 JDN 2459372

Perseverance is widely regarded as a virtue, and for good reason. Often one of the most important deciding factors in success is the capacity to keep trying after repeated failure. I think this has been a major barrier for me personally; many things came easily to me when I was young, and I internalized the sense that if something doesn’t come easily, it must be beyond my reach.

Yet it’s also worth noting that this is not the only deciding factor—some things really are beyond our capabilities. Indeed, some things are outright impossible. And we often don’t know what is possible and what isn’t.

This raises the question: When should we persevere, and when should we give up?

There is actually reason to think that people often don’t give up when they should. Steven Levitt (of Freakonomics fame) recently published a study that asked people who were on the verge of a difficult decision to flip a coin, and then base their decision on the coin flip: Heads, make a change; tails, keep things as they are. Many didn’t actually follow the coin flip—but enough did that there was a statistical difference between those who saw heads and those who saw tails. The study found that the people who flipped heads and made a change were on average happier a couple of years later than the people who flipped tails and kept things as they were.

This question is particularly salient for me lately, because the academic job market has gone so poorly for me. I’ve spent most of my life believing that academia is where I belong; my intellect and my passion for teaching and research have convinced me and many others that this is the right path for me. But now that I have a taste of what it is actually like to apply for tenure-track jobs and submit papers to journals, I am utterly miserable. I hate every minute of it. I’ve spent the entire past year depressed and feeling like I have accomplished absolutely nothing.

In theory, once one actually gets tenure it’s supposed to get easier. But that could be a long way away—or it might never happen at all. As it is, there’s basically no chance I’ll get a tenure-track position this year, and it’s unclear what my chances would be if I tried again next year.

If I could actually get a paper published, that would no doubt improve my odds of landing a better job next year. But I haven’t been able to do that, and each new rejection cuts so deep that I can barely stand to look at my papers anymore, much less actually continue submitting them. And apparently even tenured professors still get their papers rejected repeatedly, which means that this pain will never go away. I simply cannot imagine being happy if this is what I am expected to do for the rest of my life.

I found this list of criteria for when you should give up something—and most of them fit me. I’m not sure I know in my heart it can’t work out, but I increasingly suspect that. I’m not sure I want it anymore, now that I have a better idea of what it’s really like. Pursuing it is definitely making me utterly miserable. I wouldn’t say it’s the only reason, but I definitely do worry what other people will think if I quit; I feel like I’d be letting a lot of people down. I also wonder who I am without it, where I belong if not here. I don’t know what other paths are out there, but maybe there is something better. This constant stream of failure and rejection has definitely made me feel like I hate myself. And above all, when I imagine quitting, I absolutely feel an enormous sense of relief.

Publishing in journals seems to be the thing that successful academics care about most, and it means almost nothing to me anymore. I only want it because of all the pressure to have it, because of all the rewards that come from having it. It has become fully instrumental to me, with no intrinsic meaning or value. I have no particular desire to be lauded by the same system that lauded Fischer Black or Kenneth Rogoff—both of whose egregious and easily-avoidable mistakes are responsible for the suffering of millions of people around the world.

I want people to read my ideas. But people don’t actually read journals. They skim them. They read the abstracts. They look at the graphs and regression tables. (You have the meeting that should have been an email? I raise you the paper that should have been a regression table.) They see if there’s something in there that they should be citing for their own work, and if there is, maybe then they actually read the paper—but everyone is so hyper-specialized that only a handful of people will ever actually want to cite any given paper. The vast majority of research papers are incredibly tedious to read and very few people actually bother. As a method for disseminating ideas, this is perhaps slightly better than standing on a street corner and shouting into a megaphone.

I would much rather write books; people sometimes actually read books, especially when they are written for a wide audience and hence not forced into the straitjacket of standard ‘scientific writing’ that no human being actually gets any enjoyment out of writing or reading. I’ve seen a pretty clear improvement in the writing quality of papers written by Nobel laureates—after they get their Nobels or similar accolades. Once they establish themselves, they are free to actually write in ways that are compelling and interesting, rather than having to present everything in the most dry, tedious way possible. If your paper reads like something that a normal person would actually find interesting or enjoyable to read, you will be—as I have been—immediately told that you must remove all such dangerous flavor until the result is as tasteless as possible.

No, the purpose of research journals is not to share ideas. Their function is not to share, but to evaluate. And it isn’t even really to evaluate research—it’s to evaluate researchers. The system exists to outsource the work of academic hiring to an utterly unaccountable and arbitrary process run mostly by for-profit corporations. It may have some secondary effect of evaluating ideas for validity; at least the really awful ideas are usually excluded. But its primary function is to decide the academic pecking order.

I had thought that scientific peer review was supposed to select for truth. Perhaps sometimes it does. It seems to do so reasonably well in the natural sciences, at least. But in the social sciences? That’s far less clear. Peer-reviewed papers are much more likely to be accurate than any randomly-selected content; but there are still a disturbingly large number of peer-reviewed published papers that are utterly wrong, and some unknown but undoubtedly vast number of good papers that have never seen the light of day.

Then again, when I imagine giving up on an academic career, I don’t just feel relief—I also feel regret and loss. I feel like I’ve wasted years of my life putting together a dream that has now crumbled in my hands. I even feel some anger, some sense that I was betrayed by those who told me that this was about doing good research when it turns out it’s actually about being thick-skinned enough that you can take an endless assault of rejections. It feels like I’ve been running a marathon, and I just rounded a curve to discover that the last five miles must be ridden on horseback, when I don’t have a horse, I have no equestrian training, and in fact I’m allergic to horses.

I wish someone had told me it would be like this. Maybe they tried and I didn’t listen. They did say that papers would get rejected. They did say that the tenure track was high-pressure and publish-or-perish was a major source of anxiety. But they never said that it would tear at my soul like this. They never said that I would have to go through multiple rounds of agony, self-doubt, and despair in order to get even the slightest recognition for my years of work. They never said that the whole field would treat me like I’m worthless because I can’t satisfy the arbitrary demands of a handful of anonymous reviewers. They never said that I would begin to feel worthless after several rounds of this.

That’s really what I want to give up on. I want to give up on hitching my financial security, my career, my future, my self-worth to a system as capricious as peer review.

I don’t want to give up on research. I don’t want to give up on teaching. I still believe strongly in discovering new truths and sharing them with others. I’m just increasingly realizing that academia isn’t nearly as good at that as I thought it was.

It isn’t even that I think it’s impossible for me to succeed in academia. I think that if I continued trying to get a tenure-track job, I would land one eventually. Maybe next year. Or maybe I’d spend a few years at a postdoc first. And I’d probably manage to publish some paper in some reasonably respectable journal at some point in the future. But I don’t know how long it would take, or how good a journal it would be—and I’m already past the point where I really don’t care anymore, where I can’t afford to care, where if I really allowed myself to care it would only devastate me when I inevitably fail again. Now that I see what is really involved in the process, how arduous and arbitrary it is, publishing in a journal means almost nothing to me. I want to be validated; I want to be appreciated; I want to be recognized. But the system is set up to provide nothing but rejection, rejection, rejection. If even the best work won’t be recognized immediately and even the worst work can make it with enough tries, then the whole system begins to seem meaningless. It’s just rolls of the dice. And I didn’t sign up to be a gambler.
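Just to put that “rolls of the dice” feeling into numbers, here is a back-of-the-envelope sketch. The acceptance probability and review time below are assumptions I made up for illustration, not estimates of any real journal; the only point is how the arithmetic of repeated independent rejections compounds.

# Back-of-the-envelope: treat each submission as an independent roll of the dice.
# Both numbers below are assumptions for illustration, not data about any journal.
p_accept = 0.10          # assumed chance that any one submission is accepted
months_per_round = 6     # assumed time for one round of review and resubmission

print(f"expected submissions until acceptance: {1 / p_accept:.0f}")
print(f"expected wait: {(1 / p_accept) * months_per_round / 12:.1f} years")

# Chance of at least one acceptance within n submissions: 1 - (1 - p)^n
for n in (3, 6, 12):
    years = n * months_per_round / 12
    prob = 1 - (1 - p_accept) ** n
    print(f"{n:2d} submissions (~{years:.1f} years): {prob:.0%} chance of acceptance")

Under those made-up numbers, even a paper that will eventually be accepted has roughly coin-flip odds of still being unpublished after three years of trying. That is the gamble I mean.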

The job market will probably be better next year than it was this year. But how much better? Yes, there will be more openings, but there will also be more applicants: Everyone who would normally be on the market, plus everyone like me who didn’t make it this year, plus everyone who decided to hold back this year because they knew they wouldn’t make it (as I probably should have done). Yes, in a normal year, I could be fairly confident of getting some reasonably decent position—but this wasn’t a normal year, and next year won’t be one either, and the one after that might still not be. If I can’t get a paper published in a good journal between now and then—and I’m increasingly convinced that I can’t—then I really can’t expect my odds to be greatly improved from what they were this time around. And if I don’t know that this terrible gauntlet is going to lead to something good, I’d really much rather avoid it altogether. It was miserable enough when I went into it being (over)confident that it would work out all right.

Perhaps the most important question when deciding whether to give up is this: What will happen if you do? What alternatives do you have? If giving up means dying, then don’t give up. (“Learn to let go” is very bad advice to someone hanging from the edge of a cliff.) But while it may feel that way sometimes, rarely does giving up on a career or a relationship or a project yield such catastrophic results.

When people are on the fence about making a change and then do so, even based on the flip of a coin, it usually makes them better off. Note that this is different from saying you should make all your decisions randomly; if you are confident that you don’t want to make a change, don’t make a change. This advice is for people who feel like they want a change but are afraid to take the chance, people who find themselves ambivalent about what direction to go next—people like me.

I don’t know where I should go next. I don’t know where I belong. I know it isn’t Wall Street. I’m pretty sure it’s not consulting. Maybe it’s nonprofits. Maybe it’s government. Maybe it’s freelance writing. Maybe it’s starting my own business. I guess I’d still consider working in academia; if Purdue called me back to say they made a terrible mistake and they want me after all, I’d probably take the offer. But since such an outcome is now vanishingly unlikely, perhaps it’s time, after all, to give up.

Social science is broken. Can we fix it?

May 16 JDN 2459349

Social science is broken. I am of course not the first to say so. The Atlantic recently published an article outlining the sorry state of scientific publishing, and several years ago Slate Star Codex published a lengthy post (with somewhat harsher language than I generally use on this blog) showing how parapsychology, despite being obviously false, can still meet the standards that most social science is expected to meet. I myself discussed the replication crisis in social science on this very blog a few years back.
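Here is a small simulation of what that looks like in practice; every number in it is mine, chosen purely for illustration, and it is a sketch of the statistics rather than a claim about how any particular study was run. Each simulated lab studies an effect that is truly zero, runs a handful of small studies, and counts itself as having “found” something whenever any one of them clears p < 0.05.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative parameters, chosen arbitrarily for this sketch.
n_labs = 2000            # simulated research groups
studies_per_lab = 8      # small studies each group runs on a truly null effect
n_per_group = 30         # participants per arm in each study

labs_with_false_positive = 0
for _ in range(n_labs):
    # Each study compares two groups drawn from the *same* distribution,
    # so any "significant" difference is a false positive by construction.
    pvals = [
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(studies_per_lab)
    ]
    if min(pvals) < 0.05:
        labs_with_false_positive += 1

print(f"simulated share of labs with a 'significant' null result: "
      f"{labs_with_false_positive / n_labs:.0%}")
print(f"analytic benchmark, 1 - 0.95**{studies_per_lab}: "
      f"{1 - 0.95 ** studies_per_lab:.0%}")

Roughly a third of these imaginary labs end up with a publishable-looking result for an effect that does not exist, without a single dishonest step: just keep running studies, and the magic threshold eventually obliges. That is the kind of standard parapsychology was able to meet.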

I was pessimistic then about the incentives of scientific publishing being fixed any time soon, and I am even more pessimistic now.

Back then I noted that journals are often run by for-profit corporations that care more about getting attention than getting the facts right, university administrations are incompetent and top-heavy, and publish-or-perish creates cutthroat competition without providing incentives for genuinely rigorous research. But these are widely known facts, even if so few in the scientific community seem willing to face up to them.

Now I am increasingly concerned that the reason we aren’t fixing this system is that the people with the most power to fix it don’t want to. (Indeed, as I have learned more about political economy I have come to believe this more and more about all the broken institutions in the world. American democracy has its deep flaws because politicians like it that way. China’s government is corrupt because that corruption is profitable for many of China’s leaders. Et cetera.)

I know economics best, so that is where I will focus; but most of what I’m saying here also applies to other social sciences such as sociology and psychology. (Indeed it was psychology that published Daryl Bem.)

Reinhart and Rogoff’s 2010 article “Growth in a Time of Debt”, which was a weak correlation-based argument to begin with, was later revealed (by an intrepid grad student named Thomas Herndon) to be based upon deep, fundamental errors. Yet the article remains published, without any notice of retraction or correction, in the American Economic Review, probably the most prestigious journal in economics (and undeniably one of the vaunted “Top Five”). And the paper itself was widely used by governments around the world to justify massive austerity policies—which backfired with catastrophic consequences.

Why wouldn’t the AER remove the article from their website? Or issue a retraction? Or at least add a note on the page explaining the errors? If their primary concern were scientific truth, they would have done something like this. Their failure to do so is a silence that speaks volumes, a hound that didn’t bark in the night.

It’s rational, if incredibly selfish, for Reinhart and Rogoff themselves not to want a retraction. It was one of their most widely-cited papers. But why wouldn’t the AER’s editors want to retract a paper that had been so embarrassingly debunked?

And so I came to realize: These are all people who have succeeded in the current system. Their work is valued, respected, and supported by the system of scientific publishing as it stands. If we were to radically change that system, as we would necessarily have to do in order to re-align incentives toward scientific truth, they would stand to lose, because they would suddenly be competing against people who are not as good at satisfying the magical p < 0.05, but who are at least as good at actual science as they are, perhaps even better.

I know how they would respond to this criticism: I’m someone who hasn’t succeeded in the current system, so I’m biased against it. This is true, to some extent. Indeed, I take it quite seriously, because while tenured professors stand to lose prestige, they can’t really lose their jobs even if there is a sudden flood of far superior research. So in directly economic terms, we would expect the bias against the current system among grad students, adjuncts, and assistant professors to be larger than the bias in favor of the current system among tenured professors and prestigious researchers.

Yet there are other motives aside from money: Norms and social status are among the most powerful motivations human beings have, and these biases are far stronger in favor of the current system—even among grad students and junior faculty. Grad school is many things, some good, some bad; but one of them is a ritual gauntlet that indoctrinates you into the belief that working in academia is the One True Path, without which your life is a failure. If your claim is that grad students are upset at the current system because we overestimate our own qualifications and are feeling sour grapes, you need to explain the prevalence of Impostor Syndrome among us. By and large, grad students don’t overestimate our abilities—we underestimate them. If we think we’re as good at this as you are, that probably means we’re better. Indeed I have little doubt that Thomas Herndon is a better economist than Kenneth Rogoff will ever be.

I have additional evidence that insider bias is important here: When Paul Romer—Nobel laureate—left academia, he published an utterly scathing criticism of the state of academic macroeconomics. That is, once he had escaped the incentives toward insider bias, he turned against the entire field.

Romer pulls absolutely no punches: He literally compares the standard methods of DSGE models to “phlogiston” and “gremlins”. And the paper is worth reading, because it’s obviously entirely correct; every single punch lands on target. It’s also a pretty fun read, at least if you have the background knowledge to appreciate the dry in-jokes. (Much like “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity.” I still laugh out loud every time I read the phrase “hegemonic Zermelo-Fraenkel axioms”, though I realize most people would be utterly nonplussed. For the uninitiated, these are the Zermelo-Fraenkel axioms. Can’t you just see the colonialist imperialism in sentences like “\forall x \, \forall y \, [\forall z \, (z \in x \iff z \in y) \implies x = y]”?)

In other words, the Upton Sinclair Principle seems to apply here: “It is difficult to get a man to understand something when his salary depends upon not understanding it.” The people with the most power to change the system of scientific publishing are journal editors and prestigious researchers, and they are the people for whom the current system is working quite swimmingly.

It’s not that good science can’t succeed in the current system—it often does. In fact, I’m willing to grant that it almost always does, eventually. When the evidence has mounted for long enough and the most adamant defenders of the ancien régime finally retire or die, then, at last, the paradigm will shift. But this process takes literally decades longer than it should. In principle, a wrong theory can be invalidated by a single rigorous experiment. In practice, it generally takes about 30 years of experiments, most of which don’t get published, until the powers that be finally give in.

This delay has serious consequences. It means that many of the researchers working on the forefront of a new paradigm—precisely the people that the scientific community ought to be supporting most—will suffer from being unable to publish their work, get grant funding, or even get hired in the first place. It means that not only will good science take too long to win, but that much good science will never get done at all, because the people who wanted to do it couldn’t find the support they needed to do so. This means that the delay is in fact much longer than it appears: Because it took 30 years for one good idea to take hold, all the other good ideas that would have sprung from it in that time will be lost, at least until someone in the future comes up with them.

I don’t think I’ll ever forget it: At the AEA conference a few years back, I went to a luncheon celebrating Richard Thaler, one of the founders of behavioral economics, whom I regard as one of the top 5 greatest economists of the 20th century (I’m thinking something like, “Keynes > Nash > Thaler > Ramsey > Schelling”). Yes, now he is being rightfully recognized for his seminal work; he won a Nobel, and he has an endowed chair at Chicago, and he got an AEA luncheon in his honor among many other accolades. But it was not always so. Someone speaking at the luncheon offhandedly remarked something like, “Did we think Richard would win a Nobel? Honestly most of us weren’t sure he’d get tenure.” Most of the room laughed; I had to resist the urge to scream. If Richard Thaler wasn’t certain to get tenure, then the entire system is broken. This would be like finding out that Erwin Schrödinger or Niels Bohr wasn’t sure he would get tenure in physics.

A. Gary Shilling, a renowned Wall Street economist (read: One Who Has Turned to the Dark Side), once remarked (the quote is often falsely attributed to Keynes): “Markets can remain irrational a lot longer than you and I can remain solvent.” In the same spirit, I would say this: The scientific community can remain wrong a lot longer than you and I can extend our graduate fellowships and tenure clocks.