Finance is the commodification of trust

Jul 18 JDN 2459414

What is it about finance?

Why is it that whenever we have an economic crisis, it seems to be triggered by the financial industry? Why has the dramatic rise in income and wealth inequality come in tandem with a rise in finance as a proportion of our economic output? Why are so many major banks implicated in crimes ranging from tax evasion to money laundering for terrorists?

In other words, why are the people who run our financial industry such utter scum? What is it about finance that it seems to attract the very worst people on Earth?

One obvious answer is that it is extremely lucrative: Incomes in the financial industry are higher than almost any other industry. Perhaps people who are particularly unscrupulous are drawn to the industries that make the most money, and don’t care about much else. But other people like making money too, so this is far from a full explanation. Indeed, incomes for physicists are comparable to those of Wall Street brokers, yet physicists rarely seem to be implicated in mass corruption scandals.

I think there is a deeper reason: Finance is the commodification of trust.

Many industries sell products, physical artifacts like shirts or televisions. Others sell services like healthcare or auto repair, which involve the physical movement of objects through space. Information-based industries are a bit different—what a software developer or an economist sells isn’t really a physical object moving through space. But then what they are selling is something more like knowledge—information that can be used to do useful things.

Finance is different. When you make a loan or sell a stock, you aren’t selling a thing—and you aren’t really doing a thing either. You aren’t selling information, either. You’re selling trust. You are making money by making promises.

Most people are generally uncomfortable with the idea of selling promises. It isn’t that we’d never do it—but we’re reluctant to do it. We try to avoid it whenever we can. But if you want to be successful in finance, you can’t have that kind of reluctance. To succeed on Wall Street, you need to be constantly selling trust every hour of every day.

Don’t get me wrong: Certain kinds of finance are tremendously useful, and we’d be much worse off without them. I would never want to get rid of government bonds, auto loans or home mortgages. I’m actually pretty reluctant to even get rid of student loans, despite the large personal benefits I would get if all student loans were suddenly forgiven. (I would be okay with a system like Elizabeth Warren’s proposal, where people with college degrees pay a surtax that supports free tuition. The problem with most proposals for free college is that they make people who never went to college pay for those who did, and that seems unfair and regressive to me.)

But the Medieval suspicion against "usury"—the notion that there is something immoral about making money just from having money and making promises—isn't entirely unfounded. There really is something deeply problematic about a system in which the best way to get rich is to sell commodified packages of trust, and the best way to make money is to already have it.

Moreover, the more complex finance gets, the more divorced it becomes from genuinely necessary transactions, and the more commodified it becomes. A mortgage deal that you make with a particular banker in your own community isn’t particularly commodified; a mortgage that is sliced and redistributed into mortgage-backed securities that are sold anonymously around the world is about as commodified as anything can be. It’s rather like the difference between buying a bag of apples from your town farmers’ market versus ordering a barrel of apple juice concentrate. (And of course the most commodified version of all is the financial one: buying apple juice concentrate futures.)

Commodified trust is trust that has lost its connection to real human needs. Those bankers who foreclosed on thousands of mortgages (many of them illegally) weren’t thinking about the people they were making homeless—why would they, when for them those people have always been nothing more than numbers on a spreadsheet? Your local banker might be willing to work with you to help you keep your home, because they see you as a person. (They might not for various reasons, but at least they might.) But there’s no reason for HSBC to do so, especially when they know that they are so rich and powerful they can get away with just about anything (have I mentioned money laundering for terrorists?).

I don’t think we can get rid of finance. We will always need some mechanism to let people who need money but don’t have it borrow that money from people who have it but don’t need it, and it makes sense to have interest charges to compensate lenders for the time and risk involved.

Yet there is much of finance we can clearly dispense with. Credit default swaps could simply be banned, and we’d gain much and lose little. Credit default swaps are basically unregulated insurance, and there’s no reason to allow that. If banks need insurance, they can buy the regulated kind like everyone else. Those regulations are there for a reason. We could ban collateralized debt obligations and similar tranche-based securities, again with far more benefit than harm. We probably still need stocks and commodity futures, and perhaps also stock options—but we could regulate their sale considerably more, particularly with regard to short-selling. Banking should be boring.

Some amount of commodification may be inevitable, but clearly much of what we currently have could be eliminated. In particular, the selling of loans should simply be banned. Maybe even your local banker won’t ever really get to know you or care about you—but there’s no reason we have to allow them to sell your loan to some bank in another country that you’ve never even heard of. When you make a deal with a bank, the deal should be between you and that bank—not potentially any bank in the world that decides to buy the contract at any point in the future. Maybe we’ll always be numbers on spreadsheets—but at least we should be able to choose whose spreadsheets.

If banks want more liquidity, they can borrow from other banks—taking on the risk themselves. A lending relationship is built on trust. You are free to trust whomever you choose; but forcing me to trust someone I've never met is something you have no right to do.

In fact, we might actually be able to get rid of banks—credit unions have a far cleaner record than banks, and provide nearly all of the financial services that are genuinely necessary. Indeed, if you’re considering getting an auto loan or a home mortgage, I highly recommend you try a credit union first.

For now, we can’t simply get rid of banks—we’re too dependent on them. But we could at least acknowledge that banks are too powerful, they get away with far too much, and their whole industry is founded upon practices that need to be kept on a very tight leash.

An unusual recession, a rapid recovery

Jul 11 JDN 2459407

It seems like an egregious understatement to say that the last couple of years have been unusual. The COVID-19 pandemic was historic, comparable in threat—though not in outcome—to the 1918 influenza pandemic.

At this point it looks like we may not be able to fully eradicate COVID. And there are still many places around the world where variants of the virus continue to spread. I personally am a bit worried about the recent surge in the UK; it might add some obstacles (as if I needed any more) to my move to Edinburgh. Yet even in hard-hit places like India and Brazil things are starting to get better. Overall, it seems like the worst is over.

This pandemic disrupted our society in so many ways, great and small, and we are still figuring out what the long-term consequences will be.

But as an economist, one of the things I found most unusual is that this recession fit Real Business Cycle theory.

Real Business Cycle theory (henceforth RBC) posits that recessions are caused by negative technology shocks which result in a sudden drop in labor supply, reducing employment and output. This is generally combined with sophisticated mathematical modeling (DSGE or GTFO), and it typically leads to the conclusion that the recession is optimal and we should do nothing to correct it (which was after all the original motivation of the entire theory—they didn’t like the interventionist policy conclusions of Keynesian models). Alternatively it could suggest that, if we can, we should try to intervene to produce a positive technology shock (but nobody’s really sure how to do that).
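
Here's a toy illustration of that mechanism—my own back-of-the-envelope sketch, not something from any particular RBC paper—using nothing but linear labor supply and demand curves with made-up numbers. The point is just the direction of the wage response under the two kinds of shock.

```python
# A toy labor market with linear demand (w = a - b*L) and supply (w = c + d*L).
# An RBC-style recession shifts labor supply inward (c rises): employment falls
# but wages rise. A demand-driven recession shifts labor demand inward (a falls):
# employment falls and wages fall with it. All parameter values are arbitrary.

def equilibrium(a, b, c, d):
    L = (a - c) / (b + d)   # set a - b*L = c + d*L and solve for employment L
    w = a - b * L           # plug back in to get the wage
    return L, w

a, b, c, d = 100.0, 1.0, 20.0, 1.0
print("baseline:             L=%.0f, w=%.0f" % equilibrium(a, b, c, d))
print("supply shock (RBC):   L=%.0f, w=%.0f" % equilibrium(a, b, c + 20, d))  # wages up
print("demand shock (usual): L=%.0f, w=%.0f" % equilibrium(a - 20, b, c, d))  # wages down
```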

For a typical recession, this is utter nonsense. It is obvious to anyone who cares to look that major recessions like the Great Depression and the Great Recession were caused by a lack of labor demand, not supply. There is no apparent technology shock to cause either recession. Instead, they seem to be precipitated by a financial crisis, which then causes a crisis of liquidity, which leads to a downward spiral of layoffs reducing spending and causing more layoffs. Millions of people lose their jobs and become desperate to find new ones, with hundreds of people applying to each opening. RBC predicts a shortage of labor where there is instead a glut. RBC predicts that wages should go up in recessions—but they almost always go down.

But for the COVID-19 recession, RBC actually had some truth to it. We had something very much like a negative technology shock—namely the pandemic. COVID-19 greatly increased the cost of working and the cost of shopping. This led to a reduction in labor demand as usual, but also a reduction in labor supply for once. And while we did go through a phase in which hundreds of people applied to each new opening, we then followed it up with a labor shortage and rising wages. A fall in labor supply should create inflation, and we now have the highest inflation we’ve had in decades—but there’s good reason to think it’s just a transitory spike that will soon settle back to normal.

The recovery from this recession was also much more rapid: Once vaccines started rolling out, the economy began to recover almost immediately. We recovered most of the employment losses in just the first six months, and we’re on track to recover completely in half the time it took after the Great Recession.

This makes it the exception that proves the rule: Now that you’ve seen a recession that actually resembles RBC, you can see just how radically different it was from a typical recession.

Moreover, even in this weird recession the usual policy conclusions from RBC are off-base. It would have been disastrous to withhold the economic relief payments—which I’m happy to say even most Republicans realized. The one thing that RBC got right as far as policy is that a positive technology shock was our salvation—vaccines.

Indeed, while the cause of this recession was very strange and not what Keynesian models were designed to handle, our government largely followed Keynesian policy advice—and it worked. We ran massive government deficits—over $3 trillion in 2020—and the result was rapid recovery in consumer spending and then employment. I honestly wouldn’t have thought our government had the political will to run a deficit like that, even when the economic models told them they should; but I’m very glad to be wrong. We ran the huge deficit just as the models said we should—and it worked. I wonder how the 2010s might have gone differently had we done the same after 2008.

Perhaps we’ve learned from some of our mistakes.

A prouder year for America, and for me

Jul 4 JDN 2459400

Living under Trump from 2017 to 2020, it was difficult to be patriotic. How can we be proud of a country that would put a man like that in charge? And then there was the COVID pandemic, which initially the US handled terribly—largely because of the aforementioned Trump.

But then Biden took office, and almost immediately things started to improve. This is a testament to how important policy can be—and how different the Democrats and Republicans have become.

The US now has one of the best rates of COVID vaccination in the world (though lately progress seems to be stalling and other countries are catching up). Daily cases in the US are now the lowest they have been since March 2020. Even real GDP is almost back up to its pre-pandemic level (even per-capita), and the surge of inflation we got as things began to re-open already seems to be subsiding.

I can actually celebrate the 4th of July with some enthusiasm this year, whereas the last four years involved continually reminding myself that I was celebrating the liberating values of America’s founding, not the current terrible state of its government. Of course our government policy still retains many significant flaws—but it isn’t the utter embarrassment it was just a year ago.

This may be my last 4th of July to celebrate for the next few years, as I will soon be moving to Scotland (more on that in a moment).

2020 was a very bad year, but even halfway through, it's clear that 2021 is going to be a lot better.

This was true for just about everyone. I was no exception.

The direct effects of the pandemic on me were relatively minor.

Transitioning to remote work was even easier than I expected it to be; in fact I was even able to run experiments online using the same research subject pool as we'd previously used for the lab. Not only did I avoid any financial hardship from the lockdowns, I actually ended up better off, thanks to the relief payments (plus the freeze on student loan payments and the ludicrous stock boom, which I managed to buy into near the trough). Ordering groceries online for delivery is so convenient I'm tempted to continue it after the pandemic is over (though it does cost more).

I was careful and/or fortunate enough not to get sick (now that I am fully vaccinated, my future risk is negligible), as were most of my friends and family. I am not close to anyone who died from the virus, though I do have some second-order links to some who died (grandparents of a couple of my friends, the thesis advisor of one of my co-authors).

It was other things that really made 2020 a miserable year for me. Some of them were indirect effects of the pandemic, and some may not even have been related.

For me, 2020 was a year full of disappointments. It was the year I nearly finished my dissertation and went on the job market, applying for over one hundred jobs—and getting zero offers. It was the year I was scheduled to present at an international conference—which was then canceled. It was the year my papers were rejected by multiple journals. It was the year I was scheduled to be married—and then we were forced to postpone the wedding.

But now, in 2021, several of these situations are already improving. We will be married on October 9, and most (though assuredly not all) of the preparations for the wedding are now done. My dissertation is now done except for some formalities. After over a year of searching and applying to over two hundred postings in all, I finally found a job, a postdoc position at the University of Edinburgh. (A postdoc isn’t ideal, but on the other hand, Edinburgh is more prestigious than I thought I’d be able to get.) I still haven’t managed to publish any papers, but I no longer feel as desperate a need to do so now that I’m not scrambling to find a job. Now of course we have to plan for a move overseas, though fortunately the university will reimburse our costs for the visa and most of the moving expenses.

Of course, 2021 isn’t over—neither is the COVID pandemic. But already it looks like it’s going to be a lot better than 2020.

Responsible business owners support regulations

Jun 27 JDN 2459393

In last week’s post I explained why business owners so consistently overestimate the harms of regulations: In short, they ignore the difference between imposing a rule on a single competitor and imposing that same rule on all competitors equally. The former would be disastrous; the latter is often inconsequential.

In this follow-up post I'm going to explain why ethical, responsible business owners should want many types of regulation—and why, if they are already trying to behave ethically and responsibly, regulations can actually make doing so more profitable.

Let’s use an extreme example just to make things clear. Suppose you are running a factory building widgets, you are competing with several other factories, and you find out that some of the other factories are using slave labor in their production.

What would be the best thing for you to do? In terms of maximizing profit, you’ve really got two possible approaches: You could start using slaves yourself, or you could find a way to stop the other factories from using slaves. If you are even remotely a decent human being, you will choose the latter. How can you do that? By supporting regulations.

By lobbying your government to ban slavery—or, if it’s already banned, to enforce those laws more effectively—you can free the workers enslaved by the other factories while also increasing your own profits. This is a very big win-win. (I guess it’s not a Pareto improvement, because the factory owners who were using slaves are probably worse off—but it’s hard to feel bad for them.)

Slavery is an extreme example (but sadly not an unrealistic one), but a similar principle applies to many other cases. If you are a business owner who wants to be environmentally responsible, you should support regulations on pollution—because you're already trying to comply with them, so imposing them on your competitors who aren't will give you an advantage. If you are a business owner who wants to pay high wages, you should support increasing the minimum wage. Whatever socially responsible activities you already do, you have an economic incentive to make them mandatory for other companies.

Voluntary social responsibility sounds nice in theory, but in a highly competitive market it’s actually very difficult to sustain. I don’t doubt that many owners of sweatshops would like to pay their workers better, but they know they’d have to raise their prices a bit in order to afford it, and then they would get outcompeted and might even have to shut down. So any individual sweatshop owner really doesn’t have much choice: Either you meet the prevailing market price, or you go out of business. (The multinationals who buy from them, however, have plenty of market power and massive profits. They absolutely could afford to change their supply chain practices to support factories that pay their workers better.) Thus the best thing for them to do would be to support a higher minimum wage that would apply to their competitors as well.

Consumer pressure can provide some space for voluntary social responsibility, if customers are willing to pay more for products made by socially responsible companies. But people often don't seem willing to pay all that much, and even when they are, it can be very difficult for consumers to really know which companies are being responsible (this is particularly true for environmental sustainability: hence the widespread practice of greenwashing). In order for consumer pressure to work, you need a critical mass of consumers who are all sufficiently committed and well-informed. Regulation can often accomplish the same goals much more reliably.

In fact, there’s some risk that businesses could lobby for too many regulations, because they are more interested in undermining their competition than they are about being socially responsible. If you have lots of idiosyncratic business practices, it could be in your best interest to make those practices mandatory even if they have no particular benefits—simply because you were already doing them, and so the cost of transitioning to them will fall entirely on your competitors.


Regarding publicly-traded corporations in particular, there’s another reason why socially responsible CEOs would want regulations: Shareholders. If you’re trying to be socially responsible but it’s cutting into your profits, your shareholders may retaliate by devaluing your stock, firing you, or even suing you—as Dodge sued Ford in 1919 for the “crime” of making wages too high and prices too low. But if there are regulations that require you to be socially responsible, your shareholders can’t really complain; you’re simply complying with the law. In this case you wouldn’t want to be too vocal about supporting the regulations (since your shareholders might object to that); but you would, in fact, support them.

Market competition is a very cutthroat game, and both the prizes for winning and the penalties for losing are substantial. Regulations are what decides the rules of that game. If there’s a particular way that you want to play—either because it has benefits for the rest of society, or simply because it’s your preference—it is advantageous for you to get that written into the rules that everyone needs to follow.

Why business owners are always so wrong about regulations

Jun 20 JDN 2459386

Minimum wage. Environmental regulations. Worker safety. Even bans on child slavery. No matter what the regulation is, it seems that businesses will always oppose it, always warn that these new regulations will destroy their business and leave thousands out of work—and always be utterly, completely wrong.

In fact, the overall impact of US federal government regulations on employment is basically negligible, and the impact on GDP is very clearly positive. This really isn’t surprising if you think about it: Despite what some may have you believe, our government doesn’t go around randomly regulating things for no reason. The regulations we impose are specifically chosen because their benefits outweighed their costs, and the rigorous, nonpartisan analysis of our civil service is one of the best-kept secrets of American success and the envy of the world.

But when businesses are so consistently insistent that new regulations (of whatever kind, however minor or reasonable they may be) will inevitably destroy their industry—and when such catastrophic outcomes have basically never occurred—that cries out for an explanation. How can such otherwise competent, experienced, knowledgeable people always be so utterly wrong about something so basic? These people are experts in what they do. Shouldn't business owners know what would happen if we required them to raise wages a little, meet basic safety standards, comply with tighter pollution caps, or stop letting their suppliers enslave children?

Well, what do you mean by “them”? Herein lies the problem. There is a fundamental difference between what would happen if we required any specific business to comply with a new regulation (but left their competitors exempt), versus what happens if we require an entire industry to comply with that same regulation.

Business owners are accustomed to thinking in an open system, what economists call partial equilibrium: They think about how things will affect them specifically, and not how they will affect broader industries or the economy as a whole. If wages go up, they’ll lay off workers. If the price of their input goes down, they’ll buy more inputs and produce more outputs. They aren’t thinking about how these effects interact with one another at a systemic level, because they don’t have to.

This works because even a huge multinational corporation is only a small portion of the US economy, and doesn’t have much control over the system as a whole. So in general when a business tries to maximize its profit in partial equilibrium, it tends to get the right answer (at least as far as maximizing GDP goes).

But large-scale regulation is one time where we absolutely cannot do this. If we try to analyze federal regulations purely in partial equilibrium terms, we will be consistently and systematically wrong—as indeed business owners are.

If we went to a specific corporation and told them, "You must pay your workers $2 more per hour," what would happen? They would be forced to lay off workers. No doubt about it. If we specifically targeted one particular corporation and required them to raise their wages, they would be unable to compete with other businesses who had not been forced to comply. In fact, they really might go out of business completely. This is the panic that business owners are expressing when they warn that even really basic regulations like "You can't dump toxic waste in our rivers" or "You must not force children to pick cocoa beans for you" will cause total economic collapse.

But when you regulate an entire industry in this way, no such dire outcomes happen. The competitors are also forced to comply, and so no businesses are given special advantages relative to one another. Maybe there’s some small reduction in employment or output as a result, but at least if the regulation is reasonably well-planned—as virtually all US federal regulations are, by extremely competent people—those effects will be much smaller than the benefits of safer workers, or cleaner water, or whatever was the reason for the regulation in the first place.
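
To make that concrete, here's a tiny sketch of price competition among identical firms—entirely made-up numbers, purely for illustration. Impose a cost increase on one firm and it loses the whole market; impose the same cost on every firm and nothing about the competitive outcome changes.

```python
# Consumers buy from whichever firm posts the lowest price; ties split the market.
def market_shares(prices):
    low = min(prices)
    winners = [i for i, p in enumerate(prices) if p == low]
    return [1 / len(winners) if i in winners else 0.0 for i in range(len(prices))]

base_cost = 10.0
regulation_cost = 2.0   # e.g. a wage or safety rule that raises unit cost by $2

# Case 1: only firm 0 must comply -> it is undercut and loses the whole market.
prices = [base_cost + regulation_cost, base_cost, base_cost]
print("one firm regulated: ", market_shares(prices))   # [0.0, 0.5, 0.5]

# Case 2: every firm must comply -> relative positions are unchanged.
prices = [base_cost + regulation_cost] * 3
print("all firms regulated:", market_shares(prices))   # [0.33, 0.33, 0.33]
```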

Think of it this way. Businesses are in a constant state of fierce, tight competition. So let’s consider a similarly tight competition such as the Olympics. The gold medal for the 100-meter sprint is typically won by someone who runs the whole distance in less than 10 seconds.

Suppose we had told one of the competitors: “You must wait an extra 3 seconds before starting.” If we did this to one specific runner, that runner would lose. With certainty. There has never been an Olympic 100-meter sprint where the first-place runner was more than 3 seconds faster than the second-place runner. So it is basically impossible for that runner to ever win the gold, simply because of that 3-second handicap. And if we imposed that constraint on some runners but not others, we would ensure that only runners without the handicap had any hope of winning the race.

But now suppose we had simply started the competition 3 seconds late. We had a minor technical issue with the starting gun, we fixed it in 3 seconds, and then everything went as normal. Basically no one would notice. The winner of the race would be the same as before, all the running times would be effectively the same. Things like this have almost certainly happened, perhaps dozens of times, and no one noticed or cared.

It’s the same 3-second delay, but the outcome is completely different.

The difference is simple but vital: Are you imposing this constraint on some competitors, or on all competitors? A constraint imposed on some competitors will be utterly catastrophic for those competitors. A constraint imposed on all competitors may be basically unnoticeable to all involved.

Now, with regulations it does get a bit more complicated than that: We typically can't impose regulations on literally everyone, because there is no global federal government with the authority to do that. Even international human rights law, sadly, is not that well enforced. (International intellectual property law very nearly is—and that contrast itself says something truly appalling about our entire civilization.) But when regulation is imposed by a large entity like the United States (or even the State of California), it generally affects enough of the competitors—and competitors who already had major advantages to begin with, like the advanced infrastructure, impregnable national security, and educated population of the United States—that the effects on competition are, if not negligible, at least small enough to be outweighed by the benefits of the regulation.

So, whenever we propose a new regulation and business owners immediately panic about its catastrophic effects, we can safely ignore them. They do this every time, and they are always wrong.

But take heed: Economists are trained to think in terms of closed systems and general equilibrium. So if economists are worried about the outcome of a regulation, then there is legitimate reason to be concerned. It’s not that we know better how to run their businesses—we certainly don’t. Rather, we much better understand the difference between imposing a 3-second delay on a single runner versus simply starting the whole race 3 seconds later.

Could the Star Trek economy really work?

Jun 13 JDN 2459379

“The economics of the future are somewhat different”, Jean-Luc Picard explains to Lily Sloane in Star Trek: First Contact.

Captain Picard's explanation is not very thorough, and all we know about the economic system of the Federation comes from similarly brief glimpses across the various Star Trek films and TV series. The best glimpses of what the Earth's economy is like largely come from the Picard series in particular.

But I think we can safely conclude that all of the following are true:

1. Energy is extraordinarily abundant, with a single individual having access to an energy scale that would rival the energy production of entire nations at present. By E = mc^2, simply being able to teleport a human being or materialize a hamburger from raw energy, as seems to be routine in Starfleet, would require something on the order of 10^17 joules, or about 28 billion kilowatt-hours (see the quick back-of-the-envelope check after this list). The total energy supply of the world economy today is about 6*10^20 joules, or 100 trillion kilowatt-hours.

2. There is broad-based prosperity, but not absolute equality. At the very least different people live differently, though it is unclear whether anyone actually has a better standard of living than anyone else. The Picard family still seems to own their family vineyard that has been passed down for generations, and since the population of Earth is given as about 9 billion (a plausible but perhaps slightly low figure for our long-run stable population equilibrium), its acreage is large enough that clearly not everyone on Earth can own that much land.

3. Most resources that we currently think of as scarce are not scarce any longer. Replicator technology allows for the instantaneous production of food, clothing, raw materials, even sophisticated electronics. There is no longer a “manufacturing sector” as such; there are just replicators and people who use or program them. Most likely, even new replicators are made by replicating parts in other replicators and then assembling them. There are a few resources which remain scarce, such as dilithium (somehow involved in generating these massive quantities of energy) and latinum (a bizarre substance that is prized by many other cultures yet for unexplained reasons cannot be viably produced in replicators). Essentially everything else that is scarce is inherently so, such as front-row seats at concerts, original paintings, officer commissions in Starfleet, or land in San Francisco.

4. Interplanetary and even interstellar trade is routine. Starships with warp capability are available to both civilian and government institutions, and imports and exports can be made to planets dozens or even hundreds of light-years away as quickly as we can currently traverse the oceans with a container ship.

5. Money as we know it does not exist. People are not paid wages or salaries for their work. There is still some ownership of personal property, and particular families (including the Picards) seem to own land; but there does not appear to be any private ownership of capital. For that matter there doesn't even appear to be much in the way of capital; we never see any factories. There is obviously housing, there is infrastructure such as roads, public transit, and presumably power plants (very, very powerful power plants, see 1!), but that may be all. Nearly all manufacturing seems to be done by replicators, and what can't be done by replicators (e.g. building new starships) seems to be all orchestrated by state-owned enterprises such as Starfleet.
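
Since item 1 above leans on that energy estimate, here's the quick back-of-the-envelope check referenced there: just the rest-mass energy of about one kilogram of matter, with the mass chosen as a round number of my own.

```python
# Rest-mass energy of roughly one kilogram of matter, E = m*c^2.
c = 299_792_458            # speed of light in m/s
mass_kg = 1.0              # one kilogram, a round number for illustration
energy_joules = mass_kg * c**2
energy_kwh = energy_joules / 3.6e6   # 1 kWh = 3.6e6 J
print(f"{energy_joules:.2e} J  ~  {energy_kwh / 1e9:.0f} billion kWh")
# ~9.0e16 J, about 25 billion kWh -- the same order of magnitude as the
# ~10^17 J / 28 billion kWh quoted in item 1.
```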

Could such an economy actually work? Let's stipulate that we really do manage to achieve such an extraordinary energy scale, millions of times more than what we can currently produce. Even very cheap, widespread nuclear energy would not be enough to make this plausible; we would need at least abundant antimatter, and quite likely something even more exotic than that, like zero-point energy. Along with this come some horrifying risks—imagine an accident at a zero-point power plant that tears a hole in the fabric of space next to a major city, or a fanatical terrorist with a handheld 20-megaton antimatter bomb. But let's assume we've found ways to manage those risks as well.

Furthermore, let’s stipulate that it’s possible to build replicators and warp drives and teleporters and all the similarly advanced technology that the Federation has, much of which is so radically advanced we can’t even be sure that such a thing is possible.

What I really want to ask is whether it's possible to sustain a functional economy at this scale without money. Gene Roddenberry clearly seemed to think so. I am less convinced.

First of all, I want to acknowledge that there have been human societies which did not use money, or even any clear notion of a barter system. In fact, most human cultures for most of our history as a species allocated resources based on collective tribal ownership and personal favors. Some of the best parts of Debt: The First 5000 Years are about these different ways of allocating resources, which actually came much more naturally to us than money.

But there seem to have been rather harsh constraints on what sort of standard of living could be maintained in such societies. There was essentially zero technological advancement for thousands of years in most hunter-gatherer cultures, and even the wealthiest people in most of those societies overall had worse health, shorter lifespans, and far, far less access to goods and services than people we would consider in poverty today.

Then again, perhaps money is only needed to catalyze technological advancement; perhaps once you’ve already got all the technology you need, you can take money away and return to a better way of life without greed or inequality. That seems to be what Star Trek is claiming: That once we can make a sandwich or a jacket or a phone or even a car at the push of a button, we won’t need to worry about paying people because everyone can just have whatever they need.

Yet whatever they need is quite different from whatever they want, and therein lies the problem. Yes, I believe that with even moderate technological advancement—the sort of thing I expect to see in the next 50 years, not the next 300—we will have sufficient productivity that we could provide for the basic needs of every human being on Earth. A roof over your head, food on your table, clothes to wear, a doctor and a dentist to see twice a year, emergency services, running water, electricity, even Internet access and public transit—these are things we could feasibly provide to literally everyone with only about two or three times our current level of GDP, which means only about 2% annual economic growth for the next 50 years. Indeed, we could already provide them for every person in First World countries, and it is quite frankly appalling that we fail to do so.
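
For what it's worth, that growth figure is just compound interest; here's the one-line check (a rough sketch, not a forecast).

```python
# Compounding 2% annual growth over 50 years.
growth_rate, years = 0.02, 50
multiple = (1 + growth_rate) ** years
print(f"{multiple:.2f}x current GDP")   # ~2.69x, i.e. "two or three times" current output
```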

However, most of us in the First World already live a good deal better than that. We don’t have the most basic housing possible, we have nice houses we want to live in. We don’t take buses everywhere, we own our own cars. We don’t eat the cheapest food that would provide adequate nutrition, we eat a wide variety of foods; we order pizza and Chinese takeout, and even eat at fancy restaurants on occasion. It’s less clear that we could provide this standard of living to everyone on Earth—but if economic growth continues long enough, maybe we can.

Worse, most of us would like to live even better than we do. My car is several years old right now, and it runs on gasoline; I’d very much like to upgrade to a brand-new electric car. My apartment is nice enough, but it’s quite small; I’d like to move to a larger place that would give me more space not only for daily living, but also for storage and for entertaining guests. I work comfortable hours for decent pay at a white-collar job that can be done entirely remotely on mostly my own schedule, but I’d prefer to take some time off and live independently while I focus more on my own writing. I sometimes enjoy cooking, but often it can be a chore, and sometimes I wish I could just go eat out at a nice restaurant for dinner every night. I don’t make all these changes because I can’t afford to—that is, because I don’t have the money.

Perhaps most of us would feel no need to have a billion dollars. I don’t really know what $100 billion actually gets you, as far as financial security, independence, or even consumption, that $50 million wouldn’t already. You can have total financial freedom and security with a middle-class American lifestyle with net wealth of about $2 million. If you want to also live in a mansion, drink Dom Perignon with every meal and drive a Lamborghini (which, quite frankly, I have no particular desire to do), you’ll need several million more—but even then you clearly don’t need $1 billion, let alone $100 billion. So there is indeed something pathological about wanting a billion dollars for yourself, and perhaps in the Federation they have mental health treatments for “wealth addiction” that prevent people from experiencing such pathological levels of greed.

Yet in fact, with the world as it stands, I would want a billion dollars. Not to own it. Not to let it sit and grow in some brokerage account. Not to simply be rich and be on the Forbes list. I couldn’t care less about those things. But with a billion dollars, I could donate enormous amounts to charities, saving thousands or even millions of lives. I could found my own institutions—research institutes, charitable foundations—and make my mark on the world. With $100 billion, I could make a serious stab at colonizing Mars—as Elon Musk seems to be doing, but most other billionaires have no particular interest in.

And it begins to strain credulity to imagine a world of such spectacular abundance that everyone could have enough to do that.

This is why I always struggle to answer when people ask me things like "If money were no object, how would you live your life?"; if money were no object, I'd end world hunger, cure cancer, and colonize the Solar System. Money is always an object. What I think you meant to ask was something much less ambitious, like "What would you do if you had a million dollars?" But I might actually have a million dollars someday—most likely by saving and investing the proceeds of a six-figure job as an economist over many years. (Save $2,000 per month for 20 years, growing it at 7% per year, and you'll be over $1 million. You can do your own calculations here.) I doubt I'll ever have $10 million, and I'm pretty sure I'll never have $1 billion.
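
If you want to check that parenthetical yourself, here's a rough sketch of the compounding: monthly contributions growing at 7% per year, compounded monthly. Real returns, taxes, and inflation will of course differ.

```python
# $2,000 per month for 20 years at 7% annual growth, compounded monthly.
monthly_contribution = 2_000
annual_rate = 0.07
months = 20 * 12

balance = 0.0
for _ in range(months):
    balance = balance * (1 + annual_rate / 12) + monthly_contribution
print(f"Balance after 20 years: ${balance:,.0f}")   # a bit over $1 million
```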

To be fair, it seems that many of the grand ambitions I would want to achieve with billions of dollars already are achieved by the 23rd century; world hunger has definitely been ended, cancer seems to have been largely cured, and we have absolutely colonized the Solar System (and well beyond). But that doesn't mean that new grand ambitions wouldn't arise, and indeed I think they would. What if I wanted to command my own fleet of starships? What if I wanted a whole habitable planet to conduct experiments on, perhaps creating my own artificial ecosystem? The human imagination is capable of quite grand ambitions, and it's unlikely that we could ever satisfy all of them for everyone.

Some things are just inherently scarce. I already mentioned some earlier: Original paintings, front-row seats, officer commissions, and above all, land. There’s only so much land that people want to live on, especially because people generally want to live near other people (Internet access could conceivably reduce the pressure for this, but, uh, so far it really hasn’t, so why would we think it will in 300 years?). Even if it’s true that people can have essentially arbitrary amounts of food, clothing, or electronics, the fact remains that there’s only so much real estate in San Francisco.

It would certainly help to build taller buildings, and presumably they would, though most of the depictions don’t really seem to show that; where are the 10-kilometer-tall skyscrapers made of some exotic alloy or held up by structural integrity fields? (Are the forces of NIMBY still too powerful?) But can everyone really have a 1000-square-meter apartment in the center of downtown? Maybe if you build tall enough? But you do still need to decide who gets the penthouse.

It’s possible that all inherently-scarce resources could be allocated by some mechanism other than money. Some even should be: Starfleet officer commissions are presumably allocated by merit. (Indeed, Starfleet seems implausibly good at selecting supremely competent officers.) Others could be: Concert tickets could be offered by lottery, and maybe people wouldn’t care so much about being in the real front row when you can always simulate the front row at home in your holodeck. Original paintings could all be placed in museums available for public access—and the tickets, too, could be allocated by lottery or simply first-come, first-served. (Picard mentions the Smithsonian, so public-access museums clearly still exist.)

Then there’s the question of how you get everyone to work, if you’re not paying them. Some jobs people will do for fun, or satisfaction, or duty, or prestige; it’s plausible that people would join Starfleet for free (I’m pretty sure I would). But can we really expect all jobs to work that way? Has automation reached such an advanced level that there are no menial jobs? Sanitation? Plumbing? Gardening? Paramedics? Police? People still seem to pick grapes by hand in the Picard vineyards; do they all do it for the satisfaction of a job well done? What happens if one day everyone decides they don’t feel like picking grapes today?

I certainly agree that most menial jobs are underpaid—most people do them because they can’t get better jobs. But surely we don’t want to preserve that? Surely we don’t want some sort of caste system that allocates people to work as plumbers or garbage collectors based on their birth? I guess we could use merit-based aptitude testing; it’s clear that the vast majority of people really aren’t cut out for Starfleet (indeed, perhaps I’m not!), and maybe some people really would be happiest working as janitors. But it’s really not at all clear what such a labor allocation system would be like. I guess if automation has reached such an advanced level that all the really necessary work is done by machines and human beings can just choose to work as they please, maybe that could work; it definitely seems like a very difficult system to manage.

So I guess it’s not completely out of the question that we could find some appropriate mechanism to allocate all goods and services without ever using money. But then my question becomes: Why? What do you have against money?

I understand hating inequality—indeed I share that feeling. I, too, am outraged by the existence of hectobillionaires in a world where people still die of malaria and malnutrition. But having a money system, or even a broadly free-market capitalist economy, doesn’t inherently have to mean allowing this absurd and appalling level of inequality. We could simply impose high, progressive taxes, redistribute wealth, and provide a generous basic income. If per-capita GDP is something like 100 times its current level (as it appears to be in Star Trek), then the basic income could be $1 million per year and still be entirely affordable.

That is, rather than trying to figure out how to design fair and efficient lotteries for tickets to concerts and museums, we could still charge for tickets, and just make sure that everyone has a million dollars a year in basic income. Instead of trying to find a way to convince people to clean bathrooms for free, we could just pay them to do it.

The taxes could even be so high at the upper brackets that they effectively impose a maximum income; say we have a 99% marginal rate above $20 million per year. Then the income inequality would collapse to quite a low level: No one below $1 million, essentially no one above $20 million. We could tax wealth as well, ensuring that even if people save or get lucky on the stock market (if we even still have a stock market—maybe that is unnecessary after all), they still can’t become hectobillionaires. But by still letting people use money and allowing some inequality, we’d still get all the efficiency gains of having a market economy (minus whatever deadweight loss such a tax system imposed—which I in fact suspect would not be nearly as large as most economists fear).
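
Just to show what such a schedule might look like, here's a minimal sketch of a marginal-bracket tax plus basic income. Only the $1 million basic income and the 99% rate above $20 million come from the paragraph above; the lower brackets and rates are invented purely for illustration.

```python
# A hypothetical schedule: a $1 million basic income plus marginal tax brackets,
# topping out at 99% above $20 million. Lower brackets are made up for illustration.
BASIC_INCOME = 1_000_000
BRACKETS = [            # (lower bound of bracket, marginal rate)
    (0, 0.20),
    (5_000_000, 0.50),
    (20_000_000, 0.99),
]

def take_home(gross: float) -> float:
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if gross > lower:
            tax += (min(gross, upper) - lower) * rate
    return gross - tax + BASIC_INCOME

for gross in (0, 1_000_000, 20_000_000, 1_000_000_000):
    print(f"gross ${gross:>13,} -> take-home ${take_home(gross):>12,.0f}")
# Even a billion dollars of gross income ends up as take-home in the low tens of
# millions: no one below $1 million, essentially no one far above $20 million.
```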

In all, I guess I am prepared to say that, given the assumption of such great feats of technological advancement, it is probably possible to sustain such a prosperous economy without the use of money. But why bother, when it’s so much easier to just have progressive taxes and a basic income?

When to give up

Jun 6 JDN 2459372

Perseverance is widely regarded as a virtue, and for good reason. Often one of the most important deciding factors in success is the capacity to keep trying after repeated failure. I think this has been a major barrier for me personally; many things came easily to me when I was young, and I internalized the sense that if something doesn’t come easily, it must be beyond my reach.

Yet it’s also worth noting that this is not the only deciding factor—some things really are beyond our capabilities. Indeed, some things are outright impossible. And we often don’t know what is possible and what isn’t.

This raises the question: When should we persevere, and when should we give up?

There is actually reason to think that people often don't give up when they should. Steven Levitt (of Freakonomics fame) recently published a study that asked people who were on the verge of a difficult decision to flip a coin, and then base their decision on the coin flip: Heads, make a change; tails, keep things as they are. Many didn't actually follow the coin flip—but enough did that there was a statistical difference between those who saw heads and those who saw tails. The study found that the people who flipped heads and made a change were on average happier a couple of years later than the people who flipped tails and kept things as they were.

This question is particularly salient for me lately, because the academic job market has gone so poorly for me. I've spent most of my life believing that academia is where I belong; my intellect and my passion for teaching and research have convinced me and many others that this is the right path for me. But now that I have a taste of what it is actually like to apply for tenure-track jobs and submit papers to journals, I am utterly miserable. I hate every minute of it. I've spent the entire past year depressed and feeling like I have accomplished absolutely nothing.

In theory, once one actually gets tenure it’s supposed to get easier. But that could be a long way away—or it might never happen at all. As it is, there’s basically no chance I’ll get a tenure track position this year, and it’s unclear what my chances would be if I tried again next year.

If I could actually get a paper published, that would no doubt improve my odds of landing a better job next year. But I haven’t been able to do that, and each new rejection cuts so deep that I can barely stand to look at my papers anymore, much less actually continue submitting them. And apparently even tenured professors still get their papers rejected repeatedly, which means that this pain will never go away. I simply cannot imagine being happy if this is what I am expected to do for the rest of my life.

I found this list of criteria for when you should give up something—and most of them fit me. I’m not sure I know in my heart it can’t work out, but I increasingly suspect that. I’m not sure I want it anymore, now that I have a better idea of what it’s really like. Pursuing it is definitely making me utterly miserable. I wouldn’t say it’s the only reason, but I definitely do worry what other people will think if I quit; I feel like I’d be letting a lot of people down. I also wonder who I am without it, where I belong if not here. I don’t know what other paths are out there, but maybe there is something better. This constant stream of failure and rejection has definitely made me feel like I hate myself. And above all, when I imagine quitting, I absolutely feel an enormous sense of relief.

Publishing in journals seems to be the thing that successful academics care about most, and it means almost nothing to me anymore. I only want it because of all the pressure to have it, because of all the rewards that come from having it. It has become fully instrumental to me, with no intrinsic meaning or value. I have no particular desire to be lauded by the same system that lauded Fischer Black or Kenneth Rogoff—both of whose egregious and easily-avoidable mistakes are responsible for the suffering of millions of people around the world.

I want people to read my ideas. But people don’t actually read journals. They skim them. They read the abstracts. They look at the graphs and regression tables. (You have the meeting that should have been an email? I raise you the paper that should have been a regression table.) They see if there’s something in there that they should be citing for their own work, and if there is, maybe then they actually read the paper—but everyone is so hyper-specialized that only a handful of people will ever actually want to cite any given paper. The vast majority of research papers are incredibly tedious to read and very few people actually bother. As a method for disseminating ideas, this is perhaps slightly better than standing on a street corner and shouting into a megaphone.

I would much rather write books; people sometimes actually read books, especially when they are written for a wide audience and hence not forced into the straitjacket of standard ‘scientific writing’ that no human being actually gets any enjoyment out of writing or reading. I’ve seen a pretty clear improvement in writing quality of papers written by Nobel laureates—after they get their Nobels or similar accolades. Once they establish themselves, they are free to actually write in ways that are compelling and interesting, rather than having to present everything in the most dry, tedious way possible. If your paper reads like something that a normal person would actually find interesting or enjoyable to read, you will be—as I have been—immediately told that you must remove all such dangerous flavor until the result is as tasteless as possible.

No, the purpose of the research journal system is not to share ideas. Its function is not to share, but to evaluate. And it isn't even really to evaluate research—it's to evaluate researchers. It's to outsource the work of academic hiring to an utterly unaccountable and arbitrary system run mostly by for-profit corporations. It may have some secondary effect of evaluating ideas for validity; at least the really awful ideas are usually excluded. But its primary function is to decide the academic pecking order.

I had thought that scientific peer review was supposed to select for truth. Perhaps sometimes it does. It seems to do so reasonably well in the natural sciences, at least. But in the social sciences? That’s far less clear. Peer-reviewed papers are much more likely to be accurate than any randomly-selected content; but there are still a disturbingly large number of peer-reviewed published papers that are utterly wrong, and some unknown but undoubtedly vast number of good papers that have never seen the light of day.

Then again, when I imagine giving up on an academic career, I don’t just feel relief—I also feel regret and loss. I feel like I’ve wasted years of my life putting together a dream that has now crumbled in my hands. I even feel some anger, some sense that I was betrayed by those who told me that this was about doing good research when it turns out it’s actually about being thick-skinned enough that you can take an endless assault of rejections. It feels like I’ve been running a marathon, and I just rounded a curve to discover that the last five miles must be ridden on horseback, when I don’t have a horse, I have no equestrian training, and in fact I’m allergic to horses.

I wish someone had told me it would be like this. Maybe they tried and I didn't listen. They did say that papers would get rejected. They did say that the tenure track was high-pressure and publish-or-perish was a major source of anxiety. But they never said that it would tear at my soul like this. They never said that I would have to go through multiple rounds of agony, self-doubt, and despair in order to get even the slightest recognition for my years of work. They never said that the whole field would treat me like I'm worthless because I can't satisfy the arbitrary demands of a handful of anonymous reviewers. They never said that I would begin to feel worthless after several rounds of this.

That’s really what I want to give up on. I want to give up on hitching my financial security, my career, my future, my self-worth to a system as capricious as peer review.

I don’t want to give up on research. I don’t want to give up on teaching. I still believe strongly in discovering new truths and sharing them with others. I’m just increasingly realizing that academia isn’t nearly as good at that as I thought it was.

It isn’t even that I think it’s impossible for me to succeed in academia. I think that if I continued trying to get a tenure-track job, I would land one eventually. Maybe next year. Or maybe I’d spend a few years at a postdoc first. And I’d probably manage to publish some paper in some reasonably respectable journal at some point in the future. But I don’t know how long it would take, or how good a journal it would be—and I’m already past the point where I really don’t care anymore, where I can’t afford to care, where if I really allowed myself to care it would only devastate me when I inevitably fail again. Now that I see what is really involved in the process, how arduous and arbitrary it is, publishing in a journal means almost nothing to me. I want to be validated; I want to be appreciated; I want to be recognized. But the system is set up to provide nothing but rejection, rejection, rejection. If even the best work won’t be recognized immediately and even the worst work can make it with enough tries, then the whole system begins to seem meaningless. It’s just rolls of the dice. And I didn’t sign up to be a gambler.

The job market will probably be better next year than it was this year. But how much better? Yes, there will be more openings, but there will also be more applicants: Everyone who would normally be on the market, plus everyone like me who didn’t make it this year, plus everyone who decided to hold back this year because they knew they wouldn’t make it (as I probably should have done). Yes, in a normal year, I could be fairly confident of getting some reasonably decent position—but this wasn’t a normal year, and next year won’t be one either, and the one after that might still not be. If I can’t get a paper published in a good journal between now and then—and I’m increasingly convinced that I can’t—then I really can’t expect my odds to be greatly improved from what they were this time around. And if I don’t know that this terrible gauntlet is going to lead to something good, I’d really much rather avoid it altogether. It was miserable enough when I went into it being (over)confident that it would work out all right.

Perhaps the most important question when deciding whether to give up is this: What will happen if you do? What alternatives do you have? If giving up means dying, then don’t give up. (“Learn to let go” is very bad advice to someone hanging from the edge of a cliff.) But while it may feel that way sometimes, rarely does giving up on a career or a relationship or a project yield such catastrophic results.

When people are on the fence about making a change and then do so, even based on the flip of a coin, it usually makes them better off. Note that this is different from saying you should make all your decisions randomly; if you are confident that you don’t want to make a change, don’t make a change. This advice is for people who feel like they want a change but are afraid to take the chance, people who find themselves ambivalent about what direction to go next—people like me.

I don’t know where I should go next. I don’t know where I belong. I know it isn’t Wall Street. I’m pretty sure it’s not consulting. Maybe it’s nonprofits. Maybe it’s government. Maybe it’s freelance writing. Maybe it’s starting my own business. I guess I’d still consider working in academia; if Purdue called me back to say they made a terrible mistake and they want me after all, I’d probably take the offer. But since such an outcome is now vanishingly unlikely, perhaps it’s time, after all, to give up.

Why is cryptocurrency popular?

May 30 JDN 2459365

At the time of writing, the price of most cryptocurrencies has crashed, likely due to a ban on conventional banks using cryptocurrency in China (though perhaps also due to Elon Musk personally refusing to accept Bitcoin at his businesses). But for all I know, by the time this post goes live prices will surge again. Or maybe they’ll crash even further. Who knows? The prices of popular cryptocurrencies have been extremely volatile.

This post isn’t really about the fluctuations of cryptocurrency prices. It’s about something a bit deeper: Why are people willing to put money into cryptocurrencies at all?

The comparison is often made to fiat currency: “Bitcoin isn’t backed by anything, but neither is the US dollar.”

But the US dollar is backed by something: It’s backed by the US government. Yes, it’s not tradeable for gold at a fixed price, but so what? You can use it to pay taxes. The government requires it to be legal tender for all debts. There are certain guaranteed exchange rights built into the US dollar, which underpin the value that the dollar takes on in other exchanges. Moreover, the US Federal Reserve carefully manages the supply of US dollars so as to keep their value roughly constant.

Bitcoin does not have this (nor does Dogecoin, or Ethereum, or any of the other hundreds of lesser-known cryptocurrencies). There is no central bank. There is no government making them legal tender for any debts at all, let alone all of them. Nobody collects taxes in Bitcoin.

And so, because its value is untethered, Bitcoin’s price rises and falls, often in huge jumps, more or less randomly. If you look all the way back to when it was introduced, Bitcoin does seem to have an overall upward price trend, but this honestly seems like a statistical inevitability: If you start out being worthless, the only way your price can change is upward. While some people have become quite rich by buying into Bitcoin early on, there’s no particular reason to think that it will rise in value from here on out.

Nor does Bitcoin have any intrinsic value. You can’t eat it, or build things out of it, or use it for scientific research. It won’t even entertain you (unless you have a very weird sense of entertainment). Bitcoin doesn’t even have “intrinsic value” the way gold does (which is honestly an abuse of the term, since gold isn’t actually especially useful): It isn’t innately scarce. It was made scarce by design: Through the blockchain, a clever application of cryptography, generating new Bitcoins (called “mining”) was made difficult, and exponentially more difficult over time. But the decision of which algorithm to use, and how hard to make the puzzle, was utterly arbitrary. Bitcoin mining could just as well have been made a thousand times easier or a thousand times harder. They seem to have hit a sweet spot: just hard enough to make Bitcoin seem scarce, yet still feasible enough to feel attainable.
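To make that arbitrariness concrete, here is a minimal toy sketch of the proof-of-work idea in Python. It is illustrative only, not Bitcoin’s actual protocol (the real thing hashes block headers with double SHA-256 and retunes its difficulty target every 2016 blocks); the function name and parameters here are invented for the example. The point is just that “mining” is a search for a nonce whose hash clears a threshold, and that threshold is a knob with no natural setting.

```python
# Toy proof-of-work sketch (illustrative only; not the real Bitcoin protocol).
# "Mining" here means finding a nonce whose SHA-256 hash starts with a given
# number of hex zeros. The difficulty knob is arbitrary: each extra zero
# multiplies the expected work by 16.
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Return a nonce such that sha256(block_data + nonce) begins with
    `difficulty` leading hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

if __name__ == "__main__":
    for difficulty in range(1, 5):
        nonce = mine("example block", difficulty)
        print(f"difficulty={difficulty}: nonce {nonce}")
```

Set the difficulty lower and coins are trivial to generate; set it higher and they become prohibitively expensive. Nothing about the scarcity is inherent to the object itself.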

We could actually make a cryptocurrency that does something useful, by tying its mining to a genuinely valuable pursuit, like analyzing scientific data or proving mathematical theorems. Perhaps I should suggest a partnership with Folding@Home to make FoldCoin, the crypto coin you mine by folding proteins. There are some technical details there that would be a bit tricky, but I think it would probably be feasible. And then at least all this computing power would accomplish something, and the money people made would compensate them for their contribution.

But Bitcoin is not useful. No institution exists to stabilize its value. It constantly rises and falls in price. Why do people buy it?

In a word, FOMO. The fear of missing out. People buy Bitcoin because they see that a handful of other people have become rich by buying and selling Bitcoin. Bitcoin symbolizes financial freedom: The chance to become financially secure without having to participate any longer in our (utterly broken) labor market.

In this, volatility is not a bug but a feature: A stable currency won’t change much in value, so you’d only buy into it because you plan on spending it. But an unstable currency, now, there you might manage to get lucky speculating on its value and get rich quick for nothing. Or, more likely, you’ll end up poorer. You really have no way of knowing.

That makes cryptocurrency fundamentally like gambling. A few people make a lot of money playing poker, too; but most people who play poker lose money. Indeed, those people who get rich are only able to get rich because other people lose money. The game is zero-sum—and likewise so is cryptocurrency.

Note that this is not how the stock market works, or at least not how it’s supposed to work (sometimes maybe). When you buy a stock, you are buying a share of the profits of a corporation—a real, actual corporation that produces and sells goods or services. You’re (ostensibly) supplying capital to fund the operations of that corporation, so that they might make and sell more goods in order to earn more profit, which they will then share with you.

Likewise when you buy a bond: You are lending money to an institution (usually a corporation or a government) that intends to use that money to do something—some real actual thing in the world, like building a factory or a bridge. They are willing to pay interest on that debt in order to get the money now rather than having to wait.

Initial Coin Offerings were supposed to be a way to turn cryptocurrency into a genuine investment, but at least in their current virtually unregulated form, they are basically indistinguishable from a Ponzi scheme. Unless the value of the coin is somehow tied to actual ownership of the corporation or shares of its profits (the way stocks are), there’s nothing to ensure that the people who buy into the coin will actually receive anything in return for the capital they invest. There’s really very little stopping a startup from running an ICO, receiving a bunch of cash, and then absconding to the Cayman Islands. If they made it really obvious like that, maybe a lawsuit would succeed; but as long as they can create even the appearance of a good-faith investment—or even actually make their business profitable!—there’s nothing forcing them to pay a cent to the owners of their cryptocurrency.

The really frustrating thing for me about all this is that, sometimes, it works. There actually are now thousands of people who made decisions that by any objective standard were irrational and irresponsible, and then came out of it millionaires. It’s much like the lottery: Playing the lottery is clearly and objectively a bad idea, but every once in a while it will work and make you massively better off.

It’s like I said in a post about a year ago: Glorifying superstars glorifies risk. When a handful of people can massively succeed by making a decision, that makes a lot of other people think that it was a good decision. But quite often, it wasn’t a good decision at all; they just got spectacularly lucky.

I can’t exactly say you shouldn’t buy any cryptocurrency. It probably has better odds than playing poker or blackjack, and it certainly has better odds than playing the lottery. But what I can say is this: It’s about odds. It’s gambling. It may be relatively smart gambling (poker and blackjack are certainly a better idea than roulette or slot machines), with relatively good odds—but it’s still gambling. It’s a zero-sum high-risk exchange of money that makes a few people rich and lots of other people poorer.

With that in mind, don’t put any money into cryptocurrency that you couldn’t afford to lose at a blackjack table. If you’re looking for something to seriously invest your savings in, the answer remains the same: Stocks. All the stocks.

I doubt this particular crash will be the end for cryptocurrency, but I do think it may be the beginning of the end. I think people are finally beginning to realize that cryptocurrencies are really not the spectacular innovation that they were hyped to be, but more like a high-tech iteration of the ancient art of the Ponzi scheme. Maybe blockchain technology will ultimately prove useful for something—hey, maybe we should actually try making FoldCoin. But the future of money remains much as it has been for quite some time: Fiat currency managed by central banks.

Selectivity is a terrible measure of quality

May 23 JDN 2459358

How do we decide which universities and research journals are the best? There are a vast number of ways we could go about this—and there are in fact many different ranking systems out there, though only a handful are widely used. But one primary criterion which seems to be among the most frequently used is selectivity.

Selectivity is a very simple measure: What proportion of people who try to get in, actually get in? For universities this is admission rates for applicants; for journals it is acceptance rates for submitted papers.

The top-rated journals in economics have acceptance rates of 1-7%. The most prestigious universities have acceptance rates of 4-10%. So a reasonable ballpark is to assume a 95% chance of not getting accepted in either case. Of course, some applicants are more or less qualified, and some papers are more or less publishable; but my guess is that most applicants are qualified and most submitted papers are publishable. So these low acceptance rates mean refusing huge numbers of qualified people.


Selectivity is an objective, numeric score that can be easily generated and compared, and is relatively difficult to fake. This may account for its widespread appeal. And it surely has some correlation with genuine quality: Lots of people are likely to apply to a school because it is good, and lots of people are likely to submit to a journal because it is good.

But look a little bit closer, and it becomes clear that selectivity is really a terrible measure of quality.


One, it is extremely self-fulfilling. Once a school or a journal becomes prestigious, more people will try to get in there, and that will inflate its selectivity rating. Harvard is extremely selective because Harvard is famous and high-rated. Why is Harvard so high-rated? Well, in part because Harvard is extremely selective.

Two, it incentivizes restricting the number of applicants accepted.

Ivy League schools have vast endowments, and could easily afford to expand their capacity, thus employing more faculty and educating more students. But that would mean raising their acceptance rates and hence jeopardizing their precious selectivity ratings. If the goal is to give as many people as possible the highest quality education, then selectivity is a deeply perverse incentive: It specifically incentivizes not educating too many students.

Similarly, most journals include something in their rejection letters about “limited space”, which in the age of all-digital journals is utter nonsense. Journals could choose to publish ten, twenty, fifty times as many papers as they currently do—or half, or a tenth. They could publish everything that gets submitted, or only publish one paper a year. It’s an entirely arbitrary decision with no real constraints. They choose what proportion of papers to publish based primarily on three factors that have absolutely nothing to do with limited space: One, they want to publish enough papers to make it seem like they are putting out regular content; two, they want to make sure they publish anything that will turn out to be a major discovery (though they honestly seem systematically bad at predicting that); and three, they want to publish as few papers as possible within those constraints to maximize their selectivity.

To be clear, I’m not saying that journals should publish everything that gets submitted. Actually I think too many papers already get published—indeed, too many get written. The incentives in academia are to publish as many papers in top journals as possible, rather than to actually do the most rigorous and ground-breaking research. The best research often involves spending long periods of time making very little visible progress, and it does not lend itself to putting out regular publications to impress tenure committees and grant agencies.

The number of scientific papers published each year has grown at about 5% per year since 1900. The number of peer-reviewed journals has grown at an increasing rate, from about 3% per year for most of the 20th century to over 6% now. These are far in excess of population growth, technological advancement, or even GDP growth; growth this fast is obviously unsustainable. There are now 300 times as many scientific papers published per year as there were in 1900—while the world population has only increased about 5-fold during that time. Yes, the number of scientists has also increased—but not that fast. About 8 million people are scientists, publishing an average of 2 million articles per year—one per scientist every four years. But the number of scientist jobs grows at just over 1%—basically tracking population growth or the job market in general. If papers published continue to grow at 5% while the number of scientists increases at 1%, then in 100 years each scientist will have to publish 48 times as many papers as today, or about 1 every month.
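As a rough back-of-the-envelope check of that last figure (a sketch using only the ~5% and ~1% growth rates cited above, nothing more precise): \left(\frac{1.05}{1.01}\right)^{100} \approx 48, and since the current rate is about \frac{1}{4} of a paper per scientist per year, that becomes 48 \times \frac{1}{4} = 12 papers per scientist per year, i.e., about one per month.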


So the problem with research journals isn’t so much that journals aren’t accepting enough papers, as that too many people are submitting papers. Of course the real problem is that universities have outsourced their hiring decisions to journal editors. Rather than actually evaluating whether someone is a good teacher or a good researcher (or accepting that they can’t and hiring randomly), universities have trusted in the arbitrary decisions of research journals to decide whom they should hire.

But selectivity as a measure of quality means that journals have no reason not to support this system; they get their prestige precisely from the fact that scientists are so pressured to publish papers. The more papers get submitted, the better the journals look for rejecting them.

Another way of looking at all this is to think about what the process of acceptance or rejection entails. It is inherently a process of asymmetric information.

If we had perfect information, what would the acceptance rate of any school or journal be? 100%, regardless of quality. Only the applicants who knew they would get accepted would apply. So the total number of admitted students and accepted papers would be exactly the same, but all the acceptance rates would rise to 100%.

Perhaps that’s not realistic; but what if the application criteria were stricter? For instance, instead of asking you your GPA and SAT score, Harvard’s form could simply say: “Anyone with a GPA less than 4.0 or an SAT score less than 1500 need not apply.” That’s practically true anyway. But Harvard doesn’t have an incentive to say it out loud, because then applicants who know they can’t meet that standard won’t bother applying, and Harvard’s precious selectivity number will go down. (These are far from sufficient, by the way; I was valedictorian and had a 1590 on my SAT and still didn’t get in.)

There are other criteria they’d probably be even less willing to emphasize, but are no less significant: “If your family income is $20,000 or less, there is a 95% chance we won’t accept you.” “Other things equal, your odds of getting in are much better if you’re Black than if you’re Asian.”

For journals it might be more difficult to express the criteria clearly, but they could certainly do more than they do. Journals could more strictly delineate what kind of papers they publish: This one only for pure theory, that one only for empirical data, this one only for experimental results. They could choose more specific content niches rather than literally dozens of journals all being ostensibly about “economics in general” (the American Economic Review, the Quarterly Journal of Economics, the Journal of Political Economy, the Review of Economic Studies, the European Economic Review, the International Economic Review, Economic Inquiry… these are just the most prestigious). No doubt there would still have to be some sort of submission process and some rejections—but if they really wanted to reduce the number of submissions they could easily do so. The fact is, they want to have a large number of submissions that they can reject.

What this means is that rather than being a measure of quality, selectivity is primarily a measure of opaque criteria. It’s possible to imagine a world where nearly every school and every journal accept less than 1% of applicants; this would occur if the criteria for acceptance were simply utterly unknown and everyone had to try hundreds of places before getting accepted.


Indeed, that’s not too dissimilar to how things currently work in the job market or the fiction publishing market. The average job opening receives a staggering 250 applications. In a given year, a typical literary agent receives 5000 submissions and accepts 10 clients—so about one in every 500.

For fiction writing I find this somewhat forgivable, if regrettable; the quality of a novel is a very difficult thing to assess, and to a large degree inherently subjective. I honestly have no idea what sort of submission guidelines one could put on an agency page to explain to authors what distinguishes a good novel from a bad one (or, not quite the same thing, a successful one from an unsuccessful one).

Indeed, it’s all the worse because a substantial proportion of authors don’t even follow the guidelines that they do include! The most common complaint I hear from agents and editors at writing conferences is authors not following their submission guidelines—such basic problems as submitting work from the wrong genre, not formatting it correctly, or having really egregious grammatical errors. Quite frankly I wish they’d shut up about it, because I wanted to hear what would actually improve my chances of getting published, not listen to them rant about the thousands of people who can’t be bothered to follow directions. (And I’m pretty sure that those people aren’t likely to go to writing conferences and listen to agents give panel discussions.)

But for the job market? It’s really not that hard to tell who is qualified for most jobs. If it isn’t something highly specialized, most people could probably do it, perhaps with a bit of training. If it is something highly specialized, you can restrict your search to people who already have the relevant education or training. In any case, having experience in that industry is obviously a plus. Beyond that, it gets much harder to assess quality—but also much less necessary. Basically anyone with an advanced degree in the relevant subject or a few years of experience at that job will probably do fine, and you’re wasting effort by trying to narrow the field further. If it is very hard to tell which candidate is better, that usually means that the candidates really aren’t that different.

To my knowledge, not a lot of employers or fiction publishers pride themselves on their selectivity. Indeed, many fiction publishers have a policy of simply refusing unsolicited submissions, relying upon literary agents to pre-filter their submissions for them. (Indeed, even many agents refuse unsolicited submissions—which raises the question: What is a debut author supposed to do?) This is good, for if they did—if Penguin Random House (or whatever that ludicrous all-absorbing conglomerate is calling itself these days; ah, what was it like in that bygone era, when anti-trust enforcement was actually a thing?) decided to start priding itself on its selectivity of 0.05% or whatever—then the already massively congested fiction industry would probably grind to a complete halt.

This means that by ranking schools and journals based on their selectivity, we are partly incentivizing quality, but mostly incentivizing opacity. The primary incentive is for them to attract as many applicants as possible, even knowing full well that they will reject most of these applicants. They don’t want to be too clear about what they will accept or reject, because that might discourage unqualified applicants from trying and thus reduce their selectivity rate. In terms of overall welfare, every rejected application is wasted human effort—but in terms of the institution’s selectivity rating, it’s a point in their favor.

Social science is broken. Can we fix it?

May 16 JDN 2459349

Social science is broken. I am of course not the first to say so. The Atlantic recently published an article outlining the sorry state of scientific publishing, and several years ago Slate Star Codex published a lengthy post (with somewhat harsher language than I generally use on this blog) showing how parapsychology, despite being obviously false, can still meet the standards that most social science is expected to meet. I myself discussed the replication crisis in social science on this very blog a few years back.

I was pessimistic then that the incentives of scientific publishing would be fixed any time soon, and I am even more pessimistic now.

Back then I noted that journals are often run by for-profit corporations that care more about getting attention than getting the facts right, university administrations are incompetent and top-heavy, and publish-or-perish creates cutthroat competition without providing incentives for genuinely rigorous research. But these are widely known facts, even if so few in the scientific community seem willing to face up to them.

Now I am increasingly concerned that the reason we aren’t fixing this system is that the people with the most power to fix it don’t want to. (Indeed, as I have learned more about political economy I have come to believe this more and more about all the broken institutions in the world. American democracy has its deep flaws because politicians like it that way. China’s government is corrupt because that corruption is profitable for many of China’s leaders. Et cetera.)

I know economics best, so that is where I will focus; but most of what I’m saying here would also apply to other social sciences such as sociology and psychology. (Indeed it was psychology that published Daryl Bem.)

Rogoff and Reinhart’s 2010 article “Growth in a Time of Debt”, which was a weak correlation-based argument to begin with, was later revealed (by an intrepid grad student! His name is Thomas Herndon.) to be based upon deep, fundamental errors. Yet the article remains published, without any notice of retraction or correction, in the American Economic Review, probably the most prestigious journal in economics (and undeniably in the vaunted “Top Five”). And the paper itself was widely used by governments around the world to justify massive austerity policies—which backfired with catastrophic consequences.

Why wouldn’t the AER remove the article from their website? Or issue a retraction? Or at least add a note on the page explaining the errors? If their primary concern were scientific truth, they would have done something like this. Their failure to do so is a silence that speaks volumes, a hound that didn’t bark in the night.

It’s rational, if incredibly selfish, for Rogoff and Reinhart themselves to not want a retraction. It was one of their most widely-cited papers. But why wouldn’t AER’s editors want to retract a paper that had been so embarrassingly debunked?

And so I came to realize: These are all people who have succeeded in the current system. Their work is valued, respected, and supported by the system of scientific publishing as it stands. If we were to radically change that system, as we would necessarily have to do in order to re-align incentives toward scientific truth, they would stand to lose, because they would suddenly be competing against other people who are not as good at satisfying the magical 0.05, but are in fact at least as good—perhaps even better—actual scientists than they are.

I know how they would respond to this criticism: I’m someone who hasn’t succeeded in the current system, so I’m biased against it. This is true, to some extent. Indeed, I take it quite seriously, because while tenured professors stand to lose prestige, they can’t really lose their jobs even if there is a sudden flood of far superior research. So in directly economic terms, we would expect the bias against the current system among grad students, adjuncts, and assistant professors to be larger than the bias in favor of the current system among tenured professors and prestigious researchers.

Yet there are other motives aside from money: Norms and social status are among the most powerful motivations human beings have, and these biases are far stronger in favor of the current system—even among grad students and junior faculty. Grad school is many things, some good, some bad; but one of them is a ritual gauntlet that indoctrinates you into the belief that working in academia is the One True Path, without which your life is a failure. If your claim is that grad students are upset at the current system because we overestimate our own qualifications and are feeling sour grapes, you need to explain the prevalence of Impostor Syndrome among us. By and large, grad students don’t overestimate our abilities—we underestimate them. If we think we’re as good at this as you are, that probably means we’re better. Indeed I have little doubt that Thomas Herndon is a better economist than Kenneth Rogoff will ever be.

I have additional evidence that insider bias is important here: When Paul Romer—Nobel laureate—left academia he published an utterly scathing criticism of the state of academic macroeconomics. That is, once he had escaped the incentives toward insider bias, he turned against the entire field.

Romer pulls absolutely no punches: He literally compares the standard methods of DSGE models to “phlogiston” and “gremlins”. And the paper is worth reading, because it’s obviously entirely correct. Every single punch lands on target. It’s also a pretty fun read, at least if you have the background knowledge to appreciate the dry in-jokes. (Much like “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity.” I still laugh out loud every time I read the phrase “hegemonic Zermelo-Fraenkel axioms”, though I realize most people would be utterly nonplussed. For the uninitiated, these are the Zermelo-Fraenkel axioms. Can’t you just see the colonialist imperialism in sentences like “\forall x \forall y (\forall z, z \in x \iff z \in y) \implies x = y”?)

In other words, the Upton Sinclair Principle seems to be applying here: “It is difficult to get a man to understand something when his salary depends upon not understanding it.” The people with the most power to change the system of scientific publishing are journal editors and prestigious researchers, and they are the people for whom the current system is running quite swimmingly.

It’s not that good science can’t succeed in the current system—it often does. In fact, I’m willing to grant that it almost always does, eventually. When the evidence has mounted for long enough and the most adamant of the ancien regime finally retire or die, then, at last, the paradigm will shift. But this process takes literally decades longer than it should. In principle, a wrong theory can be invalidated by a single rigorous experiment. In practice, it generally takes about 30 years of experiments, most of which don’t get published, until the powers that be finally give in.

This delay has serious consequences. It means that many of the researchers working on the forefront of a new paradigm—precisely the people that the scientific community ought to be supporting most—will suffer from being unable to publish their work, get grant funding, or even get hired in the first place. It means that not only will good science take too long to win, but that much good science will never get done at all, because the people who wanted to do it couldn’t find the support they needed to do so. This means that the delay is in fact much longer than it appears: Because it took 30 years for one good idea to take hold, all the other good ideas that would have sprung from it in that time will be lost, at least until someone in the future comes up with them.

I don’t think I’ll ever forget it: At the AEA conference a few years back, I went to a luncheon celebrating Richard Thaler, one of the founders of behavioral economics, whom I regard as one of the top 5 greatest economists of the 20th century (I’m thinking something like, “Keynes > Nash > Thaler > Ramsey > Schelling”). Yes, now he is being rightfully recognized for his seminal work; he won a Nobel, and he has an endowed chair at Chicago, and he got an AEA luncheon in his honor among many other accolades. But it was not always so. Someone speaking at the luncheon offhandedly remarked something like, “Did we think Richard would win a Nobel? Honestly most of us weren’t sure he’d get tenure.” Most of the room laughed; I had to resist the urge to scream. If Richard Thaler wasn’t certain to get tenure, then the entire system is broken. This would be like finding out that Erwin Schrodinger or Niels Bohr wasn’t sure he would get tenure in physics.

A. Gary Shilling, a renowned Wall Street economist (read: One Who Has Turned to the Dark Side), once remarked (the quote is often falsely attributed to Keynes): “Markets can remain irrational a lot longer than you and I can remain solvent.” In the same spirit, I would say this: The scientific community can remain wrong a lot longer than you and I can extend our graduate fellowships and tenure clocks.