Unending nightmares

Sep 19 JDN 2459477

We are living in a time of unending nightmares.

As I write this, we have just passed the 20th anniversary of 9/11. Yet only in the past month were US troops finally withdrawn from Afghanistan—and that withdrawal was immediately followed by a total collapse of the Afghan government and a reinstatement of the Taliban. The United States had been at war for nearly 20 years, spending trillions of dollars and causing thousands of deaths—and seems to have accomplished precisely nothing.

Some left-wing circles have been saying that the Taliban offered surrender all the way back in 2001; this is not accurate. (Alternet even refers to it as an “unconditional surrender,” which is utter nonsense. No one in their right mind—not even the most die-hard imperialist—would ever refuse an unconditional surrender, and the US most certainly did nothing of the sort.)

The Taliban did offer a peace deal in 2001, which would have involved giving the US control of Kandahar and turning Osama bin Laden over to a neutral country (not to the US or any US ally). It would also have granted amnesty to a number of high-level Taliban leaders, which was a major sticking point for the US. In hindsight, should they have taken the deal? Obviously. But I don’t think that was nearly so clear at the time—nor would it have been particularly palatable to most of the American public to leave Osama bin Laden under house arrest in some neutral country (which they never specified by the way; somewhere without US extradition, presumably?) and grant amnesty to the top leaders of the Taliban.

Thus, even after the 20-year nightmare of the war that refused to end, we are still back to the nightmare we were in before—Afghanistan ruled by fanatics who will oppress millions.

Yet somehow this isn’t even the worst unending nightmare, for after a year and a half we are still in the throes of a global pandemic which has now caused over 4.6 million deaths. We are still wearing masks wherever we go—at least, those of us who are complying with the rules. We have gotten vaccinated already, but likely will need booster shots—at least, those of us who believe in vaccines.

The most disturbing part of it all is how many people still aren’t willing to follow the most basic demands of public health agencies.

In case you thought this was just an American phenomenon: Just a few days ago I looked out the window of my apartment to see a protest in front of the Scottish Parliament complaining about vaccine and mask mandates, with signs declaring it all a hoax. (Yes, my current temporary apartment overlooks the Scottish Parliament.)

Some of those signs displayed a perplexing innumeracy. One sign claimed that the vaccines must be stopped because they had killed 1,400 people in the UK. This is not actually true; while there have been 1,400 people in the UK who died after receiving a vaccine, 48 million people in the UK have gotten the vaccine, and many of them were old and/or sick, so, purely by statistics, we’d expect some of them to die shortly afterward. Less than 100 of these deaths are in any way attributable to the vaccine. But suppose for a moment that we took the figure at face value, and assumed, quite implausibly, that everyone who died shortly after getting the vaccine was in fact killed by the vaccine. This 1,400 figure needs to be compared against the 156,000 UK deaths attributable to COVID itself. Since 7 million people in the UK have tested positive for the virus, this is a fatality rate of over 2%. Even if we suppose that literally everyone in the UK who hasn’t been vaccinated in fact had the virus, that would still only be 20 million (the UK population of 68 million – the 48 million vaccinated) people, so the death rate for COVID itself would still be at least 0.8%—a staggeringly high fatality rate for a pandemic airborne virus. Meanwhile, even on this ridiculous overestimate of the deaths caused by the vaccine, the fatality rate for vaccination would be at most 0.003%. Thus, even by the anti-vaxers’ own claims, the vaccine is nearly 300 times safer than catching the virus. If we use the official estimates of a 1.9% COVID fatality rate and 100 deaths caused by the vaccines, the vaccines are in fact over 9000 times safer.
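
For anyone who wants to check the arithmetic, here is a quick sketch in Python that reproduces the comparison, using the same rough figures quoted above (not precise official statistics):

```python
# Rough back-of-envelope check of the UK figures quoted above.
uk_population = 68_000_000
vaccinated = 48_000_000
deaths_after_vaccine = 1_400   # deaths shortly after vaccination, not causally attributed
covid_deaths = 156_000
covid_cases = 7_000_000

# Naive COVID fatality rate among confirmed cases:
cfr_confirmed = covid_deaths / covid_cases             # ~2.2%

# Absurdly generous assumption: every unvaccinated person has already had COVID.
unvaccinated = uk_population - vaccinated              # 20 million
cfr_lower_bound = covid_deaths / unvaccinated          # ~0.8%

# Absurdly generous assumption: every death after vaccination was caused by the vaccine.
vaccine_fatality = deaths_after_vaccine / vaccinated   # ~0.003%

print(f"COVID fatality (confirmed cases): {cfr_confirmed:.2%}")
print(f"COVID fatality (lower bound):     {cfr_lower_bound:.2%}")
print(f"Vaccine 'fatality' (overestimate): {vaccine_fatality:.4%}")
print(f"Ratio, even on anti-vax assumptions: {cfr_lower_bound / vaccine_fatality:.0f}x")

# Using the official-style estimates instead (1.9% fatality rate, ~100 vaccine-attributable deaths):
print(f"Ratio on official estimates: {0.019 / (100 / vaccinated):.0f}x")
```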

Yet it does seem to be worse in the United States: while 22% of Americans described themselves as opposed to vaccination in general, only about 2% of Britons said the same.

But this did not translate to such a large difference in actual vaccination: While 70% of people in the UK have received the vaccine, 64% of people in the US have. Both of these figures are tantalizingly close to, yet clearly below, the at least 84% necessary to achieve herd immunity. (Actually some early estimates thought 60-70% might be enough—but epidemiologists no longer believe this, and some think that even 90% wouldn’t be enough.)
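
The gap between those estimates mostly comes down to how contagious you assume the virus is. As a rough illustration (my own sketch, not a calculation from the sources above), the standard textbook approximation for the herd immunity threshold is 1 - 1/R0, where R0 is the basic reproduction number; the R0 values below are illustrative assumptions.

```python
# Standard approximation: herd immunity threshold = 1 - 1/R0.
# The R0 values here are illustrative assumptions, not official estimates.
for label, r0 in [("lower early estimate of the original strain", 2.5),
                  ("higher early estimate", 3.3),
                  ("a much more contagious variant", 6.25)]:
    threshold = 1 - 1 / r0
    print(f"R0 = {r0:>4}: threshold ≈ {threshold:.0%}  ({label})")
```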

Indeed, the predominant tone I get from trying to keep up on the current news in epidemiology is fatalism: It’s too late, we’ve already failed to contain the virus, we won’t reach herd immunity, we won’t ever eradicate it. At this point they now all seem to think that COVID is going to become the new influenza, always with us, a major cause of death that somehow recedes into the background and seems normal to us—but COVID, unlike influenza, may stick around all year long. The one glimmer of hope is that influenza itself was severely hampered by the anti-pandemic procedures, and influenza cases and deaths are indeed down in both the US and UK (though not zero, nor as drastically reduced as many have reported).

The contrast between terrorism and pandemics is a sobering one, as pandemics kill far more people, yet somehow don’t provoke anywhere near as committed a response.

9/11 was a massive outlier in terrorism, at 3,000 deaths on a single day; otherwise the average annual death toll from terrorism is about 20,000 worldwide, mostly caused by Islamist groups. Yet the threat is not actually to Americans in particular; annual deaths due to terrorism in the US are fewer than 100—and most of those are due to right-wing domestic terrorists, not international Islamists.

Meanwhile, in an ordinary year, influenza would kill 50,000 Americans and somewhere between 300,000 and 700,000 people worldwide. COVID in the past year and a half has killed over 650,000 Americans and 4.6 million people worldwide—annualize that and it would be 400,000 per year in the US and 3 million per year worldwide.
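
Annualizing those figures makes the comparison explicit; here is a quick sketch with the same rough numbers:

```python
# Annualizing the COVID death tolls quoted above (rough figures).
months = 18  # "the past year and a half"
us_covid_deaths, world_covid_deaths = 650_000, 4_600_000

us_annual = us_covid_deaths * 12 / months        # ~430,000 per year
world_annual = world_covid_deaths * 12 / months  # ~3.1 million per year

print(f"US: ~{us_annual:,.0f} per year vs ~50,000 in a typical flu year")
print(f"World: ~{world_annual:,.0f} per year vs 300,000-700,000 for flu")
print("Versus terrorism: ~20,000 per year worldwide, <100 per year in the US")
```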

Yet in response to terrorism we as a country were prepared to spend $2.3 trillion, lose nearly 4,000 US and allied troops, and kill nearly 50,000 civilians—not even counting the over 60,000 enemy soldiers killed. It’s not even clear that this accomplished anything as far as reducing terrorism—by some estimates it actually made it worse.

Were we prepared to respond so aggressively to pandemics? Certainly not to influenza; we somehow treat all those deaths as normal or inevitable. In response to COVID we did spend a great deal of money, even more than on the wars in fact—a total of nearly $6 trillion. This was a very pleasant surprise to me (it’s the first time in my lifetime I’ve witnessed a serious, not watered-down Keynesian fiscal stimulus in the United States). And we imposed lockdowns—but these were all too quickly removed, despite the pleading of public health officials. It seems that our governments tried to impose an aggressive response, but then too many citizens pushed back against it, unwilling to give up their “freedom” (read: convenience) in the name of public safety.

For the wars, all most of us had to do was pay some taxes and sit back and watch; but for the pandemic we were actually expected to stay home, wear masks, and get shots? Forget it.

Politics was clearly a very big factor here: In the US, the COVID death rate map and the 2020 election map look almost identical: By and large, people who voted for Biden have been wearing masks and getting vaccinated, while people who voted for Trump have not.

But pandemic response is precisely the sort of thing you can’t do halfway. If one area is containing a virus and another isn’t, the virus will still remain uncontained. (As some have remarked, it’s rather like having a “peeing section” of a swimming pool. Much worse, actually, as urine contains relatively few bacteria—but not zero—and is quickly diluted by the huge quantities of water in a swimming pool.)

Indeed, that seems to be what has happened, and why we can’t seem to return to normal life despite months of isolation. Since enough people are refusing to make any effort to contain the virus, the virus remains uncontained, and the only way to protect ourselves from it is to continue keeping restrictions in place indefinitely.

Had we simply kept the original lockdowns in place awhile longer and then made sure everyone got the vaccine—preferably by paying them for doing it, rather than punishing them for not—we might have been able to actually contain the virus and then bring things back to normal.

But as it is, this is what I think is going to happen: At some point, we’re just going to give up. We’ll see that the virus isn’t getting any more contained than it ever was, and we’ll be so tired of living in isolation that we’ll finally just give up on doing it anymore and take our chances. Some of us will continue to get our annual vaccines, but some won’t. Some of us will continue to wear masks, but most won’t. The virus will become a part of our lives, just as influenza did, and we’ll convince ourselves that millions of deaths is no big deal.

And then the nightmare will truly never end.

An unusual recession, a rapid recovery

Jul 11 JDN 2459407

It seems like an egregious understatement to say that the last couple of years have been unusual. The COVID-19 pandemic was historic, comparable in threat—though not in outcome—to the 1918 influenza pandemic.

At this point it looks like we may not be able to fully eradicate COVID. And there are still many places around the world where variants of the virus continue to spread. I personally am a bit worried about the recent surge in the UK; it might add some obstacles (as if I needed any more) to my move to Edinburgh. Yet even in hard-hit places like India and Brazil things are starting to get better. Overall, it seems like the worst is over.

This pandemic disrupted our society in so many ways, great and small, and we are still figuring out what the long-term consequences will be.

But as an economist, one of the things I found most unusual is that this recession fit Real Business Cycle theory.

Real Business Cycle theory (henceforth RBC) posits that recessions are caused by negative technology shocks which result in a sudden drop in labor supply, reducing employment and output. This is generally combined with sophisticated mathematical modeling (DSGE or GTFO), and it typically leads to the conclusion that the recession is optimal and we should do nothing to correct it (which was after all the original motivation of the entire theory—they didn’t like the interventionist policy conclusions of Keynesian models). Alternatively it could suggest that, if we can, we should try to intervene to produce a positive technology shock (but nobody’s really sure how to do that).

For a typical recession, this is utter nonsense. It is obvious to anyone who cares to look that major recessions like the Great Depression and the Great Recession were caused by a lack of labor demand, not supply. There is no apparent technology shock to cause either recession. Instead, they seem to be precipitated by a financial crisis, which then causes a crisis of liquidity which leads to a downward spiral of layoffs reducing spending and causing more layoffs. Millions of people lose their jobs and become desperate to find new ones, with hundreds of people applying to each opening. RBC predicts a shortage of labor where there is instead a glut. RBC predicts that wages should go up in recessions—but they almost always go down.

But for the COVID-19 recession, RBC actually had some truth to it. We had something very much like a negative technology shock—namely the pandemic. COVID-19 greatly increased the cost of working and the cost of shopping. This led to a reduction in labor demand as usual, but also a reduction in labor supply for once. And while we did go through a phase in which hundreds of people applied to each new opening, we then followed it up with a labor shortage and rising wages. A fall in labor supply should create inflation, and we now have the highest inflation we’ve had in decades—but there’s good reason to think it’s just a transitory spike that will soon settle back to normal.

The recovery from this recession was also much more rapid: Once vaccines started rolling out, the economy began to recover almost immediately. We recovered most of the employment losses in just the first six months, and we’re on track to recover completely in half the time it took after the Great Recession.

This makes it the exception that proves the rule: Now that you’ve seen a recession that actually resembles RBC, you can see just how radically different it was from a typical recession.

Moreover, even in this weird recession the usual policy conclusions from RBC are off-base. It would have been disastrous to withhold the economic relief payments—which I’m happy to say even most Republicans realized. The one thing that RBC got right as far as policy is that a positive technology shock was our salvation—vaccines.

Indeed, while the cause of this recession was very strange and not what Keynesian models were designed to handle, our government largely followed Keynesian policy advice—and it worked. We ran massive government deficits—over $3 trillion in 2020—and the result was rapid recovery in consumer spending and then employment. I honestly wouldn’t have thought our government had the political will to run a deficit like that, even when the economic models told them they should; but I’m very glad to be wrong. We ran the huge deficit just as the models said we should—and it worked. I wonder how the 2010s might have gone differently had we done the same after 2008.

Perhaps we’ve learned from some of our mistakes.

Responsible business owners support regulations

Jun 27 JDN 2459373

In last week’s post I explained why business owners so consistently overestimate the harms of regulations: In short, they ignore the difference between imposing a rule on a single competitor and imposing that same rule on all competitors equally. The former would be disastrous; the latter is often inconsequential.

In this follow-up post I’m going to explain why ethical, responsible business owners should want many types of regulation—and how, if they are already trying to behave ethically and responsibly, regulations can in fact make doing so more profitable.

Let’s use an extreme example just to make things clear. Suppose you are running a factory building widgets, you are competing with several other factories, and you find out that some of the other factories are using slave labor in their production.

What would be the best thing for you to do? In terms of maximizing profit, you’ve really got two possible approaches: You could start using slaves yourself, or you could find a way to stop the other factories from using slaves. If you are even remotely a decent human being, you will choose the latter. How can you do that? By supporting regulations.

By lobbying your government to ban slavery—or, if it’s already banned, to enforce those laws more effectively—you can free the workers enslaved by the other factories while also increasing your own profits. This is a very big win-win. (I guess it’s not a Pareto improvement, because the factory owners who were using slaves are probably worse off—but it’s hard to feel bad for them.)

Slavery is an extreme example (but sadly not an unrealistic one), but a similar principle applies to many other cases. If you are a business owner who wants to be environmentally responsible, you should support regulations on pollution—because you’re already trying to comply with them, so imposing them on your competitors who aren’t will give you an advantage. If you are a business owner who wants to pay high wages, you should support increasing the minimum wage. Whatever socially responsible activities you already do, you have an economic incentive to make them mandatory for other companies.

Voluntary social responsibility sounds nice in theory, but in a highly competitive market it’s actually very difficult to sustain. I don’t doubt that many owners of sweatshops would like to pay their workers better, but they know they’d have to raise their prices a bit in order to afford it, and then they would get outcompeted and might even have to shut down. So any individual sweatshop owner really doesn’t have much choice: Either you meet the prevailing market price, or you go out of business. (The multinationals who buy from them, however, have plenty of market power and massive profits. They absolutely could afford to change their supply chain practices to support factories that pay their workers better.) Thus the best thing for them to do would be to support a higher minimum wage that would apply to their competitors as well.

Consumer pressure can provide some space for voluntary social responsibility, if customers are willing to pay more for products made by socially responsible companies. But people often don’t seem willing to pay all that much, and even when they are, it can be very difficult for consumers to really know which companies are being responsible (this is particularly true for environmental sustainability: hence the widespread practice of greenwashing). In order for consumer pressure to work, you need a critical mass of consumers who are all sufficiently committed and well-informed. Regulation can often accomplish the same goals much more reliably.

In fact, there’s some risk that businesses could lobby for too many regulations, because they are more interested in undermining their competition than they are about being socially responsible. If you have lots of idiosyncratic business practices, it could be in your best interest to make those practices mandatory even if they have no particular benefits—simply because you were already doing them, and so the cost of transitioning to them will fall entirely on your competitors.


Regarding publicly-traded corporations in particular, there’s another reason why socially responsible CEOs would want regulations: Shareholders. If you’re trying to be socially responsible but it’s cutting into your profits, your shareholders may retaliate by devaluing your stock, firing you, or even suing you—as Dodge sued Ford in 1919 for the “crime” of making wages too high and prices too low. But if there are regulations that require you to be socially responsible, your shareholders can’t really complain; you’re simply complying with the law. In this case you wouldn’t want to be too vocal about supporting the regulations (since your shareholders might object to that); but you would, in fact, support them.

Market competition is a very cutthroat game, and both the prizes for winning and the penalties for losing are substantial. Regulations are what decides the rules of that game. If there’s a particular way that you want to play—either because it has benefits for the rest of society, or simply because it’s your preference—it is advantageous for you to get that written into the rules that everyone needs to follow.

Why business owners are always so wrong about regulations

Jun 20 JDN 2459386

Minimum wage. Environmental regulations. Worker safety. Even bans on child slavery. No matter what the regulation is, it seems that businesses will always oppose it, always warn that these new regulations will destroy their business and leave thousands out of work—and always be utterly, completely wrong.

In fact, the overall impact of US federal government regulations on employment is basically negligible, and the impact on GDP is very clearly positive. This really isn’t surprising if you think about it: Despite what some may have you believe, our government doesn’t go around randomly regulating things for no reason. The regulations we impose are specifically chosen because their benefits outweighed their costs, and the rigorous, nonpartisan analysis of our civil service is one of the best-kept secrets of American success and the envy of the world.

But when businesses are so consistently insistent that new regulations (of whatever kind, however minor or reasonable they may be) will inevitably destroy their industry, even though such catastrophic outcomes have basically never occurred, that cries out for an explanation. How can such otherwise competent, experienced, knowledgeable people be always so utterly wrong about something so basic? These people are experts in what they do. Shouldn’t business owners know what would happen if we required them to raise wages a little, or meet basic safety standards, or comply with stricter pollution caps, or stop allowing their suppliers to enslave children?

Well, what do you mean by “them”? Herein lies the problem. There is a fundamental difference between what would happen if we required any specific business to comply with a new regulation (but left their competitors exempt), versus what happens if we require an entire industry to comply with that same regulation.

Business owners are accustomed to thinking in an open system, what economists call partial equilibrium: They think about how things will affect them specifically, and not how they will affect broader industries or the economy as a whole. If wages go up, they’ll lay off workers. If the price of their input goes down, they’ll buy more inputs and produce more outputs. They aren’t thinking about how these effects interact with one another at a systemic level, because they don’t have to.

This works because even a huge multinational corporation is only a small portion of the US economy, and doesn’t have much control over the system as a whole. So in general when a business tries to maximize its profit in partial equilibrium, it tends to get the right answer (at least as far as maximizing GDP goes).

But large-scale regulation is one time where we absolutely cannot do this. If we try to analyze federal regulations purely in partial equilibrium terms, we will be consistently and systematically wrong—as indeed business owners are.

If we went to a specific corporation and told them, “You must pay your workers $2 more per hour,” what would happen? They would be forced to lay off workers. No doubt about it. If we specifically targeted one particular corporation and required them to raise their wages, they would be unable to compete with other businesses who had not been forced to comply. In fact, they really might go out of business completely. This is the panic that business owners are expressing when they warn that even really basic regulations like “You can’t dump toxic waste in our rivers” or “You must not force children to pick cocoa beans for you” will cause total economic collapse.

But when you regulate an entire industry in this way, no such dire outcomes happen. The competitors are also forced to comply, and so no businesses are given special advantages relative to one another. Maybe there’s some small reduction in employment or output as a result, but at least if the regulation is reasonably well-planned—as virtually all US federal regulations are, by extremely competent people—those effects will be much smaller than the benefits of safer workers, or cleaner water, or whatever was the reason for the regulation in the first place.

Think of it this way. Businesses are in a constant state of fierce, tight competition. So let’s consider a similarly tight competition such as the Olympics. The gold medal for the 100-meter sprint is typically won by someone who runs the whole distance in less than 10 seconds.

Suppose we had told one of the competitors: “You must wait an extra 3 seconds before starting.” If we did this to one specific runner, that runner would lose. With certainty. There has never been an Olympic 100-meter sprint where the first-place runner was more than 3 seconds faster than the second-place runner. So it is basically impossible for that runner to ever win the gold, simply because of that 3-second handicap. And if we imposed that constraint on some runners but not others, we would ensure that only runners without the handicap had any hope of winning the race.

But now suppose we had simply started the competition 3 seconds late. We had a minor technical issue with the starting gun, we fixed it in 3 seconds, and then everything went as normal. Basically no one would notice. The winner of the race would be the same as before, all the running times would be effectively the same. Things like this have almost certainly happened, perhaps dozens of times, and no one noticed or cared.

It’s the same 3-second delay, but the outcome is completely different.

The difference is simple but vital: Are you imposing this constraint on some competitors, or on all competitors? A constraint imposed on some competitors will be utterly catastrophic for those competitors. A constraint imposed on all competitors may be basically unnoticeable to all involved.

Now, with regulations it does get a bit more complicated than that: We typically can’t impose regulations on literally everyone, because there is no global federal government with the authority to do that. Even international human rights law, sadly, is not that well enforced. (International intellectual property law very nearly is—and that contrast itself says something truly appalling about our entire civilization.) But when regulation is imposed by a large entity like the United States (or even the State of California), it generally affects enough of the competitors—and competitors who already had major advantages to begin with, like the advanced infrastructure, impregnable national security, and educated population of the United States—that the effects on competition are, if not negligible, at least small enough to be outweighed by the benefits of the regulation.

So, whenever we propose a new regulation and business owners immediately panic about its catastrophic effects, we can safely ignore them. They do this every time, and they are always wrong.

But take heed: Economists are trained to think in terms of closed systems and general equilibrium. So if economists are worried about the outcome of a regulation, then there is legitimate reason to be concerned. It’s not that we know better how to run their businesses—we certainly don’t. Rather, we understand much better the difference between imposing a 3-second delay on a single runner and simply starting the whole race 3 seconds later.

Why is cryptocurrency popular?

May 30 JDN 2459365

At the time of writing, the price of most cryptocurrencies has crashed, likely due to a ban on conventional banks using cryptocurrency in China (though perhaps also due to Elon Musk personally refusing to accept Bitcoin at his businesses). But for all I know by the time this post goes live the price will surge again. Or maybe they’ll crash even further. Who knows? The prices of popular cryptocurrencies have been extremely volatile.

This post isn’t really about the fluctuations of cryptocurrency prices. It’s about something a bit deeper: Why are people willing to put money into cryptocurrencies at all?

The comparison is often made to fiat currency: “Bitcoin isn’t backed by anything, but neither is the US dollar.”

But the US dollar is backed by something: It’s backed by the US government. Yes, it’s not tradeable for gold at a fixed price, but so what? You can use it to pay taxes. The government requires it to be legal tender for all debts. There are certain guaranteed exchange rights built into the US dollar, which underpin the value that the dollar takes on in other exchanges. Moreover, the US Federal Reserve carefully manages the supply of US dollars so as to keep their value roughly constant.

Bitcoin does not have this (nor does Dogecoin, or Ethereum, or any of the other hundreds of lesser-known cryptocurrencies). There is no central bank. There is no government making them legal tender for any debts at all, let alone all of them. Nobody collects taxes in Bitcoin.

And so, because its value is untethered, Bitcoin’s price rises and falls, often in huge jumps, more or less randomly. If you look all the way back to when it was introduced, Bitcoin does seem to have an overall upward price trend, but this honestly seems like a statistical inevitability: If you start out being worthless, the only way your price can change is upward. While some people have become quite rich by buying into Bitcoin early on, there’s no particular reason to think that it will rise in value from here on out.

Nor does Bitcoin have any intrinsic value. You can’t eat it, or build things out of it, or use it for scientific research. It won’t even entertain you (unless you have a very weird sense of entertainment). Bitcoin doesn’t even have “intrinsic value” the way gold does (which is honestly an abuse of the term, since gold isn’t actually especially useful): It isn’t innately scarce. It was made scarce by its design: Through the blockchain, a clever application of encryption technology, it was made difficult to generate new Bitcoins (called “mining”) in an exponentially increasing way. But the decision of what encryption algorithm to use was utterly arbitrary. Bitcoin mining could just as well have been made a thousand times easier or a thousand times harder. They seem to have hit a sweet spot where they made it just hard enough to make Bitcoin seem scarce while still making it feel feasible to get.
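
For readers curious about the mechanism, here is a minimal proof-of-work sketch in Python. It illustrates the general idea of making new coins artificially hard to generate; it is not Bitcoin’s actual protocol, and the mine function, block data, and difficulty setting are all illustrative assumptions. The difficulty parameter is exactly the kind of arbitrary design choice described above: turning it up or down makes mining harder or easier.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` hex zeros. Toy illustration of proof-of-work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Each additional leading zero makes mining ~16x harder on average;
# difficulty=4 runs in well under a second in pure Python.
nonce, digest = mine("toy block", difficulty=4)
print(nonce, digest)
```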

We could actually make a cryptocurrency that does something useful, by tying its mining to a genuinely valuable pursuit, like analyzing scientific data or proving mathematical theorems. Perhaps I should suggest a partnership with Folding@Home to make FoldCoin, the crypto coin you mine by folding proteins. There are some technical details there that would be a bit tricky, but I think it would probably be feasible. And then at least all this computing power would accomplish something, and the money people make would be to compensate them for their contribution.

But Bitcoin is not useful. No institution exists to stabilize its value. It constantly rises and falls in price. Why do people buy it?

In a word, FOMO. The fear of missing out. People buy Bitcoin because they see that a handful of other people have become rich by buying and selling Bitcoin. Bitcoin symbolizes financial freedom: The chance to become financially secure without having to participate any longer in our (utterly broken) labor market.

In this, volatility is not a bug but a feature: A stable currency won’t change much in value, so you’d only buy into it because you plan on spending it. But an unstable currency, now, there you might manage to get lucky speculating on its value and get rich quick for nothing. Or, more likely, you’ll end up poorer. You really have no way of knowing.

That makes cryptocurrency fundamentally like gambling. A few people make a lot of money playing poker, too; but most people who play poker lose money. Indeed, those people who get rich are only able to get rich because other people lose money. The game is zero-sum—and likewise so is cryptocurrency.

Note that this is not how the stock market works, or at least not how it’s supposed to work (sometimes maybe). When you buy a stock, you are buying a share of the profits of a corporation—a real, actual corporation that produces and sells goods or services. You’re (ostensibly) supplying capital to fund the operations of that corporation, so that they might make and sell more goods in order to earn more profit, which they will then share with you.

Likewise when you buy a bond: You are lending money to an institution (usually a corporation or a government) that intends to use that money to do something—some real actual thing in the world, like building a factory or a bridge. They are willing to pay interest on that debt in order to get the money now rather than having to wait.

Initial Coin Offerings were supposed to be a way to turn cryptocurrency into a genuine investment, but at least in their current virtually unregulated form, they are basically indistinguishable from a Ponzi scheme. Unless the value of the coin is somehow tied to actual ownership of the corporation or shares of its profits (the way stocks are), there’s nothing to ensure that the people who buy into the coin will actually receive anything in return for the capital they invest. There’s really very little stopping a startup from running an ICO, receiving a bunch of cash, and then absconding to the Cayman Islands. If they made it really obvious like that, maybe a lawsuit would succeed; but as long as they can create even the appearance of a good-faith investment—or even actually make their business profitable!—there’s nothing forcing them to pay a cent to the owners of their cryptocurrency.

The really frustrating thing for me about all this is that, sometimes, it works. There actually are now thousands of people who made decisions that by any objective standard were irrational and irresponsible, and then came out of it millionaires. It’s much like the lottery: Playing the lottery is clearly and objectively a bad idea, but every once in a while it will work and make you massively better off.

It’s like I said in a post about a year ago: Glorifying superstars glorifies risk. When a handful of people can massively succeed by making a decision, that makes a lot of other people think that it was a good decision. But quite often, it wasn’t a good decision at all; they just got spectacularly lucky.

I can’t exactly say you shouldn’t buy any cryptocurrency. It probably has better odds than playing poker or blackjack, and it certainly has better odds than playing the lottery. But what I can say is this: It’s about odds. It’s gambling. It may be relatively smart gambling (poker and blackjack are certainly a better idea than roulette or slot machines), with relatively good odds—but it’s still gambling. It’s a zero-sum high-risk exchange of money that makes a few people rich and lots of other people poorer.

With that in mind, don’t put any money into cryptocurrency that you couldn’t afford to lose at a blackjack table. If you’re looking for something to seriously invest your savings in, the answer remains the same: Stocks. All the stocks.

I doubt this particular crash will be the end for cryptocurrency, but I do think it may be the beginning of the end. I think people are finally beginning to realize that cryptocurrencies are really not the spectacular innovation that they were hyped to be, but more like a high-tech iteration of the ancient art of the Ponzi scheme. Maybe blockchain technology will ultimately prove useful for something—hey, maybe we should actually try making FoldCoin. But the future of money remains much as it has been for quite some time: Fiat currency managed by central banks.

Selectivity is a terrible measure of quality

May 23 JDN 2459358

How do we decide which universities and research journals are the best? There are a vast number of ways we could go about this—and there are in fact many different ranking systems out there, though only a handful are widely used. But one primary criterion which seems to be among the most frequently used is selectivity.

Selectivity is a very simple measure: What proportion of people who try to get in, actually get in? For universities this is admission rates for applicants; for journals it is acceptance rates for submitted papers.

The top-rated journals in economics have acceptance rates of 1-7%. The most prestigious universities have acceptance rates of 4-10%. So a reasonable ballpark is to assume a 95% chance of not getting accepted in either case. Of course, some applicants are more or less qualified, and some papers are more or less publishable; but my guess is that most applicants are qualified and most submitted papers are publishable. So these low acceptance rates mean refusing huge numbers of qualified people.


Selectivity is an objective, numeric score that can be easily generated and compared, and is relatively difficult to fake. This may account for its widespread appeal. And it surely has some correlation with genuine quality: Lots of people are likely to apply to a school because it is good, and lots of people are likely to submit to a journal because it is good.

But look a little bit closer, and it becomes clear that selectivity is really a terrible measure of quality.


One, it is extremely self-fulfilling. Once a school or a journal becomes prestigious, more people will try to get in there, and that will inflate its selectivity rating. Harvard is extremely selective because Harvard is famous and high-rated. Why is Harvard so high-rated? Well, in part because Harvard is extremely selective.

Two, it incentivizes restricting the number of applicants accepted.

Ivy League schools have vast endowments, and could easily afford to expand their capacity, thus employing more faculty and educating more students. But that would require raising their acceptance rates and hence jeopardizing their precious selectivity ratings. If the goal is to give as many people as possible the highest quality education, then selectivity is a deeply perverse incentive: It specifically incentivizes not educating too many students.

Similarly, most journals include something in their rejection letters about “limited space”, which in the age of all-digital journals is utter nonsense. Journals could choose to publish ten, twenty, fifty times as many papers as they currently do—or half, or a tenth. They could publish everything that gets submitted, or only publish one paper a year. It’s an entirely arbitrary decision with no real constraints. They choose what proportion of papers to publish based primarily on three factors that have absolutely nothing to do with limited space: One, they want to publish enough papers to make it seem like they are putting out regular content; two, they want to make sure they publish anything that will turn out to be a major discovery (though they honestly seem systematically bad at predicting that); and three, they want to publish as few papers as possible within those constraints to maximize their selectivity.

To be clear, I’m not saying that journals should publish everything that gets submitted. Actually I think too many papers already get published—indeed, too many get written. The incentives in academia are to publish as many papers in top journals as possible, rather than to actually do the most rigorous and ground-breaking research. The best research often involves spending long periods of time making very little visible progress, and it does not lend itself to putting out regular publications to impress tenure committees and grant agencies.

The number of scientific papers published each year has grown at about 5% per year since 1900. The number of peer-reviewed journals has grown at an increasing rate, from about 3% per year for most of the 20th century to over 6% now. These are far in excess of population growth, technological advancement, or even GDP growth; this many scientific papers is obviously unsustainable. There are now 300 times as many scientific papers published per year as there were in 1900—while the world population has only increased by about 5-fold during that time. Yes, the number of scientists has also increased—but not that fast. About 8 million people are scientists, publishing an average of 2 million articles per year—one per scientist every four years. But the number of scientist jobs grows at just over 1%—basically tracking population growth or the job market in general. If papers published continue to grow at 5% while the number of scientists increases at 1%, then in 100 years each scientist will have to publish 48 times as many papers as today, or about 1 every month.
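
Here is the compounding arithmetic behind that last projection, using the same growth rates and publication figures quoted above:

```python
# Papers grow ~5% per year, scientist jobs ~1% per year (figures quoted above).
paper_growth, scientist_growth = 1.05, 1.01
years = 100

ratio = (paper_growth ** years) / (scientist_growth ** years)
papers_per_scientist_now = 2_000_000 / 8_000_000       # ~0.25 per year, i.e. one every four years
papers_per_scientist_then = papers_per_scientist_now * ratio

print(f"Papers per scientist must rise {ratio:.1f}-fold")   # ~48.6x
print(f"From {papers_per_scientist_now:.2f} per year to "
      f"{papers_per_scientist_then:.1f} per year")           # ~12 per year, about one a month
```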


So the problem with research journals isn’t so much that journals aren’t accepting enough papers, as that too many people are submitting papers. Of course the real problem is that universities have outsourced their hiring decisions to journal editors. Rather than actually evaluating whether someone is a good teacher or a good researcher (or accepting that they can’t and hiring randomly), universities have trusted in the arbitrary decisions of research journals to decide whom they should hire.

But selectivity as a measure of quality means that journals have no reason not to support this system; they get their prestige precisely from the fact that scientists are so pressured to publish papers. The more papers get submitted, the better the journals look for rejecting them.

Another way of looking at all this is to think about what the process of acceptance or rejection entails. It is inherently a process of asymmetric information.

If we had perfect information, what would the acceptance rate of any school or journal be? 100%, regardless of quality. Only the applicants who knew they would get accepted would apply. So the total number of admitted students and accepted papers would be exactly the same, but all the acceptance rates would rise to 100%.

Perhaps that’s not realistic; but what if the application criteria were stricter? For instance, instead of asking you your GPA and SAT score, Harvard’s form could simply say: “Anyone with a GPA less than 4.0 or an SAT score less than 1500 need not apply.” That’s practically true anyway. But Harvard doesn’t have an incentive to say it out loud, because then applicants who know they can’t meet that standard won’t bother applying, and Harvard’s precious selectivity number will go down. (These are far from sufficient, by the way; I was valedictorian and had a 1590 on my SAT and still didn’t get in.)

There are other criteria they’d probably be even less willing to emphasize, but are no less significant: “If your family income is $20,000 or less, there is a 95% chance we won’t accept you.” “Other things equal, your odds of getting in are much better if you’re Black than if you’re Asian.”

For journals it might be more difficult to express the criteria clearly, but they could certainly do more than they do. Journals could more strictly delineate what kind of papers they publish: This one only for pure theory, that one only for empirical data, this one only for experimental results. They could choose more specific content niches rather than literally dozens of journals all being ostensibly about “economics in general” (the American Economic Review, the Quarterly Journal of Economics, the Journal of Political Economy, the Review of Economic Studies, the European Economic Review, the International Economic Review, Economic Inquiry… these are just the most prestigious). No doubt there would still have to be some sort of submission process and some rejections—but if they really wanted to reduce the number of submissions they could easily do so. The fact is, they want to have a large number of submissions that they can reject.

What this means is that rather than being a measure of quality, selectivity is primarily a measure of opaque criteria. It’s possible to imagine a world where nearly every school and every journal accept less than 1% of applicants; this would occur if the criteria for acceptance were simply utterly unknown and everyone had to try hundreds of places before getting accepted.


Indeed, that’s not too dissimilar to how things currently work in the job market or the fiction publishing market. The average job opening receives a staggering 250 applications. In a given year, a typical literary agent receives 5000 submissions and accepts 10 clients—so about one in every 500.

For fiction writing I find this somewhat forgivable, if regrettable; the quality of a novel is a very difficult thing to assess, and to a large degree inherently subjective. I honestly have no idea what sort of submission guidelines one could put on an agency page to explain to authors what distinguishes a good novel from a bad one (or, not quite the same thing, a successful one from an unsuccessful one).

Indeed, it’s all the worse because a substantial proportion of authors don’t even follow the guidelines that they do include! The most common complaint I hear from agents and editors at writing conferences is authors not following their submission guidelines—such basic problems as submitting content from the wrong genre, not formatting it correctly, having really egregious grammatical errors. Quite frankly I wish they’d shut up about it, because I wanted to hear what would actually improve my chances of getting published, not listen to them rant about the thousands of people who can’t bother to follow directions. (And I’m pretty sure that those people aren’t likely to go to writing conferences and listen to agents give panel discussions.)

But for the job market? It’s really not that hard to tell who is qualified for most jobs. If it isn’t something highly specialized, most people could probably do it, perhaps with a bit of training. If it is something highly specialized, you can restrict your search to people who already have the relevant education or training. In any case, having experience in that industry is obviously a plus. Beyond that, it gets much harder to assess quality—but also much less necessary. Basically anyone with an advanced degree in the relevant subject or a few years of experience at that job will probably do fine, and you’re wasting effort by trying to narrow the field further. If it is very hard to tell which candidate is better, that usually means that the candidates really aren’t that different.

To my knowledge, not a lot of employers or fiction publishers pride themselves on their selectivity. Indeed, many fiction publishers have a policy of simply refusing unsolicited submissions, relying upon literary agents to pre-filter their submissions for them. (Indeed, even many agents refuse unsolicited submissions—which raises the question: What is a debut author supposed to do?) This is good, for if they did—if Penguin Random House (or whatever that ludicrous all-absorbing conglomerate is calling itself these days; ah, what was it like in that bygone era, when anti-trust enforcement was actually a thing?) decided to start priding itself on its selectivity of 0.05% or whatever—then the already massively congested fiction industry would probably grind to a complete halt.

This means that by ranking schools and journals based on their selectivity, we are partly incentivizing quality, but mostly incentivizing opacity. The primary incentive is for them to attract as many applicants as possible, even knowing full well that they will reject most of these applicants. They don’t want to be too clear about what they will accept or reject, because that might discourage unqualified applicants from trying and thus reduce their selectivity rate. In terms of overall welfare, every rejected application is wasted human effort—but in terms of the institution’s selectivity rating, it’s a point in their favor.

Is privacy dead?

May 9 JDN 2459342

It is the year 2021, and while we don’t yet have flying cars or human-level artificial intelligence, our society is in many ways quite similar to what cyberpunk fiction predicted it would be. We are constantly connected to the Internet, even linking devices in our homes to the Web when that is largely pointless or actively dangerous. Oligopolies of fewer and fewer multinational corporations that are more and more powerful have taken over most of our markets, from mass media to computer operating systems, from finance to retail.

One of the many dire predictions of cyberpunk fiction is that constant Internet connectivity will effectively destroy privacy. There is reason to think that this is in fact happening: We have televisions that listen to our conversations, webcams that can be hacked, sometimes invisibly, and the operating system that runs the majority of personal and business computers is built around constantly tracking its users.

The concentration of oligopoly power and the decline of privacy are not unconnected. It’s the oligopoly power of corporations like Microsoft and Google and Facebook that allows them to present us with absurdly long and virtually unreadable license agreements as an ultimatum: “Sign away your rights, or else you can’t use our product. And remember, we’re the only ones who make this product and it’s increasingly necessary for your basic functioning in society!” This is of course exactly as cyberpunk fiction warned us it would be.

Giving up our private information to a handful of powerful corporations would be bad enough if that information were securely held only by them. But it isn’t. There have been dozens of major data breaches of major corporations, and there will surely be many more. In an average year, several billion data records are exposed through data breaches. Each person produces many data records, so it’s difficult to say exactly how many people have had their data stolen; but it isn’t implausible to say that if you are highly active on the Internet, at least some of your data has been stolen in one breach or another. Corporations have strong incentives to collect and use your data—data brokerage is a hundred-billion-dollar industry—but very weak incentives to protect it from prying eyes. The FTC does impose fines for negligence in the event of a major data breach, but as usual the scale of the fines simply doesn’t match the scale of the corporations responsible. $575 million sounds like a lot of money, but for a corporation with $28 billion in assets it’s a slap on the wrist. It would be equivalent to fining me about $500 (about what I’d get for driving without a passenger in the carpool lane). Yeah, I’d feel that; it would be unpleasant and inconvenient. But it’s certainly not going to change my life. And typically these fines only impact shareholders, and don’t even pass through to the people who made the decisions: The man who was CEO of Equifax when it suffered its catastrophic data breach retired with a $90 million pension.
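
The scale mismatch in that comparison is easy to make precise; in the sketch below, the $24,000 personal-asset figure is a hypothetical chosen only to match the roughly $500 comparison above.

```python
# FTC fine relative to corporate assets, scaled down to an individual.
fine = 575e6
corporate_assets = 28e9
fine_share = fine / corporate_assets           # ~2% of assets

hypothetical_personal_assets = 24_000          # illustrative assumption
equivalent_personal_fine = fine_share * hypothetical_personal_assets

print(f"Fine as share of assets: {fine_share:.1%}")                      # ~2.1%
print(f"Equivalent personal fine: ${equivalent_personal_fine:,.0f}")     # ~$500
```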

While most people seem either blissfully unaware or fatalistically resigned to its inevitability, a few people have praised the trend of reduced privacy, usually by claiming that it will result in increased transparency. Yet, ironically, a world with less privacy can actually mean a world with less transparency as well: When you don’t know what information you reveal will be stolen and misused, you will constantly endeavor to protect all your information, even things that you would normally not hesitate to reveal. When even your face and name can be used to track you, you’ll be more hesitant to reveal them. Cyberpunk fiction predicted this too: Most characters in cyberpunk stories are known by their hacker handles, not their real given names.

There is some good news, however. People are finally beginning to notice that they have been pressured into giving away their privacy rights, and demanding to get them back. The United Nations has recently passed resolutions defending digital privacy, governments have taken action against the worst privacy violations with increasing frequency, courts are ruling in favor of stricter protections, think tanks are demanding stricter regulations, and even corporate policies are beginning to change. While the major corporations all want to take your data, there are now many smaller businesses and nonprofit organizations that will sell you tools to help protect it.

This does not mean we can be complacent: The war is far from won. But it does mean that there is some hope left; we don’t simply have to surrender and accept a world where anyone with enough money can know whatever they want about anyone else. We don’t need to accept what the CEO of Sun Microsystems infamously said: “You have zero privacy anyway. Get over it.”

I think the best answer to the decline of privacy is to address the underlying incentives that make it so lucrative. Why is data brokering such a profitable industry? Because ad targeting is such a profitable industry. So profitable, indeed, that huge corporations like Facebook and Google make almost all of their money that way, and the useful services they provide to users are offered for free simply as an enticement to get them to look at more targeted advertising.

Selling advertising is hardly new—we’ve been doing it for literally millennia, as Roman gladiators were often paid to hawk products. It has been the primary source of revenue for most forms of media, from newspapers to radio stations to TV networks, since those media have existed. What has changed is that ad targeting is now a lucrative business: In the 1850s, that newspaper being sold by barking boys on the street likely had ads in it, but they were the same ads for every single reader. Now when you log in to CNN.com or nytimes.com, the ads on that page are specific only to you, based on any information that these media giants have been able to glean from your past Internet activity. If you do try to protect your online privacy with various tools, a quick-and-dirty way to check if it’s working is to see if websites give you ads for things you know you’d never buy.

In fact, I consider it a very welcome recent development that video streaming is finally a way to watch TV shows by actually paying for them instead of having someone else pay for the right to shove ads in my face. I can’t remember the last time I heard a TV ad jingle, and I’m very happy about that fact. Having to spend 15 minutes of each hour of watching TV to watch commercials may not seem so bad—in fact, many people may feel that they’d rather do that than pay the money to avoid it. But think about it this way: If it weren’t worth at least that much to the corporations buying those ads, they wouldn’t do it. And if a corporation expects to get $X from you that you wouldn’t have otherwise paid, that means they’re getting you to spend that much that you otherwise wouldn’t have—meaning that they’re getting you to buy something you didn’t need. Perhaps it’s better after all to spend that $X on getting entertainment that doesn’t try to get you to buy things you don’t need.

Indeed, I think there is an opportunity to restructure the whole Internet this way. What we need is a software company—maybe a nonprofit organization, maybe a for-profit business—that is set up to let us make micropayments for online content in lieu of having our data collected or being force-fed advertising.

How big would these payments need to be? Well, Facebook has about 2.8 billion users and takes in revenue of about $80 billion per year, so the average user would have to pay about $29 a year for the use of Facebook, Instagram, and WhatsApp. That’s about $2.50 per month, or $0.08 per day.

The New York Times is already losing its ad-supported business model; less than $400 million of its $1.8 billion revenue last year was from ads, the rest being primarily from subscriptions. But smaller media outlets have a much harder time gaining subscribers; often people just want to read a single article and aren’t willing to pay for a whole month or year of the periodical. If we could somehow charge for individual articles, how much would we have to charge? Well, a typical webpage has an ad clickthrough rate of 1%, while a typical cost-per-click rate is about $0.60, so ads on the average webpage make its owners a whopping $0.006. That’s not even a single cent. So if this new micropayment system allowed you to pay one cent to read an article without the annoyance of ads or the pressure to buy something you don’t need, would you pay it? I would. In fact, I’d pay five cents. They could quintuple their revenue!
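
The per-user and per-page figures in the last two paragraphs are simple division; here is the arithmetic as a quick sketch, using the same rough inputs:

```python
# Facebook-style ad revenue per user (rough figures quoted above).
annual_revenue = 80e9
users = 2.8e9
per_user_year = annual_revenue / users
print(f"~${per_user_year:.0f} per user per year, "
      f"${per_user_year / 12:.2f} per month, ${per_user_year / 365:.2f} per day")

# Ad revenue from a single typical pageview.
clickthrough_rate = 0.01   # ~1% of viewers click an ad
cost_per_click = 0.60      # ~$0.60 paid per click
per_pageview = clickthrough_rate * cost_per_click
print(f"~${per_pageview:.3f} per pageview")   # roughly six-tenths of a cent
```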

The main problem is that we currently don’t have an efficient way to make payments that small. Processing a credit card transaction typically costs at least $0.05, so a five-cent transaction would yield literally zero revenue for the website. I’d have to pay ten cents to give the website five, and I admit I might not always want to do that—I’d also definitely be uncomfortable with half the money going to credit card companies.

So what’s needed is software to bundle the payments at each end: In a single credit card transaction, you add say $20 of tokens to an account. Each token might be worth $0.01, or even less if we want. These tokens can then be spent at participating websites to pay for access. The websites can then collect all the tokens they’ve received over say a month, bundle them together, and sell them back to the company that originally sold them to you, for slightly less than what you paid for them. These bundled transactions could actually be quite large in many cases—thousands or millions of dollars—and thus processing fees would be a very small fraction. For smaller sites there could be a minimum amount of tokens they must collect—perhaps also $20 or so—before they can sell them back. Note that if you’ve bought $20 in tokens and you are paying $0.05 per view, you can read 400 articles before you run out of tokens and have to buy more. And they don’t all have to be from the same source, as they would with a traditional subscription; you can read articles from any outlet that participates in the token system.
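
To make the proposal concrete, here is a minimal sketch of that token flow in Python. Everything in it is hypothetical: the TokenBroker class, the $0.01 token value, the 2% redemption spread, and the $20 minimum redemption are illustrative assumptions, not a real payment system, and it ignores the security questions discussed below.

```python
class TokenBroker:
    """Hypothetical middleman that sells prepaid tokens to readers and
    redeems bundled tokens from websites, taking a small spread."""
    TOKEN_VALUE = 0.01       # each token is worth one cent
    REDEMPTION_RATE = 0.98   # sites get back 98 cents on the dollar
    MIN_REDEMPTION = 2_000   # sites must bundle at least $20 worth of tokens

    def __init__(self):
        self.reader_balances = {}   # reader -> tokens held
        self.site_balances = {}     # site -> tokens collected

    def buy_tokens(self, reader: str, dollars: float) -> None:
        """One ordinary card transaction buys a whole batch of tokens."""
        tokens = round(dollars / self.TOKEN_VALUE)
        self.reader_balances[reader] = self.reader_balances.get(reader, 0) + tokens

    def pay(self, reader: str, site: str, tokens: int) -> bool:
        """Spend a few tokens to unlock one article; no card fees involved."""
        if self.reader_balances.get(reader, 0) < tokens:
            return False
        self.reader_balances[reader] -= tokens
        self.site_balances[site] = self.site_balances.get(site, 0) + tokens
        return True

    def redeem(self, site: str) -> float:
        """Site cashes out its accumulated bundle in one large transaction."""
        tokens = self.site_balances.get(site, 0)
        if tokens < self.MIN_REDEMPTION:
            return 0.0
        self.site_balances[site] = 0
        return tokens * self.TOKEN_VALUE * self.REDEMPTION_RATE

broker = TokenBroker()
broker.buy_tokens("alice", 20.00)           # $20 buys 2,000 tokens
broker.pay("alice", "example-news.com", 5)  # one article costs five cents
```

At $0.05 per article, a $20 batch of tokens covers 400 articles across any participating sites, and ordinary payment processing only happens at the two ends: when the reader buys the batch and when a site cashes out its bundle.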

There are a number of technical issues to be resolved here: how to keep the tokens secure, and how to guarantee that once a user purchases access to an article they will continue to have access to it, ideally even if they clear their cache, delete all their cookies, or log in from another computer. I can’t literally set up this website today, and even if I could, I don’t know how I’d attract a critical mass of both users and participating websites (it’s a major network externality problem). But it seems well within the purview of what the tech industry has done in the past—indeed, it’s quite comparable to the impressive (and unsettling) infrastructure that has been laid down to support ad-targeting and data brokerage.

How would such a system help protect privacy? If micropayments for content became the dominant model of funding online content, most people wouldn’t spend much time looking at online ads, and ad targeting would be much less profitable. Data brokerage, in turn, would become less lucrative, because there would be fewer ways to use that data to make profits. With the incentives to take our data thus reduced, it would be easier to enforce regulations protecting our privacy. The fines for violating those regulations might actually be enough to make it no longer worth anyone’s while to take sensitive data, and corporations might stop pressuring people to give it up.

No, privacy isn’t dead. But it’s dying. If we want to save it, we have a lot of work to do.

What if we taxed market share?

Apr 18 JDN 2459321

In one of his recent columns, Paul Krugman lays out the case for why corporate tax cuts have been so ineffective at reducing unemployment or increasing economic growth. The central insight is that only a small portion of corporate tax incidence actually seems to fall on real capital investment. First, most corporate tax avoidance happens via accounting fictions, not real changes in production; second, most forms of investment and loan interest are tax-deductible; and third, the one I want to focus on today: Corporations now have enormous monopoly power, and taxing monopoly profits is Pigouvian; it doesn’t reduce efficiency, it actually increases it.

Of course, in our current system, we don’t directly tax monopoly profits. We tax profits in general, many—by some estimates, most—of which are monopoly (or oligopoly) profits. But some profits aren’t monopoly profits, while some monopolies are staggeringly powerful—and we’re taxing them all the same. (In fact, the really big monopolies seem to be especially good at avoiding taxes: I guarantee you pay a higher tax rate than Apple or Boeing.)

It’s difficult to precisely measure how much of a corporation’s profits are due to their monopoly power. But there is something that’s quite easy to measure that would be a good proxy for this: market share.

We could tax each corporation’s profits at a rate proportional to—or even literally equal to—its market share in a suitably defined market. That market shouldn’t be defined too broadly (“electronics” would miss Apple’s dominance in smartphones and laptops specifically) or too narrowly (“restaurants on Broadway Ave.” would greatly overestimate the market share of many small businesses); this could pose some practical difficulties, but I think it can be done.

And what if a corporation produces in many industries? I offer a bold proposal: Use the maximum. If a corporation controls 10% of one market, 20% of another, and 60% of another, tax all of their profits at the rate of 60%.
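
As a toy illustration of the rule (all the market names, shares, and profit figures here are hypothetical):

    # Toy illustration of taxing profits at the firm's maximum market share.
    market_shares = {"market A": 0.10, "market B": 0.20, "market C": 0.60}
    profits = 1_000_000_000          # $1 billion in total profits

    tax_rate = max(market_shares.values())
    print(tax_rate)                  # 0.6 -- taxed at 60%
    print(tax_rate * profits)        # 600000000.0 owed

    # If the dominant division is spun off, each successor firm is taxed only
    # at its own maximum share, so the total tax bill falls -- hence the
    # incentive to break up.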

If they want to avoid that outcome, well, I guess they’ll have to spin off their different products into different corporations that can account for their profits separately. Behold: Self-enforcing antitrust.

Of course, we need to make sure that when corporations split, they actually split—it can’t just be the same CEO and board for 40 “different corporations” that all coordinate all their actions and produce subtle variations on the same product. At that point the correct response is for the FTC to sue them all for illegal collusion.

This would also disincentivize mergers and acquisitions—the growth of which is a major reason why we got into this mess of concentrated oligopolies in the first place.

This policy could be extremely popular, because it directly and explicitly targets big business. Small businesses—even those few that actually are C corporations—would see their taxes dramatically reduced, while trillion-dollar multinationals would suddenly find that they can no longer weasel out of the taxes every other company is paying.

Indeed, if we somehow managed to achieve a perfectly-competitive market where no firm had any significant market share, this corporate tax would effectively disappear. So any time some libertarian tries to argue that corporate taxes are interfering with perfect free market competition, we could point out that this is literally impossible—if we had perfect competition, this corporate tax wouldn’t do anything.

In fact, the total tax revenue would be roughly proportional to the Herfindahl–Hirschman Index, a commonly-used measure of market concentration in oligopoly markets (at least if each firm’s profits are roughly proportional to its market share). A monopoly would pay 100% tax, so no one would ever want to be a monopoly; they’d immediately split into two firms so that each could pay a tax rate of 50%. And depending on other characteristics of the market, they might want to split even further than that.
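
A quick numerical check of that claim, under the same proportionality assumption (shares on a 0-to-1 scale):

    # Tax collected = sum over firms of share_i * profit_i.
    # If profit_i is proportional to share_i, this equals total profit times
    # the sum of squared shares -- the HHI on a 0-to-1 scale.
    def tax_collected(shares, total_profit=1.0):
        return sum(s * (s * total_profit) for s in shares)

    print(tax_collected([1.0]))          # monopoly: 1.0 -> 100% of profits
    print(tax_collected([0.5, 0.5]))     # duopoly: 0.5 -> 50% of profits
    print(tax_collected([0.25] * 4))     # four equal firms: 0.25
    print(round(tax_collected([0.01] * 100), 4))   # atomistic market: 0.01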

I’ll spare you the algebra, but total profits in a Cournot equilibrium [PDF] with n firms are proportional to n/(n+1)^2, so with each firm taxed at a rate of 1/n, after-tax profits are proportional to (n-1)/(n+1)^2, which is maximized at n = 3. So in this (admittedly oversimplified) case, they’d actually prefer to split into 3 firms. And the difference between a monopoly and a trinopoly is quite significant.

Like any tax, this would create some incentive to produce less; but that effect could easily be outweighed by the tax’s disincentive against concentrating market power. A Cournot economy with 3 firms, even with this tax, would produce 50% more and sell at a lower price than a monopoly in the same market.
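
Here is a quick numerical check of those last two claims, using the standard symmetric Cournot model with linear demand and, as a simplification, taking the usual no-tax Cournot quantities as given:

    # Symmetric Cournot with linear inverse demand P = a - b*Q, marginal cost c.
    # Standard results: total output is (n/(n+1))*(a-c)/b and total industry
    # profit is proportional to n/(n+1)**2.
    def after_tax_profit(n):
        pre_tax = n / (n + 1) ** 2       # proportional to total industry profit
        return pre_tax * (1 - 1 / n)     # each of n equal firms is taxed at rate 1/n

    print(max(range(1, 11), key=after_tax_profit))   # 3 -- (n-1)/(n+1)^2 peaks at n = 3

    def total_output(n, a=1.0, b=1.0, c=0.0):
        return (n / (n + 1)) * (a - c) / b

    print(total_output(3) / total_output(1))         # 1.5 -- three firms produce 50% more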

And once a market is highly competitive, the tax would essentially feel like a constant to each firm; if you are only 1% of the market, even doubling your production to make yourself 2% of the market would only increase your tax rate by 1 percentage point.

Indeed, if we really want to crack down on corporate tax avoidance, we could even charge this tax on sales rather than profits. You can’t avoid that by offshoring production; as long as you’re selling products in the US, you’ll be paying taxes in the US. Firms in a highly-competitive industry would still only pay a percentage point or two of tax, which is totally within a reasonable profit margin. The only firms that would find themselves suddenly unable to pay would be the huge multinationals that control double-digit percentages of the market. They wouldn’t just have an incentive to break up; they’d have no choice but to do so in order to survive.

What happened with GameStop?

Feb 7 JDN 2459253

No doubt by now you’ve heard about the recent bubble in GameStop stock that triggered several trading stops, nearly destroyed a hedge fund, and launched a thousand memes. What really strikes me about this whole thing is how ordinary it is: This is basically the sort of thing that happens in our financial markets all the time. So why are so many people suddenly paying so much attention to it?

There are a few important ways this is unusual: Most importantly, the bubble was triggered by a large number of middle-class people investing small amounts, rather than by a handful of billionaires or hedge funds. It’s also more explicitly collusive than usual, with public statements in writing about what stocks are being manipulated rather than hushed whispers between executives at golf courses. Partly as a consequence of these, the response from the government and the financial industry has been quite different as well, trying to halt trading and block transactions in a way that they would never do if the crisis had been caused by large financial institutions.

If you’re interested in the technical details of what happened, what a short squeeze is and how it can make a hedge fund lose enormous amounts of money unexpectedly, I recommend this summary by KQED. But the gist of it is simple enough: Melvin Capital placed huge bets that GameStop stock would fall in price, and a coalition of middle-class traders coordinated on Reddit to screw them over by buying a bunch of GameStop stock and driving up the price. It worked, and Melvin Capital lost something on the order of $3–5 billion in just a few days.

The particular kind of bet they placed is called a short, and it’s a completely routine practice on Wall Street, even though I’ve never quite understood why it should be allowed in the first place.

The essence of a short is quite simple: When you short, you are selling something you don’t own. You “borrow” it (it isn’t really even borrowing), and then sell it to someone else, promising to buy it back and return it to where you borrowed it from at some point in the future. This amounts to a bet that the price will decline, so that the price at which you buy it is lower than the price at which you sold it.
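
Mechanically, the payoff is simple; here is a toy sketch (real short sales also involve borrowing fees, margin requirements, and the risk of being forced to buy back early):

    # Toy payoff of a short sale: sell a borrowed share now, buy it back later.
    def short_profit(sell_price, buyback_price, borrow_fee=0.0):
        return sell_price - buyback_price - borrow_fee

    print(short_profit(sell_price=20.0, buyback_price=5.0))    # 15.0 -- the price fell; the short wins
    print(short_profit(sell_price=20.0, buyback_price=300.0))  # -280.0 -- the price soared; a short squeeze

Note that the potential loss is unbounded, since there is no ceiling on the buyback price; that is exactly what the GameStop buyers exploited.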

Doesn’t that seem like an odd thing to be allowed to do? Normally you can’t sell something you have merely borrowed. I can’t borrow a car and then sell it; car title in fact exists precisely to prevent this from happening. If I were to borrow your coat and then sell it to a thrift store, I’d have committed larceny. It’s really quite immaterial whether I plan to buy it back afterward; in general we do not allow people to sell things that they do not own.

Now perhaps the problem is that when I borrow your coat or your car, you expect me to return that precise object—not a similar coat or a car of equivalent Blue Book value, but your coat or your car. When I borrow a share of GameStop stock, no one really cares whether it is that specific share which I return—indeed, it would be almost impossible to even know whether it was. So in that way it’s a bit like borrowing money: If I borrow $20 from you, you don’t expect me to pay back that precise $20 bill. Indeed you’d be shocked if I did, since presumably I borrowed it in order to spend it or invest it, so how would I ever get it back?

But you also don’t sell money, generally speaking. Yes, there are currency exchanges and money-market accounts; but these are rather exceptional cases. In general, money is not bought and sold the way coats or cars are.

What about consumable commodities? You probably don’t care too much about any particular banana, sandwich, or gallon of gasoline. Perhaps in some circumstances we might “loan” someone a gallon of gasoline, intending them to repay us at some later time with a different gallon of gasoline. But far more likely, I think, would be simply giving a friend a gallon of gasoline and then not expecting any particular repayment except perhaps a vague offer of providing a similar favor in the future. I have in fact heard someone say the sentence “Can I borrow your sandwich?”, but it felt very odd when I heard it. (Indeed, I responded something like, “No, you can keep it.”)

And in order to actually be shorting gasoline (which is a thing that you, too, can do, perhaps even right now, if you have a margin account on a commodities exchange), it isn’t enough to borrow a gallon with the expectation of repaying a different gallon; you must also sell that gallon you borrowed. And now it seems very odd indeed to say to a friend, “Hey, can I borrow a gallon of gasoline so that I can sell it to someone for a profit?”

The usual arguments for why shorting should be allowed are much like the arguments for exotic financial instruments in general: “Increase liquidity”, “promote efficient markets”. These arguments are so general and so ubiquitous that they essentially amount to the strongest form of laissez-faire: Whatever Wall Street bankers feel like doing is fine and good and part of what makes American capitalism great.

In fact, I was never quite clear on why we decided to allow margin accounts at all; margin trading is inherently high-leverage and thus inherently high-risk. Borrowing money in order to arbitrage financial assets doesn’t just seem like a very risky thing to do; it has been implicated, one way or another, in virtually every financial crisis that has ever occurred. It would be an exaggeration to say that leveraged arbitrage is the one single cause of financial crises, but it would be a shockingly small exaggeration. I think it absolutely is fair to say that if leveraged arbitrage did not exist, financial crises would be far rarer and farther between.

Indeed, I am increasingly dubious of the whole idea of allowing arbitrage in general. Some amount of arbitrage may be unavoidable; there may always be people who see that prices are different for the same item in two different markets and exploit that difference before anyone can stop them. But this is a bit like saying that theft is probably inevitable: Yes, every human society that has had a system of property ownership (which is most of them—even communal hunter-gatherers have rules about personal property) has had some amount of theft. That doesn’t mean there is nothing we can do to reduce theft, or that we should simply allow theft wherever it occurs.

The moral argument against arbitrage is straightforward enough: You’re not doing anything. No good is produced; no service is provided. You are making money without actually contributing any real value to anyone. You just make money by having money. This is what people in the Middle Ages found suspicious about lending money at interest; but lending money actually is doing something—sometimes people need more money than they have, and lending it to them is providing a useful service for which you deserve some compensation.

A common argument economists make is that arbitrage will make prices more “efficient”, but when you ask them what they mean by “efficient”, the answer they give is that it removes arbitrage opportunities! So the good thing about arbitrage is that it stops you from doing more arbitrage?

And what if it doesn’t stop you? Many of the ways to exploit price gaps (particularly the simplest ones like “where it’s cheap, buy it; where it’s expensive, sell it”) will automatically close those gaps, but it’s not at all clear to me that all the ways to exploit price gaps will necessarily do so. And even if it’s a small minority of market manipulation strategies that exploit gaps without closing them, those are precisely the strategies that will be most profitable in the long run, because they don’t undermine their own success. Then, left to their own devices, markets will evolve to use such strategies more and more, because those are the strategies that work.

That is, in order for arbitrage to be beneficial, it must always be beneficial; there must be no way to exploit price gaps without inevitably closing those price gaps. If that is not the case, then evolutionary pressure will push more and more of the financial system toward using methods of arbitrage that don’t close gaps—or even exacerbate them. And indeed, when you look at how ludicrously volatile and crisis-prone our financial system has become, it sure looks an awful lot like an evolutionary equilibrium where harmful arbitrage strategies have evolved to dominate.

A world where arbitrage actually led to efficient pricing would be a world where the S&P 500 rises a steady 0.02% per day, each and every day. Maybe you’d see a big move when there was actually a major event, like the start of a war or the invention of a vaccine for a pandemic. You’d probably see a jump up or down of a percentage point or two with each quarterly Fed announcement. But daily moves of even five or six percentage points would be a very rare occurrence—because the real expected long-run aggregate value of the 500 largest publicly-traded corporations in America is what the S&P 500 is supposed to represent, and that is not a number that should change very much very often. The fact that I couldn’t really tell you what that number is without multi-trillion-dollar error bars is so much the worse for anyone who thinks that financial markets can somehow get it exactly right every minute of every day.

Moreover, it’s not hard to imagine how we might close price gaps without simply allowing people to exploit them. There could be a bunch of economists at the Federal Reserve whose job it is to locate markets where there are arbitrage opportunities, and then a bundle of government funds that they can allocate to buying and selling assets in order to close those price gaps. Any profits made are received by the treasury; any losses taken are borne by the treasury. The economists would get paid a comfortable salary, and perhaps get bonuses based on doing a good job in closing large or important price gaps; but there is no need to give them even a substantial fraction of the proceeds, much less all of it. This is already how our money supply is managed, and it works quite well, indeed obviously much better than an alternative with “skin in the game”: Can you imagine the dystopian nightmare we’d live in if the Chair of the Federal Reserve actually received even a 1% share of the US money supply? (Actually I think that’s basically what happened in Zimbabwe: The people who decided how much money to print got to keep a chunk of the money that was printed.)

I don’t actually think this GameStop bubble is all that important in itself. A decade from now, it may be no more memorable than Left Shark or the Macarena. But what is really striking about it is how little it differs from business-as-usual on Wall Street. The fact that a few million Redditors can gather together to buy a stock “for the lulz” or to “stick it to the Man” and thereby bring hedge funds to their knees is not such a big deal in itself, but it is symptomatic of much deeper structural flaws in our financial system.

The paperclippers are already here

Jan 24 JDN 2459239

Imagine a powerful artificial intelligence, composed of many parts distributed over a vast area so that it has no particular location. It is incapable of feeling any emotion: Neither love nor hate, neither joy nor sorrow, neither hope nor fear. It has no concept of ethics or morals, only its own programmed directives. It has one singular purpose, which it pursues at any cost. Any who aid its purpose are generously rewarded. Any who resist its purpose are mercilessly crushed.

The Less Wrong community has come to refer to such artificial intelligences as “paperclippers”; the metonymous singular directive is to maximize the number of paperclips produced. There’s even an online clicker game, “Universal Paperclips”, where you can play as one. The concern is that we might one day invent such artificial intelligences, and they could get out of control. The paperclippers won’t kill us because they hate us, but simply because we can be used to make more paperclips. This is a far more plausible scenario for the “AI apocalypse” than the more conventional sci-fi version where AIs try to kill us on purpose.

But I would say that the paperclippers are already here. Slow, analog versions perhaps. But they are already getting out of control. We call them corporations.

A corporation is probably not what you visualized when you read the first paragraph of this post, so try reading it again. Which parts are not true of corporations?

Perhaps you think a corporation is not an artificial intelligence? But clearly it’s artificial, and doesn’t it behave in ways that seem intelligent? A corporation has purpose beyond its employees in much the same way that a hive has purpose beyond its bees. A corporation is a human superorganism (and not the only kind either).

Corporations are absolutely, utterly amoral. Their sole directive is to maximize profit. Now, you might think that an individual CEO, or a board of directors, could decide to do something good, or refrain from something evil, for reasons other than profit; and to some extent this is true. But particularly when a corporation is publicly-traded, that CEO and those directors are beholden to shareholders. If shareholders see that the corporation is acting in ways that benefit the community but hurt their own profits, shareholders can rebel by selling their shares or even suing the company. In 1919, Dodge successfully sued Ford for the “crime” of setting wages too high and prices too low.

Humans are altruistic. We are capable of feeling, emotion, and compassion. Corporations are not. Corporations are made of human beings, but they are specifically structured to minimize the autonomy of human choices. They are designed to provide strong incentives to behave in a particular way so as to maximize profit. Even the CEO of a corporation, especially one that is publicly traded, has their hands tied most of the time by the desires of millions of shareholders and customers—so-called “market forces”. Corporations are entirely the result of human actions, but they feel like impersonal forces because they are the result of millions of independent choices, almost impossible to coordinate; so one individual has very little power to change the outcome.

Why would we create such entities? It almost feels as though we were conquered by some alien force that sought to enslave us to its own purposes. But no, we created corporations ourselves. We intentionally set up institutions designed to limit our own autonomy in the name of maximizing profit.

Part of the answer is efficiency: There are genuine gains in economic efficiency due to the corporate structure. Corporations can coordinate complex activity on a vast scale, with thousands or even millions of employees each doing what they are assigned without ever knowing—or needing to know—the whole of which they are a part.

But a publicly-traded corporation is far from the only way to do that. Even for-profit businesses are not the only way to organize production. And empirically, worker co-ops actually seem to be about as productive as corporations, while producing far less inequality and far more satisfied employees.

Thus, in order to explain the primacy of corporations, particularly those that are traded on stock markets, we must turn to ideology: The extreme laissez-faire concept of capitalism and its modern expression in the ideology of “shareholder value”. Somewhere along the way enough people—or at least enough policymakers—became convinced that the best way to run an economy was to hand over as much as possible to entities that exist entirely to maximize their own profits.

This is not to say that corporations should be abolished entirely. I am certainly not advocating a shift to central planning; I believe in private enterprise. But I should note that private enterprise can also include co-ops, partnerships, and closely-held businesses rather than publicly traded corporations, and perhaps that’s all we need. Yet there do seem to be significant advantages to the corporate structure: Corporations seem to be spectacularly good at scaling up the production of goods and providing them to a large number of customers. So let’s not get rid of corporations just yet.

Instead, let us keep corporations on a short leash. When properly regulated, corporations can be very efficient at producing goods. But corporations can also cause tremendous damage when given the opportunity. Regulations aren’t just “red tape” that gets in the way of production. They are a vital lifeline that protects us against countless abuses that corporations would otherwise commit.

These vast artificial intelligences are useful to us, so let’s not get rid of them. But never for a moment imagine that their goals are the same as ours. Keep them under close watch at all times, and compel them to use their great powers for good—for, left to their own devices, they can just as easily do great evil.