Reckoning costs in money distorts them

May 7 JDN 2460072

Consider for a moment what it means when an economic news article reports “rising labor costs”. What are they actually saying?

They’re saying that wages are rising—perhaps in some industry, perhaps in the economy as a whole. But this is not a cost. It’s a price. As I’ve written about before, the two are fundamentally distinct.

The cost of labor is measured in effort, toil, and time. It’s the pain of having to work instead of whatever else you’d like to do with your time.

The price of labor is a monetary amount, which is delivered in a transaction.

This may seem perfectly obvious, but it has important and oft-neglected implications. A cost, once paid, is gone. That value has been destroyed. We hope that it was worth it for some benefit we gained. A price, when paid, is simply transferred: One person had that money before, now someone else has it. Nothing was gained or lost.

So in fact when reports say that “labor costs have risen”, what they are really saying is that income is being transferred from owners to workers without any change in real value taking place. They are framing as a loss what is fundamentally a zero-sum redistribution.

In fact, it is disturbingly common to see a fundamentally good redistribution of income framed in the press as a bad outcome because of its expression as “costs”; the “cost” of chocolate is feared to go up if we insist upon enforcing bans on forced labor—when in fact it is only the price that goes up, and the cost actually goes down: chocolate would no longer include complicity in an atrocity. The real suffering of making chocolate would be thereby reduced, not increased. Even when they aren’t literally enslaved, those workers are astonishingly poor, and giving them even a few more cents per hour would make a real difference in their lives. But God forbid we pay a few cents more for a candy bar!

If labor costs were to rise, that would mean that work had suddenly gotten harder, or more painful; or else, that some outside circumstance had made it more difficult to work. Having a child increases your labor costs—you now have the opportunity cost of not caring for the child. COVID increased the cost of labor, by making it suddenly dangerous just to go outside in public. That could also increase prices—you may demand a higher wage, and people do seem to have demanded higher wages after COVID. But these are two separate effects, and you can have one without the other. In fact, women typically see wage stagnation or even reduction after having kids (but men largely don’t), despite their real opportunity cost of labor having obviously greatly increased.

On an individual level, it’s not such a big mistake to equate price and cost. If you are buying something, its cost to you basically just is its price, plus a little bit of transaction cost for actually finding and buying it. But on a societal level, it makes an enormous difference. It distorts our policy priorities and can even lead to actively trying to suppress things that are beneficial—such as rising wages.

This false equivalence between price and cost seems to be at least as common among economists as it is among laypeople. Economists will often justify it on the grounds that in an ideal perfect competitive market the two would be in some sense equated. But of course we don’t live in that ideal perfect market, and even if we did, they would only be proportional at the margin, not fundamentally equal across the board. It would still be obviously wrong to characterize the total value or cost of work by the price paid for it; only the last unit of effort would be priced so that marginal value equals price equals marginal cost. The first 39 hours of your work would cost you less than what you were paid, and produce more than you were paid; only that 40th hour would set the three equal.

Once you account for all the various market distortions in the world, there’s no particular relationship between what something costs—in terms of real effort and suffering—and its price—in monetary terms. Things can be expensive and easy, or cheap and awful. In fact, they often seem to be; for some reason, there seems to be a pattern where the most terrible, miserable jobs (e.g. coal mining) actually pay the least, and the easiest, most pleasant jobs (e.g. stock trading) pay the most. Some jobs that benefit society pay well (e.g. doctors) and others pay terribly or not at all (e.g. climate activists). Some actions that harm the world get punished (e.g. armed robbery) and others get rewarded with riches (e.g. oil drilling). In the real world, whether a job is good or bad and whether it is paid well or poorly seem to be almost unrelated.

In fact, sometimes they seem even negatively related, where we often feel tempted to “sell out” and do something destructive in order to get higher pay. This is likely due to Berkson’s paradox: If people are willing to do jobs if they are either high-paying or beneficial to humanity, then we should expect that, on average, most of the high-paying jobs people do won’t be beneficial to humanity. Even if there were inherently no correlation or a small positive one, people’s refusal to do harmful low-paying work removes those jobs from our sample and results in a negative correlation in what remains.

I think that the best solution, ultimately, is to stop reckoning costs in money entirely. We should reckon them in happiness.

This is of course much more difficult than simply using prices; it’s not easy to say exactly how many QALY are sacrificed in the extraction of cocoa beans or the drilling of offshore oil wells. But if we actually did find a way to count them, I strongly suspect we’d find that it was far more than we ought to be willing to pay.

A very rough approximation, surely flawed but at least a start, would be to simply convert all payments into proportions of their recipient’s income: For full-time wages, this would result in basically everyone being counted the same, as 1 hour of work, if you work 40 hours per week, 50 weeks per year, is precisely 0.05% of your annual income. So we could say that whatever is equivalent to your hourly wage constitutes 500 microQALY.

This automatically implies that every time a rich person pays a poor person, QALY increase, while every time a poor person pays a rich person, QALY decrease. This is not an error in the calculation. It is a fact of the universe. We ignore it only at our own peril. All wealth redistributed downward is a benefit, while all wealth redistributed upward is a harm. That benefit may cause some other harm, or that harm may be compensated by some other benefit; but they are still there.

This would also put some things in perspective. When HSBC was fined £70 million for its crimes, that can be compared against its £1.5 billion in net income; if it were an individual, it would have been hurt about 50 milliQALY, which is about what I would feel if I lost $2000. Of course, it’s not a person, and it’s not clear exactly how this loss was passed through to employees or shareholders; but that should give us at least some sense of how small that loss was for them. They probably felt it… a little.

When Trump was ordered to pay a $1.3 million settlement, based on his $2.5 billion net wealth (corresponding to roughly $125 million in annual investment income), that cost him about 10 milliQALY; for me that would be about $500.

At the other extreme, if someone goes from making $1 per day to making $1.50 per day, that’s a 50% increase in their income—500 milliQALY per year.
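Here’s a minimal sketch of that conversion in code, assuming one year of income counts as 1 QALY (which is the normalization behind all the figures above; the specific numbers are just the rough estimates already given):

```python
# Minimal sketch: express a payment as a fraction of someone's annual income,
# and read that fraction as QALY (assuming 1 full year of income = 1 QALY).

def payment_in_qaly(payment, annual_income):
    """Convert a monetary amount into QALY for a given income."""
    return payment / annual_income

# One hour of full-time work (40 hours/week, 50 weeks/year):
print(f"{1 / (40 * 50) * 1e6:.0f} microQALY")                  # 500 microQALY

# The HSBC fine and the Trump settlement from above:
print(f"{payment_in_qaly(70e6, 1.5e9) * 1e3:.0f} milliQALY")   # ~47, i.e. about 50
print(f"{payment_in_qaly(1.3e6, 125e6) * 1e3:.0f} milliQALY")  # ~10

# A raise from $1.00 to $1.50 per day, as a share of the original income:
print(f"{payment_in_qaly(0.50 * 365, 1.00 * 365) * 1e3:.0f} milliQALY per year")  # 500
```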

For those who have no income at all, this becomes even trickier; for them I think we should probably use their annual consumption, since everyone needs to eat and that costs something, though likely not very much. Or we could try to measure their happiness directly, trying to determine how much it hurts to not eat enough and work all day in sweltering heat.

Properly shifting this whole cultural norm will take a long time. For now, I leave you with this: Any time you see a monetary figure, ask yourself: “How much is that worth to them?” The world will seem quite different once you get in the habit of that.

What happens when a bank fails

Mar 19 JDN 2460023

As of March 9, Silicon Valley Bank (SVB) has failed and officially been put into receivership under the FDIC. A bank that held $209 billion in assets has suddenly become insolvent.

This is the second-largest bank failure in US history, after Washington Mutual (WaMu) in 2008. In fact it will probably have more serious consequences than WaMu, for two reasons:

1. WaMu collapsed as part of the Great Recession, so there were already a lot of other things going on and a lot of policy responses already in place.

2. WaMu was mostly a conventional commercial bank that held deposits and loans for consumers, so its deposits were largely protected by the FDIC, and thus its bankruptcy didn’t cause contagion that spread out to the rest of the system. (Other banks—shadow banks—did during the crash, but not so much WaMu.) SVB mostly served tech startups, so a whopping 89% of its deposits were not protected by FDIC insurance.

You’ve likely heard of many of the companies that had accounts at SVB: Roku, Roblox, Vimeo, even Vox. Stocks of the US financial industry lost $100 billion in value in two days.

The good news is that this will not be catastrophic. It probably won’t even trigger a recession (though the high interest rates we’ve been having lately potentially could drive us over that edge). Because this is commercial banking, it’s done out in the open, with transparency and reasonably good regulation. The FDIC knows what they are doing, and even though they aren’t covering all those deposits directly, they intend to find a buyer for the bank who will, and odds are good that they’ll be able to cover at least 80% of the lost funds.

In fact, while this one is exceptionally large, bank failures are not really all that uncommon. There have been nearly 100 failures of banks with assets over $1 billion in the US alone just since the 1970s. The FDIC exists to handle bank failures, and generally does the job well.

Then again, it’s worth asking whether we should really have a banking system in which failures are so routine.

The reason banks fail is kind of a dark open secret: They don’t actually have enough money to cover their deposits.

Banks loan away most of their cash, and rely upon the fact that most of their depositors will not want to withdraw their money at the same time. They are required to keep a certain ratio in reserves, but it’s usually fairly small, like 10%. This is called fractional-reserve banking.

As long as less than 10% of deposits get withdrawn at any given time, this works. But if a bunch of depositors suddenly decide to take out their money, the bank may not have enough to cover it all, and suddenly become insolvent.

In fact, the fear that a bank might become insolvent can actually cause it to become insolvent, in a self-fulfilling prophecy. Once depositors get word that the bank is about to fail, they rush to be the first to get their money out before it disappears. This is a bank run, and it’s basically what happened to SVB.

The FDIC was originally created to prevent or mitigate bank runs. Not only did they provide insurance that reduced the damage in the event of a bank failure; by assuring depositors that their money would be recovered even if the bank failed, they also reduced the chances of a bank run becoming a self-fulfilling prophecy.


Indeed, SVB is the exception that proves the rule, as they failed largely because their deposits were mainly not FDIC insured.

Fractional-reserve banking effectively allows banks to create money, in the form of credit that they offer to borrowers. That credit gets deposited in other banks, which then go on to loan it out to still others; the result is that there is more money in the system than was ever actually printed by the central bank.

In most economies this commercial bank money is a far larger quantity than the central bank money actually printed by the central bank—often nearly 10 to 1. This ratio is called the money multiplier.

Indeed, it’s not a coincidence that the reserve ratio is 10% and the multiplier is 10; the theoretical maximum multiplier is always the inverse of the reserve ratio, so if you require reserves of 10%, the highest multiplier you can get is 10. Had we required 20% reserves, the multiplier would drop to 5.
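Here’s a minimal sketch of that arithmetic, assuming every loan gets redeposited in full (the theoretical maximum; real multipliers are smaller):

```python
# Toy sketch of the money multiplier: deposits are loaned out (minus reserves)
# and redeposited, forming a geometric series that sums to base / reserve_ratio.

def total_deposits(base_money, reserve_ratio, rounds=200):
    deposits = 0.0
    injection = base_money
    for _ in range(rounds):
        deposits += injection               # new deposit at some bank
        injection *= (1 - reserve_ratio)    # portion loaned out and redeposited
    return deposits

base = 1_000
for ratio in (0.10, 0.20):
    d = total_deposits(base, ratio)
    print(f"reserve ratio {ratio:.0%}: deposits ≈ {d:,.0f}, multiplier ≈ {d / base:.1f}")

# reserve ratio 10%: deposits ≈ 10,000, multiplier ≈ 10.0
# reserve ratio 20%: deposits ≈ 5,000,  multiplier ≈ 5.0
```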

Most countries have fractional-reserve banking, and have for centuries; but it’s actually a pretty weird system if you think about it.

Back when we were on the gold standard, fractional-reserve banking was a way of cheating, getting our money supply to be larger than the supply of gold would actually allow.

But now that we are on a pure fiat money system, it’s worth asking what fractional-reserve banking actually accomplishes. If we need more money, the central bank could just print more. Why do we delegate that task to commercial banks?

David Friedman of the Cato Institute had some especially harsh words on this, but honestly I find them hard to disagree with:

Before leaving the subject of fractional reserve systems, I should mention one particularly bizarre variant — a fractional reserve system based on fiat money. I call it bizarre because the essential function of a fractional reserve system is to reduce the resource cost of producing money, by allowing an ounce of reserves to replace, say, five ounces of currency. The resource cost of producing fiat money is zero; more precisely, it costs no more to print a five-dollar bill than a one-dollar bill, so the cost of having a larger number of dollars in circulation is zero. The cost of having more bills in circulation is not zero but small. A fractional reserve system based on fiat money thus economizes on the cost of producing something that costs nothing to produce; it adds the disadvantages of a fractional reserve system to the disadvantages of a fiat system without adding any corresponding advantages. It makes sense only as a discreet way of transferring some of the income that the government receives from producing money to the banking system, and is worth mentioning at all only because it is the system presently in use in this country.

Our banking system evolved gradually over time, and seems to have held onto many features that made more sense in an earlier era. Back when we had arbitrarily tied our central bank money supply to gold, creating a new money supply that was larger may have been a reasonable solution. But today, it just seems to be handing the reins over to private corporations, giving them more profits while forcing the rest of society to bear more risk.

The obvious alternative is full-reserve banking, where banks are simply required to hold 100% of their deposits in reserve and the multiplier drops to 1. This idea has been supported by a number of quite prominent economists, including Milton Friedman.

It’s not just a right-wing idea: The left-wing organization Positive Money is dedicated to advocating for a full-reserve banking system in the UK and EU. (The ECB VP’s criticism of the proposal is utterly baffling to me: it “would not create enough funding for investment and growth.” Um, you do know you can print more money, right? Hm, come to think of it, maybe the ECB doesn’t know that, because they think inflation is literally Hitler. There are legitimate criticisms to be had of Positive Money’s proposal, but “There won’t be enough money under this fiat money system” is a really weird take.)

There’s a relatively simple way to gradually transition from our current system to a full-reserve system: Simply increase the reserve ratio over time, and print more central bank money to keep the total money supply constant. If we find that it seems to be causing more problems than it solves, we could stop or reverse the trend.
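Here’s a purely illustrative sketch of what that schedule could look like, using the simple identity that the broad money supply equals central bank money divided by the reserve ratio (the dollar figure is a made-up round number):

```python
# Illustrative only: hold the broad money supply fixed while raising the
# reserve ratio, and see how much central bank money must be printed.

broad_money = 20_000     # billions of dollars of broad money (made-up round number)
reserve_ratio = 0.10

for year in range(1, 11):
    reserve_ratio = min(1.0, reserve_ratio + 0.09)   # tighten gradually
    base_needed = broad_money * reserve_ratio         # base = M * reserve ratio
    print(f"year {year}: reserve ratio {reserve_ratio:.0%}, "
          f"central bank money needed ≈ {base_needed:,.0f}B")

# By the time the ratio hits 100%, every deposit is backed in full by central
# bank money, and the broad money supply never had to change.
```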

Krugman has pointed out that this wouldn’t really fix the problems in the banking system, which actually seem to be much worse in the shadow banking sector than in conventional commercial banking. This is clearly right, but it isn’t really an argument against trying to improve conventional banking. I guess if stricter regulations on conventional banking push more money into the shadow banking system, that’s bad; but really that just means we should be imposing stricter regulations on the shadow banking system first (or simultaneously).

We don’t need to accept bank runs as a routine part of the financial system. There are other ways of doing things.

There should be a glut of nurses.

Jan 15 JDN 2459960

It will not be news to most of you that there is a worldwide shortage of healthcare staff, especially nurses and emergency medical technicians (EMTs). I would like you to stop and think about the utterly terrible policy failure this represents. Maybe if enough people do, we can figure out a way to fix it.

It goes without saying—yet bears repeating—that people die when you don’t have enough nurses and EMTs. Indeed, surely a large proportion of the 2.6 million (!) deaths each year from medical errors are attributable to this. It is likely that at least one million lives per year could be saved by fixing this problem worldwide. In the US alone, over 250,000 deaths per year are caused by medical errors; so we’re looking at something like 100,000 lives we could save each year by removing staffing shortages.

Precisely because these jobs have such high stakes, the mere fact that we would ever see the word “shortage” beside “nurse” or “EMT” was already clear evidence of dramatic policy failure.

This is not like other jobs. A shortage of accountants or baristas or even teachers, while a bad thing, is something that market forces can be expected to correct in time, and it wouldn’t be unreasonable to simply let them do so—meaning, let wages rise on their own until the market is restored to equilibrium. A “shortage” of stockbrokers or corporate lawyers would in fact be a boon to our civilization. But a shortage of nurses or EMTs or firefighters (yes, there are those too!) is a disaster.

Partly this is due to the COVID pandemic, which has been longer and more severe than any but the most pessimistic analysts predicted. But there were shortages of nurses before COVID. There should not have been. There should have been a massive glut.

Even if there hadn’t been a shortage of healthcare staff before the pandemic, the fact that there wasn’t a glut was already a problem.

This is what a properly-functioning healthcare policy would look like: Most nurses are bored most of the time. They are widely regarded as overpaid. People go into nursing because it’s a comfortable, easy career with very high pay and usually not very much work. Hospitals spend most of their time with half their beds empty and half of their ambulances parked while the drivers and EMTs sit around drinking coffee and watching football games.

Why? Because healthcare, especially emergency care, involves risk, and the stakes couldn’t be higher. If the number of severely sick people doubles—as in, say, a pandemic—a hospital that usually runs at 98% capacity won’t be able to deal with them. But a hospital that usually runs at 50% capacity will.

COVID exposed to the world what a careful analysis would already have shown: There was not nearly enough redundancy in our healthcare system. We had been optimizing for a narrow-minded, short-sighted notion of “efficiency” over what we really needed, which was resiliency and robustness.

I’d like to compare this to two other types of jobs.

The first is stockbrokers. Set aside for a moment the fact that most of what they do is worthless if not actively detrimental to human society. Suppose that their most adamant boosters are correct and what they do is actually really important and beneficial.

Their experience is almost like what I just said nurses ought to be. They are widely regarded (correctly) as very overpaid. There is never any shortage of them; there are people lining up to be hired. People go into the work not because they care about it or even because they are particularly good at it, but because they know it’s an easy way to make a lot of money.

The one thing that seems to be different from my image may not be as different as it seems. Stockbrokers work long hours, but nobody can really explain why. Frankly most of what they do can be—and has been—successfully automated. Since there simply isn’t that much work for them to do, my guess is that most of the time they spend “working” 60-80 hour weeks is not actually working, but sitting around pretending to work. Since most financial forecasters are outperformed by a simple diversified portfolio, the most profitable action for most stock analysts to take most of the time would be nothing.

It may also be that stockbrokers work hard at sales—trying to convince people to buy and sell for bad reasons in order to earn sales commissions. This would at least explain why they work so many hours, though it would make it even harder to believe that what they do benefits society. So if we imagine our “ideal” stockbroker who makes the world a better place, I think they mostly just use a simple algorithm and maybe adjust it every month or two. They make better returns than their peers, but spend 38 hours a week goofing off.

There is a massive glut of stockbrokers. This is what it looks like when a civilization is really optimized to be good at something.

The second is soldiers. Say what you will about them, no one can dispute that their job has stakes of life and death. A lot of people seem to think that the world would be better off without them, but that’s at best only true if everyone got rid of them; if you don’t have soldiers but other countries do, you’re going to be in big trouble. (“We’ll beat our swords into liverwurst / Down by the East Riverside; / But no one wants to be the first!”) So unless and until we can solve that mother of all coordination problems, we need to have soldiers around.

What is life like for a soldier? Well, they don’t seem overpaid; if anything, underpaid. (Maybe some of the officers are overpaid, but clearly not most of the enlisted personnel. Part of the problem there is that “pay grade” is nearly synonymous with “rank”—it’s a primate hierarchy, not a rational wage structure. Then again, so are most industries; the military just makes it more explicit.) But there do seem to be enough of them. Military officials may lament “shortages” of soldiers, but they never actually seem to want for troops to deploy when they really need them. And if a major war really did start that required all available manpower, the draft could be reinstated and then suddenly they’d have it—the authority to coerce compliance is precisely how you can avoid having a shortage while keeping your workers underpaid. (Russia’s soldier shortage is genuine—something about being utterly outclassed by your enemy’s technological superiority in an obviously pointless imperialistic war seems to hurt your recruiting numbers.)

What is life like for a typical soldier? The answer may surprise you. The overwhelming answer in surveys and interviews (which also fits with the experiences I’ve heard about from friends and family in the military) is that life as a soldier is boring. “All you do is wake up in the morning and push rubbish around camp.” “Bosnia was scary for about 3 months. After that it was boring. That is pretty much day to day life in the military. You are bored.”

This isn’t new, nor even an artifact of not being in any major wars: Union soldiers in the US Civil War had the same complaint. Even in World War I, a typical soldier spent only half the time on the front, and when on the front only saw combat 1/5 of the time. War is boring.

In other words, there is a massive glut of soldiers. Most of them don’t even know what to do with themselves most of the time.

This makes perfect sense. Why? Because an army needs to be resilient. And to be resilient, you must be redundant. If you only had exactly enough soldiers to deploy in a typical engagement, you’d never have enough for a really severe engagement. If on average you had enough, that means you’d spend half the time with too few. And the costs of having too few soldiers are utterly catastrophic.

This is probably an evolutionary outcome, in fact; civilizations may have tried to have “leaner” militaries that didn’t have so much redundancy, and those civilizations were conquered by other civilizations that were more profligate. (This is not to say that we couldn’t afford to cut military spending at all; it’s one thing to have the largest military in the world—I support that, actually—but quite another to have more than the next 10 combined.)

What’s the policy solution here? It’s actually pretty simple.

Pay nurses and EMTs more. A lot more. Whatever it takes to get to the point where we not only have enough, but have so many people lining up to join we don’t even know what to do with them all. If private healthcare firms won’t do it, force them to—or, all the more reason to nationalize healthcare. The stakes are far too high to leave things as they are.

Would this be expensive? Sure.

Removing the shortage of EMTs wouldn’t even be that expensive. There are only about 260,000 EMTs in the US, and they get paid the appallingly low median salary of $36,000. That means we’re currently spending only about $9 billion per year on EMTs. We could double their salaries and double their numbers for only an extra $27 billion—about 0.1% of US GDP.

Nurses would cost more. There are about 5 million nurses in the US, with an average salary of about $78,000, so we’re currently spending about $390 billion a year on nurses. We probably can’t afford to double both salary and staffing. But maybe we could increase both by 20%, costing about an extra $170 billion per year.

Altogether that would cost about $200 billion per year. To save one hundred thousand lives.

That’s $2 million per life saved, or about $40,000 per QALY. The usual estimate for the value of a statistical life is about $10 million, and the usual threshold for a cost-effective medical intervention is $50,000-$100,000 per QALY; so we’re well under both. This isn’t as efficient as buying malaria nets in Africa, but it’s more efficient than plenty of other things we’re spending on. And this isn’t even counting additional benefits of better care that go beyond lives saved.
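Here’s the whole back-of-the-envelope calculation in one place; the staffing and salary figures are the rough estimates above, and the 50 QALY per life saved is an assumption chosen to match the $2 million and $40,000 figures:

```python
# Back-of-the-envelope costs of ending the EMT and nurse shortages,
# using the rough figures from the post.

extra_emt_cost = (2 * 260_000) * (2 * 36_000) - 260_000 * 36_000            # ≈ $28B/yr
extra_nurse_cost = (1.2 * 5_000_000) * (1.2 * 78_000) - 5_000_000 * 78_000  # ≈ $172B/yr

total = extra_emt_cost + extra_nurse_cost
lives_saved = 100_000
qaly_per_life = 50          # assumed healthy years gained per death averted

print(f"extra cost: ${total / 1e9:,.0f}B per year")
print(f"per life saved: ${total / lives_saved / 1e6:.1f}M")
print(f"per QALY: ${total / (lives_saved * qaly_per_life):,.0f}")
# ≈ $200B per year, ≈ $2.0M per life, ≈ $40,000 per QALY
```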

In fact if we nationalized US healthcare we could get more than these amounts in savings from not wasting our money on profits for insurance and drug companies—simply making the US healthcare system as cost-effective as Canada’s would save $6,000 per American per year, or a whopping $1.9 trillion. At that point we could double the number of nurses and their salaries and still be spending less.

No, it’s not because nurses and doctors are paid much less in Canada than the US. That’s true in some countries, but not Canada. The median salary for nurses in Canada is about $95,500 CAD, which is $71,000 US at current exchange rates. Doctors in Canada can make anywhere from $80,000 to $400,000 CAD, which is $60,000 to $300,000 US. Nor are healthcare outcomes in Canada worse than the US; if anything, they’re better, as Canadians live an average of four years longer than Americans. No, the radical difference in cost—a factor of 2 to 1—between Canada and the US comes from privatization. Privatization is supposed to make things more efficient and lower costs, but it has absolutely not done that in US healthcare.

And if our choice is between spending more money and letting hundreds of thousands or millions of people die every year, that’s no choice at all.

The Efficient Roulette Hypothesis

Nov 27 JDN 2459911

The efficient market hypothesis is often stated in several different ways, and these are often treated as equivalent. There are at least three very different definitions of it that people seem to use interchangeably:

  1. Market prices are optimal and efficient.
  2. Market prices aggregate and reflect all publicly-available relevant information.
  3. Market prices are difficult or impossible to predict.

The first reading, I will call the efficiency hypothesis, because, well, it is what we would expect a phrase like “efficient market hypothesis” to mean. The ordinary meaning of those words would imply that we are asserting that market prices are in some way optimal or near-optimal, that markets get prices “right” in some sense at least the vast majority of the time.

The second reading I’ll call the information hypothesis; it implies that market prices are an information aggregation mechanism which automatically incorporates all publicly-available information. This already seems quite different from efficiency, but it seems at least tangentially related, since information aggregation could be one useful function that markets serve.

The third reading I will call the unpredictability hypothesis; it says simply that market prices are very difficult to predict, and so you can’t reasonably expect to make money by anticipating market price changes far in advance of everyone else. But as I’ll get to in more detail shortly, that doesn’t have the slightest thing to do with efficiency.

The empirical data in favor of the unpredictability hypothesis is quite overwhelming. It’s exceedingly hard to beat the market, and for most people, most of the time, the smartest way to invest is just to buy a diversified portfolio and let it sit.

The empirical data in favor of the information hypothesis is mixed, but it’s at least plausible; most prices do seem to respond to public announcements of information in ways we would expect, and prediction markets can be surprisingly accurate at forecasting the future.

The empirical data in favor of the efficiency hypothesis, on the other hand, is basically nonexistent. On the one hand this is a difficult hypothesis to test directly, since it isn’t clear what sort of benchmark we should be comparing against—so it risks being not even wrong. But if you consider basically any plausible standard one could try to set for how an efficient market would run, our actual financial markets in no way resemble it. They are erratic, jumping up and down for stupid reasons or no reason at all. They are prone to bubbles, wildly overvaluing worthless assets. They have collapsed governments and ruined millions of lives without cause. They have resulted in the highest-paying people in the world doing jobs that accomplish basically nothing of genuine value. They are, in short, a paradigmatic example of what inefficiency looks like.

Yet, we still have economists who insist that “the efficient market hypothesis” is a proven fact, because the unpredictability hypothesis is clearly correct.

I do not think this is an accident. It’s not a mistake, or an awkwardly-chosen technical term that people are misinterpreting.

This is a motte and bailey doctrine.

Motte-and-bailey was a strategy in medieval warfare. Defending an entire region is very difficult, so instead what was often done was constructing a small, highly defensible fortification—the motte—while accepting that the land surrounding it—the bailey—would not be well-defended. Most of the time, the people stayed on the bailey, where the land was fertile and it was relatively pleasant to live. But should they be attacked, they could retreat to the motte and defend themselves until the danger was defeated.

A motte-and-bailey doctrine is an analogous strategy used in argumentation. You use the same words for two different versions of an idea: The motte is a narrow, defensible core of your idea that you can provide strong evidence for, but which doesn’t claim very much and may not even be interesting or controversial. The bailey is a broad, expansive version of your idea that is interesting and controversial and leads to lots of significant conclusions, but can’t be well-supported by evidence.

The bailey is the efficiency hypothesis: That market prices are optimal and we are fools to try to intervene or even regulate them because the almighty Invisible Hand is superior to us.

The motte is the unpredictability hypothesis: Market prices are very hard to predict, and most people who try to make money by beating the market fail.

By referring to both of these very different ideas as “the efficient market hypothesis”, economists can act as if they are defending the bailey, and prescribe policies that deregulate financial markets on the grounds that they are so optimal and efficient; but then when pressed for evidence to support their beliefs, they can pivot to the motte, and merely show that markets are unpredictable. As long as people don’t catch on and recognize that these are two very different meanings of “the efficient market hypothesis”, then they can use the evidence for unpredictability to support their goal of deregulation.

Yet when you look closely at this argument, it collapses. Unpredictability is not evidence of efficiency; if anything, it’s the opposite. Since the world doesn’t really change on a minute-by-minute basis, an efficient system should actually be relatively predictable in the short term. If prices reflected the real value of companies, they would change only very gradually, as the fortunes of the company change as a result of real-world events. An earthquake or a discovery of a new mine would change stock prices in relevant industries; but most of the time, they’d be basically flat. The occurrence of minute-by-minute or even second-by-second changes in prices basically proves that we are not tracking any genuine changes in value.

Roulette wheels are extremely unpredictable by design—by law, even—and yet no one would accuse them of being an efficient way of allocating resources. If you bet on roulette wheels and try to beat the house, you will almost surely fail, just as you would if you try to beat the stock market—and dare I say, for much the same reasons?

So if we’re going to insist that “efficiency” just means unpredictability, rather than actual, you know, efficiency, then we should all speak of the Efficient Roulette Hypothesis. Anything we can’t predict is now automatically “efficient” and should therefore be left unregulated.

Small deviations can have large consequences.

Jun 26 JDN 2459787

A common rejoinder that behavioral economists get from neoclassical economists is that most people are mostly rational most of the time, so what’s the big deal? If humans are 90% rational, why worry so much about the other 10%?

Well, it turns out that small deviations from rationality can have surprisingly large consequences. Let’s consider an example.

Suppose we have a market for some asset. Without even trying to veil my ulterior motive, let’s make that asset Bitcoin. Its fundamental value is of course $0; it’s not backed by anything (not even taxes or a central bank), it has no particular uses that aren’t already better served by existing methods, and it’s not even scalable.

Now, suppose that 99% of the population rationally recognizes that the fundamental value of the asset is indeed $0. But 1% of the population doesn’t; they irrationally believe that the asset is worth $20,000. What will the price of that asset be, in equilibrium?

If you assume that the majority will prevail, it should be $0. If you did some kind of weighted average, you’d think maybe its price will be something positive but relatively small, like $200. But is this actually the price it will take on?

Consider someone who currently owns 1 unit of the asset, and recognizes that it is fundamentally worthless. What should they do? Well, if they also know that there are people out there who believe it is worth $20,000, the answer is obvious: They should sell it to those people. Indeed, they should sell it for something quite close to $20,000 if they can.

Now, suppose they don’t already own the asset, but are considering whether or not to buy it. They know it’s worthless, but they also know that there are people who will buy it for close to $20,000. Here’s the kicker: This is a reason for them to buy it at anything meaningfully less than $20,000.

Suppose, for instance, they could buy it for $10,000. Spending $10,000 to buy something you know is worthless seems like a terribly irrational thing to do. But it isn’t irrational, if you also know that somewhere out there is someone who will pay $20,000 for that same asset and you have a reasonable chance of finding that person and selling it to them.

The equilibrium outcome, then, is that the price of the asset will be almost $20,000! Even though 99% of the population recognizes that this asset is worthless, the fact that 1% of people believe it’s worth as much as a car will result in it selling at that price. Thus, even a slight deviation from a perfectly-rational population can result in a market that is radically at odds with reality.
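Here’s a toy way to see it in code (an illustrative model, not a rigorous one): suppose each period the current holder meets one potential buyer, who with probability q = 1% is a believer willing to pay V = $20,000 and otherwise is a rational trader who will pay at most the asset’s resale value W; discounting the future at rate r per period and solving the resulting equation gives W = qV/(q + r):

```python
# Toy resale model: a rational holder meets one potential buyer per period.
# With probability q the buyer is a "believer" who pays V; otherwise the buyer
# is rational and pays at most the same resale value W. Discounting at rate r,
# W = (q*V + (1-q)*W) / (1+r), which solves to W = q*V / (q + r).

def resale_value(V=20_000, q=0.01, r=0.001):
    return q * V / (q + r)

for r in (0.01, 0.001, 0.0001):
    print(f"per-period discount rate {r}: equilibrium price ≈ ${resale_value(r=r):,.0f}")

# As the cost of waiting for a believer shrinks, the price a fully rational
# trader will pay for a fundamentally worthless asset approaches $20,000,
# even though believers are only 1% of the population.
```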

And it gets worse! Suppose that in fact everyone knows that the asset is worthless, but most people think that there is some small portion of the population who believes the asset has value. Then, it will still be priced at that value in equilibrium, as people trade it back and forth searching in vain for the person who really wants it! (This is called the Greater Fool theory.)

That is, the price of an asset in a free market—even in a market where most people are mostly rational most of the time—will in fact be determined by the highest price anyone believes that anyone else thinks it has. And this is true of essentially any asset market—any market where people are buying something, not to use it, but to sell it to someone else.

Of course, beliefs—and particularly beliefs about beliefs—can very easily change, so that equilibrium price could move in any direction basically without warning.

Suddenly, the cycle of bubble and crash, boom and bust, doesn’t seem so surprising, does it? The wonder is that prices ever become stable at all.


Then again, do they? Last I checked, the only prices that were remotely stable were for goods like apples and cars and televisions, goods that are bought and sold to be consumed. (Or national currencies managed by competent central banks, whose entire job involves doing whatever it takes to keep those prices stable.) For pretty much everything else—and certainly any purely financial asset that isn’t a national currency—prices are indeed precisely as wildly unpredictable and utterly irrational as this model would predict.

So much for the Efficient Market Hypothesis? Sadly I doubt that the people who still believe this nonsense will be convinced.

If I had a trillion dollars…

May 29 JDN 2459729

(To the tune of “If I had a million dollars” by Barenaked Ladies; by the way, he does now)

[Inspired by the book How to Spend a Trillion Dollars]

If I had a trillion dollars… if I had a trillion dollars!

I’d buy everyone a house—and yes, I mean, every homeless American.

[500,000 homeless households * $300,000 median home price = $150 billion]

If I had a trillion dollars… if I had a trillion dollars!

I’d give to the extreme poor—and then there would be no extreme poor!

[Global poverty gap: $160 billion]

If I had a trillion dollars… if I had a trillion dollars!

I’d send people to Mars—hey, maybe we’d find some alien life!

[Estimated cost of manned Mars mission: $100 billion]

If I had a trillion dollars… if I had a trillion dollars!

I’d build us a Moon base—haven’t you always wanted a Moon base?

[Estimated cost of a permanent Lunar base: $35 billion. NASA is bad at forecasting cost, so let’s allow cost overruns to take us to $100 billion.]

If I had a trillion dollars… if I had a trillion dollars!

I’d build a new particle accelerator—let’s finally figure out dark matter!

[Cost of planned new accelerator at CERN: $24 billion. Let’s do 4 times bigger and make it $100 billion.]

If I had a trillion dollars… if I had a trillion dollars!

I’d save the Amazon—pay all the ranchers to do something else!

[Brazil, where 90% of Amazon cattle ranching is, produces about 10 million tons of beef per year, which at an average price of $5000 per ton is $50 billion. So I could pay all the farmers two years of revenue to protect the Amazon instead of destroying it for $100 billion.]

If I had a trillion dollars…

We wouldn’t have to drive anymore!

If I had a trillion dollars…

We’d build high-speed rail—it won’t cost more!

[Cost of proposed high-speed rail system: $240 billion]

If I had a trillion dollars… if I had a trillion dollars!

Hey wait, I could get it from a carbon tax!

[Even a moderate carbon tax could raise $1 trillion in 10 years.]

If I had a trillion dollars… I’d save the world….

All of the above really could be done for under $1 trillion. (Some of them would need to be repeated, so we could call it $1 trillion per year.)

I, of course, do not, and will almost certainly never have, anything approaching $1 trillion.

But here’s the thing: There are people who do.

Elon Musk and Jeff Bezos together have a staggering $350 billion. That’s two people with enough money to end world hunger. And don’t give me that old excuse that it’s not in cash: UNICEF gladly accepts donations in stock. They could, right now, give their stocks to UNICEF and thereby end world hunger. They are choosing not to do that. In fact, the goodwill generated by giving, say, half their stocks to UNICEF might actually result in enough people buying into their companies that their stock prices would rise enough to make up the difference—thus costing them literally nothing.

The total net wealth of all the world’s billionaires is a mind-boggling $12.7 trillion. That’s more than half a year of US GDP. Held by just over 2600 people—a small town.

The US government spends $4 trillion in a normal year—and $5 trillion the last couple of years due to the pandemic. Nearly $1 trillion of that is military spending, which could be cut in half and still be the highest in the world. After seeing how pathetic Russia’s army actually is in battle (they paint Zs on their tanks because apparently their IFF system is useless!), are we really still scared of them? Do we really need eleven carrier battle groups?

Yes, the total cost of mitigating climate change is probably in the tens of trillions—but the cost of not mitigating climate change could be over $100 trillion. And it’s not as if the world can’t come up with tens of trillions; we already do. World GDP is now over $100 trillion per year; just 2% of that for 10 years is $20 trillion.

Do these sound like good ideas to you? Would you want to do them? I think most people would want most of them. So now the question becomes: Why aren’t we doing them?

The economic impact of chronic illness

Mar 27 JDN 2459666

This topic is quite personal for me, as someone who has suffered from chronic migraines since adolescence. Some days, weeks, and months are better than others. This past month has been the worst I have felt since 2019, when we moved into an apartment that turned out to be full of mold. This time, there is no clear trigger—which also means no easy escape.

The economic impact of chronic illness is enormous. 90% of US healthcare spending is on people with chronic illnesses, including mental illnesses—and the US has the most expensive healthcare system in the world by almost any measure. Over 55% of adult Medicaid beneficiaries have two or more chronic illnesses.

The total annual cost of all chronic illnesses is hard to estimate, but it’s definitely somewhere in the trillions of dollars per year. The World Economic Forum estimated that number at $47 trillion over the next 20 years, which I actually consider conservative. I think this is counting how much we actually spend and some notion of lost productivity, as well as the (fraught) concept of the value of a statistical life—but I don’t think it’s putting a sensible value on the actual suffering. This will effectively undervalue poor people who are suffering severely but can’t get treated—because they spend little and can’t put a large dollar value on their lives. In the US, where the data is the best, the total cost of chronic illness comes to nearly $4 trillion per year—20% of GDP. If other countries are as bad or worse (and I don’t see why they would be better), then we’re looking at something like $17 trillion in real cost every single year; so over the next 20 years that’s not $47 trillion—it’s over $340 trillion.
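Spelling out that extrapolation (the GDP values are rough approximations of my own, chosen to match the percentages above):

```python
# Rough reconstruction of the extrapolation: take the US cost share of GDP
# and apply it to world GDP, then multiply over 20 years.

us_cost = 4e12        # ~$4 trillion per year in the US (the post's figure)
us_gdp = 20e12        # approximate US GDP, so cost ≈ 20% of GDP
world_gdp = 85e12     # approximate world GDP

share = us_cost / us_gdp
per_year = share * world_gdp
print(f"{share:.0%} of GDP")
print(f"≈ ${per_year / 1e12:.0f} trillion per year worldwide")
print(f"≈ ${20 * per_year / 1e12:.0f} trillion over 20 years")   # ≈ $340 trillion
```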

Over half of US adults have at least one of the following, and over a quarter have two or more: arthritis, cancer, chronic obstructive pulmonary disease, coronary heart disease, current asthma, diabetes, hepatitis, hypertension, stroke, or kidney disease. (Actually the former very nearly implies the latter, unless chronic conditions somehow prevented one another. Two statistically independent events with 50% probability will jointly occur 25% of the time: Flip two coins.)

Unsurprisingly, age is positively correlated with chronic illness. Income is negatively correlated, both because chronic illnesses reduce job opportunities and because poorer people have more trouble getting good treatment. I am the exception that proves the rule, the upper-middle-class professional with both a PhD and a severe chronic illness.

There seems to be a common perception that chronic illness is largely a “First World problem”, but in fact chronic illnesses are more common—and much more poorly treated—in countries with low and moderate levels of development than they are in the most highly-developed countries. Over 75% of all deaths by non-communicable disease are in low- and middle-income countries. The proportion of deaths that is caused by non-communicable diseases is higher in high-income countries—but that’s because other diseases have been basically eradicated from high-income countries. People in rich countries actually suffer less from chronic illness than people in poor countries (on average).

It’s always a good idea to be careful of the distinction between incidence and prevalence, but with chronic illness this is particularly important, because (almost by definition) chronic illnesses last longer and so can have very high prevalence even with low incidence. Indeed, the odds of someone getting their first migraine (incidence) are low precisely because the odds of being someone who gets migraines (prevalence) is so high.

Quite high in fact: About 10% of men and 20% of women get migraines at least occasionally—though only about 8% of these (so 1% of men and 2% of women) get chronic migraines. Indeed, because it is both common and can be quite severe, migraine is the second-most disabling condition worldwide as measured by years lived with disability (YLD), after low back pain. Neurologists are particularly likely to get migraines; the paper I linked speculates that they are better at realizing they have migraines, but I think we also need to consider the possibility of self-selection bias where people with migraines may be more likely to become neurologists. (I considered it, and it seems at least as good a reason as becoming a dentist because your name is Denise.)

If you order causes by the number of disability-adjusted life years (DALYs) they cost, chronic conditions rank quite high: while cardiovascular disease and cancer rate by far the highest, diabetes and kidney disease, mental disorders, neurological disorders, and musculoskeletal disorders all rate higher than malaria, HIV, or any other infection except respiratory infections (read: tuberculosis, influenza, and, once these charts are updated for the next few years, COVID). Note also that at the very bottom is “conflict and terrorism”—that’s all organized violence in the world—and natural disasters. Mental disorders alone cost the world 20 times as many DALYs as all conflict and terrorism combined.

Cryptocurrency and its failures

Jan 30 JDN 2459620

It started out as a neat idea, though very much a solution in search of a problem. Using encryption, could we decentralize currency and eliminate the need for a central bank?

Well, it’s been a few years now, and we have now seen how well that went. Bitcoin recently crashed, but it has always been astonishingly volatile. As a speculative asset, such volatility is often tolerable—for many, even profitable. But as a currency, it is completely unbearable. People need to know that their money will be a store of value and a medium of exchange—and something that changes price one minute to the next is neither.

Some of cryptocurrency’s failures have been hilarious, like the ill-fated island called [yes, really] “Cryptoland”, which crashed and burned when they couldn’t find any investors to help them buy the island.

Others have been darkly comic, but tragic in their human consequences. Chief among these was the failed attempt by El Salvador to make Bitcoin an official currency.

At the time, President Bukele justified it by an economically baffling argument: Total value of all Bitcoin in the world is $680 billion, therefore if even 1% gets invested in El Salvador, GDP will increase by $6.8 billion, which is 25%!

First of all, that would only happen if 1% of all Bitcoin were invested in El Salvador each year—otherwise you’re looking at a one-time injection of money, not an increase in GDP.

But more importantly, this is like saying that the total US dollar supply is $6 trillion (that’s physical cash; the actual money supply is considerably larger), so maybe by dollarizing your economy you can get 1% of that—$60 billion, baby! No, that’s not how any of this works. Dollarizing could still be a good idea (though it didn’t go all that well in El Salvador), but it won’t give you some kind of share in the US economy. You can’t collect dividends on US GDP.

It’s actually a good thing that El Salvador’s experiment in Bitcoin failed the way it did: Nobody bought into it in the first place. They couldn’t convince people to buy government assets that were backed by Bitcoin (perhaps because the assets were a strictly worse deal than just, er, buying Bitcoin). So the human cost of this idiotic experiment should be relatively minimal: It’s not like people are losing their homes over this.

That is, unless President Bukele doubles down, which he now appears to be doing. Even people who are big fans of cryptocurrency are unimpressed with El Salvador’s approach to it.

It would be one thing if there were some stable cryptocurrency that one could try pegging one’s national currency to, but there isn’t. Even so-called stablecoins are generally pegged to… regular currencies, typically the US dollar but also sometimes the Euro or a few other currencies. (I’ve seen the Australian Dollar and the Swiss Franc, but oddly enough, not the Pound Sterling.)

Or a country could try issuing its own cryptocurrency, as an all-digital currency instead of one that is partly paper. It’s not totally clear to me what advantages this would have over the current system (in which most of the money supply is bank deposits, i.e. already digital), but it would at least preserve the key advantage of having a central bank that can regulate your money supply.

But no, President Bukele decided to take an already-existing cryptocurrency, backed by nothing but the whims of the market, and make it legal tender. Somehow he missed the fact that a currency which rises and falls by 10% in a single day is generally considered bad.

Why? Is he just an idiot? I mean, maybe, though Bukele’s approval rating is astonishingly high. (And El Salvador is… mostly democratic. Unlike, say, Putin’s, I think these approval ratings are basically real.) But that’s not the only reason. My guess is that he was gripped by the same FOMO that has gripped everyone else who evangelizes for Bitcoin. The allure of easy money is often irresistible.

Consider President Bukele’s position. You’re governing a poor, war-torn country which has had economic problems of various types since its founding. When the national currency collapsed a generation ago, the country was put on the US dollar, but that didn’t solve the problem. So you’re looking for a better solution to the monetary doldrums your country has been in for decades.

You hear about a fancy new monetary technology, “cryptocurrency”, which has all the tech people really excited and seems to be making tons of money. You don’t understand a thing about it—hardly anyone seems to, in fact—but you know that people with a lot of insider knowledge of technology and finance are really invested in it, so it seems like there must be something good here. So, you decide to launch a program that will convert your country’s currency from the US dollar to one of these new cryptocurrencies—and you pick the most famous one, which is also extremely valuable, Bitcoin.

Could cryptocurrencies be the future of money, you wonder? Could this be the way to save your country’s economy?

Despite all the evidence that had already accumulated that cryptocurrency wasn’t working, I can understand why Bukele would be tempted by that dream. Just as we’d all like to get free money without having to work, he wanted to save his country’s economy without having to implement costly and unpopular reforms.

But there is no easy money. Not really. Some people get lucky; but they ultimately benefit from other people’s hard work.

The lesson here is deeper than cryptocurrency. Yes, clearly, it was a dumb idea to try to make Bitcoin a national currency, and it will get even dumber if Bukele really does double down on it. But more than that, we must all resist the lure of easy money. If it sounds too good to be true, it probably is.

Low-skill jobs

Dec 5 JDN 2459554

I’ve seen this claim going around social media for a while now: “Low-skill jobs are a classist myth created to justify poverty wages.”

I can understand why people would say things like this. I even appreciate that many low-skill jobs are underpaid and unfairly stigmatized. But it’s going a bit too far to claim that there is no such thing as a low-skill job.

Suppose all the world’s physicists and all the world’s truckers suddenly had to trade jobs for a month. Who would have a harder time?

If a mathematician were asked to do the work of a janitor, they’d be annoyed. If a janitor were asked to do the work of a mathematician, they’d be completely nonplussed.

I could keep going: Compare robotics engineers to dockworkers or software developers to fruit pickers.

Higher pay does not automatically equate to higher skills: welders are clearly more skilled than stock traders. Give any welder a million-dollar account and a few days of training, and they could do just as well as the average stock trader (which is to say, worse than the S&P 500). Give any stock trader welding equipment and a similar amount of training, and they’d be lucky to not burn their fingers off, much less actually usefully weld anything.

This is not to say that any random person off the street could do just as well as a janitor or dockworker as someone who has years of experience at that job. It is simply to say that they could do better—and pick up the necessary skills faster—than a random person trying to work as a physicist or software developer.

Moreover, this does justify some difference in pay. If some jobs are easier than others, in the sense that more people are qualified to do them, then the harder jobs will need to pay more in order to attract good talent—if they didn’t, they’d risk their high-skill workers going and working at the low-skill jobs instead.

This is of course assuming all else equal, which is clearly not the case. No two jobs are the same, and there are plenty of other considerations that go into choosing someone’s wage: For one, not simply what skills are required, but also the effort and unpleasantness involved in doing the work. I’m entirely prepared to believe that being a dockworker is less fun than being a physicist, and this should reduce the differential in pay between them. Indeed, it may have: Dockworkers are paid relatively well as far as low-skill jobs go—though nowhere near what physicists are paid. Then again, productivity is also a vital consideration, and there is a general tendency that high-skill jobs tend to be objectively more productive: A handful of robotics engineers can do what was once the work of hundreds of factory laborers.

There are also ways for a worker to be profitable without being particularly productive—that is, to be very good at rent-seeking. This is arguably the case for lawyers and real estate agents, and undeniably the case for derivatives traders and stockbrokers. Corporate executives aren’t stupid; they wouldn’t pay these workers astronomical salaries if they weren’t making money doing so. But it’s quite possible to make lots of money without actually producing anything of particular value for human society.

But that doesn’t mean that wages are always fair. Indeed, I dare say they typically are not. One of the most important determinants of wages is bargaining power. Unions don’t increase skill and probably don’t increase productivity—but they certainly increase wages, because they increase bargaining power.

And this is also something that’s correlated with lower levels of skill, because the more people there are who know how to do what you do, the harder it is for you to make yourself irreplaceable. A mathematician who works on the frontiers of conformal geometry or Teichmüller theory may literally be one of ten people in the world who can do what they do (quite frankly, even the number of people who know what they do is considerably constrained, though probably still at least in the millions). A dockworker, even one who is particularly good at loading cargo skillfully and safely, is still competing with millions of other people with similar skills. The easier a worker is to replace, the less bargaining power they have—in much the same way that a monopoly has higher profits than an oligopoly, which has higher profits than a competitive market.
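
For readers who want to see the arithmetic behind that last analogy, here is a standard textbook Cournot calculation (the demand and cost numbers below are made up purely for illustration): total profit falls steadily as the number of competing sellers grows, and the same logic applies to sellers of labor.

```python
# A standard textbook Cournot calculation, not taken from the post itself;
# all numbers are made up purely for illustration.
# Linear demand P = a - Q, constant marginal cost c, n identical firms.
a, c = 100.0, 20.0  # hypothetical demand intercept and marginal cost

def cournot_industry_profit(n: int) -> float:
    """Total equilibrium profit across n symmetric Cournot competitors."""
    per_firm_quantity = (a - c) / (n + 1)
    price = a - n * per_firm_quantity
    return n * (price - c) * per_firm_quantity

for n in (1, 2, 5, 100):  # monopoly, duopoly, small oligopoly, near-competitive
    print(n, round(cournot_industry_profit(n), 1))
# Prints roughly: 1600.0, 1422.2, 888.9, 62.7 -- industry profit shrinks
# steadily as the number of competing sellers grows.
```

With these made-up numbers, total profit falls from 1600 under monopoly to about 63 with a hundred competitors, which is the sense in which easier-to-replace sellers (of goods or of labor) have less to bargain with.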

This is why I support unions. I’m also a fan of co-ops, and an ardent supporter of progressive taxation and safety regulations. So don’t get me wrong: Plenty of low-skill workers are mistreated and underpaid, and they deserve better.

But that doesn’t change the fact that it’s a lot easier to be a janitor than a physicist.

Are unions collusion?

Oct 31 JDN 2459519

The standard argument from center-right economists against labor unions is that they are a form of collusion: Producers are coordinating and intentionally holding back from what would be in their individual self-interest in order to gain a collective advantage. And this is basically true: In the broadest sense of the term, labor unions are a form of collusion. Since collusion is generally regarded as bad, therefore (this argument goes), unions are bad.

What this argument misses is why collusion is generally regarded as bad. The typical case of collusion is between large corporations, each of which already controls a large share of the market—collusion then allows them to act as if they control an even larger share, potentially even acting as a monopoly.

Labor unions are not like this. Literally no individual laborer controls a large segment of the market. (Some very specialized laborers, like professional athletes, or, say, economists, might control a not completely trivial segment of their particular job market—but we’re still talking something like 1% at most. Even Tiger Woods or Paul Krugman is not literally irreplaceable.) Moreover, even the largest unions can rarely achieve anything like a monopoly over a particular labor market.

Thus whereas typical collusion involves going from a large market share to an even larger—often even dominant—market share, labor unions involve going from a tiny market share to a moderate—and usually not dominant—market share.

But that, by itself, wouldn’t be enough to justify unions. While small family businesses banding together in collusion is surely less harmful than large corporations doing the same, it would probably still be a bad thing, insofar as it would raise prices and reduce the quantity or quality of products sold. It would just be less bad.

Yet unions differ from even this milder collusion in another important respect: They do not exist to increase bargaining power versus consumers. They exist to increase bargaining power versus corporations.

And corporations, it turns out, already have a great deal of bargaining power. While a labor union acts as something like a monopoly (or at least oligopoly), corporations act like the opposite: oligopsony or even monopsony.

While monopoly or monopsony on its own is highly unfair and inefficient, the combination of the two—bilateral monopoly—is actually relatively fair and efficient. Bilateral monopoly is probably not as good as a truly competitive market, but it is definitely better than either a monopoly or a monopsony alone. Whereas a monopoly has too much bargaining power for the seller (resulting in prices that are too high), and a monopsony has too much bargaining power for the buyer (resulting in prices that are too low), a bilateral monopoly has relatively balanced bargaining power, and thus gets an outcome that’s not too different from fair competition in a free market.
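
To put rough numbers on this, here is a minimal sketch of the logic. The demand and supply parameters below are entirely hypothetical, and the even split of the bargained wage is only meant to illustrate “relatively balanced bargaining power”, not to model how real negotiations resolve.

```python
# A minimal numeric sketch of the bilateral-monopoly argument above.
# All parameters are hypothetical; the 50/50 split at the end is just an
# illustration of balanced bargaining power, not a model of real negotiations.

# Labor demand (marginal revenue product): w = A - B*L
# Labor supply (workers' reservation wages): w = a + b*L
A, B = 60.0, 1.0   # demand intercept and slope (hypothetical)
a, b = 10.0, 0.5   # supply intercept and slope (hypothetical)

# Competitive benchmark: demand meets supply.
L_comp = (A - a) / (B + b)
w_comp = a + b * L_comp

# Monopsony alone: the employer's marginal cost of labor is a + 2*b*L, so it
# hires where A - B*L = a + 2*b*L and pays only the supply wage for that L.
L_mono = (A - a) / (B + 2 * b)
w_mono = a + b * L_mono

# Union alone (a simple wage-setting union): picks the wage on the demand
# curve that maximizes total rents (w - a) * L, which works out to (A + a) / 2.
w_union = (A + a) / 2
L_union = (A - w_union) / B

# Bilateral monopoly: the wage is bargained somewhere between the monopsony
# wage and the union's preferred wage; an even split lands near the
# competitive benchmark.
w_bargain = 0.5 * (w_mono + w_union)

print(f"competitive:  L = {L_comp:.1f}, w = {w_comp:.2f}")   # L = 33.3, w = 26.67
print(f"monopsony:    L = {L_mono:.1f}, w = {w_mono:.2f}")   # L = 25.0, w = 22.50
print(f"union alone:  L = {L_union:.1f}, w = {w_union:.2f}") # L = 25.0, w = 35.00
print(f"bargained wage (even split): {w_bargain:.2f}")       # 28.75
```

With these particular made-up numbers, the monopsony wage is 22.50, the union’s preferred wage is 35.00, and the bargained wage of 28.75 sits close to the competitive benchmark of 26.67: each side’s excess bargaining power roughly cancels the other’s.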

Thus, unions really exist as a correction mechanism for the excessive bargaining power of corporations. Most unions organize workers in large industries who work for a relatively small number of employers, such as miners, truckers, and factory workers. (Teachers are also an interesting example, because they work for the government, which effectively has a monopsony on public education services.) In isolation they may seem inefficient; but in context they really exist to compensate for other, worse inefficiencies.

We could imagine a world where this was not so: Say there is a market with many independent buyers who are unwilling or unable to reliably collude, and they are served by a small number of powerful unions that use their bargaining power to raise prices and reduce output.

We have some markets that already look a bit like that: Consider the licensing systems for doctors and lawyers. These are basically guilds, which are collusive in the same way as labor unions.

Note that unlike, say, miners, truckers, or factory workers, doctors and lawyers are not a large segment of the population; they are bargaining against consumers just as much as corporations; and they are extremely well-paid and very likely undersupplied. (Doctors are definitely undersupplied; with lawyers it’s a bit more complicated, but given how often corporations get away with terrible things and don’t get sued for it, I think it’s fair to say that in the current system, lawyers are undersupplied.) So I think it is fair to be concerned that the guild systems for doctors and lawyers are too powerful. We want some system for certifying the quality of doctors and lawyers, but the existing standards are so demanding that they result in a shortage of much-needed labor.

One way to tell that unions aren’t inefficient is to look at how unionization relates to unemployment. If unions were acting as a harmful monopoly on labor, unemployment should be higher in places with greater unionization rates. The empirical data suggest that if there is any such effect, it’s a small one. There are far more important determinants of unemployment than unionization. (Wages, on the other hand, show a strong positive link with unionization.) Much like the standard prediction that raising the minimum wage would reduce employment, the prediction that unions raise unemployment has largely not been borne out by the data. And for much the same reason: We had ignored the bargaining power of employers, which the minimum wage and unions both reduce.
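
As a rough sketch of the kind of empirical check this paragraph describes, one could regress unemployment (and wages) on union density across states. The dataset and column names below are placeholders, not a reference to any particular data source.

```python
# Sketch of the kind of cross-sectional check described above. The file name
# and column names are placeholders; this is not a real dataset or a real
# published regression.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_labor_data.csv")  # hypothetical state-level data

# If unions acted as a harmful labor monopoly, union density should predict
# noticeably higher unemployment; the claim in the text is that any such
# relationship is weak.
unemployment_model = smf.ols("unemployment_rate ~ union_density", data=df).fit()
print(unemployment_model.summary())

# Wages, by contrast, are expected to rise with union density.
wage_model = smf.ols("avg_wage ~ union_density", data=df).fit()
print(wage_model.summary())
```

A serious version would of course need controls and panel methods; this is only meant to show the shape of the comparison.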

Thus, the justifiability of unions isn’t something that we could infer a priori without looking at the actual structure of the labor market. Unions aren’t always or inherently good—but they are usually good in the system as it stands. (Actually there’s one particular class of unions that do not seem to be good, and that’s police unions: But this is a topic for another time.)

My ultimate conclusion? Yes, unions are a form of collusion. But to infer from this that they must be bad is to commit the Noncentral Fallacy. Unions are the good kind of collusion.