A common rejoinder that behavioral economists get from neoclassical economists is that most people are mostly rational most of the time, so what’s the big deal? If humans are 90% rational, why worry so much about the other 10%?
Well, it turns out that small deviations from rationality can have surprisingly large consequences. Let’s consider an example.
Suppose we have a market for some asset. Without even trying to veil my ulterior motive, let’s make that asset Bitcoin. Its fundamental value is of course $0; it’s not backed by anything (not even taxes or a central bank), it has no particular uses that aren’t already better served by existing methods, and it’s not even scalable.
Now, suppose that 99% of the population rationally recognizes that the fundamental value of the asset is indeed $0. But 1% of the population doesn’t; they irrationally believe that the asset is worth $20,000. What will the price of that asset be, in equilibrium?
If you assume that the majority will prevail, it should be $0. If you did some kind of weighted average, you’d think maybe its price will be something positive but relatively small, like $200. But is this actually the price it will take on?
Consider someone who currently owns 1 unit of the asset, and recognizes that it is fundamentally worthless. What should they do? Well, if they also know that there are people out there who believe it is worth $20,000, the answer is obvious: They should sell it to those people. Indeed, they should sell it for something quite close to $20,000 if they can.
Now, suppose they don’t already own the asset, but are considering whether or not to buy it. They know it’s worthless, but they also know that there are people who will buy it for close to $20,000. Here’s the kicker: This is a reason for them to buy it at anything meaningfully less than $20,000.
Suppose, for instance, they could buy it for $10,000. Spending $10,000 to buy something you know is worthless seems like a terribly irrational thing to do. But it isn’t irrational, if you also know that somewhere out there is someone who will pay $20,000 for that same asset and you have a reasonable chance of finding that person and selling it to them.
The equilibrium outcome, then, is that the price of the asset will be almost $20,000! Even though 99% of the population recognizes that this asset is worthless, the fact that 1% of people believe it’s worth as much as a car will result in it selling at that price. Thus, even a slight deviation from a perfectly-rational population can result in a market that is radically at odds with reality.
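If you like, the bidding-up logic can be captured in a few lines of toy code (my own sketch, not anything rigorous): each rational trader values the asset at the best offer they expect to find, minus a small search-and-transaction cost, and we iterate that reservation value to a fixed point. The believer price and the trading cost are illustrative assumptions.

```python
# Toy fixed-point sketch of Greater Fool pricing (illustrative numbers).
BELIEVER_PRICE = 20_000.0   # what the irrational 1% will pay
TRADING_COST = 10.0         # assumed per-trade search/transaction cost

value = 0.0                 # start from the fundamental value, $0
for _ in range(100):
    # The best buyer you expect to find is either a believer or another
    # rational trader pricing in their own expected resale.
    best_offer = max(BELIEVER_PRICE, value)
    value = max(0.0, best_offer - TRADING_COST)

print(value)  # → 19990.0
```

The only thing pinning the price below $20,000 at all is the trading cost; shrink it and the equilibrium price gets arbitrarily close to the believers' valuation, exactly as the argument above says.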
And it gets worse! Suppose that in fact everyone knows that the asset is worthless, but most people think that there is some small portion of the population who believes the asset has value. Then, it will still be priced at that value in equilibrium, as people trade it back and forth searching in vain for the person who really wants it! (This is called the Greater Fool theory.)
That is, the price of an asset in a free market—even in a market where most people are mostly rational most of the time—will in fact be determined by the highest price anyone believes that anyone else thinks it has. And this is true of essentially any asset market—any market where people are buying something, not to use it, but to sell it to someone else.
Of course, beliefs—and particularly beliefs about beliefs—can very easily change, so that equilibrium price could move in any direction basically without warning.
Suddenly, the cycle of bubble and crash, boom and bust, doesn’t seem so surprising, does it? The wonder is that prices ever become stable at all.
Then again, do they? Last I checked, the only prices that were remotely stable were for goods like apples and cars and televisions, goods that are bought and sold to be consumed. (Or national currencies managed by competent central banks, whose entire job involves doing whatever it takes to keep those prices stable.) For pretty much everything else—and certainly any purely financial asset that isn’t a national currency—prices are indeed precisely as wildly unpredictable and utterly irrational as this model would predict.
So much for the Efficient Market Hypothesis? Sadly I doubt that the people who still believe this nonsense will be convinced.
While a return to double-digits remains possible, at this point it likely won’t happen, and if it does, it will occur only briefly.
This is no doubt a major reason why the dollar and the pound (especially the dollar) are widely used as reserve currencies: they are managed by the world’s most competent central banks. Brexit would almost have made sense if the UK had been pressured to join the Euro; but they weren’t, because everyone knew the pound was better managed.
The Euro also doesn’t have much inflation, but if anything they err on the side of too low, mainly because Germany appears to believe that inflation is literally Hitler. In fact, the rise of the Nazis didn’t have much to do with the Weimar hyperinflation. The Great Depression was by far a greater factor—unemployment is much, much worse than inflation. (By the way, it’s weird that that graph goes back to the 1980s. It, uh, wasn’t the Euro then. Euros didn’t start circulating until 1999. Is that an aggregate of the franc and the deutsche mark and whatever else? The Euro itself has never had double-digit inflation—ever.)
But it’s always a little surreal for me to see how panicked people in the US and UK get when our inflation rises a couple of percentage points. There seems to be an entire subgenre of economics news that basically consists of rich people saying the sky is falling because inflation has risen—or will, or may rise—by two points. (Hey, anybody got any ideas how we can get them to panic like this over rises in sea level or aggregate temperature?)
Hyperinflation is a real problem—it isn’t what put Hitler into power, but it has led to real crises in Germany, Zimbabwe, and elsewhere. Once you start getting over 100% per year, and especially when it starts rapidly accelerating, that’s a genuine crisis. Moreover, even though they clearly don’t constitute hyperinflation, I can see why people might legitimately worry about price increases of 20% or 30% per year. (Let alone 60% like Argentina is dealing with right now.) But why is going from 2% to 6% any cause for alarm? Yet alarmed we seem to be.
I can even understand why rich people would be upset about inflation (though the magnitude of their concern does still seem disproportionate). Inflation erodes the value of financial assets, because most bonds, options, etc. are denominated in nominal, not inflation-adjusted terms. (Though there are such things as inflation-indexed bonds.) So high inflation can in fact make rich people slightly less rich.
But why in the world are so many poor people upset about inflation?
Inflation doesn’t just erode the value of financial assets; it also erodes the value of financial debts. And most poor people have more debts than they have assets—indeed, it’s not uncommon for poor people to have substantial debt and no financial assets to speak of (what little wealth they have being non-financial, e.g. a car or a home). Thus, their net wealth position improves as prices rise.
The interest rate response can compensate for this to some extent, but most people’s debts are fixed-rate. Moreover, if it’s the higher interest rates you’re worried about, you should want the Federal Reserve and the Bank of England not to fight inflation too hard, because the way they fight it is chiefly by raising interest rates.
I admit, I question the survey design here: I would answer ‘yes’ to both questions if we’re talking about a theoretical 10,000% hyperinflation, but ‘no’ if we’re talking about a realistic 10% inflation. So I would like to see, but could not find, a survey asking people what level of inflation is sufficient cause for concern. But since most of these people seemed concerned about actual, realistic inflation (85% reported anger at seeing actual, higher prices), it still suggests a lot of strong feelings that even mild inflation is bad.
So it does seem to be the case that a lot of poor and middle-class people really strongly dislike inflation even in the actual, mild levels in which it occurs in the US and UK.
The main fear seems to be that inflation will erode people’s purchasing power—that as the price of gasoline and groceries rise, people won’t be able to eat as well or drive as much. And that, indeed, would be a real loss of utility worth worrying about.
But in fact this makes very little sense: Most forms of income—particularly labor income, which is the only real income for some 80%-90% of the population—actually increase with inflation, more or less one-to-one. Yes, there’s some delay—you won’t get your annual cost-of-living raise immediately, but several months down the road. But this could have at most a small effect on your real consumption.
To see this, suppose that inflation has risen from 2% to 6%. (Really, you need not suppose; it has.) Now consider your cost-of-living raise, which nearly everyone gets. It will presumably rise the same way: So if it was 3% before, it will now be 7%. Now consider how much your purchasing power is affected over the course of the year.
For concreteness, let’s say your initial income was $3,000 per month at the start of the year (a fairly typical amount for a middle-class American, indeed almost exactly the median personal income). Let’s compare the case of no inflation with a 1% raise, 2% inflation with a 3% raise, and 6% inflation with a 7% raise.
If there was no inflation, your real income would remain simply $3,000 per month, until the end of the year when it would become $3,030 per month. That’s the baseline to compare against.
If inflation is 2%, your real income would gradually fall, by about 0.16% per month, before being bumped up 3% at the end of the year. So in January you’d have $2,995, in February $2,990, in March $2,985. Come December, your real income has fallen to $2,941. But then next January it will immediately be bumped up 3% to $3,029, almost the same as it would have been with no inflation at all. The total lost income over the entire year is about $380, or about 1% of your total income.
If inflation instead rises to 6%, your real income will fall by about 0.49% per month, reaching a minimum of $2,830 in December before being bumped back up to $3,028 next January. Your total loss for the whole year will be about $1,110, or about 3% of your total income.
Indeed, it’s a pretty good heuristic to say that for an inflation rate of x% with annual cost-of-living raises, your loss of real income relative to having no inflation at all is about (x/2)%. (This breaks down for really high levels of inflation, at which point it becomes a wild over-estimate, since even 200% inflation doesn’t make your real income go to zero.)
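For anyone who wants to check the arithmetic, here is a quick sketch, using the same assumptions as the example above: prices rise smoothly month by month, and the cost-of-living raise arrives the following January.

```python
# Total real income lost over a year, relative to a zero-inflation baseline,
# for a fixed nominal paycheck eroded smoothly by inflation each month.
def real_income_loss(inflation, monthly_income=3000.0, months=12):
    loss = 0.0
    for m in range(1, months + 1):
        real = monthly_income / (1 + inflation) ** (m / 12)
        loss += monthly_income - real
    return loss

print(round(real_income_loss(0.02)))  # roughly $380, as in the text
print(round(real_income_loss(0.06)))  # roughly $1,110, as in the text
```

Dividing by the $36,000 annual income gives about 1% and 3% respectively, matching the (x/2)% heuristic.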
This isn’t nothing, of course. You’d feel it. Going from 2% to 6% inflation at an income of $3000 per month is like losing $700 over the course of a year, which could be a month of groceries for a family of four. (Not that anyone can really raise a family of four on a single middle-class income these days. When did The Simpsons begin to seem aspirational?)
But this isn’t the whole story. Suppose that this same family of four had a mortgage payment of $1000 per month; that is also decreasing in real value by the same proportion. And let’s assume it’s a fixed-rate mortgage, as most are, so we don’t have to factor in any changes in interest rates.
With no inflation, their mortgage payment remains $1000. It’s 33.3% of their income this year, and it will be 33.0% of their income next year after they get that 1% raise.
With 2% inflation, their mortgage payment will also fall in real terms by 0.16% per month: $998 in January, $997 in February, and so on, down to $980 in December. This amounts to an increase in real income of about $130—taking away a third of the loss that was introduced by the inflation.
With 6% inflation, their mortgage payment will also fall by 0.49% per month: $995 in January, $990 in February, and so on, until it’s only $943 in December. This amounts to an increase in real income of over $370—again taking away a third of the loss.
Indeed, it’s no coincidence that it’s one third; the proportion of lost real income you’ll get back by cheaper mortgage payments is precisely the proportion of your income that was spent on mortgage payments at the start—so if, like too many Americans, they are paying more than a third of their income on mortgage, their real loss of income from inflation will be even lower.
And what if they are renting instead? They’re probably on an annual lease, so that payment won’t increase in nominal terms either—and hence will decrease in real terms, in just the same way as a mortgage payment. Likewise car payments, credit card payments, any debt that has a fixed interest rate. If they’re still paying back student loans, their financial situation is almost certainly improved by inflation.
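The one-third claw-back is easy to verify directly: a fixed nominal payment loses real value by exactly the same monthly factor as the paycheck, so the fraction of the inflation loss it recovers is just the payment's share of income. A quick sketch, under the same smooth-monthly-inflation assumption as the example above:

```python
# Real value lost over a year by any fixed nominal monthly amount.
def real_value_lost(nominal, inflation, months=12):
    return sum(nominal - nominal / (1 + inflation) ** (m / 12)
               for m in range(1, months + 1))

income_loss = real_value_lost(3000, 0.06)    # paycheck's lost purchasing power
mortgage_gain = real_value_lost(1000, 0.06)  # debt payment's lost real cost

print(round(mortgage_gain))                   # roughly $370, as in the text
print(round(mortgage_gain / income_loss, 3))  # exactly 1/3: the mortgage share
```

Because the erosion is linear in the nominal amount, the ratio is exactly $1,000/$3,000 regardless of the inflation rate, which is the "no coincidence" point made above.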
This means that the real loss from an increase of inflation from 2% to 6% is something like 1.5% of total income, or about $500 for a typical American adult. That’s clearly not nearly as bad as a similar increase in unemployment, which would translate one-to-one into lost income on average; moreover, this loss would be concentrated among people who lost their jobs, so it’s actually worse than that once you account for risk aversion. It’s clearly better to lose 1% of your income than to have a 1% chance of losing nearly all your income—and inflation is the former while unemployment is the latter.
Indeed, the only reason you lost purchasing power at all was that your cost-of-living increases didn’t occur often enough. If instead you had a labor contract that instituted cost-of-living raises every month, or even every paycheck, instead of every year, you would get all the benefits of a cheaper mortgage and virtually none of the costs of a weaker paycheck. Convince your employer to make this adjustment, and you will actually benefit from higher inflation.
So if poor and middle-class people are upset about eroding purchasing power, they should be mad at their employers for not implementing more frequent cost-of-living adjustments; the inflation itself really isn’t the problem.
Why are some molecules (e.g. DNA) billions of times larger than others (e.g. H2O), but all atoms are within a much narrower range of sizes (only a few hundred)?
Why are some animals (e.g. elephants) millions of times as heavy as others (e.g. mice), but their cells are basically the same size?
Why does capital income vary so much more (factors of thousands or millions) than wages (factors of tens or hundreds)?
These three questions turn out to have much the same answer: Scalability.
Atoms are not very scalable: Adding another proton to a nucleus causes interactions with all the other protons, which makes the whole atom unstable after a hundred protons or so. But molecules, particularly organic polymers such as DNA, are tremendously scalable: You can add another piece to one end without affecting anything else in the molecule, and keep on doing that more or less forever.
Cells are not very scalable: Even with the aid of active transport mechanisms and complex cellular machinery, a cell’s functionality is still very much limited by its surface area. But animals are tremendously scalable: The same exponential growth that got you from a zygote to a mouse only needs to continue a couple years longer and it’ll get you all the way to an elephant. (A baby elephant, anyway; an adult will require a dozen or so years—remarkably comparable to humans, in fact.)
Labor income is not very scalable: There are only so many hours in a day, and the more hours you work the less productive you’ll be in each additional hour. But capital income is perfectly scalable: We can add another digit to that brokerage account with nothing more than a few milliseconds of electronic pulses, and keep doing that basically forever (due to the way integer storage works, above 2^63 it would require special coding, but it can be done; and seeing as that’s over 9 quintillion, it’s not likely to be a problem any time soon—though I am vaguely tempted to write a short story about an interplanetary corporation that gets thrown into turmoil by an integer overflow error).
This isn’t just an effect of our accounting either. Capital is scalable in a way that labor is not. When your contribution to production is owning a factory, there’s really nothing to stop you from owning another factory, and then another, and another. But when your contribution is working at a factory, you can only work so hard for so many hours.
When a phenomenon is highly scalable, it can take on a wide range of outcomes—as we see in molecules, animals, and capital income. When it’s not, it will only take on a narrow range of outcomes—as we see in atoms, cells, and labor income.
Exponential growth is also part of the story here: Animals certainly grow exponentially, and so can capital when invested; even some polymers function that way (e.g. under polymerase chain reaction). But I think the scalability is actually more important: Growing rapidly isn’t so useful if you’re going to immediately be blocked by a scalability constraint. (This actually relates to the difference between r- and K-selection strategies, and offers further insight into the differences between mice and elephants.) Conversely, even if you grow slowly, given enough time, you’ll reach whatever constraint you’re up against.
Indeed, we can even say something about the probability distribution we are likely to get from random processes that are scalable or non-scalable.
A non-scalable random process will generally converge toward the familiar normal distribution, a “bell curve.”
The normal distribution has most of its weight near the middle; most of the population ends up near there. This is clearly the case for labor income: Most people are middle class, while some are poor and a few are rich.
But a scalable random process will typically converge toward quite a different distribution, a Pareto distribution.
A Pareto distribution has most of its weight near zero, but covers an extremely wide range. Indeed it is what we call fat tailed, meaning that really extreme events occur often enough to have a meaningful effect on the average. A Pareto distribution has most of the people at the bottom, but the ones at the top are really on top.
And indeed, that’s exactly how capital income works: Most people have little or no capital income (indeed only about half of Americans and only a third(!) of Brits own any stocks at all), while a handful of hectobillionaires make utterly ludicrous amounts of money literally in their sleep.
Indeed, it turns out that income in general is pretty close to distributed normally (or maybe lognormally) for most of the income range, and then becomes very much Pareto at the top—where nearly all the income is capital income.
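Here is a toy simulation of the two kinds of process (my own illustration, not drawn from any data): a "non-scalable" outcome built by adding many small independent shocks, versus a "scalable" one built by multiplying them. Strictly speaking the multiplicative process converges to a lognormal rather than a true Pareto, but the point survives: its upper tail is enormously fatter.

```python
# Additive vs. multiplicative random growth: narrow bell curve vs. fat tail.
import random

random.seed(42)

def additive(n_steps=100):
    """Non-scalable: accumulate independent shocks, like hourly wages."""
    return sum(random.uniform(0.5, 1.5) for _ in range(n_steps))

def multiplicative(n_steps=100):
    """Scalable: compound independent shocks, like invested capital."""
    x = 1.0
    for _ in range(n_steps):
        x *= random.uniform(0.5, 1.5)
    return x

adds = sorted(additive() for _ in range(10_000))
mults = sorted(multiplicative() for _ in range(10_000))

# Spread between the 99th percentile and the median:
print(adds[9899] / adds[4999])    # a little over 1: a narrow band
print(mults[9899] / mults[4999])  # hundreds or more: a fat tail
```

The additive outcomes all cluster within a few percent of their mean; the multiplicative ones span orders of magnitude, with most of the mass near the bottom and a few huge winners on top.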
This fundamental difference in scalability between capital and labor underlies much of what makes income inequality so difficult to fight. Capital is scalable, and begets more capital. Labor is non-scalable, and we only have so much to give.
It would require a radically different system of capital ownership to really eliminate this gap—and, well, that’s been tried, and so far, it hasn’t worked out so well. Our best option is probably to let people continue to own whatever amounts of capital, and then tax the proceeds in order to redistribute the resulting income. That certainly has its own downsides, but they seem to be a lot more manageable than either unfettered anarcho-capitalism or totalitarian communism.
I think it’s worth repeating now: Centrism isn’t saying “both sides are the same” when they aren’t. It’s recognizing that the norms of democracy themselves are worth defending—and more worth defending than almost any specific policy goal.
I wanted to say any specific policy goal, but I do think you can construct extreme counterexamples, like “establish a 100% tax on all income” (causing an immediate, total economic collapse), or “start a war with France” (our staunchest ally for the past 250 years who also has nuclear weapons). But barring anything that extreme, just about any policy is less important than defending democracy itself.
Or at least I think so. It seems that most Americans disagree. On both the left and the right—but especially on the right—a large majority of American voters are still willing to vote for a candidate who flouts basic democratic norms as long as they promise the right policies.
I guess on the right this fact should have been obvious: Trump. But things aren’t much better on the left, and should some actual radical authoritarian communist run for office (as opposed to, you know, literally every left-wing politician who is accused of being a radical authoritarian communist), this suggests that a lot of leftist voters might actually vote for them, which is nearly as terrifying.
My hope today is that I might tip the balance a little bit the other direction, remind people why democracy is worth defending, even at the cost of our preferred healthcare systems and marginal tax rates.
This is, above all, that democracy is self-correcting. If a bad policy gets put in place while democratic norms are still strong, then that policy can be removed and replaced with something better later on. Authoritarianism lacks this self-correction mechanism; get someone terrible in power and they stay in power, doing basically whatever they want, unless they are violently overthrown.
For the right wing, that’s basically it. You need to stop making excuses for authoritarianism. Basically none of your policies are so important that they would justify even moderate violations of democratic norms—much less the ones Trump has already committed, let alone what he might do if re-elected and unleashed. I don’t care how economically efficient lower taxes or privatized healthcare might be (and I know that there are in fact many economists who would agree with you on that, though I don’t), it isn’t worth undermining democracy. And while I do understand why you consider abortion to be such a vital issue, you really need to ask yourself whether banning abortion is worth living under a fascist government, because that’s the direction you’re headed. Let me note that banning abortion doesn’t even seem to reduce it very much, so there’s that. While the claim that abortion bans do nothing is false, even a total overturn of Roe v. Wade would most likely reduce US abortions by about 15%—much less than the 25% decrease between 2008 and 2014, which was also part of a long-term trend of decreasing abortion rates which are now roughly half what they were in 1980. We don’t need to ban abortion in order to reduce it—and indeed many of the things that work are things like free healthcare and easy access to contraception that right-wing governments typically resist. So even if you consider abortion to be a human rights violation, which I know many of you do, is that relatively small reduction in abortion rates worth risking the slide into fascism?
But for the left wing, things are actually a bit more complicated. Some right-wing policies—particularly social policies—are inherently anti-democratic and violations of human rights. I gave abortion the benefit of the doubt above; I can at least see why someone would think it’s a human rights violation (though I do not). Here I’m thinking particularly of immigration policies that lock up children at the border and laws that actively discriminate against LGBT people. I can understand why people would be unwilling to “hold their nose” and vote for someone who wants to enact that kind of policy—though if it’s really the only way to avoid authoritarianism, I think we might still have to do it. Democracy is too high a price to pay for any policy agenda: give it up now and there is nothing to stop that new authoritarian leftist government from turning into a terrible nightmare (that may not even remain leftist, by the way!). If we vote in someone who is pro-democratic but otherwise willing to commit these sorts of human rights violations, hopefully we can change things by civic engagement or vote them out of office later on (and over the long run, we do, in fact, have a track record of doing that). But if we vote in someone who will tear apart democracy even when they seem to have the high ground on human rights, then once democracy is undermined, the new authoritarian government can oppress us in all sorts of ways (even ways they specifically promised not to!), and we will have very little recourse.
Above all, even if they promise to give us everything we want, once you put an authoritarian in power, they can do whatever they want. They have no reason to keep their promises (whereas, contrary to popular belief, democratic politicians actually typically do), for we have no recourse if they don’t. Our only option to remove them from power is violent revolution—which usually fails, and even if it succeeds, would have an enormous cost in human lives.
Why is this a minority view? Why don’t more Americans agree with this?
I can think of a few possible reasons.
One is that they may not believe that these violations of democratic norms are really all that severe or worrisome. Overriding a judge with an executive order isn’t such a big deal, is it? Gerrymandering has been going on for decades, why should we worry about it now?
If that is indeed your view, let me remind you that in January 2021, armed insurrectionists stormed the Capitol building. That is not something we can just take lying down. This is a direct attack upon the foundations of democracy, and while it failed (miserably, and to be honest, hilariously), it wasn’t punished nearly severely enough—most of the people involved were not arrested on any charges, and several are now running for office. This lack of punishment means that it could very well happen again, and this time be better organized and more successful.
A second possibility is that people do not know that democracy is being undermined; they are somehow unaware that this is happening. If that’s the case, all I can tell you is that you really need to go to the Associated Press or New York Times website and read some news. You would have to be catastrophically ignorant of our political situation, and you frankly don’t deserve to be voting if that is the case.
But I suspect that for most people, a third reason applies: They see that democracy is being undermined, but they blame the other side. We aren’t the ones doing it—it’s them.
Such a view is tempting, at least from the left side of the aisle. No Democratic Party politician can hold a candle to Trump as far as authoritarianism (or narcissism). But we should still be cognizant of ways that our actions may also undermine democratic norms: Maybe we shouldn’t be considering packing the Supreme Court, unless we can figure out a way to ensure that it will genuinely lead to a more democratic and fair court long into the future. (For the latter sort of reform, suppose each federal district elected its own justice? Or we set up a mandatory retirement cycle such that every President would always appoint at least one justice?)
But for those of you on the right… How can you possibly think this? Where do you get your information from? How can you look at Donald Trump and think, “This man will defend our democracy from those left-wing radicals”? Right now you may be thinking, “oh, look, he suggested the New York Times; see his liberal bias”; but that is a newspaper of record in the United States. While their editors are a bit left of center, they are held to the highest standards of factual accuracy. But okay, if you prefer the Wall Street Journal (also a newspaper of record, but whose editors are a bit more right of center), be my guest; their factual claims won’t disagree, because truth is truth. I also suggested the Associated Press, widely regarded worldwide as one of the most credible news sources. (I considered adding Al Jazeera, which has a similar reputation, but figured you wouldn’t go for that.)
If you think that the attack on the Capitol was even remotely acceptable, you must think that their claims of a stolen election were valid, or at least plausible. But every credible major news source, the US Justice Department, and dozens of law courts agree that they were not. Any large election is going to have a few cases of fraud, but there were literally only hundreds of fraudulent votes—in an election in which over 150 million votes were cast, Biden won the popular vote by over 7 million votes, and no state was won by less than 10,000 votes. This means that 99.999% of votes were valid, and even if every single fraudulent vote had been for Biden and in Georgia (obviously not the case), it wouldn’t have been enough to tip even that state.
I’m not going to say that left-wing politicians never try to undermine democratic norms—there’s certainly plenty of gerrymandering, and, as I just said, court-packing is at least problematic. Nor would I say that the right wing is always worse about this. But it should be pretty obvious to anyone with access to basic factual information—read: everyone with Internet access—that right now, the problem is much worse on the right. You on the right need to face up to that fact, and start voting out Republicans who refuse to uphold democracy, even if it means you have to wait a bit longer for lower taxes or more (let me remind you, not very effective) abortion bans.
In the long run, I would of course like to see changes in the whole political system, so that we are no longer dominated by two parties and have a wider variety of realistic options. (The best way to do that would of course be range voting.) But for now, let’s start by ensuring that democracy continues to exist in America.
There is an extremely common and quite bizarre result in the standard theory of taxation, which is that the optimal marginal tax rate for the highest incomes should be zero. Ever since that result came out, economists have basically divided into two camps.
The more left-leaning have said, “This is obviously wrong; so why is it wrong? What are we missing?”; the more right-leaning have said, “The model says so, so it must be right! Cut taxes on the rich!”
I probably don’t need to tell you that I’m very much in the first camp. But more recently I’ve come to realize that even the answers left-leaning economists have been giving for why this result is wrong are also missing something vital.
In my view, there are really two reasons why taxes should be progressive, and they are sufficiently general reasons that they should almost always override other considerations.
The first is diminishing marginal utility of wealth. The real value of a dollar is much less to someone who already has $1 million than to someone who has only $100. Thus, if we want to raise the most revenue while causing the least pain, we typically want to tax people who have a lot of money rather than people who have very little.
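To make that concrete with the standard textbook assumption of logarithmic utility (an illustration, not a claim about anyone's actual preferences): if u(w) = ln(w), the utility gain from one extra dollar is roughly 1/w, so the same dollar matters about ten thousand times more at $100 than at $1 million.

```python
# Marginal value of one dollar under log utility, u(w) = ln(w).
import math

def utility_gain_of_dollar(wealth):
    return math.log(wealth + 1) - math.log(wealth)

poor = utility_gain_of_dollar(100)        # about 1/100
rich = utility_gain_of_dollar(1_000_000)  # about 1/1,000,000

print(round(poor / rich))  # just under 10,000x more valuable to the poorer person
```

So a tax that raises the same revenue causes vastly less pain when it falls on large incomes than on small ones, which is the core case for progressivity.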
But the right-wing economists have an answer to this one, based on these fancy models: Yes, taking a given amount from the rich would be better (a lump-sum tax), but you can’t do that; you can only tax their income at a certain rate. (So far, that seems right. Lump-sum taxes are silly and economists talk about them too much.) But the rich are rich because they are more productive! If you tax them more, they will work less, and that will harm society as a whole due to their lost productivity.
This is the fundamental intuition behind the “top rate should be zero” result: The rich are so fantastically productive that it isn’t worth it to tax them. We simply can’t risk them working less.
But are the rich actually so fantastically productive? Are they really that smart? Do they really work that hard?
If Tony Stark were real, okay, don’t tax him. He is a one-man Singularity: He invented the perfect power source on his own, “in a cave, with a box of scraps!”; he created a true AI basically by himself; he single-handedly discovered a new stable island element and used it to make his already perfect power source even better.
But despite what his fanboys may tell you, Elon Musk is not Tony Stark. Tesla and SpaceX have done a lot of very good things, but in order to do them, they really didn’t need Elon Musk for much. Mainly, they needed his money. Give me $270 billion and I could make companies that build electric cars and launch rockets into space too. (Indeed, I probably would—though I’d set up some charitable foundations as well, more like what Bill Gates did with his similarly mind-boggling wealth.)
Don’t get me wrong; Elon Musk is a very intelligent man, and he works, if anything, obsessively. (He makes his employees work excessively too—and that’s a problem.) But if he were to suddenly die, as long as a reasonably competent CEO replaced him, Tesla and SpaceX would go on working more or less as they already do. The spectacular productivity of these companies is not due to Musk alone, but to thousands of highly-skilled employees. These people would have been productive if Musk had never existed, and they will continue to be productive once Musk is gone.
And they aren’t particularly rich. They aren’t poor either, mind you—a typical engineer at Tesla or SpaceX is quite well-paid, and rightly so. (Median salary at SpaceX is over $115,000.) These people are brilliant, tremendously hard-working, and highly productive. But very few of them are in the top 1%, and basically none will ever be billionaires—let alone reach the truly staggering wealth of a hectobillionaire like Musk himself.
How, then, does one become a billionaire? Not by being brilliant, hard-working, or productive—at least that is not sufficient, and the existence of, say, Donald Trump suggests that it is not necessary either. No, the really quintessential feature every billionaire has is remarkably simple and consistent across the board: They own a monopoly.
You can pretty much go down the list, finding what monopoly each billionaire owned: Bill Gates owned software patents on (what is still) the most widely-used OS and office suite in the world. J.K. Rowling owns copyrights on the most successful novels in history. Elon Musk owns technology patents on various innovations in energy storage and spaceflight technology—very few of which he himself invented, I might add. Andrew Carnegie owned the steel industry. John D. Rockefeller owned the oil industry. And so on.
I honestly can’t find any real exceptions: Basically every billionaire either owned a monopoly or inherited from ancestors who did. The closest things to exceptions are billionaires who did something even worse, like defrauding thousands of people, enslaving an indigenous population, or running a nation with an iron fist. (And even then, Leopold II and Vladimir Putin both exerted a lot of monopoly power as part of their murderous tyranny.)
In other words, billionaire wealth is almost entirely rent. You don’t earn a billion dollars. You don’t get it by working. You get it by owning—and by using that ownership to exert monopoly power.
This means that taxing billionaire wealth wouldn’t incentivize them to work less; they already don’t work for their money. It would just incentivize them to fight less hard at extracting wealth from everyone else using their monopoly power—which hardly seems like a downside.
Since virtually all of the wealth at the top is simply rent, we have no reason not to tax it away. It isn’t genuine productivity at all; it’s just extracting wealth that other people produced.
Thus, my second, and ultimately most decisive reason for wanting strongly progressive taxes: rent-seeking. The very rich don’t actually deserve the vast majority of what they have, and we should take it back so that we can give it to people who really need and deserve it.
Now, there is a somewhat more charitable version of the view that high taxes even on the top 0.01% would hurt productivity, and it is worth addressing. That is based on the idea that entrepreneurship is valuable, and part of the incentive for becoming an entrepreneur is the chance at one day striking it fabulously rich, so taxing the fabulously rich might result in a world with fewer entrepreneurs.
This isn’t nearly as ridiculous as the idea that Elon Musk somehow works a million times as hard as the rest of us, but it’s still pretty easy to find flaws in it.
Suppose you were considering starting a business. Indeed, perhaps you already have considered it. What are your main deciding factors in whether or not you will?
Surely they do not include the difference between a 0.0001% chance of making $200 billion and a 0.0001% chance of making $50 billion. Indeed, that probably doesn’t factor in at all; you know you’ll almost certainly never get there, and even if you did, there’s basically no real difference in your way of life between $50 billion and $200 billion.
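To put a rough number on how little that difference plausibly matters, here is a toy calculation using logarithmic utility—a standard (though here assumed) model of diminishing marginal utility of wealth. The starting-wealth figure is likewise just an illustration:

```python
import math

# Toy expected-utility comparison for a would-be entrepreneur,
# assuming log utility of wealth. The base wealth and jackpot
# probability are illustrative assumptions, not data.
base_wealth = 100_000          # assumed current wealth, in dollars
p_jackpot = 0.0001 / 100       # a 0.0001% chance, as in the text

def expected_utility(jackpot):
    u_win = math.log(base_wealth + jackpot)
    u_lose = math.log(base_wealth)
    return p_jackpot * u_win + (1 - p_jackpot) * u_lose

# Difference between a shot at $200 billion and a shot at $50 billion:
diff = expected_utility(200e9) - expected_utility(50e9)
print(diff)  # ≈ 1.4e-06: utterly negligible
```

Under any utility function with diminishing marginal returns this steep, the gap between the two jackpots is vanishingly small—so it is hard to believe it drives anyone’s decision to found a company.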
No, more likely they include things like this: (1) How likely are you to turn a profit at all? Even a profit of $50,000 per year would probably be enough to be worth it, but how sure are you that you can manage that? (2) How much funding can you get to start it in the first place? Depending on what sort of business you’re hoping to found, it could be as little as thousands or as much as millions of dollars to get it set up, well before it starts taking in any revenue. And even a few thousand is a lot for most middle-class people to come up with in one chunk and be willing to risk losing.
This means that there is a very simple policy we could implement which would dramatically increase entrepreneurship while taxing only billionaires more, and it goes like this: Add an extra 1% marginal tax to capital gains for billionaires, and plow it into a fund that gives grants of $10,000 to $100,000 to promising new startups.
That 1% tax could raise tens of billions of dollars a year—yes, really; US billionaires gained some $2 trillion in capital gains last year, so a 1% tax would raise $20 billion—and thereby fund many, many startups. Say the average grant is $20,000 and the total revenue is $20 billion; that’s one million new startups funded every single year. Every single year! Currently, about 4 million new businesses are founded each year in the US (leading the world by a wide margin); this could raise that to 5 million.
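The back-of-the-envelope arithmetic is simple enough to sketch; the $2 trillion capital-gains figure and the $20,000 average grant are the assumptions from the text:

```python
# Back-of-the-envelope check of the billionaire-tax startup fund.
# Assumptions (from the text): ~$2 trillion in annual billionaire
# capital gains; an average grant of $20,000 per startup.
capital_gains = 2_000_000_000_000  # $2 trillion per year
tax_rate = 0.01                    # extra 1% marginal tax
avg_grant = 20_000                 # dollars per funded startup

revenue = capital_gains * tax_rate
startups_funded = revenue / avg_grant

print(f"Revenue: ${revenue / 1e9:.0f} billion per year")   # $20 billion
print(f"Startups funded: {startups_funded:,.0f} per year") # 1,000,000
```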
So don’t tell me this is about incentivizing entrepreneurship. We could do that far better than we currently do, with some very simple policy changes.
Meanwhile, the economics literature on optimal taxation seems to be completely missing the point. Most of it is still mired in the assumption that the rich are rich because they are productive, and thus terribly concerned about the “trade-off” between efficiency and equity involved in higher taxes. But when you realize that the vast, vast majority—easily 99.9%—of billionaire wealth is unearned rents, then it becomes obvious that this trade-off is an illusion. We can improve efficiency and equity simultaneously, by taking some of this ludicrous hoard of unearned wealth and putting it back into productive activities, or giving it to the people who need it most. The only people who will be harmed by this are billionaires themselves, and by diminishing marginal utility of wealth, they won’t be harmed very much.
Fortunately, the tide is turning, and more economists are starting to see the light. One of the best examples comes from Piketty, Saez, and Stantcheva in their paper on how CEO “pay for luck” (e.g. stock options) responds to top tax rates. There are a few other papers that touch on similar issues, such as Lockwood, Nathanson, and Weyl and Rothschild and Scheuer. But there’s clearly a lot of space left for new work to be done. The old results that told us not to raise taxes were wrong on a deep, fundamental level, and we need to replace them with something better.
Marx famously wrote that capitalism “alienates labor”. Much ink has been spilled over interpreting exactly what he meant by that, but I think the most useful and charitable reading goes something like the following:
When you make something for yourself, it feels fully yours. The effort you put into it feels valuable and meaningful. Whether you’re building a house to live in it or just cooking an omelet to eat it, your labor is directly reflected in your rewards, and you have a clear sense of purpose and value in what you are doing.
But when you make something for an employer, it feels like theirs, not yours. You have been instructed by your superiors to make a certain thing a certain way, for reasons you may or may not understand (and may or may not even agree with). Once you deliver the product—which may be as concrete as a carburetor or as abstract as an accounting report—you will likely never see it again; it will be used or not by someone else somewhere else whom you may not even ever get the chance to meet. Such labor feels tedious, effortful, exhausting—and also often empty, pointless, and meaningless.
On that reading, Marx isn’t wrong. There really is something to this. (I don’t know if this is really Marx’s intended meaning or not, and really I don’t much care—this is a valid thing and we should be addressing it, whether Marx meant to or not.)
There is a little parable about this, though I can’t quite remember where I first heard it:
Three men are moving heavy stones from one place to another. A traveler passes by and asks them, “What are you doing?”
The first man sighs and says, “We do whatever the boss tells us to do.”
The second man shrugs and says, “We pick up the rocks here, we move them over there.”
The third man smiles and says, “We’re building a cathedral.”
The three answers are quite different—yet all three men may be telling the truth as they see it.
The first man is fully alienated from his labor: he does whatever the boss says, following instructions that he considers arbitrary and mechanical. The second man is partially alienated: he knows the mechanics of what he is trying to accomplish, which may allow him to improve efficiency in some way (e.g. devise better ways to transport the rocks faster or with less effort), but he doesn’t understand the purpose behind it all, so ultimately his work still feels meaningless. But the third man is not alienated: he understands the purpose of his work, and he values that purpose. He sees that what he is doing is contributing to a greater whole that he considers worthwhile. It’s not hard to imagine that the third man will be the happiest, and the first will be the unhappiest.
There really is something about the capitalist wage-labor structure that can easily feed into this sort of alienation. You get a job because you need money to live, not because you necessarily value whatever the job does. You do as you are told so that you can keep your job and continue to get paid.
Some jobs are much more alienating than others. Most teachers and nurses see their work as a vocation, even a calling—their work has deep meaning for them and they value its purpose. At the other extreme there are corporate lawyers and derivatives traders, who must on some level understand that their work contributes almost nothing to the world (and may in fact actively cause harm), but who continue to do it because it pays them very well.
But there are many jobs in between which can be experienced both ways. Working in retail can be an agonizing grind where you must face a grueling gauntlet of ungrateful customers day in and day out—or it can be a way to participate in your local community and help your neighbors get the things they need. Working in manufacturing can be a mechanical process of inserting tab A into slot B and screwing it into place over, and over, and over again—or it can be a chance to create something, convert raw materials into something useful and valuable that other people can cherish.
And while individual perspective and framing surely matter here—those three men were all working in the same quarry, building the same cathedral—there is also an important objective component as well. Working as an artisan is not as alienating as working on an assembly line. Hosting a tent at a farmer’s market is not as alienating as working the register at Walmart. Tutoring an individual student is more purposeful than recording video lectures for a MOOC. Running a quirky local book store is more fulfilling than stocking shelves at Barnes & Noble.
Moreover, capitalism really does seem to push us more toward the alienating side of the spectrum. Assembly lines are far more efficient than artisans, so we make most of our products on assembly lines. Buying food at Walmart is cheaper and more convenient than at farmer’s markets, so more people shop there. Hiring one video lecturer for 10,000 students is a lot cheaper than paying 100 in-person lecturers, let alone 1,000 private tutors. And Barnes & Noble doesn’t drive out local book stores by some nefarious means: It just provides better service at lower prices. If you want a specific book for a good price right now, you’re much more likely to find it at Barnes & Noble. (And even more likely to find it on Amazon.)
Finding meaning in your work is very important for human happiness. Indeed, along with health and social relationships, it’s one of the biggest determinants of happiness. For most people in First World countries, it seems to be more important than income (though income certainly does matter).
Yet the increased efficiency and productivity upon which our modern standard of living depends seems to be based upon a system of production—in a word, capitalism—that systematically alienates us from meaning in our work.
This puts us in a dilemma: Do we keep things as they are, accepting that we will feel an increasing sense of alienation and ennui as our wealth continues to grow and we get ever-fancier toys to occupy our meaningless lives? Or do we turn back the clock, returning to a world where work once again has meaning, but at the cost of making everyone poorer—and some people desperately so?
Well, first of all, to some extent this is a false dichotomy. There are jobs that are highly meaningful but also highly productive, such as teaching and engineering. (Even recording a video lecture is a lot more fulfilling than plenty of jobs out there.) We could try to direct more people into jobs like these. There are jobs that are neither particularly fulfilling nor especially productive, like driving trucks, washing floors and waiting tables. We could redouble our efforts to automate such jobs out of existence. There are meaningless jobs that are lucrative only by rent-seeking, producing little or no genuine value, like the aforementioned corporate lawyers and derivatives traders. These, quite frankly, could simply be banned—or if there is some need for them in particular circumstances (I guess someone should defend corporations when they get sued; but they far more often go unjustly unpunished than unjustly punished!), strictly regulated and their numbers and pay rates curtailed.
Nevertheless, we still have decisions to make, as a society, about what we value most. Do we want a world of cheap, mostly adequate education, that feels alienating even to the people producing it? Then MOOCs are clearly the way to go; pennies on the dollar for education that could well be half as good! Or do we want a world of high-quality, personalized teaching, by highly-qualified academics, that will help students learn better and be more fulfilling for the teachers? More pointedly—are we willing to pay for that higher-quality education, knowing it will be more expensive?
Moreover, in the First World at least, our standard of living is… pretty high already? Like seriously, what do we really need that we don’t already have? We could always imagine more, of course—a bigger house, a nicer car, dining at fancier restaurants, and so on. But most of us have roofs over our heads, clothes on our backs, and food on our tables.
Economic growth has done amazing things for us—but maybe we’re kind of… done? Maybe we don’t need to keep growing like this, and should start redirecting our efforts away from greater efficiency and toward greater fulfillment. Maybe there are economic possibilities we haven’t been considering.
Note that I specifically mean First World countries here. In Third World countries it’s totally different—they need growth, lots of it, as fast as possible. Fulfillment at work ends up being a pretty low priority when your children are starving and dying of malaria.
But then, you may wonder: If we stop buying cheap plastic toys to fill the emptiness in our hearts, won’t that throw all those Chinese factory workers back into poverty?
In the system as it stands? Yes, that’s a real concern. A sudden drop in consumption spending in general, or even imports in particular, in First World countries could be economically devastating for millions of people in Third World countries.
But there’s nothing inherent about this arrangement. There are less-alienating ways of working that can still provide a decent standard of living, and there’s no fundamental reason why people around the world couldn’t all be doing them. If they aren’t, it’s in the short run because they don’t have the education or the physical machinery—and in the long run it’s usually because their government is corrupt and authoritarian. A functional democratic government can get you capital and education remarkably fast—it certainly did in South Korea, Taiwan, and Japan.
Automation is clearly a big part of the answer here. Many people in the First World seem to suspect that our way of life depends upon the exploited labor of impoverished people in Third World countries, but this is largely untrue. Most of that work could be done by robots and highly-skilled technicians and engineers; it just isn’t because that would cost more. Yes, that higher cost would mean some reduction in standard of living—but it wouldn’t be nearly as dramatic as many people seem to think. We would have slightly smaller houses and slightly older cars and slightly slower laptops, but we’d still have houses and cars and laptops.
So I don’t think we should all cast off our worldly possessions just yet. Whether or not it would make us better off, it would cause great harm to countries that depend on their exports to us. But in the long run, I do think we should be working to achieve a future for humanity that isn’t so obsessed with efficiency and growth, and instead tries to provide both a decent standard of living and a life of meaning and purpose.
Field Adjunct Xorlan nervously adjusted their antenna jewelry and twiddled their mandibles as they waited to be called before the Xenoanthropology Committee.
At last, it was Xorlan’s turn to speak. They stepped slowly, hesitantly up to the speaking perch, trying not to meet any of the dozens of quartets of eyes gazing upon them. “So… yes. The humans of Terra. I found something…” Their throat suddenly felt dry. “Something very unusual.”
The Committee Chair glared at Xorlan impatiently. “Go on, then.”
“Well, to begin, humans exhibit moderate sexual dimorphism, though much more in physical than mental traits.”
The Chair rolled all four of their eyes. “That is hardly unusual at all! I could name a dozen species on as many worlds—”
“Uh, if I may, I wasn’t finished. But the humans, you see, they endeavor greatly—at enormous individual and social cost—to emphasize their own dimorphism. They wear clothing that accentuates their moderate physical differences. They organize themselves into groups based primarily if not entirely around secondary sexual characteristics. Many of their languages even directly incorporate pronouns or even adjectives and nouns associated with these categorizations.”
Seemingly placated for now, the Chair was no longer glaring or rolling their eyes. “Continue.”
“They build complex systems of norms surrounding the appropriate dress and behavior of individuals based on these dimorphic characteristics. Moreover, they enforce these norms with an iron mandible—” Xorlan choked at their own cliched metaphor, regretting it immediately. “Well, uh, not literally, humans don’t have mandibles—but what I mean to say is, they enforce these norms extremely aggressively. Humans will berate, abuse, ostracize, in extreme cases even assault or kill one another over seemingly trivial violations of these norms.”
Now the Chair sounded genuinely interested. “We know religion is common among humans. Do the norms have some religious significance, perhaps?”
“Sometimes. But not always. Oftentimes the norms seem to be entirely separate from religious practices, yet are no less intensively enforced. Different groups of humans even have quite different norms, though I have noticed certain patterns, if you’ll turn to table 4 of my report—”
The Chair waved dismissively. “In due time, Field Adjunct. For now, tell us: Do the humans have a name for this strange practice?”
“Ah. Yes, in fact they do. They call it gender.”
We are so thoroughly accustomed to gender norms—in basically every human society—that we hardly even notice their existence, much less think to question them most of the time. But as I hope this little vignette about an alien anthropologist illustrates, they are actually quite profoundly weird.
Sexual dimorphism is not weird. A huge number of species exhibit varying degrees of dimorphism, and mammals in particular are especially likely to exhibit significant dimorphism, from the huge antlers of a stag to the silver back of a gorilla. Human dimorphism is in a fairly moderate range; our males and females are neither exceptionally similar nor exceptionally different by most mammal standards.
No, what’s weird is gender—the way that, in nearly every human society, culture has taken our sexual dimorphism and expanded it into an incredibly intricate, incredibly draconian system of norms that everyone is expected to follow on pain of ostracism if not outright violence.
Imagine a government that passed laws implementing the following:
Shortly after your birth, you will be assigned to a group without your input, and will remain in it your entire life. Based on your group assignment, you must obey the following rules: You must wear only clothing on this approved list, and never anything on this excluded list. You must speak with a voice pitch within a particular octave range. You must stand and walk a certain way. You must express, or not express, your emotions under certain strictly defined parameters—for group A, anger is almost never acceptable, while for group B, anger is the only acceptable emotion in most circumstances. You are expected to eat certain approved foods and exclude other foods. You must exhibit the assigned level of dominance for your group. All romantic and sexual relations are to be only with those assigned to the opposite group. If you violate any of these rules, you will be punished severely.
We would surely see any such government as the epitome of tyranny. These rules are petty, arbitrary, oppressive, and disproportionately and capriciously enforced. And yet, when we in every society on Earth have imposed these very rules upon ourselves and each other for millennia, it seems to us as though nothing is amiss.
Nor is our dimorphism especially pronounced mentally: There are some robust correlations between gender and certain psychological traits. But they are just that: Correlations. Men are more likely to be dominant, aggressive, risk-seeking and visually oriented, while women are more likely to be submissive, nurturing, neurotic, and verbally oriented. There is still an enormous amount of variation within each group, such that knowing only someone’s gender actually tells you very little about their psychology.
And whatever differences there may be, however small or large, and whatever exceptions may exist, whether rare or ubiquitous—the question remains: Why enforce this? Why punish people for deviating from whatever trends may exist? Why is deviating from gender norms not simply unusual, but treated as immoral?
I don’t have a clear answer. People do generally enforce all sorts of social norms, some good and some bad; but gender norms in particular seem especially harshly enforced. People do generally feel uncomfortable with having their mental categories challenged or violated, but sporks and schnoodles have never received anything like the kind of hatred that is routinely directed at trans people. There’s something about gender in particular that seems to cut very deep into the core of human psychology.
Indeed, so deep that I doubt we’ll ever be truly free of gender norms. But perhaps we can at least reduce their draconian demands on us by remaining aware of just how weird those demands are.
This topic is quite personal for me, as someone who has suffered from chronic migraines since adolescence. Some days, weeks, and months are better than others. This past month has been the worst I have felt since 2019, when we moved into an apartment that turned out to be full of mold. This time, there is no clear trigger—which also means no easy escape.
The total annual cost of all chronic illnesses is hard to estimate, but it’s definitely somewhere in the trillions of dollars per year. The World Economic Forum estimated that number at $47 trillion over the next 20 years, which I actually consider conservative. I think this is counting how much we actually spend and some notion of lost productivity, as well as the (fraught) concept of the value of a statistical life—but I don’t think it’s putting a sensible value on the actual suffering. This will effectively undervalue poor people who are suffering severely but can’t get treated—because they spend little and can’t put a large dollar value on their lives. In the US, where the data is the best, the total cost of chronic illness comes to nearly $4 trillion per year—20% of GDP. If other countries are as bad or worse (and I don’t see why they would be better), then we’re looking at something like $17 trillion in real cost every single year; so over the next 20 years that’s not $47 trillion—it’s over $340 trillion.
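The extrapolation above is easy to reproduce; the US figures are from the text, while the world-GDP figure is my own rough assumption:

```python
# Rough extrapolation of global chronic-illness costs.
# From the text: US cost ~$4 trillion/year, about 20% of US GDP.
# Assumption: the rest of the world is at least as badly affected
# relative to its output; world GDP taken as roughly $85 trillion.
us_share_of_gdp = 0.20
world_gdp = 85e12  # rough assumption, in dollars

world_cost_per_year = world_gdp * us_share_of_gdp  # ~$17 trillion
twenty_year_cost = world_cost_per_year * 20        # ~$340 trillion

print(f"World cost per year: ${world_cost_per_year / 1e12:.0f} trillion")
print(f"Over 20 years: ${twenty_year_cost / 1e12:.0f} trillion")
```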
Over half of US adults have at least one of the following, and over a quarter have two or more: arthritis, cancer, chronic obstructive pulmonary disease, coronary heart disease, current asthma, diabetes, hepatitis, hypertension, stroke, or kidney disease. (Actually the former very nearly implies the latter, unless chronic conditions somehow prevented one another. Two statistically independent events with 50% probability will jointly occur 25% of the time: Flip two coins.)
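The parenthetical claim is easy to check numerically—here is a quick Monte Carlo version of the two-coin analogy, assuming (as the text does) full statistical independence between the two conditions:

```python
import random

# Flip two fair coins many times: how often do both come up heads?
# Analytically, 0.5 * 0.5 = 0.25; the simulation should agree closely.
random.seed(0)
trials = 100_000
both = sum(
    1 for _ in range(trials)
    if random.random() < 0.5 and random.random() < 0.5
)

print(0.5 * 0.5)      # exact: 0.25
print(both / trials)  # simulated: close to 0.25
```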
Unsurprisingly, age is positively correlated with chronic illness. Income is negatively correlated, both because chronic illnesses reduce job opportunities and because poorer people have more trouble getting good treatment. I am the exception that proves the rule, the upper-middle-class professional with both a PhD and a severe chronic illness.
It’s always a good idea to be careful of the distinction between incidence and prevalence, but with chronic illness this is particularly important, because (almost by definition) chronic illnesses last longer and so can have very high prevalence even with low incidence. Indeed, the odds of someone getting their first migraine (incidence) are low precisely because the odds of being someone who gets migraines (prevalence) are so high.
If you order causes by the number of disability-adjusted life years (DALYs) they cost, chronic conditions rank quite high: while cardiovascular disease and cancer rate by far the highest, diabetes and kidney disease, mental disorders, neurological disorders, and musculoskeletal disorders all rate higher than malaria, HIV, or any other infection except respiratory infections (read: tuberculosis, influenza, and, once these charts are updated for the next few years, COVID). Note also that at the very bottom is “conflict and terrorism”—that’s all organized violence in the world—and natural disasters. Mental disorders alone cost the world 20 times as many DALYs as all conflict and terrorism combined.
I recently finished reading Humankind by Rutger Bregman. His central thesis is a surprisingly controversial one, yet one I largely agree with: People are basically good. Most people, in most circumstances, try to do the right thing.
Neoclassical economists in particular seem utterly scandalized by any such suggestion. No, they insist, people are selfish! They’ll take any opportunity to exploit each other! On this, Bregman is right and the neoclassical economists are wrong.
One of the best parts of the book is Bregman’s tale of several shipwrecked Tongan boys who were stranded on the remote island of ‘Ata, sometimes called “the real Lord of the Flies” but with an outcome quite radically different from that of the novel. There were of course conflicts during their long time stranded, but the boys resolved most of these conflicts peacefully, and by the time help came over a year later they were still healthy and harmonious. Bregman himself was involved in the investigative reporting about these events, and his account of how he came to meet some of the (now elderly) survivors and tell their story is both enlightening and heartwarming.
Bregman spends a lot of time (perhaps too much time) analyzing classic experiments meant to elucidate human nature. He does a good job of analyzing the Milgram experiment—it’s valid, but it says more about our willingness to serve a cause than our blind obedience to authority. He utterly demolishes the Zimbardo experiment; I knew it was bad, but I hadn’t even realized how utterly worthless that so-called “experiment” actually is. Zimbardo basically paid people to act like abusive prison guards—specifically instructing them how to act!—and then claimed that he had discovered something deep in human nature. Bregman calls it a “hoax”, which might be a bit too strong—but it’s about as accurate as calling it an “experiment”. I think it’s more like a form of performance art.
Bregman’s criticism of Steven Pinker I find much less convincing. He cites a few other studies that purported to show the following: (1) the archaeological record is unreliable in assessing death rates in prehistoric societies (fair enough, but what else do we have?), (2) the high death rates in prehistoric cultures could be from predators such as lions rather than other humans (maybe, but that still means civilization is providing vital security!), (3) the Long Peace could be a statistical artifact because data on wars is so sparse (I find this unlikely, but I admit the Russian invasion of Ukraine does support such a notion), or (4) the Long Peace is the result of nuclear weapons, globalized trade, and/or international institutions rather than a change in overall attitudes toward violence (perfectly reasonable, but I’m not even sure Pinker would disagree).
I appreciate that Bregman does not lend credence to the people who want to use absolute death counts instead of relative death rates, who apparently would rather live in a prehistoric village of 100 people that gets wiped out by a plague (or for that matter on a Mars colony of 100 people who all die of suffocation when the life support fails) rather than remain in a modern city of a million people that has a few dozen murders each year. Zero homicides is better than 40, right? Personally, I care most about the question “How likely am I to die at any given time?”; and for that, relative death rates are the only valid measure. I don’t even see why we should particularly care about homicide versus other causes of death—I don’t see being shot as particularly worse than dying of Alzheimer’s (indeed, quite the contrary, other than the fact that Alzheimer’s is largely limited to old age and shooting isn’t). But all right, if violence is the question, then go ahead and use homicides—but it certainly should be rates and not absolute numbers. A larger human population is not an inherently bad thing.
I even appreciate that Bregman offers a theory (not an especially convincing one, but not an utterly ridiculous one either) of how agriculture and civilization could emerge even if hunter-gatherer life was actually better. It basically involves agriculture being discovered by accident, and then people gradually transitioning to a sedentary mode of life and not realizing their mistake until generations had passed and all the old skills were lost. There are various holes one can poke in this theory (Were the skills really lost? Couldn’t they be recovered from others? Indeed, haven’t people done that, in living memory, by “going native”?), but it’s at least better than simply saying “civilization was a mistake”.
Yet Bregman’s own account, particularly his discussion of how early civilizations all seem to have been slave states, seems to better support what I think is becoming the historical consensus, which is that civilization emerged because a handful of psychopaths gathered armies to conquer and enslave everyone around them. This is bad news for anyone who holds to a naively Whiggish view of history as a continuous march of progress (which I have oft heard accused but rarely heard endorsed), but it’s equally bad news for anyone who believes that all human beings are basically good and we should—or even could—return to a state of blissful anarchism.
Indeed, this is where Bregman’s view and mine part ways. We both agree that most people are mostly good most of the time. He even acknowledges that about 2% of people are psychopaths, which is a very plausible figure. (The figures I find most credible are about 1% of women and about 4% of men, which averages out to 2.5%. The prevalence you get also depends on how severely lacking in empathy someone needs to be in order to qualify. I’ve seen figures as low as 1% and as high as 4%.) What he fails to see is how that 2% of people can have large effects on society, wildly disproportionate to their number.
Consider the few dozen murders that are committed in any given city of a million people each year. Who is committing those murders? By and large, psychopaths. That’s more true of premeditated murder than of crimes of passion, but even the latter are far more likely to be committed by psychopaths than the general population.
Or consider those early civilizations that were nearly all authoritarian slave-states. What kind of person tends to govern an authoritarian slave-state? A psychopath. Sure, probably not every Roman emperor was a psychopath—but I’m quite certain that Commodus and Caligula were, and I suspect that Augustus and several others were as well. And the ones who don’t seem like psychopaths (like Marcus Aurelius) still seem like narcissists. Indeed, I’m not sure it’s possible to be an authoritarian emperor and not be at least a narcissist; should an ordinary person somehow find themselves in the role, I think they’d immediately set out to delegate authority and improve civil liberties.
This suggests that civilization was not so much a mistake as it was a crime—civilization was inflicted upon us by psychopaths and their greed for wealth and power. Like I said, not great for a “march of progress” view of history. Yet a lot has changed in the last few thousand years, and life in the 21st century at least seems overall pretty good—and almost certainly better than life on the African savannah 50,000 years ago.
In essence, what I think happened is that we invented a technology to turn the tables on civilization, using the same tools psychopaths had used to oppress us as a means to contain them. This technology was called democracy. The institutions of democracy allowed us to convert government from a means by which psychopaths oppress and extract wealth from the populace to a means by which the populace could prevent psychopaths from committing wanton acts of violence.
Is it perfect? Certainly not. Indeed, there are many governments today that much better fit the “psychopath oppressing people” model (e.g. Russia, China, North Korea), and even in First World democracies there are substantial abuses of power and violations of human rights. In fact, psychopaths are overrepresented among the police and also among politicians. Perhaps there are superior modes of governance yet to be found that would further reduce the power psychopaths have and thereby make life better for everyone else.
Yet it remains clear that democracy is better than anarchy. This is not so much because anarchy results in everyone behaving badly and causes immediate chaos (as many people seem to erroneously believe), but because it results in enough people behaving badly to be a problem—and because some of those people are psychopaths who will take advantage of the power vacuum to seize control for themselves.
Yes, most people are basically good. But enough people aren’t that it’s a problem.
Bregman seems to think that simply outnumbering the psychopaths is enough to keep them under control, but history clearly shows that it isn’t. We need institutions of governance to protect us. And for the most part, First World democracies do a fairly good job of that.
Indeed, I think Bregman’s perspective may be a bit clouded by being Dutch, as the Netherlands has one of the highest rates of trust in the world. Nearly 90% of people in the Netherlands trust their neighbors. Even the US has high levels of trust by world standards, at about 84%; a more typical country is India or Mexico at 64%, and the least-trusting countries are places like Gabon with 31% or Benin with a dismal 23%. Trust in government varies widely, from an astonishing 94% in Norway (then again, have you seen Norway? Their government is doing a bang-up job!) to 79% in the Netherlands, to closer to 50% in most countries (in this the US is more typical), all the way down to 23% in Nigeria (which seems equally justified). Some mysteries remain, like why more people trust the government in Russia than in Namibia. (Maybe people in Namibia are just more willing to speak their minds? They’re certainly much freer to do so.)
In other words, Dutch people are basically good. Not that the Netherlands has no psychopaths; surely they have a few just like everyone else. But they have strong, effective democratic institutions that provide both liberty and security for the vast majority of the population. And with the psychopaths under control, everyone else can feel free to trust each other and cooperate, even in the absence of obvious government support. It’s precisely because the government of the Netherlands is so unusually effective that someone living there can come to believe that government is unnecessary.
In short, Bregman is right that we should have donation boxes—and a lot of people seem to miss that (especially economists!). But he seems to forget that we need to keep them locked.
Russia has invaded Ukraine. No doubt you have heard it by now, as it’s all over the news in dozens of outlets, from CNN to NBC to The Guardian to Al-Jazeera. And as well it should be, as this is the first time in history that a nuclear power has annexed another country. Yes, nuclear powers have fought wars before—the US just got out of one in Afghanistan as you may recall. They have even started wars and led invasions—the US did that in Iraq. And certainly, countries have been annexing and conquering other countries for millennia. But never before—never before, in human history—has a nuclear-armed state invaded another country simply to claim it as part of itself. (Trump said he thought the US should have done something like that, and the world was rightly horrified.)
Russia’s invasion of Ukraine has just disproved the most optimistic models of international relations, which basically said that major power wars for territory were over at the end of WW2. Some thought it was nuclear weapons, others the United Nations, still others a general improvement in trade integration and living standards around the world. But they’ve all turned out to be wrong; maybe such wars are rarer, but they can clearly still happen, because one just did.
I would say that only two major theories of the Long Peace are still left standing in light of this invasion, and those are nuclear deterrence and the democratic peace. Ukraine gave up its nuclear arsenal and later got attacked—that’s consistent with nuclear deterrence. Russia under Putin is nearly as authoritarian as the Soviet Union, and Ukraine is a “hybrid regime” (let’s call it a solid D), so there’s no reason the democratic peace would stop this invasion. But any model which posits that trade or the UN prevent war is pretty much off the table now, as Ukraine had very extensive trade with both Russia and the EU and the UN has been utterly toothless so far. (Maybe we could say the UN prevents wars except those led by permanent Security Council members.)
Well, then, what if the nuclear deterrence theory is right? What would have happened if Ukraine had kept its nuclear weapons? Would that have made this situation better, or worse? It could have made it better, if it acted as a deterrent against Russian aggression. But it could also have made it much, much worse, if it resulted in a nuclear exchange between Russia and Ukraine.
This is the problem with nukes. They are not a guarantee of safety. They are a guarantee of fat tails. To explain what I mean by that, let’s take a brief detour into statistics.
A fat-tailed distribution is one for which very extreme events have non-negligible probability. For some distributions, like a uniform distribution, events are clearly contained within a certain interval and nothing outside is even possible. For others, like a normal distribution or a lognormal distribution, extreme events are theoretically possible, but so vanishingly improbable they aren’t worth worrying about. But for fat-tailed distributions like a Cauchy distribution or a Pareto distribution, extreme events are not so improbable. They may be unlikely, but they are not so unlikely they can simply be ignored. Indeed, they can actually dominate the average—most of what happens, happens in a handful of extreme events.
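To make the “dominate the average” point concrete, here is a small simulation of my own (the particular parameters are arbitrary illustrative choices, not drawn from any war data): it compares a fat-tailed Pareto distribution against a thin-tailed normal and asks what fraction of the total comes from the biggest 1% of draws.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Pareto with tail index 1.1: fat-tailed, so the mean exists but is
# dominated by a handful of enormous draws.
pareto = rng.pareto(1.1, n) + 1.0            # classic Pareto, support >= 1

# A thin-tailed comparison with a similar typical scale.
normal = np.abs(rng.normal(loc=1.0, scale=1.0, size=n))

for name, x in [("Pareto(1.1)", pareto), ("|Normal(1,1)|", normal)]:
    top = np.sort(x)[-n // 100:]             # the largest 1% of draws
    share = top.sum() / x.sum()              # fraction of the total they carry
    print(f"{name}: top 1% of draws carry {share:.0%} of the total")
```

Under the fat-tailed Pareto, the top 1% of draws typically carries well over half of the total; under the normal, it carries only a few percent. That is exactly the sense in which a handful of extreme events can dominate everything else.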
Deaths in war seem to be fat-tailed, even in conventional warfare. They seem to follow a Pareto distribution. There are lots of tiny skirmishes, relatively frequent regional conflicts, occasional major wars, and a handful of super-deadly global wars. This kind of pattern tends to emerge when a phenomenon reinforces itself through positive feedback—hence why we also see it in distributions of income and wildfire intensity.
Fat-tailed distributions typically (though not always—it’s easy to construct counterexamples, like the Cauchy distribution with low values truncated off) have another property as well, which is that minor events are common. More common, in fact, than they would be under a normal distribution. What seems to happen is that the probability mass moves away from the moderate outcomes and shifts to both the extreme outcomes and the minor ones.
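This mass-shifting claim can be checked directly with closed-form tail functions. The sketch below is my own illustration (the Pareto parameters are arbitrary, chosen so that the variance exists): it matches a Pareto and a normal distribution on mean and standard deviation, then compares how much probability sits in the minor, moderate, and extreme regions.

```python
import math

alpha, xm = 2.5, 1.0                           # Pareto tail index and minimum
mean = alpha * xm / (alpha - 1)                # Pareto mean
var = xm**2 * alpha / ((alpha - 1) ** 2 * (alpha - 2))
std = math.sqrt(var)                           # Pareto standard deviation

def pareto_sf(x):
    """P(X > x) for a Pareto(alpha, xm)."""
    return 1.0 if x < xm else (xm / x) ** alpha

def normal_sf(x):
    """P(X > x) for a Normal(mean, std) matched to the Pareto."""
    return 0.5 * math.erfc((x - mean) / (std * math.sqrt(2)))

hi = mean + 3 * std
for name, sf in [("Pareto", pareto_sf), ("Normal", normal_sf)]:
    minor = 1 - sf(mean)                       # outcomes below the mean
    extreme = sf(hi)                           # more than 3 std above the mean
    moderate = 1 - minor - extreme             # everything in between
    print(f"{name}: minor={minor:.3f} moderate={moderate:.3f} extreme={extreme:.4f}")
```

Even with both distributions agreeing on mean and standard deviation, the Pareto puts noticeably more probability below the mean and several times more probability beyond three standard deviations, at the expense of the moderate middle—the shift described above.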
Nuclear weapons fit this pattern perfectly. They may in fact reduce the probability of moderate, regional conflicts, in favor of increasing the probability of tiny skirmishes or peaceful negotiations. But they also increase the probability of utterly catastrophic outcomes—a full-scale nuclear war could kill billions of people. It probably wouldn’t wipe out all of humanity, and more recent analyses suggest that a catastrophic “nuclear winter” is unlikely. But even 2 billion people dead would be literally the worst thing that has ever happened, and nukes could make it happen in hours when such a death toll by conventional weapons would take years.
If we could somehow guarantee that such an outcome would never occur, then the lower rate of moderate conflicts nuclear weapons provide would justify their existence. But we can’t. It hasn’t happened yet, but it doesn’t have to happen often to be terrible. Really, just once would be bad enough.
Let us hope, then, that the democratic peace turns out to be the theory that’s right. Because a more democratic world would clearly be better—while a more nuclearized world could be better, but could also be much, much worse.