Gender norms are weird.

Apr 3 JDN 2459673

Field Adjunct Xorlan nervously adjusted their antenna jewelry and twiddled their mandibles as they waited to be called before the Xenoanthropology Committee.

At last, it was Xorlan’s turn to speak. They stepped slowly, hesitantly up to the speaking perch, trying not to meet any of the dozens of quartets of eyes gazing upon them. “So… yes. The humans of Terra. I found something…” Their throat suddenly felt dry. “Something very unusual.”

The Committee Chair glared at Xorlan impatiently. “Go on, then.”

“Well, to begin, humans exhibit moderate sexual dimorphism, though much more in physical than mental traits.”

The Chair rolled all four of their eyes. “That is hardly unusual at all! I could name a dozen species on as many worlds—”

“Uh, if I may, I wasn’t finished. But the humans, you see, they endeavor greatly—at enormous individual and social cost—to emphasize their own dimorphism. They wear clothing that accentuates their moderate physical differences. They organize themselves into groups based primarily if not entirely on secondary sexual characteristics. Many of their languages even incorporate pronouns, and sometimes adjectives and nouns, associated with these categorizations.”

Seemingly placated for now, the Chair was no longer glaring or rolling their eyes. “Continue.”

“They build complex systems of norms surrounding the appropriate dress and behavior of individuals based on these dimorphic characteristics. Moreover, they enforce these norms with an iron mandible—” Xorlan choked at their own cliched metaphor, regretting it immediately. “Well, uh, not literally, humans don’t have mandibles—but what I mean to say is, they enforce these norms extremely aggressively. Humans will berate, abuse, ostracize, in extreme cases even assault or kill one another over seemingly trivial violations of these norms.”

Now the Chair sounded genuinely interested. “We know religion is common among humans. Do the norms have some religious significance, perhaps?”

“Sometimes. But not always. Oftentimes the norms seem to be entirely separate from religious practices, yet are no less intensively enforced. Different groups of humans even have quite different norms, though I have noticed certain patterns, if you’ll turn to table 4 of my report—”

The Chair waved dismissively. “In due time, Field Adjunct. For now, tell us: Do the humans have a name for this strange practice?”

“Ah. Yes, in fact they do. They call it gender.”

We are so thoroughly accustomed to gender norms—in basically every human society—that we hardly even notice their existence, much less think to question them most of the time. But as I hope this little vignette about an alien anthropologist illustrates, gender norms are actually quite profoundly weird.

Sexual dimorphism is not weird. A huge number of species exhibit varying degrees of dimorphism, and mammals in particular are especially likely to exhibit significant dimorphism, from the huge antlers of a stag to the silver back of a gorilla. Human dimorphism is in a fairly moderate range; our males and females are neither exceptionally similar nor exceptionally different by most mammal standards.

No, what’s weird is gender—the way that, in nearly every human society, culture has taken our sexual dimorphism and expanded it into an incredibly intricate, incredibly draconian system of norms that everyone is expected to follow on pain of ostracism if not outright violence.

Imagine a government that passed laws implementing the following:

Shortly after your birth, you will be assigned to a group without your input, and will remain in it your entire life. Based on your group assignment, you must obey the following rules: You must wear only clothing on this approved list, and never anything on this excluded list. You must speak with a voice pitch within a particular octave range. You must stand and walk a certain way. You must express, or not express, your emotions under certain strictly defined parameters—for group A, anger is almost never acceptable, while for group B, anger is the only acceptable emotion in most circumstances. You are expected to eat certain approved foods and exclude other foods. You must exhibit the assigned level of dominance for your group. All romantic and sexual relations are to be only with those assigned to the opposite group. If you violate any of these rules, you will be punished severely.

We would surely see any such government as the epitome of tyranny. These rules are petty, arbitrary, oppressive, and disproportionately and capriciously enforced. And yet, for millennia, we in every society on Earth have imposed these rules upon ourselves and each other, and it seems to us as though nothing is amiss.

Note that I’m not saying that men and women are the same in every way. That’s clearly not true physically; the differences in upper body strength and grip strength are frankly staggering. The average man is nearly twice as strong as the average woman, and an average 75-year-old man grips better with his left hand than an average 25-year-old woman grips with her right.

It isn’t really true mentally either: There are some robust correlations between gender and certain psychological traits. But they are just that: Correlations. Men are more likely to be dominant, aggressive, risk-seeking and visually oriented, while women are more likely to be submissive, nurturing, neurotic, and verbally oriented. There is still an enormous amount of variation within each group, such that knowing only someone’s gender actually tells you very little about their psychology.

And whatever differences there may be, however small or large, and whatever exceptions may exist, whether rare or ubiquitous—the question remains: Why enforce this? Why punish people for deviating from whatever trends may exist? Why is deviating from gender norms not simply unusual, but treated as immoral?

I don’t have a clear answer. People do generally enforce all sorts of social norms, some good and some bad; but gender norms in particular seem especially harshly enforced. People do generally feel uncomfortable with having their mental categories challenged or violated, but sporks and schnoodles have never received anything like the kind of hatred that is routinely directed at trans people. There’s something about gender in particular that seems to cut very deep into the core of human psychology.

Indeed, so deep that I doubt we’ll ever be truly free of gender norms. But perhaps we can at least reduce their draconian demands on us by remaining aware of just how weird those demands are.

The economic impact of chronic illness

Mar 27 JDN 2459666

This topic is quite personal for me, as someone who has suffered from chronic migraines since adolescence. Some days, weeks, and months are better than others. This past month has been the worst I have felt since 2019, when we moved into an apartment that turned out to be full of mold. This time, there is no clear trigger—which also means no easy escape.

The economic impact of chronic illness is enormous. 90% of US healthcare spending is on people with chronic illnesses, including mental illnesses—and the US has the most expensive healthcare system in the world by almost any measure. Over 55% of adult Medicaid beneficiaries have two or more chronic illnesses.

The total annual cost of all chronic illnesses is hard to estimate, but it’s definitely somewhere in the trillions of dollars per year. The World Economic Forum estimated that number at $47 trillion over the next 20 years, which I actually consider conservative. I think this is counting how much we actually spend and some notion of lost productivity, as well as the (fraught) concept of the value of a statistical life—but I don’t think it’s putting a sensible value on the actual suffering. This will effectively undervalue poor people who are suffering severely but can’t get treated—because they spend little and can’t put a large dollar value on their lives. In the US, where the data is the best, the total cost of chronic illness comes to nearly $4 trillion per year—20% of GDP. If other countries are as bad or worse (and I don’t see why they would be better), then we’re looking at something like $17 trillion in real cost every single year; so over the next 20 years that’s not $47 trillion—it’s over $340 trillion.
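
Here is that back-of-envelope arithmetic as a quick Python sketch; the world GDP figure of roughly $85 trillion is my own assumption (roughly its 2021 value), not a number from the sources above:

```python
# Back-of-envelope: scale the US chronic-illness burden to the world.
# Assumption: world GDP of ~$85 trillion (roughly its 2021 value).

us_cost = 4e12         # US chronic-illness cost, $/year (cited above)
us_gdp = 20e12         # US GDP, $/year
world_gdp = 85e12      # assumed world GDP, $/year

burden_share = us_cost / us_gdp           # 20% of GDP
world_cost = burden_share * world_gdp     # ~$17 trillion/year
over_20_years = 20 * world_cost           # ~$340 trillion

print(f"US burden share: {burden_share:.0%}")
print(f"Implied world cost: ${world_cost / 1e12:.0f} trillion/year")
print(f"Over 20 years: ${over_20_years / 1e12:.0f} trillion")
```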

Over half of US adults have at least one of the following, and over a quarter have two or more: arthritis, cancer, chronic obstructive pulmonary disease, coronary heart disease, current asthma, diabetes, hepatitis, hypertension, stroke, or kidney disease. (Actually the former very nearly implies the latter, unless chronic conditions somehow prevented one another. Two statistically independent events with 50% probability will jointly occur 25% of the time: Flip two coins.)
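
If you don’t trust the coin-flip logic, a tiny simulation (purely illustrative, assuming the two conditions strike independently) makes the same point:

```python
import random

random.seed(0)
n = 1_000_000
# Two independent "conditions", each with 50% lifetime prevalence:
both = sum(random.random() < 0.5 and random.random() < 0.5 for _ in range(n))
print(f"Fraction with both: {both / n:.3f}")  # ~0.250
```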

Unsurprisingly, age is positively correlated with chronic illness. Income is negatively correlated, both because chronic illnesses reduce job opportunities and because poorer people have more trouble getting good treatment. I am the exception that proves the rule, the upper-middle-class professional with both a PhD and a severe chronic illness.

There seems to be a common perception that chronic illness is largely a “First World problem”, but in fact chronic illnesses are more common—and much more poorly treated—in countries with low and moderate levels of development than they are in the most highly-developed countries. Over 75% of all deaths by non-communicable disease are in low- and middle-income countries. The proportion of deaths that is caused by non-communicable diseases is higher in high-income countries—but that’s because other diseases have been basically eradicated from high-income countries. People in rich countries actually suffer less from chronic illness than people in poor countries (on average).

It’s always a good idea to be careful of the distinction between incidence and prevalence, but with chronic illness this is particularly important, because (almost by definition) chronic illnesses last longer and so can have very high prevalence even with low incidence. Indeed, the odds of someone getting their first migraine (incidence) are low precisely because the odds of being someone who gets migraines (prevalence) are so high.
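
The standard steady-state approximation is that prevalence ≈ incidence × average duration, which is why a condition lasting decades can be everywhere even though new cases are rare. A sketch with purely illustrative numbers (not real epidemiological estimates):

```python
# Steady-state approximation: prevalence ~= incidence * mean duration.
# Illustrative numbers only, not real epidemiological estimates.

incidence = 0.004   # new cases per person per year (0.4%/year)
duration = 40       # years a typical chronic case persists

prevalence = incidence * duration
print(f"Prevalence: {prevalence:.0%}")  # 16% of the population
```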

Quite high in fact: About 10% of men and 20% of women get migraines at least occasionally—though only about 8% of these (so roughly 1% of men and 2% of women) get chronic migraines. Indeed, because it is both common and can be quite severe, migraine is the second-most disabling condition worldwide as measured by years lived with disability (YLD), after low back pain. Neurologists are particularly likely to get migraines; the paper I linked speculates that they are better at realizing they have migraines, but I think we also need to consider the possibility of self-selection bias where people with migraines may be more likely to become neurologists. (I considered it, and it seems at least as good a reason as becoming a dentist because your name is Denise.)

If you order causes by the number of disability-adjusted life years (DALYs) they cost, chronic conditions rank quite high: while cardiovascular disease and cancer rank by far the highest, diabetes and kidney disease, mental disorders, neurological disorders, and musculoskeletal disorders all rank higher than malaria, HIV, or any other infection except respiratory infections (read: tuberculosis, influenza, and, once these charts are updated for the next few years, COVID). Note also that at the very bottom is “conflict and terrorism”—that’s all organized violence in the world—and natural disasters. Mental disorders alone cost the world 20 times as many DALYs as all conflict and terrorism combined.

Are people basically good?

Mar 20 JDN 2459659

I recently finished reading Humankind by Rutger Bregman. His central thesis is a surprisingly controversial one, yet one I largely agree with: People are basically good. Most people, in most circumstances, try to do the right thing.

Neoclassical economists in particular seem utterly scandalized by any such suggestion. No, they insist, people are selfish! They’ll take any opportunity to exploit each other! On this, Bregman is right and the neoclassical economists are wrong.

One of the best parts of the book is Bregman’s tale of several shipwrecked Tongan boys who were stranded on the remote island of ‘Ata, sometimes called “the real Lord of the Flies”—but with an outcome quite radically different from that of the novel. There were of course conflicts during their long time stranded, but the boys resolved most of these conflicts peacefully, and by the time help came over a year later they were still healthy and harmonious. Bregman himself was involved in the investigative reporting about these events, and his tale of how he came to meet some of the (now elderly) survivors and tell their tale is both enlightening and heartwarming.

Bregman spends a lot of time (perhaps too much time) analyzing classic experiments meant to elucidate human nature. He does a good job of analyzing the Milgram experiment—it’s valid, but it says more about our willingness to serve a cause than about our blind obedience to authority. He utterly demolishes the Zimbardo experiment; I knew it was bad, but I hadn’t even realized how utterly worthless that so-called “experiment” actually is. Zimbardo basically paid people to act like abusive prison guards—specifically instructing them how to act!—and then claimed that he had discovered something deep in human nature. Bregman calls it a “hoax”, which might be a bit too strong—but it’s about as accurate as calling it an “experiment”. I think it’s more like a form of performance art.

Bregman’s criticism of Steven Pinker I find much less convincing. He cites a few other studies that purported to show the following: (1) the archaeological record is unreliable in assessing death rates in prehistoric societies (fair enough, but what else do we have?), (2) the high death rates in prehistoric cultures could be from predators such as lions rather than other humans (maybe, but that still means civilization is providing vital security!), (3) the Long Peace could be a statistical artifact because data on wars is so sparse (I find this unlikely, but I admit the Russian invasion of Ukraine does support such a notion), or (4) the Long Peace is the result of nuclear weapons, globalized trade, and/or international institutions rather than a change in overall attitudes toward violence (perfectly reasonable, but I’m not even sure Pinker would disagree).

I appreciate that Bregman does not lend credence to the people who want to use absolute death counts instead of relative death rates, who apparently would rather live in a prehistoric village of 100 people that gets wiped out by a plague (or for that matter on a Mars colony of 100 people who all die of suffocation when the life support fails) than remain in a modern city of a million people that has a few dozen murders each year. Zero homicides is better than 40, right? Personally, I care most about the question “How likely am I to die at any given time?”; and for that, relative death rates are the only valid measure. I don’t even see why we should particularly care about homicide versus other causes of death—I don’t see being shot as particularly worse than dying of Alzheimer’s (indeed, quite the contrary, other than the fact that Alzheimer’s is largely limited to old age and shooting isn’t). But all right, if violence is the question, then go ahead and use homicides—but it certainly should be rates and not absolute numbers. A larger human population is not an inherently bad thing.

I even appreciate that Bregman offers a theory (not an especially convincing one, but not an utterly ridiculous one either) of how agriculture and civilization could emerge even if hunter-gatherer life was actually better. It basically involves agriculture being discovered by accident, and then people gradually transitioning to a sedentary mode of life and not realizing their mistake until generations had passed and all the old skills were lost. There are various holes one can poke in this theory (Were the skills really lost? Couldn’t they be recovered from others? Indeed, haven’t people done that, in living memory, by “going native”?), but it’s at least better than simply saying “civilization was a mistake”.

Yet Bregman’s own account, particularly his discussion of how early civilizations all seem to have been slave states, seems to better support what I think is becoming the historical consensus, which is that civilization emerged because a handful of psychopaths gathered armies to conquer and enslave everyone around them. This is bad news for anyone who holds to a naively Whiggish view of history as a continuous march of progress (which I have oft heard accused but rarely heard endorsed), but it’s equally bad news for anyone who believes that all human beings are basically good and we should—or even could—return to a state of blissful anarchism.

Indeed, this is where Bregman’s view and mine part ways. We both agree that most people are mostly good most of the time. He even acknowledges that about 2% of people are psychopaths, which is a very plausible figure. (The figures I find most credible are about 1% of women and about 4% of men, which averages out to 2.5%. The prevalence you get also depends on how severely lacking in empathy someone needs to be in order to qualify. I’ve seen figures as low as 1% and as high as 4%.) What he fails to see is how that 2% of people can have large effects on society, wildly disproportionate to their number.

Consider the few dozen murders that are committed in any given city of a million people each year. Who is committing those murders? By and large, psychopaths. That’s more true of premeditated murder than of crimes of passion, but even the latter are far more likely to be committed by psychopaths than the general population.

Or consider those early civilizations that were nearly all authoritarian slave-states. What kind of person tends to govern an authoritarian slave-state? A psychopath. Sure, probably not every Roman emperor was a psychopath—but I’m quite certain that Commodus and Caligula were, and I suspect that Augustus and several others were as well. And the ones who don’t seem like psychopaths (like Marcus Aurelius) still seem like narcissists. Indeed, I’m not sure it’s possible to be an authoritarian emperor and not be at least a narcissist; should an ordinary person somehow find themselves in the role, I think they’d immediately set out to delegate authority and improve civil liberties.

This suggests that civilization was not so much a mistake as it was a crime—civilization was inflicted upon us by psychopaths and their greed for wealth and power. Like I said, not great for a “march of progress” view of history. Yet a lot has changed in the last few thousand years, and life in the 21st century at least seems overall pretty good—and almost certainly better than life on the African savannah 50,000 years ago.

In essence, what I think happened is that we invented a technology to turn the tables of civilization, using the same tools psychopaths had used to oppress us as a means to contain them. This technology was called democracy. The institutions of democracy allowed us to convert government from a means by which psychopaths oppress and extract wealth from the populace to a means by which the populace could prevent psychopaths from committing wanton acts of violence.

Is it perfect? Certainly not. Indeed, there are many governments today that much better fit the “psychopath oppressing people” model (e.g. Russia, China, North Korea), and even in First World democracies there are substantial abuses of power and violations of human rights. In fact, psychopaths are overrepresented among the police and also among politicians. Perhaps there are superior modes of governance yet to be found that would further reduce the power psychopaths have and thereby make life better for everyone else.

Yet it remains clear that democracy is better than anarchy. This is not so much because anarchy results in everyone behaving badly and causes immediate chaos (as many people seem to erroneously believe), but because it results in enough people behaving badly to be a problem—and because some of those people are psychopaths who will take advantage of the power vacuum to seize control for themselves.

Yes, most people are basically good. But enough people aren’t that it’s a problem.

Bregman seems to think that simply outnumbering the psychopaths is enough to keep them under control, but history clearly shows that it isn’t. We need institutions of governance to protect us. And for the most part, First World democracies do a fairly good job of that.

Indeed, I think Bregman’s perspective may be a bit clouded by being Dutch, as the Netherlands has one of the highest rates of trust in the world. Nearly 90% of people in the Netherlands trust their neighbors. Even the US has high levels of trust by world standards, at about 84%; a more typical country is India or Mexico at 64%, and the least-trusting countries are places like Gabon with 31% or Benin with a dismal 23%. Trust in government varies widely, from an astonishing 94% in Norway (then again, have you seen Norway? Their government is doing a bang-up job!) to 79% in the Netherlands, to closer to 50% in most countries (in this the US is more typical), all the way down to 23% in Nigeria (which seems equally justified). Some mysteries remain, like why more people trust the government in Russia than in Namibia. (Maybe people in Namibia are just more willing to speak their minds? They’re certainly much freer to do so.)

In other words, Dutch people are basically good. Not that the Netherlands has no psychopaths; surely they have a few just like everyone else. But they have strong, effective democratic institutions that provide both liberty and security for the vast majority of the population. And with the psychopaths under control, everyone else can feel free to trust each other and cooperate, even in the absence of obvious government support. It’s precisely because the government of the Netherlands is so unusually effective that someone living there can come to believe that government is unnecessary.

In short, Bregman is right that we should have donation boxes—and a lot of people seem to miss that (especially economists!). But he seems to forget that we need to keep them locked.

Cryptocurrency and its failures

Jan 30 JDN 2459620

It started out as a neat idea, though very much a solution in search of a problem. Using encryption, could we decentralize currency and eliminate the need for a central bank?

Well, it has been a few years now, and we have seen how well that went. Bitcoin recently crashed, but it has always been astonishingly volatile. As a speculative asset, such volatility is often tolerable—for many, even profitable. But as a currency, it is completely unbearable. People need to know that their money will be a store of value and a medium of exchange—and something whose price changes from one minute to the next is neither.

Some of cryptocurrency’s failures have been hilarious, like the ill-fated island called [yes, really] “Cryptoland”, which crashed and burned when they couldn’t find any investors to help them buy the island.

Others have been darkly comic, but tragic in their human consequences. Chief among these was the failed attempt by El Salvador to make Bitcoin an official currency.

At the time, President Bukele justified it with an economically baffling argument: The total value of all Bitcoin in the world is $680 billion; therefore, if even 1% of that gets invested in El Salvador, GDP will increase by $6.8 billion, which is 25%!

First of all, that would only happen if 1% of all Bitcoin were invested in El Salvador each year—otherwise you’re looking at a one-time injection of money, not an increase in GDP.

But more importantly, this is like saying that the total US dollar supply is $6 trillion (that’s physical cash; the actual money supply is considerably larger), so maybe by dollarizing your economy you can get 1% of that—$60 billion, baby! No, that’s not how any of this works. Dollarizing could still be a good idea (though it didn’t go all that well in El Salvador), but it won’t give you some kind of share in the US economy. You can’t collect dividends on US GDP.
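
To make the stock-versus-flow confusion concrete, here is the arithmetic as a short sketch; the El Salvador GDP figure of roughly $27 billion is my assumption (about its value at the time):

```python
# Bukele's arithmetic runs into the stock/flow distinction.
# Assumption: El Salvador GDP of ~$27 billion (about its value at the time).

bitcoin_market_cap = 680e9   # total value of all Bitcoin: a stock ($)
el_salvador_gdp = 27e9       # GDP: a flow ($/year)

inflow = 0.01 * bitcoin_market_cap   # $6.8 billion, as a one-time event
print(f"One-time inflow: ${inflow / 1e9:.1f}B "
      f"= {inflow / el_salvador_gdp:.0%} of a single year's GDP")

# GDP measures production per year; a one-time asset inflow is not
# recurring income, so it cannot raise GDP by 25% year after year.
```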

The way El Salvador’s experiment in Bitcoin failed is actually good: Nobody bought into it in the first place. They couldn’t convince people to buy government assets that were backed by Bitcoin (perhaps because the assets were a strictly worse deal than just, er, buying Bitcoin). So the human cost of this idiotic experiment should be relatively minimal: It’s not like people are losing their homes over this.

That is, unless President Bukele doubles down, which he now appears to be doing. Even people who are big fans of cryptocurrency are unimpressed with El Salvador’s approach to it.

It would be one thing if there were some stable cryptocurrency that one could try pegging one’s national currency to, but there isn’t. Even so-called stablecoins are generally pegged to… regular currencies, typically the US dollar but also sometimes the Euro or a few other currencies. (I’ve seen the Australian Dollar and the Swiss Franc, but oddly enough, not the Pound Sterling.)

Or a country could try issuing its own cryptocurrency, as an all-digital currency instead of one that is partly paper. It’s not totally clear to me what advantages this would have over the current system (in which most of the money supply is bank deposits, i.e. already digital), but it would at least preserve the key advantage of having a central bank that can regulate your money supply.

But no, President Bukele decided to take an already-existing cryptocurrency, backed by nothing but the whims of the market, and make it legal tender. Somehow he missed the fact that a currency which rises and falls by 10% in a single day is generally considered bad.

Why? Is he just an idiot? I mean, maybe, though Bukele’s approval rating is astonishingly high. (And El Salvador is… mostly democratic. Unlike, say, Putin’s approval ratings, I think Bukele’s are basically real.) But that’s not the only reason. My guess is that he was gripped by the same FOMO that has gripped everyone else who evangelizes for Bitcoin. The allure of easy money is often irresistible.

Consider President Bukele’s position. You’re governing a poor, war-torn country which has had economic problems of various types since its founding. When the national currency collapsed a generation ago, the country was put on the US dollar, but that didn’t solve the problem. So you’re looking for a better solution to the monetary doldrums your country has been in for decades.

You hear about a fancy new monetary technology, “cryptocurrency”, which has all the tech people really excited and seems to be making tons of money. You don’t understand a thing about it—hardly anyone seems to, in fact—but you know that people with a lot of insider knowledge of technology and finance are really invested in it, so it seems like there must be something good here. So, you decide to launch a program that will convert your country’s currency from the US dollar to one of these new cryptocurrencies—and you pick the most famous one, which is also extremely valuable, Bitcoin.

Could cryptocurrencies be the future of money, you wonder? Could this be the way to save your country’s economy?

Despite all the evidence that had already accumulated that cryptocurrency wasn’t working, I can understand why Bukele would be tempted by that dream. Just as we’d all like to get free money without having to work, he wanted to save his country’s economy without having to implement costly and unpopular reforms.

But there is no easy money. Not really. Some people get lucky; but they ultimately benefit from other people’s hard work.

The lesson here is deeper than cryptocurrency. Yes, clearly, it was a dumb idea to try to make Bitcoin a national currency, and it will get even dumber if Bukele really does double down on it. But more than that, we must all resist the lure of easy money. If it sounds too good to be true, it probably is.

A very Omicron Christmas

Dec 26 JDN 2459575

Remember back in spring of 2020 when we thought that this pandemic would quickly get under control and life would go back to normal? How naive we were.

The newest Omicron strain seems to be the most infectious yet—even people who are fully vaccinated are catching it. The good news is that it also seems to be less deadly than most of the earlier strains. COVID is evolving to spread itself better, but to be less harmful to us—much as influenza and cold viruses evolved. While weekly cases are near an all-time peak, weekly deaths are well below their previous worst.

Indeed, at this point, it’s looking like COVID will more or less be with us forever. In the most likely scenario, the virus will continue to evolve to be more infectious but less lethal, and then we will end up with another influenza on our hands: A virus that can’t be eradicated, gets huge numbers of people sick, but only kills a relatively small number. At some point we will decide that the risk of getting sick is low enough that it isn’t worth forcing people to work remotely or maybe even wear masks. And we’ll relax various restrictions and get back to normal with this new virus a regular part of our lives.


Merry Christmas?

But it’s not all bad news. The vaccination campaign has been staggeringly successful—the total number of vaccine doses administered now exceeds the world population, so on average each human being has received at least one dose.

And while 5.3 million deaths due to the virus over the last two years sounds terrible, it should be compared against the baseline rate of 15 million deaths during that same interval, and the fact that worldwide death rates have been rapidly declining. Had COVID not happened, 2021 would be like 2019, which had nearly the lowest death rate on record, at 7,579 deaths per million people per year. As it is, we’re looking at something more like 10,000 deaths per million people per year (1%), or roughly what we considered normal way back in the long-ago times of… the 1980s. To get even as bad as things were in the 1950s, we would have to double our current death rate.

Indeed, there’s something quite remarkable about the death rate we had in 2019, before the pandemic hit: 7,579 per million is only 0.76%. A being with a constant annual death rate of 0.76% would have a life expectancy of over 130 years. This very low death rate is partly due to demographics: The current world population is unusually young and healthy because the world recently went through huge surges in population growth. Due to demographic changes the UN forecasts that our death rate will start to climb again as fertility falls and the average age increases; but they are still predicting it will stabilize at about 11,200 per million per year, which would be a life expectancy of 90. And that estimate could well be too pessimistic, if medical technology continues advancing at anything like its current rate.
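
Those life-expectancy figures follow from the constant-hazard approximation: with a constant annual death rate, survival is exponential and life expectancy is simply the reciprocal of the rate. A quick check:

```python
# Constant-hazard approximation: with constant annual death rate m,
# life expectancy is 1/m.

for label, per_million in [("2019 world death rate", 7_579),
                           ("UN projected plateau", 11_200)]:
    m = per_million / 1e6
    print(f"{label}: {m:.2%}/year -> life expectancy {1 / m:.0f} years")
# Prints ~132 years and ~89 years, roughly matching the figures above.
```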

We call it Christmas, but it’s really a syncretized amalgamation of holidays: Yule, Saturnalia, various Solstice celebrations. (Indeed, there’s no particular reason to think Jesus was even born in December.) Most Northern-hemisphere civilizations have some sort of Solstice holiday, and we’ve greedily co-opted traditions from most of them. The common theme really seems to be this:

Now it is dark, but band together and have hope, for the light shall return.

Diurnal beings in northerly latitudes instinctively fear the winter, when it becomes dark and cold and life becomes more hazardous—but we have learned to overcome this fear together, and we remind ourselves that light and warmth will return by ritual celebrations.

The last two years have made those celebrations particularly difficult, as we have needed to isolate ourselves in order to keep ourselves and others safe. Humans are fundamentally social at a level most people—even most scientists—do not seem to grasp: We need contact with other human beings as deeply and vitally as we need food or sleep.

The Internet has allowed us to get some level of social contact while isolated, which has been a tremendous boon; but I think many of us underestimated how much we would miss real face-to-face contact. I think much of the vague sense of malaise we’ve all been feeling even when we aren’t sick and even when we’ve largely adapted our daily routine to working remotely comes from this: We just aren’t getting the chance to see people in person nearly as often as we want—as often as we hadn’t even realized we needed.

So, if you do travel to visit family this holiday season, I understand your need to do so. But be careful. Get vaccinated—three times, if you can. Don’t have any contact with others who are at high risk if you do have any reason to think you’re infected.

Let’s hope next Christmas is better.

Low-skill jobs

Dec 5 JDN 2459554

I’ve seen this claim going around social media for a while now: “Low-skill jobs are a classist myth created to justify poverty wages.”

I can understand why people would say things like this. I even appreciate that many low-skill jobs are underpaid and unfairly stigmatized. But it’s going a bit too far to claim that there is no such thing as a low-skill job.

Suppose all the world’s physicists and all the world’s truckers suddenly had to trade jobs for a month. Who would have a harder time?

If a mathematician were asked to do the work of a janitor, they’d be annoyed. If a janitor were asked to do the work of a mathematician, they’d be completely nonplussed.

I could keep going: Compare robotics engineers to dockworkers or software developers to fruit pickers.

Higher pay does not automatically equate to higher skills: welders are clearly more skilled than stock traders. Give any welder a million-dollar account and a few days of training, and they could do just as well as the average stock trader (which is to say, worse than the S&P 500). Give any stock trader welding equipment and a similar amount of training, and they’d be lucky to not burn their fingers off, much less actually usefully weld anything.

This is not to say that any random person off the street could do just as well as a janitor or dockworker as someone who has years of experience at that job. It is simply to say that they could do better—and pick up the necessary skills faster—than a random person trying to work as a physicist or software developer.

Moreover, this does justify some difference in pay. If some jobs are easier than others, in the sense that more people are qualified to do them, then the harder jobs will need to pay more in order to attract good talent—if they didn’t, they’d risk their high-skill workers going and working at the low-skill jobs instead.

This is of course assuming all else equal, which is clearly not the case. No two jobs are the same, and there are plenty of other considerations that go into choosing someone’s wage: For one, not simply what skills are required, but also the effort and unpleasantness involved in doing the work. I’m entirely prepared to believe that being a dockworker is less fun than being a physicist, and this should reduce the differential in pay between them. Indeed, it may have: Dockworkers are paid relatively well as far as low-skill jobs go—though nowhere near what physicists are paid. Then again, productivity is also a vital consideration, and there is a general tendency that high-skill jobs tend to be objectively more productive: A handful of robotics engineers can do what was once the work of hundreds of factory laborers.

There are also ways for a worker to be profitable without being particularly productive—that is, to be very good at rent-seeking. This is arguably the case for lawyers and real estate agents, and undeniably the case for derivatives traders and stockbrokers. Corporate executives aren’t stupid; they wouldn’t pay these workers astronomical salaries if they weren’t making money doing so. But it’s quite possible to make lots of money without actually producing anything of particular value for human society.

But that doesn’t mean that wages are always fair. Indeed, I dare say they typically are not. One of the most important determinants of wages is bargaining power. Unions don’t increase skill and probably don’t increase productivity—but they certainly increase wages, because they increase bargaining power.

And this is also something that’s correlated with lower levels of skill, because the more people there are who know how to do what you do, the harder it is for you to make yourself irreplaceable. A mathematician who works on the frontiers of conformal geometry or Teichmueller theory may literally be one of ten people in the world who can do what they do (quite frankly, even the number of people who know what they do is considerably constrained, though probably still at least in the millions). A dockworker, even one who is particularly good at loading cargo skillfully and safely, is still competing with millions of other people with similar skills. The easier a worker is to replace, the less bargaining power they have—in much the same way that a monopoly has higher profits than an oligopoly, which has higher profits than a competitive market.

This is why I support unions. I’m also a fan of co-ops, and an ardent supporter of progressive taxation and safety regulations. So don’t get me wrong: Plenty of low-skill workers are mistreated and underpaid, and they deserve better.

But that doesn’t change the fact that it’s a lot easier to be a janitor than a physicist.

Risk compensation is not a serious problem

Nov 28 JDN 2459547

Risk compensation. It’s one of those simple but counter-intuitive ideas that economists love, and it has been a major consideration in regulatory policy since the 1970s.

The idea is this: The risk we face in our actions is partly under our control. It requires effort to reduce risk, and effort is costly. So when an external source, such as a government regulation, reduces our risk, we will compensate by reducing the effort we expend, and thus our risk will decrease less, or maybe not at all. Indeed, perhaps we’ll even overcompensate and make our risk worse!

It’s often used as an argument against various kinds of safety efforts: Airbags will make people drive worse! Masks will make people go out and get infected!

The basic theory here is sound: Effort to reduce risk is costly, and people try to reduce costly things.

Indeed, it’s theoretically possible that risk compensation could leave you with exactly the same risk as before, or even more—at least, I wasn’t able to prove that it could never happen for any possible risk profile and cost function.

But when I worked through a quite general functional form, it turned out to require rather extreme conditions. Here, let me show you.

Let’s say there’s some possible harm H. There is also some probability that it will occur, which you can mitigate with some choice x. For simplicity let’s say that it’s one-to-one, so that your risk of H occurring is precisely 1-x. Since probabilities must be between 0 and 1, thus so must x.

Reducing that risk costs effort. I won’t say much about that cost, except to call it c(x) and assume the following:

(1) It is increasing: More effort costs more (and, by construction, reduces risk more).

(2) It is convex: Reducing risk from a high level to a low level (e.g. 0.9 to 0.8) costs less than reducing it from a low level to an even lower level (e.g. 0.2 to 0.1).

These both seem like eminently plausible—indeed, nigh-unassailable—assumptions. And they result in the following total expected cost (the opposite of your expected utility):

(1-x)H + c(x)

Now let’s suppose there’s some policy which will reduce your risk by a factor r, which must be between 0 and 1. Your cost then becomes:

r(1-x)H + c(x)

Minimizing this yields the following result:

rH = c'(x)

where c'(x) is the derivative of c(x). Since c(x) is increasing and convex, c'(x) is positive and increasing.

Thus, if I make r smaller—an external source of less risk—then I will reduce the optimal choice of x. This is risk compensation.

But have I reduced or increased the amount of risk?

The total risk is r(1-x); since r decreased and so did x, it’s not clear whether this went up or down. Indeed, it’s theoretically possible for it to go up—but as we’ll see, only if you were already voluntarily eliminating most of the risk yourself.

For instance, suppose we assume that c(x) = ax^b, where a and b are constants. This seems like a pretty general form, doesn’t it? To maintain the assumption that c(x) is increasing and convex, I need a > 0 and b > 1. (If 0 < b < 1, you get a function that’s increasing but concave. If b=1, you get a linear function and some weird corner solutions where you either expend no effort at all or all possible effort.)

Then I’m trying to minimize:

r(1-x)H + ax^b

This results in a closed-form solution for x:

x = (rH/ab)^(1/(b-1))

Since b>1, 1/(b-1) > 0.


Thus, the optimal choice of x is increasing in rH and decreasing in ab. That is, reducing the harm H or the overall risk r will make me put in less effort, while reducing the cost of effort (via either a or b) will make me put in more effort. These all make sense.

Can I ever increase the overall risk by reducing r? Let’s see.


My total risk r(1-x) is therefore:

r(1-x) = r[1-(rH/ab)^(1/(b-1))]

Can making r smaller ever make this larger?

Well, let’s compare it against the case when r=1. We want to see if there’s a case where it’s actually larger.

r[1-(rH/ab)^(1/(b-1))] > [1-(H/ab)^(1/(b-1))]

r – r^(b/(b-1)) (H/ab)^(1/(b-1)) > 1 – (H/ab)^(1/(b-1))

For this to hold with r < 1, the baseline effort (H/ab)^(1/(b-1)) would have to exceed (1-r)/(1-r^(b/(b-1))), which is always greater than (b-1)/b. In other words, risk compensation can only increase total risk if you were already voluntarily eliminating more than a fraction (b-1)/b of the risk yourself (for b=2, well over half of it), and the threshold only rises as r falls. Outside of that rather extreme corner case, reducing risk externally reduces total risk even after compensation.
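
Here is a quick numerical check of that condition, using the same power-law cost assumption (the parameter values are illustrative, chosen only to exhibit the two regimes):

```python
# Power-law model: c(x) = a * x**b; total risk = r * (1 - x);
# the optimal effort is x = (r*H / (a*b)) ** (1 / (b - 1)).

def total_risk(r, H, a=1.0, b=2.0):
    x = min((r * H / (a * b)) ** (1 / (b - 1)), 1.0)  # optimal effort, capped at 1
    return r * (1 - x)

# Typical case: baseline effort 0.4, below (b-1)/b = 1/2.
# Halving r lowers total risk, as expected.
print(total_risk(1.0, 0.8), total_risk(0.5, 0.8))  # 0.6 -> 0.4

# Edge case: baseline effort 0.9, above (b-1)/b.
# Halving r actually raises total risk: overcompensation.
print(total_risk(1.0, 1.8), total_risk(0.5, 1.8))  # 0.1 -> 0.275
```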

Now, to be fair, this isn’t a fully general model. I had to assume some specific functional forms. But I didn’t assume much, did I?

Indeed, there is a fully general argument that externally reduced risk will never harm you. It’s quite simple.

There are three states to consider: In state A, you have your original level of risk and your original level of effort to reduce it. In state B, you have an externally reduced level of risk and your original level of effort. In state C, you have an externally reduced level of risk, and you compensate by reducing your effort.

Which states make you better off?

Well, clearly state B is better than state A: You get reduced risk at no cost to you.

Furthermore, state C must be better than state B: You voluntarily chose to risk-compensate precisely because it made you better off.

Therefore, as long as your preferences are rational, state C is better than state A.

Externally reduced risk will never make you worse off.

QED. That’s it. That’s the whole proof.

But I’m a behavioral economist, am I not? What if people aren’t being rational? Perhaps there’s some behavioral bias that causes people to overcompensate for reduced risks. That’s ultimately an empirical question.

So, what does the empirical data say? Risk compensation is almost never a serious problem in the real world. Measures designed to increase safety, lo and behold, actually increase safety. Removing safety regulations, astonishingly enough, makes people less safe and worse off.

If we ever do find a case where risk compensation is very large, then I guess we can remove that safety measure, or find some way to get people to stop overcompensating. But in the real world this has basically never happened.

It’s still a fair question whether any given safety measure is worth the cost: Implementing regulations can be expensive, after all. And while many people would like to think that “no amount of money is worth a human life”, nobody does—or should, or even can—act like that in the real world. You wouldn’t drive to work or get out of bed in the morning if you honestly believed that.

If it would cost $4 billion to save one expected life, it’s definitely not worth it. Indeed, you should still be able to see that even if you don’t think lives can be compared with other things—because $4 billion could save an awful lot of lives if you spent it more efficiently. (Probably over a million, in fact, as current estimates of the marginal cost to save one life are about $2,300.) Inefficient safety interventions don’t just cost money—they prevent us from doing other, more efficient safety interventions.

And as for airbags and wearing masks to prevent COVID? Yes, definitely 100% worth it, as both interventions have already saved tens if not hundreds of thousands of lives.

How can we fix medical residency?

Nov 21 JDN 2459540

Most medical residents work 60 or more hours per week, and nearly 20% work 80 or more hours. 66% of medical residents report sleeping 6 hours or less each night, and 20% report sleeping 5 hours or less.

It’s not as if sleep deprivation is a minor thing: Worldwide, across all jobs, nearly 750,000 deaths annually are attributable to long working hours, most of these due to sleep deprivation.


By some estimates, medical errors account for as many as 250,000 deaths per year in the US alone. Even the most conservative estimates say that at least 25,000 deaths per year in the US are attributable to medical errors. It seems quite likely that long working hours increase the rate of dangerous errors (though it has been difficult to determine precisely how much).

Indeed, the more we study stress and sleep deprivation, the more we learn how incredibly damaging they are to health and well-being. Yet we seem to have set up a system almost intentionally designed to maximize the stress and sleep deprivation of our medical professionals. Some of them simply burn out and leave the profession (about 18% of surgical residents quit); surely an even larger number of people never enter medicine in the first place because they know they would burn out.

Even once a doctor makes it through residency and has learned to cope with absurd hours, this most likely distorts their whole attitude toward stress and sleep deprivation. They are likely to not consider them “real problems”, because they were able to “tough it out”—and they are likely to assume that their patients can do the same. One of the primary functions of a doctor is to reduce pain and suffering, and by putting doctors through unnecessary pain and suffering as part of their training, we are teaching them that pain and suffering aren’t really so bad and you should just grin and bear it.

We are also systematically selecting against doctors who have disabilities that would make it difficult to work these double-time hours—which means that the doctors who are most likely to sympathize with disabled patients are being systematically excluded from the profession.

There have been some attempts to regulate the working hours of residents, but they have generally not been effective. I think this is for three reasons:

1. They weren’t actually trying hard enough. A cap of 80 hours per week is still 40 hours too high; it looks to me like an attempt to get better PR without fixing the actual problem.

2. Their enforcement mechanisms left too much opportunity to game the system; in practice, most medical residents were simply pressured to continue overworking and to under-report their hours.

3. They don’t seem to have considered how to effect the transition in a way that wouldn’t reduce the total number of resident-hours, so residents got less training and hospitals were left understaffed.

The solution to problem 1 is obvious: The cap needs to be lower. Much lower.

The solution to problem 2 is trickier: What sort of enforcement mechanism would prevent hospitals from gaming the system?

I believe the answer is very steep overtime pay requirements, coupled with regular and intensive auditing. Every hour a medical resident goes over their cap, they should have to be paid triple time. Audits should be performed frequently, randomly and without notice. And if a hospital is caught falsifying their records, they should be required to pay all missing hours to all medical residents at quintuple time. And Medicare and Medicaid should not be allowed to reimburse these additional payments—they must come directly out of the hospital’s budget.

Under the current system, the “punishment” is usually a threat of losing accreditation, which is too extreme and too harmful to the residents. Precisely because this is such a drastic measure, it almost never happens. The punishment needs to be small enough that we will actually enforce it; and it needs to hurt the hospital, not the residents—overtime pay would do precisely that.

That brings me to problem 3: How can we ensure that we don’t reduce the total number of resident-hours?

This is important for two reasons: Each resident needs a certain number of hours of training to become a skilled doctor, and residents provide a significant proportion of hospital services. Of the roughly 1 million doctors in the US, about 140,000 are medical residents.

The answer is threefold:

1. Increase the number of residency slots (we have a global doctor shortage anyway).

2. Extend the duration of residency so that each resident gets the same number of total work hours.

3. Gradually phase in so that neither increase needs to be too fast.

Currently a typical residency is about 4 years. 4 years of 80-hour weeks is equivalent to 8 years of 40-hour weeks. The goal is for each resident to get 320 hour-years (hours per week × years) of training.

With 140,000 current residents averaging 4 years, a typical cohort is about 35,000. So the goal is to maintain, each year, at least (35,000 residents per cohort)(4 cohorts)(80 hours per week) = 11.2 million resident-hours per week.

In cohort 1, we reduce the cap to 70 hours, and increase the number of accepted residents to 40,000. Residents in cohort 1 will continue their residency for 4 years, 7 months. This gives each one 321 hour-years of training.

In cohort 2, we reduce the cap to 60 hours, and increase the number of accepted residents to 46,000.

Residents in cohort 2 will continue their residency for 5 years, 4 months. This gives each one 320 hour-years of training.

In cohort 3, we reduce the cap to 55 hours, and increase the number of accepted residents to 50,000.

Residents in cohort 3 will continue their residency for 6 years. This gives each one 330 hour-years of training.

In cohort 4, we reduce the cap to 50 hours, and increase the number of accepted residents to 56,000. Residents in cohort 4 will continue their residency for 6 years, 6 months. This gives each one 325 hour-years of training.

In cohort 5, we reduce the cap to 45 hours, and increase the number of accepted residents to 60,000. Residents in cohort 5 will continue their residency for 7 years, 2 months. This gives each one 322 hour-years of training.

In cohort 6, we reduce the cap to 40 hours, and increase the number of accepted residents to 65,000. Residents in cohort 6 will continue their residency for 8 years. This gives each one 320 hour-years of training.

In cohort 7, we keep the cap at 40 hours, and increase the number of accepted residents to 70,000. This is now the new standard: 8-year residencies at 40 hours per week.

I’ve made a graph here of what this does to the available number of resident-hours each year. There is a brief 5% dip in year 4, but by the time we reach year 14 we’ve actually doubled the total number of available resident-hours at any given time—without increasing the total amount of work each resident does, simply keeping them longer and working them less intensively each year. Given that quality of work is reduced by working longer hours, it’s likely that even this brief reduction in hours would not result in any reduced quality of care for patients.

[residency_hours.png]
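
For those who want to check the arithmetic, here is a minimal simulation of the transition. I assume each cohort enters at the start of its year; the exact size and timing of the dip shifts slightly with such conventions, so this won’t exactly reproduce the graph:

```python
# Cohorts: (entry_year, size, weekly_hour_cap, duration_years).
cohorts = [(entry, 35_000, 80, 4.0) for entry in range(-3, 1)]  # pre-reform
reform = [(40_000, 70, 4 + 7/12), (46_000, 60, 5 + 4/12),
          (50_000, 55, 6.0),      (56_000, 50, 6.5),
          (60_000, 45, 7 + 2/12), (65_000, 40, 8.0)]
cohorts += [(i, size, cap, dur) for i, (size, cap, dur) in enumerate(reform, 1)]
cohorts += [(entry, 70_000, 40, 8.0) for entry in range(7, 16)]  # steady state

for year in range(15):
    # Weekly resident-hours, averaged over this calendar year:
    hours = sum(size * cap * max(0.0, min(entry + dur, year + 1) - max(entry, year))
                for entry, size, cap, dur in cohorts)
    print(f"Year {year:2d}: {hours / 1e6:5.2f}M resident-hours/week")
# Baseline (year 0) is 11.2M; by year 14 the total reaches 22.4M -- double.
```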

I have thus managed to increase the number of available resident-hours, ensure that each resident gets the same amount of training as before, and still radically reduce the work hours from 80 per week to 40 per week. The additional recruitment each year is never more than 6,000 new residents or 15% of the current number of residents.

It takes several years to effect this transition. This is unavoidable if we are trying to avoid massive increases in recruitment, though if we were prepared to simply double the number of admitted residents each year we could immediately transition to 40-hour work weeks in a single cohort and the available resident-hours would then strictly increase every year.

This plan is likely not the optimal one; I don’t know enough about the details of how costly it would be to admit more residents, and it’s possible that some residents might actually prefer a briefer, more intense residency rather than a longer, less stressful one. (Though it’s worth noting that most people greatly underestimate the harms of stress and sleep deprivation, and doctors don’t seem to be any better in this regard.)

But this plan does prove one thing: There are solutions to this problem. It can be done. If our medical system isn’t solving this problem, it is not because solutions do not exist—it is because they are choosing not to take them.

Does power corrupt?

Nov 7 JDN 2459526

It’s a familiar saying, originally attributed to Lord Acton: “Power tends to corrupt, and absolute power corrupts absolutely. Great men are nearly always bad men.”

I think this saying is not only wrong, but in fact dangerous. We can all observe plenty of corrupt people in power, that much is true. But if it’s simply the power that corrupts them, and they started as good people, then there’s really nothing to be done. We may try to limit the amount of power any one person can have, but in any large, complex society there will be power, and so, if the saying is right, there will also be corruption.

How do I know that this saying is wrong?

First of all, note that corruption varies tremendously, and with very little correlation with most sensible notions of power.

Consider used car salespeople, stockbrokers, drug dealers, and pimps. All of these professions are rather well known for their high level of corruption. Yet are people in these professions powerful? Yes, any manager has some power over their employees; but there’s no particular reason to think that used car dealers have more power over their employees than grocery stores, and yet there’s a very clear sense in which used car dealers are more corrupt.

Even power on a national scale is not inherently tied to corruption. Consider the following individuals: Nelson Mandela, Mahatma Gandhi, Abraham Lincoln, and Franklin Roosevelt.

These men were extremely powerful; each ruled an entire nation. Indeed, during his administration, FDR was probably the most powerful person in the world. And they certainly were not impeccable: Mandela was a good friend of Fidel Castro, Gandhi abused his wife, Lincoln suspended habeas corpus, and of course FDR ordered the internment of Japanese-Americans. Yet overall I think it’s pretty clear that these men were not especially corrupt and had a large positive impact on the world.

Say what you will about Bernie Sanders, Dennis Kucinich, or Alexandria Ocasio-Cortez. Idealistic? Surely. Naive? Perhaps. Unrealistic? Sometimes. Ineffective? Often. But they are just as powerful as anyone else in the US Congress, and ‘corrupt’ is not a word I’d use to describe them. Mitch McConnell, on the other hand….

There does seem to be a positive correlation between a country’s level of corruption and its level of authoritarianism; the most democratic countries—Scandinavia—are also the least corrupt. Yet India is surely more democratic than China, but is widely rated as having about the same level of corruption. Greece is not substantially less democratic than Chile, but it has considerably more corruption. So even at a national level, power is not the only determinant of corruption.

I’ll even agree to the second clause: “absolute power corrupts absolutely.” Were I somehow granted an absolute dictatorship over the world, one of my first orders of business would be to establish a new democratic world government to replace my dictatorial rule. (Would it be my first order of business, or would I implement some policy reforms first? Now that’s a tougher question. I think I’d want to implement some kind of income redistribution and anti-discrimination laws before I left office, at least.) And I believe that most good people think similarly: We wouldn’t want to have that kind of power over other people. We wouldn’t trust ourselves to never abuse it. Anyone who maintains absolute power is either already corrupt or likely to become so. And anyone who seeks absolute power is precisely the sort of person who should not be trusted with power at all.

It may also be that power is one determinant of corruption—that a given person will generally end up more corrupt if you give them more power. This might help explain why even the best ‘great men’ are still usually bad men. But clearly there are other determinants that are equally important.

And I would like to offer a different hypothesis to explain the correlation between power and corruption, which has profoundly different implications: The corrupt seek power.

Donald Trump didn’t start out a good man and become corrupt by becoming a billionaire or becoming President. Donald Trump was born a narcissistic idiot.

Josef Stalin wasn’t a good man who became corrupted by the unlimited power of ruling the Soviet Union. Josef Stalin was born a psychopath.

Indeed, when you look closely at how corrupt leaders get into power, it often involves manipulating and exploiting others on a grand scale. They are willing to compromise principles that good people wouldn’t. They aren’t corrupt because they got into power; they got into power because they are corrupt.

Let me be clear: I’m not saying we should compromise all of our principles in order to achieve power. If there is a route by which power corrupts, it is surely that. Rather, I am saying that we must maintain constant vigilance against anyone who seems so eager to attain power that they will compromise principles to do it—for those are precisely the people who are likely to be most dangerous if they should achieve their aims.

Moreover, I’m saying that “power corrupts” is actually a very dangerous message. It tells good people not to seek power, because they would be corrupted by it. But in fact what we actually need in order to get good people in power is more good people seeking power, more opportunities to out-compete the corrupt. If Congress were composed entirely of people like Alexandria Ocasio-Cortez, then the left-wing agenda would no longer seem naive and unrealistic; it would simply be what gets done. (Who knows? Maybe it wouldn’t work out so well after all. But it definitely would get done.) Yet how many idealistic left-wing people have heard that phrase ‘power corrupts’ too many times, and decided they didn’t want to risk running for office?

Indeed, the notion that corruption is inherent to the exercise of power may well be the greatest tool we have ever given to those who are corrupt and seeking to hold onto power.

Are unions collusion?

Oct 31 JDN 2459519

The standard argument from center-right economists against labor unions is that they are a form of collusion: Producers are coordinating and intentionally holding back from what would be in their individual self-interest in order to gain a collective advantage. And this is basically true: In the broadest sense of the term, labor unions are a form of collusion. Since collusion is generally regarded as bad, therefore (this argument goes), unions are bad.

What this argument misses is why collusion is generally regarded as bad. The typical case of collusion is between large corporations, each of which already controls a large share of the market—collusion then allows them to act as if they control an even larger share, potentially even acting as a monopoly.
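
To see concretely why that is harmful, here is a minimal sketch in Python of the textbook case: a handful of large firms facing a linear demand curve, first competing and then colluding as a cartel. Every number and functional form here (the demand curve, the constant marginal cost, the parameter values) is an illustrative assumption, not an estimate of any real market:

```python
# Stylized illustration of why collusion among sellers is harmful:
# n firms sell a homogeneous product with linear demand P = a - b*Q
# and constant marginal cost c. All parameters are made up.

a, b, c, n = 100.0, 1.0, 20.0, 4   # hypothetical demand, cost, and firm count

# Competing (symmetric Cournot equilibrium): each firm takes its rivals'
# output as given, so total output stays relatively high.
Q_compete = n * (a - c) / ((n + 1) * b)
P_compete = a - b * Q_compete

# Colluding (acting as a single monopolist): the cartel restricts
# output in order to push the price up to the monopoly level.
Q_collude = (a - c) / (2 * b)
P_collude = a - b * Q_collude

print(f"Cournot competition: quantity = {Q_compete:.1f}, price = {P_compete:.1f}")
print(f"Cartel (collusion):  quantity = {Q_collude:.1f}, price = {P_collude:.1f}")
```

With these made-up numbers, collusion cuts total output from 64 to 40 and raises the price from 36 to 60: consumers pay more and get less. That is the standard reason collusion is condemned.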

Labor unions are not like this. Literally no individual laborer controls a large segment of the market. (Some very specialized laborers, like professional athletes, or, say, economists, might control a not completely trivial segment of their particular job market—but we’re still talking something like 1% at most. Even Tiger Woods or Paul Krugman is not literally irreplaceable.) Moreover, even the largest unions can rarely achieve anything like a monopoly over a particular labor market.

Thus whereas typical collusion involves going from a large market share to an even larger—often even dominant—market share, labor unions involve going from a tiny market share to a moderate—and usually not dominant—market share.

But that, by itself, wouldn’t be enough to justify unions. While small family businesses banding together in collusion is surely less harmful than large corporations doing the same, it would probably still be a bad thing, insofar as it would raise prices and reduce the quantity or quality of products sold. It would just be less bad.

Yet unions differ from even this milder collusion in another important respect: They do not exist to increase bargaining power versus consumers. They exist to increase bargaining power versus corporations.

And corporations, it turns out, already have a great deal of bargaining power. While a labor union acts as something like a monopoly (or at least oligopoly), corporations act like the opposite: oligopsony or even monopsony.

While monopoly or monopsony on its own is highly unfair and inefficient, the combination of the two—bilateral monopoly—is actually relatively fair and efficient. Bilateral monopoly is probably not as good as a truly competitive market, but it is definitely better than either a monopoly or a monopsony alone. Whereas a monopoly gives too much bargaining power to the seller (resulting in prices that are too high), and a monopsony gives too much bargaining power to the buyer (resulting in prices that are too low), a bilateral monopoly has relatively balanced bargaining power, and thus gets an outcome that’s not too much different from fair competition in a free market.
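
To make this concrete, here is a minimal sketch of a stylized linear labor market, in the same spirit as the cartel example above. Again, all numbers are made-up assumptions; in particular, I model the bilateral monopoly with a simple symmetric bargaining rule (efficient employment, with the wage splitting the difference between the two one-sided outcomes), which is one common stylization, not the only possible one:

```python
# Stylized linear labor market, illustrating why bilateral monopoly
# lands between the monopsony and union (monopoly) outcomes.
# All parameters are made up for readability.

# Labor demand (marginal revenue product): w = A - B*L
# Labor supply (workers' reservation wage): w = C + D*L
A, B = 60.0, 1.0   # hypothetical demand intercept and slope
C, D = 10.0, 1.0   # hypothetical supply intercept and slope

# Competitive benchmark: demand equals supply.
L_comp = (A - C) / (B + D)
w_comp = C + D * L_comp

# Monopsony: a single employer hires until the marginal cost of labor
# (C + 2*D*L, since raising one wage raises all wages) equals demand.
L_buyer = (A - C) / (B + 2 * D)
w_buyer = C + D * L_buyer              # wage below competitive

# Union as labor monopolist: restricts employment until marginal
# revenue from selling labor (A - 2*B*L) equals the reservation wage.
L_seller = (A - C) / (2 * B + D)
w_seller = A - B * L_seller            # wage above competitive

# Bilateral monopoly (assumed symmetric bargaining): employment at the
# efficient level, wage halfway between the two one-sided outcomes.
L_bilateral = L_comp
w_bilateral = (w_buyer + w_seller) / 2

for name, L, w in [
    ("competitive", L_comp, w_comp),
    ("monopsony", L_buyer, w_buyer),
    ("union monopoly", L_seller, w_seller),
    ("bilateral monopoly", L_bilateral, w_bilateral),
]:
    print(f"{name:>18}: employment = {L:5.2f}, wage = {w:5.2f}")
```

With these parameters, the monopsony wage (about 27) falls well below the competitive wage (35), the union wage (about 43) rises well above it, and the bilateral outcome lands right back near the competitive benchmark, which is precisely the point of the argument above.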

Thus, unions really exist as a correction mechanism for the excessive bargaining power of corporations. Most unions organize workers in large industries who work for a relatively small number of employers, such as miners, truckers, and factory workers. (Teachers are also an interesting example, because they work for the government, which effectively has a monopsony on public education services.) In isolation unions may seem inefficient; but in context they exist to compensate for other, worse inefficiencies.

We could imagine a world where this was not so: Say there is a market with many independent buyers who are unwilling or unable to reliably collude, and they are served by a small number of powerful unions that use their bargaining power to raise prices and reduce output. In a world like that, unions really would be the harmful kind of collusion that the standard argument describes.

We have some markets that already look a bit like that: Consider the licensing systems for doctors and lawyers. These are basically guilds, which are collusive in the same way as labor unions.

Note that unlike, say, miners, truckers, or factory workers, doctors and lawyers are not a large segment of the population; they are bargaining against consumers just as much as against corporations; and they are extremely well-paid and very likely undersupplied. (Doctors are definitely undersupplied; with lawyers it’s a bit more complicated, but given how often corporations get away with terrible things and don’t get sued for it, I think it’s fair to say that in the current system, lawyers are undersupplied.) So I think it is fair to be concerned that the guild systems for doctors and lawyers are too powerful. We want some system for certifying the quality of doctors and lawyers, but the existing standards are so demanding that they result in a shortage of much-needed labor.

One way to tell that unions aren’t inefficient is to look at how unionization relates to unemployment. If unions were acting as a harmful monopoly on labor, unemployment should be higher in places with greater unionization rates. The empirical data suggest that if there is any such effect, it’s a small one. There are far more important determinants of unemployment than unionization. (Wages, on the other hand, show a strong positive link with unionization.) Much like the standard prediction that raising the minimum wage would reduce employment, the prediction that unions raise unemployment has largely not been borne out by the data. And for much the same reason: We had ignored the bargaining power of employers, which both minimum wages and unions reduce.

Thus, the justifiability of unions isn’t something that we could infer a priori without looking at the actual structure of the labor market. Unions aren’t always or inherently good—but they are usually good in the system as it stands. (Actually there’s one particular class of unions that do not seem to be good, and that’s police unions: But this is a topic for another time.)

My ultimate conclusion? Yes, unions are a form of collusion. But to infer from this that they must be bad is to commit the Noncentral Fallacy. Unions are the good kind of collusion.