A very Omicron Christmas

Dec 26 JDN 2459575

Remember back in spring of 2020 when we thought that this pandemic would quickly get under control and life would go back to normal? How naive we were.

The newest Omicron strain seems to be the most infectious yet—even people who are fully vaccinated are catching it. The good news is that it also seems to be less deadly than most of the earlier strains. COVID is evolving to spread itself better, but not to be as harmful to us—much as influenza and cold viruses evolved. While weekly cases are near an all-time peak, weekly deaths are well below the worst they had been.

Indeed, at this point, it’s looking like COVID will more or less be with us forever. In the most likely scenario, the virus will continue to evolve to be more infectious but less lethal, and then we will end up with another influenza on our hands: A virus that can’t be eradicated, gets huge numbers of people sick, but only kills a relatively small number. At some point we will decide that the risk of getting sick is low enough that it isn’t worth forcing people to work remotely or maybe even wear masks. And we’ll relax various restrictions and get back to normal with this new virus a regular part of our lives.


Merry Christmas?

But it’s not all bad news. The vaccination campaign has been staggeringly successful—the total number of vaccine doses administered now exceeds the world population, so on average every human being has received at least one dose.

And while 5.3 million deaths due to the virus over the last two years sounds terrible, it should be compared against the baseline rate of 15 million deaths during that same interval, and the fact that worldwide death rates have been rapidly declining. Had COVID not happened, 2021 would be like 2019, which had nearly the lowest death rate on record, at 7,579 deaths per million people per year. As it is, we’re looking at something more like 10,000 deaths per million people per year (1%), or roughly what we considered normal way back in the long-ago times of… the 1980s. To get even as bad as things were in the 1950s, we would have to double our current death rate.

Indeed, there’s something quite remarkable about the death rate we had in 2019, before the pandemic hit: 7,579 per million is only 0.76%. A being with a constant annual death rate of 0.76% would have a life expectancy of over 130 years. This very low death rate is partly due to demographics: The current world population is unusually young and healthy because the world recently went through huge surges in population growth. Due to demographic changes the UN forecasts that our death rate will start to climb again as fertility falls and the average age increases; but they are still predicting it will stabilize at about 11,200 per million per year, which would be a life expectancy of 90. And that estimate could well be too pessimistic, if medical technology continues advancing at anything like its current rate.
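These life-expectancy figures follow from a simple back-of-envelope model: if the annual death rate were constant, lifespans would be (roughly) exponentially distributed, and mean lifespan is just the reciprocal of the rate. A quick sketch, using the rates quoted above:

```python
# Life expectancy under a constant annual death rate: lifespans are then
# (approximately) exponentially distributed, with mean 1 / rate.
rate_2019 = 7_579 / 1_000_000          # 2019 world death rate, ~0.76%
rate_un_forecast = 11_200 / 1_000_000  # UN long-run forecast, ~1.12%

print(round(1 / rate_2019, 1))         # ~132 years
print(round(1 / rate_un_forecast, 1))  # ~89 years
```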

We call it Christmas, but it’s really a syncretized amalgamation of holidays: Yule, Saturnalia, various Solstice celebrations. (Indeed, there’s no particular reason to think Jesus was even born in December.) Most Northern-hemisphere civilizations have some sort of Solstice holiday, and we’ve greedily co-opted traditions from most of them. The common theme really seems to be this:

Now it is dark, but band together and have hope, for the light shall return.

Diurnal beings in northerly latitudes instinctively fear the winter, when it becomes dark and cold and life becomes more hazardous—but we have learned to overcome this fear together, and we remind ourselves that light and warmth will return by ritual celebrations.

The last two years have made those celebrations particularly difficult, as we have needed to isolate ourselves in order to keep ourselves and others safe. Humans are fundamentally social at a level most people—even most scientists—do not seem to grasp: We need contact with other human beings as deeply and vitally as we need food or sleep.

The Internet has allowed us to get some level of social contact while isolated, which has been a tremendous boon; but I think many of us underestimated how much we would miss real face-to-face contact. I think much of the vague sense of malaise we’ve all been feeling even when we aren’t sick and even when we’ve largely adapted our daily routine to working remotely comes from this: We just aren’t getting the chance to see people in person nearly as often as we want—as often as we hadn’t even realized we needed.

So, if you do travel to visit family this holiday season, I understand your need to do so. But be careful. Get vaccinated—three times, if you can. Don’t have any contact with others who are at high risk if you do have any reason to think you’re infected.

Let’s hope next Christmas is better.

Low-skill jobs

Dec 5 JDN 2459554

I’ve seen this claim going around social media for a while now: “Low-skill jobs are a classist myth created to justify poverty wages.”

I can understand why people would say things like this. I even appreciate that many low-skill jobs are underpaid and unfairly stigmatized. But it’s going a bit too far to claim that there is no such thing as a low-skill job.

Suppose all the world’s physicists and all the world’s truckers suddenly had to trade jobs for a month. Who would have a harder time?

If a mathematician were asked to do the work of a janitor, they’d be annoyed. If a janitor were asked to do the work of a mathematician, they’d be completely nonplussed.

I could keep going: Compare robotics engineers to dockworkers or software developers to fruit pickers.

Higher pay does not automatically equate to higher skills: welders are clearly more skilled than stock traders. Give any welder a million-dollar account and a few days of training, and they could do just as well as the average stock trader (which is to say, worse than the S&P 500). Give any stock trader welding equipment and a similar amount of training, and they’d be lucky to not burn their fingers off, much less actually usefully weld anything.

This is not to say that any random person off the street could do just as well as a janitor or dockworker as someone who has years of experience at that job. It is simply to say that they could do better—and pick up the necessary skills faster—than a random person trying to work as a physicist or software developer.

Moreover, this does justify some difference in pay. If some jobs are easier than others, in the sense that more people are qualified to do them, then the harder jobs will need to pay more in order to attract good talent—if they didn’t, they’d risk their high-skill workers going and working at the low-skill jobs instead.

This is of course assuming all else equal, which is clearly not the case. No two jobs are the same, and there are plenty of other considerations that go into choosing someone’s wage: For one, not simply what skills are required, but also the effort and unpleasantness involved in doing the work. I’m entirely prepared to believe that being a dockworker is less fun than being a physicist, and this should reduce the differential in pay between them. Indeed, it may have: Dockworkers are paid relatively well as far as low-skill jobs go—though nowhere near what physicists are paid. Then again, productivity is also a vital consideration, and there is a general tendency that high-skill jobs tend to be objectively more productive: A handful of robotics engineers can do what was once the work of hundreds of factory laborers.

There are also ways for a worker to be profitable without being particularly productive—that is, to be very good at rent-seeking. This is arguably the case for lawyers and real estate agents, and undeniably the case for derivatives traders and stockbrokers. Corporate executives aren’t stupid; they wouldn’t pay these workers astronomical salaries if they weren’t making money doing so. But it’s quite possible to make lots of money without actually producing anything of particular value for human society.

But that doesn’t mean that wages are always fair. Indeed, I dare say they typically are not. One of the most important determinants of wages is bargaining power. Unions don’t increase skill and probably don’t increase productivity—but they certainly increase wages, because they increase bargaining power.

And this is also something that’s correlated with lower levels of skill, because the more people there are who know how to do what you do, the harder it is for you to make yourself irreplaceable. A mathematician who works on the frontiers of conformal geometry or Teichmüller theory may literally be one of ten people in the world who can do what they do (quite frankly, even the number of people who know what they do is considerably constrained, though probably still at least in the millions). A dockworker, even one who is particularly good at loading cargo skillfully and safely, is still competing with millions of other people with similar skills. The easier a worker is to replace, the less bargaining power they have—in much the same way that a monopoly has higher profits than an oligopoly, which has higher profits than a competitive market.

This is why I support unions. I’m also a fan of co-ops, and an ardent supporter of progressive taxation and safety regulations. So don’t get me wrong: Plenty of low-skill workers are mistreated and underpaid, and they deserve better.

But that doesn’t change the fact that it’s a lot easier to be a janitor than a physicist.

Risk compensation is not a serious problem

Nov 28 JDN 2459547

Risk compensation. It’s one of those simple but counter-intuitive ideas that economists love, and it has been a major consideration in regulatory policy since the 1970s.

The idea is this: The risk we face in our actions is partly under our control. It requires effort to reduce risk, and effort is costly. So when an external source, such as a government regulation, reduces our risk, we will compensate by reducing the effort we expend, and thus our risk will decrease less, or maybe not at all. Indeed, perhaps we’ll even overcompensate and make our risk worse!

It’s often used as an argument against various kinds of safety efforts: Airbags will make people drive worse! Masks will make people go out and get infected!

The basic theory here is sound: Effort to reduce risk is costly, and people try to reduce costly things.

Indeed, it’s theoretically possible that risk compensation could yield the exact same risk, or even more risk than before—or at least, I wasn’t able to prove that for any possible risk profile and cost function it couldn’t happen.

But for any moderately calibrated risk profile and cost function, it doesn’t happen, even in a quite general form. Here, let me show you.

Let’s say there’s some possible harm H. There is also some probability that it will occur, which you can mitigate with some choice x. For simplicity let’s say the relationship is one-to-one, so that your risk of H occurring is precisely 1-x. Since probabilities must be between 0 and 1, so must x.

Reducing that risk costs effort. I won’t say much about that cost, except to call it c(x) and assume the following:

(1) It is increasing: More effort reduces risk more and costs more than less effort.

(2) It is convex: Reducing risk from a high level to a low level (e.g. 0.9 to 0.8) costs less than reducing it from a low level to an even lower level (e.g. 0.2 to 0.1).

These both seem like eminently plausible—indeed, nigh-unassailable—assumptions. And they result in the following total expected cost (the opposite of your expected utility):

(1-x)H + c(x)

Now let’s suppose there’s some policy which will reduce your risk by a factor r, which must be between 0 and 1. Your cost then becomes:

r(1-x)H + c(x)

Minimizing this yields the following result:

rH = c'(x)

where c'(x) is the derivative of c(x). Since c(x) is increasing and convex, c'(x) is positive and increasing.

Thus, if I make r smaller—an external source of less risk—then I will reduce the optimal choice of x. This is risk compensation.

But have I reduced or increased the amount of risk?

The total risk is r(1-x); since r decreased and so did x, it’s not clear whether this went up or down. Indeed, it’s theoretically possible to have cost functions that would make it go up—though only under rather extreme conditions.

For instance, suppose we assume that c(x) = ax^b, where a and b are constants. This seems like a pretty general form, doesn’t it? To maintain the assumption that c(x) is increasing and convex, I need a > 0 and b > 1. (If 0 < b < 1, you get a function that’s increasing but concave. If b=1, you get a linear function and some weird corner solutions where you either expend no effort at all or all possible effort.)

Then I’m trying to minimize:

r(1-x)H + ax^b

This results in a closed-form solution for x:

x = (rH/ab)^(1/(b-1))

Since b>1, 1/(b-1) > 0.


Thus, the optimal choice of x is increasing in rH and decreasing in ab. That is, reducing the harm H or the overall risk r will make me put in less effort, while reducing the cost of effort (via either a or b) will make me put in more effort. These all make sense.
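As a quick sanity check on that closed form, we can verify numerically—for some arbitrary illustrative parameter values, not calibrated to anything—that it does minimize the total expected cost:

```python
# Check that x = (r*H/(a*b))**(1/(b-1)) minimizes r*(1-x)*H + a*x**b on [0, 1].
# Parameter values below are purely illustrative.
r, H, a, b = 0.8, 1.0, 2.0, 3.0

def total_cost(x):
    return r * (1 - x) * H + a * x ** b

x_star = (r * H / (a * b)) ** (1 / (b - 1))
# The closed-form optimum should beat every point on a fine grid:
assert all(total_cost(x_star) <= total_cost(i / 1000) for i in range(1001))
```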

Can I ever increase the overall risk by reducing r? Let’s see.


My total risk r(1-x) is therefore:

r(1-x) = r[1-(rH/ab)^(1/(b-1))]

Can making r smaller ever make this larger?

Well, let’s compare it against the case when r=1. We want to see if there’s a case where it’s actually larger.

r[1-(rH/ab)^(1/(b-1))] > [1-(H/ab)^(1/(b-1))]

r – r^(b/(b-1)) (H/ab)^(1/(b-1)) > 1 – (H/ab)^(1/(b-1))

For this to hold with r < 1, the baseline effort level (H/ab)^(1/(b-1)) would have to exceed (b-1)/b—that is, risk compensation can increase total risk only when effort was already eliminating most of the risk to begin with. For any moderate baseline effort level, reducing risk externally reduces total risk even after compensation.
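To make this concrete, here is a numerical sketch (with purely illustrative parameters and a moderate baseline effort level): the optimal x falls as r falls, but the total risk r(1-x) falls as well.

```python
# Total risk r*(1-x) at the optimal effort x = (r*H/(a*b))**(1/(b-1)).
# Illustrative parameters; baseline effort at r=1 is H/(a*b) = 0.25.
H, a, b = 1.0, 2.0, 2.0

def total_risk(r):
    x = (r * H / (a * b)) ** (1 / (b - 1))
    return r * (1 - x)

risks = [total_risk(r) for r in (1.0, 0.8, 0.5, 0.2)]
print([round(R, 3) for R in risks])  # total risk falls as r falls
assert risks == sorted(risks, reverse=True)
```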

Now, to be fair, this isn’t a fully general model. I had to assume some specific functional forms. But I didn’t assume much, did I?

Indeed, there is a fully general argument that externally reduced risk will never harm you. It’s quite simple.

There are three states to consider: In state A, you have your original level of risk and your original level of effort to reduce it. In state B, you have an externally reduced level of risk and your original level of effort. In state C, you have an externally reduced level of risk, and you compensate by reducing your effort.

Which states make you better off?

Well, clearly state B is better than state A: You get reduced risk at no cost to you.

Furthermore, state C must be better than state B: You voluntarily chose to risk-compensate precisely because it made you better off.

Therefore, as long as your preferences are rational, state C is better than state A.

Externally reduced risk will never make you worse off.

QED. That’s it. That’s the whole proof.

But I’m a behavioral economist, am I not? What if people aren’t being rational? Perhaps there’s some behavioral bias that causes people to overcompensate for reduced risks. That’s ultimately an empirical question.

So, what does the empirical data say? Risk compensation is almost never a serious problem in the real world. Measures designed to increase safety, lo and behold, actually increase safety. Removing safety regulations, astonishingly enough, makes people less safe and worse off.

If we ever do find a case where risk compensation is very large, then I guess we can remove that safety measure, or find some way to get people to stop overcompensating. But in the real world this has basically never happened.

It’s still a fair question whether any given safety measure is worth the cost: Implementing regulations can be expensive, after all. And while many people would like to think that “no amount of money is worth a human life”, nobody does—or should, or even can—act like that in the real world. You wouldn’t drive to work or get out of bed in the morning if you honestly believed that.

If it would cost $4 billion to save one expected life, it’s definitely not worth it. Indeed, you should still be able to see that even if you don’t think lives can be compared with other things—because $4 billion could save an awful lot of lives if you spent it more efficiently. (Probably over a million, in fact, as current estimates of the marginal cost to save one life are about $2,300.) Inefficient safety interventions don’t just cost money—they prevent us from doing other, more efficient safety interventions.
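The arithmetic behind that parenthetical, using the $2,300 marginal-cost estimate quoted above:

```python
# Expected lives saved by spending $4 billion at ~$2,300 per life saved.
budget = 4_000_000_000
cost_per_life = 2_300
print(budget // cost_per_life)  # about 1.7 million lives
```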

And as for airbags and wearing masks to prevent COVID? Yes, definitely 100% worth it, as both interventions have already saved tens if not hundreds of thousands of lives.

How can we fix medical residency?

Nov 21 JDN 2459540

Most medical residents work 60 or more hours per week, and nearly 20% work 80 or more hours. 66% of medical residents report sleeping 6 hours or less each night, and 20% report sleeping 5 hours or less.

It’s not as if sleep deprivation is a minor thing: Worldwide, across all jobs, nearly 750,000 deaths annually are attributable to long working hours, most of these due to sleep deprivation.


By some estimates, medical errors account for as many as 250,000 deaths per year in the US alone. Even the most conservative estimates say that at least 25,000 deaths per year in the US are attributable to medical errors. It seems quite likely that long working hours increase the rate of dangerous errors (though it has been difficult to determine precisely how much).

Indeed, the more we study stress and sleep deprivation, the more we learn how incredibly damaging they are to health and well-being. Yet we seem to have set up a system almost intentionally designed to maximize the stress and sleep deprivation of our medical professionals. Some of them simply burn out and leave the profession (about 18% of surgical residents quit); surely an even larger number of people never enter medicine in the first place because they know they would burn out.

Even once a doctor makes it through residency and has learned to cope with absurd hours, this most likely distorts their whole attitude toward stress and sleep deprivation. They are likely to not consider them “real problems”, because they were able to “tough it out”—and they are likely to assume that their patients can do the same. One of the primary functions of a doctor is to reduce pain and suffering, and by putting doctors through unnecessary pain and suffering as part of their training, we are teaching them that pain and suffering aren’t really so bad and you should just grin and bear it.

We are also systematically selecting against doctors who have disabilities that would make it difficult to work these double-time hours—which means that the doctors who are most likely to sympathize with disabled patients are being systematically excluded from the profession.

There have been some attempts to regulate the working hours of residents, but they have generally not been effective. I think this is for three reasons:

1. They weren’t actually trying hard enough. A cap of 80 hours per week is still 40 hours too high; it looks more like an attempt at better PR than a fix for the actual problem.

2. Their enforcement mechanisms left too much opportunity to cheat the system, and in fact most medical residents were simply pressured to continue over-working and to under-report their hours.

3. They don’t seem to have considered how to effect the transition in a way that wouldn’t reduce the total number of resident-hours, and so residents got less training and hospitals were left with less staffing.

The solution to problem 1 is obvious: The cap needs to be lower. Much lower.

The solution to problem 2 is trickier: What sort of enforcement mechanism would prevent hospitals from gaming the system?

I believe the answer is very steep overtime pay requirements, coupled with regular and intensive auditing. Every hour a medical resident goes over their cap, they should have to be paid triple time. Audits should be performed frequently, randomly and without notice. And if a hospital is caught falsifying their records, they should be required to pay all missing hours to all medical residents at quintuple time. And Medicare and Medicaid should not be allowed to reimburse these additional payments—they must come directly out of the hospital’s budget.

Under the current system, the “punishment” is usually a threat of losing accreditation, which is too extreme and too harmful to the residents. Precisely because this is such a drastic measure, it almost never happens. The punishment needs to be small enough that we will actually enforce it; and it needs to hurt the hospital, not the residents—overtime pay would do precisely that.

That brings me to problem 3: How can we ensure that we don’t reduce the total number of resident-hours?

This is important for two reasons: Each resident needs a certain number of hours of training to become a skilled doctor, and residents provide a significant proportion of hospital services. Of the roughly 1 million doctors in the US, about 140,000 are medical residents.

The answer is threefold:

1. Increase the number of residency slots (we have a global doctor shortage anyway).

2. Extend the duration of residency so that each resident gets the same number of total work hours.

3. Gradually phase in so that neither increase needs to be too fast.

Currently a typical residency is about 4 years. 4 years of 80-hour weeks is equivalent to 8 years of 40-hour weeks. The goal is for each resident to get 320 “hour-years” of training (weekly hours multiplied by years of residency).

With 140,000 current residents averaging 4 years, a typical cohort is about 35,000. So the goal is to each year have at least (35,000 residents per cohort)(4 cohorts)(80 hours per week) = 11.2 million resident-hours per week.
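That target is just a product of the round numbers above:

```python
# Baseline resident-hours per week, from the approximate figures in the text.
residents_per_cohort = 35_000
cohorts_in_training = 4
hours_per_week = 80
total = residents_per_cohort * cohorts_in_training * hours_per_week
print(f"{total:,} resident-hours per week")  # 11,200,000
```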

In cohort 1, we reduce the cap to 70 hours, and increase the number of accepted residents to 40,000. Residents in cohort 1 will continue their residency for 4 years, 7 months. This gives each one 321 hour-years of training.

In cohort 2, we reduce the cap to 60 hours, and increase the number of accepted residents to 46,000.

Residents in cohort 2 will continue their residency for 5 years, 4 months. This gives each one 320 hour-years of training.

In cohort 3, we reduce the cap to 55 hours, and increase the number of accepted residents to 50,000.

Residents in cohort 3 will continue their residency for 6 years. This gives each one 330 hour-years of training.

In cohort 4, we reduce the cap to 50 hours, and increase the number of accepted residents to 56,000. Residents in cohort 4 will continue their residency for 6 years, 6 months. This gives each one 325 hour-years of training.

In cohort 5, we reduce the cap to 45 hours, and increase the number of accepted residents to 60,000. Residents in cohort 5 will continue their residency for 7 years, 2 months. This gives each one 322 hour-years of training.

In cohort 6, we reduce the cap to 40 hours, and increase the number of accepted residents to 65,000. Residents in cohort 6 will continue their residency for 8 years. This gives each one 320 hour-years of training.

In cohort 7, we keep the cap at 40 hours, and increase the number of accepted residents to 70,000. This is now the new standard, with 8-year residencies and 40-hour weeks.
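We can verify that every cohort in this schedule lands close to the 320 hour-year target (weekly-hour cap times residency length, with the lengths in months as given above):

```python
# Hour-years of training per cohort: weekly-hour cap * residency length (years).
schedule = [
    (1, 70, 4 + 7 / 12),  # (cohort, weekly cap, years of residency)
    (2, 60, 5 + 4 / 12),
    (3, 55, 6),
    (4, 50, 6 + 6 / 12),
    (5, 45, 7 + 2 / 12),
    (6, 40, 8),
]
for cohort, cap, years in schedule:
    hour_years = round(cap * years, 1)
    print(f"cohort {cohort}: {hour_years} hour-years")
    assert 320 <= hour_years <= 330  # all within ~3% of the 320 target
```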

I’ve made a graph here of what this does to the available number of resident-hours each year. There is a brief 5% dip in year 4, but by the time we reach year 14 we’ve actually doubled the total number of available resident-hours at any given time—without increasing the total amount of work each resident does, simply keeping them longer and working them less intensively each year. Given that quality of work is reduced by working longer hours, it’s likely that even this brief reduction in hours would not result in any reduced quality of care for patients.

[residency_hours.png]

I have thus managed to increase the number of available resident-hours, ensure that each resident gets the same amount of training as before, and still radically reduce the work hours from 80 per week to 40 per week. The additional recruitment each year is never more than 6,000 new residents or 15% of the current number of residents.

It takes several years to effect this transition. This is unavoidable if we are trying to avoid massive increases in recruitment, though if we were prepared to simply double the number of admitted residents each year we could immediately transition to 40-hour work weeks in a single cohort and the available resident-hours would then strictly increase every year.

This plan is likely not the optimal one; I don’t know enough about the details of how costly it would be to admit more residents, and it’s possible that some residents might actually prefer a briefer, more intense residency rather than a longer, less stressful one. (Though it’s worth noting that most people greatly underestimate the harms of stress and sleep deprivation, and doctors don’t seem to be any better in this regard.)

But this plan does prove one thing: There are solutions to this problem. It can be done. If our medical system isn’t solving this problem, it is not because solutions do not exist—it is because they are choosing not to take them.

Does power corrupt?

Nov 7 JDN 2459526

It’s a familiar saying, originally attributed to Lord Acton: “Power tends to corrupt, and absolute power corrupts absolutely. Great men are nearly always bad men.”

I think this saying is not only wrong, but in fact dangerous. We can all observe plenty of corrupt people in power, that much is true. But if it’s simply the power that corrupts them, and they started as good people, then there’s really nothing to be done. We may try to limit the amount of power any one person can have, but in any large, complex society there will be power, and so, if the saying is right, there will also be corruption.

How do I know that this saying is wrong?

First of all, note that corruption varies tremendously, and with very little correlation with most sensible notions of power.

Consider used car salespeople, stockbrokers, drug dealers, and pimps. All of these professions are rather well known for their high level of corruption. Yet are people in these professions powerful? Yes, any manager has some power over their employees; but there’s no particular reason to think that used car dealers have more power over their employees than grocery stores, and yet there’s a very clear sense in which used car dealers are more corrupt.

Even power on a national scale is not inherently tied to corruption. Consider the following individuals: Nelson Mandela, Mahatma Gandhi, Abraham Lincoln, and Franklin Roosevelt.

These men were extremely powerful; each ruled an entire nation. Indeed, during his administration, FDR was probably the most powerful person in the world. And they certainly were not impeccable: Mandela was a good friend of Fidel Castro, Gandhi abused his wife, Lincoln suspended habeas corpus, and of course FDR ordered the internment of Japanese-Americans. Yet overall I think it’s pretty clear that these men were not especially corrupt and had a large positive impact on the world.

Say what you will about Bernie Sanders, Dennis Kucinich, or Alexandria Ocasio-Cortez. Idealistic? Surely. Naive? Perhaps. Unrealistic? Sometimes. Ineffective? Often. But they are just as powerful as anyone else in the US Congress, and ‘corrupt’ is not a word I’d use to describe them. Mitch McConnell, on the other hand….

There does seem to be a positive correlation between a country’s level of corruption and its level of authoritarianism; the most democratic countries—Scandinavia—are also the least corrupt. Yet India is surely more democratic than China, but is widely rated as about the same level of corruption. Greece is not substantially less democratic than Chile, but it has considerably more corruption. So even at a national level, power is not the only determinant of corruption.

I’ll even agree to the second clause: “absolute power corrupts absolutely.” Were I somehow granted an absolute dictatorship over the world, one of my first orders of business would be to establish a new democratic world government to replace my dictatorial rule. (Would it be my first order of business, or would I implement some policy reforms first? Now that’s a tougher question. I think I’d want to implement some kind of income redistribution and anti-discrimination laws before I left office, at least.) And I believe that most good people think similarly: We wouldn’t want to have that kind of power over other people. We wouldn’t trust ourselves to never abuse it. Anyone who maintains absolute power is either already corrupt or likely to become so. And anyone who seeks absolute power is precisely the sort of person who should not be trusted with power at all.

It may also be that power is one determinant of corruption—that a given person will generally end up more corrupt if you give them more power. This might help explain why even the best ‘great men’ are still usually bad men. But clearly there are other determinants that are equally important.

And I would like to offer a different hypothesis to explain the correlation between power and corruption, which has profoundly different implications: The corrupt seek power.

Donald Trump didn’t start out a good man and become corrupt by becoming a billionaire or becoming President. Donald Trump was born a narcissistic idiot.

Josef Stalin wasn’t a good man who became corrupted by the unlimited power of ruling the Soviet Union. Josef Stalin was born a psychopath.

Indeed, when you look closely at how corrupt leaders get into power, it often involves manipulating and exploiting others on a grand scale. They are willing to compromise principles that good people wouldn’t. They aren’t corrupt because they got into power; they got into power because they are corrupt.

Let me be clear: I’m not saying we should compromise all of our principles in order to achieve power. If there is a route by which power corrupts, it is surely that. Rather, I am saying that we must maintain constant vigilance against anyone who seems so eager to attain power that they will compromise principles to do it—for those are precisely the people who are likely to be most dangerous if they should achieve their aims.

Moreover, I’m saying that “power corrupts” is actually a very dangerous message. It tells good people not to seek power, because they would be corrupted by it. But in fact what we actually need in order to get good people in power is more good people seeking power, more opportunities to out-compete the corrupt. If Congress were composed entirely of people like Alexandria Ocasio-Cortez, then the left-wing agenda would no longer seem naive and unrealistic; it would simply be what gets done. (Who knows? Maybe it wouldn’t work out so well after all. But it definitely would get done.) Yet how many idealistic left-wing people have heard that phrase ‘power corrupts’ too many times, and decided they didn’t want to risk running for office?

Indeed, the notion that corruption is inherent to the exercise of power may well be the greatest tool we have ever given to those who are corrupt and seeking to hold onto power.

Are unions collusion?

Oct 31 JDN 2459519

The standard argument from center-right economists against labor unions is that they are a form of collusion: Producers are coordinating and intentionally holding back from what would be in their individual self-interest in order to gain a collective advantage. And this is basically true: In the broadest sense of the term, labor unions are a form of collusion. Since collusion is generally regarded as bad, therefore (this argument goes), unions are bad.

What this argument misses out on is why collusion is generally regarded as bad. The typical case for collusion is between large corporations, each of which already controls a large share of the market—collusion then allows them to act as if they control an even larger share, potentially even acting as a monopoly.

Labor unions are not like this. Literally no individual laborer controls a large segment of the market. (Some very specialized laborers, like professional athletes, or, say, economists, might control a not completely trivial segment of their particular job market—but we’re still talking something like 1% at most. Even Tiger Woods or Paul Krugman is not literally irreplaceable.) Moreover, even the largest unions can rarely achieve anything like a monopoly over a particular labor market.

Thus whereas typical collusion involves going from a large market share to an even larger—often even dominant—market share, labor unions involve going from a tiny market share to a moderate—and usually not dominant—market share.

But that, by itself, wouldn’t be enough to justify unions. While small family businesses banding together in collusion is surely less harmful than large corporations doing the same, it would probably still be a bad thing, insofar as it would raise prices and reduce the quantity or quality of products sold. It would just be less bad.

Yet unions differ from even this milder collusion in another important respect: They do not exist to increase bargaining power versus consumers. They exist to increase bargaining power versus corporations.

And corporations, it turns out, already have a great deal of bargaining power. While a labor union acts as something like a monopoly (or at least oligopoly), corporations act like the opposite: oligopsony or even monopsony.

While monopoly or monopsony on its own is highly unfair and inefficient, the combination of the two—bilateral monopoly—is actually relatively fair and efficient. Bilateral monopoly is probably not as good as a truly competitive market, but it is definitely better than either a monopoly or monopsony alone. Whereas a monopoly has too much bargaining power for the seller (resulting in prices that are too high), and a monopsony has too much bargaining power for the buyer (resulting in prices that are too low), a bilateral monopoly has relatively balanced bargaining power, and thus gets an outcome that’s not too much different from fair competition in a free market.

Thus, unions really exist as a correction mechanism for the excessive bargaining power of corporations. Most unions are between workers in large industries who work for a relatively small number of employers, such as miners, truckers, and factory workers. (Teachers are also an interesting example, because they work for the government, which effectively has a monopsony on public education services.) In isolation they may seem inefficient; but in context they really exist to compensate for other, worse inefficiencies.


We could imagine a world where this was not so: Say there is a market with many independent buyers who are unwilling or unable to reliably collude, and they are served by a small number of powerful unions that use their bargaining power to raise prices and reduce output.


We have some markets that already look a bit like that: Consider the licensing systems for doctors and lawyers. These are basically guilds, which are collusive in the same way as labor unions.

Note that unlike, say, miners, truckers, or factory workers, doctors and lawyers are not a large segment of the population; they are bargaining against consumers just as much as corporations; and they are extremely well-paid and very likely undersupplied. (Doctors are definitely undersupplied; with lawyers it’s a bit more complicated, but given how often corporations get away with terrible things and don’t get sued for it, I think it’s fair to say that in the current system, lawyers are undersupplied.) So I think it is fair to be concerned that the guild systems for doctors and lawyers are too powerful. We want some system for certifying the quality of doctors and lawyers, but the existing standards are so demanding that they result in a shortage of much-needed labor.

One way to tell that unions aren’t inefficient is to look at how unionization relates to unemployment. If unions were acting as a harmful monopoly on labor, unemployment should be higher in places with greater unionization rates. The empirical data suggests that if there is any such effect, it’s a small one. There are far more important determinants of unemployment than unionization. (Wages, on the other hand, show a strong positive link with unionization.) Much like the standard prediction that raising minimum wage would reduce employment, the prediction that unions raise unemployment has largely not been borne out by the data. And for much the same reason: We had ignored the bargaining power of employers, which minimum wage and unions both reduce.

Thus, the justifiability of unions isn’t something that we could infer a priori without looking at the actual structure of the labor market. Unions aren’t always or inherently good—but they are usually good in the system as it stands. (Actually there’s one particular class of unions that do not seem to be good, and that’s police unions: But this is a topic for another time.)

My ultimate conclusion? Yes, unions are a form of collusion. But to infer from that fact that they must be bad is to commit a Noncentral Fallacy. Unions are the good kind of collusion.

Where did all that money go?

Sep 26 JDN 2459484

Since 9/11, the US has spent a staggering $14 trillion on the military, averaging $700 billion per year. Some of this was the routine spending necessary to maintain a large standing army (though it is fair to ask whether we really need our standing army to be quite this large).

But a recent study by the Costs of War Project suggests that a disturbing amount of this money has gone to defense contractors: Somewhere between one-third and one-half, or in other words between $5 and $7 trillion.

This is revenue, not profit; presumably these defense contractors also incurred various costs in materials, labor, and logistics. But even as raw revenue that is an enormous amount of money. Apple, one of the largest corporations in the world, takes in on average about $300 billion per year. Over 20 years, that would be $6 trillion—so, our government has basically spent as much on defense contractors as the entire world spent on Apple products.

Of that $5 to $7 trillion, one-fourth to one-third went to just five corporations. That’s over $2 trillion just to Lockheed Martin, Boeing, General Dynamics, Raytheon, and Northrop Grumman. We pay more each year to Lockheed Martin than we do to the State Department and USAID.

Looking at just profit, each of these corporations appears to make a gross profit margin of about 10%. So we’re looking at something like $200 billion over 20 years—$10 billion per year—just handed over to shareholders.
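The arithmetic behind these figures is simple enough to lay out explicitly. A minimal sketch, assuming the rough estimates quoted above (roughly $2 trillion in revenue to the top five contractors over 20 years, at about a 10% gross margin); these are ballpark inputs, not precise accounting data:

```python
# Back-of-the-envelope check of the defense-contractor profit figures.
# Inputs are the rough estimates from the text, not audited numbers.
revenue_20yr = 2e12   # ~$2 trillion to the top five contractors over 20 years
gross_margin = 0.10   # ~10% gross profit margin

profit_20yr = revenue_20yr * gross_margin   # total gross profit over 20 years
profit_per_year = profit_20yr / 20          # annualized

print(f"Profit over 20 years: ${profit_20yr / 1e9:.0f} billion")    # 200 billion
print(f"Profit per year:      ${profit_per_year / 1e9:.0f} billion")  # 10 billion
```

Which reproduces the $200 billion over 20 years—$10 billion per year—cited above.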

And what were we buying with this money? Mostly overengineered high-tech military equipment that does little or nothing to actually protect soldiers, win battles, or promote national security. (It certainly didn’t do much to stop the Taliban from retaking control as soon as we left Afghanistan!)

Eisenhower tried to warn us about the military-industrial complex, but we didn’t listen.

Even when the equipment they sell us actually does its job, it still raises some serious questions about whether these are things we ought to be privatizing. As I mentioned in a post on private prisons several years ago, there are really three types of privatization of government functions.

Type 1 is innocuous: There are certain products and services that privatized businesses already provide in the open market and the government also has use for. There’s no reason the government should hesitate to buy wrenches or toothbrushes or hire cleaners or roofers.

Type 3 is the worst: There have been attempts to privatize fundamental government services, such as prisons, police, and the military. This is inherently unjust and undemocratic and must never be allowed. The use of force must never be for profit.

But defense contractors lie in the middle area, type 2: contracting services to specific companies that involve government-specific features such as military weapons. It’s true, there’s not that much difference functionally between a civilian airliner and a bomber plane, so it makes at least some sense that Boeing would be best qualified to produce both. This is not an obviously nonsensical idea. But there are still some very important differences, and I am deeply uneasy with the very concept of private corporations manufacturing weapons.


It’s true, there are some weapons that private companies make for civilians, such as knives and handguns. I think it would be difficult to maintain a free society while banning all such production, and it is literally impossible to ban anything that could potentially be used as a weapon (Wrenches? Kitchen knives? Tree branches!?). But we strictly regulate such production for very good reasons—and we probably don’t go far enough, really.

Moreover, there’s a pretty clear difference in magnitude if not in kind between a corporation making knives or even handguns and a corporation making cruise missiles—let alone nuclear missiles. Even if there is a legitimate overlap in skills and technology between making military weapons and whatever other products a corporation might make for the private market, it might still ultimately be better to nationalize the production of military weapons.

And then there are corporations that essentially do nothing but make military weapons—and we’re back to Lockheed Martin again. Boeing does in fact make most of the world’s civilian airliners, in addition to making some military aircraft and missiles. But Lockheed Martin? They pretty much just make fighters and bombers. This isn’t a company with generalized aerospace manufacturing skills that we are calling upon to make fighters in a time of war. This is an entire private, for-profit corporation that exists for the sole purpose of making fighter planes.

I really can’t see much reason not to simply nationalize Lockheed Martin. They should be a division of the US Air Force or something.

I guess, in theory, competition between different military contractors could potentially keep costs down… but, uh, how’s that working out for you? The acquisition costs of the F-35 are expected to run over $400 billion—the cost of the whole program a whopping $1.5 trillion. That doesn’t exactly sound like we’ve been holding costs down through competition.

And there really is something deeply unseemly about the idea of making profits through war. There’s a reason we have that word “profiteering”. Yes, manufacturing weapons has costs, and you should of course pay your workers and material suppliers at fair rates. But do we really want corporations to be making billions of dollars in profits for making machines of death?

But if nationalizing defense contractors or making them into nonprofit institutions seems too radical, I think there’s one very basic law we ought to make: No corporation with government contracts may engage in any form of lobbying. That’s such an obvious conflict of interest, such a clear opening for regulatory capture, that there’s really no excuse for it. If there must be shareholders profiting from war, at the very least they should have absolutely no say in whether we go to war or not.

And yet, we do allow defense contractors to spend on lobbying—and spend they do, tens of millions of dollars every year. Does all this lobbying affect our military budget or our willingness to go to war?

They must think so.

Hypocrisy is underrated

Sep 12 JDN 2459470

Hypocrisy isn’t a good thing, but it isn’t nearly as bad as most people seem to think. Often accusing someone of hypocrisy is taken as a knock-down argument for everything they are saying, and this is just utterly wrong. Someone can be a hypocrite and still be mostly right.

Often people are accused of hypocrisy when they are not being hypocritical; for instance the right wing seems to think that “They want higher taxes on the rich, but they are rich!” is hypocrisy, when in fact it’s simply altruism. (If they had wanted the rich guillotined, that would be hypocrisy. Maybe the problem is that the right wing can’t tell the difference?) Even worse, “They live under capitalism but they want to overthrow capitalism!” is not even close to hypocrisy—indeed, how would someone overthrow a system they weren’t living under? (There are many things wrong with Marxists, but that is not one of them.)

But in fact I intend something stronger: Hypocrisy itself just isn’t that bad.


There are currently two classes of Republican politicians with regard to the COVID vaccines: Those who are consistent in their principles and don’t get the vaccines, and those who are hypocrites and get the vaccines while telling their constituents not to. Of the two, who is better? The hypocrites. At least they are doing the right thing even as they say things that are very, very wrong.

There are really four cases to consider. The principles you believe in could be right, or they could be wrong. And you could follow those principles, or you could be a hypocrite. These two factors are independent.

If your principles are right and you are consistent, that’s the best case; if your principles are right and you are a hypocrite, that’s worse.

But if your principles are wrong and you are consistent, that’s the worst case; if your principles are wrong and you are a hypocrite, that’s better.

In fact I think for most things the ordering goes like this: Consistent Right > Hypocritical Wrong > Hypocritical Right > Consistent Wrong. Your behavior counts for more than your principles—so if you’re going to be a hypocrite, it’s better for your good actions to not match your bad principles.

Obviously if we could get people to believe good moral principles and then follow them, that would be best. And we should in fact be working to achieve that.

But if you know that someone’s moral principles are wrong, it doesn’t accomplish anything to accuse them of being a hypocrite. If the accusation is true, their hypocrisy is actually a good thing.

Here’s a pretty clear example for you: Anyone who says that the Bible is infallible but doesn’t want gay people stoned to death is a hypocrite. The Bible is quite clear on this matter; Leviticus 20:13 really doesn’t leave much room for interpretation. By this standard, most Christians are hypocrites—and thank goodness for that. I owe my life to the hypocrisy of millions.

Of course if I could convince them that the Bible isn’t infallible—perhaps by pointing out all the things it says that contradict their most deeply-held moral and factual beliefs—that would be even better. But the last thing I want to do is make their behavior more consistent with their belief that the Bible is infallible; that would turn them into fanatical monsters. The Spanish Inquisition was very consistent in behaving according to the belief that the Bible is infallible.

Here’s another example: Anyone who thinks that cruelty to cats and dogs is wrong but is willing to buy factory-farmed beef and ham is a hypocrite. Any principle that would tell you that it’s wrong to kick a dog or cat would tell you that the way cows and pigs are treated in CAFOs is utterly unconscionable. But if you are really unwilling to give up eating meat and you can’t find or afford free-range beef, it still would be bad for you to start kicking dogs in a display of your moral consistency.

And one more example for good measure: The leaders of any country who resist human rights violations abroad but tolerate them at home are hypocrites. Obviously the best thing to do would be to fight human rights violations everywhere. But perhaps for whatever reason you are unwilling or unable to do this—one disturbing truth is that many human rights violations at home (such as draconian border policies) are often popular with your local constituents. Human-rights violations abroad are also often more severe—detaining children at the border is one thing, a full-scale genocide is quite another. So, for good reasons or bad, you may decide to focus your efforts on resisting human rights violations abroad rather than at home; this would make you a hypocrite. But it would still make you much better than a more consistent leader who simply ignores all human rights violations wherever they may occur.

In fact, there are cases in which it may be optimal for you to knowingly be a hypocrite. If you have two sets of competing moral beliefs, and you don’t know which is true but you know that as a whole they are inconsistent, your best option is to apply each set of beliefs in the domain for which you are most confident that it is correct, while searching for more information that might allow you to correct your beliefs and reconcile the inconsistency. If you are self-aware about this, you will know that you are behaving in a hypocritical way—but you will still behave better than you would if you picked the wrong beliefs and stuck to them dogmatically. In fact, given a reasonable level of risk aversion, you’ll be better off being a hypocrite than you would by picking one set of beliefs arbitrarily (say, at the flip of a coin). At least then you avoid the worst-case scenario of being the most wrong.

There is yet another factor to take into consideration. Sometimes following your own principles is hard.

Considerable ink has been spilled on the concept of akrasia, or “weakness of will”, in which we judge that A is better than B yet still find ourselves doing B. Philosophers continue to debate to this day whether this really happens. As a behavioral economist, I observe it routinely, perhaps even daily. In fact, I observe it in myself.

I think the philosophers’ mistake is to presume that there is one simple, well-defined “you” that makes all observations and judgments and takes actions. Our brains are much more complicated than that. There are many “you”s inside your brain, each with its own capacities, desires, and judgments. Yes, there is some important sense in which they are all somehow unified into a single consciousness—by a mechanism which still eludes our understanding. But it doesn’t take esoteric cognitive science to see that there are many minds inside you: Haven’t you ever felt an urge to do something you knew you shouldn’t do? Haven’t you ever succumbed to such an urge—drank the drink, eaten the dessert, bought the shoes, slept with the stranger—when it seemed so enticing but you knew it wasn’t really the right choice?

We even speak of being “of two minds” when we are ambivalent about something, and I think there is literal truth in this. The neural networks in your brain are forming coalitions, and arguing between them over which course of action you ought to take. Eventually one coalition will prevail, and your action will be taken; but afterward your reflective mind need not always agree that the coalition which won the vote was the one that deserved to.

The evolutionary reason for this is simple: We’re a kludge. We weren’t designed from the top down for optimal efficiency. We were the product of hundreds of millions of years of subtle tinkering, adding a bit here, removing a bit there, layering the mammalian, reflective cerebral cortex over the reptilian, emotional limbic system over the ancient, involuntary autonomic system. Combine this with the fact that we are built in pairs, with left and right halves of each kind of brain (and yes, they are independently functional when their connection is severed), and the wonder is that we ever agree with our own decisions.

Thus, there is a kind of hypocrisy that is not a moral indictment at all: You may genuinely and honestly agree that it is morally better to do something and still not be able to bring yourself to do it. You may know full well that it would be better to donate that money to malaria treatment rather than buy yourself that tub of ice cream—you may be on a diet and full well know that the ice cream won’t even benefit you in the long run—and still not be able to stop yourself from buying the ice cream.

Sometimes your feeling of hesitation at an altruistic act may be a useful insight; I certainly don’t think we should feel obliged to give all our income, or even all of our discretionary income, to high-impact charities. (For most people I encourage 5%. I personally try to aim for 10%. If all the middle-class and above in the First World gave even 1% we could definitely end world hunger.) But other times it may lead you astray, make you unable to resist the temptation of a delicious treat or a shiny new toy when even you know the world would be better off if you did otherwise.

Yet when following our own principles is so difficult, it’s not really much of a criticism to point out that someone has failed to do so, particularly when they themselves already recognize that they failed. The inconsistency between behavior and belief indicates that something is wrong, but it may not be any dishonesty or even anything wrong with their beliefs.

I wouldn’t go so far as to say you should stop ever calling out hypocrisy. Sometimes it is clearly useful to do so. But while hypocrisy is often the sign of a moral failing, it isn’t always—and even when it is, often as not the problem is the bad principles, not the behavior inconsistent with them.

Unending nightmares

Sep 19 JDN 2459477

We are living in a time of unending nightmares.

As I write this, we have just passed the 20th anniversary of 9/11. Yet only in the past month were US troops finally withdrawn from Afghanistan—and that withdrawal was immediately followed by a total collapse of the Afghan government and a reinstatement of the Taliban. The United States had been at war for nearly 20 years, spending trillions of dollars and causing thousands of deaths—and seems to have accomplished precisely nothing.

Some left-wing circles have been saying that the Taliban offered surrender all the way back in 2001; this is not accurate. (Alternet even refers to it as an “unconditional surrender”, which is utter nonsense. No one in their right mind—not even the most die-hard imperialist—would ever refuse an unconditional surrender, and the US most certainly did nothing of the sort.)

The Taliban did offer a peace deal in 2001, which would have involved giving the US control of Kandahar and turning Osama bin Laden over to a neutral country (not to the US or any US ally). It would also have granted amnesty to a number of high-level Taliban leaders, which was a major sticking point for the US. In hindsight, should they have taken the deal? Obviously. But I don’t think that was nearly so clear at the time—nor would it have been particularly palatable to most of the American public to leave Osama bin Laden under house arrest in some neutral country (which they never specified by the way; somewhere without US extradition, presumably?) and grant amnesty to the top leaders of the Taliban.

Thus, even after the 20-year nightmare of the war that refused to end, we are still back to the nightmare we were in before—Afghanistan ruled by fanatics who will oppress millions.

Yet somehow this isn’t even the worst unending nightmare, for after a year and a half we are still in the throes of a global pandemic which has now caused over 4.6 million deaths. We are still wearing masks wherever we go—at least, those of us who are complying with the rules. We have gotten vaccinated already, but likely will need booster shots—at least, those of us who believe in vaccines.

The most disturbing part of it all is how many people still aren’t willing to follow the most basic demands of public health agencies.

In case you thought this was just an American phenomenon: Just a few days ago I looked out the window of my apartment to see a protest in front of the Scottish Parliament complaining about vaccine and mask mandates, with signs declaring it all a hoax. (Yes, my current temporary apartment overlooks the Scottish Parliament.)

Some of those signs displayed a perplexing innumeracy. One sign claimed that the vaccines must be stopped because they had killed 1,400 people in the UK. This is not actually true; while there have been 1,400 people in the UK who died after receiving a vaccine, 48 million people in the UK have gotten the vaccine, and many of them were old and/or sick, so, purely by statistics, we’d expect some of them to die shortly afterward. Less than 100 of these deaths are in any way attributable to the vaccine. But suppose for a moment that we took the figure at face value, and assumed, quite implausibly, that everyone who died shortly after getting the vaccine was in fact killed by the vaccine. This 1,400 figure needs to be compared against the 156,000 UK deaths attributable to COVID itself. Since 7 million people in the UK have tested positive for the virus, this is a fatality rate of over 2%. Even if we suppose that literally everyone in the UK who hasn’t been vaccinated in fact had the virus, that would still only be 20 million (the UK population of 68 million – the 48 million vaccinated) people, so the death rate for COVID itself would still be at least 0.8%—a staggeringly high fatality rate for a pandemic airborne virus. Meanwhile, even on this ridiculous overestimate of the deaths caused by the vaccine, the fatality rate for vaccination would be at most 0.003%. Thus, even by the anti-vaxers’ own claims, the vaccine is nearly 300 times safer than catching the virus. If we use the official estimates of a 1.9% COVID fatality rate and 100 deaths caused by the vaccines, the vaccines are in fact over 9000 times safer.
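The arithmetic in that comparison can be laid out step by step. A quick sketch using only the figures quoted above (rounded inputs, and the same deliberately vaccine-hostile worst-case assumptions described in the text):

```python
# Re-running the anti-vax sign's own numbers, using the figures from the text.
uk_population    = 68e6
vaccinated       = 48e6      # UK residents who received a vaccine
deaths_after_vax = 1_400     # deaths shortly after vaccination, from ANY cause
covid_deaths     = 156_000   # UK deaths attributed to COVID
covid_cases      = 7e6       # UK positive tests

# Worst case for the vaccine: attribute every post-vaccine death to the vaccine.
vax_fatality_worst = deaths_after_vax / vaccinated   # ~0.003%

# Best case for the virus: pretend every unvaccinated person already caught it.
covid_fatality_best = covid_deaths / (uk_population - vaccinated)   # ~0.8%

# Straightforward case-fatality rate from the official case count.
covid_fatality_official = covid_deaths / covid_cases   # ~2.2%

print(f"Vaccine fatality (worst case): {vax_fatality_worst:.4%}")
print(f"COVID fatality (best case):    {covid_fatality_best:.2%}")
print(f"Safety ratio: {covid_fatality_best / vax_fatality_worst:.0f}x")  # ~267x

# Official-estimate version: ~1.9% COVID fatality vs. ~100 vaccine-caused deaths.
ratio_official = 0.019 / (100 / vaccinated)
print(f"Official-estimate ratio: {ratio_official:.0f}x")  # 9120x
```

Even granting the protesters every assumption, the vaccine comes out nearly 300 times safer than the virus; on the official estimates, over 9,000 times safer.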

Yet it does seem to be worse in the United States: While 22% of Americans described themselves as opposed to vaccination in general, only about 2% of Britons said the same.

But this did not translate to such a large difference in actual vaccination: While 70% of people in the UK have received the vaccine, 64% of people in the US have. Both of these figures are tantalizingly close to, yet clearly below, the at least 84% necessary to achieve herd immunity. (Actually some early estimates thought 60-70% might be enough—but epidemiologists no longer believe this, and some think that even 90% wouldn’t be enough.)
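Where does a figure like 84% come from? The textbook SIR-model result is that the herd immunity threshold is HIT = 1 − 1/R0, where R0 is the basic reproduction number. A small sketch under that assumption (the implied R0 here is my own back-calculation from the 84% figure, not a number from any cited study):

```python
# Standard SIR herd-immunity threshold: HIT = 1 - 1/R0.
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread."""
    return 1 - 1 / r0

# An 84% threshold corresponds to R0 = 1/(1 - 0.84) = 6.25:
implied_r0 = 1 / (1 - 0.84)
print(f"Implied R0 for an 84% threshold: {implied_r0:.2f}")  # 6.25

# The early 60-70% estimates and the pessimistic 90% figure correspond to
# lower and higher assumed values of R0:
for r0 in (2.5, 3.3, 6.25, 10.0):
    print(f"R0 = {r0:5.2f} -> HIT = {herd_immunity_threshold(r0):.0%}")
```

So the shifting threshold estimates reflect upward revisions of R0 as more infectious strains emerged: an R0 of 2.5–3.3 gives the early 60–70% figures, while an R0 around 10 pushes the threshold to 90%.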

Indeed, the predominant tone I get from trying to keep up on the current news in epidemiology is fatalism: It’s too late, we’ve already failed to contain the virus, we won’t reach herd immunity, we won’t ever eradicate it. At this point they now all seem to think that COVID is going to become the new influenza, always with us, a major cause of death that somehow recedes into the background and seems normal to us—but COVID, unlike influenza, may stick around all year long. The one glimmer of hope is that influenza itself was severely hampered by the anti-pandemic procedures, and influenza cases and deaths are indeed down in both the US and UK (though not zero, nor as drastically reduced as many have reported).

The contrast between terrorism and pandemics is a sobering one, as pandemics kill far more people, yet somehow don’t provoke anywhere near as committed a response.

9/11 was a massive outlier in terrorism, at 3,000 deaths on a single day; otherwise the average annual death rate by terrorism is about 20,000 worldwide, mostly committed by Islamist groups. Yet the threat is not actually to Americans in particular; annual deaths due to terrorism in the US are less than 100—and most of these by right-wing domestic terrorists, not international Islamists.

Meanwhile, in an ordinary year, influenza would kill 50,000 Americans and somewhere between 300,000 and 700,000 people worldwide. COVID in the past year and a half has killed over 650,000 Americans and 4.6 million people worldwide—annualize that and it would be 400,000 per year in the US and 3 million per year worldwide.

Yet in response to terrorism we as a country were prepared to spend $2.3 trillion dollars, lose nearly 4,000 US and allied troops, and kill nearly 50,000 civilians—not even counting the over 60,000 enemy soldiers killed. It’s not even clear that this accomplished anything as far as reducing terrorism—by some estimates it actually made it worse.

Were we prepared to respond so aggressively to pandemics? Certainly not to influenza; we somehow treat all those deaths as normal or inevitable. In response to COVID we did spend a great deal of money, even more than the wars in fact—a total of nearly $6 trillion. This was a very pleasant surprise to me (it’s the first time in my lifetime I’ve witnessed a serious, not watered-down Keynesian fiscal stimulus in the United States). And we imposed lockdowns—but these were all too quickly removed, despite the pleading of public health officials. It seems that our governments tried to impose an aggressive response, but then too many of the citizens pushed back against it, unwilling to give up their “freedom” (read: convenience) in the name of public safety.

For the wars, all most of us had to do was pay some taxes and sit back and watch; but for the pandemic we were actually expected to stay home, wear masks, and get shots? Forget it.

Politics was clearly a very big factor here: In the US, the COVID death rate map and the 2020 election map look almost identical: By and large, people who voted for Biden have been wearing masks and getting vaccinated, while people who voted for Trump have not.

But pandemic response is precisely the sort of thing you can’t do halfway. If one area is containing a virus and another isn’t, the virus will still remain uncontained. (As some have remarked, it’s rather like having a “peeing section” of a swimming pool. Much worse, actually, as urine contains relatively few bacteria—but not zero—and is quickly diluted by the huge quantities of water in a swimming pool.)

Indeed, that seems to be what has happened, and why we can’t seem to return to normal life despite months of isolation. Since enough people are refusing to make any effort to contain the virus, the virus remains uncontained, and the only way to protect ourselves from it is to continue keeping restrictions in place indefinitely.

Had we simply kept the original lockdowns in place awhile longer and then made sure everyone got the vaccine—preferably by paying them for doing it, rather than punishing them for not—we might have been able to actually contain the virus and then bring things back to normal.

But as it is, this is what I think is going to happen: At some point, we’re just going to give up. We’ll see that the virus isn’t getting any more contained than it ever was, and we’ll be so tired of living in isolation that we’ll finally just give up on doing it anymore and take our chances. Some of us will continue to get our annual vaccines, but some won’t. Some of us will continue to wear masks, but most won’t. The virus will become a part of our lives, just as influenza did, and we’ll convince ourselves that millions of deaths is no big deal.

And then the nightmare will truly never end.

Realistic open borders

Sep 5 JDN 2459463

In an earlier post I lamented the tight restrictions on border crossings that prevail even between allied First World countries. (On a personal note, you’ll be happy to know that our visas have cleared and we are now moved into Edinburgh, cat and all, though we are still in temporary housing and our official biometric residence permits haven’t yet arrived.)

In this post I’d like to speculate on how we might get from our current regime to something more like open borders.

Obviously we can’t simply remove all border restrictions immediately. That would be a political non-starter, and even ethically or economically it wouldn’t make very much sense. There are sensible reasons behind some of our border regulations—just not most of them.

Instead we would want to remove a few restrictions at a time, starting with the most onerous or ridiculous ones.

High on my list in the UK in particular would be the requirement that pets must fly as cargo. I literally can’t think of a good reason for this; it seems practically designed to cost travelers more money and traumatize as many pets as possible. If it’s intended to support airlines somehow, please simply subsidize airlines. (But really, why are you doing that? You should be taxing airlines because of their high carbon emissions. Subsidize boats and trains.) If it’s intended to somehow prevent the spread of rabies, it’s obviously unnecessary, since every pet moved to the UK already has to document a recent rabies vaccine. But this particular rule seems to be a quirk of the UK in particular, hence not very generalizable.

But here’s one that actually seems quite common: Financial requirements for visas. Even tourist visas in most countries cost money, in amounts that seem to vary according to some sort of occult ritual. I can see no sensible economic reason why a visa would be $130 in Vietnam but only $20 in neighboring Cambodia, or why Kazakhstan can be visited for $25 but Azerbaijan costs $100, or why Myanmar costs only $30 but Bhutan will run you over $200.

Work visas are considerably more demanding still.

Financial requirements in the UK are especially onerous; you have to make above a certain salary and have a certain amount of savings in the bank, based on your family size. This was no problem for me personally; but then, it damn well shouldn't have been, since I have a PhD in economics. My salary is now twice what it was as a grad student, and honestly that's a good deal less than I was hoping for (and would have gotten on the tenure track at an R1 university).

All the countries in the Schengen Area have their own requirements for “financial subsistence” for visa applications, ranging from a trivial €3 in Hungary (not per day, just total; why do they even bother?) or a manageable €14 per day in Latvia, through the more demanding amounts of €45 per day in Germany and Italy, to €92 per day in Switzerland and Liechtenstein, all the way up to the utterly unreasonable €120 per day in France. That would be €43,800 per year, or $51,700. Apparently you must be at least middle class to enter France.
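The France figure above is easy to check yourself. A minimal sketch of the arithmetic, assuming the roughly 1.18 USD/EUR exchange rate that prevailed in 2021 (the rate is an assumption; the per-day amount comes from the post):

```python
# Annualizing France's assumed €120/day "financial subsistence" requirement.
EUR_PER_DAY = 120
USD_PER_EUR = 1.18  # assumed approximate 2021 exchange rate

eur_per_year = EUR_PER_DAY * 365          # 120 * 365 = 43,800
usd_per_year = eur_per_year * USD_PER_EUR # about 51,684

print(eur_per_year)               # 43800
print(round(usd_per_year, -2))    # 51700.0 (rounded to the nearest hundred)
```

At a different exchange rate the dollar figure shifts accordingly, but the euro total is fixed by the per-day requirement.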

Canada has a similar requirement known as “proof of funds”, but it’s considerably more reasonable, since you can substitute proof of employment and there are no wage minimums for such employment. Even if you don’t already have a job, you can still apply, and the minimum requirement is actually lower than the poverty line in Canada.

The United States doesn’t impose financial requirements for most visas, but it does have a $160 visa fee. And the H1-B visa in particular (the nearest equivalent to the Skilled Worker visa I’ve got in the UK) requires that your wage or salary be at least the “prevailing wage” in your industry—meaning it is nearly impossible for a company to save money by hiring people on H1-B visas and hence they have very little incentive to hire H1-B workers. If you are of above-average talent and being paid only average wages, I guess they can save some money that way. But this is not how trade is supposed to work—nobody requires that you pay US prices for goods shipped from China, and if they did, nobody would ever buy anything from China. This is blatant, naked protectionism—but we’re apparently okay with it as long as it’s trade in labor instead of goods.

I wasn’t able to quickly find whether there are similar financial requirements in other countries. Perhaps there aren’t; these are the countries most people actually want to move to anyway. Permanent migration is overwhelmingly toward OECD (read: First World) countries, and is actually helping us sustain our populations in the face of low birth rates.

I must admit, I can see some fiscal benefits for a country not allowing poor people in, but this practice raises some very deep ethical problems: What right do we have to do this?

If someone is born poor in Laredo, Texas, we take responsibility for them as a US citizen. Maybe we don’t treat them particularly well (that is Texas, after all), but we do give them access to certain basic services, such as emergency services, Medicaid, TANF and SNAP. They are allowed to vote, own property, and even hold office in the United States. But if that same person were born in Nuevo Laredo, Tamaulipas—literally less than a mile away, right across the river—they would receive none of these benefits. They would not even be allowed to cross the river without a passport and a visa.

In some ways the contrast is even more dire if we consider a more liberal US state. A poor person born in Chula Vista, California has access to the full array of California services; Medi-Cal is honestly something close to a single-payer healthcare system, though the full morass of privatized US healthcare is layered on top of it. Then there are CalWORKS, CalFresh, and so on. But the same person born in Tijuana, Baja California would get none of these benefits.

They could be the same person. They could look the same and have essentially the same culture—even the same language, given how many Californians speak Spanish and how many Mexicans speak English. But if they were born on the other side of a river (in Texas) or even an arbitrary line (in California), we treat them completely differently. And then to add insult to injury, we won’t even let them across—not in spite of, but because of, how poor and desperate they are. If they were rich and educated, we’d let them come across—but then why would they need to?

“Give me your tired, your poor, your huddled masses yearning to breathe free”?

Some restrictions may apply.

Economists talk often of “trade barriers”, but in real terms we have basically removed all trade barriers in goods. Yes, there are still some small tariffs, and the occasional quota here and there—and these should go away too, especially the quotas, because they don’t even raise revenue—but in general we have an extremely globalized economy in terms of goods. The same complex product, like a car or a smartphone, is often made of parts from a dozen countries.

But when it comes to labor, we are still living in a protectionist world. Crossing borders to work is difficult, time-consuming, and above all, expensive. This dramatically reduces opportunities for workers to move where their labor is most valued—which hurts not only them, but also anyone who would employ them or buy products made by them. The poorest people are those who stand to gain the most from crossing borders, and they are precisely the ones that we work hardest to forbid.

So let’s start with that, shall we? We can keep all this nonsense about passports, visas, background checks, and customs inspections. It’s probably all unnecessary and wasteful and unfair, but politically it’s clearly too popular to remove. Let’s just remove this: No more financial requirements or fees for work visas. If you want to come to another country to work, you have to go through an application and all that; fine. But you shouldn’t have to prove you aren’t poor. Poor people have just as much right to live here as anybody else—and if we let them do so, they’d be a lot less poor.