Low-skill jobs

Dec 5 JDN 2459554

I’ve seen this claim going around social media for a while now: “Low-skill jobs are a classist myth created to justify poverty wages.”

I can understand why people would say things like this. I even appreciate that many low-skill jobs are underpaid and unfairly stigmatized. But it’s going a bit too far to claim that there is no such thing as a low-skill job.

Suppose all the world’s physicists and all the world’s truckers suddenly had to trade jobs for a month. Who would have a harder time?

If a mathematician were asked to do the work of a janitor, they’d be annoyed. If a janitor were asked to do the work of a mathematician, they’d be utterly at a loss.

I could keep going: Compare robotics engineers to dockworkers or software developers to fruit pickers.

Higher pay does not automatically equate to higher skills: welders are clearly more skilled than stock traders. Give any welder a million-dollar account and a few days of training, and they could do just as well as the average stock trader (which is to say, worse than the S&P 500). Give any stock trader welding equipment and a similar amount of training, and they’d be lucky to not burn their fingers off, much less actually usefully weld anything.

This is not to say that any random person off the street could do the work of a janitor or dockworker just as well as someone who has years of experience at that job. It is simply to say that they could do better—and pick up the necessary skills faster—than a random person trying to work as a physicist or software developer.

Moreover, this does justify some difference in pay. If some jobs are easier than others, in the sense that more people are qualified to do them, then the harder jobs will need to pay more in order to attract good talent—if they didn’t, they’d risk their high-skill workers going and working at the low-skill jobs instead.

This is of course assuming all else equal, which is clearly not the case. No two jobs are the same, and there are plenty of other considerations that go into choosing someone’s wage: For one, not simply what skills are required, but also the effort and unpleasantness involved in doing the work. I’m entirely prepared to believe that being a dockworker is less fun than being a physicist, and this should reduce the differential in pay between them. Indeed, it may have: Dockworkers are paid relatively well as far as low-skill jobs go—though nowhere near what physicists are paid. Then again, productivity is also a vital consideration, and there is a general tendency that high-skill jobs tend to be objectively more productive: A handful of robotics engineers can do what was once the work of hundreds of factory laborers.

There are also ways for a worker to be profitable without being particularly productive—that is, to be very good at rent-seeking. This is arguably the case for lawyers and real estate agents, and undeniably the case for derivatives traders and stockbrokers. Corporate executives aren’t stupid; they wouldn’t pay these workers astronomical salaries if they weren’t making money doing so. But it’s quite possible to make lots of money without actually producing anything of particular value for human society.

But that doesn’t mean that wages are always fair. Indeed, I dare say they typically are not. One of the most important determinants of wages is bargaining power. Unions don’t increase skill and probably don’t increase productivity—but they certainly increase wages, because they increase bargaining power.

And this is also something that’s correlated with lower levels of skill, because the more people there are who know how to do what you do, the harder it is for you to make yourself irreplaceable. A mathematician who works on the frontiers of conformal geometry or Teichmueller theory may literally be one of ten people in the world who can do what they do (quite frankly, even the number of people who know what they do is considerably constrained, though probably still at least in the millions). A dockworker, even one who is particularly good at loading cargo skillfully and safely, is still competing with millions of other people with similar skills. The easier a worker is to replace, the less bargaining power they have—in much the same way that a monopoly has higher profits than an oligopoly, which has higher profits than a competitive market.

This is why I support unions. I’m also a fan of co-ops, and an ardent supporter of progressive taxation and safety regulations. So don’t get me wrong: Plenty of low-skill workers are mistreated and underpaid, and they deserve better.

But that doesn’t change the fact that it’s a lot easier to be a janitor than a physicist.

Risk compensation is not a serious problem

Nov 28 JDN 2459547

Risk compensation. It’s one of those simple but counter-intuitive ideas that economists love, and it has been a major consideration in regulatory policy since the 1970s.

The idea is this: The risk we face in our actions is partly under our control. It requires effort to reduce risk, and effort is costly. So when an external source, such as a government regulation, reduces our risk, we will compensate by reducing the effort we expend, and thus our risk will decrease less, or maybe not at all. Indeed, perhaps we’ll even overcompensate and make our risk worse!

It’s often used as an argument against various kinds of safety efforts: Airbags will make people drive worse! Masks will make people go out and get infected!

The basic theory here is sound: Effort to reduce risk is costly, and people try to reduce costly things.

Indeed, it’s theoretically possible that risk compensation could yield the exact same risk, or even more risk than before—or at least, I wasn’t able to prove that for any possible risk profile and cost function it couldn’t happen.

But I wasn’t able to find any plausible risk profile or cost function that would actually yield this result. Here, let me show you.

Let’s say there’s some possible harm H. There is also some probability that it will occur, which you can mitigate with some choice x. For simplicity let’s say that it’s one-to-one, so that your risk of H occurring is precisely 1-x. Since probabilities must be between 0 and 1, thus so must x.

Reducing that risk costs effort. I won’t say much about that cost, except to call it c(x) and assume the following:

(1) It is increasing: More effort reduces risk more and costs more than less effort.

(2) It is convex: Reducing risk from a high level to a low level (e.g. 0.9 to 0.8) costs less than reducing it from a low level to an even lower level (e.g. 0.2 to 0.1).

These both seem like eminently plausible—indeed, nigh-unassailable—assumptions. And they result in the following total expected cost (the opposite of your expected utility):

(1-x)H + c(x)

Now let’s suppose there’s some policy which will reduce your risk by a factor r, which must be between 0 and 1. Your cost then becomes:

r(1-x)H + c(x)

Minimizing this yields the following result:

rH = c'(x)

where c'(x) is the derivative of c(x). Since c(x) is increasing and convex, c'(x) is positive and increasing.

Thus, if I make r smaller—an external source of less risk—then I will reduce the optimal choice of x. This is risk compensation.

But have I reduced or increased the amount of risk?

The total risk is r(1-x); since r decreased and so did x, it’s not clear whether this went up or down. Indeed, it’s theoretically possible to have cost functions that would make it go up—but I’ve never seen one.

For instance, suppose we assume that c(x) = ax^b, where a and b are constants. This seems like a pretty general form, doesn’t it? To maintain the assumption that c(x) is increasing and convex, I need a > 0 and b > 1. (If 0 < b < 1, you get a function that’s increasing but concave. If b = 1, you get a linear function and some weird corner solutions where you either expend no effort at all or all possible effort.)

Then I’m trying to minimize:

r(1-x)H + ax^b

This results in a closed-form solution for x:

x = (rH/ab)^(1/(b-1))

Since b>1, 1/(b-1) > 0.


Thus, the optimal choice of x is increasing in rH and decreasing in ab. That is, reducing the harm H or the overall risk r will make me put in less effort, while reducing the cost of effort (via either a or b) will make me put in more effort. These all make sense.
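The closed form is easy to spot-check numerically. Here’s a minimal sketch (the parameter values a = 1, b = 2, H = 0.5, r = 0.8 are purely illustrative, not from anything above): it grid-searches the total cost over x and confirms the minimizer matches the formula.

```python
# Numerically verify that x* = (rH/(ab))^(1/(b-1)) minimizes
# r*(1-x)*H + a*x**b. Parameter values are illustrative only.
a, b, H, r = 1.0, 2.0, 0.5, 0.8

def total_cost(x):
    return r * (1 - x) * H + a * x**b

# Closed-form optimum from the first-order condition rH = c'(x) = a*b*x**(b-1)
x_closed = (r * H / (a * b)) ** (1 / (b - 1))

# Brute-force grid search over [0, 1]
grid = [i / 100000 for i in range(100001)]
x_grid = min(grid, key=total_cost)

print(x_closed, x_grid)  # both about 0.2
```

With these numbers the first-order condition gives x = (0.8)(0.5)/2 = 0.2, and the grid search agrees.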

Can I ever increase the overall risk by reducing r? Let’s see.


My total risk r(1-x) is therefore:

r(1-x) = r[1-(rH/ab)^(1/(b-1))]

Can making r smaller ever make this larger?

Well, let’s compare it against the case when r=1. We want to see if there’s a case where the total risk is actually larger.

r[1-(rH/ab)^(1/(b-1))] > [1-(H/ab)^(1/(b-1))]

r – r^(b/(b-1)) (H/ab)^(1/(b-1)) > 1 – (H/ab)^(1/(b-1))

Note that (H/ab)^(1/(b-1)) is just the baseline effort level—the optimal x when r=1. As long as that baseline effort is moderate—specifically, no more than (b-1)/b—the left-hand side is increasing in r everywhere between 0 and 1, and the two sides are equal at r=1. So for this inequality to be true, we would need r > 1, which would mean we didn’t reduce risk at all. Thus, for any moderate baseline level of effort, reducing risk externally reduces total risk even after compensation.
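The algebra can also be checked numerically. Here’s a complementary sketch (same illustrative cost function, with a = 1, b = 2, and a moderate harm H = 0.5, so baseline effort is modest): it sweeps the external risk factor r and confirms that total risk after optimal compensation still falls as r falls.

```python
# Sweep the external risk factor r and compute total risk r*(1 - x*)
# after optimal compensation, for c(x) = a*x**b.
# Parameters are illustrative; baseline effort H/(a*b) = 0.25 is moderate.
a, b, H = 1.0, 2.0, 0.5

def optimal_effort(r):
    # Interior solution of the first-order condition rH = a*b*x**(b-1)
    return (r * H / (a * b)) ** (1 / (b - 1))

def total_risk(r):
    return r * (1 - optimal_effort(r))

rs = [i / 100 for i in range(1, 101)]  # r from 0.01 to 1.00
risks = [total_risk(r) for r in rs]

# Total risk shrinks as r shrinks: compensation never fully undoes the gain
assert all(risks[i] < risks[i + 1] for i in range(len(risks) - 1))
print(total_risk(0.5), total_risk(1.0))  # → 0.4375 0.75
```

Halving the external risk factor does make people slack off (effort falls from 0.25 to 0.125), but total risk still drops from 0.75 to about 0.44.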

Now, to be fair, this isn’t a fully general model. I had to assume some specific functional forms. But I didn’t assume much, did I?

Indeed, there is a fully general argument that externally reduced risk will never harm you. It’s quite simple.

There are three states to consider: In state A, you have your original level of risk and your original level of effort to reduce it. In state B, you have an externally reduced level of risk and your original level of effort. In state C, you have an externally reduced level of risk, and you compensate by reducing your effort.

Which states make you better off?

Well, clearly state B is better than state A: You get reduced risk at no cost to you.

Furthermore, state C must be better than state B: You voluntarily chose to risk-compensate precisely because it made you better off.

Therefore, as long as your preferences are rational, state C is better than state A.

Externally reduced risk will never make you worse off.

QED. That’s it. That’s the whole proof.

But I’m a behavioral economist, am I not? What if people aren’t being rational? Perhaps there’s some behavioral bias that causes people to overcompensate for reduced risks. That’s ultimately an empirical question.

So, what does the empirical data say? Risk compensation is almost never a serious problem in the real world. Measures designed to increase safety, lo and behold, actually increase safety. Removing safety regulations, astonishingly enough, makes people less safe and worse off.

If we ever do find a case where risk compensation is very large, then I guess we can remove that safety measure, or find some way to get people to stop overcompensating. But in the real world this has basically never happened.

It’s still a fair question whether any given safety measure is worth the cost: Implementing regulations can be expensive, after all. And while many people would like to think that “no amount of money is worth a human life”, nobody does—or should, or even can—act like that in the real world. You wouldn’t drive to work or get out of bed in the morning if you honestly believed that.

If it would cost $4 billion to save one expected life, it’s definitely not worth it. Indeed, you should still be able to see that even if you don’t think lives can be compared with other things—because $4 billion could save an awful lot of lives if you spent it more efficiently. (Probably over a million, in fact, as current estimates of the marginal cost to save one life are about $2,300.) Inefficient safety interventions don’t just cost money—they prevent us from doing other, more efficient safety interventions.

And as for airbags and wearing masks to prevent COVID? Yes, definitely 100% worth it, as both interventions have already saved tens if not hundreds of thousands of lives.

How can we fix medical residency?

Nov 21 JDN 2459540

Most medical residents work 60 or more hours per week, and nearly 20% work 80 or more hours. 66% of medical residents report sleeping 6 hours or less each night, and 20% report sleeping 5 hours or less.

It’s not as if sleep deprivation is a minor thing: Worldwide, across all jobs, nearly 750,000 deaths annually are attributable to long working hours, most of these due to sleep deprivation.


By some estimates, medical errors account for as many as 250,000 deaths per year in the US alone. Even the most conservative estimates say that at least 25,000 deaths per year in the US are attributable to medical errors. It seems quite likely that long working hours increase the rate of dangerous errors (though it has been difficult to determine precisely how much).

Indeed, the more we study stress and sleep deprivation, the more we learn how incredibly damaging they are to health and well-being. Yet we seem to have set up a system almost intentionally designed to maximize the stress and sleep deprivation of our medical professionals. Some of them simply burn out and leave the profession (about 18% of surgical residents quit); surely an even larger number of people never enter medicine in the first place because they know they would burn out.

Even once a doctor makes it through residency and has learned to cope with absurd hours, this most likely distorts their whole attitude toward stress and sleep deprivation. They are likely to not consider them “real problems”, because they were able to “tough it out”—and they are likely to assume that their patients can do the same. One of the primary functions of a doctor is to reduce pain and suffering, and by putting doctors through unnecessary pain and suffering as part of their training, we are teaching them that pain and suffering aren’t really so bad and you should just grin and bear it.

We are also systematically selecting against doctors who have disabilities that would make it difficult to work these double-time hours—which means that the doctors who are most likely to sympathize with disabled patients are being systematically excluded from the profession.

There have been some attempts to regulate the working hours of residents, but they have generally not been effective. I think this is for three reasons:

1. They weren’t actually trying hard enough. A cap of 80 hours per week is still 40 hours too high, and looks like an attempt to get better PR without fixing the actual problem.

2. Their enforcement mechanisms left too much opportunity to cheat the system, and in fact most medical residents simply became pressured to continue over-working and under-report their hours.

3. They don’t seem to have considered how to effect the transition in a way that won’t reduce the total number of resident-hours, so residents got less training and hospitals got less staffing.

The solution to problem 1 is obvious: The cap needs to be lower. Much lower.

The solution to problem 2 is trickier: What sort of enforcement mechanism would prevent hospitals from gaming the system?

I believe the answer is very steep overtime pay requirements, coupled with regular and intensive auditing. Every hour a medical resident goes over their cap, they should have to be paid triple time. Audits should be performed frequently, randomly and without notice. And if a hospital is caught falsifying their records, they should be required to pay all missing hours to all medical residents at quintuple time. And Medicare and Medicaid should not be allowed to reimburse these additional payments—they must come directly out of the hospital’s budget.

Under the current system, the “punishment” is usually a threat of losing accreditation, which is too extreme and too harmful to the residents. Precisely because this is such a drastic measure, it almost never happens. The punishment needs to be small enough that we will actually enforce it; and it needs to hurt the hospital, not the residents—overtime pay would do precisely that.

That brings me to problem 3: How can we ensure that we don’t reduce the total number of resident-hours?

This is important for two reasons: Each resident needs a certain number of hours of training to become a skilled doctor, and residents provide a significant proportion of hospital services. Of the roughly 1 million doctors in the US, about 140,000 are medical residents.

The answer is threefold:

1. Increase the number of residency slots (we have a global doctor shortage anyway).

2. Extend the duration of residency so that each resident gets the same number of total work hours.

3. Gradually phase in so that neither increase needs to be too fast.

Currently a typical residency is about 4 years. 4 years of 80-hour weeks is equivalent to 8 years of 40-hour weeks. The goal is for each resident to get 320 hour-years (weekly hours multiplied by years of residency) of training.

With 140,000 current residents averaging 4 years, a typical cohort is about 35,000. So the goal is to each year have at least (35,000 residents per cohort)(4 cohorts)(80 hours per week) = 11.2 million resident-hours per week.

In cohort 1, we reduce the cap to 70 hours, and increase the number of accepted residents to 40,000. Residents in cohort 1 will continue their residency for 4 years, 7 months. This gives each one 321 hour-years of training.

In cohort 2, we reduce the cap to 60 hours, and increase the number of accepted residents to 46,000.

Residents in cohort 2 will continue their residency for 5 years, 4 months. This gives each one 320 hour-years of training.

In cohort 3, we reduce the cap to 55 hours, and increase the number of accepted residents to 50,000.

Residents in cohort 3 will continue their residency for 6 years. This gives each one 330 hour-years of training.

In cohort 4, we reduce the cap to 50 hours, and increase the number of accepted residents to 56,000. Residents in cohort 4 will continue their residency for 6 years, 6 months. This gives each one 325 hour-years of training.

In cohort 5, we reduce the cap to 45 hours, and increase the number of accepted residents to 60,000. Residents in cohort 5 will continue their residency for 7 years, 2 months. This gives each one 322 hour-years of training.

In cohort 6, we reduce the cap to 40 hours, and increase the number of accepted residents to 65,000. Residents in cohort 6 will continue their residency for 8 years. This gives each one 320 hour-years of training.

In cohort 7, we keep the cap at 40 hours, and increase the number of accepted residents to 70,000. This is now the new standard: 8-year residencies with 40-hour weeks.
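The cohort arithmetic above is easy to double-check. Here’s a minimal sketch using exactly the caps and durations from the plan (nothing here is new data):

```python
# Check that each cohort's cap (hours/week) times its residency length
# (years) lands near the 320 hour-year target. All numbers are taken
# directly from the plan above.
cohorts = {
    "status quo": (80, 4.0),
    "cohort 1": (70, 4 + 7 / 12),   # 4 years, 7 months
    "cohort 2": (60, 5 + 4 / 12),   # 5 years, 4 months
    "cohort 3": (55, 6.0),
    "cohort 4": (50, 6.5),
    "cohort 5": (45, 7 + 2 / 12),   # 7 years, 2 months
    "cohort 6+": (40, 8.0),
}
for name, (cap, years) in cohorts.items():
    print(f"{name}: {cap * years:.1f} hour-years")
```

Every cohort lands between 320 and 330 hour-years, matching the figures in the text.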

I’ve made a graph here of what this does to the available number of resident-hours each year. There is a brief 5% dip in year 4, but by the time we reach year 14 we’ve actually doubled the total number of available resident-hours at any given time—without increasing the total amount of work each resident does, simply keeping them longer and working them less intensively each year. Given that quality of work is reduced by working longer hours, it’s likely that even this brief reduction in hours would not result in any reduced quality of care for patients.

[residency_hours.png]

I have thus managed to increase the number of available resident-hours, ensure that each resident gets the same amount of training as before, and still radically reduce the work hours from 80 per week to 40 per week. The additional recruitment each year is never more than 6,000 new residents or 15% of the current number of residents.

It takes several years to effect this transition. This is unavoidable if we are trying to avoid massive increases in recruitment, though if we were prepared to simply double the number of admitted residents each year we could immediately transition to 40-hour work weeks in a single cohort and the available resident-hours would then strictly increase every year.

This plan is likely not the optimal one; I don’t know enough about the details of how costly it would be to admit more residents, and it’s possible that some residents might actually prefer a briefer, more intense residency rather than a longer, less stressful one. (Though it’s worth noting that most people greatly underestimate the harms of stress and sleep deprivation, and doctors don’t seem to be any better in this regard.)

But this plan does prove one thing: There are solutions to this problem. It can be done. If our medical system isn’t solving this problem, it is not because solutions do not exist—it is because they are choosing not to take them.

What’s wrong with police unions?

Nov 14 JDN 2459533

In a previous post I talked about why unions, even though they are collusive, are generally a good thing. But there is one very important exception to this rule: Police unions are almost always harmful.

Most recently, police unions have been leading the charge to fight vaccine mandates. This despite the fact that COVID-19 now kills more police officers than any other cause. They threatened that huge numbers of officers would leave if the mandates were imposed—but it didn’t happen.

But there is a much broader pattern than this: Police unions systematically take the side of individual police officers over the interests of public safety. Even the most incompetent, negligent, or outright murderous behavior by police officers will typically be defended by police unions. (One encouraging development is that lately even some police unions have been reluctant to defend the most outrageous killings by police officers—but this is very much the exception, not the rule.)

Police unions are also unusual among unions in their political ties. Conservatives generally oppose unions, but are much friendlier toward police unions. At the other end of the spectrum, socialists normally love unions, but have distanced themselves from police unions for a long time. (The argument in that article that this is because “no other job involves killing people” is a bit weird: Ostensibly, the circumstances in which police are allowed to kill people are not all that different from the circumstances in which private citizens are. Just like us, they’re only supposed to use deadly force to prevent death or grievous bodily harm to themselves or others. The main thing that police are allowed to do that we aren’t is imprison people. Killing isn’t supposed to be a major part of the job.)

Police unions also have some other weird features. The total membership of all police unions exceeds the total number of police officers in the United States, because a single officer is often affiliated with multiple unions—not at all how unions normally work. Police unions are also especially powerful and well-organized among unions. They are especially well-funded, and their members are especially loyal.

If we were to adopt a categorical view that unions are always good or always bad—as many people seem to want to—it’s difficult to see why police unions should be different from teachers’ unions or factory workers’ unions. But my argument was very careful not to make such categorical statements. Unions aren’t always or inherently good; they are usually good, because of how they are correcting a power imbalance between workers and corporations.

But when it comes to police, the situation is quite different. Police unions give more bargaining power to government officers against… what? Public accountability? The democratic system? Corporate CEOs are accountable only to their shareholders, but the mayors and city councils who decide police policy are elected (in most of the UK, even police commissioners are directly elected). It’s not clear that there was an imbalance in bargaining power here we would want to correct.

A similar case could be made against all public-sector unions, and indeed that case often is extended to teachers’ unions. If we must sacrifice teachers’ unions in order to destroy police unions, I’d be prepared to bite that bullet. But there are vital differences here as well. Teachers are not responsible for imprisoning people, and bad teachers almost never kill people. (In the rare cases in which teachers have committed murder, they have been charged to the full extent of the law, just as they would be in any other profession.) There surely is some misconduct by teachers that some unions may be protecting, but the harm caused by that misconduct is far lower than the harm caused by police misconduct. Teacher unions also provide a layer of protection for teachers to exercise autonomy, promoting academic freedom.

The form of teacher misconduct I would be most concerned about is sexual abuse of students. And while I’ve seen many essays claiming that teacher unions protect sexual abusers, the only concrete evidence I could find on the subject was a teachers’ union publicly complaining that the government had failed to pass stricter laws against sexual abuse by teachers. The research on teacher misconduct mainly focuses on causal factors other than union representation.

Even this Fox News article cherry-picking the worst examples of unions protecting abusive teachers includes line after line like “he was ultimately fired”, “he was pressured to resign”, and “his license was suspended”. So their complaint seems to be that it wasn’t done fast enough? But a fair justice system is necessarily slow. False accusations are rare, but they do happen—we can’t just take someone’s word for it. Ensuring that you don’t get fired until the district mounts strong evidence of misconduct against you is exactly what unions should be doing.

Whether unions are good or bad in a particular industry is ultimately an empirical question. So let’s look at the data, shall we? Teacher unions are positively correlated with school performance. But police unions are positively correlated with increased violent misconduct. There you have it: Teacher unions are good, but police unions are bad.

Does power corrupt?

Nov 7 JDN 2459526

It’s a familiar saying, originally attributed to Lord Acton: “Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men.”

I think this saying is not only wrong, but in fact dangerous. We can all observe plenty of corrupt people in power, that much is true. But if it’s simply the power that corrupts them, and they started as good people, then there’s really nothing to be done. We may try to limit the amount of power any one person can have, but in any large, complex society there will be power, and so, if the saying is right, there will also be corruption.

How do I know that this saying is wrong?

First of all, note that corruption varies tremendously, and with very little correlation with most sensible notions of power.

Consider used car salespeople, stockbrokers, drug dealers, and pimps. All of these professions are rather well known for their high level of corruption. Yet are people in these professions powerful? Yes, any manager has some power over their employees; but there’s no particular reason to think that used car dealers have more power over their employees than grocery stores, and yet there’s a very clear sense in which used car dealers are more corrupt.

Even power on a national scale is not inherently tied to corruption. Consider the following individuals: Nelson Mandela, Mahatma Gandhi, Abraham Lincoln, and Franklin Roosevelt.

These men were extremely powerful; each led an entire nation. Indeed, during his administration, FDR was probably the most powerful person in the world. And they certainly were not impeccable: Mandela was a good friend of Fidel Castro, Gandhi abused his wife, Lincoln suspended habeas corpus, and of course FDR ordered the internment of Japanese-Americans. Yet overall I think it’s pretty clear that these men were not especially corrupt and had a large positive impact on the world.

Say what you will about Bernie Sanders, Dennis Kucinich, or Alexandria Ocasio-Cortez. Idealistic? Surely. Naive? Perhaps. Unrealistic? Sometimes. Ineffective? Often. But they are just as powerful as anyone else in the US Congress, and ‘corrupt’ is not a word I’d use to describe them. Mitch McConnell, on the other hand….

There does seem to be a positive correlation between a country’s level of corruption and its level of authoritarianism; the most democratic countries—Scandinavia—are also the least corrupt. Yet India is surely more democratic than China, but is widely rated as about the same level of corruption. Greece is not substantially less democratic than Chile, but it has considerably more corruption. So even at a national level, power is not the only determinant of corruption.

I’ll even agree to the second clause: “absolute power corrupts absolutely.” Were I somehow granted an absolute dictatorship over the world, one of my first orders of business would be to establish a new democratic world government to replace my dictatorial rule. (Would it be my first order of business, or would I implement some policy reforms first? Now that’s a tougher question. I think I’d want to implement some kind of income redistribution and anti-discrimination laws before I left office, at least.) And I believe that most good people think similarly: We wouldn’t want to have that kind of power over other people. We wouldn’t trust ourselves to never abuse it. Anyone who maintains absolute power is either already corrupt or likely to become so. And anyone who seeks absolute power is precisely the sort of person who should not be trusted with power at all.

It may also be that power is one determinant of corruption—that a given person will generally end up more corrupt if you give them more power. This might help explain why even the best ‘great men’ are still usually bad men. But clearly there are other determinants that are equally important.

And I would like to offer a different hypothesis to explain the correlation between power and corruption, which has profoundly different implications: The corrupt seek power.

Donald Trump didn’t start out a good man and become corrupt by becoming a billionaire or becoming President. Donald Trump was born a narcissistic idiot.

Josef Stalin wasn’t a good man who became corrupted by the unlimited power of ruling the Soviet Union. Josef Stalin was born a psychopath.

Indeed, when you look closely at how corrupt leaders get into power, it often involves manipulating and exploiting others on a grand scale. They are willing to compromise principles that good people wouldn’t. They aren’t corrupt because they got into power; they got into power because they are corrupt.

Let me be clear: I’m not saying we should compromise all of our principles in order to achieve power. If there is a route by which power corrupts, it is surely that. Rather, I am saying that we must maintain constant vigilance against anyone who seems so eager to attain power that they will compromise principles to do it—for those are precisely the people who are likely to be most dangerous if they should achieve their aims.

Moreover, I’m saying that “power corrupts” is actually a very dangerous message. It tells good people not to seek power, because they would be corrupted by it. But in fact what we actually need in order to get good people in power is more good people seeking power, more opportunities to out-compete the corrupt. If Congress were composed entirely of people like Alexandria Ocasio-Cortez, then the left-wing agenda would no longer seem naive and unrealistic; it would simply be what gets done. (Who knows? Maybe it wouldn’t work out so well after all. But it definitely would get done.) Yet how many idealistic left-wing people have heard that phrase ‘power corrupts’ too many times, and decided they didn’t want to risk running for office?

Indeed, the notion that corruption is inherent to the exercise of power may well be the greatest tool we have ever given to those who are corrupt and seeking to hold onto power.

Are unions collusion?

Oct 31 JDN 2459519

The standard argument from center-right economists against labor unions is that they are a form of collusion: Producers are coordinating and intentionally holding back from what would be in their individual self-interest in order to gain a collective advantage. And this is basically true: In the broadest sense of the term, labor unions are a form of collusion. Since collusion is generally regarded as bad, therefore (this argument goes), unions are bad.

What this argument misses out on is why collusion is generally regarded as bad. The typical case for collusion is between large corporations, each of which already controls a large share of the market—collusion then allows them to act as if they control an even larger share, potentially even acting as a monopoly.

Labor unions are not like this. Literally no individual laborer controls a large segment of the market. (Some very specialized laborers, like professional athletes, or, say, economists, might control a not completely trivial segment of their particular job market—but we’re still talking something like 1% at most. Even Tiger Woods or Paul Krugman is not literally irreplaceable.) Moreover, even the largest unions can rarely achieve anything like a monopoly over a particular labor market.

Thus whereas typical collusion involves going from a large market share to an even larger—often even dominant—market share, labor unions involve going from a tiny market share to a moderate—and usually not dominant—market share.

But that, by itself, wouldn’t be enough to justify unions. While small family businesses banding together in collusion is surely less harmful than large corporations doing the same, it would probably still be a bad thing, insofar as it would raise prices and reduce the quantity or quality of products sold. It would just be less bad.

Yet unions differ from even this milder collusion in another important respect: They do not exist to increase bargaining power versus consumers. They exist to increase bargaining power versus corporations.

And corporations, it turns out, already have a great deal of bargaining power. While a labor union acts as something like a monopoly (or at least oligopoly), corporations act like the opposite: oligopsony or even monopsony.

While monopoly or monopsony on its own is highly unfair and inefficient, the combination of the two—bilateral monopoly—is actually relatively fair and efficient. Bilateral monopoly is probably not as good as a truly competitive market, but it is definitely better than either a monopoly or monopsony alone. Whereas a monopoly has too much bargaining power for the seller (resulting in prices that are too high), and a monopsony has too much bargaining power for the buyer (resulting in prices that are too low), a bilateral monopoly has relatively balanced bargaining power, and thus gets an outcome that’s not too much different from fair competition in a free market.

Thus, unions really exist as a correction mechanism for the excessive bargaining power of corporations. Most unions are between workers in large industries who work for a relatively small number of employers, such as miners, truckers, and factory workers. (Teachers are also an interesting example, because they work for the government, which effectively has a monopsony on public education services.) In isolation they may seem inefficient; but in context they really exist to compensate for other, worse inefficiencies.


We could imagine a world where this was not so: Say there is a market with many independent buyers who are unwilling or unable to reliably collude, and they are served by a small number of powerful unions that use their bargaining power to raise prices and reduce output.


We have some markets that already look a bit like that: Consider the licensing systems for doctors and lawyers. These are basically guilds, which are collusive in the same way as labor unions.

Note that unlike, say, miners, truckers, or factory workers, doctors and lawyers are not a large segment of the population; they are bargaining against consumers just as much as corporations; and they are extremely well-paid and very likely undersupplied. (Doctors are definitely undersupplied; with lawyers it’s a bit more complicated, but given how often corporations get away with terrible things and don’t get sued for it, I think it’s fair to say that in the current system, lawyers are undersupplied.) So I think it is fair to be concerned that the guild systems for doctors and lawyers are too powerful. We want some system for certifying the quality of doctors and lawyers, but the existing standards are so demanding that they result in a shortage of much-needed labor.

One way to tell that unions aren’t inefficient is to look at how unionization relates to unemployment. If unions were acting as a harmful monopoly on labor, unemployment should be higher in places with greater unionization rates. The empirical data suggests that if there is any such effect, it’s a small one. There are far more important determinants of unemployment than unionization. (Wages, on the other hand, show a strong positive link with unionization.) Much like the standard prediction that raising minimum wage would reduce employment, the prediction that unions raise unemployment has largely not been borne out by the data. And for much the same reason: We had ignored the bargaining power of employers, which minimum wage and unions both reduce.

Thus, the justifiability of unions isn’t something that we could infer a priori without looking at the actual structure of the labor market. Unions aren’t always or inherently good—but they are usually good in the system as it stands. (Actually there’s one particular class of unions that do not seem to be good, and that’s police unions: But this is a topic for another time.)

My ultimate conclusion? Yes, unions are a form of collusion. But to infer from that fact that they must be bad is to commit the Noncentral Fallacy. Unions are the good kind of collusion.

Labor history in the making

Oct 24 JDN 2459512

To say that these are not ordinary times would be a grave understatement. I don’t need to tell you all the ways that this interminable pandemic has changed the lives of people all around the world.

But one in particular is of notice to economists: Labor in the United States is fighting back.

Quit rates are at historic highs. Over 100,000 workers in a variety of industries are simultaneously on strike, ranging from farmworkers to nurses and freelance writers to university lecturers.

After decades of acquiescence to ever-worsening working conditions, it seems that American workers are finally mad as hell and not gonna take it anymore.

It’s about time, frankly. The real question is why it took this long. Working conditions in the US have been systematically worse than in the rest of the First World since at least the 1980s. It was substantially easier for me to get the leave I needed to attend my own wedding—which was in the US—after starting a job in the UK than it would have been at the same kind of job in the US, because UK law grants workers leave from the day they start work, while US federal law and the law in many states guarantees no leave at all—not even for people who are sick or have recently given birth.

So, why did it happen now? What changed? The pandemic threw our lives into turmoil, that much is true. But it didn’t fundamentally change the power imbalance between workers and employers. Why was that enough?

I think I know why. The shock from the pandemic didn’t have to be enough to actually change people’s minds about striking—it merely had to be enough to convince people that others would show up. It wasn’t the first-order intention “I want to strike” that changed; it was the second-order belief “Other people want to strike too”.

For a labor strike is a coordination game par excellence. If 1 person strikes, they get fired and replaced. If 2 or 3 or 10 strike, most likely the same thing. But if 10,000 strike? If 100,000 strike? Suddenly corporations have no choice but to give in.

The most important question on your mind when you are deciding whether or not to strike is not, “Do I hate my job?” but “Will my co-workers have my back?”.

Coordination games exhibit a very fascinating—and still not well-understood—phenomenon known as Schelling points. People will typically latch onto certain seemingly-arbitrary features of their choices, and do so well enough that simply having such a focal point can radically increase the level of successful coordination.

I believe that the pandemic shock was just such a Schelling point. It didn’t change most people’s working conditions all that much: though I can see why nurses in particular would be upset, it’s not clear to me that being a university lecturer is much worse now than it was a year ago. But what the pandemic did do was change everyone’s working conditions, all at once. It was a sudden shock toward work dissatisfaction that applied to almost the entire workforce.

Thus, many people who were previously on the fence about striking were driven over the edge—and then this in turn made others willing to take the leap as well, suddenly confident that they would not be acting alone.

Another important feature of the pandemic shock was that it took away a lot of what people had left to lose. Consider the two following games.

Game A: You and 100 other people each separately, without communicating, decide to choose X or Y. If you all choose X, you each get $20. But if even one of you chooses Y, then everyone who chooses Y gets $1 but everyone who chooses X gets nothing.

Game B: Same as the above, except that if anyone chooses Y, everyone who chooses Y also gets nothing.

Game A is tricky, isn’t it? You want to choose X, and you’d be best off if everyone did. But can you really trust 100 other people to all choose X? Maybe you should take the safe bet and choose Y—but then, they’re thinking the same way.


Game B, on the other hand, is painfully easy: Choose X. Obviously choose X. There’s no downside, and potentially a big upside.

In terms of game theory, both games have the same two Nash equilibria: all-X and all-Y. But in the second game, choosing X is also a weakly dominant strategy (it never does worse than Y, no matter what the others do), and that makes all the difference.
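The difference can be checked mechanically. Here is a minimal sketch in Python (the function and variable names are mine) that encodes both games' payoffs and verifies that X weakly dominates Y in Game B but not in Game A. Since your payoff depends only on your own choice and on whether all the others chose X, two cases per game suffice:

```python
def payoff_A(my_choice, all_others_chose_x):
    # Game A: X pays $20 only if everyone chose X, else $0; Y always pays $1.
    if my_choice == 'Y':
        return 1
    return 20 if all_others_chose_x else 0

def payoff_B(my_choice, all_others_chose_x):
    # Game B: X as in Game A, but Y now pays $0 as well.
    if my_choice == 'Y':
        return 0
    return 20 if all_others_chose_x else 0

def x_weakly_dominates(payoff):
    # X weakly dominates Y iff it does at least as well in every case
    # and strictly better in at least one.
    cases = (True, False)
    at_least = all(payoff('X', s) >= payoff('Y', s) for s in cases)
    strictly = any(payoff('X', s) > payoff('Y', s) for s in cases)
    return at_least and strictly

dominance = {'A': x_weakly_dominates(payoff_A),
             'B': x_weakly_dominates(payoff_B)}
# dominance == {'A': False, 'B': True}
```

In Game A, the one bad case (someone else chooses Y) is exactly what makes X risky; in Game B that downside is gone, which is why all-X becomes so easy to coordinate on.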

We could run these games in the lab, and I’m pretty sure I know what we’d find: In game A, most people choose X, but some people don’t, and if you repeat the game more and more people choose Y. But in game B, almost everyone chooses X and keeps on choosing X. Maybe they don’t get unanimity every time, but they probably do get it most of the time—because why wouldn’t you choose X? (These are testable hypotheses! I could in fact run this experiment! Maybe I should?)

It’s hard to say at this point how effective these strikes will be. Surely there will be some concessions won—there are far too many workers striking for them all to get absolutely nothing. But it remains uncertain whether the concessions will be small, token changes just to break up the strikes, or serious, substantive restructuring of how work is done in the United States.

If the latter sounds overly optimistic, consider that this is basically what happened in the New Deal. Those massive—and massively successful—reforms were not generated out of nowhere; they were the result of the economic crisis of the Great Depression and substantial pressure by organized labor. We may yet see a second New Deal (a Green New Deal?) in the 2020s if labor organizations can continue putting the pressure on.

The most important thing in making such a grand effort possible is believing that it’s possible—only if enough people believe it can happen will enough people take the risk and put in the effort to make it happen. Apathy and cynicism are the most powerful weapons of the status quo.


We are witnessing history in the making. Let’s make it in the right direction.

Stupid problems, stupid solutions

Oct 17 JDN 2459505

Krugman thinks we should Mint The Coin: Mint a $1 trillion platinum coin and then deposit it at the Federal Reserve, thus creating, by fiat, the money to pay for the current budget without increasing the national debt.

This sounds pretty stupid. Quite frankly, it is stupid. But sometimes stupid problems require stupid solutions. And the debt ceiling is an incredibly stupid problem.

Let’s be clear about this: Congress already passed the budget. They had a right to vote it down—that is indeed their Constitutional responsibility. But they passed it. And now that the budget is passed, including all its various changes to taxes and spending, it necessarily requires a certain amount of debt increase to make it work.

There’s really no reason to have a debt ceiling at all. This is an arbitrary self-imposed credit constraint on the US government, which is probably the single institution in the world that least needs to worry about credit constraints. The US is currently borrowing at extremely low interest rates, and has never defaulted in 200 years. There is no reason it should be worrying about taking on additional debt, especially when it is being used to pay for important long-term investments such as infrastructure and education.

But if we’re going to have a debt ceiling, it should be a simple formality. Congress does the calculation to see how much debt will be needed, and if it accepts that amount, passes the budget and raises the debt ceiling as necessary. If for whatever reason they don’t want to incur the additional debt, they should make changes to the budget accordingly—not pass the budget and then act shocked when they need to raise the debt ceiling.

In fact, there is a pretty good case to be made that the debt ceiling is a violation of the Fourteenth Amendment, which states in Section 4: “The validity of the public debt of the United States, authorized by law, including debts incurred for payment of pensions and bounties for services in suppressing insurrection or rebellion, shall not be questioned.” This was originally intended to ensure the validity of Civil War debt, but the Supreme Court has interpreted it to mean that all legally incurred US public debt is valid, which would arguably render the debt ceiling un-Constitutional.

Of course, actually sending it to the Supreme Court would take a long time—too long to avoid turmoil in financial markets if the debt ceiling is not raised. So perhaps Krugman is right: Perhaps it’s time to Mint The Coin and fight stupid with stupid.

Marriage and matching

Oct 10 JDN 2459498

When this post goes live, I will be married. We already had a long engagement, but it was made even longer by the pandemic: We originally planned to be married in October 2020, but then rescheduled for October 2021. Back then, we naively thought that the pandemic would be under control by now and we could have a wedding without COVID testing and masks. As it turns out, all we really accomplished was having a wedding where everyone is vaccinated—and the venue still required testing and masks. Still, it should at least be safer than it was last year, because everyone is vaccinated.

Since marriage is on my mind, I thought I would at least say a few things about the behavioral economics of marriage.

Now when I say the “economics of marriage” you likely have in mind things like tax laws that advantage (or disadvantage) marriage at different incomes, or the efficiency gains from living together that allow you to save money relative to each having your own place. That isn’t what I’m interested in.

What I want to talk about today is something a bit less economic, but more directly about marriage: the matching process by which one finds a spouse.

Economists would refer to marriage as a matching market. Unlike a conventional market where you can buy and sell arbitrary quantities, marriage is (usually; polygamy notwithstanding) a one-to-one arrangement. And unlike even the job market (which is also a one-to-one matching market), marriage usually doesn’t involve direct monetary payments (though in cultures with dowries it arguably does).

The usual model of a matching market has two separate pools: Employers and employees, for example. Typical heteronormative analyses of marriage have done likewise, separating men and women into different pools. But it turns out that sometimes men marry men and women marry women.

So what happens to our matching theory if we allow the pools to overlap?

I think the most sensible way to do it, actually, is to have only one pool: people who want to get married. Then, the way we capture the fact that most—but not all—men only want to marry women, and most—but not all—women only want to marry men is through the utility function: Heterosexuals are simply those for whom a same-sex match would have very low utility. This would actually mean modeling marriage as a form of the stable roommates problem. (Oh my god, they were roommates!)

The stable roommates problem actually turns out to be harder than the conventional (heteronormative) stable marriage problem; in fact, while the hetero marriage problem (as I’ll henceforth call it) guarantees at least one stable matching for any preference ordering, the queer marriage problem can fail to have any stable solutions. While the hetero marriage problem ensures that everyone will eventually be matched to someone (if the number of men is equal to the number of women), sadly, the queer marriage problem can result in some people being forever rejected and forever alone. (There. Now you can blame the gays for ruining something: We ruined marriage matching.)

The queer marriage problem is actually more general than the hetero marriage problem: The hetero marriage problem is just the queer marriage problem with a particular utility function that assigns everyone strictly gendered preferences.

The best known algorithm for the queer marriage problem is an extension of the standard Gale-Shapley algorithm for the hetero marriage problem, with the same O(n^2) complexity in theory but a considerably more complicated implementation in practice. Honestly, while I can clearly grok the standard algorithm well enough to explain it to someone, I’m not sure I completely follow this one.
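For what it's worth, the standard algorithm is short enough to sketch. Here is a minimal Python version of Gale-Shapley deferred acceptance for the hetero marriage problem (the data layout and names are my own): one side proposes in preference order, the other tentatively accepts and trades up when a better offer arrives.

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """proposer_prefs[p] and reviewer_prefs[r] are preference lists, best first.
    Returns a stable matching as a dict reviewer -> proposer."""
    # rank[r][p]: position of p in r's preference list (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next reviewer each p will try
    engaged = {}                                  # reviewer -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                        # r accepts tentatively
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])               # r trades up; old partner freed
            engaged[r] = p
        else:
            free.append(p)                        # r rejects p; p tries again
    return engaged
```

With equal-sized sides and complete preference lists, every proposer is eventually accepted, and the result is stable (and, famously, optimal for the proposing side).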

Then again, maybe preference orderings aren’t such a great approach after all. There has been a movement in economics toward what is called ordinal utility, where we speak only of preference orderings: You can like A more than B, but there’s no way to say how much more. But I for one am much more inclined toward cardinal utility, where differences have magnitudes: I like Coke more than Pepsi, and I like getting massaged more than being stabbed—and the difference between Coke and Pepsi is a lot smaller than the difference between getting massaged and being stabbed. (Many economists make much of the notion that even cardinal utility is “equivalent up to an affine transformation”, but I’ve got some news for you: So are temperature and time. All you are really doing by making an “affine transformation” is assigning a starting point and a unit of measurement. Temperature has a sensible absolute zero to use as a starting point, you say? Well, so does utility—not existing.)

With cardinal utility, I can offer you a very simple naive algorithm for finding an optimal match: Just try out every possible set of matchings and pick the one that has the highest total utility.

There are n!/((n/2)! 2^(n/2)) possible matchings to check, so this could take a long time—but it should work. There are certainly more efficient algorithms: this is just the maximum-weight perfect matching problem, which Edmonds’ blossom algorithm solves in polynomial time. So it is not NP-hard after all.
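As a sketch of the naive approach: the following Python (the representation and names are mine) enumerates every perfect matching recursively, picks the one with the highest total utility, and sanity-checks the number of matchings against the formula n!/((n/2)! · 2^(n/2)):

```python
from math import factorial

def matchings(people):
    """Yield every way to pair up an even-sized list of people."""
    if not people:
        yield []
        return
    first, rest = people[0], people[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in matchings(remaining):
            yield [(first, partner)] + sub

def best_matching(utility):
    """Brute force: utility[a][b] is what a gets from being matched with b."""
    people = sorted(utility)
    return max(matchings(people),
               key=lambda m: sum(utility[a][b] + utility[b][a] for a, b in m))

# Sanity check: n people admit n! / ((n/2)! * 2^(n/2)) perfect matchings.
for n in (2, 4, 6, 8):
    count = sum(1 for _ in matchings(list(range(n))))
    assert count == factorial(n) // (factorial(n // 2) * 2 ** (n // 2))
```

The recursion avoids double-counting by always pairing the first remaining person, so each matching is generated exactly once.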

Moreover, even once we find a utility-maximizing matching, that doesn’t guarantee a stable matching: Some people might still prefer to change even if it would end up reducing total utility.

Here’s a simple set of preferences for which that becomes an issue. In this table, the row is the person making the evaluation, and the columns are how much utility they assign to a match with each person. The total utility of a match is just the sum of utility from the two partners. The utility of “matching with yourself” is the utility of not being matched at all.


     A   B   C   D
A    0   3   2   1
B    2   0   3   1
C    3   2   0   1
D    3   2   1   0

Since everyone prefers every other person to not being matched at all (likely not true in real life!), the optimal matchings will always match everyone with someone. Thus, there are actually only 3 matchings to compare:

AB, CD: (3+2)+(1+1) = 7

AC, BD: (2+3)+(1+2) = 8

AD, BC: (1+3)+(3+2) = 9

The optimal matching, in utilitarian terms, is to match A with D and B with C. This yields total utility of 9.

But that’s not stable, because A prefers C over D, and C prefers A over B. So A and C would choose to pair up instead.

In fact, this set of preferences yields no stable matching at all. For anyone who is partnered with D, another member will rate them highest, and D’s partner will prefer that person over D (because D is everyone’s last choice).
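This claim is easy to verify mechanically. Here is a minimal Python sketch (names mine) that encodes the table above and confirms that each of the three matchings has a blocking pair, i.e. two people who each prefer the other to their assigned partner:

```python
# The preference table from above: utility[x][y] is the utility x assigns
# to a match with y; utility[x][x] is the utility of staying unmatched.
utility = {
    'A': {'A': 0, 'B': 3, 'C': 2, 'D': 1},
    'B': {'A': 2, 'B': 0, 'C': 3, 'D': 1},
    'C': {'A': 3, 'B': 2, 'C': 0, 'D': 1},
    'D': {'A': 3, 'B': 2, 'C': 1, 'D': 0},
}

def is_stable(matching, utility):
    """Stable iff no two people each prefer the other to their current partner."""
    partner = {}
    for a, b in matching:
        partner[a], partner[b] = b, a
    for a in partner:
        for b in partner:
            if a == b or partner[a] == b:
                continue
            if (utility[a][b] > utility[a][partner[a]]
                    and utility[b][a] > utility[b][partner[b]]):
                return False  # a and b would run off together
    return True

candidates = [[('A', 'B'), ('C', 'D')],
              [('A', 'C'), ('B', 'D')],
              [('A', 'D'), ('B', 'C')]]
totals = [sum(utility[a][b] + utility[b][a] for a, b in m) for m in candidates]
# totals == [7, 8, 9], yet none of the three matchings is stable.
assert not any(is_stable(m, utility) for m in candidates)
```

Even the utility-maximizing matching AD, BC fails: A and C each prefer the other to their assigned partner, so they defect.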

There is always a nonempty set of utility-maximizing matchings. (There must be at least one, and there could in principle be as many as there are possible matchings.) This just follows from the fact that the set of possible matchings is finite: any finite nonempty set of real numbers has a maximum.

As this counterexample shows, there isn’t always a stable matching.

So here are a couple of interesting theoretical questions that this gives rise to:
1. If there is a stable matching, must it be in the set of utility-maximizing matchings?

2. If there is a stable matching, must all utility-maximizing matchings be stable?

Question 1 asks whether being stable implies being utility-maximizing.
Question 2 asks whether being utility-maximizing implies being stable—conditional on there being at least one stable possibility.

So, what is the answer to these questions? I don’t know! I’m actually not sure anyone does! We may have stumbled onto cutting-edge research!

I found a paper showing that these properties do not hold when you are doing the hetero marriage problem and you use multiplicative utility for matchings, but this is the queer marriage problem, and moreover I think multiplicative utility is the wrong approach. It doesn’t make sense to me to say that a marriage where one person is extremely happy and the other is indifferent to leaving is equivalent to a marriage where both partners are indifferent to leaving, but that’s what you’d get if you multiply 1*0 = 0. And if you allow negative utility from matchings (i.e. some people would prefer to remain single than to be in a particular match—which seems sensible enough, right?), since -1*-1 = 1, multiplicative utility yields the incredibly perverse result that two people who despise each other constitute a great match. Additive utility solves both problems: 1+0 = 1 and -1+-1 = -2, so, as we would hope, like + indifferent = like, and hate + hate = even more hate.

There is something to be said for the idea that two people who kind of like each other is better than one person ecstatic and the other miserable, but (1) that’s actually debatable, isn’t it? And (2) I think that would be better captured by somehow penalizing inequality in matches, not by using multiplicative utility.

Of course, I haven’t done a really thorough literature search, so other papers may exist. Nor have I spent a lot of time just trying to puzzle through this problem myself. Perhaps I should; this is sort of my job, after all. But even if I had the spare energy to invest heavily in research at the moment (which I sadly do not), I’ve been warned many times that pure theory papers are hard to publish, and I have enough trouble getting published as it is… so perhaps not.

My intuition is telling me that 2 is probably true but 1 is probably false. That is, I would guess that the set of stable matchings, when it’s not empty, is actually larger than the set of utility-maximizing matchings.

I think where I’m getting that intuition is from the properties of Pareto-efficient allocations: Any utility-maximizing allocation is necessarily Pareto-efficient, but many Pareto-efficient allocations are not utility-maximizing. A stable matching is sort of a strengthening of the notion of a Pareto-efficient allocation (though the problem of finding a Pareto-efficient matching for the general queer marriage problem has been solved).

But it is interesting to note that while a Pareto-efficient allocation must exist (typically there are many, but there must be at least one, because it’s impossible to have a cycle of Pareto improvements as long as preferences are transitive), it’s entirely possible to have no stable matchings at all.

Against “doing your best”

Oct 3 JDN 2459491

It’s an appealing sentiment: Since we all have different skill levels, rather than be held to some constant standard which may be easy for some but hard for others, we should each do our best. This will ensure that we achieve the best possible outcome.

Yet it turns out that this advice is not so easy to follow: What is “your best”?

Is your best the theoretical ideal of what your performance could be if all obstacles were removed and you worked at your greatest possible potential? Then no one in history has ever done their best, and when people get close, they usually end up winning Nobel Prizes.

Is your best the performance you could attain if you pushed yourself to your limit, ignored all pain and fatigue, and forced yourself to work at maximum effort until you literally can’t anymore? Then doing your best doesn’t sound like such a great thing anymore—and you’re certainly not going to be able to do it all the time.

Is your best the performance you would attain by continuing to work at your usual level of effort? Then how is that “your best”? Is it the best you could attain if you work at a level of effort that is considered standard or normative? Is it the best you could do under some constraint limiting the amount of pain or fatigue you are willing to bear? If so, what constraint?

How does “your best” change under different circumstances? Does it become less demanding when you are sick, or when you have a migraine? What if you’re depressed? What if you’re simply not feeling motivated? What if you can’t tell whether this demotivation is a special circumstance, a symptom of depression, a random fluctuation, or a failure to motivate yourself?

There’s another problem: Sometimes you really aren’t good at something.

A certain fraction of performance in most tasks is attributable to something we might call “innate talent”; be it truly genetic or fixed by your early environment, it nevertheless is something that as an adult you are basically powerless to change. Yes, you could always train and practice more, and your performance would thereby improve. But it can only improve so much; you are constrained by your innate talent or lack thereof. No amount of training effort will ever allow me to reach the basketball performance of Michael Jordan, the painting skill of Leonardo Da Vinci, or the mathematical insight of Leonhard Euler. (Of the three, only the third is even visible from my current horizon. As someone with considerable talent and training in mathematics, I can at least imagine what it would be like to be as good as Euler—though I surely never will be. I can do most of the mathematical methods that Euler was famous for; but could I have invented them?)

In fact it’s worse than this; there are levels of performance that would be theoretically possible for someone of your level of talent, yet would be so costly to obtain as to be clearly not worth it. Maybe, after all, there is some way I could become as good a mathematician as Euler—but if it would require me to work 16-hour days doing nothing but studying mathematics for the rest of my life, I am quite unwilling to do so.

With this in mind, what would it mean for me to “do my best” in mathematics? To commit those 16-hour days for the next 30 years and win my Fields Medal—if it doesn’t kill me first? If that’s not what we mean by “my best”, then what do we mean, after all?

Perhaps we should simply abandon the concept, and ask instead what successful people actually do.

This will of course depend on what they were successful at; the behavior of basketball superstars is considerably different from the behavior of Nobel Laureate physicists, which is in turn considerably different from the behavior of billionaire CEOs. But in theory we could each decide for ourselves which kind of success we actually would desire to emulate.

Another pitfall to avoid is looking only at superstars and not comparing them with a suitable control group. Every Nobel Laureate physicist eats food and breathes oxygen, but eating food and breathing oxygen will not automatically give you good odds of winning a Nobel (though I guess your odds are in fact a lot better relative to not doing them!). It is likely that many of the things we observe successful people doing—even less trivial things, like working hard and taking big risks—are in fact the sort of thing that a great many people do with far less success.

Upon making such a comparison, one of the first things that we would notice is that the vast majority of highly-successful people were born with a great deal of privilege. Most of them were born rich or at least upper-middle-class; nearly all of them were born healthy without major disabilities. Yes, there are exceptions to any particular form of privilege, and even particularly exceptional individuals who attained superstar status with more headwinds than tailwinds; but the overwhelming pattern is that people who get home runs in life tend to be people who started the game on third base.

But setting that aside, or recalibrating one’s expectations to try to attain a level of success often achieved by people with roughly the same level of privilege as oneself, we must ask: How often? Should you aspire to the median? The top 20%? The top 10%? The top 1%? And what is your proper comparison group? Should I be comparing against Americans, White male Americans, economists, queer economists, people with depression and chronic migraines, or White/Native American male queer economists with depression and chronic migraines who are American expatriates in Scotland? Make the criteria too narrow, and there won’t be many left in your sample. Make them instead too broad, and you’ll include people with very different circumstances who may not be a fair comparison. Perhaps some sort of weighted average of different groups could work—but with what weighting?

Or maybe it’s right to compare against a very broad group, since this is what ultimately decides our life prospects. What it would take to write the best novel you (or someone “like you” in whatever sense that means) can write may not be the relevant question: What you really needed to know was how likely it is that you could make a living as a novelist.


The depressing truth in such a broad comparison is that you may in fact find yourself faced with so many obstacles that there is no realistic path toward the level of success you were hoping for. If you are reading this, I doubt matters are so dire for you that you’re at serious risk of being homeless and starving—but there definitely are people in this world, millions of people, for whom that is not simply a risk but very likely the best they can hope for.

The question I think we are really trying to ask is this: What is the right standard to hold ourselves against?

Unfortunately, I don’t have a clear answer to this question. I have always been an extremely ambitious individual, and I have inclined toward comparisons with the whole world, or with the superstars of my own fields. It is perhaps not surprising, then, that I have consistently failed to live up to my own expectations for my own achievement—even as I surpass what many others expected for me, and have long since left behind what most people expect for themselves and each other.

I would thus not exactly recommend my own standards. Yet I also can’t quite bear to abandon them, out of a deep-seated fear that it is only by holding myself to the patently unreasonable standard of trying to be the next Einstein or Schrodinger or Keynes or Nash that I have even managed what meager achievements I have made thus far.

Of course this could be entirely wrong: Perhaps I’d have achieved just as much if I held myself to a lower standard—or I could even have achieved more, by avoiding the pain and stress of continually failing to achieve such unattainable heights. But I also can’t rule out the possibility that it is true. I have no control group.

In general, what I think I want to say is this: Don’t try to do your best. You have no idea what your best is. Instead, try to find the highest standard you can consistently meet.