Labor history in the making

Oct 24 JDN 2459512

To say that these are not ordinary times would be a grave understatement. I don’t need to tell you all the ways that this interminable pandemic has changed the lives of people all around the world.

But one in particular is of notice to economists: Labor in the United States is fighting back.

Quit rates are at historic highs. Over 100,000 workers in a variety of industries are simultaneously on strike, ranging from farmworkers to nurses and freelance writers to university lecturers.

After decades of acquiescence to ever-worsening working conditions, it seems that American workers are finally mad as hell and not gonna take it anymore.

It’s about time, frankly. The real question is why it took this long. Working conditions in the US have been systematically worse than in the rest of the First World since at least the 1980s. It was substantially easier for me to get the leave I needed to attend my own wedding—held in the US—after starting a job in the UK than it would have been at the same kind of job in the US, because UK law entitles workers to leave from their first day of employment, while US federal law and the law of many states doesn’t require leave at all—not even for people who are sick or have recently given birth.

So, why did it happen now? What changed? The pandemic threw our lives into turmoil, that much is true. But it didn’t fundamentally change the power imbalance between workers and employers. Why was that enough?

I think I know why. The shock from the pandemic didn’t have to be enough to actually change people’s minds about striking—it merely had to be enough to convince people that others would show up. It wasn’t the first-order intention “I want to strike” that changed; it was the second-order belief “Other people want to strike too”.

For a labor strike is a coordination game par excellence. If 1 person strikes, they get fired and replaced. If 2 or 3 or 10 strike, most likely the same thing. But if 10,000 strike? If 100,000 strike? Suddenly corporations have no choice but to give in.

The most important question on your mind when you are deciding whether or not to strike is not, “Do I hate my job?” but “Will my co-workers have my back?”.

Coordination games exhibit a fascinating—and still not well-understood—phenomenon known as Schelling points. People tend to latch onto certain seemingly arbitrary features of their choices, and do so reliably enough that merely having such a focal point can radically increase the rate of successful coordination.

I believe that the pandemic shock was just such a Schelling point. It didn’t change most people’s working conditions all that much: though I can see why nurses in particular would be upset, it’s not clear to me that being a university lecturer is much worse now than it was a year ago. But what the pandemic did do was change everyone’s working conditions, all at once. It was a sudden shock toward work dissatisfaction that applied to almost the entire workforce.

Thus, many people who were previously on the fence about striking were driven over the edge—and then this in turn made others willing to take the leap as well, suddenly confident that they would not be acting alone.

Another important feature of the pandemic shock was that it took away a lot of what people had left to lose. Consider the two following games.

Game A: You and 100 other people each separately, without communicating, decide to choose X or Y. If you all choose X, you each get $20. But if even one of you chooses Y, then everyone who chooses Y gets $1 but everyone who chooses X gets nothing.

Game B: Same as the above, except that if anyone chooses Y, everyone who chooses Y also gets nothing.

Game A is tricky, isn’t it? You want to choose X, and you’d be best off if everyone did. But can you really trust 100 other people to all choose X? Maybe you should take the safe bet and choose Y—but then, they’re thinking the same way.


Game B, on the other hand, is painfully easy: Choose X. Obviously choose X. There’s no downside, and potentially a big upside.

In terms of game theory, both games have the same two Nash equilibria: all-X and all-Y. But in the second game, I made all-X also a weakly dominant strategy equilibrium, and that made all the difference.

We could run these games in the lab, and I’m pretty sure I know what we’d find: In Game A, most people choose X, but some don’t, and with repetition more and more people switch to Y. But in Game B, almost everyone chooses X and keeps on choosing X. Maybe we wouldn’t get unanimity every time, but probably most of the time—because why wouldn’t you choose X? (These are testable hypotheses! I could in fact run this experiment! Maybe I should?)
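For concreteness, the dominance claim can be checked by brute force. Here’s a minimal sketch in Python (the function names and the encoding of “how many others chose Y” are mine, not anything standard):

```python
# Payoffs for the two coordination games described above.
# "others_y" is how many of the 100 other players chose Y.

def payoff_A(my_choice, others_y):
    # Game A: unanimous X pays $20 each; otherwise Y-choosers get $1
    # and X-choosers get nothing.
    anyone_chose_y = others_y > 0 or my_choice == "Y"
    if not anyone_chose_y:
        return 20
    return 1 if my_choice == "Y" else 0

def payoff_B(my_choice, others_y):
    # Game B: unanimous X pays $20 each; otherwise everyone gets nothing.
    anyone_chose_y = others_y > 0 or my_choice == "Y"
    return 20 if not anyone_chose_y else 0

# X weakly dominates Y iff it is never worse and sometimes strictly better,
# against every possible number of other players choosing Y:
for payoff, name in [(payoff_A, "A"), (payoff_B, "B")]:
    never_worse = all(payoff("X", k) >= payoff("Y", k) for k in range(101))
    sometimes_better = any(payoff("X", k) > payoff("Y", k) for k in range(101))
    print(f"Game {name}: X weakly dominates Y? {never_worse and sometimes_better}")
```

In Game A, X fails the dominance test because once even one other person defects, Y’s guaranteed $1 beats X’s $0; in Game B that fallback disappears, so X dominates.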

It’s hard to say at this point how effective these strikes will be. Surely there will be some concessions won—there are far too many workers striking for them all to get absolutely nothing. But it remains uncertain whether the concessions will be small, token changes just to break up the strikes, or serious, substantive restructuring of how work is done in the United States.

If the latter sounds overly optimistic, consider that this is basically what happened in the New Deal. Those massive—and massively successful—reforms were not generated out of nowhere; they were the result of the economic crisis of the Great Depression and substantial pressure by organized labor. We may yet see a second New Deal (a Green New Deal?) in the 2020s if labor organizations can continue putting the pressure on.

The most important thing in making such a grand effort possible is believing that it’s possible—only if enough people believe it can happen will enough people take the risk and put in the effort to make it happen. Apathy and cynicism are the most powerful weapons of the status quo.


We are witnessing history in the making. Let’s make it in the right direction.

Stupid problems, stupid solutions

Oct 17 JDN 2459505

Krugman thinks we should Mint The Coin: Mint a $1 trillion platinum coin and then deposit it at the Federal Reserve, thus creating, by fiat, the money to pay for the current budget without increasing the national debt.

This sounds pretty stupid. Quite frankly, it is stupid. But sometimes stupid problems require stupid solutions. And the debt ceiling is an incredibly stupid problem.

Let’s be clear about this: Congress already passed the budget. They had a right to vote it down—that is indeed their Constitutional responsibility. But they passed it. And now that the budget is passed, including all its various changes to taxes and spending, it necessarily requires a certain amount of debt increase to make it work.

There’s really no reason to have a debt ceiling at all. This is an arbitrary self-imposed credit constraint on the US government, which is probably the single institution in the world that least needs to worry about credit constraints. The US is currently borrowing at extremely low interest rates, and has never defaulted in 200 years. There is no reason it should be worrying about taking on additional debt, especially when it is being used to pay for important long-term investments such as infrastructure and education.

But if we’re going to have a debt ceiling, it should be a simple formality. Congress does the calculation to see how much debt will be needed, and if it accepts that amount, passes the budget and raises the debt ceiling as necessary. If for whatever reason they don’t want to incur the additional debt, they should make changes to the budget accordingly—not pass the budget and then act shocked when they need to raise the debt ceiling.

In fact, there is a pretty good case to be made that the debt ceiling is a violation of the Fourteenth Amendment, which states in Section 4: “The validity of the public debt of the United States, authorized by law, including debts incurred for payment of pensions and bounties for services in suppressing insurrection or rebellion, shall not be questioned.” This was originally intended to secure Civil War debt, but the Supreme Court (in Perry v. United States, 1935) read it broadly to protect all legally incurred US public debt—an interpretation that arguably renders the debt ceiling un-Constitutional.

Of course, actually sending it to the Supreme Court would take a long time—too long to avoid turmoil in financial markets if the debt ceiling is not raised. So perhaps Krugman is right: Perhaps it’s time to Mint The Coin and fight stupid with stupid.

Marriage and matching

Oct 10 JDN 2459498

When this post goes live, I will be married. We already had a long engagement, but it was made even longer by the pandemic: We originally planned to be married in October 2020, but then rescheduled for October 2021. Back then, we naively thought that the pandemic would be under control by now and we could have a wedding without COVID testing and masks. As it turns out, all we really accomplished was having a wedding where everyone is vaccinated—and the venue still required testing and masks. Still, it should at least be safer than it was last year, because everyone is vaccinated.

Since marriage is on my mind, I thought I would at least say a few things about the behavioral economics of marriage.

Now when I say the “economics of marriage” you likely have in mind things like tax laws that advantage (or disadvantage) marriage at different incomes, or the efficiency gains from living together that allow you to save money relative to each having your own place. That isn’t what I’m interested in.

What I want to talk about today is something a bit less economic, but more directly about marriage: the matching process by which one finds a spouse.

Economists would refer to marriage as a matching market. Unlike a conventional market where you can buy and sell arbitrary quantities, marriage is (usually; polygamy notwithstanding) a one-to-one arrangement. And unlike even the job market (which is also a one-to-one matching market), marriage usually doesn’t involve direct monetary payments (though in cultures with dowries it arguably does).

The usual model of a matching market has two separate pools: Employers and employees, for example. Typical heteronormative analyses of marriage have done likewise, separating men and women into different pools. But it turns out that sometimes men marry men and women marry women.

So what happens to our matching theory if we allow the pools to overlap?

I think the most sensible way to do it, actually, is to have only one pool: people who want to get married. Then, the way we capture the fact that most—but not all—men only want to marry women, and most—but not all—women only want to marry men is through the utility function: Heterosexuals are simply those for whom a same-sex match would have very low utility. This would effectively model marriage as a form of the stable roommates problem. (Oh my god, they were roommates!)

The stable roommates problem actually turns out to be harder than the conventional (heteronormative) stable marriage problem; in fact, while the hetero marriage problem (as I’ll henceforth call it) guarantees at least one stable matching for any preference ordering, the queer marriage problem can fail to have any stable solutions. While the hetero marriage problem ensures that everyone will eventually be matched to someone (if the number of men is equal to the number of women), sadly, the queer marriage problem can result in some people being forever rejected and forever alone. (There. Now you can blame the gays for ruining something: We ruined marriage matching.)

The queer marriage problem is actually more general than the hetero marriage problem: The hetero marriage problem is just the queer marriage problem with a particular utility function that assigns everyone strictly gendered preferences.

The best-known algorithm for the queer marriage problem—Robert Irving’s stable roommates algorithm—extends the standard Gale-Shapley algorithm for the hetero marriage problem, with the same O(n^2) complexity in theory but a considerably more complicated implementation in practice. Honestly, while I grok the standard algorithm well enough to explain it to someone, I’m not sure I completely follow this one.

Then again, maybe preference orderings aren’t such a great approach after all. There has been a movement in economics toward what is called ordinal utility, where we speak only of preference orderings: You can like A more than B, but there’s no way to say how much more. But I for one am much more inclined toward cardinal utility, where differences have magnitudes: I like Coke more than Pepsi, and I like getting massaged more than being stabbed—and the difference between Coke and Pepsi is a lot smaller than the difference between getting massaged and being stabbed. (Many economists make much of the notion that even cardinal utility is only defined up to a positive affine transformation, but I’ve got some news for you: So are temperature and time. All you are really doing by choosing an affine transformation is assigning a starting point and a unit of measurement. Temperature has a sensible absolute zero to use as a starting point, you say? Well, so does utility—not existing.)

With cardinal utility, I can offer you a very simple naive algorithm for finding an optimal match: Just try out every possible set of matchings and pick the one that has the highest total utility.

There are n!/((n/2)! 2^(n/2)) possible matchings to check—3 for four people, 15 for six, 105 for eight—so brute force could take a very long time, but it works. In fact, finding a maximum-total-utility matching is just the maximum-weight matching problem, which Edmonds’ blossom algorithm solves in polynomial time—so it’s definitely not NP-hard, though the efficient algorithm is far from obvious.
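For what it’s worth, the naive algorithm is only a few lines. A sketch in Python (my own function names; `u[a][b]` is assumed to be the utility person a assigns to a match with b, and everyone is assumed to end up paired):

```python
def all_matchings(people):
    """Recursively yield every way to pair up an even-sized list of people."""
    if not people:
        yield []
        return
    first, rest = people[0], people[1:]
    for i in range(len(rest)):
        partner = rest[i]
        remaining = rest[:i] + rest[i + 1:]
        for sub in all_matchings(remaining):
            yield [(first, partner)] + sub

def total_utility(matching, u):
    # Additive cardinal utility: each pair contributes both partners' utilities.
    return sum(u[a][b] + u[b][a] for a, b in matching)

def best_matching(people, u):
    # The naive algorithm: try every matching, keep the best.
    return max(all_matchings(people), key=lambda m: total_utility(m, u))

# The number of matchings grows as n!/((n/2)! 2^(n/2)):
# 3 for four people, 15 for six, 105 for eight.
print(len(list(all_matchings(["A", "B", "C", "D"]))))  # 3
```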

Moreover, even once we find a utility-maximizing matching, that doesn’t guarantee a stable matching: Some people might still prefer to change even if it would end up reducing total utility.

Here’s a simple set of preferences for which that becomes an issue. In this table, the row is the person making the evaluation, and the columns are how much utility they assign to a match with each person. The total utility of a match is just the sum of utility from the two partners. The utility of “matching with yourself” is the utility of not being matched at all.


      A  B  C  D
A     0  3  2  1
B     2  0  3  1
C     3  2  0  1
D     3  2  1  0

Since everyone prefers every other person to not being matched at all (likely not true in real life!), the optimal matchings will always match everyone with someone. Thus, there are actually only 3 matchings to compare:

AB, CD: (3+2)+(1+1) = 7

AC, BD: (2+3)+(1+2) = 8

AD, BC: (1+3)+(3+2) = 9

The optimal matching, in utilitarian terms, is to match A with D and B with C. This yields total utility of 9.

But that’s not stable, because A prefers C over D, and C prefers A over B. So A and C would choose to pair up instead.

In fact, this set of preferences yields no stable matching at all. Whoever ends up with D is somebody else’s top choice, and that somebody prefers them to their own partner; meanwhile D’s partner, stuck with everyone’s last choice, is happy to defect with anyone.

There is always a nonempty set of utility-maximizing matchings. (There must be at least one, and in principle every matching could tie for the maximum.) This follows simply from finiteness: any finite nonempty set of real numbers has a maximum.

As this counterexample shows, there isn’t always a stable matching.
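We can check all of this mechanically. Here’s a sketch in Python using the utility table above; a matching is unstable if it admits a blocking pair, two people who each prefer one another to their current partners:

```python
# The utility table from the example: u[row][col] is the utility the
# row person assigns to a match with the column person.
u = {
    "A": {"A": 0, "B": 3, "C": 2, "D": 1},
    "B": {"A": 2, "B": 0, "C": 3, "D": 1},
    "C": {"A": 3, "B": 2, "C": 0, "D": 1},
    "D": {"A": 3, "B": 2, "C": 1, "D": 0},
}

# The three ways to pair up all four people.
matchings = [
    [("A", "B"), ("C", "D")],
    [("A", "C"), ("B", "D")],
    [("A", "D"), ("B", "C")],
]

def is_stable(matching, u):
    partner = {}
    for a, b in matching:
        partner[a], partner[b] = b, a
    # A blocking pair is two people who each prefer one another to
    # their current partners; stable means no blocking pair exists.
    people = list(partner)
    for x in people:
        for y in people:
            if x == y:
                continue
            if u[x][y] > u[x][partner[x]] and u[y][x] > u[y][partner[y]]:
                return False
    return True

for m in matchings:
    total = sum(u[a][b] + u[b][a] for a, b in m)
    print(m, "total utility:", total, "stable:", is_stable(m, u))
```

This reproduces the totals of 7, 8, and 9 above—and confirms that none of the three matchings is stable, since each one leaves some mutually-preferring pair free to defect.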

So here are a couple of interesting theoretical questions that this gives rise to:
1. If there is a stable matching, must it be in the set of utility-maximizing matchings?

2. If there is a stable matching, must all utility-maximizing matchings be stable?

Question 1 asks whether being stable implies being utility-maximizing.
Question 2 asks whether being utility-maximizing implies being stable—conditional on there being at least one stable possibility.

So, what is the answer to these questions? I don’t know! I’m actually not sure anyone does! We may have stumbled onto cutting-edge research!

I found a paper showing that these properties do not hold when you are doing the hetero marriage problem and you use multiplicative utility for matchings, but this is the queer marriage problem, and moreover I think multiplicative utility is the wrong approach. It doesn’t make sense to me to say that a marriage where one person is extremely happy and the other is indifferent to leaving is equivalent to a marriage where both partners are indifferent to leaving, but that’s what you’d get if you multiply 1*0 = 0. And if you allow negative utility from matchings (i.e. some people would prefer to remain single than to be in a particular match—which seems sensible enough, right?), since -1*-1 = 1, multiplicative utility yields the incredibly perverse result that two people who despise each other constitute a great match. Additive utility solves both problems: 1+0 = 1 and -1+-1 = -2, so, as we would hope, like + indifferent = like, and hate + hate = even more hate.
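The perversity is easy to see side by side (toy numbers of my choosing: 1 for a match you love, 0 for a match you’re indifferent to leaving, -1 for a match you hate):

```python
# Two ways to aggregate a couple's utilities for the same match:
def additive(u1, u2):
    return u1 + u2

def multiplicative(u1, u2):
    return u1 * u2

# One partner ecstatic (1), the other indifferent to leaving (0):
print(multiplicative(1, 0), additive(1, 0))      # 0 1
# Two partners who each despise the match (-1):
print(multiplicative(-1, -1), additive(-1, -1))  # 1 -2
```

Multiplicative aggregation rates two mutual haters above an ecstatic-plus-indifferent couple; additive aggregation orders them sensibly.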

There is something to be said for the idea that two people who kind of like each other is better than one person ecstatic and the other miserable, but (1) that’s actually debatable, isn’t it? And (2) I think that would be better captured by somehow penalizing inequality in matches, not by using multiplicative utility.

Of course, I haven’t done a really thorough literature search, so other papers may exist. Nor have I spent a lot of time just trying to puzzle through this problem myself. Perhaps I should; this is sort of my job, after all. But even if I had the spare energy to invest heavily in research at the moment (which I sadly do not), I’ve been warned many times that pure theory papers are hard to publish, and I have enough trouble getting published as it is… so perhaps not.

My intuition is telling me that 2 is probably true but 1 is probably false. That is, I would guess that the set of stable matchings, when it’s not empty, is actually larger than the set of utility-maximizing matchings.

I think where I’m getting that intuition is from the properties of Pareto-efficient allocations: Any utility-maximizing allocation is necessarily Pareto-efficient, but many Pareto-efficient allocations are not utility-maximizing. A stable matching is sort of a strengthening of the notion of a Pareto-efficient allocation (though the problem of finding a Pareto-efficient matching for the general queer marriage problem has been solved).

But it is interesting to note that while a Pareto-efficient allocation must exist (typically there are many, but there must be at least one, because it’s impossible to have a cycle of Pareto improvements as long as preferences are transitive), it’s entirely possible to have no stable matchings at all.

Against “doing your best”

Oct 3 JDN 2459491

It’s an appealing sentiment: Since we all have different skill levels, rather than be held to some constant standard which may be easy for some but hard for others, we should each do our best. This will ensure that we achieve the best possible outcome.

Yet it turns out that this advice is not so easy to follow: What is “your best”?

Is your best the theoretical ideal of what your performance could be if all obstacles were removed and you worked at your greatest possible potential? Then no one in history has ever done their best, and when people get close, they usually end up winning Nobel Prizes.

Is your best the performance you could attain if you pushed yourself to your limit, ignored all pain and fatigue, and forced yourself to work at maximum effort until you literally can’t anymore? Then doing your best doesn’t sound like such a great thing anymore—and you’re certainly not going to be able to do it all the time.

Is your best the performance you would attain by continuing to work at your usual level of effort? Then how is that “your best”? Is it the best you could attain if you work at a level of effort that is considered standard or normative? Is it the best you could do under some constraint limiting the amount of pain or fatigue you are willing to bear? If so, what constraint?

How does “your best” change under different circumstances? Does it become less demanding when you are sick, or when you have a migraine? What if you’re depressed? What if you’re simply not feeling motivated? What if you can’t tell whether this demotivation is a special circumstance, a depression symptom, a random fluctuation, or a failure to motivate yourself?

There’s another problem: Sometimes you really aren’t good at something.

A certain fraction of performance in most tasks is attributable to something we might call “innate talent”; be it truly genetic or fixed by your early environment, it nevertheless is something that as an adult you are basically powerless to change. Yes, you could always train and practice more, and your performance would thereby improve. But it can only improve so much; you are constrained by your innate talent or lack thereof. No amount of training effort will ever allow me to reach the basketball performance of Michael Jordan, the painting skill of Leonardo Da Vinci, or the mathematical insight of Leonhard Euler. (Of the three, only the third is even visible from my current horizon. As someone with considerable talent and training in mathematics, I can at least imagine what it would be like to be as good as Euler—though I surely never will be. I can do most of the mathematical methods that Euler was famous for; but could I have invented them?)

In fact it’s worse than this; there are levels of performance that would be theoretically possible for someone of your level of talent, yet would be so costly to obtain as to be clearly not worth it. Maybe, after all, there is some way I could become as good a mathematician as Euler—but if it would require me to work 16-hour days doing nothing but studying mathematics for the rest of my life, I am quite unwilling to do so.

With this in mind, what would it mean for me to “do my best” in mathematics? To commit those 16-hour days for the next 30 years and win my Fields Medal—if it doesn’t kill me first? If that’s not what we mean by “my best”, then what do we mean, after all?

Perhaps we should simply abandon the concept, and ask instead what successful people actually do.

This will of course depend on what they were successful at; the behavior of basketball superstars is considerably different from the behavior of Nobel Laureate physicists, which is in turn considerably different from the behavior of billionaire CEOs. But in theory we could each decide for ourselves which kind of success we actually would desire to emulate.

Another pitfall to avoid is looking only at superstars and not comparing them with a suitable control group. Every Nobel Laureate physicist eats food and breathes oxygen, but eating food and breathing oxygen will not automatically give you good odds of winning a Nobel (though I guess your odds are in fact a lot better relative to not doing them!). It is likely that many of the things we observe successful people doing—even less trivial things, like working hard and taking big risks—are in fact the sort of thing that a great many people do with far less success.

Upon making such a comparison, one of the first things we would notice is that the vast majority of highly-successful people were born with a great deal of privilege. Most of them were born rich or at least upper-middle-class; nearly all of them were born healthy without major disabilities. Yes, there are exceptions to any particular form of privilege, and even particularly exceptional individuals who attained superstar status with more headwinds than tailwinds; but the overwhelming pattern is that the people who seem to hit triples in life are mostly people who were born on third base.

But setting that aside, or recalibrating one’s expectations to try to attain a level of success often achieved by people with roughly the same level of privilege as oneself, we must ask: How often? Should you aspire to the median? The top 20%? The top 10%? The top 1%? And what is your proper comparison group? Should I be comparing against Americans, White male Americans, economists, queer economists, people with depression and chronic migraines, or White/Native American male queer economists with depression and chronic migraines who are American expatriates in Scotland? Make the criteria too narrow, and there won’t be many left in your sample. Make them instead too broad, and you’ll include people with very different circumstances who may not be a fair comparison. Perhaps some sort of weighted average of different groups could work—but with what weighting?

Or maybe it’s right to compare against a very broad group, since this is what ultimately decides our life prospects. What it would take to write the best novel you (or someone “like you” in whatever sense that means) can write may not be the relevant question: What you really needed to know was how likely it is that you could make a living as a novelist.


The depressing truth in such a broad comparison is that you may in fact find yourself faced with so many obstacles that there is no realistic path toward the level of success you were hoping for. If you are reading this, I doubt matters are so dire for you that you’re at serious risk of being homeless and starving—but there definitely are people in this world, millions of people, for whom that is not simply a risk but very likely the best they can hope for.

The question I think we are really trying to ask is this: What is the right standard to hold ourselves against?

Unfortunately, I don’t have a clear answer to this question. I have always been an extremely ambitious individual, and I have inclined toward comparisons with the whole world, or with the superstars of my own fields. It is perhaps not surprising, then, that I have consistently failed to live up to my own expectations for my own achievement—even as I surpass what many others expected for me, and have long-since left behind what most people expect for themselves and each other.

I would thus not exactly recommend my own standards. Yet I also can’t quite bear to abandon them, out of a deep-seated fear that it is only by holding myself to the patently unreasonable standard of trying to be the next Einstein or Schrodinger or Keynes or Nash that I have even managed what meager achievements I have made thus far.

Of course this could be entirely wrong: Perhaps I’d have achieved just as much if I held myself to a lower standard—or I could even have achieved more, by avoiding the pain and stress of continually failing to achieve such unattainable heights. But I also can’t rule out the possibility that it is true. I have no control group.

In general, what I think I want to say is this: Don’t try to do your best. You have no idea what your best is. Instead, try to find the highest standard you can consistently meet.

Where did all that money go?

Sep 26 JDN 2459484

Since 9/11, the US has spent a staggering $14 trillion on the military, averaging $700 billion per year. Some of this was the routine spending necessary to maintain a large standing army (though it is fair to ask whether we really need our standing army to be quite this large).

But a recent study by the Costs of War Project suggests that a disturbing amount of this money has gone to defense contractors: Somewhere between one-third and one-half, or in other words between $5 and $7 trillion.

This is revenue, not profit; presumably these defense contractors also incurred various costs in materials, labor, and logistics. But even as raw revenue that is an enormous amount of money. Apple, one of the largest corporations in the world, takes in on average about $300 billion per year. Over 20 years, that would be $6 trillion—so, our government has basically spent as much on defense contractors as the entire world spent on Apple products.

Of that $5 to $7 trillion, one-fourth to one-third went to just five corporations. That’s over $2 trillion just to Lockheed Martin, Boeing, General Dynamics, Raytheon, and Northrop Grumman. We pay more each year to Lockheed Martin than we do to the State Department and USAID.

Looking at just profit, each of these corporations appears to make a gross profit margin of about 10%. So we’re looking at something like $200 billion over 20 years—$10 billion per year—just handed over to shareholders.

And what were we buying with this money? Mostly overengineered high-tech military equipment that does little or nothing to actually protect soldiers, win battles, or promote national security. (It certainly didn’t do much to stop the Taliban from retaking control as soon as we left Afghanistan!)

Eisenhower tried to warn us about the military-industrial complex, but we didn’t listen.

Even when the equipment they sell us actually does its job, it still raises some serious questions about whether these are things we ought to be privatizing. As I mentioned in a post on private prisons several years ago, there are really three types of privatization of government functions.

Type 1 is innocuous: There are certain products and services that privatized businesses already provide in the open market and the government also has use for. There’s no reason the government should hesitate to buy wrenches or toothbrushes or hire cleaners or roofers.

Type 3 is the worst: There have been attempts to privatize fundamental government services, such as prisons, police, and the military. This is inherently unjust and undemocratic and must never be allowed. The use of force must never be for profit.

But defense contractors lie in the middle area, type 2: contracting services to specific companies that involve government-specific features such as military weapons. It’s true, there’s not that much difference functionally between a civilian airliner and a bomber plane, so it makes at least some sense that Boeing would be best qualified to produce both. This is not an obviously nonsensical idea. But there are still some very important differences, and I am deeply uneasy with the very concept of private corporations manufacturing weapons.


It’s true, there are some weapons that private companies make for civilians, such as knives and handguns. I think it would be difficult to maintain a free society while banning all such production, and it is literally impossible to ban everything that could potentially be used as a weapon (Wrenches? Kitchen knives? Tree branches!?). But we strictly regulate such production for very good reasons—and we probably don’t go far enough, really.

Moreover, there’s a pretty clear difference in magnitude if not in kind between a corporation making knives or even handguns and a corporation making cruise missiles—let alone nuclear missiles. Even if there is a legitimate overlap in skills and technology between making military weapons and whatever other products a corporation might make for the private market, it might still ultimately be better to nationalize the production of military weapons.

And then there are corporations that essentially do nothing but make military weapons—and we’re back to Lockheed Martin again. Boeing does in fact make most of the world’s civilian airliners, in addition to making some military aircraft and missiles. But Lockheed Martin? They pretty much just make fighters and bombers. This isn’t a company with generalized aerospace manufacturing skills that we are calling upon to make fighters in a time of war. This is an entire private, for-profit corporation that exists for the sole purpose of making fighter planes.

I really can’t see much reason not to simply nationalize Lockheed Martin. They should be a division of the US Air Force or something.

I guess, in theory, the possibility of competing between different military contractors could potentially keep costs down… but, uh, how’s that working out for you? The acquisition costs of the F-35 are expected to run over $400 billion—the cost of the whole program a whopping $1.5 trillion. That doesn’t exactly sound like we’ve been holding costs down through competition.

And there really is something deeply unseemly about the idea of making profits through war. There’s a reason we have that word “profiteering”. Yes, manufacturing weapons has costs, and you should of course pay your workers and material suppliers at fair rates. But do we really want corporations to be making billions of dollars in profits for making machines of death?

But if nationalizing defense contractors or making them into nonprofit institutions seems too radical, I think there’s one very basic law we ought to make: No corporation with government contracts may engage in any form of lobbying. That’s such an obvious conflict of interest, such a clear opening for regulatory capture, that there’s really no excuse for it. If there must be shareholders profiting from war, at the very least they should have absolutely no say in whether we go to war or not.

And yet, we do allow defense contractors to spend on lobbying—and spend they do, tens of millions of dollars every year. Does all this lobbying affect our military budget or our willingness to go to war?

They must think so.

Hypocrisy is underrated

Sep 12 JDN 2459470

Hypocrisy isn’t a good thing, but it isn’t nearly as bad as most people seem to think. Often accusing someone of hypocrisy is taken as a knock-down argument for everything they are saying, and this is just utterly wrong. Someone can be a hypocrite and still be mostly right.

Often people are accused of hypocrisy when they are not being hypocritical; for instance, the right wing seems to think that “They want higher taxes on the rich, but they are rich!” is hypocrisy, when in fact it’s simply altruism. (If they had wanted the rich guillotined, that would be hypocrisy. Maybe the problem is that the right wing can’t tell the difference?) Even worse, “They live under capitalism but they want to overthrow capitalism!” is not even close to hypocrisy—indeed, how would someone overthrow a system they weren’t living under? (There are many things wrong with Marxists, but that is not one of them.)

But in fact I intend something stronger: Hypocrisy itself just isn’t that bad.


There are currently two classes of Republican politicians with regard to the COVID vaccines: Those who are consistent in their principles and don’t get the vaccines, and those who are hypocrites and get the vaccines while telling their constituents not to. Of the two, who is better? The hypocrites. At least they are doing the right thing even as they say things that are very, very wrong.

There are really four cases to consider. The principles you believe in could be right, or they could be wrong. And you could follow those principles, or you could be a hypocrite. These two factors are independent.

If your principles are right and you are consistent, that’s the best case; if your principles are right and you are a hypocrite, that’s worse.

But if your principles are wrong and you are consistent, that’s the worst case; if your principles are wrong and you are a hypocrite, that’s better.

In fact I think for most things the ordering goes like this: Consistent Right > Hypocritical Wrong > Hypocritical Right > Consistent Wrong. Your behavior counts for more than your principles—so if you’re going to be a hypocrite, it’s better for your good actions to not match your bad principles.

Obviously if we could get people to believe good moral principles and then follow them, that would be best. And we should in fact be working to achieve that.

But if you know that someone’s moral principles are wrong, it doesn’t accomplish anything to accuse them of being a hypocrite. If it’s true, that’s a good thing.

Here’s a pretty clear example for you: Anyone who says that the Bible is infallible but doesn’t want gay people stoned to death is a hypocrite. The Bible is quite clear on this matter; Leviticus 20:13 really doesn’t leave much room for interpretation. By this standard, most Christians are hypocrites—and thank goodness for that. I owe my life to the hypocrisy of millions.

Of course if I could convince them that the Bible isn’t infallible—perhaps by pointing out all the things it says that contradict their most deeply-held moral and factual beliefs—that would be even better. But the last thing I want to do is make their behavior more consistent with their belief that the Bible is infallible; that would turn them into fanatical monsters. The Spanish Inquisition was very consistent in behaving according to the belief that the Bible is infallible.

Here’s another example: Anyone who thinks that cruelty to cats and dogs is wrong but is willing to buy factory-farmed beef and ham is a hypocrite. Any principle that would tell you that it’s wrong to kick a dog or cat would tell you that the way cows and pigs are treated in CAFOs is utterly unconscionable. But if you are really unwilling to give up eating meat and you can’t find or afford free-range beef, it still would be bad for you to start kicking dogs in a display of your moral consistency.

And one more example for good measure: The leaders of any country who resist human rights violations abroad but tolerate them at home are hypocrites. Obviously the best thing to do would be to fight human rights violations everywhere. But perhaps for whatever reason you are unwilling or unable to do this—one disturbing truth is that many human rights violations at home (such as draconian border policies) are often popular with your local constituents. Human-rights violations abroad are also often more severe—detaining children at the border is one thing, a full-scale genocide is quite another. So, for good reasons or bad, you may decide to focus your efforts on resisting human rights violations abroad rather than at home; this would make you a hypocrite. But it would still make you much better than a more consistent leader who simply ignores all human rights violations wherever they may occur.

In fact, there are cases in which it may be optimal for you to knowingly be a hypocrite. If you have two sets of competing moral beliefs, and you don’t know which is true but you know that as a whole they are inconsistent, your best option is to apply each set of beliefs in the domain for which you are most confident that it is correct, while searching for more information that might allow you to correct your beliefs and reconcile the inconsistency. If you are self-aware about this, you will know that you are behaving in a hypocritical way—but you will still behave better than you would if you picked the wrong beliefs and stuck to them dogmatically. In fact, given a reasonable level of risk aversion, you’ll be better off being a hypocrite than you would by picking one set of beliefs arbitrarily (say, at the flip of a coin). At least then you avoid the worst-case scenario of being the most wrong.

There is yet another factor to take into consideration. Sometimes following your own principles is hard.

Considerable ink has been spilled on the concept of akrasia, or “weakness of will”, in which we judge that A is better than B yet still find ourselves doing B. Philosophers continue to debate to this day whether this really happens. As a behavioral economist, I observe it routinely, perhaps even daily. In fact, I observe it in myself.

I think the philosophers’ mistake is to presume that there is one simple, well-defined “you” that makes all observations and judgments and takes actions. Our brains are much more complicated than that. There are many “you”s inside your brain, each with its own capacities, desires, and judgments. Yes, there is some important sense in which they are all somehow unified into a single consciousness—by a mechanism which still eludes our understanding. But it doesn’t take esoteric cognitive science to see that there are many minds inside you: Haven’t you ever felt an urge to do something you knew you shouldn’t do? Haven’t you ever succumbed to such an urge—drank the drink, eaten the dessert, bought the shoes, slept with the stranger—when it seemed so enticing but you knew it wasn’t really the right choice?

We even speak of being “of two minds” when we are ambivalent about something, and I think there is literal truth in this. The neural networks in your brain are forming coalitions, and arguing between them over which course of action you ought to take. Eventually one coalition will prevail, and your action will be taken; but afterward your reflective mind need not always agree that the coalition which won the vote was the one that deserved to.

The evolutionary reason for this is simple: We’re a kludge. We weren’t designed from the top down for optimal efficiency. We were the product of hundreds of millions of years of subtle tinkering, adding a bit here, removing a bit there, layering the mammalian, reflective cerebral cortex over the reptilian, emotional limbic system over the ancient, involuntary autonomic system. Combine this with the fact that we are built in pairs, with left and right halves of each kind of brain (and yes, they are independently functional when their connection is severed), and the wonder is that we ever agree with our own decisions.

Thus, there is a kind of hypocrisy that is not a moral indictment at all: You may genuinely and honestly agree that it is morally better to do something and still not be able to bring yourself to do it. You may know full well that it would be better to donate that money to malaria treatment rather than buy yourself that tub of ice cream—you may be on a diet and know full well that the ice cream won’t even benefit you in the long run—and still not be able to stop yourself from buying the ice cream.

Sometimes your feeling of hesitation at an altruistic act may be a useful insight; I certainly don’t think we should feel obliged to give all our income, or even all of our discretionary income, to high-impact charities. (For most people I encourage 5%. I personally try to aim for 10%. If all the middle-class and above in the First World gave even 1% we could definitely end world hunger.) But other times it may lead you astray, make you unable to resist the temptation of a delicious treat or a shiny new toy when even you know the world would be better off if you did otherwise.

Yet when following our own principles is so difficult, it’s not really much of a criticism to point out that someone has failed to do so, particularly when they themselves already recognize that they failed. The inconsistency between behavior and belief indicates that something is wrong, but it may not be any dishonesty or even anything wrong with their beliefs.

I wouldn’t go so far as to say you should stop ever calling out hypocrisy. Sometimes it is clearly useful to do so. But while hypocrisy is often the sign of a moral failing, it isn’t always—and even when it is, often as not the problem is the bad principles, not the behavior inconsistent with them.

Unending nightmares

Sep 19 JDN 2459477

We are living in a time of unending nightmares.

As I write this, we have just passed the 20th anniversary of 9/11. Yet only in the past month were US troops finally withdrawn from Afghanistan—and that withdrawal was immediately followed by a total collapse of the Afghan government and a reinstatement of the Taliban. The United States had been at war for nearly 20 years, spending trillions of dollars and causing thousands of deaths—and seems to have accomplished precisely nothing.

Some left-wing circles have been saying that the Taliban offered surrender all the way back in 2001; this is not accurate. (Alternet even refers to it as an “unconditional surrender”, which is utter nonsense. No one in their right mind—not even the most die-hard imperialist—would ever refuse an unconditional surrender, and the US most certainly did nothing of the sort.)

The Taliban did offer a peace deal in 2001, which would have involved giving the US control of Kandahar and turning Osama bin Laden over to a neutral country (not to the US or any US ally). It would also have granted amnesty to a number of high-level Taliban leaders, which was a major sticking point for the US. In hindsight, should they have taken the deal? Obviously. But I don’t think that was nearly so clear at the time—nor would it have been particularly palatable to most of the American public to leave Osama bin Laden under house arrest in some neutral country (which they never specified by the way; somewhere without US extradition, presumably?) and grant amnesty to the top leaders of the Taliban.

Thus, even after the 20-year nightmare of the war that refused to end, we are still back to the nightmare we were in before—Afghanistan ruled by fanatics who will oppress millions.

Yet somehow this isn’t even the worst unending nightmare, for after a year and a half we are still in the throes of a global pandemic which has now caused over 4.6 million deaths. We are still wearing masks wherever we go—at least, those of us who are complying with the rules. We have gotten vaccinated already, but likely will need booster shots—at least, those of us who believe in vaccines.

The most disturbing part of it all is how many people still aren’t willing to follow the most basic demands of public health agencies.

In case you thought this was just an American phenomenon: Just a few days ago I looked out the window of my apartment to see a protest in front of the Scottish Parliament complaining about vaccine and mask mandates, with signs declaring it all a hoax. (Yes, my current temporary apartment overlooks the Scottish Parliament.)

Some of those signs displayed a perplexing innumeracy. One sign claimed that the vaccines must be stopped because they had killed 1,400 people in the UK. This is not actually true; while there have been 1,400 people in the UK who died after receiving a vaccine, 48 million people in the UK have gotten the vaccine, and many of them were old and/or sick, so, purely by statistics, we’d expect some of them to die shortly afterward. Less than 100 of these deaths are in any way attributable to the vaccine.

But suppose for a moment that we took the figure at face value, and assumed, quite implausibly, that everyone who died shortly after getting the vaccine was in fact killed by the vaccine. This 1,400 figure needs to be compared against the 156,000 UK deaths attributable to COVID itself. Since 7 million people in the UK have tested positive for the virus, this is a fatality rate of over 2%. Even if we suppose that literally everyone in the UK who hasn’t been vaccinated in fact had the virus, that would still only be 20 million (the UK population of 68 million – the 48 million vaccinated) people, so the death rate for COVID itself would still be at least 0.8%—a staggeringly high fatality rate for a pandemic airborne virus. Meanwhile, even on this ridiculous overestimate of the deaths caused by the vaccine, the fatality rate for vaccination would be at most 0.003%.

Thus, even by the anti-vaxers’ own claims, the vaccine is nearly 300 times safer than catching the virus. If we use the official estimates of a 1.9% COVID fatality rate and 100 deaths caused by the vaccines, the vaccines are in fact over 9000 times safer.
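These back-of-the-envelope rates are easy to verify. Here is a quick sketch in Python using only the rough UK figures quoted above (which are approximations from the text, not precise official statistics):

```python
# Rough UK figures as quoted in the text.
uk_population = 68_000_000
vaccinated = 48_000_000
deaths_after_vaccine = 1_400   # the anti-vax figure, taken at face value
covid_deaths = 156_000

# Worst case for the vaccine: assume every death shortly after
# vaccination was actually caused by it.
vaccine_fatality = deaths_after_vaccine / vaccinated        # ~0.003%

# Best case for the virus: assume everyone unvaccinated already had it.
unvaccinated = uk_population - vaccinated                   # 20 million
covid_fatality_floor = covid_deaths / unvaccinated          # ~0.8%

print(f"Vaccine fatality (overestimate): {vaccine_fatality:.4%}")
print(f"COVID fatality (underestimate):  {covid_fatality_floor:.2%}")
print(f"Infection is roughly {covid_fatality_floor / vaccine_fatality:.0f}x deadlier")
```

Even with both assumptions skewed as far as possible in the anti-vaxers’ favor, infection comes out roughly 267 times deadlier than vaccination—the “nearly 300 times safer” figure.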

Yet it does seem to be worse in the United States: 22% of Americans described themselves as opposed to vaccination in general, while only about 2% of Britons said the same.

But this did not translate to such a large difference in actual vaccination: While 70% of people in the UK have received the vaccine, 64% of people in the US have. Both of these figures are tantalizingly close to, yet clearly below, the at least 84% necessary to achieve herd immunity. (Actually some early estimates thought 60-70% might be enough—but epidemiologists no longer believe this, and some think that even 90% wouldn’t be enough.)
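The thresholds being debated come from the standard SIR epidemic model, in which the herd-immunity threshold is 1 − 1/R0. A minimal sketch—the R0 values here are illustrative assumptions of mine, roughly matching early estimates and later Delta-variant estimates, not figures from any official source:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread,
    under the simple SIR approximation: 1 - 1/R0."""
    return 1 - 1 / r0

# Illustrative (assumed) reproduction numbers:
for label, r0 in [("early estimate", 2.5), ("Delta-like", 6.25)]:
    print(f"{label} (R0 = {r0}): threshold = {herd_immunity_threshold(r0):.0%}")
```

An R0 of 2.5 gives the 60% figure from early in the pandemic, while an R0 around 6.25 gives 84%—and if R0 is higher still, the threshold creeps toward the 90%+ that some epidemiologists now fear wouldn’t be enough.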

Indeed, the predominant tone I get from trying to keep up on the current news in epidemiology is fatalism: It’s too late, we’ve already failed to contain the virus, we won’t reach herd immunity, we won’t ever eradicate it. At this point they now all seem to think that COVID is going to become the new influenza, always with us, a major cause of death that somehow recedes into the background and seems normal to us—but COVID, unlike influenza, may stick around all year long. The one glimmer of hope is that influenza itself was severely hampered by the anti-pandemic procedures, and influenza cases and deaths are indeed down in both the US and UK (though not zero, nor as drastically reduced as many have reported).

The contrast between terrorism and pandemics is a sobering one, as pandemics kill far more people, yet somehow don’t provoke anywhere near as committed a response.

9/11 was a massive outlier in terrorism, at 3,000 deaths on a single day; otherwise the average annual death toll from terrorism is about 20,000 worldwide, mostly committed by Islamist groups. Yet the threat is not actually to Americans in particular; annual deaths due to terrorism in the US are fewer than 100—and most of these by right-wing domestic terrorists, not international Islamists.

Meanwhile, in an ordinary year, influenza would kill 50,000 Americans and somewhere between 300,000 and 700,000 people worldwide. COVID in the past year and a half has killed over 650,000 Americans and 4.6 million people worldwide—annualize that and it would be 400,000 per year in the US and 3 million per year worldwide.
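Annualizing those totals is straightforward division over the roughly year and a half elapsed (the figures in the text round the results down slightly):

```python
# COVID death tolls quoted above, spread over ~1.5 years of pandemic.
years = 1.5
us_deaths = 650_000
world_deaths = 4_600_000

print(f"US: about {us_deaths / years:,.0f} deaths per year")
print(f"Worldwide: about {world_deaths / years:,.0f} deaths per year")
```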

Yet in response to terrorism we as a country were prepared to spend $2.3 trillion, lose nearly 4,000 US and allied troops, and kill nearly 50,000 civilians—not even counting the over 60,000 enemy soldiers killed. It’s not even clear that this accomplished anything as far as reducing terrorism—by some estimates it actually made it worse.

Were we prepared to respond so aggressively to pandemics? Certainly not to influenza; we somehow treat all those deaths as normal or inevitable. In response to COVID we did spend a great deal of money, even more than the wars in fact—a total of nearly $6 trillion. This was a very pleasant surprise to me (it’s the first time in my lifetime I’ve witnessed a serious, not watered-down Keynesian fiscal stimulus in the United States). And we imposed lockdowns—but these were all too quickly removed, despite the pleading of public health officials. It seems that our governments tried to impose an aggressive response, but then too many of the citizens pushed back against it, unwilling to give up their “freedom” (read: convenience) in the name of public safety.

For the wars, all most of us had to do was pay some taxes and sit back and watch; but for the pandemic we were actually expected to stay home, wear masks, and get shots? Forget it.

Politics was clearly a very big factor here: In the US, the COVID death rate map and the 2020 election map look almost identical. By and large, people who voted for Biden have been wearing masks and getting vaccinated, while people who voted for Trump have not.

But pandemic response is precisely the sort of thing you can’t do halfway. If one area is containing a virus and another isn’t, the virus will still remain uncontained. (As some have remarked, it’s rather like having a “peeing section” of a swimming pool. Much worse, actually, as urine contains relatively few bacteria—but not zero—and is quickly diluted by the huge quantities of water in a swimming pool.)

Indeed, that seems to be what has happened, and why we can’t seem to return to normal life despite months of isolation. Since enough people are refusing to make any effort to contain the virus, the virus remains uncontained, and the only way to protect ourselves from it is to continue keeping restrictions in place indefinitely.

Had we simply kept the original lockdowns in place awhile longer and then made sure everyone got the vaccine—preferably by paying them for doing it, rather than punishing them for not—we might have been able to actually contain the virus and then bring things back to normal.

But as it is, this is what I think is going to happen: At some point, we’re just going to give up. We’ll see that the virus isn’t getting any more contained than it ever was, and we’ll be so tired of living in isolation that we’ll finally just give up on doing it anymore and take our chances. Some of us will continue to get our annual vaccines, but some won’t. Some of us will continue to wear masks, but most won’t. The virus will become a part of our lives, just as influenza did, and we’ll convince ourselves that millions of deaths is no big deal.

And then the nightmare will truly never end.

Realistic open borders

Sep 5 JDN 2459463

In an earlier post I lamented the tight restrictions on border crossings that prevail even between allied First World countries. (On a personal note, you’ll be happy to know that our visas have cleared and we are now moved into Edinburgh, cat and all, though we are still in temporary housing and our official biometric residence permits haven’t yet arrived.)

In this post I’d like to speculate on how we might get from our current regime to something more like open borders.

Obviously we can’t simply remove all border restrictions immediately. That would be a political non-starter, and even ethically or economically it wouldn’t make very much sense. There are sensible reasons behind some of our border regulations—just not most of them.

Instead we would want to remove a few restrictions at a time, starting with the most onerous or ridiculous ones.

High on my list in the UK in particular would be the requirement that pets must fly as cargo. I literally can’t think of a good reason for this; it seems practically designed to cost travelers more money and traumatize as many pets as possible. If it’s intended to support airlines somehow, please simply subsidize airlines. (But really, why are you doing that? You should be taxing airlines because of their high carbon emissions. Subsidize boats and trains.) If it’s intended to somehow prevent the spread of rabies, it’s obviously unnecessary, since every pet moved to the UK already has to document a recent rabies vaccine. But this particular rule seems to be a quirk of the UK in particular, hence not very generalizable.

But here’s one that actually seems quite common: Financial requirements for visas. Even tourist visas in most countries cost money, in amounts that seem to vary according to some sort of occult ritual. I can see no sensible economic reason why a visa would be $130 in Vietnam but only $20 in neighboring Cambodia, or why Kazakhstan can be visited for $25 but Azerbaijan costs $100, or why Myanmar costs only $30 but Bhutan will run you over $200.

Work visas are considerably more demanding still.

Financial requirements in the UK are especially onerous; you have to make above a certain salary and have a certain amount of savings in the bank, based on your family size. This was no problem for me personally—but then, it damn well shouldn’t have been; I have a PhD in economics. My salary is now twice what it was as a grad student, and honestly that’s a good deal less than I was hoping for (and would have gotten on the tenure track at an R1 university).

All the countries in the Schengen Area have their own requirements for “financial subsistence” for visa applications, ranging from a trivial €3 in Hungary (not per day, just total; why do they even bother?) or a manageable €14 per day in Latvia, through the more demanding amounts of €45 per day in Germany and Italy, to €92 per day in Switzerland and Liechtenstein, all the way up to the utterly unreasonable €120 per day in France. That would be €43,800 per year, or $51,700. Apparently you must be at least middle class to enter France.

Canada has a similar requirement known as “proof of funds”, but it’s considerably more reasonable, since you can substitute proof of employment and there are no wage minimums for such employment. Even if you don’t already have a job you can still apply and the minimum requirement is actually lower than the poverty line in Canada.

The United States doesn’t impose financial requirements for most visas, but it does have a $160 visa fee. And the H-1B visa in particular (the nearest equivalent to the Skilled Worker visa I’ve got in the UK) requires that your wage or salary be at least the “prevailing wage” in your industry—meaning it is nearly impossible for a company to save money by hiring people on H-1B visas, and hence they have very little incentive to hire H-1B workers. If you are of above-average talent and being paid only average wages, I guess they can save some money that way. But this is not how trade is supposed to work—nobody requires that you pay US prices for goods shipped from China, and if they did, nobody would ever buy anything from China. This is blatant, naked protectionism—but we’re apparently okay with it as long as it’s trade in labor instead of goods.

I wasn’t able to quickly find whether there are similar financial requirements in other countries. Perhaps there aren’t; these are the countries most people actually want to move to anyway. Permanent migration is overwhelmingly toward OECD (read: First World) countries, and is actually helping us sustain our populations in the face of low birth rates.

I must admit, I can see some fiscal benefits for a country not allowing poor people in, but this practice raises some very deep ethical problems: What right do we have to do this?

If someone is born poor in Laredo, Texas, we take responsibility for them as a US citizen. Maybe we don’t treat them particularly well (that is Texas, after all), but we do give them access to certain basic services, such as emergency services, Medicaid, TANF and SNAP. They are allowed to vote, own property, and even hold office in the United States. But if that same person were born in Nuevo Laredo, Tamaulipas—literally less than a mile away, right across the river—they would receive none of these benefits. They would not even be allowed to cross the river without a passport and a visa.

In some ways the contrast is even more dire if we consider a more liberal US state. A poor person born in Chula Vista, California has access to the full array of California services; Medi-Cal is honestly something close to a single-payer healthcare system, though the full morass of privatized US healthcare is layered on top of it. Then there is CalWORKS, CalFresh, and so on. But the same person born in Tijuana, Baja California would get none of these benefits.

They could be the same person. They could look the same and have essentially the same culture—even the same language, given how many Californians speak Spanish and how many Mexicans speak English. But if they were born on the other side of a river (in Texas) or even an arbitrary line (in California), we treat them completely differently. And then to add insult to injury, we won’t even let them across—not in spite of, but because of, how poor and desperate they are. If they were rich and educated, we’d let them come across—but then why would they need to?

“Give me your tired, your poor, your huddled masses yearning to breathe free”?

Some restrictions may apply.

Economists talk often of “trade barriers”, but in real terms we have basically removed all trade barriers in goods. Yes, there are still some small tariffs, and the occasional quota here and there—and these should go away too, especially the quotas, because they don’t even raise revenue—but in general we have an extremely globalized economy in terms of goods. The same complex product, like a car or a smartphone, is often made of parts from a dozen countries.

But when it comes to labor, we are still living in a protectionist world. Crossing borders to work is difficult, time-consuming, and above all, expensive. This dramatically reduces opportunities for workers to move where their labor is most valued—which hurts not only them, but also anyone who would employ them or buy products made by them. The poorest people are those who stand to gain the most from crossing borders, and they are precisely the ones that we work hardest to forbid.

So let’s start with that, shall we? We can keep all this nonsense about passports, visas, background checks, and customs inspections. It’s probably all unnecessary and wasteful and unfair, but politically it’s clearly too popular to remove. Let’s just remove this: No more financial requirements or fees for work visas. If you want to come to another country to work, you have to go through an application and all that; fine. But you shouldn’t have to prove you aren’t poor. Poor people have just as much right to live here as anybody else—and if we let them do so, they’d be a lot less poor.

How to change minds

Aug 29 JDN 2459456

Think for a moment about the last time you changed your mind on something important. If you can’t think of any examples, that’s not a good sign. Think harder; look back further. If you still can’t find any examples, you need to take a deep, hard look at yourself and how you are forming your beliefs. The path to wisdom is not found by starting with the right beliefs, but by starting with the wrong ones and recognizing them as wrong. No one was born getting everything right.

If you remember changing your mind about something, but don’t remember exactly when, that’s not a problem. Indeed, this is the typical case, and I’ll get to why in a moment. Try to remember as much as you can about the whole process, however long it took.

If you still can’t specifically remember changing your mind, try to imagine a situation in which you would change your mind—and if you can’t do that, you should be deeply ashamed and I have nothing further to say to you.

Thinking back to that time: Why did you change your mind?

It’s possible that it was something you did entirely on your own, through diligent research of primary sources or even your own mathematical proofs or experimental studies. This is occasionally something that happens; as an active researcher, it has definitely happened to me. But it’s clearly not the typical case of what changes people’s minds, and it’s quite likely that you have never experienced it yourself.

The far more common scenario—even for active researchers—is far more mundane: You changed your mind because someone convinced you. You encountered a persuasive argument, and it changed the way you think about things.

In fact, it probably wasn’t just one persuasive argument; it was probably many arguments, from multiple sources, over some span of time. It could be as little as minutes or hours; it could be as long as years.

Probably the first time someone tried to change your mind on that issue, they failed. The argument may even have degenerated into shouting and name-calling. You both went away thinking that the other side was composed of complete idiots or heartless monsters. And then, a little later, thinking back on the whole thing, you remembered one thing they said that was actually a pretty good point.

This happened again with someone else, and again with yet another person. And each time your mind changed just a little bit—you became less certain of some things, or incorporated some new information you didn’t know before. The towering edifice of your worldview would not be toppled by a single conversation—but a few bricks here and there did get taken out and replaced.

Or perhaps you weren’t even the target of the conversation; you simply overheard it. This seems especially common in the age of social media, where public and private spaces become blurred and two family members arguing about politics can blow up into a viral post that is viewed by millions. Perhaps you changed your mind not because of what was said to you, but because of what two other people said to one another; perhaps the one you thought was on your side just wasn’t making as many good arguments as the one on the other side.

Now, you may be thinking: Yes, people like me change our minds, because we are intelligent and reasonable. But those people, on the other side, aren’t like that. They are stubborn and foolish and dogmatic and stupid.

And you know what? You probably are an especially intelligent and reasonable person. If you’re reading this blog, there’s a good chance that you are at least above-average in your level of education, rationality, and open-mindedness.

But no matter what beliefs you hold, I guarantee you there is someone out there who shares many of them and is stubborn and foolish and dogmatic and stupid. And furthermore, there is probably someone out there who disagrees with many of your beliefs and is intelligent and open-minded and reasonable.

This is not to say that there’s no correlation between your level of reasonableness and what you actually believe. Obviously some beliefs are more rational than others, and rational people are more likely to hold those beliefs. (If this weren’t the case, we’d be doomed.) Other things equal, an atheist is more reasonable than a member of the Taliban; a social democrat is more reasonable than a neo-Nazi; a feminist is more reasonable than a misogynist; a member of the Human Rights Campaign is more reasonable than a member of the Westboro Baptist Church. But reasonable people can be wrong, and unreasonable people can be right.

You should be trying to seek out the most reasonable people who disagree with you. And you should be trying to present yourself as the most reasonable person who expresses your own beliefs.

This can be difficult—especially that first part, as the world (or at least the world spanned by Facebook and Twitter) seems to be filled with people who are astonishingly dogmatic and unreasonable. Often you won’t be able to find any reasonable disagreement. Often you will find yourself in threads full of rage, hatred and name-calling, and you will come away disheartened, frustrated, or even despairing for humanity. The whole process can feel utterly futile.

And yet, somehow, minds change.

Support for same-sex marriage in the US rose from 27% to 70% just since 1997.

Read that date again: 1997. Less than 25 years ago.

The proportion of new marriages which were interracial has risen from 3% in 1967 to 19% today. Given the racial demographics of the US, this is almost at the level of random assortment.

Ironically I think that the biggest reason people underestimate the effectiveness of rational argument is the availability heuristic: We can’t call to mind any cases where we changed someone’s mind completely. We’ve never observed a pi-radian turnaround in someone’s whole worldview, and thus, we conclude that nobody ever changes their mind about anything important.

But in fact most people change their minds slowly and gradually, and are embarrassed to admit they were wrong in public, so they change their minds in private. (One of the best single changes we could make toward improving human civilization would be to make it socially rewarded to publicly admit you were wrong. Even the scientific community doesn’t do this nearly as well as it should.) Often changing your mind doesn’t even really feel like changing your mind; you just experience a bit more doubt, learn a bit more, and repeat the process over and over again until, years later, you believe something different than you did before. You moved 0.1 or even 0.01 radians at a time, until at last you came all the way around.

It may in fact be that some people’s minds cannot be changed—either on particular issues, or even on any issue at all. But it is so very, very easy to jump to that conclusion after a few bad interactions that I think we should intentionally overcompensate in the opposite direction: Only give up on someone after you have utterly overwhelming evidence that their mind cannot ever be changed in any way.

I can’t guarantee that this will work. Perhaps too many people are too far gone.

But I also don’t see any alternative. If the truth is to prevail, it will be by rational argument. This is the only method that systematically favors the truth. All other methods give equal or greater power to lies.

Capitalism can be fair

Aug 22 JDN 2459449

There are certainly extreme right-wing libertarians who seem to think that capitalism is inherently fair, or that “fairness” is meaningless and (some very carefully defined notion of) liberty is the only moral standard. I am not one of them. I agree that many of the actual practices of modern capitalism as we know it are unfair, particularly in the treatment of low-skill workers.

But lately I’ve been seeing a weirdly frequent left-wing take—Marxist take, really—that goes to the opposite extreme, saying that capitalism is inherently unfair, that the mere fact that capital owners ever get any profit on anything is proof that the system is exploitative and unjust and must be eliminated.

So I decided it would be worthwhile to provide a brief illustration of how, at least in the best circumstances, a capitalist system of labor can in fact be fair and just.

The argument that capitalism is inherently unjust seems to be based on the notion that profit means “workers are paid less than their labor is worth”. I think that the reason this argument is so insidious is that it’s true in one sense—but not true in another. Workers are indeed paid less than the total surplus of their actual output—but, crucially, they are not paid less than what the surplus of their output would have been had the capital owner not provided capital and coordination.

Suppose that we are making some sort of product. To make it more concrete, let’s say shirts. You can make a shirt by hand, but it’s a lot of work, and it takes a long time. Suppose that you, working on your own by hand, can make 1 shirt per day. You can sell each shirt for $10, so you get $10 per day.

Then, suppose that someone comes along who owns looms and sewing machines. They gather you and several other shirt-makers and offer to let you use their machines, in exchange for some of the revenue. With the aid of 9 other workers and the machines, you are able to make 30 shirts per day. You can still sell each shirt for $10, so now there is total revenue of $300.

Whether or not this is fair depends on precisely the bargain that was struck with the owner of the machines. Suppose that he asked for 40% of the revenue. Then the 10 workers including yourself would get (0.60)($300) = $180 to split, presumably evenly, and each get $18 per day. This seems fair; you’re clearly better off than you were making shirts by yourself. The capital owner then gets (0.40)($300) = $120, which is more than each of you, but not by a ridiculous amount; and he probably has costs to deal with in maintaining those machines.

But suppose instead the owner had demanded 80% of the revenue; then you would have to split (0.20)($300) = $60 between you, and each would only get $6 per day. The capital owner would then get (0.80)($300) = $240, 40 times as much as each of you.

Or perhaps instead of a revenue-sharing agreement, the owner offers to pay you a wage. If that wage is $18 per day, it seems fair. If it is $6 per day, it seems obviously unfair.
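The two splits above can be sketched in a few lines of code. (This is just a restatement of the arithmetic in the example; the function name and structure are my own illustration, not anything from economic theory.)

```python
def split_revenue(total_revenue, owner_percent, n_workers):
    """Divide daily revenue between the capital owner and the workers.

    Uses integer dollars and an integer percentage, so the arithmetic
    is exact for the numbers in the example.
    """
    owner_take = total_revenue * owner_percent // 100
    worker_take = (total_revenue - owner_take) // n_workers
    return owner_take, worker_take

# 10 workers make 30 shirts/day at $10 each = $300 total revenue.
revenue = 30 * 10

# The "fair" deal: owner takes 40%.
print(split_revenue(revenue, 40, 10))   # (120, 18): each worker beats the $10/day solo rate

# The "unfair" deal: owner takes 80%.
print(split_revenue(revenue, 80, 10))   # (240, 6): each worker does worse than working alone
```

The key comparison in both cases is against the $10 per day a worker could earn alone, not against the total surplus.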

If this owner is the only employer, then the only alternative he is competing against is working alone. So we would expect him to offer a wage of $10 per day, or maybe slightly more, since working with the machines may be harder or more unpleasant than working by hand.

But if there are many employers, then he is now competing with those employers as well. If he offers $10, someone else might offer $12, and a third might offer $15. Competition should drive the system toward an equilibrium where workers are getting paid their marginal value product—in other words, the wage for one hour of work should equal the additional value added by one more hour of work.

In the case that seems fair, where workers are getting more money than they would have on their own, are they getting paid “less than the value of their labor”? In one sense, yes; the total surplus is not all going to the workers, but is being shared with the owner of the machines. But the more important sense is whether they’d be better off quitting and working on their own—and they obviously would not be.

What value does the capital owner provide? Well, the capital, of course. It’s their property and they are letting other people use it. Also, they incur costs to maintain it.

Of course, it matters how the capital owner obtained that capital. If they are an inventor who made it themselves, it seems obviously just that they should own it. If they inherited it or got lucky on the stock market, it isn’t something they deserve in a deep sense, but it’s reasonable to say they are entitled to it. But if the only reason they have the capital is by theft, fraud, or exploitation, then obviously they don’t deserve it. In practice, there are very few of the first category, a huge number of the second, and all too many of the third. Yet this is not inherent to the capitalist work arrangement. Many capital owners don’t deserve what they own; but those who do have a right to make a profit letting other people use their property.

There are of course many additional complexities that arise in the real world, in terms of market power, bargaining, asymmetric information, externalities, and so on. I freely admit that in practice, capitalism is often unfair. But I think it’s worth pointing out that the mere existence of profit from capital ownership is not inherently unjust, and in fact by organizing our economy around it we have managed to achieve unprecedented prosperity.