Maybe we should forgive student debt after all.

May 8 JDN 2459708

President Biden has been promising some form of student debt relief since the start of his campaign, though so far all he has actually implemented is a series of no-interest deferments and some improvements to the existing forgiveness programs. (This is still significant—it has definitely helped a lot of people with cashflow during the pandemic.) Actual forgiveness for a large segment of the population remains elusive, and if it does happen, it’s unclear how extensive it will be in either intensity (amount forgiven) or scope (who is eligible).

I personally had been fine with this; while I have a substantial loan balance myself, I also have a PhD in economics, which—theoretically—should at some point entitle me to sufficient income to repay those loans.

Moreover, until recently I had been one of the few left-wing people I know to not be terribly enthusiastic about loan forgiveness. It struck me as a poor use of those government funds, because $1.75 trillion is an awful lot of money, and college graduates are a relatively privileged population. (And yes, it is valid to consider this a question of “spending”, because the US government is the least liquidity-constrained entity on Earth. In lieu of forgiving $1.75 trillion in debt, they could borrow $1.75 trillion and use it to pay for whatever they want, and their ultimate budget balance would be basically the same in each case.)

But I say all this in the past tense because Krugman’s recent column has caused me to reconsider. He gives two strong reasons why debt forgiveness may actually be a good idea.

The first is that Congress is useless. Thanks to gerrymandering and the 40% or so of our population who keeps electing Republicans no matter how crazy they get, it’s all but impossible to pass useful legislation. The pandemic relief programs were the exception that proves the rule: Somehow those managed to get through, even though in any other context it’s clear that Congress would never have approved any kind of (non-military) program that spent that much money or helped that many poor people.

Student loans are the purview of the Department of Education, which is entirely under the control of the Executive Branch, and therefore, ultimately, of the President of the United States. So Biden could forgive student loans by executive order and there’s very little Congress could do to stop him. Even if that $1.75 trillion could be better spent, if it wasn’t going to be spent that way anyway, we may as well use it for this.

The second is that “college graduates” is too broad a category. Usually I’m on guard for this sort of thing, but in this case I faltered, and did not notice the fallacy of composition so many labor economists were making by lumping all college grads into the same economic category. Yes, some of us are doing well, but many are not. Within-group inequality matters.

A key insight here comes from carefully analyzing the college wage premium, which is the median income of college graduates, divided by the median income of high school graduates. This is an estimate of the overall value of a college education. It’s pretty large, as a matter of fact: It amounts to something like a doubling of your income, or about $1 million over your whole lifespan.
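To make that definition concrete, here’s a minimal sketch in Python; the income figures are purely hypothetical placeholders, not actual survey data.

```python
# A minimal illustration of the college wage premium defined above.
# The median incomes here are made up for the sake of the example.

def college_wage_premium(median_college_income: float,
                         median_high_school_income: float) -> float:
    """Ratio of the median income of college grads to that of high school grads."""
    return median_college_income / median_high_school_income

# Hypothetical medians: $80,000 for college grads, $40,000 for high school grads.
# A premium of 2.0 is the "doubling of your income" described in the text.
print(college_wage_premium(80_000, 40_000))  # 2.0
```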

From about 1980 to 2000, wage inequality grew about as fast as it does today, and the college wage premium grew even faster. So it was plausible—if not necessarily correct—to believe that the wage inequality reflected the higher income and higher productivity of college grads. But since 2000, wage inequality has continued to grow, while the college wage premium has been utterly stagnant. Thus, higher inequality can no longer (if it ever could) be explained by the effects of college education.

Now some college graduates are definitely making a lot more money—such as those who went into finance. But it turns out that most are not. As Krugman points out, the 95th percentile of male college grads has seen a 25% increase in real (inflation-adjusted) income in the last 20 years, while the median male college grad has actually seen a slight decrease. (I’m not sure why Krugman restricted to males, so I’m curious how it looks if you include women. But probably not radically different?)

I still don’t think student loan forgiveness would be the best use of that (enormous sum of) money. But if it’s what’s politically feasible, it definitely could help a lot of people. And it would be easy enough to make it more progressive, by phasing out forgiveness for graduates with higher incomes.

And hey, it would certainly help me, so maybe I shouldn’t argue too strongly against it?

Centrism is dying in America.

Apr 24 JDN 2459694

Four years ago—back when (shudder) Trump was President—I wrote a post about the true meaning of centrism, the kind of centrism worth defending.

I think it’s worth repeating now: Centrism isn’t saying “both sides are the same” when they aren’t. It’s recognizing that the norms of democracy themselves are worth defending—and more worth defending than almost any specific policy goal.

I wanted to say any specific policy goal, but I do think you can construct extreme counterexamples, like “establish a 100% tax on all income” (causing an immediate, total economic collapse), or “start a war with France” (our staunchest ally for the past 250 years who also has nuclear weapons). But barring anything that extreme, just about any policy is less important than defending democracy itself.

Or at least I think so. It seems that most Americans disagree. On both the left and the right—but especially on the right—a large majority of American voters are still willing to vote for a candidate who flouts basic democratic norms as long as they promise the right policies.

I guess on the right this fact should have been obvious: Trump. But things aren’t much better on the left, and should some actual radical authoritarian communist run for office (as opposed to, you know, literally every left-wing politician who is accused of being a radical authoritarian communist), this suggests that a lot of leftist voters might actually vote for them, which is nearly as terrifying.

My hope today is that I might tip the balance a little bit the other direction, remind people why democracy is worth defending, even at the cost of our preferred healthcare systems and marginal tax rates.

The reason is, above all, that democracy is self-correcting. If a bad policy gets put in place while democratic norms are still strong, then that policy can be removed and replaced with something better later on. Authoritarianism lacks this self-correction mechanism; get someone terrible in power and they stay in power, doing basically whatever they want, unless they are violently overthrown.

For the right wing, that’s basically it. You need to stop making excuses for authoritarianism. Basically none of your policies are so important that they would justify even moderate violations of democratic norms—much less what Trump has already committed, let alone what he might do if re-elected and unleashed. I don’t care how economically efficient lower taxes or privatized healthcare might be (and I know that there are in fact many economists who would agree with you on that, though I don’t); it isn’t worth undermining democracy. And while I do understand why you consider abortion to be such a vital issue, you really need to ask yourself whether banning abortion is worth living under a fascist government, because that’s the direction you’re headed. Let me note that banning abortion doesn’t even seem to reduce it very much, so there’s that. While the claim that abortion bans do nothing is false, even a total overturn of Roe v. Wade would most likely reduce US abortions by about 15%—much less than the 25% decrease between 2008 and 2014, which was itself part of a long-term decline in abortion rates, which are now roughly half what they were in 1980. We don’t need to ban abortion in order to reduce it—and indeed many of the things that do work, like free healthcare and easy access to contraception, are things that right-wing governments typically resist. So even if you consider abortion to be a human rights violation, which I know many of you do, is that relatively small reduction in abortion rates worth risking the slide into fascism?

But for the left wing, things are actually a bit more complicated. Some right-wing policies—particularly social policies—are inherently anti-democratic and violations of human rights. I gave abortion the benefit of the doubt above; I can at least see why someone would think it’s a human rights violation (though I do not). Here I’m thinking particularly of immigration policies that lock up children at the border and laws that actively discriminate against LGBT people. I can understand why people would be unwilling to “hold their nose” and vote for someone who wants to enact that kind of policy—though if it’s really the only way to avoid authoritarianism, I think we might still have to do it. Giving up democracy is too high a price to pay; give it up now and there is nothing to stop that new authoritarian leftist government from turning into a terrible nightmare (that may not even remain leftist, by the way!). If we vote in someone who is pro-democratic but otherwise willing to commit these sorts of human rights violations, hopefully we can change things by civic engagement or vote them out of office later on (and over the long run, we do, in fact, have a track record of doing that). But if we vote in someone who will tear apart democracy even when they seem to have the high ground on human rights, then once democracy is undermined, the new authoritarian government can oppress us in all sorts of ways (even ways they specifically promised not to!), and we will have very little recourse.

Above all, even if they promise to give us everything we want, once you put an authoritarian in power, they can do whatever they want. They have no reason to keep their promises (whereas, contrary to popular belief, democratic politicians actually typically do), for we have no recourse if they don’t. Our only option to remove them from power is violent revolution—which usually fails, and even if it succeeds, would have an enormous cost in human lives.

Why is this a minority view? Why don’t more Americans agree with this?

I can think of a few possible reasons.

One is that they may not believe that these violations of democratic norms are really all that severe or worrisome. Overriding a judge with an executive order isn’t such a big deal, is it? Gerrymandering has been going on for decades, why should we worry about it now?

If that is indeed your view, let me remind you that in January 2021, armed insurrectionists stormed the Capitol building. That is not something we can just take lying down. This is a direct attack upon the foundations of democracy, and while it failed (miserably, and to be honest, hilariously), it wasn’t punished nearly severely enough—most of the people involved were not arrested on any charges, and several are now running for office. This lack of punishment means that it could very well happen again, and this time be better organized and more successful.

A second possibility is that people do not know that democracy is being undermined; they are somehow unaware that this is happening. If that’s the case, all I can tell you is that you really need to go to the Associated Press or New York Times website and read some news. You would have to be catastrophically ignorant of our political situation, and you frankly don’t deserve to be voting if that is the case.

But I suspect that for most people, a third reason applies: They see that democracy is being undermined, but they blame the other side. We aren’t the ones doing it—it’s them.

Such a view is tempting, at least from the left side of the aisle. No Democratic Party politician can hold a candle to Trump as far as authoritarianism (or narcissism). But we should still be cognizant of ways that our actions may also undermine democratic norms: Maybe we shouldn’t be considering packing the Supreme Court, unless we can figure out a way to ensure that it will genuinely lead to a more democratic and fair court long into the future. (For the latter sort of reform, suppose each federal district elected its own justice? Or we set up a mandatory retirement cycle such that every President would always appoint at least one justice?)

But for those of you on the right… How can you possibly think this? Where do you get your information from? How can you look at Donald Trump and think, “This man will defend our democracy from those left-wing radicals”? Right now you may be thinking, “oh, look, he suggested the New York Times; see his liberal bias”; that is a newspaper of record in the United States. While their editors are a bit left of center, they are held to the highest standards of factual accuracy. But okay, if you prefer the Wall Street Journal (also a newspaper of record, but whose editors are a bit more right of center), be my guest; their factual claims won’t disagree, because truth is truth. I also suggested the Associated Press, widely regarded worldwide as one of the most credible news sources. (I considered adding Al Jazeera, which has a similar reputation, but figured you wouldn’t go for that.)

If you think that the attack on the Capitol was even remotely acceptable, you must think that their claims of a stolen election were valid, or at least plausible. But every credible major news source, the US Justice Department, and dozens of law courts agree that they were not. Any large election is going to have a few cases of fraud, but there were literally only hundreds of fraudulent votes—in an election in which over 150 million votes were cast, Biden won the popular vote by over 7 million votes, and no state was won by less than 10,000 votes. This means that 99.999% of votes were valid, and even if every single fraudulent vote had been for Biden and in Georgia (obviously not the case), it wouldn’t have been enough to tip even that state.

I’m not going to say that left-wing politicians never try to undermine democratic norms—there’s certainly plenty of gerrymandering, and as I just said, court-packing is at least problematic. Nor would I say that the right wing is always worse about this. But it should be pretty obvious to anyone with access to basic factual information—read: everyone with Internet access—that right now, the problem is much worse on the right. You on the right need to face up to that fact, and start voting out Republicans who refuse to uphold democracy, even if it means you have to wait a bit longer for lower taxes or more (let me remind you, not very effective) abortion bans.

In the long run, I would of course like to see changes in the whole political system, so that we are no longer dominated by two parties and have a wider variety of realistic options. (The best way to do that would of course be range voting.) But for now, let’s start by ensuring that democracy continues to exist in America.

Russia has invaded Ukraine.

Mar 6 JDN 2459645

Russia has invaded Ukraine. No doubt you have heard it by now, as it’s all over the news in dozens of outlets, from CNN to NBC to The Guardian to Al-Jazeera. And as well it should be, as this is the first time in history that a nuclear power has invaded another country in order to annex it. Yes, nuclear powers have fought wars before—the US just got out of one in Afghanistan as you may recall. They have even started wars and led invasions—the US did that in Iraq. And certainly, countries have been annexing and conquering other countries for millennia. But never before—never before, in human history—has a nuclear-armed state invaded another country simply to claim it as part of itself. (Trump said he thought the US should have done something like that, and the world was rightly horrified.)

Ukraine is not a nuclear power—not anymore. The Soviet Union built up a great deal of its nuclear production in Ukraine, and in 1991 when Ukraine became independent it still had a sizable nuclear arsenal. But starting in 1994 Ukraine began disarming that arsenal, and now it is gone. Now that Russia has invaded them, the government of Ukraine has begun publicly reconsidering their agreements to disarm their nuclear arsenal.

Russia’s invasion of Ukraine has just disproved the most optimistic models of international relations, which basically said that major power wars for territory were over at the end of WW2. Some thought it was nuclear weapons, others the United Nations, still others a general improvement in trade integration and living standards around the world. But they’ve all turned out to be wrong; maybe such wars are rarer, but they can clearly still happen, because one just did.

I would say that only two major theories of the Long Peace are still left standing in light of this invasion: nuclear deterrence and the democratic peace. Ukraine gave up its nuclear arsenal and later got attacked—that’s consistent with nuclear deterrence. Russia under Putin is nearly as authoritarian as the Soviet Union, and Ukraine is a “hybrid regime” (let’s call it a solid D), so there’s no reason the democratic peace would stop this invasion. But any model which posits that trade or the UN prevent war is pretty much off the table now, as Ukraine had very extensive trade with both Russia and the EU and the UN has been utterly toothless so far. (Maybe we could say the UN prevents wars except those led by permanent Security Council members.)

Well, then, what if the nuclear deterrence theory is right? What would have happened if Ukraine had kept its nuclear weapons? Would that have made this situation better, or worse? It could have made it better, if it acted as a deterrent against Russian aggression. But it could also have made it much, much worse, if it resulted in a nuclear exchange between Russia and Ukraine.

This is the problem with nukes. They are not a guarantee of safety. They are a guarantee of fat tails. To explain what I mean by that, let’s take a brief detour into statistics.

A fat-tailed distribution is one for which very extreme events have non-negligible probability. For some distributions, like a uniform distribution, events are clearly contained within a certain interval and nothing outside is even possible. For others, like a normal distribution or lognormal distribution, extreme events are theoretically possible, but so vanishingly improbable they aren’t worth worrying about. But for fat-tailed distributions like a Cauchy distribution or a Pareto distribution, extreme events are not so improbable. They may be unlikely, but they are not so unlikely they can simply be ignored. Indeed, they can actually dominate the average—most of what happens, happens in a handful of extreme events.
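If you want to see the difference concretely, here’s a small simulation sketch; the parameters are arbitrary, chosen only to make the contrast visible, not calibrated to anything real.

```python
# Thin tails (normal) versus fat tails (Pareto), illustrated by simulation.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

normal = rng.normal(loc=1.0, scale=1.0, size=n)   # thin-tailed
pareto = rng.pareto(a=1.5, size=n) + 1.0           # fat-tailed (classical Pareto, alpha = 1.5)

threshold = 10.0
print("P(X > 10), normal:", np.mean(normal > threshold))  # essentially zero
print("P(X > 10), Pareto:", np.mean(pareto > threshold))  # around 3%: unlikely, but not ignorable

# In the fat-tailed sample, a handful of extreme draws dominate the total:
top_1_percent = np.sort(pareto)[-n // 100:]
print("Share of the total coming from the top 1% of draws:",
      round(top_1_percent.sum() / pareto.sum(), 2))
```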

Deaths in war seem to be fat-tailed, even in conventional warfare. They seem to follow a Pareto distribution. There are lots of tiny skirmishes, relatively frequent regional conflicts, occasional major wars, and a handful of super-deadly global wars. This kind of pattern tends to emerge when a phenomenon is self-reinforcing by positive feedback—hence why we also see it in distributions of income and wildfire intensity.

Fat-tailed distributions typically (though not always—it’s easy to construct counterexamples, like the Cauchy distribution with low values truncated off) have another property as well, which is that minor events are common. More common, in fact, than they would be under a normal distribution. What seems to happen is that the probability mass moves away from the moderate outcomes and shifts to both the extreme outcomes and the minor ones.

Nuclear weapons fit this pattern perfectly. They may in fact reduce the probability of moderate, regional conflicts, in favor of increasing the probability of tiny skirmishes or peaceful negotiations. But they also increase the probability of utterly catastrophic outcomes—a full-scale nuclear war could kill billions of people. It probably wouldn’t wipe out all of humanity, and more recent analyses suggest that a catastrophic “nuclear winter” is unlikely. But even 2 billion people dead would be literally the worst thing that has ever happened, and nukes could make it happen in hours when such a death toll by conventional weapons would take years.

If we could somehow guarantee that such an outcome would never occur, then the lower rate of moderate conflicts nuclear weapons provide would justify their existence. But we can’t. It hasn’t happened yet, but it doesn’t have to happen often to be terrible. Really, just once would be bad enough.

Let us hope, then, that the democratic peace turns out to be the theory that’s right. Because a more democratic world would clearly be better—while a more nuclearized world could be better, but could also be much, much worse.

Basic income reconsidered

Feb 20 JDN 2459631

In several previous posts I have sung the praises of universal basic income (though I have also tried to acknowledge the challenges involved).

In this post I’d like to take a step back and reconsider the question of whether basic income is really the best approach after all. One nagging thought keeps coming back to me, and it is the fact that basic income is extremely expensive.

About 11% of the US population lives below the standard poverty line. There are many criticisms of the standard poverty line: Some say it’s too high, because you can compare it favorably with middle-class incomes in much poorer countries. Others say it’s too low, because income at that level doesn’t allow people to really live in financial security. There are many difficult judgment calls that go into devising a poverty threshold, and we can reasonably debate whether the right ones were made here.

However, I think this threshold is at least approximately correct; maybe the true poverty threshold for a household of 1 should be not $12,880 but $11,000 or $15,000, but I don’t think it should be $5,000 or $25,000. Maybe for a household of 4 it should be not $26,500 but $19,000 or $32,000; but I don’t think it should be $12,000 or $40,000.

So let’s suppose that we wanted to implement a universal basic income in the United States that would lift everyone out of poverty. We could essentially do that by taking the 2-person-household threshold of $17,420 and dividing it by 2, yielding $8,710 per person per year. (Why not use the 1-person-household threshold? There aren’t very many 1-person households in poverty, and that threshold would be considerably higher and thus considerably more expensive. A typical poor household is a single parent and one or more children; as long as kids get the basic income, that household would be above the threshold in this system.)

The US population is currently about 331 million people. If every single one of them were to receive a basic income of $8,710, that would cost nearly $2.9 trillion per year. This is a feasible amount—it’s less than half the current total federal budget—but it is still a very large amount. The tax increases required to support it would be massive, and that’s probably why, despite ostensibly bipartisan support for the idea of a basic income, no serious proposal has ever gotten off of the ground.

If on the other hand we were to only give the basic income to people below the poverty line, that would cost only 11% of that amount: A far more manageable $320 billion per year.
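For anyone who wants to check the arithmetic, here’s a quick back-of-the-envelope sketch in Python, using the same figures as above; it is just the multiplication, not a serious fiscal model.

```python
# Rough annual cost of the two designs described above.
population = 331_000_000            # approximate US population
benefit_per_person = 17_420 / 2     # half the 2-person-household poverty threshold: $8,710

universal_cost = population * benefit_per_person
targeted_cost = 0.11 * universal_cost   # only the ~11% of people below the poverty line

print(f"Universal: ${universal_cost / 1e12:.2f} trillion per year")   # ~$2.9 trillion
print(f"Targeted:  ${targeted_cost / 1e9:.0f} billion per year")      # roughly the $320 billion above
```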

We don’t want to do exactly that, however, because it would create all kinds of harmful distortions in the economy. Consider someone who is just below the threshold, considering whether to take on more work or get a higher-paying job. If their household pre-tax income is currently $15,000 and they could raise it to $18,000, a basic income given only to people below the threshold would mean that they are choosing between $15,000 + $17,420 = $32,420 if they keep their current work and just $18,000 if they increase it. Clearly, they would not want to take on more work. That’s a terrible system—it amounts to a marginal tax rate above 100%.

Another possible method would be to simply top off people’s income, give them whatever they need to get to the poverty line but no more. (This would actually be even cheaper; it would probably cost something more like $160 billion per year.) That removes the distortion for people near the threshold, at the cost of making it much worse for those far below the threshold. Someone considering whether to work for $7,000 or work for $11,000 is, in such a system, choosing whether to work less for $17,000 or work more for… $17,000. They will surely choose to work less.

In order to solve these problems, what we would most likely need to do is gradually phase out the basic income, so that say increasing your pre-tax income by $1.00 would decrease your basic income payment by $0.50. The cost of this system would be somewhere in between that of a truly universal basic income and a threshold-based system, so let’s ballpark that as around $600 billion per year. It would effectively implement a marginal tax rate of 50% for anyone who is receiving basic income payments.
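Here’s a minimal sketch of how the three designs compare for a hypothetical 2-person household, using the same numbers as above; the helper functions are illustrations of the incentive problem, not policy proposals.

```python
# Take-home income (earnings plus benefit) under the three designs discussed above,
# for a 2-person household receiving 2 x $8,710 = $17,420 in basic income.

THRESHOLD = 17_420
BENEFIT = 2 * 8_710

def cliff(earnings):       # benefit paid only to households below the poverty line
    return earnings + (BENEFIT if earnings < THRESHOLD else 0)

def top_off(earnings):     # fill the gap up to the poverty line, but no more
    return max(earnings, THRESHOLD)

def phase_out(earnings):   # lose $0.50 of benefit for every $1.00 earned
    return earnings + max(0, BENEFIT - 0.5 * earnings)

for e in (7_000, 11_000, 15_000, 18_000):
    print(f"earn ${e:>6}: cliff ${cliff(e):>6}, top-off ${top_off(e):>6}, phase-out ${phase_out(e):>8}")
# Under the cliff and the top-off, earning more can leave you no better off (or worse off);
# under the phase-out, every extra dollar earned raises take-home income by at least 50 cents.
```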

In theory, this is probably worse than a universal basic income, because in the latter case you can target the taxes however you like—and thus (probably) make them cause less distortion than the phased-out basic income system would. But in practice, a truly universal basic income might simply not be politically viable, and some kind of phased-out system seems much more likely to actually get passed.


Even then, I confess I am not extremely optimistic. For some reason, everyone seems to want to end poverty, but very few seem willing to use the obvious solution: Give poor people money.

Cryptocurrency and its failures

Jan 30 JDN 2459620

It started out as a neat idea, though very much a solution in search of a problem. Using encryption, could we decentralize currency and eliminate the need for a central bank?

Well, it’s been a few years, and we have now seen how well that went. Bitcoin recently crashed, but it has always been astonishingly volatile. As a speculative asset, such volatility is often tolerable—for many, even profitable. But as a currency, it is completely unbearable. People need to know that their money will be a store of value and a medium of exchange—and something that changes price from one minute to the next is neither.

Some of cryptocurrency’s failures have been hilarious, like the ill-fated island called [yes, really] “Cryptoland”, which crashed and burned when they couldn’t find any investors to help them buy the island.

Others have been darkly comic, but tragic in their human consequences. Chief among these was the failed attempt by El Salvador to make Bitcoin an official currency.

At the time, President Bukele justified it with an economically baffling argument: The total value of all Bitcoin in the world is $680 billion, therefore if even 1% gets invested in El Salvador, GDP will increase by $6.8 billion, which is 25%!

First of all, that would only happen if 1% of all Bitcoin were invested in El Salvador each year—otherwise you’re looking at a one-time injection of money, not an increase in GDP.

But more importantly, this is like saying that the total US dollar supply is $6 trillion (that’s physical cash; the actual money supply is considerably larger), so maybe by dollarizing your economy you can get 1% of that—$60 billion, baby! No, that’s not how any of this works. Dollarizing could still be a good idea (though it didn’t go all that well in El Salvador), but it won’t give you some kind of share in the US economy. You can’t collect dividends on US GDP.
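If you want to see just how little arithmetic is behind the claim, here’s a tiny sketch using the figures quoted above; the GDP figure is simply whatever the 25% claim implies.

```python
# Bukele's arithmetic, using the numbers quoted in the text. The problem: the $680 billion
# is a stock (the total value of all Bitcoin), while GDP is a flow (output per year),
# so even a real one-time inflow is not a recurring 25% boost to GDP.

bitcoin_total_value = 680e9
hypothetical_inflow = 0.01 * bitcoin_total_value
print(f"1% of all Bitcoin: ${hypothetical_inflow / 1e9:.1f} billion")     # $6.8 billion

implied_gdp = hypothetical_inflow / 0.25    # the GDP that would make $6.8 billion equal 25%
print(f"GDP implied by the claim: ${implied_gdp / 1e9:.1f} billion")      # about $27 billion
```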

It’s actually a good thing that El Salvador’s experiment with Bitcoin failed in the way it did: Nobody bought into it in the first place. They couldn’t convince people to buy government assets that were backed by Bitcoin (perhaps because the assets were a strictly worse deal than just, er, buying Bitcoin). So the human cost of this idiotic experiment should be relatively minimal: It’s not like people are losing their homes over this.

That is, unless President Bukele doubles down, which he now appears to be doing. Even people who are big fans of cryptocurrency are unimpressed with El Salvador’s approach to it.

It would be one thing if there were some stable cryptocurrency that one could try pegging one’s national currency to, but there isn’t. Even so-called stablecoins are generally pegged to… regular currencies, typically the US dollar but also sometimes the Euro or a few other currencies. (I’ve seen the Australian Dollar and the Swiss Franc, but oddly enough, not the Pound Sterling.)

Or a country could try issuing its own cryptocurrency, as an all-digital currency instead of one that is partly paper. It’s not totally clear to me what advantages this would have over the current system (in which most of the money supply is bank deposits, i.e. already digital), but it would at least preserve the key advantage of having a central bank that can regulate your money supply.

But no, President Bukele decided to take an already-existing cryptocurrency, backed by nothing but the whims of the market, and make it legal tender. Somehow he missed the fact that a currency which rises and falls by 10% in a single day is generally considered bad.

Why? Is he just an idiot? I mean, maybe, though Bukele’s approval rating is astonishingly high. (And El Salvador is… mostly democratic. Unlike, say, Putin’s, I think these approval ratings are basically real.) But that’s not the only reason. My guess is that he was gripped by the same FOMO that has gripped everyone else who evangelizes for Bitcoin. The allure of easy money is often irresistible.

Consider President Bukele’s position. You’re governing a poor, war-torn country which has had economic problems of various types since its founding. When the national currency collapsed a generation ago, the country was put on the US dollar, but that didn’t solve the problem. So you’re looking for a better solution to the monetary doldrums your country has been in for decades.

You hear about a fancy new monetary technology, “cryptocurrency”, which has all the tech people really excited and seems to be making tons of money. You don’t understand a thing about it—hardly anyone seems to, in fact—but you know that people with a lot of insider knowledge of technology and finance are really invested in it, so it seems like there must be something good here. So, you decide to launch a program that will convert your country’s currency from the US dollar to one of these new cryptocurrencies—and you pick the most famous one, which is also extremely valuable, Bitcoin.

Could cryptocurrencies be the future of money, you wonder? Could this be the way to save your country’s economy?

Despite all the evidence that had already accumulated that cryptocurrency wasn’t working, I can understand why Bukele would be tempted by that dream. Just as we’d all like to get free money without having to work, he wanted to save his country’s economy without having to implement costly and unpopular reforms.

But there is no easy money. Not really. Some people get lucky; but they ultimately benefit from other people’s hard work.

The lesson here is deeper than cryptocurrency. Yes, clearly, it was a dumb idea to try to make Bitcoin a national currency, and it will get even dumber if Bukele really does double down on it. But more than that, we must all resist the lure of easy money. If it sounds too good to be true, it probably is.

What’s wrong with police unions?

Nov 14 JDN 2459531

In a previous post I talked about why unions, even though they are collusive, are generally a good thing. But there is one very important exception to this rule: Police unions are almost always harmful.

Most recently, police unions have been leading the charge to fight vaccine mandates. This despite the fact that COVID-19 now kills more police officers than any other cause. They threatened that huge numbers of officers would leave if the mandates were imposed—but it didn’t happen.

But there is a much broader pattern than this: Police unions systematically take the side of individual police officers over the interests of public safety. Even the most incompetent, negligent, or outright murderous behavior by police officers will typically be defended by police unions. (One encouraging development is that lately even some police unions have been reluctant to defend the most outrageous killings by police officers—but this is very much the exception, not the rule.)

Police unions are also unusual among unions in their political ties. Conservatives generally oppose unions, but are much friendlier toward police unions. At the other end of the spectrum, socialists normally love unions, but have distanced themselves from police unions for a long time. (The argument in that article that this is because “no other job involves killing people” is a bit weird: Ostensibly, the circumstances in which police are allowed to kill people are not all that different from the circumstances in which private citizens are. Just like us, they’re only supposed to use deadly force to prevent death or grievous bodily harm to themselves or others. The main thing that police are allowed to do that we aren’t is imprison people. Killing isn’t supposed to be a major part of the job.)

Police unions also have some other weird features. The total membership of all police unions exceeds the total number of police officers in the United States, because a single officer is often affiliated with multiple unions—normally not at all how unions work. Police unions are also especially powerful and well-organized among unions. They are especially well-funded, and their members are especially loyal.

If we were to adopt a categorical view that unions are always good or always bad—as many people seem to want to—it’s difficult to see why police unions should be different from teachers’ unions or factory workers’ unions. But my argument was very careful not to make such categorical statements. Unions aren’t always or inherently good; they are usually good, because of how they are correcting a power imbalance between workers and corporations.

But when it comes to police, the situation is quite different. Police unions give more bargaining power to government officers against… what? Public accountability? The democratic system? Corporate CEOs are accountable only to their shareholders, but the mayors and city councils who decide police policy are elected (in most of the UK, even police commissioners are directly elected). It’s not clear that there was an imbalance in bargaining power here we would want to correct.

A similar case could be made against all public-sector unions, and indeed that case often is extended to teachers’ unions. If we must sacrifice teachers’ unions in order to destroy police unions, I’d be prepared to bite that bullet. But there are vital differences here as well. Teachers are not responsible for imprisoning people, and bad teachers almost never kill people. (In the rare cases in which teachers have committed murder, they have been charged to the full extent of the law, just as they would be in any other profession.) There surely is some misconduct by teachers that some unions may be protecting, but the harm caused by that misconduct is far lower than the harm caused by police misconduct. Teacher unions also provide a layer of protection for teachers to exercise autonomy, promoting academic freedom.

The form of teacher misconduct I would be most concerned about is sexual abuse of students. And while I’ve seen many essays claiming that teacher unions protect sexual abusers, the only concrete evidence I could find on the subject was a teachers’ union publicly complaining that the government had failed to pass stricter laws against sexual abuse by teachers. The research on teacher misconduct mainly focuses on other causal factors aside from union representation.

Even this Fox News article cherry-picking the worst examples of unions protecting abusive teachers includes line after line like “he was ultimately fired”, “he was pressured to resign”, and “his license was suspended”. So their complaint seems to be that it wasn’t done fast enough? But a fair justice system is necessarily slow. False accusations are rare, but they do happen—we can’t just take someone’s word for it. Ensuring that you don’t get fired until the district mounts strong evidence of misconduct against you is exactly what unions should be doing.

Whether unions are good or bad in a particular industry is ultimately an empirical question. So let’s look at the data, shall we? Teacher unions are positively correlated with school performance. But police unions are positively correlated with increased violent misconduct. There you have it: Teacher unions are good, but police unions are bad.

Does power corrupt?

Nov 7 JDN 2459526

It’s a familiar saying, originally attributed to Lord Acton: “Power tends to corrupt, and absolute power corrupts absolutely. Great men are nearly always bad men.”

I think this saying is not only wrong, but in fact dangerous. We can all observe plenty of corrupt people in power, that much is true. But if it’s simply the power that corrupts them, and they started as good people, then there’s really nothing to be done. We may try to limit the amount of power any one person can have, but in any large, complex society there will be power, and so, if the saying is right, there will also be corruption.

How do I know that this saying is wrong?

First of all, note that corruption varies tremendously, and with very little correlation with most sensible notions of power.

Consider used car salespeople, stockbrokers, drug dealers, and pimps. All of these professions are rather well known for their high level of corruption. Yet are people in these professions powerful? Yes, any manager has some power over their employees; but there’s no particular reason to think that used car dealers have more power over their employees than grocery stores, and yet there’s a very clear sense in which used car dealers are more corrupt.

Even power on a national scale is not inherently tied to corruption. Consider the following individuals: Nelson Mandela, Mahatma Gandhi, Abraham Lincoln, and Franklin Roosevelt.

These men were extremely powerful; each ruled an entire nation. Indeed, during his administration, FDR was probably the most powerful person in the world. And they certainly were not impeccable: Mandela was a good friend of Fidel Castro, Gandhi abused his wife, Lincoln suspended habeas corpus, and of course FDR ordered the internment of Japanese-Americans. Yet overall I think it’s pretty clear that these men were not especially corrupt and had a large positive impact on the world.

Say what you will about Bernie Sanders, Dennis Kucinich, or Alexandria Ocasio-Cortez. Idealistic? Surely. Naive? Perhaps. Unrealistic? Sometimes. Ineffective? Often. But they are equally as powerful as anyone else in the US Congress, and ‘corrupt’ is not a word I’d use to describe them. Mitch McConnell, on the other hand….

There does seem to be a positive correlation between a country’s level of corruption and its level of authoritarianism; the most democratic countries—Scandinavia—are also the least corrupt. Yet India is surely more democratic than China, but is widely rated as about the same level of corruption. Greece is not substantially less democratic than Chile, but it has considerably more corruption. So even at a national level, power is not the only determinant of corruption.

I’ll even agree to the second clause: “absolute power corrupts absolutely.” Were I somehow granted an absolute dictatorship over the world, one of my first orders of business would be to establish a new democratic world government to replace my dictatorial rule. (Would it be my first order of business, or would I implement some policy reforms first? Now that’s a tougher question. I think I’d want to implement some kind of income redistribution and anti-discrimination laws before I left office, at least.) And I believe that most good people think similarly: We wouldn’t want to have that kind of power over other people. We wouldn’t trust ourselves to never abuse it. Anyone who maintains absolute power is either already corrupt or likely to become so. And anyone who seeks absolute power is precisely the sort of person who should not be trusted with power at all.

It may also be that power is one determinant of corruption—that a given person will generally end up more corrupt if you give them more power. This might help explain why even the best ‘great men’ are still usually bad men. But clearly there are other determinants that are equally important.

And I would like to offer a different hypothesis to explain the correlation between power and corruption, which has profoundly different implications: The corrupt seek power.

Donald Trump didn’t start out a good man and become corrupt by becoming a billionaire or becoming President. Donald Trump was born a narcissistic idiot.

Josef Stalin wasn’t a good man who became corrupted by the unlimited power of ruling the Soviet Union. Josef Stalin was born a psychopath.

Indeed, when you look closely at how corrupt leaders get into power, it often involves manipulating and exploiting others on a grand scale. They are willing to compromise principles that good people wouldn’t. They aren’t corrupt because they got into power; they got into power because they are corrupt.

Let me be clear: I’m not saying we should compromise all of our principles in order to achieve power. If there is a route by which power corrupts, it is surely that. Rather, I am saying that we must maintain constant vigilance against anyone who seems so eager to attain power that they will compromise principles to do it—for those are precisely the people who are likely to be most dangerous if they should achieve their aims.

Moreover, I’m saying that “power corrupts” is actually a very dangerous message. It tells good people not to seek power, because they would be corrupted by it. But in fact what we actually need in order to get good people in power is more good people seeking power, more opportunities to out-compete the corrupt. If Congress were composed entirely of people like Alexandria Ocasio-Cortez, then the left-wing agenda would no longer seem naive and unrealistic; it would simply be what gets done. (Who knows? Maybe it wouldn’t work out so well after all. But it definitely would get done.) Yet how many idealistic left-wing people have heard that phrase ‘power corrupts’ too many times, and decided they didn’t want to risk running for office?

Indeed, the notion that corruption is inherent to the exercise of power may well be the greatest tool we have ever given to those who are corrupt and seeking to hold onto power.

Labor history in the making

Oct 24 JDN 2459512

To say that these are not ordinary times would be a grave understatement. I don’t need to tell you all the ways that this interminable pandemic has changed the lives of people all around the world.

But one in particular is of notice to economists: Labor in the United States is fighting back.

Quit rates are at historic highs. Over 100,000 workers in a variety of industries are simultaneously on strike, ranging from farmworkers to nurses and freelance writers to university lecturers.

After decades of quiescence to ever-worsening working conditions, it seems that finally American workers are mad as hell and not gonna take it anymore.

It’s about time, frankly. The real question is why it took this long. Working conditions in the US have been systematically worse than in the rest of the First World since at least the 1980s. It was substantially easier to get the leave I needed to attend my own wedding—in the US—after starting work in the UK than it would have been at the same kind of job in the US, because UK law requires employers to grant leave from the day an employee starts work, while US federal law and the laws of many states don’t require leave at all for anyone—not even people who are sick or recently gave birth.

So, why did it happen now? What changed? The pandemic threw our lives into turmoil, that much is true. But it didn’t fundamentally change the power imbalance between workers and employers. Why was that enough?

I think I know why. The shock from the pandemic didn’t have to be enough to actually change people’s minds about striking—it merely had to be enough to convince people that others would show up. It wasn’t the first-order intention “I want to strike” that changed; it was the second-order belief “Other people want to strike too”.

For a labor strike is a coordination game par excellence. If 1 person strikes, they get fired and replaced. If 2 or 3 or 10 strike, most likely the same thing. But if 10,000 strike? If 100,000 strike? Suddenly corporations have no choice but to give in.

The most important question on your mind when you are deciding whether or not to strike is not, “Do I hate my job?” but “Will my co-workers have my back?”.

Coordination games exhibit a very fascinating—and still not well-understood—phenomenon known as Schelling points. People will typically latch onto certain seemingly-arbitrary features of their choices, and do so well enough that simply having such a focal point can radically increase the level of successful coordination.

I believe that the pandemic shock was just such a Schelling point. It didn’t change most people’s working conditions all that much: though I can see why nurses in particular would be upset, it’s not clear to me that being a university lecturer is much worse now than it was a year ago. But what the pandemic did do was change everyone’s working conditions, all at once. It was a sudden shock to job satisfaction that applied to almost the entire workforce.

Thus, many people who were previously on the fence about striking were driven over the edge—and then this in turn made others willing to take the leap as well, suddenly confident that they would not be acting alone.

Another important feature of the pandemic shock was that it took away a lot of what people had left to lose. Consider the two following games.

Game A: You and 100 other people each separately, without communicating, decide to choose X or Y. If you all choose X, you each get $20. But if even one of you chooses Y, then everyone who chooses Y gets $1 but everyone who chooses X gets nothing.

Game B: Same as the above, except that if anyone chooses Y, everyone who chooses Y also gets nothing.

Game A is tricky, isn’t it? You want to choose X, and you’d be best off if everyone did. But can you really trust 100 other people to all choose X? Maybe you should take the safe bet and choose Y—but then, they’re thinking the same way.


Game B, on the other hand, is painfully easy: Choose X. Obviously choose X. There’s no downside, and potentially a big upside.

In terms of game theory, both games have the same two Nash equilibria: All-X and All-Y. But in the second game, I made All-X also an equilibrium in weakly dominant strategies, and that made all the difference.

We could run these games in the lab, and I’m pretty sure I know what we’d find: In game A, most people choose X, but some people don’t, and if you repeat the game more and more people choose Y. But in game B, almost everyone chooses X and keeps on choosing X. Maybe they don’t get unanimity every time, but they probably do get it most of the time—because why wouldn’t you choose X? (These are testable hypotheses! I could in fact run this experiment! Maybe I should?)
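For the curious, here’s a back-of-the-envelope version of that reasoning in Python; it’s just the expected-payoff comparison, assuming you believe each other player independently chooses X with probability p, not the lab experiment itself.

```python
# Expected payoffs in Games A and B, given a belief p that each of the 100 other
# players independently chooses X. Payoffs are exactly as described in the text.

def expected_payoff_X(p: float, n_others: int = 100) -> float:
    return 20 * p ** n_others        # X pays $20 only if every other player also chooses X

def expected_payoff_Y(game: str) -> float:
    return 1.0 if game == "A" else 0.0   # Y pays $1 in Game A, nothing in Game B

for p in (0.90, 0.97, 0.99, 1.00):
    print(f"p = {p:.2f}: E[X] = {expected_payoff_X(p):6.3f}, "
          f"E[Y | A] = {expected_payoff_Y('A'):.1f}, E[Y | B] = {expected_payoff_Y('B'):.1f}")

# In Game A, X beats Y only if p > (1/20) ** (1/100), about 0.97: you need near-unanimous
# confidence in everyone else. In Game B, X weakly dominates Y no matter what you believe.
```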

It’s hard to say at this point how effective these strikes will be. Surely there will be some concessions won—there are far too many workers striking for them all to get absolutely nothing. But it remains uncertain whether the concessions will be small, token changes just to break up the strikes, or serious, substantive restructuring of how work is done in the United States.

If the latter sounds overly optimistic, consider that this is basically what happened in the New Deal. Those massive—and massively successful—reforms were not generated out of nowhere; they were the result of the economic crisis of the Great Depression and substantial pressure by organized labor. We may yet see a second New Deal (a Green New Deal?) in the 2020s if labor organizations can continue putting the pressure on.

The most important thing in making such a grand effort possible is believing that it’s possible—only if enough people believe it can happen will enough people take the risk and put in the effort to make it happen. Apathy and cynicism are the most powerful weapons of the status quo.


We are witnessing history in the making. Let’s make it in the right direction.

Where did all that money go?

Sep 26 JDN 2459484

Since 9/11, the US has spent a staggering $14 trillion on the military, averaging $700 billion per year. Some of this was the routine spending necessary to maintain a large standing army (though it is fair to ask whether we really need our standing army to be quite this large).

But a recent study by the Costs of War Project suggests that a disturbing amount of this money has gone to defense contractors: Somewhere between one-third and one-half, or in other words between $5 and $7 trillion.

This is revenue, not profit; presumably these defense contractors also incurred various costs in materials, labor, and logistics. But even as raw revenue that is an enormous amount of money. Apple, one of the largest corporations in the world, takes in on average about $300 billion per year. Over 20 years, that would be $6 trillion—so, our government has basically spent as much on defense contractors as the entire world spent on Apple products.

Of that $5 to $7 trillion, one-fourth to one-third went to just five corporations. That’s over $2 trillion just to Lockheed Martin, Boeing, General Dynamics, Raytheon, and Northrop Grumman. We pay more each year to Lockheed Martin than we do to the State Department and USAID.

Looking at just profit, each of these corporations appears to make a gross profit margin of about 10%. So we’re looking at something like $200 billion over 20 years—$10 billion per year—just handed over to shareholders.

And what were we buying with this money? Mostly overengineered high-tech military equipment that does little or nothing to actually protect soldiers, win battles, or promote national security. (It certainly didn’t do much to stop the Taliban from retaking control as soon as we left Afghanistan!)

Eisenhower tried to warn us about the military-industrial complex, but we didn’t listen.

Even when the equipment they sell us actually does its job, it still raises some serious questions about whether these are things we ought to be privatizing. As I mentioned in a post on private prisons several years ago, there are really three types of privatization of government functions.

Type 1 is innocuous: There are certain products and services that privatized businesses already provide in the open market and the government also has use for. There’s no reason the government should hesitate to buy wrenches or toothbrushes or hire cleaners or roofers.

Type 3 is the worst: There have been attempts to privatize fundamental government services, such as prisons, police, and the military. This is inherently unjust and undemocratic and must never be allowed. The use of force must never be for profit.

But defense contractors lie in the middle area, type 2: contracting out goods and services that involve government-specific features, such as military weapons, to particular companies. It’s true, there’s not that much difference functionally between a civilian airliner and a bomber plane, so it makes at least some sense that Boeing would be best qualified to produce both. This is not an obviously nonsensical idea. But there are still some very important differences, and I am deeply uneasy with the very concept of private corporations manufacturing weapons.


It’s true, there are some weapons that private companies make for civilians, such as knives and handguns. I think it would be difficult to maintain a free society while banning all such production, and it is literally impossible to ban anything that could potentially be used as a weapon (Wrenches? Kitchen knives? Tree branches!?). But we strictly regulate such production for very good reasons—and we probably don’t go far enough, really.

Moreover, there’s a pretty clear difference in magnitude if not in kind between a corporation making knives or even handguns and a corporation making cruise missiles—let alone nuclear missiles. Even if there is a legitimate overlap in skills and technology between making military weapons and whatever other products a corporation might make for the private market, it might still ultimately be better to nationalize the production of military weapons.

And then there are corporations that essentially do nothing but make military weapons—and we’re back to Lockheed-Martin again. Boeing does in fact make most of the world’s civilian airliners, in addition to making some military aircraft and missiles. But Lockheed-Martin? They pretty much just make fighters and bombers. This isn’t a company with generalized aerospace manufacturing skills that we are calling upon to make fighters in a time of war. This is an entire private, for-profit corporation that exists for the sole purpose of making fighter planes.

I really can’t see much reason not to simply nationalize Lockheed-Martin. They should be a division of the US Air Force or something.

I guess, in theory, the possibility of competition between different military contractors could potentially keep costs down… but, uh, how’s that working out for you? The acquisition costs of the F-35 are expected to run over $400 billion—the cost of the whole program a whopping $1.5 trillion. That doesn’t exactly sound like we’ve been holding costs down through competition.

And there really is something deeply unseemly about the idea of making profits through war. There’s a reason we have that word “profiteering”. Yes, manufacturing weapons has costs, and you should of course pay your workers and material suppliers at fair rates. But do we really want corporations to be making billions of dollars in profits for making machines of death?

But if nationalizing defense contractors or making them into nonprofit institutions seems too radical, I think there’s one very basic law we ought to make: No corporation with government contracts may engage in any form of lobbying. That’s such an obvious conflict of interest, such a clear opening for regulatory capture, that there’s really no excuse for it. If there must be shareholders profiting from war, at the very least they should have absolutely no say in whether we go to war or not.

And yet, we do allow defense contractors to spend on lobbying—and spend they do, tens of millions of dollars every year. Does all this lobbying affect our military budget or our willingness to go to war?

They must think so.

Hypocrisy is underrated

Sep 12 JDN 2459470

Hypocrisy isn’t a good thing, but it isn’t nearly as bad as most people seem to think. Often accusing someone of hypocrisy is taken as a knock-down argument for everything they are saying, and this is just utterly wrong. Someone can be a hypocrite and still be mostly right.

Often people are accused of hypocrisy when they are not being hypocritical; for instance the right wing seems to think that “They want higher taxes on the rich, but they are rich!” is hypocrisy, when in fact it’s simply altruism. (If they had wanted the rich guillotined, that would be hypocrisy. Maybe the problem is that the right wing can’t tell the difference?) Even worse, “They live under capitalism but they want to overthrow capitalism!” is not even close to hypocrisy—after all, how would someone overthrow a system they weren’t living under? (There are many things wrong with Marxists, but that is not one of them.)

But in fact I intend something stronger: Hypocrisy itself just isn’t that bad.


There are currently two classes of Republican politicians with regard to the COVID vaccines: Those who are consistent in their principles and don’t get the vaccines, and those who are hypocrites and get the vaccines while telling their constituents not to. Of the two, who is better? The hypocrites. At least they are doing the right thing even as they say things that are very, very wrong.

There are really four cases to consider. The principles you believe in could be right, or they could be wrong. And you could follow those principles, or you could be a hypocrite. These two factors are independent of each other.

If your principles are right and you are consistent, that’s the best case; if your principles are right and you are a hypocrite, that’s worse.

But if your principles are wrong and you are consistent, that’s the worst case; if your principles are wrong and you are a hypocrite, that’s better.

In fact I think for most things the ordering goes like this: Consistent Right > Hypocritical Wrong > Hypocritical Right > Consistent Wrong. Your behavior counts for more than your principles—so if you’re going to be a hypocrite, it’s better for your good actions to not match your bad principles.

Obviously if we could get people to believe good moral principles and then follow them, that would be best. And we should in fact be working to achieve that.

But if you know that someone’s moral principles are wrong, it doesn’t accomplish anything to accuse them of being a hypocrite. If the accusation is true, that’s actually a good thing.

Here’s a pretty clear example for you: Anyone who says that the Bible is infallible but doesn’t want gay people stoned to death is a hypocrite. The Bible is quite clear on this matter; Leviticus 20:13 really doesn’t leave much room for interpretation. By this standard, most Christians are hypocrites—and thank goodness for that. I owe my life to the hypocrisy of millions.

Of course if I could convince them that the Bible isn’t infallible—perhaps by pointing out all the things it says that contradict their most deeply-held moral and factual beliefs—that would be even better. But the last thing I want to do is make their behavior more consistent with their belief that the Bible is infallible; that would turn them into fanatical monsters. The Spanish Inquisition was very consistent in behaving according to the belief that the Bible is infallible.

Here’s another example: Anyone who thinks that cruelty to cats and dogs is wrong but is willing to buy factory-farmed beef and ham is a hypocrite. Any principle that would tell you that it’s wrong to kick a dog or cat would tell you that the way cows and pigs are treated in CAFOs is utterly unconscionable. But if you are really unwilling to give up eating meat and you can’t find or afford free-range beef, it still would be bad for you to start kicking dogs in a display of your moral consistency.

And one more example for good measure: The leaders of any country who resist human rights violations abroad but tolerate them at home are hypocrites. Obviously the best thing to do would be to fight human rights violations everywhere. But perhaps for whatever reason you are unwilling or unable to do this—one disturbing truth is that many human rights violations at home (such as draconian border policies) are often popular with your local constituents. Human rights violations abroad are also often more severe—detaining children at the border is one thing, a full-scale genocide is quite another. So, for good reasons or bad, you may decide to focus your efforts on resisting human rights violations abroad rather than at home; this would make you a hypocrite. But it would still make you much better than a more consistent leader who simply ignores all human rights violations wherever they may occur.

In fact, there are cases in which it may be optimal for you to knowingly be a hypocrite. If you have two sets of competing moral beliefs, and you don’t know which is true but you know that as a whole they are inconsistent, your best option is to apply each set of beliefs in the domain for which you are most confident that it is correct, while searching for more information that might allow you to correct your beliefs and reconcile the inconsistency. If you are self-aware about this, you will know that you are behaving in a hypocritical way—but you will still behave better than you would if you picked the wrong beliefs and stuck to them dogmatically. In fact, given a reasonable level of risk aversion, you’ll be better off being a hypocrite than you would by picking one set of beliefs arbitrarily (say, at the flip of a coin). At least then you avoid the worst-case scenario of being the most wrong.
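For the quantitatively inclined, here is a minimal decision-theoretic sketch of that argument in Python. Every number in it is an illustrative assumption of mine (a 0.8 confidence in each belief set within its own domain, a square-root utility function standing in for risk aversion), not anything taken from the argument above; the point is only to show how applying each belief set where you trust it most raises expected utility and shrinks the chance of the worst case.

# A minimal sketch of the argument above (illustrative numbers only).
# Two domains, two mutually inconsistent belief sets; each set is more
# likely to be correct in "its" domain. Compare a "hypocrite" (apply each
# set where you trust it most) with a "dogmatist" (pick one set at random
# and apply it everywhere), using a concave (risk-averse) utility over the
# number of domains handled rightly.

from math import sqrt
from itertools import product

P_CORRECT = 0.8          # assumed confidence that a set is right in its own domain
utility = sqrt           # any concave function models risk aversion

def outcome_distribution(p_right_in_A, p_right_in_B):
    """Return {number of domains gotten right: probability}, domains independent."""
    dist = {}
    for right_A, right_B in product([True, False], repeat=2):
        p = (p_right_in_A if right_A else 1 - p_right_in_A) * \
            (p_right_in_B if right_B else 1 - p_right_in_B)
        total = int(right_A) + int(right_B)
        dist[total] = dist.get(total, 0.0) + p
    return dist

def expected_utility(dist):
    return sum(p * utility(total) for total, p in dist.items())

# Hypocrite: set 1 in domain A, set 2 in domain B (each where it's most trusted).
hypocrite = outcome_distribution(P_CORRECT, P_CORRECT)

# Dogmatist: flip a coin to pick one set, then apply it in both domains.
dogmatist_set1 = outcome_distribution(P_CORRECT, 1 - P_CORRECT)
dogmatist_set2 = outcome_distribution(1 - P_CORRECT, P_CORRECT)
dogmatist = {k: 0.5 * dogmatist_set1.get(k, 0) + 0.5 * dogmatist_set2.get(k, 0)
             for k in (0, 1, 2)}

print("hypocrite:", hypocrite, "EU =", round(expected_utility(hypocrite), 3))
print("dogmatist:", dogmatist, "EU =", round(expected_utility(dogmatist), 3))

With these assumed numbers, the hypocrite ends up wrong in both domains only 4% of the time, versus 16% for the dogmatist; that worst case is exactly the outcome a risk-averse agent most wants to avoid.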

There is yet another factor to take into consideration. Sometimes following your own principles is hard.

Considerable ink has been spilled on the concept of akrasia, or “weakness of will”, in which we judge that A is better than B yet still find ourselves doing B. Philosophers continue to debate to this day whether this really happens. As a behavioral economist, I observe it routinely, perhaps even daily. In fact, I observe it in myself.

I think the philosophers’ mistake is to presume that there is one simple, well-defined “you” that makes all observations and judgments and takes actions. Our brains are much more complicated than that. There are many “you”s inside your brain, each with its own capacities, desires, and judgments. Yes, there is some important sense in which they are all somehow unified into a single consciousness—by a mechanism which still eludes our understanding. But it doesn’t take esoteric cognitive science to see that there are many minds inside you: Haven’t you ever felt an urge to do something you knew you shouldn’t do? Haven’t you ever succumbed to such an urge—drank the drink, eaten the dessert, bought the shoes, slept with the stranger—when it seemed so enticing but you knew it wasn’t really the right choice?

We even speak of being “of two minds” when we are ambivalent about something, and I think there is literal truth in this. The neural networks in your brain are forming coalitions, and arguing between them over which course of action you ought to take. Eventually one coalition will prevail, and your action will be taken; but afterward your reflective mind need not always agree that the coalition which won the vote was the one that deserved to.

The evolutionary reason for this is simple: We’re a kludge. We weren’t designed from the top down for optimal efficiency. We were the product of hundreds of millions of years of subtle tinkering, adding a bit here, removing a bit there, layering the mammalian, reflective cerebral cortex over the reptilian, emotional limbic system over the ancient, involuntary autonomic system. Combine this with the fact that we are built in pairs, with left and right halves of each kind of brain (and yes, they are independently functional when their connection is severed), and the wonder is that we ever agree with our own decisions.

Thus, there is a kind of hypocrisy that is not a moral indictment at all: You may genuinely and honestly agree that it is morally better to do something and still not be able to bring yourself to do it. You may know full well that it would be better to donate that money to malaria treatment rather than buy yourself that tub of ice cream—you may be on a diet and know full well that the ice cream won’t even benefit you in the long run—and still not be able to stop yourself from buying the ice cream.

Sometimes your feeling of hesitation at an altruistic act may be a useful insight; I certainly don’t think we should feel obliged to give all our income, or even all of our discretionary income, to high-impact charities. (For most people I encourage 5%. I personally try to aim for 10%. If all the middle-class and above in the First World gave even 1% we could definitely end world hunger.) But other times it may lead you astray, making you unable to resist the temptation of a delicious treat or a shiny new toy even when you know the world would be better off if you did otherwise.

Yet when following our own principles is so difficult, it’s not really much of a criticism to point out that someone has failed to do so, particularly when they themselves already recognize that they failed. The inconsistency between behavior and belief indicates that something is wrong, but that something may not be dishonesty, or even anything wrong with their beliefs.

I wouldn’t go so far as to say you should never call out hypocrisy. Sometimes it is clearly useful to do so. But while hypocrisy is often the sign of a moral failing, it isn’t always—and even when it is, as often as not the problem is the bad principles, not the behavior inconsistent with them.