Reversals in progress against poverty

Jan 16 JDN 2459606

I don’t need to tell you that the COVID pandemic has been very bad for the world. Yet perhaps the worst outcome of the pandemic is one that most people don’t recognize: It has reversed years of progress against global poverty.

Estimates of the number of people who will be thrown into extreme poverty as a result of the pandemic are consistently around 100 million, though some forecasts have predicted this will rise to 150 million, or, in the most pessimistic scenarios, even as high as 500 million.

Pre-COVID projections showed the global poverty rate falling steadily from 8.4% in 2019 to 6.3% by 2030. But COVID resulted in the first upward surge in global poverty in decades, and updated models now suggest that the global poverty rate in 2030 will be as high as 7.0%. That difference of 0.7 percentage points, applied to a forecasted population of 8.5 billion, works out to roughly 60 million people.
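That back-of-the-envelope calculation is easy to check (a quick sketch in Python; the rates and the 8.5 billion population figure are the projections quoted above):

```python
# Projected global poverty rates for 2030 (figures quoted in the text)
pre_covid_rate = 0.063   # 6.3%: pre-COVID projection
post_covid_rate = 0.070  # 7.0%: updated post-COVID projection
population_2030 = 8.5e9  # forecasted world population in 2030

# Extra people in extreme poverty = rate difference times population
extra_poor = (post_covid_rate - pre_covid_rate) * population_2030
print(f"Additional people in extreme poverty: {extra_poor / 1e6:.1f} million")
# 0.7 percentage points of 8.5 billion is about 59.5 million people
```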

This is a terrible reversal of fortune, and a global tragedy. Tens or perhaps even hundreds of millions of people will suffer the pain of poverty because of this global pandemic and the numerous missteps by many of the world’s governments—not least the United States—in response to it.

Yet it’s important to keep in mind that this is a short-term reversal in a long-term trend toward reduced poverty. Yes, the most optimistic predictions are turning out to be wrong—but the general pattern of dramatic reductions in global poverty over the late 20th and early 21st centuries is still holding up.

That post-COVID estimate of a 7.0% global poverty rate needs to be set against the fact that as recently as 1980, the global poverty rate at the same (inflation- and purchasing-power-adjusted) income threshold was a whopping 44%.

This pattern makes me feel deeply ambivalent about the effects of globalization on inequality. While it now seems clear that globalization has exacerbated inequality within First World countries—and triggered a terrible backlash of right-wing populism as a result—it also seems clear that globalization was a major reason for the dramatic reductions in global poverty in the past few decades.

I think the best answer I’ve been able to come up with is that globalization is overall a good thing, and we must continue it—but we also need to be much more mindful of its costs, and we must make policy that mitigates those costs. Expanded trade has winners and losers, and we should be taxing the winners to compensate the losers. To make good economic policy, it simply isn’t enough to increase aggregate GDP; you actually have to make life better for everyone (or at least as many people as you can).

Unfortunately, knowing what policies to make is only half the battle. We must actually implement those policies, which means winning elections, which means restoring the public’s faith in the authority of economic experts.

Some of the people voting for Donald Trump were just what Hillary Clinton correctly (if tone-deafly) referred to as “deplorables”: racists, misogynists, xenophobes. But I think that many others weren’t voting for Trump but against Clinton; they weren’t embracing far-right populism but rather rejecting center-left technocratic globalization. They were tired of being told what to do by experts who didn’t seem to care about them or their interests.

And the thing is, they were right about that. Not about voting for Trump—that’s unforgivable—but about the fact that expert elites had been ignoring their interests and needed a wake-up call. There were a hundred better ways of making that wake-up call that didn’t involve putting a narcissistic, incompetent maniac in charge of the world’s largest economy, military and nuclear arsenal, and millions of people should be ashamed of themselves for not taking those better options. Yet the fact remains: The wake-up call was necessary, and we should be responding to it.

We expert elites (I think I can officially carry that card, now that I have a PhD and a faculty position at a leading research university) need to do a much better job of two things: First, articulating the case for our policy recommendations in a way that ordinary people can understand, so that those policies feel justified rather than simply rammed down people’s throats; and second, recognizing the costs and downsides of these policies and taking action to mitigate them whenever possible.

For instance: Yes, we need to destroy all the coal jobs. They are killing workers and the planet. Coal companies need to be transitioned to new industries or else shut down. This is not optional. It must be done. But we also need to explain to those coal miners why it’s necessary to move on from coal to solar and nuclear, and we need to be implementing various policies to help those workers move on to better, safer jobs that pay as well and don’t involve filling their lungs with soot and the atmosphere with carbon dioxide. We need to articulate, emphasize—and loudly repeat—that this isn’t about hurting coal miners to help everyone else, but about helping everyone, coal miners included, and that if anyone gets hurt it will only be a handful of psychopathic billionaires who already have more money than any human being could possibly need or deserve.

Another example: We cannot stop trading with India and China. Hundreds of millions of innocent people would suddenly be thrown out of work and into poverty if we did. We need the products they make for us, and they need the money we pay for those products. But we must also acknowledge that trading with poor countries does put downward pressure on wages back home, and take action to help First World workers who are now forced to compete with global labor markets. Maybe this takes the form of better unemployment benefits, or job-matching programs, or government-sponsored job training. But we cannot simply shrug and let people lose their jobs and their homes because the factories they worked in were moved to China.

A very Omicron Christmas

Dec 26 JDN 2459575

Remember back in spring of 2020 when we thought that this pandemic would quickly get under control and life would go back to normal? How naive we were.

The newest Omicron strain seems to be the most infectious yet—even people who are fully vaccinated are catching it. The good news is that it also seems to be less deadly than most of the earlier strains. COVID is evolving to spread itself better while becoming less harmful to us—much as influenza and cold viruses evolved. While weekly cases are near an all-time peak, weekly deaths are well below the worst they have been.

Indeed, at this point, it’s looking like COVID will more or less be with us forever. In the most likely scenario, the virus will continue to evolve to be more infectious but less lethal, and then we will end up with another influenza on our hands: A virus that can’t be eradicated, gets huge numbers of people sick, but only kills a relatively small number. At some point we will decide that the risk of getting sick is low enough that it isn’t worth forcing people to work remotely or maybe even wear masks. And we’ll relax various restrictions and get back to normal with this new virus a regular part of our lives.

Merry Christmas?

But it’s not all bad news. The vaccination campaign has been staggeringly successful—the total number of vaccine doses administered now exceeds the world population, enough, in principle, for every human being on Earth to have received at least one dose.

And while 5.3 million deaths due to the virus over the last two years sounds terrible, it should be compared against the baseline rate of 15 million deaths during that same interval, and the fact that worldwide death rates have been rapidly declining. Had COVID not happened, 2021 would be like 2019, which had nearly the lowest death rate on record, at 7,579 deaths per million people per year. As it is, we’re looking at something more like 10,000 deaths per million people per year (1%), or roughly what we considered normal way back in the long-ago times of… the 1980s. To get even as bad as things were in the 1950s, we would have to double our current death rate.

Indeed, there’s something quite remarkable about the death rate we had in 2019, before the pandemic hit: 7,579 per million is only 0.76%. A being with a constant annual death rate of 0.76% would have a life expectancy of over 130 years. This very low death rate is partly due to demographics: The current world population is unusually young and healthy because the world recently went through huge surges in population growth. Due to demographic changes the UN forecasts that our death rate will start to climb again as fertility falls and the average age increases; but they are still predicting it will stabilize at about 11,200 per million per year, which would be a life expectancy of 90. And that estimate could well be too pessimistic, if medical technology continues advancing at anything like its current rate.
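The life-expectancy conversions above follow from a simple approximation: a constant annual death rate implies exponential survival, so expected lifespan is just the reciprocal of the rate. A quick sketch:

```python
def life_expectancy(deaths_per_million_per_year):
    """Life expectancy for a hypothetical being with a constant annual
    death rate: exponential survival implies expectancy = 1 / rate."""
    rate = deaths_per_million_per_year / 1_000_000
    return 1 / rate

print(life_expectancy(7_579))   # 2019 world death rate -> ~132 years
print(life_expectancy(11_200))  # UN long-run forecast  -> ~89 years
```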

We call it Christmas, but it’s really a syncretized amalgamation of holidays: Yule, Saturnalia, various Solstice celebrations. (Indeed, there’s no particular reason to think Jesus was even born in December.) Most Northern-hemisphere civilizations have some sort of Solstice holiday, and we’ve greedily co-opted traditions from most of them. The common theme really seems to be this:

Now it is dark, but band together and have hope, for the light shall return.

Diurnal beings in northerly latitudes instinctively fear the winter, when it becomes dark and cold and life becomes more hazardous—but we have learned to overcome this fear together, and we remind ourselves that light and warmth will return by ritual celebrations.

The last two years have made those celebrations particularly difficult, as we have needed to isolate ourselves in order to keep ourselves and others safe. Humans are fundamentally social at a level most people—even most scientists—do not seem to grasp: We need contact with other human beings as deeply and vitally as we need food or sleep.

The Internet has allowed us to get some level of social contact while isolated, which has been a tremendous boon; but I think many of us underestimated how much we would miss real face-to-face contact. I think much of the vague sense of malaise we’ve all been feeling even when we aren’t sick and even when we’ve largely adapted our daily routine to working remotely comes from this: We just aren’t getting the chance to see people in person nearly as often as we want—as often as we hadn’t even realized we needed.

So, if you do travel to visit family this holiday season, I understand your need to do so. But be careful. Get vaccinated—three times, if you can. Don’t have any contact with others who are at high risk if you do have any reason to think you’re infected.

Let’s hope next Christmas is better.

Does power corrupt?

Nov 7 JDN 2459526

It’s a familiar saying, originally attributed to Lord Acton: “Power tends to corrupt, and absolute power corrupts absolutely. Great men are nearly always bad men.”

I think this saying is not only wrong, but in fact dangerous. We can all observe plenty of corrupt people in power, that much is true. But if it’s simply the power that corrupts them, and they started as good people, then there’s really nothing to be done. We may try to limit the amount of power any one person can have, but in any large, complex society there will be power, and so, if the saying is right, there will also be corruption.

How do I know that this saying is wrong?

First of all, note that corruption varies tremendously, and with very little correlation with most sensible notions of power.

Consider used car salespeople, stockbrokers, drug dealers, and pimps. All of these professions are rather well known for their high level of corruption. Yet are people in these professions powerful? Yes, any manager has some power over their employees; but there’s no particular reason to think that used car dealers have more power over their employees than grocery stores do, and yet there’s a very clear sense in which used car dealers are more corrupt.

Even power on a national scale is not inherently tied to corruption. Consider the following individuals: Nelson Mandela, Mahatma Gandhi, Abraham Lincoln, and Franklin Roosevelt.

These men were extremely powerful; each led an entire nation. Indeed, during his administration, FDR was probably the most powerful person in the world. And they certainly were not impeccable: Mandela was a good friend of Fidel Castro, Gandhi abused his wife, Lincoln suspended habeas corpus, and of course FDR ordered the internment of Japanese-Americans. Yet overall I think it’s pretty clear that these men were not especially corrupt and had a large positive impact on the world.

Say what you will about Bernie Sanders, Dennis Kucinich, or Alexandria Ocasio-Cortez. Idealistic? Surely. Naive? Perhaps. Unrealistic? Sometimes. Ineffective? Often. But they are equally as powerful as anyone else in the US Congress, and ‘corrupt’ is not a word I’d use to describe them. Mitch McConnell, on the other hand….

There does seem to be a positive correlation between a country’s level of corruption and its level of authoritarianism; the most democratic countries—Scandinavia—are also the least corrupt. Yet India is surely more democratic than China, but is widely rated as about the same level of corruption. Greece is not substantially less democratic than Chile, but it has considerably more corruption. So even at a national level, power is not the only determinant of corruption.

I’ll even agree to the second clause: “absolute power corrupts absolutely.” Were I somehow granted an absolute dictatorship over the world, one of my first orders of business would be to establish a new democratic world government to replace my dictatorial rule. (Would it be my first order of business, or would I implement some policy reforms first? Now that’s a tougher question. I think I’d want to implement some kind of income redistribution and anti-discrimination laws before I left office, at least.) And I believe that most good people think similarly: We wouldn’t want to have that kind of power over other people. We wouldn’t trust ourselves to never abuse it. Anyone who maintains absolute power is either already corrupt or likely to become so. And anyone who seeks absolute power is precisely the sort of person who should not be trusted with power at all.

It may also be that power is one determinant of corruption—that a given person will generally end up more corrupt if you give them more power. This might help explain why even the best ‘great men’ are still usually bad men. But clearly there are other determinants that are equally important.

And I would like to offer a different hypothesis to explain the correlation between power and corruption, which has profoundly different implications: The corrupt seek power.

Donald Trump didn’t start out a good man and become corrupt by becoming a billionaire or becoming President. Donald Trump was born a narcissistic idiot.

Josef Stalin wasn’t a good man who became corrupted by the unlimited power of ruling the Soviet Union. Josef Stalin was born a psychopath.

Indeed, when you look closely at how corrupt leaders get into power, it often involves manipulating and exploiting others on a grand scale. They are willing to compromise principles that good people wouldn’t. They aren’t corrupt because they got into power; they got into power because they are corrupt.

Let me be clear: I’m not saying we should compromise all of our principles in order to achieve power. If there is a route by which power corrupts, it is surely that. Rather, I am saying that we must maintain constant vigilance against anyone who seems so eager to attain power that they will compromise principles to do it—for those are precisely the people who are likely to be most dangerous if they should achieve their aims.

Moreover, I’m saying that “power corrupts” is actually a very dangerous message. It tells good people not to seek power, because they would be corrupted by it. But in fact what we actually need in order to get good people in power is more good people seeking power, more opportunities to out-compete the corrupt. If Congress were composed entirely of people like Alexandria Ocasio-Cortez, then the left-wing agenda would no longer seem naive and unrealistic; it would simply be what gets done. (Who knows? Maybe it wouldn’t work out so well after all. But it definitely would get done.) Yet how many idealistic left-wing people have heard that phrase ‘power corrupts’ too many times, and decided they didn’t want to risk running for office?

Indeed, the notion that corruption is inherent to the exercise of power may well be the greatest tool we have ever given to those who are corrupt and seeking to hold onto power.

Realistic open borders

Sep 5 JDN 2459463

In an earlier post I lamented the tight restrictions on border crossings that prevail even between allied First World countries. (On a personal note, you’ll be happy to know that our visas have cleared and we are now moved into Edinburgh, cat and all, though we are still in temporary housing and our official biometric residence permits haven’t yet arrived.)

In this post I’d like to speculate on how we might get from our current regime to something more like open borders.

Obviously we can’t simply remove all border restrictions immediately. That would be a political non-starter, and even ethically or economically it wouldn’t make very much sense. There are sensible reasons behind some of our border regulations—just not most of them.

Instead we would want to remove a few restrictions at a time, starting with the most onerous or ridiculous ones.

High on my list in the UK in particular would be the requirement that pets must fly as cargo. I literally can’t think of a good reason for this; it seems practically designed to cost travelers more money and traumatize as many pets as possible. If it’s intended to support airlines somehow, please simply subsidize airlines. (But really, why are you doing that? You should be taxing airlines because of their high carbon emissions. Subsidize boats and trains.) If it’s intended to somehow prevent the spread of rabies, it’s obviously unnecessary, since every pet moved to the UK already has to document a recent rabies vaccine. But this rule seems to be a quirk of the UK in particular, and hence not very generalizable.

But here’s one that actually seems quite common: Financial requirements for visas. Even tourist visas in most countries cost money, in amounts that seem to vary according to some sort of occult ritual. I can see no sensible economic reason why a visa would be $130 in Vietnam but only $20 in neighboring Cambodia, or why Kazakhstan can be visited for $25 but Azerbaijan costs $100, or why Myanmar costs only $30 but Bhutan will run you over $200.

Work visas are considerably more demanding still.

Financial requirements in the UK are especially onerous; you have to make above a certain salary and have a certain amount of savings in the bank, based on your family size. This was no problem for me personally, but it damn well shouldn’t be; I have a PhD in economics. My salary is now twice what it was as a grad student, and honestly that’s a good deal less than I was hoping for (and would have gotten on the tenure track at an R1 university).

All the countries in the Schengen Area have their own requirements for “financial subsistence” for visa applications, ranging from a trivial €3 in Hungary (not per day, just total; why do they even bother?) and a manageable €14 per day in Latvia, through the more demanding €45 per day in Germany and Italy, to €92 per day in Switzerland and Liechtenstein, all the way up to the utterly unreasonable €120 per day in France. That would be €43,800 per year, or about $51,700. Apparently you must be at least middle class to enter France.
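To check the annualized figure for France (a quick sketch; the dollar conversion assumes an exchange rate of roughly $1.18 per euro, as at the time of writing):

```python
per_day_eur = 120                      # France's "financial subsistence" requirement
per_year_eur = per_day_eur * 365       # annualized
usd_per_eur = 1.18                     # assumed exchange rate
per_year_usd = per_year_eur * usd_per_eur

print(f"EUR {per_year_eur:,} per year, about ${per_year_usd:,.0f}")
# 120 * 365 = 43,800 euros; at ~1.18 $/EUR, that's about $51,700
```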

Canada has a similar requirement known as “proof of funds”, but it’s considerably more reasonable, since you can substitute proof of employment and there are no wage minimums for such employment. Even if you don’t already have a job you can still apply and the minimum requirement is actually lower than the poverty line in Canada.

The United States doesn’t impose financial requirements for most visas, but it does charge a $160 visa fee. And the H1-B visa in particular (the nearest equivalent to the Skilled Worker visa I’ve got in the UK) requires that your wage or salary be at least the “prevailing wage” in your industry—meaning it is nearly impossible for a company to save money by hiring people on H1-B visas, and hence they have very little incentive to hire H1-B workers. If you are of above-average talent and being paid only average wages, I guess they can save some money that way. But this is not how trade is supposed to work—nobody requires that you pay US prices for goods shipped from China, and if they did, nobody would ever buy anything from China. This is blatant, naked protectionism—but we’re apparently okay with it as long as it’s trade in labor instead of goods.

I wasn’t able to quickly find whether there are similar financial requirements in other countries. Perhaps there aren’t; these are the countries most people actually want to move to anyway. Permanent migration is overwhelmingly toward OECD (read: First World) countries, and is actually helping us sustain our populations in the face of low birth rates.

I must admit, I can see some fiscal benefits for a country not allowing poor people in, but this practice raises some very deep ethical problems: What right do we have to do this?

If someone is born poor in Laredo, Texas, we take responsibility for them as a US citizen. Maybe we don’t treat them particularly well (that is Texas, after all), but we do give them access to certain basic services, such as emergency services, Medicaid, TANF and SNAP. They are allowed to vote, own property, and even hold office in the United States. But if that same person were born in Nuevo Laredo, Tamaulipas—literally less than a mile away, right across the river—they would receive none of these benefits. They would not even be allowed to cross the river without a passport and a visa.

In some ways the contrast is even more dire if we consider a more liberal US state. A poor person born in Chula Vista, California has access to the full array of California services; Medi-Cal is honestly something close to a single-payer healthcare system, though the full morass of privatized US healthcare is layered on top of it. Then there is CalWORKS, CalFresh, and so on. But the same person born in Tijuana, Baja California would get none of these benefits.

They could be the same person. They could look the same and have essentially the same culture—even the same language, given how many Californians speak Spanish and how many Mexicans speak English. But if they were born on the other side of a river (in Texas) or even an arbitrary line (in California), we treat them completely differently. And then to add insult to injury, we won’t even let them across, not in spite of, but because of, how poor and desperate they are. If they were rich and educated, we’d let them come across—but then why would they need to?

“Give me your tired, your poor, your huddled masses yearning to breathe free”?

Some restrictions may apply.

Economists talk often of “trade barriers”, but in real terms we have basically removed all trade barriers in goods. Yes, there are still some small tariffs, and the occasional quota here and there—and these should go away too, especially the quotas, because they don’t even raise revenue—but in general we have an extremely globalized economy in terms of goods. The same complex product, like a car or a smartphone, is often made of parts from a dozen countries.

But when it comes to labor, we are still living in a protectionist world. Crossing borders to work is difficult, time-consuming, and above all, expensive. This dramatically reduces opportunities for workers to move where their labor is most valued—which hurts not only them, but also anyone who would employ them or buy products made by them. The poorest people are those who stand to gain the most from crossing borders, and they are precisely the ones that we work hardest to forbid.

So let’s start with that, shall we? We can keep all this nonsense about passports, visas, background checks, and customs inspections. It’s probably all unnecessary and wasteful and unfair, but politically it’s clearly too popular to remove. Let’s just remove this: No more financial requirements or fees for work visas. If you want to come to another country to work, you have to go through an application and all that; fine. But you shouldn’t have to prove you aren’t poor. Poor people have just as much right to live here as anybody else—and if we let them do so, they’d be a lot less poor.

How to change minds

Aug 29 JDN 2459456

Think for a moment about the last time you changed your mind on something important. If you can’t think of any examples, that’s not a good sign. Think harder; look back further. If you still can’t find any examples, you need to take a deep, hard look at yourself and how you are forming your beliefs. The path to wisdom is not found by starting with the right beliefs, but by starting with the wrong ones and recognizing them as wrong. No one was born getting everything right.

If you remember changing your mind about something, but don’t remember exactly when, that’s not a problem. Indeed, this is the typical case, and I’ll get to why in a moment. Try to remember as much as you can about the whole process, however long it took.

If you still can’t specifically remember changing your mind, try to imagine a situation in which you would change your mind—and if you can’t do that, you should be deeply ashamed and I have nothing further to say to you.

Thinking back to that time: Why did you change your mind?

It’s possible that it was something you did entirely on your own, through diligent research of primary sources or even your own mathematical proofs or experimental studies. This is occasionally something that happens; as an active researcher, it has definitely happened to me. But it’s clearly not the typical case of what changes people’s minds, and it’s quite likely that you have never experienced it yourself.

The far more common scenario—even for active researchers—is far more mundane: You changed your mind because someone convinced you. You encountered a persuasive argument, and it changed the way you think about things.

In fact, it probably wasn’t just one persuasive argument; it was probably many arguments, from multiple sources, over some span of time. It could be as little as minutes or hours; it could be as long as years.

Probably the first time someone tried to change your mind on that issue, they failed. The argument may even have degenerated into shouting and name-calling. You both went away thinking that the other side was composed of complete idiots or heartless monsters. And then, a little later, thinking back on the whole thing, you remembered one thing they said that was actually a pretty good point.

This happened again with someone else, and again with yet another person. And each time your mind changed just a little bit—you became less certain of some things, or incorporated some new information you didn’t know before. The towering edifice of your worldview would not be toppled by a single conversation—but a few bricks here and there did get taken out and replaced.

Or perhaps you weren’t even the target of the conversation; you simply overheard it. This seems especially common in the age of social media, where public and private spaces become blurred and two family members arguing about politics can blow up into a viral post that is viewed by millions. Perhaps you changed your mind not because of what was said to you, but because of what two other people said to one another; perhaps the one you thought was on your side just wasn’t making as many good arguments as the one on the other side.

Now, you may be thinking: Yes, people like me change our minds, because we are intelligent and reasonable. But those people, on the other side, aren’t like that. They are stubborn and foolish and dogmatic and stupid.

And you know what? You probably are an especially intelligent and reasonable person. If you’re reading this blog, there’s a good chance that you are at least above-average in your level of education, rationality, and open-mindedness.

But no matter what beliefs you hold, I guarantee you there is someone out there who shares many of them and is stubborn and foolish and dogmatic and stupid. And furthermore, there is probably someone out there who disagrees with many of your beliefs and is intelligent and open-minded and reasonable.

This is not to say that there’s no correlation between your level of reasonableness and what you actually believe. Obviously some beliefs are more rational than others, and rational people are more likely to hold those beliefs. (If this weren’t the case, we’d be doomed.) Other things equal, an atheist is more reasonable than a member of the Taliban; a social democrat is more reasonable than a neo-Nazi; a feminist is more reasonable than a misogynist; a member of the Human Rights Campaign is more reasonable than a member of the Westboro Baptist Church. But reasonable people can be wrong, and unreasonable people can be right.

You should be trying to seek out the most reasonable people who disagree with you. And you should be trying to present yourself as the most reasonable person who expresses your own beliefs.

This can be difficult—especially that first part, as the world (or at least the world spanned by Facebook and Twitter) seems to be filled with people who are astonishingly dogmatic and unreasonable. Often you won’t be able to find any reasonable disagreement. Often you will find yourself in threads full of rage, hatred and name-calling, and you will come away disheartened, frustrated, or even despairing for humanity. The whole process can feel utterly futile.

And yet, somehow, minds change.

Support for same-sex marriage in the US rose from 27% to 70% just since 1997.

Read that date again: 1997. Less than 25 years ago.

The proportion of new marriages which were interracial has risen from 3% in 1967 to 19% today. Given the racial demographics of the US, this is almost at the level of random assortment.

Ironically I think that the biggest reason people underestimate the effectiveness of rational argument is the availability heuristic: We can’t call to mind any cases where we changed someone’s mind completely. We’ve never observed a pi-radian turnaround in someone’s whole worldview, and thus, we conclude that nobody ever changes their mind about anything important.

But in fact most people change their minds slowly and gradually, and are embarrassed to admit they were wrong in public, so they change their minds in private. (One of the best single changes we could make toward improving human civilization would be to make it socially rewarded to publicly admit you were wrong. Even the scientific community doesn’t do this nearly as well as it should.) Often changing your mind doesn’t even really feel like changing your mind; you just experience a bit more doubt, learn a bit more, and repeat the process over and over again until, years later, you believe something different than you did before. You moved 0.1 or even 0.01 radians at a time, until at last you came all the way around.

It may be in fact that some people’s minds cannot be changed—either on particular issues, or even on any issue at all. But it is so very, very easy to jump to that conclusion after a few bad interactions, that I think we should intentionally overcompensate in the opposite direction: Only give up on someone after you have utterly overwhelming evidence that their mind cannot ever be changed in any way.

I can’t guarantee that this will work. Perhaps too many people are too far gone.

But I also don’t see any alternative. If the truth is to prevail, it will be by rational argument. This is the only method that systematically favors the truth. All other methods give equal or greater power to lies.

Capitalism can be fair

Aug 22 JDN 2459449

There are certainly extreme right-wing libertarians who seem to think that capitalism is inherently fair, or that “fairness” is meaningless and (some very carefully defined notion of) liberty is the only moral standard. I am not one of them. I agree that many of the actual practices of modern capitalism as we know it are unfair, particularly in the treatment of low-skill workers.

But lately I’ve been seeing a weirdly frequent left-wing take—Marxist take, really—that goes to the opposite extreme, saying that capitalism is inherently unfair, that the mere fact that capital owners ever get any profit on anything is proof that the system is exploitative and unjust and must be eliminated.

So I decided it would be worthwhile to provide a brief illustration of how, at least in the best circumstances, a capitalist system of labor can in fact be fair and just.

The argument that capitalism is inherently unjust seems to be based on the notion that profit means “workers are paid less than their labor is worth”. I think that the reason this argument is so insidious is that it’s true in one sense—but not true in another. Workers are indeed paid less than the total surplus of their actual output—but, crucially, they are not paid less than what the surplus of their output would have been had the capital owner not provided capital and coordination.

Suppose that we are making some sort of product. To make it more concrete, let’s say shirts. You can make a shirt by hand, but it’s a lot of work, and it takes a long time. Suppose that you, working on your own by hand, can make 1 shirt per day. You can sell each shirt for $10, so you get $10 per day.

Then, suppose that someone comes along who owns looms and sewing machines. They gather you and several other shirt-makers and offer to let you use their machines, in exchange for some of the revenue. With the aid of 9 other workers and the machines, you are able to make 30 shirts per day. You can still sell each shirt for $10, so now there is total revenue of $300.

Whether or not this is fair depends on precisely the bargain that was struck with the owner of the machines. Suppose that he asked for 40% of the revenue. Then the 10 workers including yourself would get (0.60)($300) = $180 to split, presumably evenly, and each get $18 per day. This seems fair; you’re clearly better off than you were making shirts by yourself. The capital owner then gets (0.40)($300) = $120, which is more than each of you, but not by a ridiculous amount; and he probably has costs to deal with in maintaining those machines.

But suppose instead the owner had demanded 80% of the revenue; then you would have to split (0.20)($300) = $60 between you, and each would only get $6 per day. The capital owner would then get (0.80)($300) = $240, 40 times as much as each of you.
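The arithmetic of these two scenarios is simple enough to sketch in a few lines of Python (a minimal illustration using only the numbers from the example above):

```python
# Revenue-sharing arithmetic from the shirt example above.
SHIRTS_PER_DAY = 30
PRICE_PER_SHIRT = 10   # $ per shirt
NUM_WORKERS = 10
SOLO_EARNINGS = 10     # $ per day making 1 shirt by hand

def daily_pay(owner_share):
    """Return (per-worker pay, owner pay) for a given owner revenue share."""
    revenue = SHIRTS_PER_DAY * PRICE_PER_SHIRT  # $300 total
    owner_pay = owner_share * revenue
    worker_pay = (1 - owner_share) * revenue / NUM_WORKERS
    return worker_pay, owner_pay

for share in (0.40, 0.80):
    worker, owner = daily_pay(share)
    print(f"owner takes {share:.0%}: worker ${worker:.0f}/day, "
          f"owner ${owner:.0f}/day, beats working alone: {worker > SOLO_EARNINGS}")
```

Under the 40% deal each worker beats the $10 per day they could earn alone; under the 80% deal they do not, which is exactly where the arrangement starts to look exploitative.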

Or perhaps instead of a revenue-sharing agreement, the owner offers to pay you a wage. If that wage is $18 per day, it seems fair. If it is $6 per day, it seems obviously unfair.

If this owner is the only employer, then he is competing only with the option of working alone. So we would expect him to offer a wage of $10 per day, or maybe slightly more since working with the machines may be harder or more unpleasant than working by hand.

But if there are many employers, then he is now competing with those employers as well. If he offers $10, someone else might offer $12, and a third might offer $15. Competition should drive the system toward an equilibrium where workers are getting paid their marginal value product—in other words, the wage for one hour of work should equal the additional value added by one more hour of work.
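A hedged toy model of that equilibrium condition (the production schedule below is invented purely for illustration; only the $10 shirt price comes from the example):

```python
# Toy model: a profit-maximizing owner hires while the marginal value
# product (the extra revenue from one more worker) is at least the wage.
PRICE = 10  # $ per shirt, as in the example

# Hypothetical total output (shirts/day) for n workers, with diminishing returns.
output = {0: 0, 1: 6, 2: 11, 3: 15, 4: 18, 5: 20}

def marginal_value_product(n):
    """Extra daily revenue added by the n-th worker."""
    return PRICE * (output[n] - output[n - 1])

def workers_hired(wage):
    """Hire workers as long as each additional one adds at least the wage."""
    n = 0
    while n + 1 in output and marginal_value_product(n + 1) >= wage:
        n += 1
    return n

for wage in (60, 40, 25):
    print(f"wage ${wage}/day -> {workers_hired(wage)} workers hired")
```

At the competitive wage, the last worker hired adds just about as much revenue as they are paid—the "marginal value product" condition described above.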

In the case that seems fair, where workers are getting more money than they would have on their own, are they getting paid “less than the value of their labor”? In one sense, yes; the total surplus is not going all to the workers, but is being shared with the owner of the machines. But the more important sense is whether they’d be better off quitting and working on their own—and they obviously would not be.

What value does the capital owner provide? Well, the capital, of course. It’s their property and they are letting other people use it. Also, they incur costs to maintain it.

Of course, it matters how the capital owner obtained that capital. If they are an inventor who made it themselves, it seems obviously just that they should own it. If they inherited it or got lucky on the stock market, it isn’t something they deserve in a deep sense, but it’s reasonable to say they are entitled to it. But if the only reason they have the capital is by theft, fraud, or exploitation, then obviously they don’t deserve it. In practice, there are very few of the first category, a huge number of the second, and all too many of the third. Yet this is not inherent to the capitalist work arrangement. Many capital owners don’t deserve what they own; but those who do have a right to make a profit letting other people use their property.

There are of course many additional complexities that arise in the real world, in terms of market power, bargaining, asymmetric information, externalities, and so on. I freely admit that in practice, capitalism is often unfair. But I think it’s worth pointing out that the mere existence of profit from capital ownership is not inherently unjust, and in fact by organizing our economy around it we have managed to achieve unprecedented prosperity.

Locked donation boxes and moral variation

Aug 8 JDN 2459435

I haven’t been able to find the quote, but I think it was Kahneman who once remarked: “Putting locks on donation boxes shows that you have the correct view of human nature.”

I consider this a deep insight. Allow me to explain.

Some people think that human beings are basically good. Rousseau is commonly associated with this view, a notion that, left to our own devices, human beings would naturally gravitate toward an anarchic but peaceful society.

The question for people who think this needs to be: Why haven’t we? If your answer is “government holds us back”, you still need to explain why we have government. Government was not imposed upon us from On High in time immemorial. We were fairly anarchic (though not especially peaceful) in hunter-gatherer tribes for nearly 200,000 years before we established governments. How did that happen?

And if your answer to that is “a small number of tyrannical psychopaths forced government on everyone else”, you may not be wrong about that—but it already breaks your original theory, because we’ve just shown that human society cannot maintain a peaceful anarchy indefinitely.

Other people think that human beings are basically evil. Hobbes is most commonly associated with this view, that humans are innately greedy, violent, and selfish, and only by the overwhelming force of a government can civilization be maintained.

This view more accurately predicts the level of violence and death that generally accompanies anarchy, and can at least explain why we’d want to establish government—but it still has trouble explaining how we would establish government. It’s not as if we’re ruled by a single ubermensch with superpowers, or an army of robots created by a mad scientist in a secret underground laboratory. Running a government involves cooperation on an absolutely massive scale—thousands or even millions of unrelated, largely anonymous individuals—and this cooperation is not maintained entirely by force: Yes, there is some force involved, but most of what a government does most of the time is mediated by norms and customs, and if a government did ever try to organize itself entirely by force—not paying any of the workers, not relying on any notion of patriotism or civic duty—it would immediately and catastrophically collapse.

What is the right answer? Humans aren’t basically good or basically evil. Humans are basically varied.

I would even go so far as to say that most human beings are basically good. They follow a moral code, they care about other people, they work hard to support others, they try not to break the rules. Nobody is perfect, and we all make various mistakes. We disagree about what is right and wrong, and sometimes we even engage in actions that we ourselves would recognize as morally wrong. But most people, most of the time, try to do the right thing.

But some people are better than others. There are great humanitarians, and then there are ordinary folks. There are people who are kind and compassionate, and people who are selfish jerks.

And at the very opposite extreme from the great humanitarians is the roughly 1% of people who are outright psychopaths. About 5-10% of people have significant psychopathic traits, but about 1% are really full-blown psychopaths.

I believe it is fair to say that psychopaths are in fact basically evil. They are incapable of empathy or compassion. Morality is meaningless to them—they literally cannot distinguish moral rules from other rules. Other people’s suffering—even their very lives—means nothing to them except insofar as it is instrumentally useful. To a psychopath, other people are nothing more than tools, resources to be exploited—or obstacles to be removed.

Some philosophers have argued that this means that psychopaths are incapable of moral responsibility. I think this is wrong. I think it relies on a naive, pre-scientific notion of what “moral responsibility” is supposed to mean—one that was inevitably going to be destroyed once we had a greater understanding of the brain. Do psychopaths understand the consequences of their actions? Yes. Do rewards motivate psychopaths to behave better? Yes. Does the threat of punishment motivate them? Not really, but it was never that effective on anyone else, either. What kind of “moral responsibility” are we still missing? And how would our optimal action change if we decided that they do or don’t have moral responsibility? Would you still imprison them for crimes either way? Maybe it doesn’t matter whether or not it’s really a blegg.

Psychopaths are a small portion of our population, but are responsible for a large proportion of violent crimes. They are also overrepresented in top government positions as well as police officers, and it’s pretty safe to say that nearly every murderous dictator was a psychopath of one shade or another.

The vast majority of people are not psychopaths, and most people don’t even have any significant psychopathic traits. Yet psychopaths have an enormously disproportionate impact on society—nearly all of it harmful. If psychopaths did not exist, Rousseau might be right after all; we wouldn’t need government. If most people were psychopaths, Hobbes would be right; we’d long for the stability and security of government, but we could never actually cooperate enough to create it.

This brings me back to the matter of locked donation boxes.

Having a donation box is only worthwhile if most people are basically good: Asking people to give money freely in order to achieve some good only makes any sense if people are capable of altruism, empathy, cooperation. And it can’t be just a few, because you’d never raise enough money to be useful that way. It doesn’t have to be everyone, or maybe even a majority; but it has to be a large fraction. 90% is more than enough.

But locking things is only worthwhile if some people are basically evil: For a lock to make sense, there must be at least a few people who would be willing to break in and steal the money, even if it was earmarked for a very worthy cause. It doesn’t take a huge fraction of people, but it must be more than a negligible one. 1% to 10% is just about the right sort of range.
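As a back-of-the-envelope sketch of these two conditions together (every number here is invented for illustration, including the simplifying assumption that a single thief empties an unlocked box):

```python
def expected_takings(n_passersby, donor_rate, thief_rate, avg_donation, locked):
    """Expected end-of-day contents of a donation box.

    Simplifying assumption: one theft empties the box entirely, so an
    unlocked box keeps its donations only if no thief passes by all day.
    """
    donations = n_passersby * donor_rate * avg_donation
    if locked:
        return donations
    p_no_theft = (1 - thief_rate) ** n_passersby
    return donations * p_no_theft

# Most people basically good (30% donate $2), a few basically evil (2% would steal):
for locked in (False, True):
    value = expected_takings(200, 0.30, 0.02, 2.0, locked)
    print(f"locked={locked}: expected ${value:.2f}")
```

Even a 2% thief rate makes the unlocked box nearly worthless over 200 passersby, while the box only pays off at all because a large fraction of people give freely.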

Hence, locked donation boxes are a phenomenon that would only exist in a world where most people are basically good—but some people are basically evil.

And this is in fact the world in which we live. It is a world where the Holocaust could happen but then be followed by the founding of the United Nations, a world where nuclear weapons would be invented and used to devastate cities, but then be followed by an era of nearly unprecedented peace. It is a world where governments are necessary to rein in violence, but also a world where governments can function (reasonably well) even in countries with hundreds of millions of people. It is a world with crushing poverty and people who work tirelessly to end it. It is a world where Exxon and BP despoil the planet for riches while WWF and Greenpeace fight back. It is a world where religions unite millions of people under a banner of peace and justice, and then go on crusades to murder thousands of other people who united under a different banner of peace and justice. It is a world of richness, complexity, uncertainty, conflict—variance.

It is not clear how much of this moral variance is innate versus acquired. If we somehow rewound the film of history and started it again with a few minor changes, it is not clear how many of us would end up the same and how many would be far better or far worse than we are. Maybe psychopaths were born the way they are, or maybe they were made that way by culture or trauma or lead poisoning. Maybe with the right upbringing or brain damage, we, too, could be axe murderers. Yet the fact remains—there are axe murderers, but we, and most people, are not like them.

So, are people good, or evil? Was Rousseau right, or Hobbes? Yes. Both. Neither. There is no one human nature; there are many human natures. We are capable of great good and great evil.

When we plan how to run a society, we must make it work the best we can with that in mind: We can assume that most people will be good most of the time—but we know that some people won’t, and we’d better be prepared for them as well.

Set out your donation boxes with confidence. But make sure they are locked.

Love the disabled, hate the disability

Aug 1 JDN 2459428

There is a common phrase Christians like to say: “Love the sinner, hate the sin.” This seems to be honored more in the breach than the observance, and many of the things that most Christians consider “sins” are utterly harmless or even good; but the principle is actually quite sound. You can disagree with someone or even believe that what they are doing is wrong while still respecting them as a human being. Indeed, my attitude toward religion is very much “Love the believer, hate the belief.” (Though somehow they don’t seem to like that one so much….)

Yet while ethically this is often the correct attitude, psychologically it can be very difficult for people to maintain. The Halo Effect is a powerful bias, and most people recoil instinctively from saying anything good about someone bad or anything bad about someone good. This can make it uncomfortable to simply state objective facts like “Hitler was a charismatic leader” or “Stalin was a competent administrator”—how dare you say something good about someone so evil? Yet in fact Hitler and Stalin could never have accomplished so much evil if they didn’t have these positive attributes—if we want to understand how such atrocities can occur and prevent them in the future, we need to recognize that evil people can also be charismatic and competent.

Halo Effect also makes it difficult for people to understand the complexities of historical figures who have facets of both great good and great evil: Thomas Jefferson led the charge on inventing modern democracy—but he also owned and raped slaves. Lately it seems like the left wants to deny the former and the right wants to deny the latter; but both are historical truths that are important to know.

Halo Effect is the best explanation I have for why so many disability activists want to deny that disabilities are inherently bad. They can’t keep in their head the basic principle of “Love the disabled, hate the disability.”

There is a large community of deaf people who say that being deaf isn’t bad. There are even some blind people who say that being blind isn’t bad—though they’re considerably rarer.

Is music valuable? Is art valuable? Is the world better off because Mozart’s symphonies and the Mona Lisa exist? Yes. It follows that being unable to experience these things is bad. Therefore blindness and deafness are bad. QED.

No human being is made better off by not being able to do something. More capability is better than less capability. More freedom is better than less freedom. Less pain is better than more pain.

(Actually there are a few exceptions to “less pain is better than more pain”: People with CIPA are incapable of feeling pain even when injured, which is very dangerous.)

From this, it follows immediately that disabilities are bad and we should be trying to fix them.

And frankly this seems so utterly obvious to me that it’s hard for me to understand why anyone could possibly disagree. Maybe people who are blind or deaf simply don’t know what they’re missing? Even that isn’t a complete explanation, because I don’t know what it would be like to experience four dimensions or see ultraviolet—yet I still think that I’d be better off if I could. If there were people who had these experiences telling me how great they are, I’d be certain of it.

Don’t get me wrong: A lot of ableist discrimination does exist, and much of it seems to come from the same psychological attitude: Since being disabled is bad, they think that disabled people must be bad and we shouldn’t do anything to make them better off because they are bad. Stated outright this sounds ludicrous; but most people who think this way don’t consciously reflect on it. They just have a general sense of badness related to disability which then rubs off on their attitudes toward disabled people as well.

Yet it makes hardly any more sense to go the other way: Disabled people are human beings of value, they are good; therefore their disabilities are good? Therefore this thing that harms and limits them is good?

It’s certainly true that most disabilities would be more manageable with better accommodations, and many of those accommodations would be astonishingly easy and cheap to implement. It’s terrible that we often fail to do this. Yet the fact remains: The best-case scenario would be not needing accommodations because we can simply cure the disability.

It never ceases to baffle me that disability activists will say things like this:

“A wheelchair user isn’t disabled because of the impairment that interferes with her ability to walk, but because society refuses to make spaces wheelchair-accessible.”

No, the problem is pretty clearly the fact that she can’t walk. There are various ways that we could make society more accessible to people in wheelchairs—and we should do those things—but there are inherently certain things you simply cannot do if you can’t walk, and that has nothing to do with anything society does. You would be better off if society were more accommodating, but you’d be better off still if you could simply walk again.

Perhaps my perspective on this is skewed, because my major disability—chronic migraine—involves agonizing, debilitating chronic pain. Perhaps people whose disabilities don’t cause them continual agony can convince themselves that there’s nothing wrong with them. But it seems pretty obvious to me that I would be better off without migraines.

Indeed, it’s utterly alien to my experience to hear people say things like this: “We’re not suffering. We’re just living our lives in a different way.” I’m definitely suffering, thank you very much. Maybe not everyone with disabilities is suffering—but a lot of us definitely are. Every single day I have to maintain specific habits and avoid triggers, and I still get severe headaches twice a week. I had a particularly nasty one just this morning.

There are some more ambiguous cases, to be sure: Neurodivergences like autism and ADHD that exist on a spectrum, where the most extreme forms are utterly debilitating but the mildest forms are simply ordinary variation. It can be difficult to draw the line at when we should be willing to treat and when we shouldn’t; but this isn’t fundamentally different from the sort of question psychiatrists deal with all the time, regarding the difference between normal sadness and nervousness versus pathological depression and anxiety disorders.

Of course there is natural variation in almost all human traits, and one can have less of something good without it being pathological. Some things we call disabilities could just be considered below-average capabilities within ordinary variation. Yet even then, if we could make everyone healthier, stronger, faster, tougher, and smarter than they currently are, I have trouble seeing why we wouldn’t want to do that. I don’t even see any particular reason to think that the current human average—or even the current human maximum—is in any way optimal. Better is better. If we have the option to become transhuman gods, why wouldn’t we?

Another way to see this is to think about how utterly insane it would be to actively try to create disabilities. If there’s nothing wrong with being deaf, why not intentionally deafen yourself? If being bound to a wheelchair is not a bad thing, why not go get your legs paralyzed? If being blind isn’t so bad, why not stare into a welding torch? In these cases you’d even have consented—which is absolutely not the case for an innate disability. I never consented to these migraines and never would have.

I respect individual autonomy, so I would never force someone to get treatment for their disability. I even recognize that society can pressure people to do things they wouldn’t want to, and so maybe occasionally people really are better off being unable to do something so that nobody can pressure them into it. But it still seems utterly baffling to me that there are people who argue that we’d be better off not even having the option to make our bodies work better.

I think this is actually a major reason why disability activism hasn’t been more effective; the most vocal activists are the ones saying ridiculous things like “the problem isn’t my disability, it’s your lack of accommodations” or “there’s nothing wrong with being unable to hear”. If there is anything you could do were your disability gone that you can’t do even with accommodations—and there basically always is—then those claims simply aren’t true.

Finance is the commodification of trust

Jul 18 JDN 2459414

What is it about finance?

Why is it that whenever we have an economic crisis, it seems to be triggered by the financial industry? Why has the dramatic rise in income and wealth inequality come in tandem with a rise in finance as a proportion of our economic output? Why are so many major banks implicated in crimes ranging from tax evasion to money laundering for terrorists?

In other words, why are the people who run our financial industry such utter scum? What is it about finance that it seems to attract the very worst people on Earth?

One obvious answer is that it is extremely lucrative: Incomes in the financial industry are higher than almost any other industry. Perhaps people who are particularly unscrupulous are drawn to the industries that make the most money, and don’t care about much else. But other people like making money too, so this is far from a full explanation. Indeed, incomes for physicists are comparable to those of Wall Street brokers, yet physicists rarely seem to be implicated in mass corruption scandals.

I think there is a deeper reason: Finance is the commodification of trust.

Many industries sell products, physical artifacts like shirts or televisions. Others sell services like healthcare or auto repair, which involve the physical movement of objects through space. Information-based industries are a bit different—what a software developer or an economist sells isn’t really a physical object moving through space. But then what they are selling is something more like knowledge—information that can be used to do useful things.

Finance is different. When you make a loan or sell a stock, you aren’t selling a thing—and you aren’t really doing a thing either. You aren’t selling information, either. You’re selling trust. You are making money by making promises.

Most people are generally uncomfortable with the idea of selling promises. It isn’t that we’d never do it—but we’re reluctant to do it. We try to avoid it whenever we can. But if you want to be successful in finance, you can’t have that kind of reluctance. To succeed on Wall Street, you need to be constantly selling trust every hour of every day.

Don’t get me wrong: Certain kinds of finance are tremendously useful, and we’d be much worse off without them. I would never want to get rid of government bonds, auto loans or home mortgages. I’m actually pretty reluctant to even get rid of student loans, despite the large personal benefits I would get if all student loans were suddenly forgiven. (I would be okay with a system like Elizabeth Warren’s proposal, where people with college degrees pay a surtax that supports free tuition. The problem with most proposals for free college is that they make people who never went to college pay for those who did, and that seems unfair and regressive to me.)

But the Medieval suspicion against “usury”—the notion that there is something immoral about making money just from having money and making promises—isn’t entirely unfounded. There really is something deeply problematic about a system in which the best way to get rich is to sell commodified packages of trust, and the best way to make money is to already have it.

Moreover, the more complex finance gets, the more divorced it becomes from genuinely necessary transactions, and the more commodified it becomes. A mortgage deal that you make with a particular banker in your own community isn’t particularly commodified; a mortgage that is sliced and redistributed into mortgage-backed securities that are sold anonymously around the world is about as commodified as anything can be. It’s rather like the difference between buying a bag of apples from your town farmers’ market versus ordering a barrel of apple juice concentrate. (And of course the most commodified version of all is the financial one: buying apple juice concentrate futures.)

Commodified trust is trust that has lost its connection to real human needs. Those bankers who foreclosed on thousands of mortgages (many of them illegally) weren’t thinking about the people they were making homeless—why would they, when for them those people have always been nothing more than numbers on a spreadsheet? Your local banker might be willing to work with you to help you keep your home, because they see you as a person. (They might not for various reasons, but at least they might.) But there’s no reason for HSBC to do so, especially when they know that they are so rich and powerful they can get away with just about anything (have I mentioned money laundering for terrorists?).

I don’t think we can get rid of finance. We will always need some mechanism to let people who need money but don’t have it borrow that money from people who have it but don’t need it, and it makes sense to have interest charges to compensate lenders for the time and risk involved.

Yet there is much of finance we can clearly dispense with. Credit default swaps could simply be banned, and we’d gain much and lose little. Credit default swaps are basically unregulated insurance, and there’s no reason to allow that. If banks need insurance, they can buy the regulated kind like everyone else. Those regulations are there for a reason. We could ban collateralized debt obligations and similar tranche-based securities, again with far more benefit than harm. We probably still need stocks and commodity futures, and perhaps also stock options—but we could regulate their sale considerably more, particularly with regard to short-selling. Banking should be boring.

Some amount of commodification may be inevitable, but clearly much of what we currently have could be eliminated. In particular, the selling of loans should simply be banned. Maybe even your local banker won’t ever really get to know you or care about you—but there’s no reason we have to allow them to sell your loan to some bank in another country that you’ve never even heard of. When you make a deal with a bank, the deal should be between you and that bank—not potentially any bank in the world that decides to buy the contract at any point in the future. Maybe we’ll always be numbers on spreadsheets—but at least we should be able to choose whose spreadsheets.

If banks want more liquidity, they can borrow from other banks—taking on the risk themselves. A lending relationship is built on trust. You are free to trust whomever you choose; but forcing me to trust someone I’ve never met is something you have no right to do.

In fact, we might actually be able to get rid of banks—credit unions have a far cleaner record than banks, and provide nearly all of the financial services that are genuinely necessary. Indeed, if you’re considering getting an auto loan or a home mortgage, I highly recommend you try a credit union first.

For now, we can’t simply get rid of banks—we’re too dependent on them. But we could at least acknowledge that banks are too powerful, they get away with far too much, and their whole industry is founded upon practices that need to be kept on a very tight leash.

Social science is broken. Can we fix it?

May 16 JDN 2459349

Social science is broken. I am of course not the first to say so. The Atlantic recently published an article outlining the sorry state of scientific publishing, and several years ago Slate Star Codex published a lengthy post (with somewhat harsher language than I generally use on this blog) showing how parapsychology, despite being obviously false, can still meet the standards that most social science is expected to meet. I myself discussed the replication crisis in social science on this very blog a few years back.

I was pessimistic then about the incentives of scientific publishing being fixed any time soon, and I am even more pessimistic now.

Back then I noted that journals are often run by for-profit corporations that care more about getting attention than getting the facts right, university administrations are incompetent and top-heavy, and publish-or-perish creates cutthroat competition without providing incentives for genuinely rigorous research. But these are widely known facts, even if so few in the scientific community seem willing to face up to them.

Now I am increasingly concerned that the reason we aren’t fixing this system is that the people with the most power to fix it don’t want to. (Indeed, as I have learned more about political economy I have come to believe this more and more about all the broken institutions in the world. American democracy has its deep flaws because politicians like it that way. China’s government is corrupt because that corruption is profitable for many of China’s leaders. Et cetera.)

I know economics best, so that is where I will focus; but most of what I’m saying here would also apply to other social sciences such as sociology and psychology as well. (Indeed it was psychology that published Daryl Bem.)

Rogoff and Reinhart’s 2010 article “Growth in a Time of Debt”, which was a weak correlation-based argument to begin with, was later revealed (by an intrepid grad student, Thomas Herndon) to be based upon deep, fundamental errors. Yet the article remains published, without any notice of retraction or correction, in the American Economic Review, probably the most prestigious journal in economics (and undeniably in the vaunted “Top Five”). And the paper itself was widely used by governments around the world to justify massive austerity policies—which backfired with catastrophic consequences.

Why wouldn’t the AER remove the article from their website? Or issue a retraction? Or at least add a note on the page explaining the errors? If their primary concern were scientific truth, they would have done something like this. Their failure to do so is a silence that speaks volumes, a hound that didn’t bark in the night.

It’s rational, if incredibly selfish, for Reinhart and Rogoff themselves not to want a retraction. It was one of their most widely-cited papers. But why wouldn’t the AER’s editors want to retract a paper that had been so embarrassingly debunked?

And so I came to realize: These are all people who have succeeded in the current system. Their work is valued, respected, and supported by the system of scientific publishing as it stands. If we were to radically change that system, as we would necessarily have to do in order to re-align incentives toward scientific truth, they would stand to lose, because they would suddenly be competing against other people who are not as good at hitting the magical p < 0.05, but who are in fact at least as good, perhaps even better, actual scientists than they are.

I know how they would respond to this criticism: I’m someone who hasn’t succeeded in the current system, so I’m biased against it. This is true, to some extent. Indeed, I take it quite seriously, because while tenured professors stand to lose prestige, they can’t really lose their jobs even if there is a sudden flood of far superior research. So in directly economic terms, we would expect the bias against the current system among grad students, adjuncts, and assistant professors to be larger than the bias in favor of the current system among tenured professors and prestigious researchers.

Yet there are other motives aside from money: Norms and social status are among the most powerful motivations human beings have, and these biases are far stronger in favor of the current system, even among grad students and junior faculty. Grad school is many things, some good, some bad; but one of them is a ritual gauntlet that indoctrinates you into the belief that working in academia is the One True Path, without which your life is a failure. If your claim is that grad students are upset at the current system because we overestimate our own qualifications and are feeling sour grapes, you need to explain the prevalence of Impostor Syndrome among us. By and large, grad students don’t overestimate our abilities; we underestimate them. If we think we’re as good at this as you are, that probably means we’re better. Indeed I have little doubt that Thomas Herndon is a better economist than Kenneth Rogoff will ever be.

I have additional evidence that insider bias is important here: When Paul Romer—Nobel laureate—left academia he published an utterly scathing criticism of the state of academic macroeconomics. That is, once he had escaped the incentives toward insider bias, he turned against the entire field.

Romer pulls absolutely no punches: He literally compares the standard methods of DSGE models to “phlogiston” and “gremlins”. The paper is worth reading, because it is obviously, entirely correct, and every single punch lands on target. It’s also a pretty fun read, at least if you have the background knowledge to appreciate the dry in-jokes. (Much like “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity.” I still laugh out loud every time I read the phrase “hegemonic Zermelo-Fraenkel axioms”, though I realize most people would be utterly nonplussed. For the uninitiated, these are the Zermelo-Fraenkel axioms. Can’t you just see the colonialist imperialism in sentences like “\forall x \forall y (\forall z, z \in x \iff z \in y) \implies x = y”?)

In other words, the Upton Sinclair Principle seems to be applying here: “It is difficult to get a man to understand something when his salary depends upon not understanding it.” The people with the most power to change the system of scientific publishing are journal editors and prestigious researchers, and they are the people for whom the current system is running quite swimmingly.

It’s not that good science can’t succeed in the current system; it often does. In fact, I’m willing to grant that it almost always does, eventually. When the evidence has mounted for long enough and the most adamant of the ancien régime finally retire or die, then, at last, the paradigm will shift. But this process takes literally decades longer than it should. In principle, a wrong theory can be invalidated by a single rigorous experiment. In practice, it generally takes about 30 years of experiments, most of which don’t get published, before the powers that be finally give in.

This delay has serious consequences. It means that many of the researchers working on the forefront of a new paradigm—precisely the people that the scientific community ought to be supporting most—will suffer from being unable to publish their work, get grant funding, or even get hired in the first place. It means that not only will good science take too long to win, but that much good science will never get done at all, because the people who wanted to do it couldn’t find the support they needed to do so. This means that the delay is in fact much longer than it appears: Because it took 30 years for one good idea to take hold, all the other good ideas that would have sprung from it in that time will be lost, at least until someone in the future comes up with them.

I don’t think I’ll ever forget it: At the AEA conference a few years back, I went to a luncheon celebrating Richard Thaler, one of the founders of behavioral economics, whom I regard as one of the top 5 greatest economists of the 20th century (I’m thinking something like, “Keynes > Nash > Thaler > Ramsey > Schelling”). Yes, now he is being rightfully recognized for his seminal work: he won a Nobel, he has an endowed chair at Chicago, and he got an AEA luncheon in his honor, among many other accolades. But it was not always so. Someone speaking at the luncheon offhandedly remarked something like, “Did we think Richard would win a Nobel? Honestly, most of us weren’t sure he’d get tenure.” Most of the room laughed; I had to resist the urge to scream. If Richard Thaler wasn’t certain to get tenure, then the entire system is broken. This would be like finding out that Erwin Schrödinger or Niels Bohr wasn’t sure he would get tenure in physics.

A. Gary Shilling, a renowned Wall Street economist (read: One Who Has Turned to the Dark Side), once remarked (the quote is often falsely attributed to Keynes): “Markets can remain irrational a lot longer than you and I can remain solvent.” In the same spirit, I would say this: the scientific community can remain wrong a lot longer than you and I can extend our graduate fellowships and tenure clocks.