Is grade inflation a real problem?

Mar 4 JDN 2458182

You can’t spend much time teaching at the university level and not hear someone complain about “grade inflation”. Almost every professor seems to believe in it, and yet they must all be participating in it, if it’s really such a widespread problem.

This could be explained as a collective action problem, a Tragedy of the Commons: If the incentives are always to have the students with the highest grades—perhaps because of administrative pressure, or in order to get better reviews from students—then even if all professors would prefer a harsher grading scheme, no individual professor can afford to deviate from the prevailing norms.

But in fact I think there is a much simpler explanation: Grade inflation doesn’t exist.

In economic growth theory, economists make a sharp distinction between inflation—increase in prices without change in underlying fundamentals—and growth—increase in the real value of output. I contend that there is no such thing as grade inflation—what we are in fact observing is grade growth.

Am I saying that students are actually smarter now than they were 30 years ago?

Yes. That’s exactly what I’m saying.

But don’t take it from me. Take it from the decades of research on the Flynn Effect: IQ scores have been rising worldwide at a rate of about 0.3 IQ points per year for as long as we’ve been keeping good records. Students today are about 10 IQ points smarter than students 30 years ago—a 2018 IQ score of 95 is equivalent to a 1988 score of 105, which is equivalent to a 1958 score of 115. There is reason to think this trend won’t continue indefinitely, since the effect is mainly concentrated at the bottom end of the distribution; but it has continued for quite some time already.
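
If you want to check that arithmetic, here is a quick sketch. The 0.3-point-per-year figure is the one cited above, applied uniformly (the text rounds 9 points per 30 years up to 10):

```python
# Flynn effect conversion: IQ scores rise about 0.3 points per year (figure cited above).
FLYNN_RATE = 0.3  # IQ points per year

def equivalent_score(score, scored_year, reference_year):
    """What a score on scored_year norms corresponds to on reference_year norms."""
    return score + FLYNN_RATE * (scored_year - reference_year)

print(equivalent_score(95, 2018, 1988))  # 104.0 -- roughly the "105" above
print(equivalent_score(95, 2018, 1958))  # 113.0 -- roughly the "115" above
```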

This by itself would probably be enough to explain the observed increase in grades, but there’s more: College students are also a self-selected sample, admitted precisely because they were believed to be the smartest individuals in the application pool. Rising grades at top institutions are easily explained by rising selectivity at top schools: Harvard now accepts 5.6% of applicants. In 1942, Harvard accepted 92% of applicants. The odds of getting in have fallen from roughly 11:1 in favor to roughly 17:1 against. Today, you need a 4.0 GPA, a 36 on every section of the ACT, glowing letters of recommendation, and hundreds of hours of extracurricular activities (or a family member who donated millions of dollars, of course) to get into Harvard. In the 1940s, you needed a high school diploma and a B average.
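
For the curious, here is the back-of-the-envelope conversion from acceptance rates to odds, using the acceptance rates quoted above:

```python
def odds_from_rate(p):
    """Convert an acceptance probability into odds in favor of or against admission."""
    if p >= 0.5:
        return f"{p / (1 - p):.1f}:1 in favor"
    return f"{(1 - p) / p:.1f}:1 against"

print(odds_from_rate(0.92))   # 11.5:1 in favor  (Harvard, 1942)
print(odds_from_rate(0.056))  # 16.9:1 against   (Harvard today)
```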

In fact, when educational researchers have tried to quantitatively study the phenomenon of “grade inflation”, they usually come back with the result that they simply can’t find it. The US Department of Education conducted a study in 1995 showing that average university grades had declined since 1965. Given that the Flynn effect raised IQ by almost 10 points during that time, maybe we should be panicking about grade deflation.

It really wouldn’t be hard to make that case: “Back in my day, you could get an A just by knowing basic algebra! Now they want these kids to take partial derivatives?” “We used to just memorize facts to ace the exam; but now teachers keep asking for reasoning and critical thinking?”

More recently, a study in 2013 found that grades rose at the high school level, but fell at the college level, and showed no evidence of losing any informativeness as a signaling mechanism. The only recent study I could find showing genuinely compelling evidence for grade inflation was a 2017 study of UK students estimating that grades are growing about twice as fast as the Flynn effect alone would predict. Most studies don’t even consider the possibility that students are smarter than they used to be—they just take it for granted that any increase in average grades constitutes grade inflation. Many of them don’t even control for the increase in selectivity—here’s one using the fact that Harvard’s average rose from 2.7 to 3.4 from 1960 to 2000 as evidence of “grade inflation” when Harvard’s acceptance rate fell from almost 30% to only 10% during that period.

Indeed, the real mystery is why so many professors believe in grade inflation, when the evidence for it is so astonishingly weak.

I think it’s the availability heuristic. Who are professors? They are the cream of the crop. They aced their way through high school, college, and graduate school, then got hired and earned tenure—they were one of a handful of individuals who won a fierce competition with hundreds of competitors at each stage. There are over 320 million people in the US, and only 1.3 million college faculty. This means that college professors represent roughly the top 0.4% of the population.

Combine that with the fact that human beings assort positively (we like to spend time with people who are similar to us) and use the availability heuristic (we judge how likely something is based on how many times we have seen it).

Thus, when a professor thinks back to her own experience of college, she is remembering her fellow top-scoring students at elite educational institutions. She is recalling the extreme intellectual demands she had to meet to get where she is today, and erroneously assuming that these are representative of most of the population of her generation. She probably went to school at one of a handful of elite institutions, even if she now teaches at a mid-level community college: three quarters of college faculty come from the top one quarter of graduate schools.

And now she compares that experience to the students she has to teach, most of whom would not be able to meet such demands—but of course most people in her generation couldn’t either. She frets for the future of humanity only because not everyone is a genius like her.

Throw in the Curse of Knowledge: The professor doesn’t remember how hard it was to learn what she has learned so far, and so the fact that it seems easy now makes her think it was easy all along. “How can they not know how to take partial derivatives!?” Well, let’s see… were you born knowing how to take partial derivatives?

Giving a student an A for work far inferior to what you’d have done in their place isn’t unfair. Indeed, it would clearly be unfair to do anything less. You have years if not decades of education beyond theirs, and you come from a self-selected elite sample of highly intelligent individuals. Expecting everyone to perform as well as you would is simply setting up most of the population for failure.

There are potential incentives for grade inflation that do concern me: In particular, a lot of international student visas and scholarship programs insist upon maintaining a B or even A- average to continue. Professors are understandably loath to condemn a student to having to drop out or return to their home country just because they scored 81% instead of 84% on the final exam. If we really intend to make C the average score, then students shouldn’t lose funding or visas just for scoring a B-. Indeed, I have trouble defending any threshold above outright failing—which is to say, a minimum score of D-. If you pass your classes, that should be good enough to keep your funding.

Yet apparently even this isn’t creating too much upward bias, as students who are 10 IQ points smarter are still getting about the same scores as their forebears. We should be celebrating that our population is getting smarter, but instead we’re panicking over “easy grading”.

But kids these days, am I right?

Influenza vaccination, herd immunity, and the Tragedy of the Commons

Dec 24, JDN 2458112

Usually around this time of year I do a sort of “Christmas special” blog post, something about holidays or gifts. But this year I have a rather different seasonal idea in mind. It’s not just the holiday season; it’s also flu season.

Each year, influenza kills over 56,000 people in the US, and between 300,000 and 600,000 people worldwide, mostly in the winter months. And yet, in any given year, only about 40% of adults and 60% of children get the flu vaccine.

The reason for this should be obvious to any student of economics: It’s a Tragedy of the Commons. If enough people got vaccinated that we attained reliable herd immunity (which would take about 90%), then almost nobody would get influenza, and the death rate would plummet. But for any given individual, the vaccine is actually not all that effective. Your risk of getting the flu only drops by about half if you receive the vaccine. The effectiveness is particularly low among the elderly, who are also at the highest risk for serious complications due to influenza.

Thus, for any given individual, the incentive to get vaccinated isn’t all that strong, even though society as a whole would be much better off if we all got vaccinated. Your probability of suffering serious complications from influenza is quite low, and wouldn’t be reduced all that much if you got the vaccine; so even though flu vaccines aren’t that costly in terms of time, money, discomfort, and inconvenience, the cost is just high enough that a lot of us don’t bother to get the shot each year.
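
To see just how weak that private incentive is, here is a toy expected-value calculation. Aside from the roughly 50% effectiveness figure above, every number in it is an assumption chosen purely for illustration:

```python
# Toy calculation of the private incentive to get a flu shot.
# Only the ~50% effectiveness figure comes from above; the rest are illustrative assumptions.
p_flu = 0.10              # assumed chance of catching the flu this season if unvaccinated
effectiveness = 0.50      # the vaccine roughly halves your risk (as noted above)
cost_of_flu = 300.0       # assumed personal cost of a bout of flu (lost time, misery), in $
cost_of_shot = 25.0       # assumed cost of the shot in time, money, and inconvenience, in $

private_benefit = p_flu * effectiveness * cost_of_flu
print(private_benefit)                 # 15.0: the expected private gain
print(private_benefit > cost_of_shot)  # False: the shot doesn't pay for *you*...
# ...and none of the benefit to everyone around you shows up in this private calculation.
```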

On an individual level, my advice is simple: Go get a flu shot. Don’t do it just for yourself; do it for everyone around you. You are protecting the most vulnerable people in our society.

But if we really want everyone to get vaccinated, we need a policy response. I can think of two policies that might work, which can be broadly called a “stick” and a “carrot”.

The “stick” approach would be to make vaccination mandatory, as it already is for many childhood vaccines. Some sort of penalty would have to be introduced, but that’s not the real challenge. The real challenge would be how to actually enforce that penalty: How do we tell who is vaccinated and who isn’t?

When schools make vaccination mandatory, they require vaccination records for admission. It would be simple enough to add annual flu vaccines to the list of required shots for high schools and colleges (though no doubt the anti-vax crowd would make a ruckus). But can you make vaccination mandatory for work? That seems like a much larger violation of civil liberties. Alternatively, we could require that people submit medical records with their tax returns to avoid a tax penalty—but the privacy violations there are quite substantial as well.

Hence, I would favor the “carrot” approach: Use government subsidies to provide a positive incentive for vaccination. Don’t simply make vaccination free; actually pay people to get vaccinated. Make the subsidy larger than the actual cost of the shots, and require that the doctors and pharmacies administering them remit the extra to the customers. Something like $20 per shot ought to do it; since the cost of the shots is also around $20, vaccinating the full 300 million people of the United States every year would cost about $12 billion; this is less than the estimated economic cost of influenza, so it would essentially pay for itself.
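
The arithmetic behind that $12 billion figure, spelled out:

```python
subsidy_per_shot = 20         # paid out to the patient
cost_per_shot = 20            # approximate cost of the vaccine itself
people = 300_000_000          # roughly the US population, as used above

total = (subsidy_per_shot + cost_per_shot) * people
print(f"${total / 1e9:.0f} billion per year")  # $12 billion per year
```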

$20 isn’t a lot of money for most people; but then, like I said, the time and inconvenience of a flu shot aren’t that large either. There have been moderately successful (but expensive) programs incentivizing doctors to perform vaccinations, but that’s stupid; frankly I’m amazed it worked at all. It’s patients who need to be incentivized. Doctors will give you a flu shot if you ask them. The problem is that most people don’t ask.

Do this, and we could potentially save tens of thousands of lives every year, for essentially zero net cost. And that sounds to me like a Christmas wish worth making.

When are we going to get serious about climate change?

Oct 8, JDN 2458035

Those two storms weren’t simply natural phenomena. We had a hand in creating them.

The EPA doesn’t want to talk about the connection, and we don’t have enough statistical power to really be certain, but there is by now an overwhelming scientific consensus that global climate change will increase hurricane intensity. The only real question left is whether it is already doing so.

The good news is that global carbon emissions are no longer rising. They have been essentially static for the last few years. The bad news is that this is almost certainly too little, too late.

The US is not on track to hit our 2025 emission target; we will probably exceed it by at least 20%.

But the real problem is that the targets themselves are much too high. Most countries have pledged to drop emissions only about 8-10% below their 1990 levels.

Even with the progress we have made, we are on track to exceed the global carbon budget needed to keep warming below 2 C by the year 2040. We have been reducing emission intensity by about 0.8% per year—we need to be reducing it by at least 3% per year and preferably faster. Highly-developed nations should be switching to nuclear energy as quickly as possible; an equitable global emission target requires us to reduce our emissions by 80% by 2050.
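
As a rough sense of scale, here is what those annual reduction rates compound to by 2050. This is only a sketch: it assumes a constant percentage reduction each year and glosses over the difference between emission intensity and absolute emissions.

```python
def cumulative_cut(annual_rate, years):
    """Fraction of emissions cut after compounding a constant annual reduction."""
    return 1 - (1 - annual_rate) ** years

years = 2050 - 2017
print(f"{cumulative_cut(0.008, years):.0%}")  # 23%: the current ~0.8%/year pace
print(f"{cumulative_cut(0.03, years):.0%}")   # 63%: the 3%/year pace
print(f"{cumulative_cut(0.048, years):.0%}")  # 80%: roughly the pace an 80% cut by 2050 requires
```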

At the current rate of improvement, we will overshoot the 2 C warming target and very likely the 3 C target as well.

Why aren’t we doing better? There is of course the Tragedy of the Commons to consider: Each individual country acting in its own self-interest will continue to pollute more, as this is the cheapest and easiest way to maintain industrial development. But then if all countries do so, the result is a disaster for us all.

But this explanation is too simple. We have managed to achieve some international cooperation on this issue. The Kyoto Protocol has worked; emissions among Kyoto member nations have been reduced by more than 20% below 1990 levels, far more than originally promised. The EU in particular has taken a leadership role in reducing emissions, and has a serious shot at hitting their target of 40% reduction by 2030.

That is a truly astonishing scale of cooperation; the EU has a population of over 500 million people and spans 28 nations. It would seem like doing that should get us halfway to cooperating across all nations and all the world’s people.

But there is a vital difference between the EU and the world as a whole: The tribal paradigm. Europeans certainly have their differences: The UK and France still don’t really get along, everyone’s bitter with Germany about that whole Hitler business, and as the acronym PIIGS emphasizes, the peripheral countries have never quite felt as European as the core Schengen members. But despite all this, there has been a basic sense of trans-national (meta-national?) unity among Europeans for a long time.

For one thing, today Europeans see each other as the same race. That wasn’t always the case. In Medieval times, ethnic categories were as fine as “Cornish” and “Liverpudlian”. (To be fair, there do still exist a handful of Cornish nationalists.) Starting around the 18th century, Europeans began to unite under the heading of “White people”, a classification that took on particular significance during the trans-Atlantic slave trade. But even in the 19th century, “Irish” and “Sicilian” were seen as racial categories. It wasn’t until the 20th century that Europeans really began to think of themselves as one “kind of people”, and not coincidentally it was at the end of the 20th century that the European Union finally took hold.

There is another region that has had a similar sense of unification: Latin America. Again, there are conflicts: There are a lot of nasty stereotypes about Puerto Ricans among Cubans and vice-versa. But Latinos, by and large, think of each other as the same “kind of people”, distinct from both Europeans and the indigenous population of the Americas.

I don’t think it is coincidental that the lowest carbon emission intensity (carbon emissions / GDP PPP) in the world is in Latin America, followed closely by Europe.

And if you had to name right now the most ethnically divided region in the world, what would you say? The Middle East, of course. And sure enough, they have the worst carbon emission intensity. (Of course, oil is an obvious confounding variable here, likely contributing to both.)

Indeed, the countries with the lowest ethnic fractionalization ratings tend to be in Europe and Latin America, and the highest tend to be in the Middle East and Africa.

Even within the United States, political polarization seems to come with higher carbon emissions. When we think of Democrats and Republicans as different “kinds of people”, we become less willing to cooperate on finding climate policy solutions.

This is not a complete explanation, of course. China has a low fractionalization rating but a high carbon intensity, and extremely high overall carbon emissions due to their enormous population. Africa’s carbon intensity isn’t as high as you’d think just from their terrible fractionalization, especially if you exclude Nigeria which is a major oil producer.

But I think there is nonetheless a vital truth here: One of the central barriers to serious long-term solutions to climate change is the entrenchment of racial and national identity. Solving the Tragedy of the Commons requires cooperation, we will only cooperate with those we trust, and we will only trust those we consider to be the same “kind of people”.

You can even hear it in the rhetoric: If “we” (Americans) give up our carbon emissions, then “they” (China) will take advantage of us. No one seems to worry about Alabama exploiting California—certainly no Republican would—despite the fact that in real economic terms they basically do. But people in Alabama are Americans; in other words, they count as actual people. People in China don’t count. If anything, people in California are supposed to be considered less American than people in Alabama, despite the fact that vastly more Americans live in California than Alabama. This mirrors the same pattern where we urban residents are somehow “less authentic” even though we outnumber the rural by four to one.

I don’t know how to mend this tribal division; I very much wish I did. But I do know that simply ignoring it isn’t going to work. We can talk all we want about carbon taxes and cap-and-trade, but as long as most of the world’s people are divided into racial, ethnic, and national identities that they consider to be in zero-sum conflict with one another, we are never going to achieve the level of cooperation necessary for a real permanent solution to climate change.

The temperatures and the oceans rise. United we must stand, or divided we shall fall.

Can we have property rights without violence?

Apr 23, JDN 2457867

Most likely, you have by now heard of the incident on a United Airlines flight, where a man was beaten and dragged out of a plane because the airline decided that they needed more seats than they had. In case you somehow missed all the news articles and memes, the Wikipedia page on the incident is actually fairly good.

There is a lot of gossip about the passenger’s history, which the flight crew couldn’t possibly have known and is therefore irrelevant. By far the best take I’ve seen on the ethical and legal implications of the incident can be found on Naked Capitalism, so if you do want to know more about it I highly recommend starting there. Probably the worst take I’ve read is on The Pilot Wife Life, but I suppose if you want a counterpoint there you go.

I really have little to add on this particular incident; instead my goal here is to contextualize it in a broader discussion of property rights in general.

Despite the fact that what United’s employees and contractors did was obviously unethical and very likely illegal, there are still a large number of people defending their actions. Aiming for a Woodman if not an Ironman, the most coherent defense I’ve heard offered goes something like this:

Yes, what United did in this particular case was excessive. But it’s a mistake to try to make this illegal, because any regulation that did so would necessarily impose upon fundamental property rights. United owns the airplane; they can set the rules for who is allowed to be on that airplane. And once they set those rules, they need to be able to enforce them. Sometimes, however distasteful it may be, that enforcement will require violence. But property rights are too important to give up. Would you want to live in a society where anyone could just barge into your home and you were not allowed to use force to remove them?

Understood in this context, United contractors calling airport security to get a man dragged off of a plane isn’t an isolated act of violence for no reason; it is part of a broader conflict between the protection of property rights and the reduction of violence. “Stand your ground” laws, IMF “structural adjustment” policies, even Trump’s wall against immigrants can be understood as part of this broader conflict.

One very far-left approach to resolving such a conflict—as taken by the Paste editorial “You’re not mad at United Airlines; you’re mad at America”—is to fall entirely on the side of nonviolence, and say essentially that any system which allows the use of violence to protect property rights is fundamentally corrupt and illegitimate.

I can see why such a view is tempting. It’s simple, for one thing, and that’s always appealing. But if you stop and think carefully about the consequences of this hardline stance, it becomes clear that such a system would be unsustainable. If we could truly never use violence ever to protect any property rights, that would mean that property law in general could no longer be enforced. People could in fact literally break into your home and steal your furniture, and you’d have no recourse, because the only way to stop them would involve either using violence yourself or calling the police, who would end up using violence. Property itself would lose all its meaning—and for those on the far-left who think that sounds like a good thing, I want you to imagine what the world would look like if the only things you could ever use were the ones you could physically hold onto, where you’d leave home never knowing whether your clothes or your food would still be there when you came back. A world without property sounds good if you are imagining that the insane riches of corrupt billionaires would collapse; but if you stop and think about coming home to no food and no furniture, perhaps it doesn’t sound so great. And while it does sound nice to have a world where no one is homeless because they can always find a place to sleep, that may seem less appealing if your home is the one that a dozen homeless people decide to squat in.

The Tragedy of the Commons would completely destroy any such economic system; the only way to sustain it would be either to produce such an enormous abundance of wealth that no amount of greed could ever overtake it, or, more likely, somehow re-engineer human brains so that greed no longer exists. I’m not aware of any fundamental limits on greed; as long as social status increases monotonically with wealth, there will be people who try to amass as much wealth as they possibly can, far beyond what any human being could ever actually consume, much less need. How do I know this? Because they already exist; we call them “billionaires”. A billionaire, essentially by definition, is a hoarder of wealth who owns more than any human being could consume. If someone happens upon a billion dollars and immediately donates most of it to charity (as J.K. Rowling did), they can escape such a categorization; and if they use the wealth to achieve grand visionary ambitions—and I mean real visions, not like Steve Jobs but like Elon Musk—perhaps they can as well. Saving the world from climate change and colonizing Mars are the sort of projects that really do take many billions of dollars to achieve. (Then again, shouldn’t our government be doing these things?) And if they just hold onto the wealth or reinvest it to make even more, a billionaire is nothing less than a hoarder, seeking gratification and status via ownership itself.

Indeed, I think the maximum amount of wealth one could ever really need is probably around $10 million in today’s dollars; with that amount, even a very low-risk investment portfolio could supply enough income to live wherever you want, wear whatever you want, drive whatever you want, eat whatever you want, travel whenever you want. At even a 5% return, that’s $500,000 per year to spend without ever working or depleting your savings. At 10%, you’d get a million dollars a year for sitting there and doing nothing. And yet there are people with one thousand times as much wealth as this.
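
Spelled out, the passive-income arithmetic looks like this:

```python
wealth = 10_000_000  # dollars

for rate in (0.05, 0.10):
    print(f"{rate:.0%} return: ${wealth * rate:,.0f} per year")
# 5% return: $500,000 per year
# 10% return: $1,000,000 per year
```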

But not all property is of this form. I was about to say “the vast majority” is not, but actually that’s not true; a large proportion of wealth is in fact in the form of capital hoarded by the rich. Indeed, about 50% of the world’s wealth is owned by the richest 1%. (To be fair, the world’s top 1% is a broader category than one might think; the top 1% in the world is about the top 5% in the US; based on census data, that puts the cutoff at about $250,000 in net wealth.) But the majority of people have wealth in some form, and would stand to suffer if property rights were not enforced at all.

So we might be tempted to the other extreme, as the far-right seems to be, and say that any force is justified in the protection of fundamental property rights—that if vagrants step onto my land, I am well within my rights to get out my shotgun. (You know, hypothetically; not that I own a shotgun, or, for that matter, any land.) This seems to appeal especially to those who nostalgize the life on the frontier, “living off the land” (often losing family members to what now seem like trivial bacterial illnesses), “self-sufficient” (with generous government subsidies), in the “unspoiled wilderness” (from which the Army had forcibly removed Native Americans). Westerns have given us this sense that frontier life offers a kind of freedom and adventure that this urbane civilization lacks. And I suppose I am a fan of at least one Western, since one should probably count Firefly.

Yet of course this is madness; no civilization could survive if it really allowed people to just arbitrarily “defend” whatever property claims they decided to make. Indeed, it’s really just the flip side of the coin; as we’ve seen in Somalia (oh, by the way, we’re deploying troops there again), not protecting property and allowing universal violence to defend any perceived property largely amount to the same thing. If anything, the far-left fantasy seems more appealing; at least then we would not be subject to physical violence, and could call upon the authorities to protect us from that. In the far-right fantasy, we could accidentally step on what someone else claims to be his land and end up shot in the head.

So we need to have rules about who can use violence to defend what property and why. And that, of course, is complicated. We can start by having a government that defines property claims and places limits on their enforcement; but that still leaves the question of which sort of property claims and enforcement mechanisms the government should allow.

I think the principle should essentially be minimum force. We do need to protect property rights, yes; but if there is a way of doing so without committing violence, that’s the way we should do it. And if we do need to use violence, we should use as little as possible.

In theory we already do this: We have “rules of engagement” for the military and “codes of conduct” for police. But in practice, these rules are rarely enforced; they only get applied to really extreme violations, and sometimes not even then. The idea seems to be that enforcing strict rules on our soldiers and police officers constitutes disloyalty, even treason. We should “let them do their jobs”. This is the norm that must change. Those rules are their jobs. If they break those rules, they aren’t doing their jobs—they’re doing something else, something that endangers the safety and security of our society. The disloyalty is not in investigating and enforcing rules against police misconduct—the disloyalty is in police misconduct. If you want to be a cop but you’re not willing to follow the rules, you don’t actually want to be a cop—you want to be a bully with a gun and a badge.

And of course, one need not be a government agency in order to use excessive force. Many private corporations have security forces of their own, which frequently abuse and assault people. Most terrifying of all, there are whole corporations of “private military contractors”—let’s call them what they are: mercenaries—like Academi, formerly known as Blackwater. The whole reason these corporations even exist is to evade regulations on military conduct, and that is why they must be eliminated.

In the United case, there was obviously a nonviolent answer; all they had to do was offer to pay people to give up their seats, and bid up the price until enough people left. Someone would have left eventually; there clearly was a market-clearing price. That would have cost $2,000, maybe $5,000 at the most—a lot better than the $255 million lost in United’s stock value as a result of the bad PR.
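
The mechanism I have in mind is just an ascending offer: keep raising the compensation until enough passengers volunteer. Here is a minimal sketch of that idea; the reservation prices are hypothetical, and this is not a description of United's actual procedure.

```python
def find_clearing_offer(reservation_prices, seats_needed, start=200, step=100):
    """Raise the offer until enough passengers are willing to give up their seats."""
    assert seats_needed <= len(reservation_prices)
    offer = start
    while sum(1 for r in reservation_prices if r <= offer) < seats_needed:
        offer += step
    return offer

# Hypothetical: four seats needed; each passenger has a price at which they'd volunteer.
reservations = [350, 600, 800, 1200, 2000, 5000]
print(find_clearing_offer(reservations, seats_needed=4))  # 1200
```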

If a homeless person decides to squat in your house, yes, perhaps you’d be justified in calling the police to remove them. Clearly you’re under no obligation to provide them room and board indefinitely. But there may be better solutions: Is there a homeless shelter in the area? Could you give them a ride there, or at least bus fare?

When immigrants cross our borders, may we turn them away? Now, here’s one where I’m pretty strongly tempted to go all the way and say we have no right whatsoever to stop them. There are no requirements for being born into citizenship, after all—so on what grounds do we add requirements to acquire citizenship? Is there something in the water of the Great Lakes and the Mississippi River that, when you drink it for 18 years (processed by municipal water systems of course; what are we, barbarians?), automatically makes you into a patriotic American? Does one become more law-abiding, or less capable of cruelty or fanaticism, by being brought into the world on one side of an imaginary line in the sand? If there are going to be requirements for citizenship, shouldn’t they be applied to everyone, and not just people who were born in the wrong place?

Yes, when we have no other choice, we must be prepared to use violence to defend property—because otherwise, there’s no such thing as property. But more often than not, we use violence when we didn’t need to, or use much more violence than was actually necessary. The principle that violence can be justified in defense of property does not entail that any violence is always justified in defense of property.

We do not benefit from economic injustice.

JDN 2457461

Recently I think I figured out why so many middle-class White Americans express so much guilt about global injustice: A lot of people seem to think that we actually benefit from it. Thus, they feel caught between a rock and a hard place; conquering injustice would mean undermining their own already precarious standard of living, while leaving it in place is unconscionable.

The compromise, apparently, is to feel really, really guilty about it, constantly tell people to “check their privilege” in this bizarre form of trendy autoflagellation, and then… never really get around to doing anything about the injustice.

(I guess that’s better than the conservative interpretation, which seems to be that since we benefit from this, we should keep doing it, and make sure we elect big, strong leaders who will make that happen.)

So let me tell you in no uncertain words: You do not benefit from this.

If anyone does—and as I’ll get to in a moment, that is not even necessarily true—then it is the billionaires who own the multinational corporations that orchestrate these abuses. Billionaires and billionaires only stand to gain from the exploitation of workers in the US, China, and everywhere else.

How do I know this with such certainty? Allow me to explain.

First of all, it is a common perception that prices of goods would be unattainably high if they were not produced on the backs of sweatshop workers. This perception is mistaken. The primary effect of the exploitation is simply to raise the profits of the corporation; there is a secondary effect of raising the price a moderate amount; and even this would be overwhelmed by the long-run dynamic effect of the increased consumer spending if workers were paid fairly.

Let’s take an iPad, for example. The price of iPads varies around the world in a combination of purchasing power parity and outright price discrimination; but the top model almost never sells for less than $500. The raw material expenditure involved in producing one is about $370—and the labor expenditure? Just $11. Not $110; $11. If it had been $110, the price could still be kept under $500 and turn a profit; it would simply be much smaller. That is, even if demand were really so elastic that Americans would refuse to buy an iPad at anything over $500, Apple could still afford to raise the wages they pay (or rather, their subcontractors pay) workers by an order of magnitude. A worker who currently works 50 hours a week for $10 per day could now make $10 per hour. And the price would not have to change; Apple would simply lose profit, which is why they don’t do this. In the absence of pressure to the contrary, corporations will do whatever they can to maximize profits.

Now, in fact, the price probably would go up, because Apple fans are among the most inelastic technology consumers in the world. But suppose it went up to $600, which would mean a 1:1 absorption of these higher labor expenditures into price. Does that really sound like “Americans could never afford this”? A few people right on the edge might decide they couldn’t buy it at that price, but it wouldn’t be very many—indeed, like any well-managed monopoly, Apple knows to stop raising the price at the point where they start losing more revenue than they gain.

Similarly, half the price of an iPhone is pure profit for Apple, and only 2% goes into labor. Once again, wages could be raised by an order of magnitude and the price would not need to change.
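
Plugging in the iPad numbers quoted above makes the point concrete (a quick sketch using only the figures in this post):

```python
price = 500        # typical minimum selling price of the top model
materials = 370    # raw material expenditure per unit
labor = 11         # labor expenditure per unit

print(price - materials - labor)        # 119: profit per unit now
print(price - materials - labor * 10)   # 20: still profitable with tenfold wages
print(price + (labor * 10 - labor))     # 599: the price if the raise were passed through 1:1
```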

Apple is a particularly obvious example, but it’s quite simple to see why exploitative labor cannot be the source of improved economic efficiency. Paying workers less does not make them do better work. Treating people more harshly does not improve their performance. Quite the opposite: People work much harder when they are treated well. In addition, at the levels of income we’re talking about, small improvements in wages would result in substantial improvements in worker health, further improving performance. Finally, substitution effect dominates income effect at low incomes. At very high incomes, income effect can dominate substitution effect, so higher wages might result in less work—but it is precisely when we’re talking about poor people that it makes the least sense to say they would work less if you paid them more and treated them better.

At most, paying higher wages can redistribute existing wealth, if we assume that the total amount of wealth does not increase. So it’s theoretically possible that paying higher wages to sweatshop workers would result in them getting some of the stuff that we currently have (essentially by a price mechanism where the things we want get more expensive, but our own wages don’t go up). But in fact our wages are most likely too low as well—wages in the US became unlinked from productivity around the time of Reagan—so there’s reason to think that a more just system would improve our standard of living also. Where would all the extra wealth come from? Well, there’s an awful lot of room at the top.

The top 1% in the US own 35% of net wealth, about as much as the bottom 95%. The 400 billionaires of the Forbes list have more wealth than the entire African-American population combined. (We’re double-counting Oprah—but that’s it, she’s the only African-American billionaire in the US.) So even assuming that the total amount of wealth remains constant (which is too conservative, as I’ll get to in a moment), improving global labor standards wouldn’t need to pull any wealth from the middle class; it could get plenty just from the top 0.01%.

In surveys, most Americans are willing to pay more for goods in order to improve labor standards—and the amounts that people are willing to pay, while they may seem small (on the order of 10% to 20% more), are in fact clearly enough that they could substantially increase the wages of sweatshop workers. The biggest problem is that corporations are so good at covering their tracks that it’s difficult to know whether you are really supporting higher labor standards. The multiple layers of international subcontractors make things even more complicated; the people who directly decide the wages are not the people who ultimately profit from them, because subcontractors are competitive while the multinationals that control them are monopsonists.

But for now I’m not going to deal with the thorny question of how we can actually regulate multinational corporations to stop them from using sweatshops. Right now, I just really want to get everyone on the same page and be absolutely clear about cui bono. If there is a benefit at all, it’s not going to you and me.

Why do I keep saying “if”? As so many people will ask me: “Isn’t it obvious that if one person gets less money, someone else must get more?” If you’ve been following my blog at all, you know that the answer is no.

On a single transaction, with everything else held constant, that is true. But we’re not talking about a single transaction. We’re talking about a system of global markets. Indeed, we’re not really talking about money at all; we’re talking about wealth.

By paying their workers so little that those workers can barely survive, corporations are making it impossible for those workers to go out and buy things of their own. Since the costs of higher wages are concentrated in one corporation while the benefits of higher wages are spread out across society, there is a Tragedy of the Commons where each corporation acting in its own self-interest undermines the consumer base that would have benefited all corporations (not to mention people who don’t own corporations). It does depend on some parameters we haven’t measured very precisely, but under a wide range of plausible values, it works out that literally everyone is worse off under this system than they would have been under a system of fair wages.

This is not simply theoretical. We have empirical data about what happened when companies (in the US at least) stopped using an even more extreme form of labor exploitation: slavery.

Because we were on the classical gold standard, GDP growth in the US in the 19th century was extremely erratic, jumping up and down as high as 10 log points (lp) and as low as -5 lp. But if you try to smooth out this roller-coaster business cycle, you can see that our growth rate did not appear to be slowed by the ending of slavery:

[Figure: US GDP growth rate, 1800s]

Looking at the level of real per capita GDP (on a log scale) shows a continuous growth trend as if nothing had changed at all:

[Figure: US real per capita GDP (log scale), 1800s]

In fact, if you average the growth rates (in log points, averaging makes sense) from 1800 to 1860 as antebellum and from 1865 to 1900 as postbellum, you find that the antebellum growth rate averaged 1.04 lp, while the postbellum growth rate averaged 1.77 lp. Over a period of 50 years, that’s the difference between growing by a factor of 1.7 and growing by a factor of 2.4. Of course, there were a lot of other factors involved besides the end of slavery—but at the very least it seems clear that ending slavery did not reduce economic growth, which it would have if slavery were actually an efficient economic system.
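
Since log points compound by exponentiation, those averages translate into 50-year growth factors like so:

```python
import math

for lp_per_year in (1.04, 1.77):
    factor = math.exp(lp_per_year / 100 * 50)
    print(f"{lp_per_year} lp/year over 50 years: growth factor {factor:.1f}")
# 1.04 lp/year over 50 years: growth factor 1.7
# 1.77 lp/year over 50 years: growth factor 2.4
```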

This is a different question from whether slaveowners were irrational in continuing to own slaves. Purely on the basis of individual profit, it was most likely rational to own slaves. But the broader effects on the economic system as a whole were strongly negative. I think that part of why the debate on whether slavery is economically inefficient has never been settled is a confusion between these two questions. One side says “Slavery damaged overall economic growth.” The other says “But owning slaves produced a rate of return for investors as high as manufacturing!” Yeah, those… aren’t answering the same question. They are in fact probably both true. Something can be highly profitable for individuals while still being tremendously damaging to society.

I don’t mean to imply that sweatshops are as bad as slavery; they are not. (Though there is still slavery in the world, and some sweatshops tread a fine line.) What I’m saying is that showing that sweatshops are profitable (no doubt there) or even that they are better than most of the alternatives for their workers (probably true in most cases) does not show that they are economically efficient. Sweatshops are beneficent exploitation—they make workers better off, but in an obviously unjust way. And they only make workers better off compared to the current alternatives; if they were replaced with industries paying fair wages, workers would obviously be much better off still.

And my point is, so would we. While the prices of goods would increase slightly in the short run, in the long run the increased consumer spending by people in Third World countries—which soon would cease to be Third World countries, as happened in Korea and Japan—would result in additional trade with us that would raise our standard of living, not lower it. The only people it is even plausible to think would be harmed are the billionaires who own our multinational corporations; and yet even they might stand to benefit from the improved efficiency of the global economy.

No, you do not benefit from sweatshops. So stop feeling guilty, stop worrying so much about “checking your privilege”—and let’s get out there and do something about it.

The Tragedy of the Commons

JDN 2457387

In a previous post I talked about one of the most fundamental—perhaps the most fundamental—problem in game theory, the Prisoner’s Dilemma, and how neoclassical economic theory totally fails to explain actual human behavior when faced with this problem in both experiments and the real world.

As a brief review, the essence of the game is that both players can either cooperate or defect; if they both cooperate, the outcome is best overall; but it is always in each player’s interest to defect. So a neoclassically “rational” player would always defect—resulting in a bad outcome for everyone. But real human beings typically cooperate, and thus do better. The “paradox” of the Prisoner’s Dilemma is that being “rational” results in making less money at the end.

Obviously, this is not actually a good definition of rational behavior. Being short-sighted and ignoring the impact of your behavior on others doesn’t actually produce good outcomes for anybody, including yourself.

But the Prisoner’s Dilemma only has two players. If we expand to a larger number of players, the expanded game is called a Tragedy of the Commons.

When we do this, something quite surprising happens: As you add more people, their behavior starts converging toward the neoclassical solution, in which everyone defects and we get a bad outcome for everyone.
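
The incentive structure is easy to see in a stylized public-goods game, the standard laboratory version of the Tragedy of the Commons. This is only a sketch with made-up numbers, not a model of any particular experiment:

```python
def payoff(my_contribution, total_contribution, n_players,
           endowment=10, multiplier=1.6):
    """Each unit contributed is multiplied and shared equally among all players."""
    return endowment - my_contribution + multiplier * total_contribution / n_players

n = 10
# If the other nine cooperate, you still do better by defecting...
print(payoff(1, 10, n))  # 10.6 if you cooperate too
print(payoff(0, 9, n))   # 11.44 if you free-ride
# ...but everyone defecting leaves everyone worse off than everyone cooperating.
print(payoff(1, n, n))   # 10.6 each under universal cooperation
print(payoff(0, 0, n))   # 10.0 each under universal defection
```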

Indeed, people in general become less cooperative, less courageous, and more apathetic the more of them you put together. Agent K in Men in Black was quite apt when he said, “A person is smart; people are dumb, panicky, dangerous animals and you know it.” There are ways to counteract this effect, as I’ll get to in a moment—but there is a strong effect that needs to be counteracted.

We see this most vividly in the bystander effect. If someone is walking down the street and sees someone fall and injure themselves, there is about a 70% chance that they will go try to help the person who fell—humans are altruistic. But if there are a dozen people walking down the street who all witness the same event, there is only a 40% chance that any of them will help—humans are irrational.

The primary reason appears to be diffusion of responsibility. When we are alone, we are the only one who could help, so we feel responsible for helping. But when there are others around, we assume that someone else could take care of it for us, so if it isn’t done that’s not our fault.

There also appears to be a conformity effect: We want to conform our behavior to social norms (as I said, to a first approximation, all human behavior is social norms). The mere fact that there are other people who could have helped but didn’t suggests the presence of an implicit social norm that we aren’t supposed to help this person for some reason. It never occurs to most people to ask why such a norm would exist or whether it’s a good one—it simply never occurs to most people to ask those questions about any social norms. In this case, by hesitating to act, people actually end up creating the very norm they think they are obeying.

This can lead to what’s called an Abilene Paradox, in which people simultaneously try to follow what they think everyone else wants and also try to second-guess what everyone else wants based on what they do, and therefore end up doing something that none of them actually wanted. I think a lot of the weird things humans do can actually be attributed to some form of the Abilene Paradox. (“Why are we sacrificing this goat?” “I don’t know, I thought you wanted to!”)

Autistic people are not as good at following social norms (though some psychologists believe this is simply because our social norms are optimized for the neurotypical population). My suspicion is that autistic people are therefore less likely to suffer from the bystander effect, and more likely to intervene to help someone even if they are surrounded by passive onlookers. (Unfortunately I wasn’t able to find any good empirical data on that—it appears no one has ever thought to check before.) I’m quite certain that autistic people are less likely to suffer from the Abilene Paradox—if they don’t want to do something, they’ll tell you so (which sometimes gets them in trouble).

Because of these psychological effects that blunt our rationality, in large groups human beings often do end up behaving in a way that appears selfish and short-sighted.

Nowhere is this more apparent than in ecology. Recycling, becoming vegetarian, driving less, buying more energy-efficient appliances, insulating buildings better, installing solar panels—none of these things are particularly difficult or expensive to do, especially when weighed against the tens of millions of people who will die if climate change continues unabated. Every recyclable can we throw in the trash is a silent vote for a global holocaust.

But as it no doubt immediately occurred to you to respond: No single one of us is responsible for all that. There’s no way I myself could possibly save enough carbon emissions to significantly reduce climate change—indeed, probably not even enough to save a single human life (though maybe). This is certainly true; the error lies in thinking that this somehow absolves us of the responsibility to do our share.

I think part of what makes the Tragedy of the Commons so different from the Prisoner’s Dilemma, at least psychologically, is that the latter has an identifiable victim—we know we are specifically hurting that person more than we are helping ourselves. We may even know their name (and if we don’t, we’re more likely to defect—simply being on the Internet makes people more aggressive because they don’t interact face-to-face). In the Tragedy of the Commons, it is often the case that we don’t know who any of our victims are; moreover, it’s quite likely that we harm each one less than we benefit ourselves—even though we harm everyone overall more.

Suppose that driving a gas-guzzling car gives me 1 milliQALY of happiness, but takes away an average of 1 nanoQALY from everyone else in the world. A nanoQALY is tiny! Negligible, even, right? One billionth of a year, a mere 30 milliseconds! Literally less than the blink of an eye. But take away 30 milliseconds from everyone on Earth and you have taken away 7 years of human life overall. Do that 10 times, and statistically one more person is dead because of you. And you have gained only 10 milliQALY, roughly the value of $300 to a typical American. Would you kill someone for $300?
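
For the record, here is that arithmetic written out. The only number not taken from the paragraph above is the implied value of a QALY of roughly $30,000, which is what makes 10 milliQALY worth about $300:

```python
world_population = 7_000_000_000
harm_per_person = 1e-9            # 1 nanoQALY each, about 30 milliseconds of life
my_gain = 1e-3                    # 1 milliQALY of happiness per trip
qaly_value = 30_000               # assumed $/QALY, implied by valuing 10 milliQALY at ~$300

print(world_population * harm_per_person)       # 7.0 person-years destroyed per trip
print(10 * world_population * harm_per_person)  # 70 person-years: about one statistical death
print(10 * my_gain * qaly_value)                # 300.0: about $300 of benefit to you
```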

Peter Singer has argued that we should in fact think of it this way—when we cause a statistical death by our inaction, we should call it murder, just as if we had left a child to drown to keep our clothes from getting wet. I can’t agree with that. When you think seriously about the scale and uncertainty involved, it would be impossible to live at all if we were constantly trying to assess whether every action would lead to statistically more or less happiness to the aggregate of all human beings through all time. We would agonize over every cup of coffee, every new video game. In fact, the global economy would probably collapse because none of us would be able to work or willing to buy anything for fear of the consequences—and then whom would we be helping?

That uncertainty matters. Even the fact that there are other people who could do the job matters. If a child is drowning and there is a trained lifeguard right next to you, the lifeguard should go save the child, and if they don’t it’s their responsibility, not yours. Maybe if they don’t you should try; but really they should have been the one to do it.

But we must also not allow ourselves to fall into apathy, to do nothing simply because we cannot do everything. We cannot assess the consequences of every specific action into the indefinite future, but we can find general rules and patterns that govern the consequences of actions we might take. (This is the difference between act utilitarianism, which is unrealistic, and rule utilitarianism, which I believe is the proper foundation for moral understanding.)

Thus, I believe the solution to the Tragedy of the Commons is policy. It is to coordinate our actions together, and create enforcement mechanisms to ensure compliance with that coordinated effort. We don’t look at acts in isolation, but at policy systems holistically. The proper question is not “What should I do?” but “How should we live?”

In the short run, this can lead to results that seem deeply suboptimal—but in the long run, policy answers lead to sustainable solutions rather than quick-fixes.

People are starving! Why don’t we just steal money from the rich and use it to feed people? Well, think about what would happen if we said that the property system can simply be unilaterally undermined if someone believes they are achieving good by doing so. The property system would essentially collapse, along with the economy as we know it. A policy answer to that same question might involve progressive taxation enacted by a democratic legislature—we agree, as a society, that it is justified to redistribute wealth from those who have much more than they need to those who have much less.

Our government is corrupt! We should launch a revolution! Think about how many people die when you launch a revolution. Think about past revolutions. While some did succeed in bringing about more just governments (e.g. the French Revolution, the American Revolution), they did so only after a long period of strife; and other revolutions (e.g. the Russian Revolution, the Iranian Revolution) have made things even worse. Revolution is extremely costly and highly unpredictable; we must use it only as a last resort against truly intractable tyranny. The policy answer is of course democracy; we establish a system of government that elects leaders based on votes, and then if they become corrupt we vote to remove them. (Sadly, we don’t seem so good about that second part—the US Congress has a 14% approval rating but a 95% re-election rate.)

And in terms of ecology, this means that berating ourselves for our sinfulness in forgetting to recycle or not buying a hybrid car does not solve the problem. (Not that it’s bad to recycle, drive a hybrid car, and eat vegetarian—by all means, do these things. But it’s not enough.) We need a policy solution, something like a carbon tax or cap-and-trade that will enforce incentives against excessive carbon emissions.

In case you don’t think politics makes a difference, all of the Democratic candidates for President have proposed such plans—Bernie Sanders favors a carbon tax, Martin O’Malley supports an aggressive cap-and-trade plan, and Hillary Clinton favors heavily subsidizing wind and solar power. The Republican candidates on the other hand? Most of them don’t even believe in climate change. Chris Christie and Carly Fiorina at least accept the basic scientific facts, but (1) they are very unlikely to win at this point and (2) even they haven’t announced any specific policy proposals for dealing with it.

This is why voting is so important. We can’t do enough on our own; the coordination problem is too large. We need to elect politicians who will make policy. We need to use the systems of coordination enforcement that we have built over generations—and that is fundamentally what a government is, a system of coordination enforcement. Only then can we overcome the tendency among human beings to become apathetic and short-sighted when faced with a Tragedy of the Commons.

Why building more roads doesn’t stop rush hour

JDN 2457362

The topic of this post was selected based on the very first Patreon vote (which was admittedly limited, because I only had three patrons eligible to vote and only one of them actually did vote; but these things always start small, right?). It is what you (well, one of you) wanted to see. In future months there will be more such posts, and hopefully more people will vote.

Most Americans face an economic paradox every morning and every evening. Our road network is by far the largest in the world (for three reasons: We’re a huge country geographically, we have more money than anyone else, and we love our cars), and we continue to expand it; yet every morning around 8:00-9:00 and every evening around 17:00-18:00 we face rush hour, in which our roads become completely clogged by commuters and it takes two or three times as long to get anywhere.

Indeed, rush hour is experienced around the world, though it often takes the slightly different form of clogged public transit instead of clogged roads. In most countries, there are two specific one-hour periods in the morning and the evening in which all transportation is clogged to a standstill.

This is probably such a familiar part of your existence you never stopped to question it. But in fact it is quite bizarre; the natural processes of economic supply and demand should have solved this problem decades ago, so why haven’t they?

There are a number of important forces at work here, all of which conspire to doom our transit systems.

The first is the Tragedy of the Commons, which I’ll likely write about in the future (but since it didn’t win the vote, not just yet). The basic idea of the Tragedy of the Commons is similar to the Prisoner’s Dilemma, but expanded to a large number of people. A Tragedy of the Commons is a situation in which there are many people, each of whom has the opportunity to either cooperate with the group and help everyone a small amount, or defect from the group and help themselves a larger amount. If everyone cooperates, everyone is better off; but holding everyone else’s actions fixed, it is in each person’s self-interest to defect.

As it turns out, people do act closer to the neoclassical prediction in the Tragedy of the Commons—which is something I’d definitely like to get into at some point. Two different psychological mechanisms counter one another, and result in something fairly close to the prediction of neoclassical rational self-interest, at least when the number of people involved is very large. It’s actually a good example of how real human beings can deviate from neoclassical rationality both in a good way (we are altruistic) and in a bad way (we are irrational).

The large-scale way roads are a Tragedy of the Commons is that they are a public good, something that we share as a society. Except for toll roads (which I’ll get to in a moment), roads are set up so that once they are built, anyone can use them; so the best option for any individual person is to get everyone else to pay to build them and then quite literally free-ride on the roads everyone else built. But if everyone tries to do that, nobody is going to pay for the roads at all.

And indeed, our roads are massively underfunded. Simply to maintain our existing roads, we would need to spend about $100 billion per year more than we are already spending. Yet once you factor in all the extra costs of damaged vehicles, increased accidents, and wasted time, plus the fact that fixing things early is cheaper than replacing them later, the cost of not maintaining our roads is about three times that amount. This is exactly what you expect to see in a Tragedy of the Commons; there’s a huge benefit for everyone just sitting there, not getting done, because nobody wants to pay for it themselves. Michigan saw this quite dramatically when we voted down increased road funding because it would have slightly increased sales taxes. (Granted, we should be funding roads with fuel taxes, not general sales taxes—but those are hardly any more popular.)

Toll roads can help with this, because they internalize the externality: When you have to pay for the roads that you use, you either use them less (creating less wear and tear) or pay more; either way, the gap between what is paid and what is needed is closed. And indeed, toll roads are better maintained than other roads. There are downsides, however; the additional effort to administrate the tolls is expensive, and traffic can be slowed down by toll booths (though modern transponder systems mitigate this effect substantially). Also, it’s difficult to fully privatize roads, because there is a large up-front cost and it takes a long time for a toll road to become profitable; most corporations don’t want to wait that long.
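
As a rough sketch of what “internalizing the externality” means here, consider a per-mile toll set equal to the wear a vehicle causes (the damage figure and the demand curve below are assumptions for illustration only):

```python
# Sketch of how a per-mile toll internalizes road wear (all numbers assumed).
wear_cost_per_mile = 0.03        # dollars of pavement damage per vehicle-mile (assumed)
toll_per_mile = 0.03             # set the toll equal to the damage a mile causes

def driver_miles(price_per_mile, base_miles=12000, sensitivity=40000):
    """Annual miles driven, falling linearly as the per-mile price rises (assumed)."""
    return max(0.0, base_miles - sensitivity * price_per_mile)

for toll in (0.0, toll_per_mile):
    miles = driver_miles(toll)
    damage = miles * wear_cost_per_mile
    revenue = miles * toll
    print(f"toll ${toll:.2f}/mi: {miles:.0f} miles, "
          f"damage ${damage:.0f}, toll revenue ${revenue:.0f}")
# With no toll, the damage is paid by everyone else; with the toll, drivers
# either drive less or pay an amount that covers the damage they cause.
```

Either response closes the gap: drivers who keep driving pay for the wear they cause, and drivers who drive less cause less wear in the first place.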

But we do build a lot of roads, and yet still we have rush hour. So that isn’t the full explanation.

The small-scale way that roads are a Tragedy of the Commons is that when you decide to drive during rush hour, you are in a sense defecting in a Tragedy of the Commons. You will get to your destination sooner than if you had waited until traffic clears; but by adding one more car to the congestion you have slowed everyone else down just a little bit. When we sum up all these little delays, we get the total gridlock that is rush hour. If you had instead waited to drive on clear roads, you would get to your destination without inconveniencing anyone else—but you’d get there a lot later.
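
A quick back-of-the-envelope calculation shows how those little delays add up (the per-car delay and the number of cars are invented for illustration):

```python
# Each additional car during rush hour delays every other car slightly.
# Assume (purely for illustration) each extra car adds half a second of
# delay to each of the other cars sharing the corridor.

cars = 5000                 # cars on the corridor at rush hour (assumed)
delay_per_car = 0.5         # seconds imposed on each other car (assumed)

# My externality: a trivial half-second to any one driver, but summed over
# everyone else it is substantial.
my_external_delay = delay_per_car * (cars - 1)               # seconds
print(f"total delay I impose on others: {my_external_delay / 60:.0f} minutes")

# And every driver is imposing that same cost on every other driver:
aggregate_delay = delay_per_car * cars * (cars - 1)          # seconds
print(f"aggregate delay across all drivers: {aggregate_delay / 3600:.0f} hours")
```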

The second major reason why we have rush hour is what is called induced demand. When you widen a road or add a parallel route, you generally fail to reduce traffic congestion on that route in the long run. What happens instead is that driving during rush hour becomes more convenient for a little while, which makes more people start driving during rush hour—they buy a car when they used to take the bus, or they don’t leave as early to go to work. Eventually enough people shift over that the equilibrium is restored—and the equilibrium is gridlock.
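
Here is a stylized sketch of how that equilibrium works; the functional forms and every number in it are assumptions chosen only to illustrate the feedback loop, not estimates of real traffic:

```python
# Induced demand, stylized: travel time rises with the volume/capacity ratio,
# and the number of people willing to drive falls with travel time. Widening
# the road lowers travel time at first, which draws in more drivers, which
# pushes travel time partway back up.

def travel_time(volume, capacity, free_flow=20.0):
    """Minutes per trip: free-flow time plus congestion growing with (v/c)^2 (assumed form)."""
    return free_flow * (1 + 2.0 * (volume / capacity) ** 2)

def demand(minutes, max_drivers=12000.0):
    """Drivers willing to drive at a given travel time (assumed linear)."""
    return max(0.0, max_drivers * (1 - minutes / 120.0))

def equilibrium(capacity, volume=5000.0):
    """Damped iteration until driver volume is consistent with travel time."""
    for _ in range(200):
        volume = 0.5 * volume + 0.5 * demand(travel_time(volume, capacity))
    return volume, travel_time(volume, capacity)

for capacity in (5000, 7500, 10000):    # progressively widen the road
    v, t = equilibrium(capacity)
    print(f"capacity {capacity:>5}: {v:>5.0f} drivers, {t:4.1f} minutes per trip")
# Doubling capacity cuts travel time by far less than half, because much of
# the new road space is absorbed by the additional drivers it attracts.
```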

But if you think carefully, that can’t be the whole explanation. There are only so many people who could start driving during rush hour, so what if we simply built enough roads to accommodate them all? And if our public transit systems were better, people would feel no need to switch to driving, even if driving had in fact been made more convenient. And indeed, transportation economists have found that adding more capacity does reduce congestion—it just isn’t enough unless you also improve public transit. So why aren’t we improving public transit? See above, Tragedy of the Commons.

Yet we still don’t have a complete explanation, because of something that’s quite obvious in hindsight: Why do we all work 9:00 to 17:00!? There’s no reason for that. There’s nothing inherent in the angle of sunlight that requires us to work those particular hours—indeed, if there were, Daylight Saving Time wouldn’t work (which is not to say that it works well—Daylight Saving Time kills).

There should be a competitive market pressure to work different hours, which should ultimately lead to an equilibrium where traffic is roughly constant throughout the day, at least during the time when a large swath of the population is awake and outside. Congestion should spread itself out over time, because it is to the advantage of all involved if each driver tries to drive at a time when other drivers aren’t. Driving outside of rush hour gives us an opportunity for something like “temporal arbitrage”, where you can pay a small amount of time here to get a larger amount of time there. And if there’s one thing a competitive economy is supposed to get rid of, it’s arbitrage.

But no, we keep almost all our working hours aligned at 09:00-17:00, and thus we get rush hour.

In fact, a lot of jobs would function better if they weren’t aligned in this way—retail sales, for example, is most successful during the “off hours”, because people only shop when they aren’t working. (Well, except for online shopping, and even then they’re not supposed to.) Banks continually insist on making their hours 9:00 to 17:00 when they know that on most days they’d actually get more business from 17:00 to 19:00 than they do from 9:00 to 17:00. Some banks are at least figuring that out enough to be open from 17:00 to 19:00—but they still don’t seem to grasp that retail banking services have no reason to be open during normal business hours. Commercial banking services do; but that’s a small portion of their overall customers (albeit not of their overall revenue). There’s no reason to have so many full branches open so many hours with most of the tellers doing nothing most of the time.

Education would be better off starting later in the day, when students—particularly teenagers—have a chance to sleep in, the way their brains evolved to. The benefits of later school days in terms of academic performance and public health are actually astonishingly large. When you move the start of high school from 07:00 to 09:00, auto collisions involving teenagers drop 70%. Perhaps these should be the new slogans: “Early classes cause car crashes.” And since 25% of auto collisions occur during rush hour, here’s another: “Always working nine to five? Vehicular homicide.”

Other jobs could have whatever hours they please. There’s no reason for most forms of manufacturing to be done at any particular hour of the day. Most clerical and office work could be done at any time (and thanks to the Internet, any place; though there are real benefits to working in an office). Writing can be done whenever it is convenient for the author—and when you think about it, an awful lot of jobs basically amount to writing.

Finance is only handled 09:00-17:00 because we force it to be. The idea of “opening” and “closing” the stock market each day is profoundly anachronistic, and actually amounts to granting special arbitrage privileges to the small number of financial institutions that are allowed to do so-called “after hours” trading.

And then there’s the fact that different people have different circadian rhythms, require different amounts of sleep and prefer to sleep at different times—it’s genetic. (My boyfriend and I are roughly three hours phase-shifted relative to one another, which made it surprisingly convenient to stay in touch when I lived in California and he lived in Michigan.)

Why do we continue to accept such absurdity?

Whenever you find yourself asking that question, try this answer first, for it is by far the most likely:

Social norms.

Social norms will make human beings do just about anything, from eating cockroaches to murdering elephants, from kilts to burqas, from waving giant foam hands to throwing octopus onto ice rinks, from landing on the moon to crashing into the World Trade Center, from bombing Afghanistan to marching on Washington, from eating only raw foods to using dead pigs as sex toys. Our basic mental architecture is structured around tribal identity, and to preserve that identity we will follow almost any rule imaginable. To a first approximation, all human behavior is social norms.

And indeed I can find no other explanation for why we continue to work on a “nine-to-five” 09:00-17:00 schedule (or for that matter why it probably feels weird to you that I say “17:00” instead of the far less efficient and more confusion-prone “5:00 PM”). Our productivity has skyrocketed, increasing by a factor of 4 just since 1950 (and these figures dramatically underestimate the gains in productivity from computer technology, because so much is in the form of free content, which isn’t counted in GDP). We could do the same work in a quarter the time, or twice as much in half the time. Yet still we continue to work the same old 40-hour work week, nine-to-five work day. We each do the work of a dozen previous workers, yet we still find a way to fill the same old work week, and the rich who grow ever richer still pay us more or less the same real wages. It’s all basically social norms at this point; this is how things have always been done, and we can’t imagine any other way. When you get right down to it, capitalism is fundamentally a system of social norms—a very successful one, but far from the only possibility and perhaps not the best.

So why does building more roads not solve the problem of rush hour? Because we have a social norm that says we are all supposed to start work at 09:00 and end work at 17:00.

And that, dear readers, is what we must endeavor to change. Change our thinking, and we will change the norms. Change the norms, and we will change the world.

Externalities

JDN 2457202 EDT 17:52.

The 1992 Bill Clinton campaign had a slogan: “It’s the economy, stupid.” A snowclone I’ve used on occasion is “It’s the externalities, stupid.” (Though I’m actually not all that fond of calling people ‘stupid’; though occasionally true, it is never polite and rarely useful.) Externalities are one of the most important concepts in economics, and yet one that all too many economists frequently neglect.

Fortunately for this one, I really don’t need much math; the concept isn’t even that complicated, which makes it all the more mysterious how frequently it is ignored. An externality is simply an effect that an action has upon those who were not involved in choosing to perform that action.

All sorts of actions have externalities; indeed, much rarer are actions that don’t. An obvious example is that punching someone in the face has the externality of injuring that person. Pollution is an important externality of many forms of production, because the people harmed by pollution are typically not the same people who were responsible for creating it. Traffic jams are created because every car on the road causes a congestion externality on all the other cars.

All the aforementioned are negative externalities, but there are also positive externalities. When one individual becomes educated, they tend to improve the overall economic viability of the place in which they live. Building infrastructure benefits whole communities. New scientific discoveries enhance the well-being of all humanity.

Externalities are a fundamental problem for the functioning of markets. In the absence of externalities—if each person’s actions only affected that one person and nobody else—then rational self-interest would be optimal and anything else would make no sense. In arguing that rationality is equivalent to self-interest, generations of economists have been, tacitly or explicitly, assuming that there are no such things as externalities.

This is a necessary assumption to show that self-interest would lead to something I discussed in an earlier post: Pareto-efficiency, in which the only way to make one person better off is to make someone else worse off. As I already talked about in that other post, Pareto-efficiency is wildly overrated; a wide variety of Pareto-efficient systems would be intolerable to actually live in. But in the presence of externalities, markets can’t even guarantee Pareto-efficiency, because it’s possible to have everyone acting in their rational self-interest cause harm to everyone at once.

This is called a tragedy of the commons; the basic idea is really quite simple. Suppose that when I burn a gallon of gasoline, that makes me gain 5 milliQALY by driving my car, but then makes everyone (including me) lose 1 milliQALY in increased pollution. On net, I gain 4 milliQALY, so if I am rational and self-interested I would do that. But now suppose that there are 10 people all given the same choice. If we all make that same choice, each of us gains 5 milliQALY from our own driving—and loses 10 milliQALY from everyone’s pollution, for a net loss of 5 milliQALY. We would all have been better off if none of us had done it, even though it made sense to each of us at the time. Burning a gallon of gasoline to drive my car is beneficial to me, more so than the release of carbon dioxide into the atmosphere is harmful; but as a result of millions of people burning gasoline, the carbon dioxide in the atmosphere is destabilizing our planet’s climate. We’d all be better off if we could find some way to burn less gasoline.
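
Written out in code, using exactly the toy numbers from the paragraph above (5 milliQALY of private benefit, 1 milliQALY of pollution harm to each of 10 people):

```python
# The gasoline example in numbers. The figures are the toy assumptions from
# the paragraph above, not real estimates.
people = 10
private_gain = 5.0      # milliQALY I gain from driving on the gallon I burn
harm_each = 1.0         # milliQALY of pollution harm imposed on EACH person

def my_net(i_burn, others_burning):
    """My net milliQALY, given my choice and how many of the other 9 burn."""
    harm_to_me = harm_each * (others_burning + (1 if i_burn else 0))
    return (private_gain if i_burn else 0.0) - harm_to_me

print("only I burn:            ", my_net(True, 0))            # +4: worth it to me alone
print("everyone burns:         ", my_net(True, people - 1))   # -5: we are all worse off
print("nobody burns:           ", my_net(False, 0))           #  0: better for all of us
print("I abstain, others burn: ", my_net(False, people - 1))  # -9: abstaining alone doesn't pay
```

Whatever the other nine do, burning always leaves me 4 milliQALY better off than abstaining, yet all ten of us burning leaves each of us 5 milliQALY worse off than all ten of us abstaining.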

In order for rational self-interest to be optimal, externalities have to somehow be removed from the system. Otherwise, there are actions we can take that benefit ourselves but harm other people—and thus, we would all be better off if we acted to some degree altruistically. (When I say things like this, most non-economists think I am saying something trivial and obvious, while most economists insist that I am making an assertion that is radical if not outright absurd.)

But of course a world without externalities is a world of complete isolation; it’s a world where everyone lives on their own deserted island and there is no way of communicating or interacting with any other human being in the world. The only reasonable question about this world is whether we would die first or go completely insane first; clearly those are the two things that would happen. Human beings are fundamentally social animals—I would argue that we are in fact more social even than eusocial animals like ants and bees. (Ants and bees are only altruistic toward their own kin; humans are altruistic to groups of millions of people we’ve never even met.) Humans without social interaction are like flowers without sunlight.

Indeed, externalities are so common that if markets only worked in their absence, markets would make no sense at all. Fortunately this isn’t true; there are some ways that markets can be adjusted to deal with at least some kinds of externalities.

One of the most well-known is the Coase theorem; this is odd, because it is by far the worst solution. The Coase theorem basically says that if you can assign and enforce well-defined property rights and there is absolutely no cost in making any transaction, markets will automatically work out all externalities. The basic idea is that if someone is about to perform an action that would harm you, you can instead pay them not to do it. Then the harm to you is prevented, and they receive a benefit in its place.

In the above example, we could all agree to pay $30 (which let’s say is worth 1 milliQALY) to each person who doesn’t burn a gallon of gasoline that would pollute our air. Then, if I were thinking about burning some gasoline, I wouldn’t want to do it, because I’d forfeit the $30 payments from each of the other nine people ($270, worth 9 milliQALY), while the benefit of burning the gasoline is only 5 milliQALY. We all reason the same way, and the result is that nobody burns gasoline; and since every payment each of us makes is matched by one we receive, the money all balances out and we end up financially where we started. The result is that we are all better off.
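
Here is the same deal worked out in code (same toy numbers as above; the only added assumption is that the $30 payments come from each of the other nine people, as in the paragraph above):

```python
# The Coasean side-payment deal in numbers, continuing the toy example above.
people = 10
payment = 30.0               # dollars each person pays to each abstainer
dollars_per_mqaly = 30.0     # the paragraph's assumed conversion rate
private_gain = 5.0           # milliQALY from burning the gallon anyway

# If I burn, I forfeit the $30 payment from each of the other nine people:
forfeited_mqaly = payment * (people - 1) / dollars_per_mqaly   # = 9 milliQALY
print("value forfeited by burning:", forfeited_mqaly, "milliQALY")
print("value gained by burning:   ", private_gain, "milliQALY")
print("still worth burning?", private_gain > forfeited_mqaly)  # False: the deal deters burning

# If nobody burns, every payment I make to an abstainer is matched by one I
# receive, so the money washes out and everyone keeps the cleaner air.
```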

The first thought you probably have is: How do I pay everyone who doesn’t hurt me? How do I even find all those people? How do I ensure that they follow through and actually don’t hurt me? These are the problems of transaction costs and contract enforcement that are usually presented as the problem with the Coase theorem, and they certainly are very serious problems. You end up needing some sort of government simply to enforce all those contracts, and even then there’s the question of how we can possibly locate everyone who has ever polluted our air or our water.

But in fact there’s an even more fundamental problem: This is extortion. We are almost always in the condition of being able to harm other people, and a system in which the reason people don’t hurt each other is because they’re constantly paying each other not to is a system in which the most intimidating psychopath is the wealthiest person in the world. That system is in fact Pareto-efficient (the psychopath does quite well for himself indeed); but it’s exactly the sort of Pareto-efficient system that isn’t worth pursuing.

Another response to externalities is simply to accept them, which isn’t as awful as it sounds. There are many kinds of externalities that really aren’t that bad, and anything we might do to prevent them is likely to be a cure worse than the disease. Think about the externality of people standing in front of you in line, or the externality of people buying the last cereal box off the shelf before you can get there. The externality of taking the job you applied for may hurt at the time, but in the long run that’s how we maintain a thriving and competitive labor market. In fact, even the externality of ‘gentrifying’ your neighborhood so you can no longer afford it is not nearly as bad as most people seem to think—indeed, the much larger problem seems to be the poor neighborhoods that don’t have rising incomes and remain poor for generations. (It also makes no sense to call this “gentrifying”; the only landed gentry we have in America is the landowners who claim a ludicrous proportion of our wealth, not the middle-class people who buy cheap homes and move in. If you really want to talk about a gentry, you should be thinking Waltons and Kochs—or Bushes and Clintons.) These sorts of minor externalities that are better left alone are sometimes characterized as pecuniary externalities because they are usually linked to prices, but I think that really misses the point; it’s quite possible for an externality to be entirely price-related and do enormous damage (read: the entire financial system), or to have little or nothing to do with prices and still not be that bad (like standing in line, as I mentioned above).

But obviously we can’t leave all externalities alone in this way. We can’t just let people rob and murder one another arbitrarily, or ignore the destruction of the world’s climate that threatens hundreds of millions of lives. We can’t stand back and let forests burn and rivers run dry when we could easily have saved them.

The much more reasonable and realistic response to externalities is what we call government—there are rules you have to follow in society and punishments you face if you don’t. We can avoid most of the transaction problems involved in figuring out who polluted our water by simply making strict rules about polluting water in general. We can prevent people from stealing each other’s things or murdering each other by police who will investigate and punish such crimes.

This is why regulation—and a government strong enough to enforce that regulation—is necessary for the functioning of a society. The dichotomy we have been sold about “regulations versus the market” is totally nonsensical; the market depends upon regulations. This doesn’t justify any particular regulation—and indeed, an awful lot of regulations are astonishingly bad. But some sort of regulatory system is necessary for a market to function at all, and the question has never been whether we will have regulations but which regulations we will have. People who argue that all regulations must go and the market would somehow work on its own are either deeply ignorant of economics or operating from an ulterior motive; some truly horrendous policies have been made by arguing that “less government is always better” when the truth is nothing of the sort.

In fact, there is one real-world method I can think of that actually comes reasonably close to eliminating all externalities—and it is called social democracy. By involving everyone—democracy—in a system that regulates the economy—socialism—we can, in a sense, involve everyone in every transaction, and thus make it impossible to have externalities. In practice it’s never that simple, of course; but the basic concept of involving our whole society in making the rules that our society will follow is sound—and in fact I can think of no reasonable alternative.

We have to institute some sort of regulatory system, but then we need to decide what the regulations will be and who will control them. If we instead want to vest power in a technocratic elite, how do we decide whom to include in that elite? How do we ensure that the technocrats actually serve the general population if that population has no say in choosing them? By involving as many people as we can in the decision-making process, we make it much less likely that one person’s selfish action will harm many others. Indeed, this is probably why democracy prevents famine and genocide—which are, after all, rather extreme examples of negative externalities.