No, the system is not working as designed

You say you’ve got a real solution…

Well, you know,

We’d all love to see the plan.

“Revolution”, the Beatles


Jun 16 JDN 2460478


There are several different versions of the meme, but they all follow the same basic format: Rejecting the statement “the system is broken and must be fixed”, they endorse the statement “the system is working exactly as intended and must be destroyed”.


This view is not just utterly wrong; it’s also incredibly dangerous.

First of all, it should be apparent to anyone who has ever worked in any large, complex organization—a corporation, a university, even a large nonprofit—that no human system works exactly as intended. Some obviously function better than others, and most function reasonably well most of the time (probably because those that don’t function well tend to fail and disappear, so there is a sort of natural selection process at work); but even with apparently simple goals and extensive resources, no complex organization will ever be able to coordinate its actions perfectly toward those goals.

But when we’re talking about “the system”, well, first of all:

What exactly is “the system”?

Is it government? Society as a whole? The whole culture, or some subculture? Is it local, national, or international? Are we talking about democracy, or maybe capitalism? The world isn’t just one system; it’s a complex network of interacting systems. So to be quite honest with you, I don’t even know what people are complaining about when they complain about “the system”. All I know is that there is some large institution that they don’t like.

Let’s suppose we can pin that down—say we’re talking about capitalism, for instance, or the US government. Then, there is still the obvious fact that any real-world implementation of a system is going to have failures. Particularly when millions of people are involved, no system is ever going to coordinate exactly toward achieving its goals as efficiently as possible. At best it’s going to coordinate reasonably well and achieve its goals most of the time.

But okay, let’s try to be as charitable as possible here.

What are people trying to say when they say this?

I think that fundamentally this is meant as an expression of Conflict Theory over Mistake Theory: The problems with the world aren’t due to well-intentioned people making honest mistakes, they are due to people being evil. The response isn’t to try to correct their mistakes; it’s to fight them (kill them?), because they are evil.

Well, it is certainly true that evil people exist. There are mass murderers and tyrants, rapists and serial killers. And though they may be less extreme, it is genuinely true that billionaires are disproportionately likely to be psychopaths and that those who aren’t typically share a lot of psychopathic traits.

But does this really look like the sort of system that was designed to optimize payoffs for a handful of psychopaths? Really? You can’t imagine any way that the world could be more optimized for that goal?

How about, say… feudalism?

Not that long ago, historically—less than a millennium—the world was literally ruled by those same sorts of uber-rich psychopaths, and they wielded absolute power over their subjects. In medieval times, your king could confiscate your wealth whenever he chose, or even have you executed on a whim. That system genuinely looks like it’s optimized for the power of a handful of evil people.

Democracy, on the other hand, actually looks like it’s trying to be better. Maybe sometimes it isn’t better—or at least isn’t enough better. But why would they even bother letting us vote, if they were building a system to optimize their own power over us? Why would we have free speech protections that allow you to post those memes without going to prison?

In fact, there are places today where near-absolute power really is concentrated in a handful of psychopaths, where authoritarian dictators still act very much like kings of yore. In North Korea or Russia or China, there really is a system in place that’s very well optimized to maximize the power of a few individuals over everyone else.

But in the United States, we don’t have that. Not yet, anyway. Our democracy is flawed and imperiled, but so far, it stands. It needs our constant vigilance to defend it, but so far, it stands.

This is precisely why these ideas are so dangerous.

If you tell people that the system is already as bad as it’s ever going to get, that the only hope now is to burn it all down and build something new, then those people aren’t going to stand up and defend what we still have. They aren’t going to fight to keep authoritarians out of office, because they don’t believe that their votes or donations or protests actually do anything to control who ends up in office.

In other words, they are acting exactly as the authoritarians want them to.

Short of your actual support, the best gift you can give your enemy is apathy.

If all the good people give up on democracy, then it will fail, and we will see something worse in its place. Your belief that the world can’t get any worse can make the world much, much worse.

I’m not saying our system of government couldn’t be radically improved. It absolutely could, even by relatively simple reforms, such as range voting and a universal basic income. But there are people who want to tear it all down, and if they succeed, what they put in its place is almost certainly going to be worse, not better.

That’s what happened in Communist countries, after all: They started with bad systems, they tore them down in the name of making something better—and then they didn’t make something better. They made something worse.

And I don’t think it’s an accident that Marxists are so often Conflict Theorists; Marx himself certainly was. Marx seemed convinced that all we needed to do was tear down the old system, and a new, better system would spontaneously emerge. But that isn’t how any of this works.

Good governance is actually really hard.

Life isn’t simple. People aren’t easy to coordinate. Conflicts of interest aren’t easy to resolve. Coordination failures are everywhere. If you tear down the best systems we have for solving these problems, with no vision at all of what you would replace them with, you’re not going to get something better.

Different people want different things. We have to resolve those disagreements somehow. There are lots of ways we could go about doing that. But so far, some variation on voting seems to be the best method we have for resolving disagreements fairly.

It’s true; some people out there are really just bad people. Some of what even good people want is ultimately not reasonable, or based on false presumptions. (Like people who want to “cut” foreign aid to 5% of the budget—when it is in fact about 1%.) Maybe there is some alternative system out there that could solve these problems better, ensure that only the reasonable voices with correct facts actually get heard.

If so, well, you know:

We’d all love to see the plan.

It’s not enough to recognize that our current system is flawed and posit that something better could exist. You need to actually have a clear vision of what that better system looks like. For if you go tearing down the current system without any idea of what to replace it with, you’re going to end up with something much worse.

Indeed, if you had a detailed plan of how to improve things, it’s quite possible you could convince enough people to get that plan implemented, without tearing down the whole system first.

We’ve done it before, after all:

We ended slavery, then racial segregation. We gave women the right to vote, then integrated them into the workforce. We lifted the ban on homosexuality, and then legalized same-sex marriage.


We have a very clear track record of reform working. Things are getting better, on a lot of different fronts. (Maybe not all fronts, I admit.) When the moral case becomes overwhelming, we really can convince people to change their minds and then vote to change our policies.

We do not have such a track record when it comes to revolutions.

Yes, some revolutions have worked out well, such as the one that founded the United States. (But I really cannot emphasize this enough: they had a plan!) But plenty more have worked out very badly. Even France, which turned out okay in the end, had to go through a Napoleon phase first.

Overall, it seems like our odds are better when we treat the system as broken and try to fix it, than when we treat it as evil and try to tear it down.

The world could be a lot better than it is. But never forget: It could also be a lot worse.

The Tragedy of the Commons

JDN 2457387

In a previous post I talked about one of the most fundamental—perhaps the most fundamental—problems in game theory, the Prisoner’s Dilemma, and how neoclassical economic theory totally fails to explain actual human behavior when faced with this problem in both experiments and the real world.

As a brief review, the essence of the game is that both players can either cooperate or defect; if they both cooperate, the outcome is best overall; but it is always in each player’s interest to defect. So a neoclassically “rational” player would always defect—resulting in a bad outcome for everyone. But real human beings typically cooperate, and thus do better. The “paradox” of the Prisoner’s Dilemma is that being “rational” results in making less money in the end.
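
To make the payoff structure concrete, here is a minimal sketch in Python with made-up numbers (the specific payoffs are illustrative, not taken from any experiment): mutual cooperation pays more than mutual defection, yet whatever the other player does, defecting is always the better reply.

    # Illustrative Prisoner's Dilemma payoffs (hypothetical numbers).
    # payoff[(my_move, their_move)] = what I receive.
    payoff = {
        ("C", "C"): 3,  # both cooperate: good for both
        ("C", "D"): 0,  # I cooperate, they defect: worst for me
        ("D", "C"): 5,  # I defect, they cooperate: best for me
        ("D", "D"): 1,  # both defect: bad for both
    }

    for their_move in ("C", "D"):
        best_reply = max(("C", "D"), key=lambda me: payoff[(me, their_move)])
        print(f"If they play {their_move}, my best reply is {best_reply}")

    # Defection is the best reply either way, yet mutual defection pays 1 each
    # while mutual cooperation would have paid 3 each.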

Obviously, this is not actually a good definition of rational behavior. Being short-sighted and ignoring the impact of your behavior on others doesn’t actually produce good outcomes for anybody, including yourself.

But the Prisoner’s Dilemma only has two players. If we expand to a larger number of players, the expanded game is called a Tragedy of the Commons.

When we do this, something quite surprising happens: As you add more people, their behavior starts converging toward the neoclassical solution, in which everyone defects and we get a bad outcome for everyone.
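
One common way to model this is an n-player public-goods game. Here is a minimal sketch, again with hypothetical numbers (the endowment of 10 and the multiplier of 1.6 are arbitrary choices): the group does best when everyone contributes, but each individual keeps more by free-riding, and that stays true no matter how large the group gets.

    # Illustrative n-player public-goods game, one simple model of a
    # Tragedy of the Commons (endowment and multiplier are hypothetical).
    def my_payoff(my_contribution, others_contributions, multiplier=1.6, endowment=10):
        pot = (my_contribution + sum(others_contributions)) * multiplier
        n = 1 + len(others_contributions)
        return endowment - my_contribution + pot / n

    for n in (2, 10, 100):
        others = [10] * (n - 1)  # suppose everyone else contributes fully
        print(n, my_payoff(10, others), my_payoff(0, others))

    # For every group size, contributing nothing pays the individual more
    # than contributing everything, even though all-contribute (16 each)
    # beats all-defect (10 each) for the group as a whole.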

Indeed, people in general become less cooperative, less courageous, and more apathetic the more of them you put together. Agent K in Men in Black was quite apt when he said, “A person is smart; people are dumb, panicky, dangerous animals and you know it.” There are ways to counteract this effect, as I’ll get to in a moment—but there is a strong effect that needs to be counteracted.

We see this most vividly in the bystander effect. If someone is walking down the street and sees someone fall and injure themselves, there is about a 70% chance that they will go try to help the person who fell—humans are altruistic. But if there are a dozen people walking down the street who all witness the same event, there is only a 40% chance that any of them will help—humans are irrational.

The primary reason appears to be diffusion of responsibility. When we are alone, we are the only one who could help, so we feel responsible for helping. But when there are others around, we assume that someone else could take care of it for us, so if it isn’t done, that’s not our fault.

There also appears to be a conformity effect: We want to conform our behavior to social norms (as I said, to a first approximation, all human behavior is social norms). The mere fact that there are other people who could have helped but didn’t suggests the presence of an implicit social norm that we aren’t supposed to help this person for some reason. It never occurs to most people to ask why such a norm would exist or whether it’s a good one—it simply never occurs to most people to ask those questions about any social norms. In this case, by hesitating to act, people actually end up creating the very norm they think they are obeying.

This can lead to what’s called an Abilene Paradox, in which people simultaneously try to follow what they think everyone else wants and also try to second-guess what everyone else wants based on what they do, and therefore end up doing something that none of them actually wanted. I think a lot of the weird things humans do can actually be attributed to some form of the Abilene Paradox. (“Why are we sacrificing this goat?” “I don’t know, I thought you wanted to!”)

Autistic people are not as good at following social norms (though some psychologists believe this is simply because our social norms are optimized for the neurotypical population). My suspicion is that autistic people are therefore less likely to suffer from the bystander effect, and more likely to intervene to help someone even if they are surrounded by passive onlookers. (Unfortunately I wasn’t able to find any good empirical data on that—it appears no one has ever thought to check before.) I’m quite certain that autistic people are less likely to suffer from the Abilene Paradox—if they don’t want to do something, they’ll tell you so (which sometimes gets them in trouble).

Because of these psychological effects that blunt our rationality, in large groups human beings often do end up behaving in a way that appears selfish and short-sighted.

Nowhere is this more apparent than in ecology. Recycling, becoming vegetarian, driving less, buying more energy-efficient appliances, insulating buildings better, installing solar panels—none of these things are particularly difficult or expensive to do, especially when weighed against the tens of millions of people who will die if climate change continues unabated. Every recyclable can we throw in the trash is a silent vote for a global holocaust.

But as no doubt immediately occurred to you to respond: No single one of us is responsible for all that. There’s no way I myself could possibly cut my own carbon emissions enough to significantly reduce climate change—indeed, probably not even enough to save a single human life (though maybe). This is certainly true; the error lies in thinking that this somehow absolves us of the responsibility to do our share.

I think part of what makes the Tragedy of the Commons so different from the Prisoner’s Dilemma, at least psychologically, is that the latter has an identifiable victim: we know we are specifically hurting that person more than we are helping ourselves. We may even know their name (and if we don’t, we’re more likely to defect—simply being on the Internet makes people more aggressive because they don’t interact face-to-face). In the Tragedy of the Commons, it is often the case that we don’t know who any of our victims are; moreover, it’s quite likely that we harm each one less than we benefit ourselves—even though we harm everyone overall more.

Suppose that driving a gas-guzzling car gives me 1 milliQALY of happiness, but takes away an average of 1 nanoQALY from everyone else in the world. A nanoQALY is tiny! Negligible, even, right? One billionth of a year, a mere 30 milliseconds! Literally less than the blink of an eye. But take away 30 milliseconds from everyone on Earth and you have taken away 7 years of human life overall. Do that 10 times, and statistically one more person is dead because of you. And you have gained only 10 milliQALY, roughly the value of $300 to a typical American. Would you kill someone for $300?
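
The arithmetic is worth spelling out. Here is a back-of-envelope check in Python; the 7 billion population figure and the use of roughly 70 QALY as one statistical lifetime are round-number assumptions, not precise estimates.

    # Back-of-envelope check of the figures above.
    population = 7e9        # rough world population (assumption)
    harm_each = 1e-9        # 1 nanoQALY per person, in years
    gain_to_me = 1e-3       # 1 milliQALY per drive, in years

    milliseconds = harm_each * 365.25 * 24 * 3600 * 1000
    print(round(milliseconds))                    # ~32 ms: less than the blink of an eye

    person_years_per_drive = harm_each * population
    print(person_years_per_drive)                 # 7 person-years of life lost per drive

    drives_per_statistical_death = 70 / person_years_per_drive
    print(drives_per_statistical_death)           # ~10 drives per statistical death

    print(drives_per_statistical_death * gain_to_me)  # ~0.01 QALY (10 milliQALY) gained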

Peter Singer has argued that we should in fact think of it this way—when we cause a statistical death by our inaction, we should call it murder, just as if we had left a child to drown to keep our clothes from getting wet. I can’t agree with that. When you think seriously about the scale and uncertainty involved, it would be impossible to live at all if we were constantly trying to assess whether every action would lead to statistically more or less happiness for the aggregate of all human beings through all time. We would agonize over every cup of coffee, every new video game. In fact, the global economy would probably collapse because none of us would be able to work or willing to buy anything for fear of the consequences—and then whom would we be helping?

That uncertainty matters. Even the fact that there are other people who could do the job matters. If a child is drowning and there is a trained lifeguard right next to you, the lifeguard should go save the child, and if they don’t it’s their responsibility, not yours. Maybe if they don’t you should try; but really they should have been the one to do it.

But we must also not allow ourselves to simply fall into apathy, to do nothing simply because we cannot do everything. We cannot assess the consequences of every specific action into the indefinite future, but we can find general rules and patterns that govern the consequences of actions we might take. (This is the difference between act utilitarianism, which is unrealistic, and rule utilitarianism, which I believe is the proper foundation for moral understanding.)

Thus, I believe the solution to the Tragedy of the Commons is policy. It is to coordinate our actions together, and create enforcement mechanisms to ensure compliance with that coordinated effort. We don’t look at acts in isolation, but at policy systems holistically. The proper question is not “What should I do?” but “How should we live?”

In the short run, this can lead to results that seem deeply suboptimal—but in the long run, policy answers lead to sustainable solutions rather than quick fixes.

People are starving! Why don’t we just steal money from the rich and use it to feed people? Well, think about what would happen if we said that the property system can simply be unilaterally undermined if someone believes they are achieving good by doing so. The property system would essentially collapse, along with the economy as we know it. A policy answer to that same question might involve progressive taxation enacted by a democratic legislature—we agree, as a society, that it is justified to redistribute wealth from those who have much more than they need to those who have much less.

Our government is corrupt! We should launch a revolution! Think about how many people die when you launch a revolution. Think about past revolutions. While some did succeed in bringing about more just governments (e.g. the French Revolution, the American Revolution), they did so only after a long period of strife; and other revolutions (e.g. the Russian Revolution, the Iranian Revolution) have made things even worse. Revolution is extremely costly and highly unpredictable; we must use it only as a last resort against truly intractable tyranny. The policy answer is of course democracy: we establish a system of government that elects leaders based on votes, and then if they become corrupt we vote to remove them. (Sadly, we don’t seem to be very good at that second part—the US Congress has a 14% approval rating but a 95% re-election rate.)

And in terms of ecology, this means that berating ourselves for our sinfulness in forgetting to recycle or not buying a hybrid car does not solve the problem. (Not that it’s bad to recycle, drive a hybrid car, and eat vegetarian—by all means, do these things. But it’s not enough.) We need a policy solution, something like a carbon tax or cap-and-trade that will enforce incentives against excessive carbon emissions.

In case you don’t think politics makes a difference, all of the Democratic candidates for President have proposed such plans—Bernie Sanders favors a carbon tax, Martin O’Malley supports an aggressive cap-and-trade plan, and Hillary Clinton favors heavily subsidizing wind and solar power. The Republican candidates, on the other hand? Most of them don’t even believe in climate change. Chris Christie and Carly Fiorina at least accept the basic scientific facts, but (1) they are very unlikely to win at this point and (2) even they haven’t announced any specific policy proposals for dealing with it.

This is why voting is so important. We can’t do enough on our own; the coordination problem is too large. We need to elect politicians who will make policy. We need to use the systems of coordination enforcement that we have built over generations—and that is fundamentally what a government is, a system of coordination enforcement. Only then can we overcome the tendency among human beings to become apathetic and short-sighted when faced with a Tragedy of the Commons.