The evolution of human cooperation

Jun 17 JDN 2458287

If alien lifeforms were observing humans (assuming they didn’t turn out the same way—which they actually might, for reasons I’ll get to shortly), the thing that would probably baffle them the most about us is how we organize ourselves into groups. Each individual may be part of several groups at once, and some groups are closer-knit than others; but the most tightly-knit groups exhibit extremely high levels of cooperation, coordination, and self-sacrifice.

They might think at first that we are eusocial, like ants or bees; but upon closer study they would see that our groups are not very strongly correlated with genetic relatedness. We are somewhat more closely related to those in our groups than to those outside, usually; but it’s a remarkably weak effect, especially compared to the extremely high relatedness of worker bees in a hive. No, to a first approximation, these groups are of unrelated humans; yet their level of cooperation is equal to if not greater than that exhibited by the worker bees.

However, the alien anthropologists would find that it is not that humans are simply predisposed toward extremely high altruism and cooperation in general; when two human groups come into conflict, they are capable of the most extreme forms of violence imaginable. Human history is full of atrocities that combine the indifferent brutality of nature red in tooth and claw with the boundless ingenuity of a technologically advanced species. Yet except for a small proportion perpetrated by individual humans with some sort of mental pathology, these atrocities are invariably committed by one unified group against another. Even in genocide there is cooperation.

Humans are not entirely selfish. But nor are they paragons of universal altruism (though some of them aspire to be). Humans engage in a highly selective form of altruism—virtually boundless for the in-group, almost negligible for the out-group. Humans are tribal.

Being a human yourself, this probably doesn’t strike you as particularly strange. Indeed, I’ve mentioned it many times previously on this blog. But it is actually quite strange, from an evolutionary perspective; most organisms are not like this.

As I said earlier, there is actually reason to think that our alien anthropologist would come from a species with similar traits, simply because such cooperation may be necessary to achieve a full-scale technological civilization, let alone the capacity for interstellar travel. But there might be other possibilities; perhaps they come from a eusocial species, and their large-scale cooperation is within an extremely large hive.

It’s true that most organisms are not entirely selfish. There are various forms of cooperation within and even across species. But these usually involve only close kin, and otherwise involve highly stable arrangements of mutual benefit. There is nothing like the large-scale cooperation between anonymous unrelated individuals that is exhibited by all human societies.

How would such an unusual trait evolve? It must require a very particular set of circumstances, since it only seems to have evolved in a single species (or at most a handful of species, since other primates and cetaceans display some of the same characteristics).

Once evolved, this trait is clearly advantageous; indeed it turned a local apex predator into a species so successful that it can actually intentionally control the evolution of other species. Humans have become a hegemon over the entire global ecology, for better or for worse. Cooperation gave us a level of efficiency in producing the necessities of survival so great that at this point most of us spend our time working on completely different tasks. If you are not a farmer or a hunter or a carpenter (and frankly, even if you are a farmer with a tractor, a hunter with a rifle, or a carpenter with a table saw), you are doing work that would simply not have been possible without very large-scale human cooperation.

This extremely high fitness benefit only makes the matter more puzzling, however: If the benefits are so great, why don’t more species do this? There must be some other requirements that other species were unable to meet.

One clear requirement is high intelligence. As frustrating as it may be to be a human and watch other humans kill each other over foolish grievances, this is actually evidence of how smart humans are, biologically speaking. We might wish we were even smarter still—but most species don’t have the intelligence to make it even as far as we have.

But high intelligence is likely not sufficient. We can’t be sure of that, since we haven’t encountered any other species with equal intelligence; but what we do know is that even Homo sapiens didn’t coordinate on anything like our current scale for tens of thousands of years. We may have had tribal instincts, but if so they were largely confined to a very small scale. Something happened, about 50,000 years ago or so—not very long ago in evolutionary time—that allowed us to increase that scale dramatically.

Was this a genetic change? It’s difficult to say. There could have been some subtle genetic mutation, something that wouldn’t show up in the fossil record. But more recent expansions in human cooperation to the level of the nation-state and beyond clearly can’t be genetic; they were much too fast for that. They must be a form of cultural evolution: The replicators being spread are ideas and norms—memes—rather than genes.

So perhaps the very early shift toward tribal cooperation was also a cultural one. Perhaps it began not as a genetic mutation but as an idea—perhaps a metaphor of “universal brotherhood” as we often still hear today. The tribes that believed this idea prospered; the tribes that didn’t were outcompeted or even directly destroyed.

This would explain why it had to be an intelligent species. We needed brains big enough to comprehend metaphors and generalize concepts. We needed enough social cognition to keep track of who was in the in-group and who was in the out-group.

If it was indeed a cultural shift, this should encourage us. (And since the most recent changes definitely were cultural, that is already quite encouraging.) We are not limited by our DNA to only care about a small group of close kin; we are capable of expanding our scale of unity and cooperation far beyond.
The real question is whether we can expand it to everyone. Unfortunately, there is some reason to think that this may not be possible. If our concept of tribal identity inherently requires both an in-group and an out-group, then we may never be able to include everyone. If we are only unified against an enemy, never simply for our own prosperity, world peace may forever remain a dream.

But I do have a work-around that I think is worth considering. Can we expand our concept of the out-group to include abstract concepts? With phrases like “The War on Poverty” and “The War on Terror”, it would seem in fact that we can. It feels awkward; it is somewhat imprecise—but then, so was the original metaphor of “universal brotherhood”. Our brains are flexible enough that they don’t actually seem to need the enemy to be a person; it can also be an idea. If this is right, then we can actually include everyone in our in-group, as long as we define the right abstract out-group. We can choose enemies like poverty, violence, cruelty, and despair instead of other nations or ethnic groups. If we must continue to fight a battle, let it be a battle against the pitiless indifference of the universe, rather than our fellow human beings.

Of course, the real challenge will be getting people to change their existing tribal identities. In the moment, these identities seem fundamentally intractable. But that can’t really be the case—for these identities have changed over historical time. Once-important categories have disappeared; new ones have arisen in their place. Someone in 4th-century Constantinople would find the conflict between Democrats and Republicans as baffling as we would find the conflict between Trinitarians and Arians. The ongoing oppression of Native American people by White people would be unfathomable to someone of the 11th-century Onondaga, who could scarcely imagine an enemy more foreign than the Seneca just west of them. Even the conflict between Russia and NATO would probably seem strange to someone living in France in 1943, for whom Germany was the enemy and Russia was at least the enemy of the enemy—and many of those people are still alive.

I don’t know exactly how these tribal identities change (I’m working on it). It clearly isn’t as simple as convincing people with rational arguments. In fact, part of how it seems to work is that someone will shift their identity slowly enough that they can’t perceive the shift themselves. People rarely seem to appreciate, much less admit, how much their own minds have changed over time. So don’t ever expect to change someone’s identity in one sitting. Don’t even expect to do it in one year. But never forget that identities do change, even within an individual’s lifetime.

Self-fulfilling norms

Post 242: Jun 10 JDN 2458280

Imagine what it would be like to live in a country with an oppressive totalitarian dictator. For millions of people around the world, this is already reality. For us in the United States, it’s becoming more terrifyingly plausible all the time.

You would probably want to get rid of this dictator. And even if you aren’t in the government yourself, there are certainly things you could do to help with that: Join protests, hide political dissenters in your basement, publish refutations of the official propaganda on the Internet. But all of these things carry great risks. How do you know whether it’s worth the risk?

Well, a very important consideration in that reasoning is how many other people agree with you. In the extreme case where everyone but the dictator agrees with you, overthrowing him should be no problem. In the other extreme case where nobody agrees with you, attempting to overthrow him will inevitably result in being imprisoned and tortured as a political prisoner. Everywhere in between, your probability of success increases as the number of people who agree with you increases.

But how do you know how many people agree with you? You can’t just ask them—simply asking someone “Do you support the dictator?” is a dangerous thing to do in a totalitarian society. Simply by asking around, you could get yourself into a lot of trouble. And if people think you might be asking on behalf of the government, they’re always going to say they support the dictator whether or not they do.

If you believe that enough people would support you, you will take action against the dictator. But if you don’t believe that, you won’t take the chance. Now, consider the fact that many other people are in the same position: They too would only take action if they believed others would.

You are now in what’s called a coordination game. The best decision for you depends upon what everyone else decides. There are two equilibrium outcomes of this game: In one, you all keep your mouths shut and the dictator continues to oppress you. In the other, you all rise up together and overthrow the dictator. But if you take an action out of equilibrium, that could be very bad for you: If you rise up against the dictator without support, you’ll be imprisoned and tortured. If you support the dictator while others try to overthrow him, you might be held responsible for some of his crimes once the coup d’etat is complete.
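To see the structure of that game concretely, here is a minimal sketch in Python with purely hypothetical payoff numbers (they are not from the post); the only point is that both “everyone stays quiet” and “everyone rises up” come out as equilibria.

```python
# A minimal sketch of the two-player coordination game described above.
# Payoff numbers are illustrative only: a successful joint uprising is worth 10,
# rising up alone means prison (-50), and staying quiet is treated as safe (0).
# (The "held responsible after a successful coup" case is omitted for simplicity.)
from itertools import product

ACTIONS = ["rise up", "stay quiet"]

def payoff(mine, theirs):
    if mine == "rise up":
        return 10 if theirs == "rise up" else -50
    return 0

def is_nash(a1, a2):
    # Neither player can gain by unilaterally switching actions.
    best1 = all(payoff(a1, a2) >= payoff(alt, a2) for alt in ACTIONS)
    best2 = all(payoff(a2, a1) >= payoff(alt, a1) for alt in ACTIONS)
    return best1 and best2

for a1, a2 in product(ACTIONS, repeat=2):
    print(f"({a1}, {a2}): equilibrium = {is_nash(a1, a2)}")
# Both ("rise up", "rise up") and ("stay quiet", "stay quiet") are equilibria.
```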

And what about people who do support the dictator? They might still be willing to go along with overthrowing him, if they saw the writing on the wall. But if they think the dictator can still win, they will stand with him. So their beliefs, also, are vital in deciding whether to try to overthrow the dictator.

This results in a self-fulfilling norm. The dictator can be overthrown, if and only if enough people believe that the dictator can be overthrown.

There are much more mundane examples of self-fulfilling norms. Most of our traffic laws are actually self-fulfilling norms as much as they are real laws; enforcement is remarkably weak, particularly when you compare it to the rate of compliance. Most of us have driven faster than the speed limit or run a red light on occasion; but how often do you drive on the wrong side of the road, or stop on green and go on red? It is best to drive on the right side of the road if, and only if, everyone believes it is best to drive on the right side of the road. That’s a self-fulfilling norm.

Self-fulfilling norms are a greatly underappreciated force in global history. We often speak as though historical changes are made by “great men”—powerful individuals who effect change through their charisma or sheer force of will. But that power didn’t exist in a vacuum. For good (Martin Luther King) or for ill (Adolf Hitler), “great men” only have their power because they can amass followers. The reason they can amass followers is that a large number of people already agree with them—but are too afraid to speak up, because they are trapped in a self-fulfilling norm. The primary function of a great leader is to announce—at great personal risk—views that they believe others already hold. If indeed they are correct, then they can amass followers by winning the coordination game. If they are wrong, they may suffer terribly at the hands of a populace that hates them.

There is persuasion involved, but typically it’s not actually persuading people to believe that something is right; it’s persuading people to actually take action, convincing them that there is really enough chance of succeeding that it is worth the risk. Because of the self-fulfilling norm, this is a very all-or-nothing affair; do it right and you win, but do it wrong and your whole movement collapses. You essentially need to know exactly what battles you can win, so that you only fight those battles.

The good news is that information technology may actually make this easier. Honest assessment of people’s anonymous opinions is now easier than ever. Large-scale coordination of activity with relative security is now extremely easy, as we saw in the Arab Spring. This means that we are entering an era of rapid social change, where self-fulfilling norms will rise and fall at a rate never before seen.

In the best-case scenario, this means we get rid of all the bad norms and society becomes much better.

In the worst-case scenario, we may find out that most people actually believe in the bad norms, and this makes those norms all the more entrenched.

Only time will tell.

Fake skepticism

Jun 3 JDN 2458273

“You trust the mainstream media?” “Wake up, sheeple!” “Don’t listen to what so-called scientists say; do your own research!”

These kinds of statements have become quite ubiquitous lately (though perhaps the attitudes were always there, and we only began to hear them because of the Internet and social media), and are often used to defend the most extreme and bizarre conspiracy theories, from moon-landing denial to flat Earth. The amazing thing about these kinds of statements is that they can be used to defend literally anything, as long as you can find some source with less than 100% credibility that disagrees with it. (And what source has 100% credibility?)

And that, I think, should tell you something. An argument that can prove anything is an argument that proves nothing.

Reversed stupidity is not intelligence. The fact that the mainstream media, or the government, or the pharmaceutical industry, or the oil industry, or even gangsters, fanatics, or terrorists believes something does not make it less likely to be true.

In fact, the vast majority of beliefs held by basically everyone—including the most fanatical extremists—are true. I could list such consensus true beliefs for hours: “The sky is blue.” “2+2=4.” “Ice is colder than fire.”

Even if a belief is characteristic of a specifically evil or corrupt organization, that does not necessarily make it false (though it usually is evidence of falsehood in a Bayesian sense). If only terrible people believe X, then maybe you shouldn’t believe X. But if both good and bad people believe X, the fact that bad people believe X really shouldn’t matter to you.

People who use this kind of argument often present themselves as being “skeptics”. They imagine that they have seen through the veil of deception that blinds others.

In fact, quite the opposite is the case: This is fake skepticism. These people are not uniquely skeptical; they are uniquely credulous. If you think the Earth is flat because you don’t trust the mainstream scientific community, that means you do trust someone far less credible than the mainstream scientific community.

Real skepticism is difficult. It requires concerted effort and investigation, and typically takes years. To really seriously challenge the expert consensus in a field, you need to become an expert in that field. Ideally, you should get a graduate degree in that field and actually start publishing your heterodox views. Failing that, you should at least be spending hundreds or thousands of hours doing independent research. If you are unwilling or unable to do that, you are not qualified to assess the validity of the expert consensus.

This does not mean the expert consensus is always right—remarkably often, it isn’t. But it means you aren’t allowed to say it’s wrong, because you don’t know enough to assess that.

This is not elitism. This is not an argument from authority. This is a basic respect for the effort and knowledge that experts spend their lives acquiring.

People don’t like being told that they are not as smart as other people—even though, with any variation at all, that’s got to be true for a certain proportion of people. But I’m not even saying experts are smarter than you. I’m saying they know more about their particular field of expertise.

Do you walk up to construction workers on the street and critique how they lay concrete? When you step on an airplane, do you explain to the captain how to read an altimeter? When you hire a plumber, do you insist on using the snake yourself?

Probably not. And why not? Because you know these people have training; they do this for a living. Yeah, well, scientists do this for a living too—and our training is much longer. To be a plumber, you need a high school diploma and an apprenticeship that usually lasts about four years. To be a scientist, you need a PhD, which means four years of college plus an additional five or six years of graduate school.

To be clear, I’m not saying you should listen to experts speaking outside their expertise. Some of the most idiotic, arrogant things ever said by human beings have been said by physicists opining on biology or economists ranting about politics. Even within a field, some people have such narrow expertise that you can’t really trust them even on things that seem related—like macroeconomists with idiotic views on trade, or ecologists who clearly don’t understand evolution.

This is also why one of the great challenges of being a good interdisciplinary scientist is actually obtaining enough expertise in both fields you’re working in; it isn’t literally twice the work (since there is overlap—or you wouldn’t be doing it—and you do specialize in particular interdisciplinary subfields), but it’s definitely more work, and there are definitely a lot of people on each side of the fence who may never take you seriously no matter what you do.

How do you tell who to trust? This is why I keep coming back to the matter of expert consensus. The world is much too complicated for anyone, much less everyone, to understand it all. We must be willing to trust the work of others. The best way we have found to decide which work is trustworthy is by the norms and institutions of the scientific community itself. Since 97% of climatologists say that climate change is caused by humans, they’re probably right. Since 99% of biologists believe humans evolved by natural selection, that’s probably what happened. Since 87% of economists oppose tariffs, tariffs probably aren’t a good idea.

Can we be certain that the consensus is right? No. There is precious little in this universe that we can be certain about. But as in any game of chance, you need to play the best odds, and my money will always be on the scientific consensus.

What we could, what we should, and what we must

May 27 JDN 2458266

In one of the most famous essays in all of ethical philosophy, Peter Singer famously argued that we are morally obligated to give so much to charity that we would effectively reduce ourselves to poverty only slightly better than what our donations sought to prevent. His argument is a surprisingly convincing one, especially for such a radical proposition. Indeed, one of the core activities of the Effective Altruism movement has basically been finding ways to moderate Singer’s argument without giving up on its core principles, because it’s so obvious both that we ought to do much more to help people around the world and that there’s no way we’re ever going to do what that argument actually asks of us.

The most cost-effective charities in the world can save a human life for an average cost of under $4,000. The maneuver that Singer basically makes is quite simple: If you know that you could save someone’s life for $4,000, you have $4,000 to spend, and instead you spend that $4,000 on something else, aren’t you saying that whatever you did spend it on was more important than saving that person’s life? And is that really something you believe?

But if you think a little more carefully, it becomes clear that things are not quite so simple. You aren’t being paid $4,000 to kill someone, first of all. If you were willing to accept $4,000 as sufficient payment to commit a murder, you would be, quite simply, a monster. Implicitly the “infinite identical psychopath” of neoclassical rational agent models would be willing to do such a thing, but very few actual human beings—even actual psychopaths—are that callous.

Obviously, we must refrain from murdering people, even for amounts far in excess of $4,000. If you were offered the chance to murder someone for $4 billion, I can understand why you would be tempted to do such a thing. Think of what you could do with all that money! Not only would you and everyone in your immediate family be independently wealthy for life, you could donate billions of dollars to charity and save as much as a million lives. What’s one life for a million? Even then, I have a strong intuition that you shouldn’t commit this murder—but I have never been able to find a compelling moral argument for why. The best I’ve been able to come up with is a sort of Kantian notion: What if everyone did this?

Since the most plausible scenario is that the $4 billion comes from existing wealth, all those murders would simply be transferring wealth around, from unknown sources. If you stipulate where the wealth comes from, the dilemma can change quite a bit.

Suppose for example the $4 billion is confiscated from Bashar Al-Assad. That would be in itself a good thing, lessening the power of a genocidal tyrant. So we need to add that to the positive side of the ledger. It is probably worth killing one innocent person just to undermine Al-Assad’s power; indeed, the US Air Force certainly seems to think so, as they average more than one civilian fatality every day in airstrikes.

Now suppose the wealth was extracted by clever financial machinations that took just a few dollars out of every bank account in America. This would be in itself a bad thing, but perhaps not a terrible thing, especially since we’re planning on giving most of it to UNICEF. Those people should have given it anyway, right? This sounds like a pretty good movie, actually; a cyberpunk Robin Hood basically.

Next, suppose it was obtained by stealing the life savings of a million poor people in Africa. Now the method of obtaining the money is so terrible that it’s not clear that funneling it through UNICEF would compensate, even if you didn’t have to murder someone to get it.

Finally, suppose that the wealth is actually created anew—not printed money from the Federal Reserve, but some new technology that will increase the world’s wealth by billions of dollars yet requires the death of an innocent person to create. In this scenario, the murder has become something more like the inherent risk in human subjects biomedical research, and actually seems justifiable. And indeed, that fits with the Kantian answer, for if we all had the chance to kill one person in order to create something that would increase the wealth of the world by $4 billion, we could turn this planet into a post-scarcity utopia within a generation for fewer deaths than are currently caused by diabetes.

Anyway, my point here is that the detailed context of a decision actually matters a great deal. We can’t simply abstract away from everything else in the world and ask whether the money is worth the life.

When we consider this broader context with regard to the world’s most cost-effective charities, it becomes apparent that a small proportion of very dedicated people giving huge proportions of their income to charity is not the kind of world we want to see.

If I actually gave so much that I equalized my marginal utility of wealth to that of a child dying of malaria in Ghana, I would have to donate over 95% of my income—and well before that point, I would be homeless and impoverished. This actually seems penny-wise and pound-foolish even from the perspective of total altruism: If I stop paying rent, it gets a lot harder for me to finish my doctorate and become a development economist. And even if I never donated another dollar, the world would be much better off with one more good development economist than with even another $23,000 to the Against Malaria Foundation. Once you factor in the higher income I’ll have (and proportionately higher donations I’ll make), it’s obviously the wrong decision for me to give 95% of $25,000 today rather than 10% of $70,000 every year for the next 20 years after I graduate.
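A quick back-of-the-envelope comparison, using only the round figures quoted above (nothing more precise is intended):

```python
# Rough comparison of the two donation plans, with the round figures from the text.
give_95_now   = 0.95 * 25_000        # 95% of ~$25,000 today, once (and no career afterward)
give_10_later = 0.10 * 70_000 * 20   # 10% of ~$70,000 every year for 20 years
print(f"${give_95_now:,.0f} once  vs  ${give_10_later:,.0f} over 20 years")
# -> $23,750 once  vs  $140,000 over 20 years
```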

But the optimal amount for me to donate from that perspective is whatever the maximum would be that I could give without jeopardizing my education and career prospects. This is almost certainly more than I am presently giving. Exactly how much more is actually not all that apparent: It’s not enough to say that I need to be able to pay rent, eat three meals a day, and own a laptop that’s good enough for programming and statistical analysis. There’s also a certain amount that I need for leisure, to keep myself at optimal cognitive functioning for the next several years. Do I need that specific video game, that specific movie? Surely not—but if I go the next ten years without ever watching another movie or playing another video game, I’m probably going to be in trouble psychologically. But what exactly is the minimum amount to keep me functioning well? And how much should I be willing to spend attending conferences? Those can be important career-building activities, but they can also be expensive wastes of time.

Singer acts as though jeopardizing your career prospects is no big deal, but this is clearly wrong: The harm isn’t just to your own well-being, but also to your productivity and earning power that could have allowed you to donate more later. You are a human capital asset, and you are right to invest in yourself. Exactly how much you should invest in yourself is a much harder question.
Such calculations are extremely difficult to do. There are all sorts of variables I simply don’t know, and don’t have any clear way of finding out. It’s not a good sign for an ethical theory when even someone with years of education and expertise on specifically that topic still can’t figure out the answer. Ethics is supposed to be something we can apply to everyone.

So I think it’s most helpful to think in those terms: What could we apply to everyone? What standard of donation would be high enough if we could get everyone on board?

World poverty is rapidly declining. The direct poverty gap at the UN poverty line of $1.90 per day is now only $80 billion. Realistically, we couldn’t simply close that gap precisely (there would also be all sorts of perverse incentives if we tried to do it that way). But the standard estimate that it would take about $300 billion per year in well-targeted spending to eliminate world hunger is looking very good.

How much would each person, just those in the middle class or above within the US or the EU, have to give in order to raise this much?
89% of US income is received by the top 60% of households (who I would say are unambiguously “middle class or above”). Income inequality is not as extreme within the EU, so the proportion of income received by the top 60% seems to be more like 75%.

89% of US GDP plus 75% of EU GDP is all together about $29 trillion per year. This means that in order to raise $300 billion, each person in the middle class or above would need to donate just over one percent of their income.
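Here is that arithmetic spelled out, using the same rough figures:

```python
# Back-of-the-envelope check of the "just over one percent" figure.
middle_class_income = 29e12   # ~$29 trillion/year: 89% of US GDP plus 75% of EU GDP
funding_needed      = 300e9   # ~$300 billion/year in well-targeted spending
print(f"{funding_needed / middle_class_income:.2%}")   # -> 1.03%
```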

Not 95%. Not 25%. Not even 10%. Just 1%. That would be enough.

Of course, more is generally better—at least until you start jeopardizing your career prospects. So by all means, give 2% or 5% or even 10%. But I really don’t think it’s helpful to make people feel guilty about not giving 95% when all we really needed was for everyone to give 1%.

There is an important difference between what we could do, what we should do, and what we must do.

What we must do are moral obligations so strong they are essentially inviolable: We must not murder people. There may be extreme circumstances where exceptions can be made (such as collateral damage in war), and we can always come up with hypothetical scenarios that would justify almost anything, but for the vast majority of people the vast majority of time, these ethical rules are absolutely binding.

What we should do are moral obligations that are strong enough to be marks against your character if you break them, but not so absolutely binding that you have to be a monster not to follow them. This is where I put donating at least 1% of your income. (This is also where I put being vegetarian, but perhaps that is a topic for another time.) You really ought to do it, and you are doing something wrongful if you don’t—but most people don’t, and you are not a terrible person if you don’t.

This latter category is in part socially constructed, based on the norms people actually follow. Today, slavery is obviously a grave crime, and to be a human trafficker who participates in it you must be a psychopath. But two hundred years ago, things were somewhat different: Slavery was still wrong, yes, but it was quite possible to be an ordinary person who was generally an upstanding citizen in most respects and yet still own slaves. I would still condemn people who owned slaves back then, but not nearly as forcefully as I would condemn someone who owned slaves today. Two hundred years from now, perhaps vegetarianism will move up a category: The norm will be that everyone eats only plants, and someone who went out of their way to kill and eat a pig would have to be a psychopath. Eating meat is already wrong today—but it will be more wrong in the future. I’d say the same about donating 1% of your income, but actually I’m hoping that by two hundred years from now there will be no more poverty left to eradicate, and donation will no longer be necessary.

Finally, there is what we could do—supererogatory, even heroic actions of self-sacrifice that would make the world a better place, but cannot be reasonably expected of us. This is where donating 95% or even 25% of your income would fall. Yes, absolutely, that would help more people than donating 1%; but you don’t owe the world that much. It’s not wrong for you to contribute less than this. You don’t need to feel guilty for not giving this much.

But I do want to make you feel guilty if you don’t give at least 1%. Don’t tell me you can’t. You can. If your income is $30,000 per year, that’s $300 per year. If you needed that much for a car repair, or dental work, or fixing your roof, you’d find a way to come up with it. No one in the First World middle class is that liquidity-constrained. It is true that half of Americans say they couldn’t come up with $400 in an emergency, but I frankly don’t believe it. (I believe it for the bottom 25% or so, who are actually poor; but not half of Americans.) If you have even one credit card that’s not maxed out, you can do this—and frankly even if a card is maxed out, you can probably call them and get them to raise your limit. There is something you could cut out of your spending that would allow you to get back 1% of your annual income. I don’t know what it is, necessarily: Restaurants? Entertainment? Clothes? But I’m not asking you to give a third of your income—I’m asking you to give one penny out of every dollar.

I give considerably more than that; my current donation target is 8% and I’m planning on raising it to 10% or more once I get a high-paying job. I live on a grad student salary which is less than the median personal income in the US. So I know it can be done. But I am very intentionally not asking you to give this much; that would be above and beyond the call of duty. I’m only asking you to give 1%.

The vector geometry of value change

Post 239: May 20 JDN 2458259

This post is one of those where I’m trying to sort out my own thoughts on an ongoing research project, so it’s going to be a bit more theoretical than most, but I’ll try to spare you the mathematical details.

People often change their minds about things; that should be obvious enough. (Maybe it’s not as obvious as it might be, as the brain tends to erase its prior beliefs as wastes of data storage space.)

Most of the ways we change our minds are fairly minor: We get corrected about Napoleon’s birthdate, or learn that George Washington never actually chopped down any cherry trees, or look up the actual weight of an average African elephant and are surprised.

Sometimes we change our minds in larger ways: We realize that global poverty and violence are actually declining, when we thought they were getting worse; or we learn that climate change is actually even more dangerous than we thought.

But occasionally, we change our minds in an even more fundamental way: We actually change what we care about. We convert to a new religion, or change political parties, or go to college, or just read some very compelling philosophy books, and come out of it with a whole new value system.

Often we don’t anticipate that our values are going to change. That is important and interesting in its own right, but I’m going to set it aside for now, and look at a different question: What about the cases where we know our values are going to change?
Can it ever be rational for someone to choose to adopt a new value system?

Yes, it can—and I can put quite tight constraints on precisely when.

Here’s the part where I hand-wave the math, but imagine for a moment there are only two goods in the world that anyone would care about. (This is obviously vastly oversimplified, but it’s easier to think in two dimensions to make the argument, and it generalizes to n dimensions easily from there.) Maybe you choose a job caring only about money and integrity, or design policy caring only about security and prosperity, or choose your diet caring only about health and deliciousness.

I can then represent your current state as a vector, a two dimensional object with a length and a direction. The length describes how happy you are with your current arrangement. The direction describes your values—the direction of the vector characterizes the trade-off in your mind of how much you care about each of the two goods. If your vector is pointed almost entirely parallel with health, you don’t much care about deliciousness. If it’s pointed mostly at integrity, money isn’t that important to you.

This diagram shows your current state as a green vector.

[Figure: vector1]

Now suppose you have the option of taking some action that will change your value system. If that’s all it would do and you know that, you wouldn’t accept it. You will be no better off, and your value system will be different, which is bad from your current perspective. So here, you would not choose to move to the red vector:

[Figure: vector2]

But suppose that the action would change your value system, and make you better off. Now the red vector is longer than the green vector. Should you choose the action?

[Figure: vector3]

It’s not obvious, right? From the perspective of your new self, you’ll definitely be better off, and that seems good. But your values will change, and maybe you’ll start caring about the wrong things.

I realized that the right question to ask is whether you’ll be better off from your current perspective. If you and your future self both agree that this is the best course of action, then you should take it.

The really cool part is that (hand-waving the math again) it’s possible to work this out as a projection of the new vector onto the old vector. A large change in values will be reflected as a large angle between the two vectors; to compensate for that you need a large change in length, reflecting a greater improvement in well-being.

If the projection of the new vector onto the old vector is longer than the old vector itself, you should accept the value change.

[Figure: vector4]
If the projection of the new vector onto the old vector is shorter than the old vector, you should not accept the value change.

[Figure: vector5]

This captures the trade-off between increased well-being and changing values in a single number. It fits the simple intuitions that being better off is good, and changing values more is bad—but more importantly, it gives us a way of directly comparing the two on the same scale.
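To make the criterion concrete, here is a minimal sketch in Python; the two dimensions and the example numbers are hypothetical, standing in for the (money, integrity) picture above.

```python
# A minimal sketch of the projection criterion described above, in two dimensions.
import numpy as np

def accept_value_change(old, new):
    # Accept the change iff the projection of `new` onto `old` is longer than `old` itself,
    # i.e. dot(new, old) / |old| > |old|, equivalently dot(new, old) > |old|^2.
    old = np.asarray(old, dtype=float)
    new = np.asarray(new, dtype=float)
    return np.dot(new, old) > np.dot(old, old)

# Hypothetical examples; components are (money, integrity).
print(accept_value_change([3, 1], [3, 3]))   # True: a big enough gain to justify the shift
print(accept_value_change([3, 1], [1, 3]))   # False: too large a shift for too little gain
```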

This is a very simple model with some very profound implications. One is that certain value changes are impossible in a single step: If a value change would require you to take on values that are completely orthogonal or diametrically opposed to your own, no increase in well-being will be sufficient.

It doesn’t matter how long I make this red vector, the projection onto the green vector will always be zero. If all you care about is money, no amount of integrity will entice you to change.

[Figure: vector6]

But a value change that was impossible in a single step can be feasible, even easy, if conducted over a series of smaller steps. Here I’ve taken that same impossible transition, and broken it into five steps that now make it feasible. By offering a bit more money for more integrity, I’ve gradually weaned you into valuing integrity above all else:

[Figure: vector7]
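Here is a rough numerical sketch of that same five-step maneuver, again with hypothetical numbers: a 90-degree change in values, impossible in one jump, is accepted step by step as long as each step also brings a modest gain in well-being.

```python
# Sketch: break a 90-degree value change into five 18-degree steps,
# each paired with a modest (hypothetical) gain in well-being.
import numpy as np

def accepts(old, new):
    return np.dot(new, old) > np.dot(old, old)   # the same projection criterion as above

step_angle = np.deg2rad(90 / 5)
growth = 1.06                        # per-step gain in length; must exceed 1/cos(18 deg) ~ 1.05
v = np.array([1.0, 0.0])             # start out caring only about money

rotation = np.array([[np.cos(step_angle), -np.sin(step_angle)],
                     [np.sin(step_angle),  np.cos(step_angle)]])
for step in range(1, 6):
    v_next = growth * rotation @ v
    print(f"step {step}: accepted = {accepts(v, v_next)}")
    v = v_next
# Every step is accepted, yet after five steps the vector points (essentially) at integrity alone.
```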

This provides a formal justification for the intuitive sense many people have of a “moral slippery slope” (commonly regarded as a fallacy). If you make small concessions to an argument that end up changing your value system slightly, and continue to do so many times, you could end up with radically different beliefs at the end, even diametrically opposed to your original beliefs. Each step was rational at the time you took it, but because you changed yourself in the process, you ended up somewhere you would not have wanted to go.

This is not necessarily a bad thing, however. If the reason you made each of those changes was actually a good one—you were provided with compelling evidence and arguments to justify the new beliefs—then the whole transition does turn out to be a good thing, even though you wouldn’t have thought so at the time.

This also allows us to formalize the notion of “inferential distance”: the inferential distance is the number of steps of value change required to make someone understand your point of view. It’s a function of both the difference in values and the difference in well-being between their point of view and yours.

Another key insight is that if you want to persuade someone to change their mind, you need to do it slowly, with small changes repeated many times, and you need to benefit them at each step. You can only persuade someone to change their mind if, at each step, they end up better off than they were before.

Is this an endorsement of wishful thinking? Not if we define “well-being” in the proper way. It can make me better off in a deep sense to realize that my wishful thinking was incorrect, so that I realize what must be done to actually get the good things I thought I already had.  It’s not necessary to appeal to material benefits; it’s necessary to appeal to current values.

But it does support the notion that you can’t persuade someone by belittling them. You won’t convince people to join your side by telling them that they are defective and bad and should feel guilty for being who they are.

If that seems obvious, well, maybe you should talk to some of the people who are constantly pushing “White privilege”. If you focused on how reducing racism would make people—even White people—better off, you’d probably be more effective. In some cases there would be direct material benefits: Racism creates inefficiency in markets that reduces overall output. But in other cases, sure, maybe there’s no direct benefit for the person you’re talking to; but you can talk about other sorts of benefits, like what sort of world they want to live in, or how proud they would feel to be part of the fight for justice. You can say all you want that they shouldn’t need this kind of persuasion, they should already believe and do the right thing—and you might even be right about that, in some ultimate sense—but do you want to change their minds or not? If you actually want to change their minds, you need to meet them where they are, make small changes, and offer benefits at each step.

If you don’t, you’ll just keep on projecting a vector orthogonally, and you’ll keep ending up with zero.

Downsides of rent control

May 13 JDN 2458252

One of the largest ideological divides between economists and the rest of the population concerns rent control.

Rent control is very popular among the general population, especially in California—with support hovering around 60% in Orange County, San Diego County, and across California in general. About 60% of people in the UK and over 50% in Ontario, Canada also support rent control.

Meanwhile, economists overwhelmingly oppose rent control: When evaluating the statement “A ceiling on rents reduces the quantity and quality of housing available.”, over 76% of economists agreed, and 16% agreed with qualifications. For the record, I would be an “agree with qualifications” as well (as they say, there are few one-handed economists).

There is evidence of some benefits of rent control, at least for the small number of people who can actually manage to stay in rent-controlled units. People who live in rent-controlled units are about 15% more likely to stay where they are, even in places as expensive as San Francisco, which could be considered a good thing (though I’m not convinced it always is; mobility is one of the key forces driving the dynamism of the US economy).

But there are winners and losers. Landlords whose properties are rent-controlled decreased their supply of housing by an average of 15%, via a combination of converting them to condos, removing them from the market, or demolishing the buildings outright. As a result, rent control increases average rent in a city by an average of 5%. One of the most effective ways to get out of rent control is to remove a building from the market entirely; this allows you to evict all of your tenants with very little notice, and is responsible for thousands of tenants being evicted every year in Los Angeles.

Rent control disincentivizes both new housing construction and the proper maintenance of existing housing. The quality of rent-controlled homes is systematically lower than the quality of other homes.

The benefits of rent control mainly fall upon the upper-middle class, not the poor. Rent control can make an area more racially diverse—but it benefits middle-class members of racial minorities, not poor members. Most of the benefits of rent control go to older families who have lived in a city for a long time—which makes them a transfer of wealth away from young people.

Cities such as Chicago without rent control systematically have lower rents, not higher; partly this is a cause, rather than an effect, as tenants are less likely to panic and demand rent control when rents are not high. But it’s also an effect, as rent control holds down the price in part of the market but ends up driving it up in the rest. Over 40% of San Francisco’s apartments are rent-controlled, and they have the highest rents in the world.

Rent control also contributes to the tendency toward building high-end luxury apartments; if you know that you will never be able to raise the rent on your existing buildings, and may end up being stuck with whatever rent you charge the first year on your new buildings, you have a strong reason to want to charge as much as possible the first year you build new apartments. Rent control also creates subtler distortions in the size and location of apartment construction. The effects of rent control even spill over into other housing markets, such as owner-occupied homes and mobile homes.
Because it locks people into place and reduces the construction of new homes near city centers, rent control increases commute times and carbon emissions. This is probably something we should especially point out to people in California, as the two things Californians hate most are environmental degradation and traffic congestion. (Then again, the third is high rent.) California is good at avoiding the first one—our GDP/carbon emission ratio is near the best in the US. The other two? Not so much.

Of course, simply removing rent control would not immediately solve the housing shortage; while it would probably have benefits in the long run, during the transition period a lot of people currently protected by rent control would lose their homes. Even in the long run, it would probably not be enough to actually make rent affordable in the largest coastal cities.

But it’s vital not to confuse “lower rent” with “rent control”; there are much, much better ways to reduce rent prices than simply enforcing arbitrary caps on them.

We have learned not to use price controls in other markets, but for some reason not in housing. Think about the gasoline market, for example. High gas prices are very politically unpopular (though frankly I never quite understood why; it’s a tiny fraction of consumption expenditure, and if we ever want to make a dent in our carbon emissions we need to make our gas prices much higher), but imagine how ridiculous it would seem for a politician to propose simply imposing an arbitrary cap that says you aren’t allowed to sell gasoline for more than $2.50 per gallon in a particular city. The obvious outcome would be for most gas stations in that city to immediately close, and everyone to end up buying their gas at the new gas stations that spring up just outside the city limits charging $4.00 per gallon. This is basically what happens in the housing market: Rent-controlled apartments are taken off the market, and the new housing that is built ends up even more expensive.

In a future post, I’ll discuss things we can do instead of rent control that would reliably make housing more affordable. Most of these would involve additional government spending; but there are two things I’d like to say about that. First, we are already spending this money, we just don’t see it, because it comes in the form of inefficiencies and market distortions instead of a direct expenditure. Second, do we really care about making housing affordable, or not? If we really care, we should be willing to spend money on it. If we aren’t willing to spend money on it, then we must not really care.

Sympathy for the incel

Post 237: May 6 JDN 2458245

If you’ve been following the news surrounding the recent terrorist attack in Toronto, you may have encountered the word “incel” for the first time via articles in NPR, Vox, USA Today, or other sources linking the attack to the incel community.

If this was indeed your first exposure to the concept of “incel”, I think you are getting a distorted picture of their community, which is actually a surprisingly large Internet subculture. Finding out about incels this way would be like finding out about Islam from 9/11. (Actually, I’m fairly sure a lot of Americans did learn that way, which is awful.) The incel community is a remarkably large one—hundreds of thousands of members at least, and quite likely millions.

While a large proportion subscribe to a toxic and misogynistic ideology, a similarly large proportion do not; while the ideology has contributed to terrorism and other violence, the vast majority of members of the community are not violent.

Note that the latter sentence is also entirely true of Islam. So if you are sympathetic toward Muslims and want to protect them from abuse and misunderstanding, I maintain that you should want to do the same for incels, and for basically the same reasons.

I want to make something abundantly clear at the outset:

This attack was terrorism. I am in no way excusing or defending the use of terrorism. Once someone crosses the line and starts attacking random civilians, I don’t care what their grievances were; the best response to their behavior involves snipers on rooftops. I frankly don’t even understand the risks police are willing to take in order to capture these people alive—especially considering how trigger-happy they are when it comes to random Black men. If you start shooting (or bombing, or crashing vehicles into) civilians, the police should shoot you. It’s that simple.

I do not want to evoke sympathy for incel-motivated terrorism. I want to evoke sympathy for the hundreds of thousands of incels who would never support terrorism and are now being publicly demonized.

I also want to make it clear that I am not throwing in my lot with the likes of Robin Hanson (who is also well-known as a behavioral economist, blogger, science fiction fan, Less Wrong devotee, and techno-utopian—so I feel a particular need to clarify my differences with him) when he defends something he calls in purposefully cold language “redistribution of sex” (that one is from right after the attack, but he has done this before, in previous blog posts).

Hanson has drunk Robert Nozick‘s Kool-Aid, and thinks that redistribution of wealth via taxation is morally equivalent to theft or even slavery. He is fond of making comparisons between redistribution of wealth and other forms of “redistribution” that obviously would be tantamount to theft and slavery, and asking “What’s the difference?” when in fact the difference is glaringly obvious to everyone but him. He is also fond of saying that “inequality between households within a nation” is a small portion of inequality, and then wondering aloud why we make such a big deal out of it. The answer here is also quite obvious: First of all, it’s not that small a portion of inequality—it’s a third of global income inequality by most measures, it’s increasing while across-nation inequality is decreasing, and the absolute magnitude of within-nation inequality is staggering: there are households with incomes over one million times that of other households within the same nation. (Where are the people who have had sex one hundred billion times, let alone the ones who had sex forty billion times in one year? Because here’s the man who has one hundred billion dollars and made almost $40 billion in one year.) Second, within-nation inequality is extremely simple to fix by public policy; just change a few numbers in the tax code—in fact, just change them back to what they were in the 1950s. Cross-national inequality is much more complicated (though I believe it can be solved, eventually) and some forms of what he’s calling “inequality” (like “inequality across periods of human history” or “inequality of innate talent”) don’t seem amenable to correction under any conceivable circumstances.

Hanson has lots of just-so stories about the evolutionary psychology of why “we don’t care” about cross-national inequality (gee, I thought maybe devoting my career to it was a pretty good signal otherwise?) or inequality in access to sex (which is thousands of times smaller than income inequality), but no clear policy suggestions for how these other forms of inequality could be in any way addressed. This whole idea of “redistribution of sex”; what does that mean, exactly? Legalized or even subsidized prostitution or sex robots would be one thing; I can see pros and cons there at least. But without clarification, it sounds like he’s endorsing the most extremist misogynist incels who think that women should be rightfully compelled to have sex with sexually frustrated men—which would be quite literally state-sanctioned rape. I think really Hanson isn’t all that interested in incels, and just wants to make fun of silly “socialists” who would dare suppose that maybe Jeff Bezos doesn’t need his 120 billion dollars as badly as some of the starving children in Africa could benefit from them, or that maybe having a tax system similar to Sweden or Denmark (which consistently rate as some of the happiest, most prosperous nations on Earth) sounds like a good idea. He takes things that are obviously much worse than redistributive taxation, and compares them to redistributive taxation to make taxation seem worse than it is.

No, I do not support “redistribution of sex”. I might be able to support legalized prostitution, but I’m concerned about the empirical data suggesting that legalized prostitution correlates with increased human sex trafficking. I think I would also support legalized sex robots, but for reasons that will become clear shortly, I strongly suspect they would do little to solve the problem, even if they weren’t ridiculously expensive. Beyond that, I’ve said enough about Hanson; Lawyers, Guns & Money nicely skewers Hanson’s argument, so I’ll not bother with it any further.
Instead, I want to talk about the average incel, one of hundreds of thousands if not millions of men who feels cast aside by society because he is socially awkward and can’t get laid. I want to talk about him because I used to be very much like him (though I never specifically identified as “incel”), and I want to talk about him because I think that he is genuinely suffering and needs help.

There is a moderate wing of the incel community, just as there is a moderate wing of the Muslim community. The moderate wing of incels is represented by sites like Love-Shy.com that try to reach out to people (mostly, but not exclusively young heterosexual men) who are lonely and sexually frustrated and often suffering from social anxiety or other mood disorders. Though they can be casually sexist (particularly when it comes to stereotypes about differences between men and women), they are not virulently misogynistic and they would never support violence. Moreover, they provide a valuable service in offering social support to men who otherwise feel ostracized by society. I disagree with a lot of things these groups say, but they are providing valuable benefits to their members and aren’t hurting anyone else. Taking out your anger against incel terrorists on Love-Shy.com is like painting graffiti on a mosque in response to 9/11 (which, of course, people did).

To some extent, I can even understand the more misogynistic (but still non-violent) wings of the incel community. I don’t want to defend their misogyny, but I can sort of understand where it might come from.

You see, men in our society (and most societies) are taught from a very young age that their moral worth as human beings is based primarily on one thing in particular: Sexual prowess. If you are having a lot of sex with a lot of women, you are a good and worthy man. If you are not, you are broken and defective. (Donald Trump has clearly internalized this narrative quite thoroughly—as have a shockingly large number of his supporters.)

This narrative is so strong and so universal, in fact, that I wouldn’t be surprised if it has a genetic component. It actually makes sense as a matter of evolutionary psychology that males would evolve to think this way; in an evolutionary sense it’s true that a male’s ultimate worth—that is, fitness, the one thing natural selection cares about—is defined by mating with a maximal number of females. But even if it has a genetic component, there is enough variation in this belief that I am confident that social norms can exaggerate or suppress it. One thing I can’t stand about popular accounts of evolutionary psychology is how they leap from “plausible evolutionary account” to “obviously genetic trait” all the way to “therefore impossible to change or compensate for”. My myopia and astigmatism are absolutely genetic; we can point to some of the specific genes. And yet my glasses compensate for them perfectly, and for a bit more money I could instead get LASIK surgery that would correct them permanently. Never think for a moment that “genetic” implies “immutable”.

Because of this powerful narrative, men who are sexually frustrated get treated like garbage by other men and even by women. They feel ostracized and degraded. Often, they even feel worthless. If your worth as a human being is defined by how many women you have sex with, and you aren’t having sex with any, it follows that your worth is zero. No wonder, then, that so many are overcome with despair.
The incel community provides an opportunity to escape that despair. If you are told that you are not defective, but instead there is something wrong with society that keeps you down, you no longer have to feel worthless. It’s not that you don’t deserve to have sex, it’s that you’ve been denied what you deserve. When the only other narrative you’ve been given is that you are broken and worthless, I can see why “society is screwing you over” is an appealing counter-narrative. Indeed, it’s not even that far off from the truth.

The moderate wing of the incel community even offers some constructive solutions: They offer support to help men improve themselves, overcome their own social anxiety, and ultimately build fulfilling sexual relationships.

The extremist wing gets this all wrong: Instead of blaming the narrative that sex equals worth, they blame women—often, all women—for somehow colluding to deny them access to the sex they so justly deserve. They often link themselves to the “pick-up artist” community, whose members try to manipulate women into having sex.

And then in the most extreme cases, they may even decide to turn their anger into violence.

But really I don’t think most of these men actually want sex at all, which is part of why I don’t think sex robots would be particularly effective.

Rather, to clarify: They want sex, as most of us do—but that’s not what they need. A simple lack of sex can be compensated for reasonably well by pornography and masturbation. (Let me state this outright: Pornography and masturbation are fundamental human rights. Porn is free speech, and masturbation is part of the fundamental right of bodily autonomy. The fact that increased access to porn reduces the incidence of sexual assault is nice, but secondary; porn is freedom.) Obviously it would be more satisfying to have a real sexual relationship, but with such substitutes available, a mere lack of sex does not cause suffering.

The need that these men are feeling is companionship. It is love. It is understanding. These are things that can’t be replaced, even partially, by sex robots or Internet porn.

Why do they conflate the two? Again, because society has taught them to do so. This one is clearly cultural, as it varies quite considerably between nations; it’s not nearly as bad in Southern Europe, for example.
In American society (and many, but not all others), men are taught three things: First, expression of any emotion except for possibly anger, and especially expression of affection, is inherently erotic. Second, emotional vulnerability jeopardizes masculinity. Third, erotic expression must be only between men and women in a heterosexual relationship.

In principle, it might be enough to simply drop the third proposition: This is essentially what happens in the LGBT community. Gay men still generally suffer from the suspicion that all emotional expression is erotic, but have long since abandoned their fears of expressing eroticism with other men. Often they’ve also given up on trying to sustain norms of masculinity. So gay men can hug each other and cry in front of each other, for example, without breaking norms within the LGBT community; the sexual subtext is often still there, but it’s considered unproblematic. (Gay men typically aren’t even as concerned about sexual infidelity as straight men; over 40% of gay couples are to some degree polyamorous, compared to 5% of straight couples.) It may also be seen as a loss of masculinity, but this too is considered unproblematic in most cases. There is a notable exception: the substantial segment of gay men who pride themselves on hypermasculinity (generally abbreviated “masc”); and indeed, within that subcommunity you often see a lot of the same toxic masculinity norms that are found in society at large.

That is also what happened in Classical Greece and Rome, I think: These societies were certainly virulently misogynistic in their own way, but their willingness to accept erotic expression between men opened them to accepting certain kinds of emotional expression between men as well, as long as it was not perceived as a threat to masculinity per se.

But when all three of those norms are in place, men find that the only emotional outlet they are even permitted to have while remaining within socially normative masculinity is a woman who is a romantic partner. Family members are allowed certain minimal types of affection—you can hug your mom, as long as you don’t seem too eager—but there is only one person in the world that you are allowed to express genuine emotional vulnerability toward, and that is your girlfriend. If you don’t have one? Get one. If you can’t get one? Well, sorry, pal, you’re just out of luck. Deal with it, or you’re not a real man.

But really what I’d like to get rid of is the first two propositions: Emotional expression should not be considered inherently sexual. Expressing emotional vulnerability should not be taken as a surrender of your masculinity—and if I really had my druthers, the whole idea of “masculinity” would disappear or become irrelevant. This is the way that society is actually holding incels down: Not by denying them access to sex—the right to refuse sex is also a fundamental human right—but by denying them access to emotional expression and treating them like garbage because they are unable to have sex.

My sense is that what most incels are really feeling is not a dearth of sexual expression; it’s a dearth of emotional expression. But precisely because social norms have forced them into getting the two from the same place, they have conflated them. Further evidence in favor of this proposition? A substantial proportion of men who hire prostitutes spend a lot of the time they’ve paid for simply talking.

I think what most of these men really need is psychotherapy. I’m not saying that to disparage them; I myself am a regular consumer of psychotherapy, which is one of the most cost-effective medical interventions known to humanity. I feel a need to clarify this because there is so much stigma on mental illness that saying someone is mentally ill and needs therapy can be taken as an insult; but I literally mean that a lot of these men are mentally ill and need therapy. Many of them exhibit significant signs of social anxiety, depression, or bipolar disorder.

Even for those who aren’t outright mentally ill, psychotherapy might be able to help them sort out some of the toxic narratives they’ve been fed by society and get them to think a little more carefully about what it means to be a good man, and whether the “man” part is even so important. A good therapist could tease out the fabric of their tangled cognition and point out that when they say they want sex, it really sounds like they want self-worth, and when they say they want a girlfriend, it really sounds like they want someone to talk to.

Such a solution won’t work on everyone, and it won’t work overnight on anyone. But the incel community did not emerge from a vacuum; it was catalyzed by a great deal of genuine suffering. Remove some of that suffering, and we might just undermine the most dangerous parts of the incel community and prevent at least some future violence.

No one owes sex to anyone. But maybe we do, as a society, owe these men a little more sympathy?