What we could, what we should, and what we must

May 27 JDN 2458266

In one of the most famous essays in all of ethical philosophy, Peter Singer argued that we are morally obligated to give so much to charity that we would effectively reduce ourselves to a level of poverty only slightly better than the one our donations sought to prevent. His argument is a surprisingly convincing one, especially for such a radical proposition. Indeed, one of the core activities of the Effective Altruism movement has basically been finding ways to moderate Singer’s argument without giving up on its core principles, because it’s so obvious both that we ought to do much more to help people around the world and that there’s no way we’re ever going to do what that argument actually asks of us.

The most cost-effective charities in the world can save a human life for an average cost of under $4,000. The maneuver Singer makes is basically quite simple: If you know that you could save someone’s life for $4,000, you have $4,000 to spend, and instead you spend that $4,000 on something else, aren’t you saying that whatever you did spend it on was more important than saving that person’s life? And is that really something you believe?

But if you think a little more carefully, it becomes clear that things are not quite so simple. You aren’t being paid $4,000 to kill someone, first of all. If you were willing to accept $4,000 as sufficient payment to commit a murder, you would be, quite simply, a monster. Implicitly the “infinite identical psychopath” of neoclassical rational agent models would be willing to do such a thing, but very few actual human beings—even actual psychopaths—are that callous.

Obviously, we must refrain from murdering people, even for amounts far in excess of $4,000. If you were offered the chance to murder someone for $4 billion, I can understand why you would be tempted to do such a thing. Think of what you could do with all that money! Not only would you and everyone in your immediate family be independently wealthy for life, you could donate billions of dollars to charity and save as many as a million lives. What’s one life for a million? Even then, I have a strong intuition that you shouldn’t commit this murder—but I have never been able to find a compelling moral argument for why. The best I’ve been able to come up with is a sort of Kantian notion: What if everyone did this?

Since the most plausible scenario is that the $4 billion comes from existing wealth, all those murders would simply be transferring wealth around, from unknown sources. If you stipulate where the wealth comes from, the dilemma can change quite a bit.

Suppose for example the $4 billion is confiscated from Bashar Al-Assad. That would be in itself a good thing, lessening the power of a genocidal tyrant. So we need to add that to the positive side of the ledger. It is probably worth killing one innocent person just to undermine Al-Assad’s power; indeed, the US Air Force certainly seems to think so, as they average more than one civilian fatality every day in airstrikes.

Now suppose the wealth was extracted by clever financial machinations that took just a few dollars out of every bank account in America. This would be in itself a bad thing, but perhaps not a terrible thing, especially since we’re planning on giving most of it to UNICEF. Those people should have given it anyway, right? This sounds like a pretty good movie, actually: basically a cyberpunk Robin Hood.

Next, suppose it was obtained by stealing the life savings of a million poor people in Africa. Now the method of obtaining the money is so terrible that it’s not clear that funneling it through UNICEF would compensate, even if you didn’t have to murder someone to get it.

Finally, suppose that the wealth is actually created anew—not printed money from the Federal Reserve, but some new technology that will increase the world’s wealth by billions of dollars yet requires the death of an innocent person to create. In this scenario, the murder has become something more like the inherent risk in human subjects biomedical research, and actually seems justifiable. And indeed, that fits with the Kantian answer, for if we all had the chance to kill one person in order to create something that would increase the wealth of the world by $4 billion, we could turn this planet into a post-scarcity utopia within a generation for fewer deaths than are currently caused by diabetes.

Anyway, my point here is that the detailed context of a decision actually matters a great deal. We can’t simply abstract away from everything else in the world and ask whether the money is worth the life.

When we consider this broader context with regard to the world’s most cost-effective charities, it becomes apparent that a small proportion of very dedicated people giving huge proportions of their income to charity is not the kind of world we want to see.

If I actually gave so much that I equalized my marginal utility of wealth to that of a child dying of malaria in Ghana, I would have to donate over 95% of my income—and well before that point, I would be homeless and impoverished. This actually seems penny-wise and pound-foolish even from the perspective of total altruism: If I stop paying rent, it gets a lot harder for me to finish my doctorate and become a development economist. And even if I never donated another dollar, the world would be much better off with one more good development economist than with even another $23,000 to the Against Malaria Foundation. Once you factor in the higher income I’ll have (and proportionately higher donations I’ll make), it’s obviously the wrong decision for me to give 95% of $25,000 today rather than 10% of $70,000 every year for the next 20 years after I graduate.
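If you want to check that arithmetic, here is a minimal sketch in Python; the income figures and the 20-year horizon are just the illustrative ones from the paragraph above, not forecasts:

```python
# Back-of-the-envelope comparison of the two giving strategies.
# Figures are the illustrative ones from the text, not predictions.

give_now = 0.95 * 25_000          # 95% of a grad-student income, once
give_later = 0.10 * 70_000 * 20   # 10% of a higher income, every year for 20 years

print(f"Give 95% now:          ${give_now:,.0f}")    # ~$23,750
print(f"Give 10% for 20 years: ${give_later:,.0f}")  # $140,000
print(f"Ratio: {give_later / give_now:.1f}x")        # ~5.9x more donated overall
```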

But the optimal amount for me to donate from that perspective is whatever the maximum would be that I could give without jeopardizing my education and career prospects. This is almost certainly more than I am presently giving. Exactly how much more is actually not all that apparent: It’s not enough to say that I need to be able to pay rent, eat three meals a day, and own a laptop that’s good enough for programming and statistical analysis. There’s also a certain amount that I need for leisure, to keep myself at optimal cognitive functioning for the next several years. Do I need that specific video game, that specific movie? Surely not—but if I go the next ten years without ever watching another movie or playing another video game, I’m probably going to be in trouble psychologically. But what exactly is the minimum amount to keep me functioning well? And how much should I be willing to spend attending conferences? Those can be important career-building activities, but they can also be expensive wastes of time.

Singer acts as though jeopardizing your career prospects is no big deal, but this is clearly wrong: The harm isn’t just to your own well-being, but also to your productivity and earning power that could have allowed you to donate more later. You are a human capital asset, and you are right to invest in yourself. Exactly how much you should invest in yourself is a much harder question.

Such calculations are extremely difficult to do. There are all sorts of variables I simply don’t know, and don’t have any clear way of finding out. It’s not a good sign for an ethical theory when even someone with years of education and expertise on specifically that topic still can’t figure out the answer. Ethics is supposed to be something we can apply to everyone.

So I think it’s most helpful to think in those terms: What could we apply to everyone? What standard of donation would be high enough if we could get everyone on board?

World poverty is rapidly declining. The direct poverty gap at the UN poverty line of $1.90 per day is now only $80 billion. Realistically, we couldn’t simply close that gap precisely (there would also be all sorts of perverse incentives if we tried to do it that way). But the standard estimate that it would take about $300 billion per year in well-targeted spending to eliminate world hunger is looking very good.

How much would each person—counting just those in the middle class or above within the US or the EU—have to give in order to raise this much?

89% of US income is received by the top 60% of households (whom I would say are unambiguously “middle class or above”). Income inequality is not as extreme within the EU, so the proportion of income received by the top 60% there seems to be more like 75%.

89% of US GDP plus 75% of EU GDP is all together about $29 trillion per year. This means that in order to raise $300 billion, each person in the middle class or above would need to donate just over one percent of their income.
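For transparency, here is a sketch of the arithmetic behind that figure; the GDP values are round 2018-era numbers I am assuming in order to reproduce the ~$29 trillion total, not precise national accounts:

```python
# Rough arithmetic behind the "just over one percent" figure.
# GDP values are round assumed figures, not precise national accounts.

us_share = 0.89 * 19e12   # ~89% of US GDP (assumed ~$19 trillion)
eu_share = 0.75 * 17e12   # ~75% of EU GDP (assumed ~$17 trillion)
total = us_share + eu_share            # ~$29.7 trillion per year

target = 300e9                         # $300 billion per year
print(f"Middle-class-and-above income: ${total / 1e12:.1f} trillion")
print(f"Required donation rate: {target / total:.2%}")  # ~1.01%
```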

Not 95%. Not 25%. Not even 10%. Just 1%. That would be enough.

Of course, more is generally better—at least until you start jeopardizing your career prospects. So by all means, give 2% or 5% or even 10%. But I really don’t think it’s helpful to make people feel guilty about not giving 95% when all we really needed was for everyone to give 1%.

There is an important difference between what we could do, what we should do, and what we must do.

What we must do are moral obligations so strong they are essentially inviolable: We must not murder people. There may be extreme circumstances where exceptions can be made (such as collateral damage in war), and we can always come up with hypothetical scenarios that would justify almost anything, but for the vast majority of people the vast majority of time, these ethical rules are absolutely binding.

What we should do are moral obligations that are strong enough to be marks against your character if you break them, but not so absolutely binding that you have to be a monster not to follow them. This is where I put donating at least 1% of your income. (This is also where I put being vegetarian, but perhaps that is a topic for another time.) You really ought to do it, and you are doing something wrongful if you don’t—but most people don’t, and you are not a terrible person if you don’t.

This latter category is in part socially constructed, based on the norms people actually follow. Today, slavery is obviously a grave crime, and to be a human trafficker who participates in it you must be a psychopath. But two hundred years ago, things were somewhat different: Slavery was still wrong, yes, but it was quite possible to be an ordinary person who was generally an upstanding citizen in most respects and yet still own slaves. I would still condemn people who owned slaves back then, but not nearly as forcefully as I would condemn someone who owned slaves today. Two hundred years from now, perhaps vegetarianism will move up a category: The norm will be that everyone eats only plants, and someone who went out of their way to kill and eat a pig would have to be a psychopath. Eating meat is already wrong today—but it will be more wrong in the future. I’d say the same about donating 1% of your income, but actually I’m hoping that by two hundred years from now there will be no more poverty left to eradicate, and donation will no longer be necessary.

Finally, there is what we could do—supererogatory, even heroic actions of self-sacrifice that would make the world a better place, but cannot be reasonably expected of us. This is where donating 95% or even 25% of your income would fall. Yes, absolutely, that would help more people than donating 1%; but you don’t owe the world that much. It’s not wrong for you to contribute less than this. You don’t need to feel guilty for not giving this much.

But I do want to make you feel guilty if you don’t give at least 1%. Don’t tell me you can’t. You can. If your income is $30,000 per year, that’s $300 per year. If you needed that much for a car repair, or dental work, or fixing your roof, you’d find a way to come up with it. No one in the First World middle class is that liquidity-constrained. It is true that half of Americans say they couldn’t come up with $400 in an emergency, but I frankly don’t believe it. (I believe it for the bottom 25% or so, who are actually poor; but not half of Americans.) If you have even one credit card that’s not maxed out, you can do this—and frankly even if a card is maxed out, you can probably call them and get them to raise your limit. There is something you could cut out of your spending that would allow you to get back 1% of your annual income. I don’t know what it is, necessarily: Restaurants? Entertainment? Clothes? But I’m not asking you to give a third of your income—I’m asking you to give one penny out of every dollar.

I give considerably more than that; my current donation target is 8%, and I’m planning on raising it to 10% or more once I get a high-paying job. I live on a grad student salary, which is less than the median personal income in the US. So I know it can be done. But I am very intentionally not asking you to give this much; that would be above and beyond the call of duty. I’m only asking you to give 1%.

The vector geometry of value change

Post 239: May 20 JDN 2458259

This post is one of those where I’m trying to sort out my own thoughts on an ongoing research project, so it’s going to be a bit more theoretical than most, but I’ll try to spare you the mathematical details.

People often change their minds about things; that should be obvious enough. (Maybe it’s not as obvious as it might be, as the brain tends to erase its prior beliefs as wastes of data storage space.)

Most of the ways we change our minds are fairly minor: We get corrected about Napoleon’s birthdate, or learn that George Washington never actually chopped down any cherry trees, or look up the actual weight of an average African elephant and are surprised.

Sometimes we change our minds in larger ways: We realize that global poverty and violence are actually declining, when we thought they were getting worse; or we learn that climate change is actually even more dangerous than we thought.

But occasionally, we change our minds in an even more fundamental way: We actually change what we care about. We convert to a new religion, or change political parties, or go to college, or just read some very compelling philosophy books, and come out of it with a whole new value system.

Often we don’t anticipate that our values are going to change. That is important and interesting in its own right, but I’m going to set it aside for now, and look at a different question: What about the cases where we know our values are going to change?

Can it ever be rational for someone to choose to adopt a new value system?

Yes, it can—and I can put quite tight constraints on precisely when.

Here’s the part where I hand-wave the math, but imagine for a moment there are only two goods in the world that anyone would care about. (This is obviously vastly oversimplified, but it’s easier to think in two dimensions to make the argument, and it generalizes to n dimensions easily from there.) Maybe you choose a job caring only about money and integrity, or design policy caring only about security and prosperity, or choose your diet caring only about health and deliciousness.

I can then represent your current state as a vector, a two-dimensional object with a length and a direction. The length describes how happy you are with your current arrangement. The direction describes your values—it characterizes the trade-off in your mind between how much you care about each of the two goods. If your vector is pointed almost entirely parallel with health, you don’t much care about deliciousness. If it’s pointed mostly at integrity, money isn’t that important to you.

This diagram shows your current state as a green vector.

[Figure: vector1]

Now suppose you have the option of taking some action that will change your value system. If that’s all it would do and you know that, you wouldn’t accept it. You would be no better off, and your value system would be different, which is bad from your current perspective. So here, you would not choose to move to the red vector:

[Figure: vector2]

But suppose that the action would change your value system, and make you better off. Now the red vector is longer than the green vector. Should you choose the action?

[Figure: vector3]

It’s not obvious, right? From the perspective of your new self, you’ll definitely be better off, and that seems good. But your values will change, and maybe you’ll start caring about the wrong things.

I realized that the right question to ask is whether you’ll be better off from your current perspective. If you and your future self both agree that this is the best course of action, then you should take it.

The really cool part is that (hand-waving the math again) it’s possible to work this out as a projection of the new vector onto the old vector. A large change in values will be reflected as a large angle between the two vectors; to compensate for that you need a large change in length, reflecting a greater improvement in well-being.

If the projection of the new vector onto the old vector is longer than the old vector itself, you should accept the value change.

[Figure: vector4]

If the projection of the new vector onto the old vector is shorter than the old vector, you should not accept the value change.

[Figure: vector5]

This captures the trade-off between increased well-being and changing values in a single number. It fits the simple intuitions that being better off is good, and changing values more is bad—but more importantly, it gives us a way of directly comparing the two on the same scale.
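Hand-waving aside, the decision rule is just a dot product: accept the change if and only if the dot product of the old and new vectors exceeds the squared length of the old vector. Here is a minimal sketch in Python; the example numbers are made up for illustration:

```python
import numpy as np

def accept_value_change(current, proposed):
    """Accept iff the projection of the proposed vector onto the current
    vector is longer than the current vector itself; equivalently,
    iff dot(current, proposed) > |current|^2."""
    current = np.asarray(current, dtype=float)
    proposed = np.asarray(proposed, dtype=float)
    projection_length = current @ proposed / np.linalg.norm(current)
    return projection_length > np.linalg.norm(current)

# Axes: (money, integrity). Your current self mostly values integrity.
green = [1.0, 3.0]
print(accept_value_change(green, [2.0, 3.2]))  # small value shift, better off: True
print(accept_value_change(green, [4.0, 0.0]))  # big shift toward money-only: False
```

Note that the second change is rejected even though the proposed vector is longer—the angle between the two value systems is simply too wide.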

This is a very simple model with some very profound implications. One is that certain value changes are impossible in a single step: If a value change would require you to take on values that are completely orthogonal or diametrically opposed to your own, no increase in well-being will be sufficient.

It doesn’t matter how long I make this red vector; the projection onto the green vector will always be zero. If all you care about is money, no amount of integrity will entice you to change.

[Figure: vector6]

But a value change that was impossible in a single step can be feasible, even easy, if conducted over a series of smaller steps. Here I’ve taken that same impossible transition, and broken it into five steps that now make it feasible. By offering a bit more money for more integrity, I’ve gradually weaned you into valuing integrity above all else:

[Figure: vector7]
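The arithmetic behind why small steps work is worth seeing. Each step of angle θ passes the projection rule only if the new length exceeds the old length divided by cos θ, so the required well-being improvement shrinks rapidly as the steps get smaller. A quick sketch (the 90° transition and five-step split match the diagram; the rest is illustrative):

```python
import math

def min_wellbeing_multiplier(total_angle_deg, steps):
    """Minimum factor by which well-being must grow for a value rotation
    of total_angle_deg, split into equal steps, to pass the projection
    rule (|new| * cos(step_angle) > |old|) at every step."""
    step_angle = math.radians(total_angle_deg / steps)
    return (1.0 / math.cos(step_angle)) ** steps

# A single 90-degree jump needs an essentially infinite improvement
# (cos 90 = 0), but splitting it up makes the requirement modest:
for n in (2, 3, 5, 10):
    print(f"{n:2d} steps: well-being must grow by x{min_wellbeing_multiplier(90, n):.2f}")
# 2 steps: x2.00   3 steps: x1.54   5 steps: x1.29   10 steps: x1.13
```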

This provides a formal justification for the intuitive sense many people have of a “moral slippery slope” (commonly regarded as a fallacy). If you make small concessions to an argument that end up changing your value system slightly, and continue to do so many times, you could end up with radically different beliefs at the end, even diametrically opposed to your original beliefs. Each step was rational at the time you took it, but because you changed yourself in the process, you ended up somewhere you would not have wanted to go.

This is not necessarily a bad thing, however. If the reason you made each of those changes was actually a good one—you were provided with compelling evidence and arguments to justify the new beliefs—then the whole transition does turn out to be a good thing, even though you wouldn’t have thought so at the time.

This also allows us to formalize the notion of “inferential distance”: the inferential distance is the number of steps of value change required to make someone understand your point of view. It’s a function of both the difference in values and the difference in well-being between their point of view and yours.

Another key insight is that if you want to persuade someone to change their mind, you need to do it slowly, with small changes repeated many times, and you need to benefit them at each step. You can only persuade someone to change their mind if they end up better off than they were at each step.

Is this an endorsement of wishful thinking? Not if we define “well-being” in the proper way. It can make me better off in a deep sense to realize that my wishful thinking was incorrect, so that I realize what must be done to actually get the good things I thought I already had. It’s not necessary to appeal to material benefits; it’s necessary to appeal to current values.

But it does support the notion that you can’t persuade someone by belittling them. You won’t convince people to join your side by telling them that they are defective and bad and should feel guilty for being who they are.

If that seems obvious, well, maybe you should talk to some of the people who are constantly pushing “White privilege”. If you focused on how reducing racism would make people—even White people—better off, you’d probably be more effective. In some cases there would be direct material benefits: Racism creates inefficiency in markets that reduces overall output. But in other cases, sure, maybe there’s no direct benefit for the person you’re talking to; but you can talk about other sorts of benefits, like what sort of world they want to live in, or how proud they would feel to be part of the fight for justice. You can say all you want that they shouldn’t need this kind of persuasion, they should already believe and do the right thing—and you might even be right about that, in some ultimate sense—but do you want to change their minds or not? If you actually want to change their minds, you need to meet them where they are, make small changes, and offer benefits at each step.

If you don’t, you’ll just keep on projecting a vector orthogonally, and you’ll keep ending up with zero.

Downsides of rent control

May 13 JDN 2458252

One of the largest ideological divides between economists and the rest of the population concerns rent control.

Rent control is very popular among the general population, especially in California—with support hovering around 60% in Orange County, San Diego County, and across California in general. About 60% of people in the UK and over 50% in Ontario, Canada also support rent control.

Meanwhile, economists overwhelmingly oppose rent control: When evaluating the statement “A ceiling on rents reduces the quantity and quality of housing available.”, over 76% of economists agreed, and 16% agreed with qualifications. For the record, I would be an “agree with qualifications” as well (as they say, there are few one-handed economists).

There is evidence of some benefits of rent control, at least for the small number of people who can actually manage to stay in rent-controlled units. People who live in rent-controlled units are about 15% more likely to stay where they are, even in places as expensive as San Francisco, which could be considered a good thing (though I’m not convinced it always is; mobility is one of the key forces driving the dynamism of the US economy).

But there are winners and losers. Landlords whose properties became rent-controlled decreased their supply of housing by an average of 15%, by converting units to condos, removing them from the market, or demolishing the buildings outright. As a result, rent control increased average rents in the city by about 5%. One of the most effective ways to get out of rent control is to remove a building from the market entirely; this allows you to evict all of your tenants with very little notice, and it is responsible for thousands of tenants being evicted every year in Los Angeles.

Rent control disincentivizes both new housing construction and the proper maintenance of existing housing. The quality of rent-controlled homes is systematically lower than the quality of other homes.

The benefits of rent control mainly fall upon the upper-middle class, not the poor. Rent control can make an area more racially diverse—but it benefits middle-class members of racial minorities, not poor members. Most of the benefits of rent control go to older families who have lived in a city for a long time—which makes rent control a transfer of wealth away from young people.

Cities without rent control, such as Chicago, systematically have lower rents, not higher. This is partly a matter of cause rather than effect: tenants are less likely to panic and demand rent control when rents are not high. But it is also an effect, as rent control holds down the price in part of the market but ends up driving it up in the rest. Over 40% of San Francisco’s apartments are rent-controlled, and the city has the highest rents in the world.

Rent control also contributes to the tendency toward building high-end luxury apartments; if you know that you will never be able to raise the rent on your existing buildings, and may end up being stuck with whatever rent you charge the first year on your new buildings, you have a strong reason to want to charge as much as possible the first year you build new apartments. Rent control also creates subtler distortions in the size and location of apartment construction. The effects of rent control even spill over into other housing markets, such as owner-occupied homes and mobile homes.

Because it locks people into place and reduces the construction of new homes near city centers, rent control increases commute times and carbon emissions. This is probably something we should especially point out to people in California, as the two things Californians hate most are environmental degradation and traffic congestion. (Then again, the third is high rent.) California is good at avoiding the first one—our GDP/carbon emission ratio is near the best in the US. The other two? Not so much.

Of course, simply removing rent control would not immediately solve the housing shortage; while it would probably have benefits in the long run, during the transition period a lot of people currently protected by rent control would lose their homes. Even in the long run, it would probably not be enough to actually make rent affordable in the largest coastal cities.

But it’s vital not to confuse “lower rent” with “rent control”; there are much, much better ways to reduce rent prices than simply enforcing arbitrary caps on them.

We have learned not to use price controls in other markets, but for some reason not in housing. Think about the gasoline market, for example. High gas prices are very politically unpopular (though frankly I never quite understood why; it’s a tiny fraction of consumption expenditure, and if we ever want to make a dent in our carbon emissions we need to make our gas prices much higher), but imagine how ridiculous it would seem for a politician to propose an arbitrary cap saying that you aren’t allowed to sell gasoline for more than $2.50 per gallon in a particular city. The obvious outcome would be for most gas stations in that city to immediately close, and everyone to end up buying their gas at the new gas stations that spring up just outside the city limits charging $4.00 per gallon. This is basically what happens in the housing market: Rent-controlled apartments are taken off the market, and the new housing that is built ends up even more expensive.

In a future post, I’ll discuss things we can do instead of rent control that would reliably make housing more affordable. Most of these would involve additional government spending; but there are two things I’d like to say about that. First, we are already spending this money, we just don’t see it, because it comes in the form of inefficiencies and market distortions instead of a direct expenditure. Second, do we really care about making housing affordable, or not? If we really care, we should be willing to spend money on it. If we aren’t willing to spend money on it, then we must not really care.

Sympathy for the incel

Post 237: May 6 JDN 2458245

If you’ve been following the news surrounding the recent terrorist attack in Toronto, you may have encountered the word “incel” for the first time via articles in NPR, Vox, USA Today, or other sources linking the attack to the incel community.

If this was indeed your first exposure to the concept of “incel”, I think you are getting a distorted picture of their community, which is actually a surprisingly large Internet subculture. Finding out about incels this way would be like finding out about Islam from 9/11. (Actually, I’m fairly sure a lot of Americans did learn about Islam that way, which is awful.) The incel community is a remarkably large one—hundreds of thousands of members at least, and quite likely millions.

While a large proportion subscribe to a toxic and misogynistic ideology, a similarly large proportion do not; while the ideology has contributed to terrorism and other violence, the vast majority of members of the community are not violent.

Note that the latter sentence is also entirely true of Islam. So if you are sympathetic toward Muslims and want to protect them from abuse and misunderstanding, I maintain that you should want to do the same for incels, and for basically the same reasons.

I want to make something abundantly clear at the outset:

This attack was terrorism. I am in no way excusing or defending the use of terrorism. Once someone crosses the line and starts attacking random civilians, I don’t care what their grievances were; the best response to their behavior involves snipers on rooftops. I frankly don’t even understand the risks police are willing to take in order to capture these people alive—especially considering how trigger-happy they are when it comes to random Black men. If you start shooting (or bombing, or crashing vehicles into) civilians, the police should shoot you. It’s that simple.

I do not want to evoke sympathy for incel-motivated terrorism. I want to evoke sympathy for the hundreds of thousands of incels who would never support terrorism and are now being publicly demonized.

I also want to make it clear that I am not throwing in my lot with the likes of Robin Hanson (who is also well-known as a behavioral economist, blogger, science fiction fan, Less Wrong devotee, and techno-utopian—so I feel a particular need to clarify my differences with him) when he defends something he calls, in purposefully cold language, “redistribution of sex” (that one is from right after the attack, but he has done this before, in previous blog posts).

Hanson has drunk Robert Nozick‘s Kool-Aid, and thinks that redistribution of wealth via taxation is morally equivalent to theft or even slavery. He is fond of making comparisons between redistribution of wealth and other forms of “redistribution” that obviously would be tantamount to theft and slavery, and asking “What’s the difference?” when in fact the difference is glaringly obvious to everyone but him. He is also fond of saying that “inequality between households within a nation” is a small portion of inequality, and then wondering aloud why we make such a big deal out of it. The answer here is also quite obvious: First of all, it’s not that small a portion of inequality—it’s a third of global income inequality by most measures, it’s increasing while across-nation inequality is decreasing, and the absolute magnitude of within-nation inequality is staggering: there are households with incomes over one million times that of other households within the same nation. (Where are the people who have had sex one hundred billion times, let alone the ones who had sex forty billion times in one year? Because here’s the man who has one hundred billion dollars and made almost $40 billion in one year.) Second, within-nation inequality is extremely simple to fix by public policy; just change a few numbers in the tax code—in fact, just change them back to what they were in the 1950s. Cross-national inequality is much more complicated (though I believe it can be solved, eventually) and some forms of what he’s calling “inequality” (like “inequality across periods of human history” or “inequality of innate talent”) don’t seem amenable to correction under any conceivable circumstances.

Hanson has lots of just-so stories about the evolutionary psychology of why “we don’t care” about cross-national inequality (gee, I thought maybe devoting my career to it was a pretty good signal otherwise?) or inequality in access to sex (which is thousands of times smaller than income inequality), but no clear policy suggestions for how these other forms of inequality could be in any way addressed. This whole idea of “redistribution of sex”—what does that mean, exactly? Legalized or even subsidized prostitution or sex robots would be one thing; I can see pros and cons there at least. But without clarification, it sounds like he’s endorsing the most extremist misogynist incels who think that women should be rightfully compelled to have sex with sexually frustrated men—which would be quite literally state-sanctioned rape. I think really Hanson isn’t all that interested in incels, and just wants to make fun of silly “socialists” who would dare suppose that maybe Jeff Bezos doesn’t need his $120 billion as badly as some of the starving children in Africa could benefit from it, or that maybe having a tax system similar to Sweden or Denmark (which consistently rate as some of the happiest, most prosperous nations on Earth) sounds like a good idea. He takes things that are obviously much worse than redistributive taxation, and compares them to redistributive taxation to make taxation seem worse than it is.

No, I do not support “redistribution of sex”. I might be able to support legalized prostitution, but I’m concerned about the empirical data suggesting that legalized prostitution correlates with increased human sex trafficking. I think I would also support legalized sex robots, but for reasons that will become clear shortly, I strongly suspect they would do little to solve the problem, even if they weren’t ridiculously expensive. Beyond that, I’ve said enough about Hanson; Lawyers, Guns & Money nicely skewers Hanson’s argument, so I’ll not bother with it any further.

Instead, I want to talk about the average incel, one of hundreds of thousands if not millions of men who feels cast aside by society because he is socially awkward and can’t get laid. I want to talk about him because I used to be very much like him (though I never specifically identified as “incel”), and I want to talk about him because I think that he is genuinely suffering and needs help.

There is a moderate wing of the incel community, just as there is a moderate wing of the Muslim community. The moderate wing of incels is represented by sites like Love-Shy.com that try to reach out to people (mostly, but not exclusively young heterosexual men) who are lonely and sexually frustrated and often suffering from social anxiety or other mood disorders. Though they can be casually sexist (particularly when it comes to stereotypes about differences between men and women), they are not virulently misogynistic and they would never support violence. Moreover, they provide a valuable service in offering social support to men who otherwise feel ostracized by society. I disagree with a lot of things these groups say, but they are providing valuable benefits to their members and aren’t hurting anyone else. Taking out your anger against incel terrorists on Love-Shy.com is like painting graffiti on a mosque in response to 9/11 (which, of course, people did).

To some extent, I can even understand the more misogynistic (but still non-violent) wings of the incel community. I don’t want to defend their misogyny, but I can sort of understand where it might come from.

You see, men in our society (and most societies) are taught from a very young age that their moral worth as human beings is based primarily on one thing in particular: Sexual prowess. If you are having a lot of sex with a lot of women, you are a good and worthy man. If you are not, you are broken and defective. (Donald Trump has clearly internalized this narrative quite thoroughly—as have a shockingly large number of his supporters.)

This narrative is so strong and so universal, in fact, that I wouldn’t be surprised if it has a genetic component. It actually makes sense as a matter of evolutionary psychology that males would evolve to think this way; in an evolutionary sense it’s true that a male’s ultimate worth—that is, fitness, the one thing natural selection cares about—is defined by mating with a maximal number of females. But even if it has a genetic component, there is enough variation in this belief that I am confident that social norms can exaggerate or suppress it. One thing I can’t stand about popular accounts of evolutionary psychology is how they leap from “plausible evolutionary account” to “obviously genetic trait” all the way to “therefore impossible to change or compensate for”. My myopia and astigmatism are absolutely genetic; we can point to some of the specific genes. And yet my glasses compensate for them perfectly, and for a bit more money I could instead get LASIK surgery that would correct them permanently. Never think for a moment that “genetic” implies “immutable”.

Because of this powerful narrative, men who are sexually frustrated get treated like garbage by other men and even women. They feel ostracized and degraded. Often, they even feel worthless. If your worth as a human being is defined by how many women you have sex with, and you aren’t having sex with any, it follows that your worth is zero. No wonder, then, that so many become overcome with despair.

The incel community provides an opportunity to escape that despair. If you are told that you are not defective, but instead there is something wrong with society that keeps you down, you no longer have to feel worthless. It’s not that you don’t deserve to have sex, it’s that you’ve been denied what you deserve. When the only other narrative you’ve been given is that you are broken and worthless, I can see why “society is screwing you over” is an appealing counter-narrative. Indeed, it’s not even that far off from the truth.

The moderate wing of the incel community even offers some constructive solutions: They offer support to help men improve themselves, overcome their own social anxiety, and ultimately build fulfilling sexual relationships.

The extremist wing gets this all wrong: Instead of blaming the narrative that sex equals worth, they blame women—often, all women—for somehow colluding to deny them access to the sex they so justly deserve. They often link themselves to the “pick-up artist” community who try to manipulate women into having sex.

And then in the most extreme cases, they may even decide to turn their anger into violence.

But really I don’t think most of these men actually want sex at all, which is part of why I don’t think sex robots would be particularly effective.

Rather, to clarify: They want sex, as most of us do—but that’s not what they need. A simple lack of sex can be compensated reasonably well by pornography and masturbation. (Let me state this outright: Pornography and masturbation are fundamental human rights. Porn is free speech, and masturbation is part of the fundamental right of bodily autonomy. The fact that increased access to porn reduces incidence of sexual assault is nice, but secondary; porn is freedom.) Obviously it would be more satisfying to have a real sexual relationship, but with such substitutes available, a mere lack of sex does not cause suffering.

The need that these men are feeling is companionship. It is love. It is understanding. These are things that can’t be replaced, even partially, by sex robots or Internet porn.

Why do they conflate the two? Again, because society has taught them to do so. This one is clearly cultural, as it varies quite considerably between nations; it’s not nearly as bad in Southern Europe, for example.

In American society (and many, but not all, others), men are taught three things: First, expression of any emotion except possibly anger, and especially expression of affection, is inherently erotic. Second, emotional vulnerability jeopardizes masculinity. Third, erotic expression must be only between men and women in a heterosexual relationship.

In principle, it might be enough to simply drop the third proposition: This is essentially what happens in the LGBT community. Gay men still generally suffer from the suspicion that all emotional expression is erotic, but have long since abandoned their fears of expressing eroticism with other men. Often they’ve also given up on trying to sustain norms of masculinity. So gay men can hug each other and cry in front of each other, for example, without breaking norms within the LGBT community; the sexual subtext is often still there, but it’s considered unproblematic. (Gay men typically aren’t even as concerned about sexual infidelity as straight men; over 40% of gay couples are to some degree polyamorous, compared to 5% of straight couples.) It may also be seen as a loss of masculinity, but this too is considered unproblematic in most cases. There is a notable exception, which is the substantial segment of gay men who pride themselves on hypermasculinity (generally abbreviated “masc”); and indeed, within that subcommunity you often see a lot of the same toxic masculinity norms that are found in society at large.

That is also what happened in Classical Greece and Rome, I think: These societies were certainly virulently misogynistic in their own way, but their willingness to accept erotic expression between men opened them to accepting certain kinds of emotional expression between men as well, as long as it was not perceived as a threat to masculinity per se.

But when all three of those norms are in place, men find that the only emotional outlet they are even permitted to have while remaining within socially normative masculinity is a woman who is a romantic partner. Family members are allowed certain minimal types of affection—you can hug your mom, as long as you don’t seem too eager—but there is only one person in the world that you are allowed to express genuine emotional vulnerability toward, and that is your girlfriend. If you don’t have one? Get one. If you can’t get one? Well, sorry, pal, you’re just out of luck. Deal with it, or you’re not a real man.

But really what I’d like to get rid of is the first two propositions: Emotional expression should not be considered inherently sexual. Expressing emotional vulnerability should not be taken as a capitulation of your masculinity—and if I really had my druthers, the whole idea of “masculinity” would disappear or become irrelevant. This is the way that society is actually holding incels down: Not by denying them access to sex—the right to refuse sex is also a fundamental human right—but by denying them access to emotional expression and treating them like garbage because they are unable to have sex.

My sense is that what most incels are really feeling is not a dearth of sexual expression; it’s a dearth of emotional expression. But precisely because social norms have forced them into getting the two from the same place, they have conflated them. Further evidence in favor of this proposition? A substantial proportion of men who hire prostitutes spend a lot of the time they paid for simply talking.

I think what most of these men really need is psychotherapy. I’m not saying that to disparage them; I myself am a regular consumer of psychotherapy, which is one of the most cost-effective medical interventions known to humanity. I feel a need to clarify this because there is so much stigma on mental illness that saying someone is mentally ill and needs therapy can be taken as an insult; but I literally mean that a lot of these men are mentally ill and need therapy. Many of them exhibit significant signs of social anxiety, depression, or bipolar disorder.

Even for those who aren’t outright mentally ill, psychotherapy might be able to help them sort out some of these toxic narratives they’ve been fed by society, get them to think a little more carefully about what it means to be a good man and whether the “man” part is even so important. A good therapist could tease out the fabric of their tangled cognition and point out that when they say they want sex, it really sounds like they want self-worth, and when they say they want a girlfriend it really sounds like they want someone to talk to.

Such a solution won’t work on everyone, and it won’t work overnight on anyone. But the incel community did not emerge from a vacuum; it was catalyzed by a great deal of genuine suffering. Remove some of that suffering, and we might just undermine the most dangerous parts of the incel community and prevent at least some future violence.

No one owes sex to anyone. But maybe we do, as a society, owe these men a little more sympathy?

Why are humans so bad with probability?

Apr 29 JDN 2458238

In previous posts on deviations from expected utility and cumulative prospect theory, I’ve detailed some of the myriad ways in which human beings deviate from optimal rational behavior when it comes to probability.

This post is going to be a bit different: Yes, we behave irrationally when it comes to probability. Why?

Why aren’t we optimal expected utility maximizers?

This question is not as simple as it sounds. Some of the ways that human beings deviate from neoclassical behavior are simply because neoclassical theory requires levels of knowledge and intelligence far beyond what human beings are capable of; basically anything requiring “perfect information” qualifies, as does any game theory prediction that involves solving extensive-form games with infinite strategy spaces by backward induction. (Don’t feel bad if you have no idea what that means; that’s kind of my point. Solving infinite extensive-form games by backward induction is an unsolved problem in game theory; just this past week I saw a new paper presented that offered a partial potential solution—and yet we expect people to do it optimally every time?)

I’m also not going to include questions of fundamental uncertainty, like “Will Apple stock rise or fall tomorrow?” or “Will the US go to war with North Korea in the next ten years?” where it isn’t even clear how we would assign a probability. (Though I will get back to them, for reasons that will become clear.)

No, let’s just look at the absolute simplest cases, where the probabilities are all well-defined and completely transparent: Lotteries and casino games. Why are we so bad at that?

Lotteries are not a computationally complex problem. You figure out how much the prize is worth to you, multiply it by the probability of winning—which is clearly spelled out for you—and compare that to how much the ticket price is worth to you. The most challenging part lies in specifying your marginal utility of wealth—the “how much it’s worth to you” part—but that’s something you basically had to do anyway, to make any kind of trade-offs on how to spend your time and money. Maybe you didn’t need to compute it quite so precisely over that particular range of parameters, but you need at least some idea how much $1 versus $10,000 is worth to you in order to get by in a market economy.
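As a concrete sketch of just how simple the calculation is—the odds, prize, and ticket price below are made-up Powerball-scale numbers, not any real lottery’s:

```python
# Expected-value check for a stylized lottery ticket.
# Odds and prize are illustrative jackpot-scale numbers; this ignores
# taxes, smaller prize tiers, and split jackpots.

p_win = 1 / 292_000_000   # chance of hitting the jackpot
prize = 100_000_000       # jackpot size
ticket = 2.00             # price of one ticket

expected_value = p_win * prize - ticket
print(f"Expected value per ticket: {expected_value:.2f} USD")  # about -1.66
# With diminishing marginal utility of wealth, the expected *utility*
# is even worse than this raw dollar figure suggests.
```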

Casino games are a bit more complicated, but not much, and most of the work has been done for you; you can look on the Internet and find tables of probability calculations for poker, blackjack, roulette, craps and more. Memorizing all those probabilities might take some doing, but human memory is astonishingly capacious, and part of being an expert card player, especially in blackjack, seems to involve memorizing a lot of those probabilities.

Furthermore, by any plausible expected utility calculation, lotteries and casino games are a bad deal. Unless you’re an expert poker player or blackjack card-counter, your expected income from playing at a casino is always negative—and the casino set it up that way on purpose.

Why, then, can lotteries and casinos stay in business? Why are we so bad at such a simple problem?

Clearly we are using some sort of heuristic judgment in order to save computing power, and the people who make lotteries and casinos have designed formal models that can exploit those heuristics to pump money from us. (Shame on them, really; I don’t fully understand why this sort of thing is legal.)

In another previous post I proposed what I call “categorical prospect theory”, which I think is a decently accurate description of the heuristics people use when assessing probability (though I’ve not yet had the chance to test it experimentally).

But why use this particular heuristic? Indeed, why use a heuristic at all for such a simple problem?

I think it’s helpful to keep in mind that these simple problems are weird; they are absolutely not the sort of thing a tribe of hunter-gatherers is likely to encounter on the savannah. It doesn’t make sense for our brains to be optimized to solve poker or roulette.

The sort of problems that our ancestors encountered—indeed, the sort of problems that we encounter, most of the time—were not problems of calculable probability risk; they were problems of fundamental uncertainty. And they were frequently matters of life or death (which is why we’d expect them to be highly evolutionarily optimized): “Was that sound a lion, or just the wind?” “Is this mushroom safe to eat?” “Is that meat spoiled?”

In fact, many of the uncertainties most important to our ancestors are still important today: “Will these new strangers be friendly, or dangerous?” “Is that person attracted to me, or am I just projecting my own feelings?” “Can I trust you to keep your promise?” These sorts of social uncertainties are even deeper; it’s not clear that any finite being could ever totally resolve its uncertainty surrounding the behavior of other beings with the same level of intelligence, as the cognitive arms race continues indefinitely. The better I understand you, the better you understand me—and if you’re trying to deceive me, as I get better at detecting deception, you’ll get better at deceiving.

Personally, I think that it was precisely this sort of feedback loop that resulted in human beings getting such ridiculously huge brains in the first place. Chimpanzees are pretty good at dealing with the natural environment, maybe even better than we are; but even young children can outsmart them in social tasks any day. And once you start evolving for social cognition, it’s very hard to stop; basically you need to be constrained by something very fundamental, like, say, maximum caloric intake or the shape of the birth canal. Where chimpanzees look like their brains were what we call an “interior solution”, where evolution optimized toward a particular balance between cost and benefit, human brains look more like a “corner solution”, where the evolutionary pressure was entirely in one direction until we hit up against a hard constraint. That’s exactly what one would expect to happen if we were caught in a cognitive arms race.

What sort of heuristic makes sense for dealing with fundamental uncertainty—as opposed to precisely calculable probability? Well, you don’t want to compute a utility function and multiply by it, because that adds all sorts of extra computation and you have no idea what probability to assign. But you’ve got to do something like that in some sense, because that really is the optimal way to respond.

So here’s a heuristic you might try: Separate events into some broad categories based on how frequently they seem to occur, and what sort of response would be necessary.

Some things, like the sun rising each morning, seem to always happen. So you should act as if those things are going to happen pretty much always, because they do happen… pretty much always.

Other things, like rain, seem to happen frequently but not always. So you should look for signs that those things might happen, and prepare for them when the signs point in that direction.

Still other things, like being attacked by lions, happen very rarely, but are a really big deal when they do. You can’t go around expecting those to happen all the time, that would be crazy; but you need to be vigilant, and if you see any sign that they might be happening, even if you’re pretty sure they’re not, you may need to respond as if they were actually happening, just in case. The cost of a false positive is much lower than the cost of a false negative.

And still other things, like people sprouting wings and flying, never seem to happen. So you should act as if those things are never going to happen, and you don’t have to worry about them.

This heuristic is quite simple to apply once set up: It can simply slot in memories of when things did and didn’t happen in order to decide which category they go in—i.e., the availability heuristic. If you can remember a lot of examples of something filed under “almost never”, maybe you should move it to “unlikely” instead. If you accumulate a really large number of examples, you might even want to move it all the way to “likely”.
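Here is a minimal sketch of what such a heuristic might look like in code; the category names echo the examples above, and the cutoffs are arbitrary placeholders, since choosing them is precisely the open question I return to below:

```python
# Toy version of the frequency-category heuristic. An event's category
# is read off from remembered occurrences (availability), not from a
# computed probability. Cutoffs are arbitrary placeholders.

def categorize(occurred, observed):
    """Slot an event into a coarse category based on remembered counts."""
    if observed == 0:
        return "never"
    rate = occurred / observed
    if rate > 0.95:
        return "always"    # act as if it will happen
    elif rate > 0.20:
        return "frequent"  # watch for signs, prepare when they appear
    elif rate > 0.0:
        return "rare"      # stay vigilant, respond to any sign, just in case
    else:
        return "never"     # don't worry about it

print(categorize(364, 365))  # sunrise-like -> "always"
print(categorize(90, 365))   # rain-like    -> "frequent"
print(categorize(1, 365))    # lion-like    -> "rare"
print(categorize(0, 365))    # wings-like   -> "never"
```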

Another large advantage of this heuristic is that by combining utility and probability into one metric—we might call it “importance”, though Bayesian econometricians might complain about that—we can save on memory space and computing power. I don’t need to separately compute a utility and a probability; I just need to figure out how much effort I should put into dealing with this situation. A high probability of a small cost and a low probability of a large cost may be equally worth my time.
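The “importance” compression is easy to illustrate with made-up numbers:

```python
# "Importance" folds probability and stakes into one number, so a
# likely small cost and an unlikely large cost can demand equal effort.
likely_small = 0.5 * 10      # e.g. rain ruining an afternoon
unlikely_big = 0.005 * 1000  # e.g. a lion attack
print(likely_small, unlikely_big)  # 5.0 vs 5.0 -> same effort warranted
```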

How might these heuristics go wrong? Well, if your environment changes sufficiently, the probabilities could shift and what seemed certain no longer is. For most of human history, “people walking on the Moon” would seem about as plausible as sprouting wings and flying away, and yet it has happened. Being attacked by lions is now exceedingly rare except in very specific places, but we still harbor a certain awe and fear before lions. And of course availability heuristic can be greatly distorted by mass media, which makes people feel like terrorist attacks and nuclear meltdowns are common and deaths by car accidents and influenza are rare—when exactly the opposite is true.

How many categories should you set, and what frequencies should they be associated with? This part I’m still struggling with, and it’s an important piece of the puzzle I will need before I can take this theory to experiment. There is probably a trade-off between more categories giving you more precision in tailoring your optimal behavior, but costing more cognitive resources to maintain. Is the optimal number 3? 4? 7? 10? I really don’t know. Even if I could specify the number of categories, I’d still need to figure out precisely what categories to assign.

Are some ideas too ridiculous to bother with?

Apr 22 JDN 2458231

Flat Earth. Young-Earth Creationism. Reptilians. 9/11 “Truth”. Rothschild conspiracies.

There are an astonishing number of ideas that satisfy two apparently-contrary conditions:

  1. They are so obviously ridiculous that even a few minutes of honest, rational consideration of evidence that is almost universally available will immediately refute them;
  2. They are believed by tens or hundreds of millions of otherwise-intelligent people.

Young-Earth Creationism is probably the most alarming, seeing as it grips the minds of some 38% of Americans.

What should we do when faced with such ideas? This is something I’ve struggled with before.

I’ve spent a lot of time and effort trying to actively address and refute them—but I don’t think I’ve even once actually persuaded someone who believes these ideas to change their mind. This doesn’t mean my time and effort were entirely wasted; it’s possible that I managed to convince bystanders, or gained some useful understanding, or simply improved my argumentation skills. But it does seem likely that my time and effort were mostly wasted.

It’s tempting, therefore, to give up entirely, and just let people go on believing whatever nonsense they want to believe. But there’s a rather serious downside to that as well: Thirty-eight percent of Americans.

These people vote. They participate in community decisions. They make choices that affect the rest of our lives. Nearly all of those Creationists are Evangelical Christians—and White Evangelical Christians voted overwhelmingly in favor of Donald Trump. I can’t be sure that changing their minds about the age of the Earth would also change their minds about voting for Trump, but I can say this: If all the Creationists in the US had simply not voted, Hillary Clinton would have won the election.

And let’s not leave the left wing off the hook either. Jill Stein is a 9/11 “Truther”, and pulled a lot of fellow “Truthers” to her cause in the election as well. Had all of Jill Stein’s votes gone to Hillary Clinton instead, again Hillary would have won, even if all the votes for Trump had remained the same. (That said, there is reason to think that if Stein had dropped out, most of those folks wouldn’t have voted at all.)

Therefore, I don’t think it is safe to simply ignore these ridiculous beliefs. We need to do something; the question is what.

We could try to censor them, but first of all that violates basic human rights—which should be a sufficient reason not to do it—and second, it probably wouldn’t even work. Censorship typically leads to radicalization, not assimilation.

We could try to argue against them. Ideally this would be the best option, but it has not shown much effect so far. The kind of person who sincerely believes that the Earth is 6,000 years old (let alone that governments are secretly ruled by reptilian alien invaders) isn’t the kind of person who is highly responsive to evidence and rational argument.

In fact, there is reason to think that these people don’t actually believe what they say the same way that you and I believe things. I’m not saying they’re lying, exactly. They think they believe it; they want to believe it. They believe in believing it. But they don’t actually believe it—not the way that I believe that cyanide is poisonous or the way I believe the sun will rise tomorrow. It isn’t fully integrated into the way that they anticipate outcomes and choose behaviors. It’s more of a free-floating sort of belief, where professing a particular belief allows them to feel good about themselves, or represent their status in a community.

To be clear, it isn’t that these beliefs are unimportant to them; on the contrary, they are in some sense more important. Creationism isn’t really about the age of the Earth; it’s about who you are and where you belong. A conventional belief can be changed by evidence about the world because it is about the world; a belief-in-belief can’t be changed by evidence because it was never really about that.

But if someone’s ridiculous belief is really about their identity, how do we deal with that? I can’t refute an identity. If your identity is tied to a particular social group, maybe they could ostracize you and cause you to lose the identity; but an outsider has no power to do that. (Even then, I strongly suspect that, for instance, most excommunicated Catholics still see themselves as Catholic.) And if it’s a personal identity not tied to a particular group, even that option is unavailable.

Where, then, does that leave us? It would seem that we can’t change their minds—but we also can’t afford not to change their minds. We are caught in a terrible dilemma.

I think there might be a way out. It’s a bit counter-intuitive, but I think what we need to do is stop taking them seriously as beliefs, and start treating them purely as announcements of identity.

So when someone says something like, “The Rothschilds run everything!”, instead of responding as though this were a coherent proposition being asserted, treat it as if someone had announced, “Boo! I hate the Red Sox!” Belief in the Rothschild conspiracies isn’t a well-defined set of propositions about the world; it’s an assertion of membership in a particular sort of political sect that is vaguely left-wing and anarchist. You don’t really think the Rothschilds rule everything. You just want to express your (quite justifiable) anger at how our current political system privileges the rich.

Likewise, when someone says they think the Earth is 6,000 years old, you could try to present the overwhelming scientific evidence that they are wrong—but it might be more productive, and it is certainly easier, to just think of this as a funny way of saying “I’m an Evangelical Christian”.

Will this eliminate the ridiculous beliefs? Not immediately. But it might ultimately do so, in the following way: By openly acknowledging the belief-in-belief as a signaling mechanism, we can open opportunities for people to develop new, less pathological methods of signaling. (Instead of saying you think the Earth is 6,000 years old, maybe you could wear a funny hat, like Orthodox Jews do. Funny hats don’t hurt anybody. Everyone loves funny hats.) People will always want to signal their identity, and there are fundamental reasons why such signals will typically be costly for those who use them; but we can try to make them not so costly for everyone else.

This also makes arguments a lot less frustrating, at least at your end. It might make them more frustrating at the other end, because people want their belief-in-belief to be treated like proper belief, and you’ll be refusing them that opportunity. But this is not such a bad thing; if we make it more frustrating to express ridiculous beliefs in public, we might manage to reduce the frequency of such expression.

Today would be my father’s birthday.

Apr 15 JDN 2458224

When this post goes live, it will be April 15, 2018. My father was born April 15, 1954 and died August 31, 2017, so this is the first time we will be celebrating his birthday without him.

I’m not sure that grief ever really goes away. The shock of the unexpected death fades eventually, and at last you can accept that this has really happened and make it a part of your life. But the sum total of all missed opportunities for life events you could have had together only continues to increase.

There are many cliches about this sort of thing: “Death is a part of life.” “Everything happens for a reason.” It’s all making excuses for the dragon. If we could find a way to make people stop dying, we ought to do it. The other consequences are things we could figure out later.

But, alas, we can’t, at least not in general. We have managed to cure or vaccinate against a wide variety of diseases, and as a result people do, on average, live longer than ever before in human history. But none of us live “on average”—and sometimes you get a very unlucky draw.

Yet somehow, we do learn to go on. I’m not sure how. I guess it’s a kind of desensitization: Right after my father’s death, any reminder of him was painful. But over time, that pain began to lessen. Each new reminder hurts a little less than the last, until eventually the pain is mild enough that it can mostly be ignored. It never really goes away, I think; but eventually it is below your just-noticeable-difference.

I had hoped to do more with this post. I had hoped that reflecting on the grief I’ve felt for the last several months would allow me to find some greater insight that I could share. Instead, I find myself re-writing the same sentences over and over again, trying in vain to express something that might help me, or help someone else who is going through similar grief. I keep looking for ways to distract myself, other things to think about—anything but this. Maybe there are no simple insights, no way for words to shorten the process that everyone must go through.