Locked donation boxes and moral variation

Aug 8 JDN 2459435

I haven’t been able to find the quote, but I think it was Kahneman who once remarked: “Putting locks on donation boxes shows that you have the correct view of human nature.”

I consider this a deep insight. Allow me to explain.

Some people think that human beings are basically good. Rousseau is commonly associated with this view: the notion that, left to our own devices, human beings would naturally gravitate toward an anarchic but peaceful society.

The question for people who think this needs to be: Why haven’t we? If your answer is “government holds us back”, you still need to explain why we have government. Government was not imposed upon us from On High in time immemorial. We were fairly anarchic (though not especially peaceful) in hunter-gatherer tribes for nearly 200,000 years before we established governments. How did that happen?

And if your answer to that is “a small number of tyrannical psychopaths forced government on everyone else”, you may not be wrong about that—but it already breaks your original theory, because we’ve just shown that human society cannot maintain a peaceful anarchy indefinitely.

Other people think that human beings are basically evil. Hobbes is most commonly associated with this view, that humans are innately greedy, violent, and selfish, and only by the overwhelming force of a government can civilization be maintained.

This view more accurately predicts the level of violence and death that generally accompanies anarchy, and can at least explain why we’d want to establish government—but it still has trouble explaining how we would establish government. It’s not as if we’re ruled by a single ubermensch with superpowers, or an army of robots created by a mad scientist in a secret underground laboratory. Running a government involves cooperation on an absolutely massive scale—thousands or even millions of unrelated, largely anonymous individuals—and this cooperation is not maintained entirely by force: Yes, there is some force involved, but most of what a government does most of the time is mediated by norms and customs, and if a government did ever try to organize itself entirely by force—not paying any of the workers, not relying on any notion of patriotism or civic duty—it would immediately and catastrophically collapse.

What is the right answer? Humans aren’t basically good or basically evil. Humans are basically varied.

I would even go so far as to say that most human beings are basically good. They follow a moral code, they care about other people, they work hard to support others, they try not to break the rules. Nobody is perfect, and we all make various mistakes. We disagree about what is right and wrong, and sometimes we even engage in actions that we ourselves would recognize as morally wrong. But most people, most of the time, try to do the right thing.

But some people are better than others. There are great humanitarians, and then there are ordinary folks. There are people who are kind and compassionate, and people who are selfish jerks.

And at the very opposite extreme from the great humanitarians is the roughly 1% of people who are outright psychopaths. About 5-10% of people have significant psychopathic traits, but about 1% are really full-blown psychopaths.

I believe it is fair to say that psychopaths are in fact basically evil. They are incapable of empathy or compassion. Morality is meaningless to them—they literally cannot distinguish moral rules from other rules. Other people’s suffering—even their very lives—means nothing to them except insofar as it is instrumentally useful. To a psychopath, other people are nothing more than tools, resources to be exploited—or obstacles to be removed.

Some philosophers have argued that this means that psychopaths are incapable of moral responsibility. I think this is wrong. I think it relies on a naive, pre-scientific notion of what “moral responsibility” is supposed to mean—one that was inevitably going to be destroyed once we had a greater understanding of the brain. Do psychopaths understand the consequences of their actions? Yes. Do rewards motivate psychopaths to behave better? Yes. Does the threat of punishment motivate them? Not really, but it was never that effective on anyone else, either. What kind of “moral responsibility” are we still missing? And how would our optimal action change if we decided that they do or don’t have moral responsibility? Would you still imprison them for crimes either way? Maybe it doesn’t matter whether or not it’s really a blegg.

Psychopaths are a small portion of our population, but are responsible for a large proportion of violent crimes. They are also overrepresented in top government positions as well as among police officers, and it’s pretty safe to say that nearly every murderous dictator was a psychopath of one shade or another.

The vast majority of people are not psychopaths, and most people don’t even have any significant psychopathic traits. Yet psychopaths have an enormously disproportionate impact on society—nearly all of it harmful. If psychopaths did not exist, Rousseau might be right after all; we wouldn’t need government. If most people were psychopaths, Hobbes would be right; we’d long for the stability and security of government, but we could never actually cooperate enough to create it.

This brings me back to the matter of locked donation boxes.

Having a donation box is only worthwhile if most people are basically good: Asking people to give money freely in order to achieve some good only makes any sense if people are capable of altruism, empathy, cooperation. And it can’t be just a few, because you’d never raise enough money to be useful that way. It doesn’t have to be everyone, or maybe even a majority; but it has to be a large fraction. 90% is more than enough.

But locking things is only worthwhile if some people are basically evil: For a lock to make sense, there must be at least a few people who would be willing to break in and steal the money, even if it was earmarked for a very worthy cause. It doesn’t take a huge fraction of people, but it must be more than a negligible one. 1% to 10% is just about the right sort of range.

Hence, locked donation boxes are a phenomenon that would only exist in a world where most people are basically good—but some people are basically evil.

And this is in fact the world in which we live. It is a world where the Holocaust could happen but then be followed by the founding of the United Nations, a world where nuclear weapons would be invented and used to devastate cities, but then be followed by an era of nearly unprecedented peace. It is a world where governments are necessary to rein in violence, but also a world where governments can function (reasonably well) even in countries with hundreds of millions of people. It is a world with crushing poverty and people who work tirelessly to end it. It is a world where Exxon and BP despoil the planet for riches while WWF and Greenpeace fight back. It is a world where religions unite millions of people under a banner of peace and justice, and then go on crusades to murder thousands of other people who united under a different banner of peace and justice. It is a world of richness, complexity, uncertainty, conflict—variance.

It is not clear how much of this moral variance is innate versus acquired. If we somehow rewound the film of history and started it again with a few minor changes, it is not clear how many of us would end up the same and how many would be far better or far worse than we are. Maybe psychopaths were born the way they are, or maybe they were made that way by culture or trauma or lead poisoning. Maybe with the right upbringing or brain damage, we, too, could be axe murderers. Yet the fact remains—there are axe murderers, but we, and most people, are not like them.

So, are people good, or evil? Was Rousseau right, or Hobbes? Yes. Both. Neither. There is no one human nature; there are many human natures. We are capable of great good and great evil.

When we plan how to run a society, we must make it work the best we can with that in mind: We can assume that most people will be good most of the time—but we know that some people won’t, and we’d better be prepared for them as well.

Set out your donation boxes with confidence. But make sure they are locked.

Finance is the commodification of trust

Jul 18 JDN 2459414

What is it about finance?

Why is it that whenever we have an economic crisis, it seems to be triggered by the financial industry? Why has the dramatic rise in income and wealth inequality come in tandem with a rise in finance as a proportion of our economic output? Why are so many major banks implicated in crimes ranging from tax evasion to money laundering for terrorists?

In other words, why are the people who run our financial industry such utter scum? What is it about finance that it seems to attract the very worst people on Earth?

One obvious answer is that it is extremely lucrative: Incomes in the financial industry are higher than in almost any other industry. Perhaps people who are particularly unscrupulous are drawn to the industries that make the most money, and don’t care about much else. But other people like making money too, so this is far from a full explanation. Indeed, incomes for physicists are comparable to those of Wall Street brokers, yet physicists rarely seem to be implicated in mass corruption scandals.

I think there is a deeper reason: Finance is the commodification of trust.

Many industries sell products, physical artifacts like shirts or televisions. Others sell services like healthcare or auto repair, which involve the physical movement of objects through space. Information-based industries are a bit different—what a software developer or an economist sells isn’t really a physical object moving through space. But then what they are selling is something more like knowledge—information that can be used to do useful things.

Finance is different. When you make a loan or sell a stock, you aren’t selling a thing—and you aren’t really doing a thing either. You aren’t selling information, either. You’re selling trust. You are making money by making promises.

Most people are generally uncomfortable with the idea of selling promises. It isn’t that we’d never do it—but we’re reluctant to do it. We try to avoid it whenever we can. But if you want to be successful in finance, you can’t have that kind of reluctance. To succeed on Wall Street, you need to be constantly selling trust every hour of every day.

Don’t get me wrong: Certain kinds of finance are tremendously useful, and we’d be much worse off without them. I would never want to get rid of government bonds, auto loans or home mortgages. I’m actually pretty reluctant to even get rid of student loans, despite the large personal benefits I would get if all student loans were suddenly forgiven. (I would be okay with a system like Elizabeth Warren’s proposal, where people with college degrees pay a surtax that supports free tuition. The problem with most proposals for free college is that they make people who never went to college pay for those who did, and that seems unfair and regressive to me.)

But the Medieval suspicion against “usury”—the notion that there is something immoral about making money just from having money and making promises—isn’t entirely unfounded. There really is something deeply problematic about a system in which the best way to get rich is to sell commodified packages of trust, and the best way to make money is to already have it.

Moreover, the more complex finance gets, the more divorced it becomes from genuinely necessary transactions, and the more commodified it becomes. A mortgage deal that you make with a particular banker in your own community isn’t particularly commodified; a mortgage that is sliced and redistributed into mortgage-backed securities that are sold anonymously around the world is about as commodified as anything can be. It’s rather like the difference between buying a bag of apples from your town farmers’ market versus ordering a barrel of apple juice concentrate. (And of course the most commodified version of all is the financial one: buying apple juice concentrate futures.)

Commodified trust is trust that has lost its connection to real human needs. Those bankers who foreclosed on thousands of mortgages (many of them illegally) weren’t thinking about the people they were making homeless—why would they, when for them those people have always been nothing more than numbers on a spreadsheet? Your local banker might be willing to work with you to help you keep your home, because they see you as a person. (They might not for various reasons, but at least they might.) But there’s no reason for HSBC to do so, especially when they know that they are so rich and powerful they can get away with just about anything (have I mentioned money laundering for terrorists?).

I don’t think we can get rid of finance. We will always need some mechanism to let people who need money but don’t have it borrow that money from people who have it but don’t need it, and it makes sense to have interest charges to compensate lenders for the time and risk involved.

Yet there is much of finance we can clearly dispense with. Credit default swaps could simply be banned, and we’d gain much and lose little. Credit default swaps are basically unregulated insurance, and there’s no reason to allow that. If banks need insurance, they can buy the regulated kind like everyone else. Those regulations are there for a reason. We could ban collateralized debt obligations and similar tranche-based securities, again with far more benefit than harm. We probably still need stocks and commodity futures, and perhaps also stock options—but we could regulate their sale considerably more, particularly with regard to short-selling. Banking should be boring.

Some amount of commodification may be inevitable, but clearly much of what we currently have could be eliminated. In particular, the selling of loans should simply be banned. Maybe even your local banker won’t ever really get to know you or care about you—but there’s no reason we have to allow them to sell your loan to some bank in another country that you’ve never even heard of. When you make a deal with a bank, the deal should be between you and that bank—not potentially any bank in the world that decides to buy the contract at any point in the future. Maybe we’ll always be numbers on spreadsheets—but at least we should be able to choose whose spreadsheets.

If banks want more liquidity, they can borrow from other banks—taking on the risk themselves. A lending relationship is built on trust. You are free to trust whomever you choose; but forcing me to trust someone I’ve never met is something you have no right to do.

In fact, we might actually be able to get rid of banks—credit unions have a far cleaner record than banks, and provide nearly all of the financial services that are genuinely necessary. Indeed, if you’re considering getting an auto loan or a home mortgage, I highly recommend you try a credit union first.

For now, we can’t simply get rid of banks—we’re too dependent on them. But we could at least acknowledge that banks are too powerful, they get away with far too much, and their whole industry is founded upon practices that need to be kept on a very tight leash.

Men and violence

Apr 4 JDN 2459302

Content warning: In this post, I’m going to be talking about violence, including sexual violence. April is Sexual Assault Awareness and Prevention Month. I won’t go into any explicit detail, but I understand that discussion of such topics can still be very upsetting for many people.

After short posts for the past two weeks, get ready for a fairly long post. This is a difficult and complicated topic, and I want to make sure that I state things very clearly and with all necessary nuance.

While the overall level of violence between human societies varies tremendously, one thing is astonishingly consistent: Violence is usually committed by men.

In fact, violence is usually suffered by men as well—with the quite glaring exception of sexual violence. This is why I am particularly offended by claims like “All men benefit from male violence”; no, men who were murdered by other men did not benefit from male violence, and it is frankly appalling to say otherwise. Most men would be better off if male violence were somehow eliminated from the world. (Most women would also be much better off as well, of course.)

I therefore consider it a matter of both moral obligation and self-interest to endeavor to reduce the amount of male violence in the world, which is almost coextensive with reducing the amount of violence in general.

On the other hand, ought implies can, and despite significant efforts I have made to seek out recommendations for concrete actions I could be taking… I haven’t been able to find very many.

The good news is that we appear to be doing something right—overall rates of violent crime have declined by nearly half since 1990. The decline in rape has been slower, only about 25% since 1990, though this is a bit misleading since the legal definition of rape has been expanded during that interval. The causes of this decline in violence are unclear: Some of the most important factors seem to be changes in policing, economic growth, and reductions in lead pollution. For whatever reason, Millennials just don’t seem to commit crimes at the same rates that Gen-X-ers or Boomers did. We are also substantially more feminist, so maybe that’s an important factor too; the truth is, we really don’t know.

But all of this still leaves me asking: What should I be doing?

When I searched for an answer to this question, a significant fraction of the answers I got from various feminist sources were some variation on “ruminate on your own complicity in male violence”. I tried it; it was painful, difficult—and basically useless. I think this is particularly bad advice for someone like me who has a history of depression.

When you ruminate on your own life, it’s easy to find mistakes; but how important were those mistakes? How harmful were they? I can’t say that I’ve never done anything in my whole life that hurt anyone emotionally (can anyone?), but I can only think of a few times I’ve harmed someone physically (mostly by accident, once in self-defense). I’ve definitely never raped or murdered anyone, and as far as I can tell I’ve never done anything that would have meaningfully contributed to anyone getting raped or murdered. If you were to somehow replace every other man in the world with a copy of me, maybe that wouldn’t immediately bring about a utopian paradise—but I’m pretty sure that rates of violence would be a lot lower. (And in this world ruled by my clones, we’d have more progressive taxes! Less military spending! A basic income! A global democratic federation! Greater investment in space travel! Hey, this sounds pretty good, actually… though inbreeding would be a definite concern.) So, okay, I’m no angel; but I don’t think it’s really fair to say that I’m complicit in something that would radically decrease if everyone behaved as I do.

The really interesting thing is, I think this is true of most men. A typical man commits less than the average amount of violence—because there is great skew in the distribution, with most men committing little or no violence and a small number of men committing lots of violence. Truly staggering amounts of violence are committed by those at the very top of the distribution—that would be mass murderers like Hitler and Stalin. It sounds strange, but if all men in the world were replaced by a typical man, the world would surely be better off. The loss of the very best men would be more than compensated by the removal of the very worst. In fact, since most men are not rapists or murderers, replacing every man in the world with the median man would automatically bring the rates of rape and murder to zero. I know that feminists don’t like to hear #NotAllMen; but it’s not even most men. Maybe the reason that the “not all men” argument keeps coming up is… it’s actually kind of true? Maybe it’s not so unreasonable for men to resent the implication that we are complicit in acts we abhor that we have never done and would never do? Maybe this whole concept that an entire sex of people, literally almost half the human race, can share responsibility for violent crimes—is wrong?

I know that most women face a nearly constant bombardment of sexual harassment, and feel pressured to remain constantly vigilant in order to protect themselves against being raped. I know that victims of sexual violence are often blamed for their victimization (though this happens in a lot of crimes, not just sex crimes). I know that #YesAllWomen is true—basically all women have been in some way harmed or threatened by sexual violence. But the fact remains that most men are already not committing sexual violence. Many people seem to confuse the fact that most women are harmed by men with the claim that most men harm women; these are not at all equivalent. As long as one man can harm many women, there don’t need to be very many harmful men for all women to be affected.

Plausible guesses would be that about 20-25% of women suffer sexual assault, committed by about 4% or 5% of men, each of whom commits an average of 4 to 6 assaults—and some of whom commit far more. If these figures are right, then 95% of men are not guilty of sexual assault. The highest plausible estimate I’ve seen is from a study which found that 11% of men had committed rape. Since it’s only one study and its sample size was pretty small, I’m actually inclined to think that this is an overestimate which got excessive attention because it was so shocking. Larger studies rarely find a number above 5%.
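Those figures are at least internally consistent, which is easy to check with a minimal back-of-envelope sketch. (This is Python purely for illustration; the population size, offender rate, and assaults-per-offender below are assumptions drawn from the rough ranges above, not real data, and it assumes victims are mostly distinct people.)

```python
# Back-of-envelope consistency check. All numbers are illustrative
# assumptions taken from the rough ranges in the text, not real data.
men = 1_000_000
women = 1_000_000

offender_rate = 0.05        # assumed: ~5% of men commit sexual assault
assaults_per_offender = 5   # assumed: ~5 assaults each, on average

offenders = men * offender_rate
total_assaults = offenders * assaults_per_offender

# If most assaults hit distinct women, the share of women affected is
# roughly total_assaults / women (capped at 100%).
share_of_women_affected = min(total_assaults / women, 1.0)

print(f"Offenders: {offenders:,.0f} ({offender_rate:.0%} of men)")
print(f"Total assaults: {total_assaults:,.0f}")
print(f"Share of women affected: about {share_of_women_affected:.0%}")
# -> about 25% of women affected, from only 5% of men
```

Under those assumptions, 5% of men generate roughly 250,000 assaults per million women—enough to reach about a quarter of all women even if no woman is victimized twice. The arithmetic says nothing about whether the estimates themselves are right; it only shows that a small fraction of perpetrators is enough to account for a large fraction of victims.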

But even if we suppose that it’s really 11%, that leaves 89%; in what sense is 89% not “most men”? I saw some feminist sites responding to this result by saying things like “We can’t imprison 11% of men!” but, uh, we almost do already. About 9% of American men will go to prison in their lifetimes. This is probably higher than it should be—it’s definitely higher than any other country—but if those convictions were all for rape, I’d honestly have trouble seeing the problem. (In fact only about 10% of US prisoners are incarcerated for rape.) If the US were the incarceration capital of the world simply because we investigated and prosecuted rape more reliably, that would be a point of national pride, not shame. In fact, the American conservatives who don’t see the problem with our high incarceration rate probably do think that we’re mostly incarcerating people for things like rape and murder—when in fact large portions of our inmates are incarcerated for drug possession, “public order” crimes, or pretrial detention.

Even if that 11% figure is right, “If you know 10 men, one is probably a rapist” is wrong. The people you know are not a random sample. If you don’t know any men who have been to prison, then you likely don’t know any men who are rapists. 37% of prosecuted rapists have prior criminal convictions, and 60% will be convicted of another crime within 5 years. (Of course, most rapes are never even reported; but where would we get statistics on those rapists?) Rapists are not typical men. They may seem like typical men—it may be hard to tell the difference at a glance, or even after knowing someone for a long time. But the fact that narcissists and psychopaths may hide among us does not mean that all of us are complicit in the crimes of narcissists and psychopaths. If you can’t tell who is a psychopath, you may have no choice but to be wary; but telling every man to search his heart is worthless, because the only ones who will listen are the ones who aren’t psychopaths.

That, I think, is the key disagreement here: Where the standard feminist line is “any man could be a rapist, and every man should search his heart”, I believe the truth is much more like, “monsters hide among us, and we should do everything in our power to stop them”. The monsters may look like us, they may often act like us—but they are not us. Maybe there are some men who would commit rapes but can be persuaded out of it—but this is not at all the typical case. Most rapes are committed by hardened, violent criminals and all we can really do is lock them up. (And for the love of all that is good in the world, test all the rape kits!)

It may be that sexual harassment of various degrees is more spread throughout the male population; perhaps the median man indeed commits some harassment at some point in his life. But even then, I think it’s pretty clear that the really awful kinds of harassment are largely committed by a small fraction of serial offenders. Indeed, there is a strong correlation between propensity toward sexual harassment and various measures of narcissism and psychopathy. So, if most men look closely enough, maybe they can think of a few things that they do occasionally that might make women uncomfortable; okay, stop doing those things. (Hint: Do not send unsolicited dick pics. Ever. Just don’t. Anyone who wants to see your genitals will ask first.) But it isn’t going to make a huge difference in anyone’s life. As long as the serial offenders continue, women will still feel utterly bombarded.

There are other kinds of sexual violations that more men commit—being too aggressive, or persisting too much after the first rejection, or sending unsolicited sexual messages or images. I’ve had people—mostly, but not only, men—do things like that to me; but it would be obviously unfair to both these people and actual rape victims to say I’d ever been raped. I’ve been groped a few times, but it seems like quite a stretch to call it “sexual assault”. I’ve had experiences that were uncomfortable, awkward, frustrating, annoying, occasionally creepy—but never traumatic. Never violence. Teaching men (and women! There is evidence that women are not much less likely than men to commit this sort of non-violent sexual violation) not to do these things is worthwhile and valuable in itself—but it’s not going to do much to prevent rape or murder.

Thus, whatever responsibility men have in reducing sexual violence, it isn’t simply to stop; you can’t stop doing what you already aren’t doing.

After pushing through all that noise, at last I found a feminist site making a more concrete suggestion: They recommended that I read a book by Jackson Katz on the subject entitled The Macho Paradox: Why Some Men Hurt Women and How All Men Can Help.

First of all, I must say I can’t remember any other time I’ve read a book that was so poorly titled. The only mention of the phrase “macho paradox” is in a brief preface that was added to the most recent edition explaining what the term was meant to mean; it occurs nowhere else in the book. And in all its nearly 300 pages, the book has almost nothing that seriously addresses either the motivations underlying sexual violence or concrete actions that most men could take in order to reduce it.

As far as concrete actions (“How all men can help”), the clearest, most consistent advice the book seems to offer that would apply to most men is “stop consuming pornography” (something like 90% of men and 60% of women regularly consume porn), when in fact there is a strong negative correlation between consumption of pornography and real-world sexual violence. (Perhaps Millennials are less likely to commit rape and murder because we are so into porn and video games!) This advice is literally worse than nothing.

The sex industry exists on a continuum from the adult-only but otherwise innocuous (smutty drawings and erotic novels), through the legal but often problematic (mainstream porn, stripping), to the usually illegal but defensible (consensual sex work), all the way to the utterly horrific and appalling (the sexual exploitation of children). I am well aware that there are many deep problems with the mainstream porn industry, but I confess I’ve never quite seen how these problems are specific to porn rather than endemic to media or even capitalism more generally. Particularly with regard to the above-board sex industry in places like Nevada or the Netherlands, it’s not obvious to me that a prostitute is more exploited than a coal miner, a sweatshop worker, or a sharecropper—indeed, given the choice between those four careers, I’d without hesitation choose to be a prostitute in Amsterdam. Many sex workers resent the paternalistic insistence by anti-porn feminists that their work is inherently degrading and exploitative. Overall, sex workers report job satisfaction not statistically different than the average for all jobs. There are a multitude of misleading statistics often reported about the sex industry that often make matters seem far worse than they are.

Katz (all-too) vividly describes the depiction of various violent or degrading sex acts in mainstream porn, but he seems unwilling to admit that any other forms of porn do or even could exist—and worse, like far too many anti-porn feminists, he seems to willfully elide vital distinctions, effectively equating fantasy depiction with genuine violence and consensual kinks with sexual abuse. I like to watch action movies and play FPS video games; does that mean I believe it’s okay to shoot people with machine guns? I know the sophisticated claim is that it somehow “desensitizes” us (whatever that means), but there’s not much evidence of that either. Given that porn and video games are negatively correlated with actual violence, it may in fact be that depicting the fantasy provides an outlet for such urges and helps prevent them from becoming reality. Or, it may simply be that keeping a bunch of young men at home in front of their computers keeps them from going out and getting into trouble. (Then again, homicides actually increased during the COVID pandemic—though most other forms of crime decreased.) But whatever the cause, the evidence is clear that porn and video games don’t increase actual violence—they decrease it.

At the very end of the book, Katz hints at a few other things men might be able to do, or at least certain groups of men: Challenge sexism in sports, the military, and similar male-dominated spaces (you know, if you have clout in such spaces, which I really don’t—I’m an effete liberal intellectual, a paradigmatic “soy boy”; do you think football players or soldiers are likely to listen to me?); educate boys with more positive concepts of masculinity (if you are in a position to do so, e.g. as a teacher or parent); or, the very best advice in the entire book, worth more than the rest of the book combined: Donate to charities that support survivors of sexual violence. Katz doesn’t give any specific recommendations, but here are a few for you: RAINN, NAESV and NSVRC.

Honestly, I’m more impressed by Upworthy’s bulleted list of things men can do, though they’re mostly things that conscientious men do anyway, and even if 90% of men did them, it probably wouldn’t greatly reduce actual violence.

As far as motivations (“Why some men hurt women”), the book does at least manage to avoid the mindless slogan “rape is about power, not sex” (there is considerable evidence that this slogan is false or at least greatly overstated). Still, Katz insists upon collective responsibility, attributing what are in fact typically individual crimes, committed mainly by psychopaths, motivated primarily by anger or sexual desire, to some kind of institutionalized system of patriarchal control that somehow permeates all of society. The fact that violence is ubiquitous does not imply that it is coordinated. It’s very much the same cognitive error as “murderism”.

I agree that sexism exists, is harmful, and may contribute to the prevalence of rape. I agree that there are many widespread misconceptions about rape. I also agree that reducing sexism and toxic masculinity are worthwhile endeavors in themselves, with numerous benefits for both women and men. But I’m just not convinced that reducing sexism or toxic masculinity would do very much to reduce the rates of rape or other forms of violence. In fact, despite widely reported success of campaigns like the “Don’t Be That Guy” campaign, the best empirical research on the subject suggests that such campaigns actually tend to do more harm than good. The few programs that seem to work are those that focus on bystander interventions—getting men who are not rapists to recognize rapists and stop them. Basically nothing has ever been shown to convince actual rapists; all we can do is deny them opportunities—and while bystander intervention can do that, the most reliable method is probably incarceration. Trying to change their sexist attitudes may be worse than useless.

Indeed, I am increasingly convinced that much—not all, but much—of what is called “sexism” is actually toxic expressions of heterosexuality. Why do most creepy male bosses only ever hit on their female secretaries? Well, maybe because they’re straight? This is not hard to explain. It’s a fair question why there are so many creepy male bosses, but one need not posit any particular misogyny to explain why their targets would usually be women. I guess it’s a bit hard to disentangle; if an incel hates women because he perceives them as univocally refusing to sleep with him, is that sexism? What if he’s a gay incel (yes they exist) and this drives him to hate men instead?

In fact, I happen to know of a particular gay boss who has quite a few rumors surrounding him regarding his sexual harassment of male employees. Or you could look at Kevin Spacey, who (allegedly) sexually abused teenage boys. You could tell a complicated story about how this is some kind of projection of misogynistic attitudes onto other men (perhaps for being too “femme” or something)—or you could tell a really simple story about how this man is only sexually abusive toward other men because that’s the gender of people he’s sexually attracted to. Occam’s Razor strongly favors the latter.

Indeed, what are we to make of the occasional sexual harasser who targets men and women equally? On the theory that abuse is caused by patriarchy, that seems pretty hard to explain. On the theory that abusive people sometimes happen to be bisexual, it’s not much of a mystery. (Though I would like to take a moment to debunk the stereotype of the “depraved bisexual”: Bisexuals are no more likely to commit sexual violence, but are far more likely to suffer it—more likely than either straight or gay people, independently of gender. Trans people face even higher risk; the acronym LGBT is in increasing order of danger of violence.)

Does this excuse such behavior? Absolutely not. Sexual harassment and sexual assault are definitely wrong, definitely harmful, and rightfully illegal. But when trying to explain why the victims are overwhelmingly female, the fact that roughly 90% of people are heterosexual is surely relevant. The key explanandum here is not why the victims are usually female, but rather why the perpetrators are usually male.

That, indeed, requires explanation; but such an explanation is really not so hard to come by. Why is it that, in nearly every human society, for nearly every form of violence, the vast majority of that violence is committed by men? It sure looks genetic to me.

Indeed, in any other context aside from gender or race, we would almost certainly reject any explanation other than genetics for such a consistent pattern. Why is it that, in nearly every human society, about 10% of people are LGBT? Probably genetics. Why is it that, in nearly every human society, about 10% of people are left-handed? Genetics. Why, in nearly every human society, do smiles indicate happiness, children fear loud noises, and adults fear snakes? Genetics. Why, in nearly every human society, are men on average much taller and stronger than women? Genetics. Why, in nearly every human society, is about 90% of violence, including sexual violence, committed by men? Clearly, it’s patriarchy.

A massive body of scientific evidence from multiple sources shows a clear causal relationship between increased testosterone and increased aggression. The correlation is moderate, only about 0.38—but it’s definitely real. And men have a lot more testosterone than women: While testosterone varies a frankly astonishing amount between men and over time—including up to a 2-fold difference even over the same day—a typical adult man has about 250 to 950 ng/dL of blood testosterone, while a typical adult woman has only 8 to 60 ng/dL. (An adolescent boy can have as much as 1200 ng/dL!) This is a difference ranging from a minimum of 4-fold to a maximum of over 100-fold, with a typical value of about 20-fold. It would be astonishing if that didn’t have some effect on behavior.

This is of course far from a complete explanation: With a correlation of 0.38, we’ve only explained about 14% of the variance, so what’s the other 86%? Well, first of all, testosterone isn’t the only biological difference between men and women. It’s difficult to identify any particular genes with strong effects on aggression—but the same is true of height, and nobody disputes that the height difference between men and women is genetic.
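(To spell out where those percentages come from, this is just the standard variance-explained arithmetic for a correlation coefficient:

$$ r^2 = (0.38)^2 \approx 0.14 = 14\%, \qquad 1 - r^2 \approx 86\%. $$

That is, a correlation of 0.38 accounts for roughly 14% of the variance, leaving the remaining 86% to everything else.)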

Clearly societal factors do matter a great deal, or we couldn’t possibly explain why homicide rates vary between countries from less than 3 per million per year in Japan to nearly 400 per million per year in Honduras, a full 2 orders of magnitude! But gender inequality does not appear to strongly predict homicide rates. Japan is not a very feminist place (in fact, surveys suggest that, after Spain, Japan is the second-worst highly-developed country for women). Sweden is quite feminist, and their homicide rate is relatively low; but it’s still 4 times as high as Japan’s. The US doesn’t strike me as much more sexist than Canada (admittedly subjective—surveys do suggest at least some difference, and in the expected direction), and yet our homicide rate is nearly 3 times as high. Also, I think it’s worth noting that while overall homicide rates vary enormously across societies, the fact that roughly 90% of homicides are committed by men does not. Through some combination of culture and policy, societies can greatly reduce the overall level of violence—but no society has yet managed to change the fact that men are more violent than women.

I would like to do a similar analysis of sexual assault rates across countries, but unfortunately I really can’t, because different countries have such different laws and different rates of reporting that the figures really aren’t comparable. Sweden infamously has a very high rate of reported sex crimes, but this is largely because they have very broad definitions of sex crimes and very high rates of reporting. The best I can really say for now is there is no obvious pattern of more feminist countries having lower rates of sex crimes. Maybe there really is such a pattern; but the data isn’t clear.

Yet if biology contributes anything to the causation of violence—and at this point I think the evidence for that is utterly overwhelming—then mainstream feminism has done the world a grave disservice by insisting upon only social and cultural causes. Maybe it’s the case that our best options for intervention are social or cultural, but that doesn’t mean we can simply ignore biology. And then again, maybe it’s not the case at all: A neurological treatment to cure psychopathy could cut almost all forms of violence in half.

I want to be completely clear that a biological cause is not a justification or an excuse: literally billions of men manage to have high testosterone levels, and experience plenty of anger and sexual desire, without ever raping or murdering anyone. The fact that men appear to be innately predisposed toward violence does not excuse actual violence, and the fact that rape is typically motivated at least in part by sexual desire is no excuse for committing rape.

In fact, I’m quite worried about the opposite: that the notion that sexual violence is always motivated by a desire to oppress and subjugate women will be used to excuse rape, because men who know that their motivation was not oppression will therefore be convinced that what they did wasn’t rape. If rape is always motivated by a desire to oppress women, and his desire was only to get laid, then clearly, what he did can’t be rape, right? The logic here actually makes sense. If we are to reject this argument—as we must—then we must reject the first premise, that all rape is motivated by a desire to oppress and subjugate women. I’m not saying that’s never a motivation—I’m simply saying we can’t assume it is always.

The truth is, I don’t know how to end violence, and sexual violence may be the most difficult form of violence to eliminate. I’m not even sure what most of us can do to make any difference at all. For now, the best thing to do is probably to donate money to organizations like RAINN, NAESV and NSVRC. Even $10 to one of these organizations will do more to help survivors of sexual violence than hours of ruminating on your own complicity—and cost you a lot less.

What if we cared for everyone equally?

Oct 11 JDN 2459134

Imagine for a moment a hypothetical being who was a perfect utilitarian, who truly felt at the deepest level an equal caring for all human beings—or even all life.

We often imagine that such a being would be perfectly moral, and sometimes chide ourselves for failing so utterly to live up to its ideal. Today I’d like to take a serious look at how such a being would behave, and ask whether it is really such a compelling ideal after all.

I cannot feel sadness at your grandmother’s death, for over 150,000 people die every day. By far the greatest loss of QALYs comes from the deaths of children in the poorest countries, and I feel sad for them as an aggregate, but I cannot feel particularly saddened by any individual one.

I cannot feel happiness at your wedding or the birth of your child, for 50,000 couples marry every day, and another 30,000 divorce. 350,000 children are born every day, so why should I care about yours?

My happiness does not change from hour to hour or day to day, except as a slow, steady increase over time that is occasionally interrupted briefly by sudden disasters like hurricanes or tsunamis. 2020 was the saddest year I’ve had in a while, as for once there was strongly correlated suffering across the globe sufficient to break through the trend of steadily increasing prosperity.

Should we go out with friends for drinks or dinner or games, I’ll be ever-so-slightly happier, to some barely perceptible degree, provided that there is no coincidental event which causes more than the baseline rate of global suffering that day. And I’d be just as happy to learn that someone else I’d never met went out to dinner with someone else I’d also never met.

Of course I love you, my dear: Precisely as much as I love the other eight billion people on Earth.

I hope now that you can see how flat, how bleak, how inhuman such a being’s experience would be. We might sometimes wish some respite from the roller coaster ride of our own emotional experiences, but in its place this creature feels almost nothing at all, just a vague sense of gradually increasing contentment which is occasionally interrupted by fleeting deviations from the trend.

Such a being is incapable of feeling love as we would recognize it—for a mind such as ours could not possibly feel so intensely for a billion people at once. To love all the people of the world equally, and still have anything resembling a human mind, is to love no one at all.

Perhaps we should not feel so bad that we are not such creatures, then?

Of course I do not mean to say that we should care nothing for distant strangers in foreign lands, or even that the tiny amount most people seem to care is adequate. We should care—and we should care more, and do more, than most people do.

But I do mean to say that it is possible to care too much about other people far away, an idea that probably seems obvious to some but radical to others. The human capacity for caring is not simply zero-sum—there are those who care more overall and those who care less—but I do believe that it is limited: At some point you begin to sacrifice so much for those you have no attachments to that you begin to devalue your own attachments.

There is an interior optimum: We should care enough, but not too much. We should sacrifice some things, but not everything. Those closest to us should matter more than those further away—but both should matter. Where exactly to draw that line is a very difficult question, which has stumped far greater philosophers than I; but at least we can narrow the space and exclude the endpoints.

This may even make a certain space for morally justifying selfishness. Surely it does not justify total, utter selfishness with no regard for the suffering of others. But it defends self-care at the very least, and perhaps can sweep away some of the feelings of guilt we may have from being fortunate or prevailing in fair competition. Yes, much of what you have was gained by sheer luck, and even much of what you have earned, you earned by out-competing someone else nearly as deserving. But this is true of everyone, and as long as you played fair, you’ve not done wrong by doing better. There’s even good reason to think that a system which allocates its privileges by fair competition is a particularly efficient one, one which ultimately raises the prosperity of all.

If nothing else, reflecting on this has made me feel better about giving 8% of my gross income to charity instead of 20% or 50% or even 80%. And if even 8% is too much for you, try 2% or even 1%.

Moral disagreement is not bad faith

Jun 7 JDN 2459008

One of the most dangerous moves to make in an argument is to accuse your opponent of bad faith. It’s a powerful, and therefore tempting, maneuver: If they don’t even really believe what they are saying, then you can safely ignore basically whatever comes out of their mouth. And part of why this is so tempting is that it is in fact occasionally true—people do sometimes misrepresent their true beliefs in various ways for various reasons. On the Internet especially, sometimes people are just trolling.

But unless you have really compelling evidence that someone is arguing in bad faith, you should assume good faith. You should assume that whatever they are asserting is what they actually believe. For if you assume bad faith and are wrong, you have just cut off any hope of civil discourse between the two of you. You have made it utterly impossible for either side to learn anything or change their mind in any way. If you assume good faith and are wrong, you may have been overly charitable; but in the end you are the one that is more likely to persuade any bystanders, not the one who was arguing in bad faith.

Furthermore, it is important to really make an effort to understand your opponent’s position as they understand it before attempting to respond to it. Far too many times, I have seen someone accused of bad faith by an opponent who simply did not understand their worldview—and did not even seem willing to try to understand their worldview.

In this post, I’m going to point out some particularly egregious examples of this phenomenon that I’ve found, all statements made by left-wing people in response to right-wing people. Why am I focusing on these? Well, for one thing, it’s as important to challenge bad arguments on your own side as it is to do so on the other side. I also think I’m more likely to be persuasive to a left-wing audience. I could find right-wing examples easily enough, but I think it would be less useful: It would be too tempting to think that this is something only the other side does.

Example 1: “Republicans Have Stopped Pretending to Care About Life”

The phrase “pro-life” means thinking that abortion is wrong. That’s all it means. It’s jargon at this point. The phrase has taken on this meaning independent of its constituent parts, just as a red herring need not be either red or a fish.

Stop accusing people of not being “truly pro-life” because they don’t adopt some other beliefs that are not related to abortion. Even if those would be advancing life in some sense (most people probably think that most things they think are good advance life in some sense!), they aren’t relevant to the concept of being “pro-life”. Moreover, being “pro-life” in the traditional conservative sense isn’t even about minimizing the harm of abortion or the abortion rate. It’s about emphasizing the moral wrongness of abortion itself, and often even criminalizing it.


I don’t think this is really so hard to understand. If someone truly, genuinely believes that abortion is murdering a child, it’s quite clear why they won’t be convinced by attempts at minimizing harm or trying to reduce the abortion rate via contraception or other social policy. Many policies are aimed at “reducing the demand for abortion”; would you want to “reduce the demand for murder”? No, you’d want murderers to be locked up. You wouldn’t care what their reasons were, and you wouldn’t be interested in using social policies to address those reasons. It’s not even hard to understand why this would be such an important issue to them, overriding almost anything else: If you thought that millions of people were murdering children you would consider that an extremely important issue too.

If you want to convince people to support Roe v. Wade, you’re going to have to change their actual belief that abortion is murder. You may even be able to convince them that they don’t really think abortion is murder—many conservatives support the death penalty for murder, but very few do so for abortion. But they clearly do think that abortion is a grave moral wrong, and you can’t simply end-run around that by calling them hypocrites because they don’t care about whatever other issue you think they should care about.

Example 2: “Stop pretending to care about human life if you support wars in the Middle East”

I had some trouble finding the exact wording of the meme I originally saw with this sentiment, but the gist of it was basically that if you support bombing Afghanistan, Libya, Iraq, and/or Syria, you have lost all legitimacy to claiming that you care about human life.

Say what you will about these wars (though to be honest I think what the US has done in Libya and Syria has done more good than harm), but simply supporting a war does not automatically undermine all your moral legitimacy. The kind of radical pacifism that requires us to never kill anyone ever is utterly unrealistic; the question is and has always been “Which people is it okay to kill, when and how and why?” Some wars are justified; we have to accept that.

It would be different if these were wars of genocidal extermination; I can see a case for saying that anyone who supported the Holocaust or the Rwandan Genocide has lost all moral legitimacy. But even then it isn’t really accurate to say that those people don’t care about human life; it’s much more accurate to say that they have assigned the group of people they want to kill to a subhuman status. Maybe you would actually get more traction by saying “They are human beings too!” rather than by accusing people of not believing in the value of human life.

And clearly these are not wars of extermination—if the US military wanted to exterminate an entire nation of people, they could do so much more efficiently than by using targeted airstrikes and conventional warfare. Remember: They have nuclear weapons. Even if you think that they wouldn’t use nukes because of fear of retaliation (Would Russia or China really retaliate using their own nukes if the US nuked Afghanistan or Iran?), it’s clear that they could have done a lot more to kill a lot more innocent people if that were actually their goal. It’s one thing to say they don’t take enough care not to kill innocent civilians—I agree with that. It’s quite another to say that they actively try to kill innocent civilians—that’s clearly not what is happening.

Example 3: “Stop pretending to be Christian if you won’t help the poor.”

This one I find a good deal more tempting: In the Bible, Jesus does spend an awful lot more words on helping the poor than he does on, well, almost anything else; and he doesn’t even once mention abortion or homosexuality. (The rest of the Bible does at least mention homosexuality, but it really doesn’t have any clear mentions of abortion.) So it really is tempting to say that anyone who doesn’t make helping the poor their number one priority can’t really be a Christian.

But the world is more complicated than that. People can truly and deeply believe some aspects of a religion while utterly rejecting others. They can do this more or less arbitrarily, in a way that may not even be logically coherent. They may even honestly believe every single word of the Bible to be the absolute perfect truth of an absolute perfect God, and yet there are still passages you could point them to that they would have to admit they don’t believe in. (There are literally hundreds of explicit contradictions in the Bible. Many are minor—though they still undermine any claim to absolute perfect truth—but some are really quite substantial. Does God forgive and forget, or does he visit revenge upon generations to come? That’s kind of a big deal! And should we be answering fools or not?) In some sense they don’t really believe that every word is true, then; but they do seem to believe in believing it.

Yes, it’s true; people can worship a penniless son of a carpenter who preached peace and charity and at the same time support cutting social welfare programs and bombing the Middle East. Such a worldview may not be entirely self-consistent; it’s certainly not the worldview that Jesus himself espoused. But it nevertheless is quite sincerely believed by many millions of people.

It may still be useful to understand the Bible in order to persuade Christians to help the poor more. There are certainly plenty of passages you can point them to where Jesus talks about how important it is to help the poor. Likewise, Jesus doesn’t seem to like the rich much, so it is fair to ask: How Christian is it for Republicans to keep cutting taxes on the rich? (I literally laughed out loud when I first saw this meme: “Celebrate Holy Week By Flogging a Banker: It’s What Jesus Would Have Done!”) But you should not accuse people of “pretending to be Christian”. They really do strongly identify themselves as Christian, and would sooner give up almost anything else about their identity. If you accuse them of pretending, all that will do is shut down the conversation.

Now, after all that, let me give one last example that doesn’t fit the trend, one example where I really do think the other side is acting in bad faith.


Example 4: “#AllLivesMatter is a lie. You don’t actually think all lives matter.”

I think this one is actually true. If you truly believed that all lives matter, you wouldn’t post the hashtag #AllLivesMatter in response to #BlackLivesMatter protests against police brutality.

First of all, you’d probably be supporting those protests. But even if you didn’t for some reason, that isn’t how you would use the hashtag. As a genuine expression of caring, the hashtag #AllLivesMatter would only really make sense for something like Oxfam or UNICEF: Here are these human lives that are in danger and we haven’t been paying enough attention to them, and here, you can follow my hashtag and give some money to help them because all lives matter. If it were really about all lives mattering, then you’d see the hashtag pop up after a tsunami in Southeast Asia or a famine in central Africa. (For a while I tried actually using it that way; I quickly found that it was overwhelmed by the bad faith usage and decided to give up.)

No, this hashtag really seems to be trying to use a genuinely reasonable moral norm—all lives matter—as a weapon against a political movement. We don’t see #AllLivesMatter popping up asking people to help save some lives—it’s always as a way of shouting down other people who want to save some lives. It’s a glib response that lets you turn away and ignore their pleas, without ever actually addressing the substance of what they are saying. If you really believed that all lives matter, you would not be so glib; you would want to understand how so many people are suffering and want to do something to help them. Even if you ultimately disagreed with what they were saying, you would respect them enough to listen.

The counterpart #BlueLivesMatter isn’t in bad faith, but it is disturbing in a different way: What are ‘blue lives’? People aren’t born police officers. They volunteer for that job. They can quit if they want. No one can quit being Black. Working as a police officer isn’t even especially dangerous! But it’s not a bad faith argument: These people really do believe that the lives of police officers are worth more—apparently much more—than the lives of Black civilians.

I do admit, the phrasing “#BlackLivesMatter” is a bit awkward, and could be read to suggest that other lives don’t matter, but it takes about two minutes of talking to someone (or reading a blog by someone) who supports those protests to gather that this is not their actual view. Perhaps they should have used #BlackLivesMatterToo, but when your misconception is that easily rectified, the responsibility to avoid it falls on you. (Then again, some people do seem to stoke this misconception: I was quite annoyed when a question was asked at a Democratic debate: “Do Black Lives Matter, or Do All Lives Matter?” The correct answer of course is “All lives matter, which is why I support the Black Lives Matter movement.”)

So, yes, bad faith arguments do exist, and sometimes we need to point them out. But I implore you, consider that a last resort, a nuclear option you’ll only deploy when all other avenues have been exhausted. Once you accuse someone of bad faith, you have shut down the conversation completely—preventing you, them, and anyone else who was listening from having any chance of learning or changing their mind.

Is Singularitarianism a religion?

Nov 17 JDN 2458805

I said in last week’s post that Pascal’s Mugging provides some deep insights into both Singularitarianism and religion. In particular, it explains why Singularitarianism seems so much like a religion.

This has been previously remarked, of course. I think Eric Steinhart makes the best case for Singularitarianism as a religion:

I think singularitarianism is a new religious movement. I might add that I think Clifford Geertz had a pretty nice (though very abstract) definition of religion. And I think singularitarianism fits Geertz’s definition (but that’s for another time).

My main interest is this: if singularitarianism is a new religious movement, then what should we make of it? Will it mainly be a good thing? A kind of enlightenment religion? It might be an excellent alternative to old-fashioned Abrahamic religion. Or would it degenerate into the well-known tragic pattern of coercive authority? Time will tell; but I think it’s worth thinking about this in much more detail.

To be clear: Singularitarianism is probably not a religion. It is certainly not a cult, as some have even more harshly accused it of being; the behaviors it prescribes are largely normative, pro-social behaviors, and therefore it would at worst be a mainstream religion. Really, if every religion only inspired people to do things like donate to famine relief and work on AI research (as opposed to, say, beheading gay people), I wouldn’t have much of a problem with religion.

In fact, Singularitarianism has one vital advantage over religion: Evidence. While the evidence in favor of it is not overwhelming, there is enough evidential support to lend plausibility to at least a broad concept of Singularitarianism: Technology will continue rapidly advancing, achieving accomplishments currently only in our wildest imaginings; artificial intelligence surpassing human intelligence will arise, sooner than many people think; human beings will change ourselves into something new and broadly superior; these posthumans will go on to colonize the galaxy and build a grander civilization than we can imagine. I don’t know that these things are true, but I hope they are, and I think it’s at least reasonably likely. All I’m really doing is extrapolating based on what human civilization has done so far and what we are currently trying to do now. Of course, we could well blow ourselves up before then, or regress to a lower level of technology, or be wiped out by some external force. But there’s at least a decent chance that we will continue to thrive for another million years to come.

But yes, Singularitarianism does in many ways resemble a religion: It offers a rich, emotionally fulfilling ontology combined with ethical prescriptions that require particular behaviors. It promises us a chance at immortality. It inspires us to work toward something much larger than ourselves. More importantly, it makes us special—we are among the unique few (millions?) who have the power to influence the direction of human and posthuman civilization for a million years. The stronger forms of Singularitarianism even have a flavor of apocalypse: When the AI comes, sooner than you think, it will immediately reshape everything at effectively infinite speed, so that from one year—or even one moment—to the next, our whole civilization will be changed. (These forms of Singularitarianism are substantially less plausible than the broader concept I outlined above.)

It’s this sense of specialness that Pascal’s Mugging provides some insight into. When it is suggested that we are so special, we should be inherently skeptical, not least because it feels good to hear that. (As Less Wrong would put it, we need to avoid a Happy Death Spiral.) Human beings like to feel special; we want to feel special. Our brains are configured to seek out evidence that we are special and reject evidence that we are not. This is true even to the point of absurdity: One cannot be mathematically coherent without admitting that, in a world of seven billion people, the compliment “You’re one in a million” is equivalent to the statement “There are seven thousand people as good or better than you”—and yet the latter seems much worse, because it does not make us sound special.

Indeed, the connection between Pascal’s Mugging and Pascal’s Wager is quite deep: Each argument takes a tiny probability and multiplies it by a huge impact in order to get a large expected utility. This often seems to be the way that religions defend themselves: Well, yes, the probability is small; but can you take the chance? Can you afford to take that bet if it’s really your immortal soul on the line?

And Singularitarianism has a similar case to make, even aside from the paradox of Pascal’s Mugging itself. The chief argument for why we should be focusing all of our time and energy on existential risk is that the potential payoff is just so huge that even a tiny probability of making a difference is enough to make it the only thing that matters. We should be especially suspicious of that; anything that says it is the only thing that matters is to be doubted with utmost care. The really dangerous religion has always been the fanatical kind that says it is the only thing that matters. That’s the kind of religion that makes you crash airliners into buildings.

I think some people may well have become Singularitarians because it made them feel special. It is exhilarating to be one of these lone few—and in the scheme of things, even a few million is a small fraction of all past and future humanity—with the power to effect some shift, however small, in the probability of a far grander, far brighter future.

Yet, in fact this is very likely the circumstance in which we are. We could have been born in the Neolithic, struggling to survive, utterly unaware of what would come a few millennia hence; we could have been born in the posthuman era, one of a trillion other artist/gamer/philosophers living in a world where all the hard work that needed to be done is already done. In the long S-curve of human development, we could have been born in the flat part on the left or the flat part on the right—and by all probability, we should have been; most people were. But instead we happened to be born in that tiny middle slice, where the curve slopes upward at its fastest. I suppose somebody had to be, and it might as well be us.

[Figure: the S-curve (sigmoid) of human development, with our era in the steep middle]

A priori, we should doubt that we were born so special. And when forming our beliefs, we should compensate for the fact that we want to believe we are special. But we do in fact have evidence, lots of evidence. We live in a time of astonishing scientific and technological progress.

My lifetime has included the progression from Deep Thought first beating David Levy to the creation of a computer one millimeter across that runs on a few nanowatts and nevertheless has ten times as much computing power as the 80-pound computer that ran the Saturn V. (The human brain runs on about 100 watts, and has a processing power of about 1 petaflop, so we can say that our energy efficiency is about 10 TFLOPS/W. The M3 runs on about 10 nanowatts and has a processing power of about 0.1 megaflops, so its energy efficiency is also about 10 TFLOPS/W. We did it! We finally made a computer as energy-efficient as the human brain! But we have still not matched the brain in terms of space-efficiency: The volume of the human brain is about 1000 cm^3, so our space efficiency is about 1 TFLOPS/cm^3. The volume of the M3 is about 1 mm^3, so its space efficiency is only about 100 MFLOPS/cm^3. The brain still wins by a factor of 10,000.)
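
To make that back-of-the-envelope arithmetic explicit, here is a minimal sketch in Python, using only the rough order-of-magnitude figures quoted above (all of them approximations, not precise measurements):

# Rough order-of-magnitude figures quoted in the text (approximations only)
brain_power_w = 100           # human brain power draw, watts
brain_flops = 1e15            # ~1 petaflop
brain_volume_cm3 = 1000       # ~1000 cubic centimeters

m3_power_w = 10e-9            # Michigan Micro Mote (M3), ~10 nanowatts
m3_flops = 1e5                # ~0.1 megaflops
m3_volume_cm3 = 1e-3          # ~1 cubic millimeter

# Energy efficiency (FLOPS per watt): both come out to ~1e13, i.e. 10 TFLOPS/W
print(brain_flops / brain_power_w, m3_flops / m3_power_w)

# Space efficiency (FLOPS per cm^3): ~1e12 (1 TFLOPS/cm^3) vs. ~1e8 (100 MFLOPS/cm^3)
brain_density = brain_flops / brain_volume_cm3
m3_density = m3_flops / m3_volume_cm3
print(brain_density / m3_density)   # ~10,000: the brain still wins on space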

My mother saw us go from the first jet airliners to landing on the Moon to the International Space Station and robots on Mars. She grew up before the polio vaccine and is still alive to see the first 3D-printed human heart. When I was a child, smartphones didn’t even exist; now more people have smartphones than have toilets. I may yet live to see the first human beings set foot on Mars. The pace of change is utterly staggering.

Without a doubt, this is sufficient evidence to believe that we, as a civilization, are living in a very special time. The real question is: Are we, as individuals, special enough to make a difference? And if we are, what weight of responsibility does this put upon us?

If you are reading this, odds are the answer to the first question is yes: You are definitely literate, and most likely educated, probably middle- or upper-middle-class in a First World country. Countries are something I can track, and I do get some readers from non-First-World countries; and of course I don’t observe your education or socioeconomic status. But at an educated guess, this is surely my primary reading demographic. Even if you don’t have the faintest idea what I’m talking about when I use Bayesian logic or calculus, you’re already quite exceptional. (And if you do? All the more so.)

That means the second question must apply: What do we owe these future generations who may come to exist if we play our cards right? What can we, as individuals, hope to do to bring about this brighter future?

The Singularitarian community will generally tell you that the best thing to do with your time is to work on AI research, or, failing that, the best thing to do with your money is to give it to people working on artificial intelligence research. I’m not going to tell you not to work on AI research or donate to AI research, as I do think it is among the most important things humanity needs to be doing right now, but I’m also not going to tell you that it is the one single thing you must be doing.

You should almost certainly be donating somewhere, but I’m not so sure it should be to AI research. Maybe it should be famine relief, or malaria prevention, or medical research, or human rights, or environmental sustainability. If you’re in the United States (as I know most of you are), the best thing to do with your money may well be to support political campaigns, because US political, economic, and military hegemony means that as goes America, so goes the world. Stop and think for a moment how different the prospects of global warming might have been—how many millions of lives might have been saved!—if Al Gore had become President in 2001. For lack of a few million dollars in Tampa twenty years ago, Miami may be gone in fifty. If you’re not sure which cause is most important, just pick one; or better yet, donate to a diversified portfolio of charities and political campaigns. Diversified investment isn’t just about monetary return.

And you should think carefully about what you’re doing with the rest of your life. This can be hard to do; we can easily get so caught up in just getting through the day, getting through the week, just getting by, that we lose sight of having a broader mission in life. Of course, I don’t know what your situation is; it’s possible things really are so desperate for you that you have no choice but to keep your head down and muddle through. But you should also consider the possibility that this is not the case: You may not be as desperate as you feel. You may have more options than you know. Most “starving artists” don’t actually starve. More people regret staying in their dead-end jobs than regret quitting to follow their dreams. I guess if you stay in a high-paying job in order to earn to give, that might really be ethically optimal; but I doubt it will make you happy. And in fact some of the most important fields are constrained by a lack of good people doing good work, and not by a simple lack of funding.

I see this especially in economics: As a field, economics is really not focused on the right kind of questions. There’s far too much prestige for incrementally adjusting some overcomplicated unfalsifiable mess of macroeconomic algebra, and not nearly enough for trying to figure out how to mitigate global warming, how to turn back the tide of rising wealth inequality, or what happens to human society once robots take all the middle-class jobs. Good work is being done in devising measures to fight poverty directly, but not in devising means to undermine the authoritarian regimes that are responsible for maintaining poverty. Formal mathematical sophistication is prized, and deep thought about hard questions is eschewed. We are carefully arranging the pebbles on our sandcastle in front of the oncoming tidal wave. I won’t tell you that it’s easy to change this—it certainly hasn’t been easy for me—but I have to imagine it’d be easier with more of us trying rather than with fewer. Nobody needs to donate money to economics departments, but we definitely do need better economists running those departments.

You should ask yourself what it is that you are really good at, what you—you yourself, not anyone else—might do to make a mark on the world. This is not an easy question: I have not quite answered for myself whether I would make more difference as an academic researcher, a policy analyst, a nonfiction author, or even a science fiction author. (If you scoff at the latter: Who would have any concept of AI, space colonization, or transhumanism, if not for science fiction authors? The people who most tilted the dial of human civilization toward this brighter future may well be Clarke, Roddenberry, and Asimov.) It is not impossible to be some combination or even all of these, but the more I try to take on the more difficult my life becomes.

Your own path will look different than mine, different, indeed, than anyone else’s. But you must choose it wisely. For we are very special individuals, living in a very special time.

Moral luck: How it matters, and how it doesn’t

Feb 10 JDN 2458525

The concept of moral luck is now relatively familiar to most philosophers, but I imagine most other people haven’t heard it before. It sounds like a contradiction, which is probably why it drew so much attention.

The term “moral luck” seems to have originated in an essay by Thomas Nagel, but the intuition is much older, dating at least back to Greek philosophy (and really probably older than that; we just don’t have good records that far back).

The basic argument is this:

Most people would say that if you had no control over something, you can’t be held morally responsible for it. It was just luck.

But if you look closely, everything we do—including things we would conventionally regard as moral actions—depends heavily on things we don’t have control over.

Therefore, either we can be held responsible for things we have no control over, or we can’t be held responsible for anything at all!

Neither approach seems very satisfying; hence the conundrum.

For example, consider four drivers:

Anna is driving normally, and nothing of note happens.

Bob is driving recklessly, but nothing of note happens.

Carla is driving normally, but a child stumbles out into the street and she runs the child over.

Dan is driving recklessly, and a child stumbles out into the street and he runs the child over.

The presence or absence of a child in the street was not in the control of any of the four drivers. Yet I think most people would agree that Dan should be held more morally responsible than Bob, and Carla should be held more morally responsible than Anna. (Whether Bob should be held more morally responsible than Carla is not as clear.) Yet both Bob and Dan were driving recklessly, and both Anna and Carla were driving normally. The moral evaluation seems to depend upon the presence of the child, which was not under the drivers’ control.

Other philosophers have argued that the difference is an epistemic one: We know the moral character of someone who drove recklessly and ran over a child better than the moral character of someone who drove recklessly and didn’t run over a child. But do we, really?

Another response is simply to deny that we should treat Bob and Dan any differently, and say that reckless driving is reckless driving, and safe driving is safe driving. For this particular example, maybe that works. But it’s not hard to come up with better examples where that doesn’t work:

Ted is a psychopathic serial killer. He kidnaps, rapes, and murders people. Maybe he can control whether or not he rapes and murders someone. But the reason he rapes and murders someone is that he is a psychopath. And he can’t control that he is a psychopath. So how can we say that his actions are morally wrong?

Obviously, we want to say that his actions are morally wrong.

I have heard one alternative, which is to consider psychopaths as morally equivalent to viruses: Zero culpability, zero moral value, something morally neutral but dangerous that we should contain or eradicate as swiftly as possible. HIV isn’t evil; it’s just harmful. We should kill it not because it deserves to die, but because it will kill us if we don’t. On this theory, Ted doesn’t deserve to be executed; it’s just that we must execute him in order to protect ourselves from the danger he poses.

But this quickly becomes unsatisfactory as well:

Jonas is a medical researcher whose work has saved millions of lives. Maybe he can control the research he works on, but he only works on medical research because he was born with a high IQ and strong feelings of compassion. He can’t control that he was born with a high IQ and strong feelings of compassion. So how can we say his actions are morally right?

This is the line of reasoning that quickly leads to saying that all actions are outside our control, and therefore morally neutral; and then the whole concept of morality falls apart.

So we need to draw the line somewhere; there has to be a space of things that aren’t in our control, but nonetheless carry moral weight. That’s moral luck.

Philosophers have actually identified four types of moral luck, which turns out to be tremendously useful in drawing that line.

Resultant luck is luck that determines the consequences of your actions, how things “turn out”. Happening to run over the child because you couldn’t swerve fast enough is resultant luck.

Circumstantial luck is luck that determines the sorts of situations you are in, and what moral decisions you have to make. A child happening to stumble across the street is circumstantial luck.

Constitutive luck is luck that determines who you are, your own capabilities, virtues, intentions and so on. Having a high IQ and strong feelings of compassion is constitutive luck.

Causal luck is the inherent luck written into the fabric of the universe that determines all events according to the fundamental laws of physics. Causal luck is everything and everywhere; it is written into the universal wavefunction.

I have a very strong intuition that this list is ordered; going from top to bottom makes things “less luck” in a vital sense.

Resultant luck is pure luck, what we originally meant when we said the word “luck”. It’s the roll of the dice.

Circumstantial luck is still mostly luck, but maybe not entirely; there are some aspects of it that do seem to be under our control.

Constitutive luck is maybe luck, sort of, but not really. Yes, “You’re lucky to be so smart” makes sense, but “You’re lucky to not be a psychopath” already sounds pretty weird. We’re entering territory here where our ordinary notions of luck and responsibility really don’t seem to apply.

Causal luck is not luck at all. Causal luck is really the opposite of luck: Without a universe with fundamental laws of physics to maintain causal order, none of our actions would have any meaning at all. They wouldn’t even really be actions; they’d just be events. You can’t do something in a world of pure chaos; things only happen. And being made of physical particles doesn’t make you any less what you are; a table made of wood is still a table, and a rocket made of steel is still a rocket. Thou art physics.

And that, my dear reader, is the solution to the problem of moral luck. Forget “causal luck”, which isn’t luck at all. Then, draw a hard line at constitutive luck: regardless of how you became who you are, you are responsible for what you do.

You don’t need to have control over who you are (what would that even mean!?).

You merely need to have control over what you do.

This is how the word “control” is normally used, by the way; when we say that a manufacturing process is “under control” or a pilot “has control” of an airplane, we aren’t asserting some grand metaphysical claim of ultimate causation. We’re merely saying that the system is working as it’s supposed to; the outputs coming out are within the intended parameters. This is all we need for moral responsibility as well.

In some cases, maybe people’s brains really are so messed up that we can’t hold them morally responsible; they aren’t “under control”. Okay, we’re back to the virus argument then: Contain or eradicate. If a brain tumor makes you so dangerous that we can’t trust you around sharp objects, unless we can take out that tumor, we’ll need to lock you up somewhere where you can’t get any sharp objects. Sorry. Maybe you don’t deserve that in some ultimate sense, but it’s still obviously what we have to do. And this is obviously quite exceptional; most people are not suffering from brain tumors that radically alter their personalities—and even most psychopaths are otherwise neurologically normal.

Ironically, it’s probably my fellow social scientists who will scoff the most at this answer. “But so much of what we are is determined by our neurochemistry/cultural norms/social circumstances/political institutions/economic incentives!” Yes, that’s true. And if we want to change those things to make us and others better, I’m all for it. (Well, neurochemistry is a bit problematic, so let’s focus on the others first—but if you can make a pill that cures psychopathy, I would support mandatory administration of that pill to psychopaths in positions of power.)

When you make a moral choice, we have to hold you responsible for that choice.

Maybe Ted is psychopathic and sadistic because there was too much lead in his water as a child. That’s a good reason to stop putting lead in people’s water (like we didn’t already have plenty!); but it’s not a good reason to let Ted off the hook for all those rapes and murders.

Maybe Jonas is intelligent and compassionate because his parents were wealthy and well-educated. That’s a good reason to make sure people are financially secure and well-educated (again, did we need more?); but it’s not a good reason to deny Jonas his Nobel Prize for saving millions of lives.

Yes, “personal responsibility” has been used by conservatives as an excuse to not solve various social and economic problems (indeed, it has specifically been used to stop regulations on lead in water and public funding for education). But that’s not actually anything wrong with personal responsibility. We should hold those conservatives personally responsible for abusing the term in support of their destructive social and economic policies. No moral freedom is lost by preventing lead from turning children into psychopaths. No personal liberty is destroyed by ensuring that everyone has access to a good education.

In fact, there is evidence that telling people who are suffering from poverty or oppression that they should take personal responsibility for their choices benefits them. Self-perceived victimhood is linked to all sorts of destructive behaviors, even controlling for prior life circumstances. Feminist theorists have written about how taking responsibility even when you are oppressed can empower you to make your life better. Yes, obviously, we should be helping people when we can. But telling them that they are hopeless unless we come in to rescue them isn’t helping them.

This way of thinking may require a delicate balance at times, but it’s not inconsistent. You can both fight against lead pollution and support the criminal justice system. You can believe in both public education and the Nobel Prize. We should be working toward a world where people are constituted with more virtue for reasons beyond their control, and where people are held responsible for the actions they take that are under their control.

We can continue to talk about “moral luck” referring to constitutive luck, I suppose, but I think the term obscures more than it illuminates. The “luck” that made you a good or a bad person is very different from the “luck” that decides how things happen to turn out.

What we could, what we should, and what we must

May 27 JDN 2458266

In one of the most famous essays in all of ethical philosophy, Peter Singer argued that we are morally obligated to give so much to charity that we would effectively reduce ourselves to poverty only slightly better than what our donations sought to prevent. His argument is a surprisingly convincing one, especially for such a radical proposition. Indeed, one of the core activities of the Effective Altruism movement has basically been finding ways to moderate Singer’s argument without giving up on its core principles, because it’s so obvious both that we ought to do much more to help people around the world and that there’s no way we’re ever going to do what that argument actually asks of us.

The most cost-effective charities in the world can save a human life for an average cost of under $4,000. The maneuver that Singer basically makes is quite simple: If you know that you could save someone’s life for $4,000, you have $4,000 to spend, and instead you spend that $4,000 on something else, aren’t you saying that whatever you did spend it on was more important than saving that person’s life? And is that really something you believe?

But if you think a little more carefully, it becomes clear that things are not quite so simple. You aren’t being paid $4,000 to kill someone, first of all. If you were willing to accept $4,000 as sufficient payment to commit a murder, you would be, quite simply, a monster. Implicitly the “infinite identical psychopath” of neoclassical rational agent models would be willing to do such a thing, but very few actual human beings—even actual psychopaths—are that callous.

Obviously, we must refrain from murdering people, even for amounts far in excess of $4,000. If you were offered the chance to murder someone for $4 billion, I can understand why you would be tempted to do such a thing. Think of what you could do with all that money! Not only would you and everyone in your immediate family be independently wealthy for life, you could donate billions of dollars to charity and save as much as a million lives. What’s one life for a million? Even then, I have a strong intuition that you shouldn’t commit this murder—but I have never been able to find a compelling moral argument for why. The best I’ve been able to come up with is a sort of Kantian notion: What if everyone did this?

Since the most plausible scenario is that the $4 billion comes from existing wealth, all those murders would simply be transferring wealth around, from unknown sources. If you stipulate where the wealth comes from, the dilemma can change quite a bit.

Suppose for example the $4 billion is confiscated from Bashar Al-Assad. That would be in itself a good thing, lessening the power of a genocidal tyrant. So we need to add that to the positive side of the ledger. It is probably worth killing one innocent person just to undermine Al-Assad’s power; indeed, the US Air Force certainly seems to think so, as they average more than one civilian fatality every day in airstrikes.

Now suppose the wealth was extracted by clever financial machinations that took just a few dollars out of every bank account in America. This would be in itself a bad thing, but perhaps not a terrible thing, especially since we’re planning on giving most of it to UNICEF. Those people should have given it anyway, right? This sounds like a pretty good movie, actually; a cyberpunk Robin Hood basically.

Next, suppose it was obtained by stealing the life savings of a million poor people in Africa. Now the method of obtaining the money is so terrible that it’s not clear that funneling it through UNICEF would compensate, even if you didn’t have to murder someone to get it.

Finally, suppose that the wealth is actually created anew—not printed money from the Federal Reserve, but some new technology that will increase the world’s wealth by billions of dollars yet requires the death of an innocent person to create. In this scenario, the murder has become something more like the inherent risk in human subjects biomedical research, and actually seems justifiable. And indeed, that fits with the Kantian answer, for if we all had the chance to kill one person in order to create something that would increase the wealth of the world by $4 billion, we could turn this planet into a post-scarcity utopia within a generation for fewer deaths than are currently caused by diabetes.

Anyway, my point here is that the detailed context of a decision actually matters a great deal. We can’t simply abstract away from everything else in the world and ask whether the money is worth the life.

When we consider this broader context with regard to the world’s most cost-effective charities, it becomes apparent that a small proportion of very dedicated people giving huge proportions of their income to charity is not the kind of world we want to see.

If I actually gave so much that I equalized my marginal utility of wealth to that of a child dying of malaria in Ghana, I would have to donate over 95% of my income—and well before that point, I would be homeless and impoverished. This actually seems penny-wise and pound-foolish even from the perspective of total altruism: If I stop paying rent, it gets a lot harder for me to finish my doctorate and become a development economist. And even if I never donated another dollar, the world would be much better off with one more good development economist than with even another $23,000 to the Against Malaria Foundation. Once you factor in the higher income I’ll have (and proportionately higher donations I’ll make), it’s obviously the wrong decision for me to give 95% of $25,000 today rather than 10% of $70,000 every year for the next 20 years after I graduate.
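
Just to spell out that comparison, here is a minimal sketch using the rough figures above (ignoring discounting, income growth, and any donations made along the way):

# Rough figures from the text; no discounting or income growth assumed
give_95_now = 0.95 * 25_000          # one-time gift from a grad-student income: $23,750
give_10_later = 0.10 * 70_000 * 20   # 10% of a post-graduation income for 20 years: $140,000
print(give_95_now, give_10_later)    # the sustained 10% beats the one-time 95% several times over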

But the optimal amount for me to donate from that perspective is whatever the maximum would be that I could give without jeopardizing my education and career prospects. This is almost certainly more than I am presently giving. Exactly how much more is actually not all that apparent: It’s not enough to say that I need to be able to pay rent, eat three meals a day, and own a laptop that’s good enough for programming and statistical analysis. There’s also a certain amount that I need for leisure, to keep myself at optimal cognitive functioning for the next several years. Do I need that specific video game, that specific movie? Surely not—but if I go the next ten years without ever watching another movie or playing another video game, I’m probably going to be in trouble psychologically. But what exactly is the minimum amount to keep me functioning well? And how much should I be willing to spend attending conferences? Those can be important career-building activities, but they can also be expensive wastes of time.

Singer acts as though jeopardizing your career prospects is no big deal, but this is clearly wrong: The harm isn’t just to your own well-being, but also to your productivity and earning power that could have allowed you to donate more later. You are a human capital asset, and you are right to invest in yourself. Exactly how much you should invest in yourself is a much harder question.

Such calculations are extremely difficult to do. There are all sorts of variables I simply don’t know, and don’t have any clear way of finding out. It’s not a good sign for an ethical theory when even someone with years of education and expertise on specifically that topic still can’t figure out the answer. Ethics is supposed to be something we can apply to everyone.

So I think it’s most helpful to think in those terms: What could we apply to everyone? What standard of donation would be high enough if we could get everyone on board?

World poverty is rapidly declining. The direct poverty gap at the UN poverty line of $1.90 per day is now only $80 billion. Realistically, we couldn’t simply close that gap precisely (there would also be all sorts of perverse incentives if we tried to do it that way). But the standard estimate that it would take about $300 billion per year in well-targeted spending to eliminate world hunger is looking very good.

How much would each person, just those in the middle class or above within the US or the EU, have to give in order to raise this much?

89% of US income is received by the top 60% of households (who I would say are unambiguously “middle class or above”). Income inequality is not as extreme within the EU, so the proportion of income received by the top 60% seems to be more like 75%.

89% of US GDP plus 75% of EU GDP is all together about $29 trillion per year. This means that in order to raise $300 billion, each person in the middle class or above would need to donate just over one percent of their income.
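
The arithmetic is easy to check directly (a minimal sketch using the rounded figures above):

# Rounded figures from the text
income_pool = 29e12      # ~$29 trillion/year received by the middle class and above in the US and EU
hunger_target = 300e9    # ~$300 billion/year to eliminate world hunger
print(hunger_target / income_pool)   # ~0.0103, i.e. just over 1% of income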

Not 95%. Not 25%. Not even 10%. Just 1%. That would be enough.

Of course, more is generally better—at least until you start jeopardizing your career prospects. So by all means, give 2% or 5% or even 10%. But I really don’t think it’s helpful to make people feel guilty about not giving 95% when all we really needed was for everyone to give 1%.

There is an important difference between what we could do, what we should do, and what we must do.

What we must do are moral obligations so strong they are essentially inviolable: We must not murder people. There may be extreme circumstances where exceptions can be made (such as collateral damage in war), and we can always come up with hypothetical scenarios that would justify almost anything, but for the vast majority of people the vast majority of time, these ethical rules are absolutely binding.

What we should do are moral obligations that are strong enough to be marks against your character if you break them, but not so absolutely binding that you have to be a monster not to follow them. This is where I put donating at least 1% of your income. (This is also where I put being vegetarian, but perhaps that is a topic for another time.) You really ought to do it, and you are doing something wrongful if you don’t—but most people don’t, and you are not a terrible person if you don’t.

This latter category is in part socially constructed, based on the norms people actually follow. Today, slavery is obviously a grave crime, and to be a human trafficker who participates in it you must be a psychopath. But two hundred years ago, things were somewhat different: Slavery was still wrong, yes, but it was quite possible to be an ordinary person who was generally an upstanding citizen in most respects and yet still own slaves. I would still condemn people who owned slaves back then, but not nearly as forcefully as I would condemn someone who owned slaves today. Two hundred years from now, perhaps vegetarianism will move up a category: The norm will be that everyone eats only plants, and someone who went out of their way to kill and eat a pig would have to be a psychopath. Eating meat is already wrong today—but it will be more wrong in the future. I’d say the same about donating 1% of your income, but actually I’m hoping that by two hundred years from now there will be no more poverty left to eradicate, and donation will no longer be necessary.

Finally, there is what we could do—supererogatory, even heroic actions of self-sacrifice that would make the world a better place, but cannot be reasonably expected of us. This is where donating 95% or even 25% of your income would fall. Yes, absolutely, that would help more people than donating 1%; but you don’t owe the world that much. It’s not wrong for you to contribute less than this. You don’t need to feel guilty for not giving this much.

But I do want to make you feel guilty if you don’t give at least 1%. Don’t tell me you can’t. You can. If your income is $30,000 per year, that’s $300 per year. If you needed that much for a car repair, or dental work, or fixing your roof, you’d find a way to come up with it. No one in the First World middle class is that liquidity-constrained. It is true that half of Americans say they couldn’t come up with $400 in an emergency, but I frankly don’t believe it. (I believe it for the bottom 25% or so, who are actually poor; but not half of Americans.) If you have even one credit card that’s not maxed out, you can do this—and frankly even if a card is maxed out, you can probably call them and get them to raise your limit. There is something you could cut out of your spending that would allow you to get back 1% of your annual income. I don’t know what it is, necessarily: Restaurants? Entertainment? Clothes? But I’m not asking you to give a third of your income—I’m asking you to give one penny out of every dollar.

I give considerably more than that; my current donation target is 8% and I’m planning on raising it to 10% or more once I get a high-paying job. I live on a grad student salary which is less than the median personal income in the US. So I know it can be done. But I am very intentionally not asking you to give this much; that would be above and beyond the call of duty. I’m only asking you to give 1%.

The vector geometry of value change

Post 239: May 20 JDN 2458259

This post is one of those where I’m trying to sort out my own thoughts on an ongoing research project, so it’s going to be a bit more theoretical than most, but I’ll try to spare you the mathematical details.

People often change their minds about things; that should be obvious enough. (Maybe it’s not as obvious as it might be, as the brain tends to erase its prior beliefs as wastes of data storage space.)

Most of the ways we change our minds are fairly minor: We get corrected about Napoleon’s birthdate, or learn that George Washington never actually chopped down any cherry trees, or look up the actual weight of an average African elephant and are surprised.

Sometimes we change our minds in larger ways: We realize that global poverty and violence are actually declining, when we thought they were getting worse; or we learn that climate change is actually even more dangerous than we thought.

But occasionally, we change our minds in an even more fundamental way: We actually change what we care about. We convert to a new religion, or change political parties, or go to college, or just read some very compelling philosophy books, and come out of it with a whole new value system.

Often we don’t anticipate that our values are going to change. That is important and interesting in its own right, but I’m going to set it aside for now, and look at a different question: What about the cases where we know our values are going to change?

Can it ever be rational for someone to choose to adopt a new value system?

Yes, it can—and I can put quite tight constraints on precisely when.

Here’s the part where I hand-wave the math, but imagine for a moment there are only two goods in the world that anyone would care about. (This is obviously vastly oversimplified, but it’s easier to think in two dimensions to make the argument, and it generalizes to n dimensions easily from there.) Maybe you choose a job caring only about money and integrity, or design policy caring only about security and prosperity, or choose your diet caring only about health and deliciousness.

I can then represent your current state as a vector, a two dimensional object with a length and a direction. The length describes how happy you are with your current arrangement. The direction describes your values—the direction of the vector characterizes the trade-off in your mind of how much you care about each of the two goods. If your vector is pointed almost entirely parallel with health, you don’t much care about deliciousness. If it’s pointed mostly at integrity, money isn’t that important to you.

This diagram shows your current state as a green vector.

[Figure: your current state, shown as a green vector]

Now suppose you have the option of taking some action that will change your value system. If that’s all it would do and you know that, you wouldn’t accept it. You will be no better off, and your value system will be different, which is bad from your current perspective. So here, you would not choose to move to the red vector:

[Figure: a red vector pointing in a different direction but no longer than the green vector]

But suppose that the action would change your value system, and make you better off. Now the red vector is longer than the green vector. Should you choose the action?

[Figure: a red vector pointing in a different direction that is now longer than the green vector]

It’s not obvious, right? From the perspective of your new self, you’ll definitely be better off, and that seems good. But your values will change, and maybe you’ll start caring about the wrong things.

I realized that the right question to ask is whether you’ll be better off from your current perspective. If you and your future self both agree that this is the best course of action, then you should take it.

The really cool part is that (hand-waving the math again) it’s possible to work this out as a projection of the new vector onto the old vector. A large change in values will be reflected as a large angle between the two vectors; to compensate for that you need a large change in length, reflecting a greater improvement in well-being.

If the projection of the new vector onto the old vector is longer than the old vector itself, you should accept the value change.

[Figure: the projection of the new (red) vector onto the old (green) vector is longer than the old vector]

If the projection of the new vector onto the old vector is shorter than the old vector, you should not accept the value change.

[Figure: the projection of the new (red) vector onto the old (green) vector is shorter than the old vector]

This captures the trade-off between increased well-being and changing values in a single number. It fits the simple intuitions that being better off is good, and changing values more is bad—but more importantly, it gives us a way of directly comparing the two on the same scale.
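
For the mathematically inclined, here is a minimal sketch of that criterion in Python (the particular numbers are made up purely for illustration): compute the scalar projection of the new state onto the old one and compare it with the old state’s length.

import numpy as np

def accept_value_change(old, new):
    # Accept the change iff the projection of the new vector onto the
    # old vector is longer than the old vector itself.
    old = np.asarray(old, dtype=float)
    new = np.asarray(new, dtype=float)
    projection = np.dot(new, old) / np.linalg.norm(old)
    return projection > np.linalg.norm(old)

# Illustrative (made-up) states on axes (money, integrity)
current = [3.0, 1.0]                 # mostly values money
better_but_different = [3.0, 4.0]    # shifted values, but much better off
worse_tradeoff = [0.5, 5.0]          # values shifted too far for the gain

print(accept_value_change(current, better_but_different))  # True
print(accept_value_change(current, worse_tradeoff))         # False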

This is a very simple model with some very profound implications. One is that certain value changes are impossible in a single step: If a value change would require you to take on values that are completely orthogonal or diametrically opposed to your own, no increase in well-being will be sufficient.

It doesn’t matter how long I make this red vector, the projection onto the green vector will always be zero. If all you care about is money, no amount of integrity will entice you to change.

[Figure: a red vector orthogonal to the green vector; its projection onto the green vector is zero no matter how long it gets]

But a value change that was impossible in a single step can be feasible, even easy, if conducted over a series of smaller steps. Here I’ve taken that same impossible transition, and broken it into five steps that now make it feasible. By offering a bit more money for more integrity, I’ve gradually weaned you into valuing integrity above all else:

[Figure: the same 90-degree transition broken into five smaller steps, each slightly longer than the last]
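
And here is a minimal sketch of that stepwise transition (the 10% well-being gain per step is an arbitrary assumption, chosen only so that each 18-degree step clears the projection test):

import numpy as np

theta = np.deg2rad(90 / 5)     # five equal steps of 18 degrees each
gain = 1.10                    # assumed 10% increase in well-being per step (illustrative)

v = np.array([1.0, 0.0])       # start: all you care about is money
for step in range(5):
    c, s = np.cos(theta), np.sin(theta)
    v_next = gain * np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])
    # Each step is acceptable iff the projection onto the current vector exceeds its length,
    # which holds here because 1.10 * cos(18 degrees) > 1.
    acceptable = np.dot(v_next, v) / np.linalg.norm(v) > np.linalg.norm(v)
    print(step + 1, acceptable)
    v = v_next

print(np.round(v, 3))          # ends up pointing (almost) entirely at integrity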

This provides a formal justification for the intuitive sense many people have of a “moral slippery slope” (commonly regarded as a fallacy). If you make small concessions to an argument that end up changing your value system slightly, and continue to do so many times, you could end up with radically different beliefs at the end, even diametrically opposed to your original beliefs. Each step was rational at the time you took it, but because you changed yourself in the process, you ended up somewhere you would not have wanted to go.

This is not necessarily a bad thing, however. If the reason you made each of those changes was actually a good one—you were provided with compelling evidence and arguments to justify the new beliefs—then the whole transition does turn out to be a good thing, even though you wouldn’t have thought so at the time.

This also allows us to formalize the notion of “inferential distance”: the inferential distance is the number of steps of value change required to make someone understand your point of view. It’s a function of both the difference in values and the difference in well-being between their point of view and yours.

Another key insight is that if you want to persuade someone to change their mind, you need to do it slowly, with small changes repeated many times, and you need to benefit them at each step. You can only persuade someone to change their minds if they will end up better off than they were at each step.

Is this an endorsement of wishful thinking? Not if we define “well-being” in the proper way. It can make me better off in a deep sense to realize that my wishful thinking was incorrect, so that I realize what must be done to actually get the good things I thought I already had.  It’s not necessary to appeal to material benefits; it’s necessary to appeal to current values.

But it does support the notion that you can’t persuade someone by belittling them. You won’t convince people to join your side by telling them that they are defective and bad and should feel guilty for being who they are.

If that seems obvious, well, maybe you should talk to some of the people who are constantly pushing “White privilege”. If you focused on how reducing racism would make people—even White people—better off, you’d probably be more effective. In some cases there would be direct material benefits: Racism creates inefficiency in markets that reduces overall output. But in other cases, sure, maybe there’s no direct benefit for the person you’re talking to; but you can talk about other sorts of benefits, like what sort of world they want to live in, or how proud they would feel to be part of the fight for justice. You can say all you want that they shouldn’t need this kind of persuasion, they should already believe and do the right thing—and you might even be right about that, in some ultimate sense—but do you want to change their minds or not? If you actually want to change their minds, you need to meet them where they are, make small changes, and offer benefits at each step.

If you don’t, you’ll just keep on projecting a vector orthogonally, and you’ll keep ending up with zero.

There is no problem of free will, just a lot of really confused people

Jan 15, JDN 2457769

I was hoping for some sort of news item to use as a segue, but none in particular emerged, so I decided to go on with it anyway. I haven’t done any cognitive science posts in a while, and this is one I’ve been meaning to write for a long time—actually it’s the sort of thing that even a remarkable number of cognitive scientists frequently get wrong, perhaps because the structure of human personality makes cognitive science inherently difficult.

Do we have free will?

The question has been asked so many times by so many people it is now a whole topic in philosophy. The Stanford Encyclopedia of Philosophy has an entire article on free will. The Information Philosopher has a gateway page “The Problem of Free Will” linking to a variety of subpages. There are even YouTube videos about “the problem of free will”.

The constant arguing back and forth about this would be problematic enough, but what really grates on me are the many, many people who write “bold” articles and books about how “free will does not exist”. Examples include Sam Harris and Jerry Coyne, and such pieces have been published in everything from Psychology Today to the Chronicle of Higher Education. There’s even a TED talk.

The worst ones are those that follow with “but you should believe in it anyway”. In The Atlantic we have “Free will does not exist. But we’re better off believing in it anyway.” Scientific American offers a similar view, “Scientists say free will probably doesn’t exist, but urge: “Don’t stop believing!””

This is a mind-bogglingly stupid approach. First of all, if you want someone to believe in something, you don’t tell them it doesn’t exist. Second, if something doesn’t exist, that is generally considered a pretty compelling reason not to believe in it. You’d need a really compelling counter-argument, and frankly I’m not even sure the whole idea is logically coherent. How can I believe in something if I know it doesn’t exist? Am I supposed to delude myself somehow?

But the really sad part is that it’s totally unnecessary. There is no problem of free will. There are just an awful lot of really, really confused people. (Fortunately not everyone is confused; there are those, such as Daniel Dennett, who actually understand what’s going on.)

The most important confusion is over what you mean by the phrase “free will”. There are really two core meanings here, and the conflation of them is about 90% of the problem.

1. Moral responsibility: We have “free will” if and only if we are morally responsible for our actions.

2. Noncausality: We have “free will” if and only if our actions are not caused by the laws of nature.

Basically, every debate over “free will” boils down to someone pointing out that noncausality doesn’t exist, and then arguing that this means that moral responsibility doesn’t exist. Then someone comes back and says that moral responsibility does exist, and then infers that this means noncausality must exist. Or someone points out that noncausality doesn’t exist, and then they realize how horrible it would be if moral responsibility didn’t exist, and then tells people they should go on believing in noncausality so that they don’t have to give up moral responsibility.

Let me be absolutely clear here: Noncausality could not possibly exist.

Noncausality isn’t even a coherent concept. Actions, insofar as they are actions, must, necessarily, by definition, be caused by the laws of nature.

I can sort of imagine an event not being caused; perhaps virtual electron-positron pairs can really pop into existence without ever being caused. (Even then I’m not entirely convinced; I think quantum mechanics might actually be deterministic at the most fundamental level.)

But an action isn’t just a particle popping into existence. It requires the coordinated behavior of some 10^26 or more particles, all in a precisely organized, unified way, structured so as to move some other similarly large quantity of particles through space in a precise way so as to change the universe from one state to another state according to some system of objectives. Typically, it involves human muscles intervening on human beings or inanimate objects. (Recently, a rather large share of the time, it specifically involves human fingers on computer keyboards!) If what you do is an action—not a muscle spasm, not a seizure, not a slip or a trip, but something you did on purpose—then it must be caused. And if something is caused, it must be caused according to the laws of nature, because the laws of nature are the laws underlying all causality in the universe!

And once you realize that, the “problem of free will” should strike you as one of the stupidest “problems” ever proposed. Of course our actions are caused by the laws of nature! Why in the world would you think otherwise?

If you think that noncausality is necessary—or even useful—for free will, what kind of universe do you think you live in? What kind of universe could someone live in, that would fit your idea of what free will is supposed to be?

It’s like I said in that much earlier post about The Basic Fact of Cognitive Science (we are our brains): If you don’t think a mind can be made of matter, what do you think minds are made of? What sort of magical invisible fairy dust would satisfy you? If you can’t even imagine something that would satisfy the constraints you’ve imposed, did it maybe occur to you that your constraints are too strong?

Noncausality isn’t worth fretting over for the same reason that you shouldn’t fret over the fact that pi is irrational and you can’t make a square circle. There is no possible universe in which that isn’t true. So if it bothers you, it’s not that there’s something wrong with the universe—it’s clearly that there’s something wrong with you. Your thinking on the matter must be too confused, too dependent on unquestioned intuitions, if you think that murder can’t be wrong unless 2+2=5.

In philosophical jargon I am called a “compatibilist” because I maintain that free will and determinism are “compatible”. But this is much too weak a term. I much prefer Eliezer Yudkowsky’s “requiredism”, which he explains in one of the greatest blog posts of all time (seriously, read it immediately if you haven’t before—I’m okay with you cutting off my blog post here and reading his instead, because it truly is that brilliant), entitled simply “Thou Art Physics”. This quote sums it up briefly:

My position might perhaps be called “Requiredism.” When agency, choice, control, and moral responsibility are cashed out in a sensible way, they require determinism—at least some patches of determinism within the universe. If you choose, and plan, and act, and bring some future into being, in accordance with your desire, then all this requires a lawful sort of reality; you cannot do it amid utter chaos. There must be order over at least those parts of reality that are being controlled by you. You are within physics, and so you/physics have determined the future. If it were not determined by physics, it could not be determined by you.

Free will requires a certain minimum level of determinism in the universe, because the universe must be orderly enough that actions make sense and there isn’t simply an endless succession of random events. Call me a “requiredist” if you need to call me something. I’d prefer you just realize the whole debate is silly because moral responsibility exists and noncausality couldn’t possibly.

We could of course use different terms besides “free will”. “Moral responsibility” is certainly a good one, but it is missing one key piece, which is the issue of why we can assign moral responsibility to human beings and a few other entities (animals, perhaps robots) and not to the vast majority of entities (trees, rocks, planets, tables), and why we are sometimes willing to say that even a human being does not have moral responsibility (infancy, duress, impairment).

This is why my favored term is actually “rational volition”. The characteristic that human beings have (at least most of us, most of the time), which also many animals and possibly some robots share (if not now, then soon enough), which justifies our moral responsibility is precisely our capacity to reason. Things don’t just happen to us the way they do to some 99.999999999% of the universe; we do things. We experience the world through our senses, have goals we want to achieve, and act in ways that are planned to make the world move closer to achieving those goals. We have causes, sure enough; but not just any causes. We have a specific class of causes, which are related to our desires and intentions—we call these causes reasons.

So if you want to say that we don’t have “free will” because that implies some mysterious nonsensical noncausality, sure; that’s fine. But then don’t go telling us that this means we don’t have moral responsibility, or that we should somehow try to delude ourselves into believing otherwise in order to preserve moral responsibility. Just recognize that we do have rational volition.

How do I know we have rational volition? That’s the best part, really: Experiments. While you’re off in la-la land imagining fanciful universes where somehow causes aren’t really causes even though they are, I can point to not only centuries of human experience but decades of direct, controlled experiments in operant conditioning. Human beings and most other animals behave quite differently in behavioral experiments than, say, plants or coffee tables. Indeed, it is precisely because of this radical difference that it seems foolish to even speak of a “behavioral experiment” about coffee tables—because coffee tables don’t behave, they just are. Coffee tables don’t learn. They don’t decide. They don’t plan or consider or hope or seek.
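
For what it’s worth, the basic logic of those experiments can be written down in a few lines. This is only a toy model I am making up for illustration, not a description of any particular experiment: responses that get rewarded become more likely, which is exactly the kind of update rule a coffee table does not have.

```python
import random

# Toy operant conditioning: two available responses, one of which is rewarded.
# Over repeated trials the rewarded response becomes far more likely.
propensity = {"press lever": 1.0, "ignore lever": 1.0}

def choose():
    total = sum(propensity.values())
    return "press lever" if random.uniform(0, total) < propensity["press lever"] else "ignore lever"

for trial in range(200):
    response = choose()
    reward = 1.0 if response == "press lever" else 0.0
    propensity[response] += reward  # reinforcement strengthens the response

print(propensity)  # "press lever" ends up far more probable than "ignore lever"
```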

Japanese, as it turns out, may be a uniquely good language for cognitive science, because it has two fundamentally different verbs for “to be” depending on whether the subject is animate. Humans and animals imasu, while inanimate objects merely arimasu. We have free will because and insofar as we imasu.

Once you get past that most basic confusion of moral responsibility with noncausality, there are a few other confusions you might run into as well. Another is the conflation of two senses of “reductionism”, which Dennett refers to as “ordinary” and “greedy”:

1. Ordinary reductionism: All systems in the universe are ultimately made up of components that always and everywhere obey the laws of nature.

2. Greedy reductionism: All systems in the universe just are their components, and have no existence, structure, or meaning aside from those components.

I actually had trouble formulating greedy reductionism as a coherent statement, because it’s such a nonsensical notion. Does anyone really think that a pile of two-by-fours is the same thing as a house? But people do speak as though they think this about human brains, when they say that “love is just dopamine” or “happiness is just serotonin”. Yet dopamine in a petri dish isn’t love, any more than a pile of two-by-fours is a house; and what I really can’t quite grok is why anyone would think otherwise.

Maybe they’re simply too baffled by the fact that love is made of dopamine (among other things)? They can’t quite visualize how that would work (nor can I, nor, I think, can anyone in the world at this level of scientific knowledge). You can see how the two-by-fours get nailed together and assembled into the house, but you can’t see how dopamine and action potentials would somehow combine into love.

But isn’t that a reason to say that love isn’t the same thing as dopamine, rather than that it is? I can understand why some people are still dualists who think that consciousness is somehow separate from the functioning of the brain. That’s wrong—totally, utterly, ridiculously wrong—but I can at least appreciate the intuition that underlies it. What I can’t quite grasp is why someone would go so far the other way and say that the consciousness they are currently experiencing does not exist.

Another thing that might confuse people is the fact that minds, as far as we know, are platform independent; that is, your mind could most likely be created out of a variety of different materials, from the gelatinous brain it currently is to some sort of silicon supercomputer, to perhaps something even more exotic. This independence follows from the widely believed Church-Turing thesis, which essentially says that all computation is computation, regardless of how it is done. This may not actually be right, but I see many reasons to think that it is, and if so, this means that minds aren’t really what they are made of at all—they could be made of lots of things. What makes a mind a mind is how it is structured and above all what it does.

If this is baffling to you, let me show you how platform-independence works on a much simpler concept: Tables. Tables are also in fact platform-independent. You can make a table out of wood, or steel, or plastic, or ice, or bone. You could take out literally every single atom of a table and replace it with a completely different atom of a completely different element—carbon for iron, for example—and still end up with a table. You could conceivably even do so without changing the table’s weight, strength, size, etc., though that would be considerably more difficult.

Does this mean that tables somehow exist “beyond” their constituent matter? In some very basic sense, I suppose so—they are, again, platform-independent. But not in any deep, mysterious sense. Start with a wooden table, take away all the wood, and you no longer have a table. Take apart the table and you have a bunch of wood, which you could use to build something else. There is no “essence” comprising the table. There is no “table soul” that would persist when the table is deconstructed.
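
The same point can be made with programs, which is one way to see the substrate-independence of computation mentioned above. Here is a toy example (mine, purely illustrative): two functions built out of completely different internal machinery that nonetheless compute the same thing. What makes them “the same program” is what they do, not what they are made of.

```python
# Two ways of sorting a list, built from entirely different internal machinery.

def sort_by_selection(xs):
    # repeatedly pull out the smallest remaining element
    xs, out = list(xs), []
    while xs:
        smallest = min(xs)
        xs.remove(smallest)
        out.append(smallest)
    return out

def sort_by_counting(xs):
    # tally how often each value occurs, then replay the tallies in order
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return [x for x in sorted(counts) for _ in range(counts[x])]

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert sort_by_selection(data) == sort_by_counting(data) == sorted(data)
```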

And—now for the hard part—so it is with minds. Your mind is your brain. The constituent atoms of your brain are gradually being replaced, day by day, but your mind is the same, because it exists in the arrangement and behavior, not the atoms themselves. Yet there is nothing “extra” or “beyond” that makes up your mind. You have no “soul” that lies beyond your brain. If your brain is destroyed, your mind will also be destroyed. If your brain could be copied, your mind would also be copied. And one day it may even be possible to construct your mind in some other medium—some complex computer made of silicon and tantalum, most likely—and it would still be a mind, and in all its thoughts, feelings, and behaviors it would still be your mind, even if not numerically identical to you.

Thus, when we engage in rational volition—when we use our “free will” if you like that term—there is no special “extra” process beyond what’s going on in our brains, but there doesn’t have to be. Those particular configurations of action potentials and neurotransmitters are our thoughts, desires, plans, intentions, hopes, fears, goals, beliefs. These mental concepts are not in addition to the physical material; they are made of that physical material. Your soul is made of gelatin.

Again, this is not some deep mystery. There is no “paradox” here. We don’t actually know the details of how it works, but that makes this no different from a Homo erectus who doesn’t know how fire works. Maybe he thinks there needs to be some extra “fire soul” that makes it burn, but we know better; and in far fewer centuries than separate that Homo erectus from us, our descendants will know precisely how the brain creates the mind.

Until then, simply remember that any mystery here lies in us—in our ignorance—and not in the universe. And take heart that the kind of “free will” that matters—moral responsibility—has absolutely no need for the kind of “free will” that doesn’t exist—noncausality. They’re totally different things.