Hypocrisy is underrated

Sep 12 JDN 2459470

Hypocrisy isn’t a good thing, but it isn’t nearly as bad as most people seem to think. Often, an accusation of hypocrisy is taken as a knock-down argument against everything the accused is saying, and this is just utterly wrong. Someone can be a hypocrite and still be mostly right.

Often people are accused of hypocrisy when they are not being hypocritical; for instance, the right wing seems to think that “They want higher taxes on the rich, but they are rich!” is hypocrisy, when in fact it’s simply altruism. (If they had wanted the rich guillotined, that would be hypocrisy. Maybe the problem is that the right wing can’t tell the difference?) Even worse, “They live under capitalism but they want to overthrow capitalism!” is not even close to hypocrisy—indeed, how could someone overthrow a system they weren’t living under? (There are many things wrong with Marxists, but that is not one of them.)

But in fact I intend something stronger: Hypocrisy itself just isn’t that bad.


There are currently two classes of Republican politicians with regard to the COVID vaccines: Those who are consistent in their principles and don’t get the vaccines, and those who are hypocrites and get the vaccines while telling their constituents not to. Of the two, who is better? The hypocrites. At least they are doing the right thing even as they say things that are very, very wrong.

There are really four cases to consider. The principles you believe in could be right, or they could be wrong. And you could follow those principles, or you could be a hypocrite. These two factors are independent of each other.

If your principles are right and you are consistent, that’s the best case; if your principles are right and you are a hypocrite, that’s worse.

But if your principles are wrong and you are consistent, that’s the worst case; if your principles are wrong and you are a hypocrite, that’s better.

In fact I think for most things the ordering goes like this: Consistent Right > Hypocritical Wrong > Hypocritical Right > Consistent Wrong. Your behavior counts for more than your principles—so if you’re going to be a hypocrite, it’s better for your good actions to not match your bad principles.

Obviously if we could get people to believe good moral principles and then follow them, that would be best. And we should in fact be working to achieve that.

But if you know that someone’s moral principles are wrong, it doesn’t accomplish anything to accuse them of being a hypocrite. If it’s true, that’s a good thing.

Here’s a pretty clear example for you: Anyone who says that the Bible is infallible but doesn’t want gay people stoned to death is a hypocrite. The Bible is quite clear on this matter; Leviticus 20:13 really doesn’t leave much room for interpretation. By this standard, most Christians are hypocrites—and thank goodness for that. I owe my life to the hypocrisy of millions.

Of course if I could convince them that the Bible isn’t infallible—perhaps by pointing out all the things it says that contradict their most deeply-held moral and factual beliefs—that would be even better. But the last thing I want to do is make their behavior more consistent with their belief that the Bible is infallible; that would turn them into fanatical monsters. The Spanish Inquisition was very consistent in behaving according to the belief that the Bible is infallible.

Here’s another example: Anyone who thinks that cruelty to cats and dogs is wrong but is willing to buy factory-farmed beef and ham is a hypocrite. Any principle that would tell you that it’s wrong to kick a dog or cat would tell you that the way cows and pigs are treated in CAFOs is utterly unconscionable. But if you are really unwilling to give up eating meat and you can’t find or afford free-range beef, it still would be bad for you to start kicking dogs in a display of your moral consistency.

And one more example for good measure: The leaders of any country who resist human rights violations abroad but tolerate them at home are hypocrites. Obviously the best thing to do would be to fight human rights violations everywhere. But perhaps for whatever reason you are unwilling or unable to do this—one disturbing truth is that many human rights violations at home (such as draconian border policies) are often popular with your local constituents. Human-rights violations abroad are also often more severe—detaining children at the border is one thing, a full-scale genocide is quite another. So, for good reasons or bad, you may decide to focus your efforts on resisting human rights violations abroad rather than at home; this would make you a hypocrite. But it would still make you much better than a more consistent leader who simply ignores all human rights violations wherever they may occur.

In fact, there are cases in which it may be optimal for you to knowingly be a hypocrite. If you have two sets of competing moral beliefs, and you don’t know which is true but you know that as a whole they are inconsistent, your best option is to apply each set of beliefs in the domain for which you are most confident that it is correct, while searching for more information that might allow you to correct your beliefs and reconcile the inconsistency. If you are self-aware about this, you will know that you are behaving in a hypocritical way—but you will still behave better than you would if you picked the wrong beliefs and stuck to them dogmatically. In fact, given a reasonable level of risk aversion, you’ll be better off being a hypocrite than you would by picking one set of beliefs arbitrarily (say, at the flip of a coin). At least then you avoid the worst-case scenario of being the most wrong.
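A toy numerical sketch may make this concrete. Suppose (these probabilities and payoffs are purely my own illustrative assumptions, not anything from the argument above) there are two domains and two competing belief systems, A and B, and you are 80% confident that A is correct in the first domain and 80% confident that B is correct in the second:

```python
# Purely illustrative: all numbers are assumptions, not data.
# Acting on the correct belief in a domain yields 1 unit of good;
# acting on the wrong one yields 0.

p_a_right_in_d1 = 0.8   # confidence that belief A is correct in domain 1
p_b_right_in_d2 = 0.8   # confidence that belief B is correct in domain 2

# Consistent strategy: pick belief A and apply it everywhere.
# In domain 2 it is only right when B is wrong there.
consistent_expected = p_a_right_in_d1 + (1 - p_b_right_in_d2)

# Hypocritical strategy: apply each belief in the domain
# where you trust it most.
hypocrite_expected = p_a_right_in_d1 + p_b_right_in_d2

print(consistent_expected)  # 1.0 expected units of good
print(hypocrite_expected)   # 1.6 expected units of good
```

The self-aware hypocrite also avoids the worst case of having applied the entirely wrong system everywhere, which is why even a risk-averse agent prefers this mixed strategy to flipping a coin between the two belief systems.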

There is yet another factor to take into consideration. Sometimes following your own principles is hard.

Considerable ink has been spilled on the concept of akrasia, or “weakness of will”, in which we judge that A is better than B yet still find ourselves doing B. Philosophers continue to debate to this day whether this really happens. As a behavioral economist, I observe it routinely, perhaps even daily. In fact, I observe it in myself.

I think the philosophers’ mistake is to presume that there is one simple, well-defined “you” that makes all observations and judgments and takes actions. Our brains are much more complicated than that. There are many “you”s inside your brain, each with its own capacities, desires, and judgments. Yes, there is some important sense in which they are all somehow unified into a single consciousness—by a mechanism which still eludes our understanding. But it doesn’t take esoteric cognitive science to see that there are many minds inside you: Haven’t you ever felt an urge to do something you knew you shouldn’t do? Haven’t you ever succumbed to such an urge—drank the drink, eaten the dessert, bought the shoes, slept with the stranger—when it seemed so enticing but you knew it wasn’t really the right choice?

We even speak of being “of two minds” when we are ambivalent about something, and I think there is literal truth in this. The neural networks in your brain are forming coalitions, and arguing between them over which course of action you ought to take. Eventually one coalition will prevail, and your action will be taken; but afterward your reflective mind need not always agree that the coalition which won the vote was the one that deserved to.

The evolutionary reason for this is simple: We’re a kludge. We weren’t designed from the top down for optimal efficiency. We were the product of hundreds of millions of years of subtle tinkering, adding a bit here, removing a bit there, layering the mammalian, reflective cerebral cortex over the reptilian, emotional limbic system over the ancient, involuntary autonomic system. Combine this with the fact that we are built in pairs, with left and right halves of each kind of brain (and yes, they are independently functional when their connection is severed), and the wonder is that we ever agree with our own decisions.

Thus, there is a kind of hypocrisy that is not a moral indictment at all: You may genuinely and honestly agree that it is morally better to do something and still not be able to bring yourself to do it. You may know full well that it would be better to donate that money to malaria treatment rather than buy yourself that tub of ice cream—you may be on a diet and full well know that the ice cream won’t even benefit you in the long run—and still not be able to stop yourself from buying the ice cream.

Sometimes your feeling of hesitation at an altruistic act may be a useful insight; I certainly don’t think we should feel obliged to give all our income, or even all of our discretionary income, to high-impact charities. (For most people I encourage 5%. I personally try to aim for 10%. If all the middle-class and above in the First World gave even 1% we could definitely end world hunger.) But other times it may lead you astray, make you unable to resist the temptation of a delicious treat or a shiny new toy when even you know the world would be better off if you did otherwise.

Yet when following our own principles is so difficult, it’s not really much of a criticism to point out that someone has failed to do so, particularly when they themselves already recognize that they failed. The inconsistency between behavior and belief indicates that something is wrong, but it may not be any dishonesty or even anything wrong with their beliefs.

I wouldn’t go so far as to say you should stop ever calling out hypocrisy. Sometimes it is clearly useful to do so. But while hypocrisy is often the sign of a moral failing, it isn’t always—and even when it is, often as not the problem is the bad principles, not the behavior inconsistent with them.

Men and violence

Apr 4 JDN 2459302

Content warning: In this post, I’m going to be talking about violence, including sexual violence. April is Sexual Assault Awareness and Prevention Month. I won’t go into any explicit detail, but I understand that discussion of such topics can still be very upsetting for many people.

After short posts for the past two weeks, get ready for a fairly long post. This is a difficult and complicated topic, and I want to make sure that I state things very clearly and with all necessary nuance.

While the overall level of violence between human societies varies tremendously, one thing is astonishingly consistent: Violence is usually committed by men.

In fact, violence is usually suffered by men as well—with the quite glaring exception of sexual violence. This is why I am particularly offended by claims like “All men benefit from male violence”; no, men who were murdered by other men did not benefit from male violence, and it is frankly appalling to say otherwise. Most men would be better off if male violence were somehow eliminated from the world. (Most women would also be much better off as well, of course.)

I therefore consider it a matter of both moral obligation and self-interest to endeavor to reduce the amount of male violence in the world, which is almost coextensive with reducing the amount of violence in general.

On the other hand, ought implies can, and despite significant efforts I have made to seek out recommendations for concrete actions I could be taking… I haven’t been able to find very many.

The good news is that we appear to be doing something right—overall rates of violent crime have declined by nearly half since 1990. The decline in rape has been slower, only about 25% since 1990, though this is a bit misleading since the legal definition of rape has been expanded during that interval. The causes of this decline in violence are unclear: Some of the most important factors seem to be changes in policing, economic growth, and reductions in lead pollution. For whatever reason, Millennials just don’t seem to commit crimes at the same rates that Gen-X-ers or Boomers did. We are also substantially more feminist, so maybe that’s an important factor too; the truth is, we really don’t know.

But all of this still leaves me asking: What should I be doing?

When I searched for an answer to this question, a significant fraction of the answers I got from various feminist sources were some variation on “ruminate on your own complicity in male violence”. I tried it; it was painful, difficult—and basically useless. I think this is particularly bad advice for someone like me who has a history of depression.

When you ruminate on your own life, it’s easy to find mistakes; but how important were those mistakes? How harmful were they? I can’t say that I’ve never done anything in my whole life that hurt anyone emotionally (can anyone?), but I can only think of a few times I’ve harmed someone physically (mostly by accident, once in self-defense). I’ve definitely never raped or murdered anyone, and as far as I can tell I’ve never done anything that would have meaningfully contributed to anyone getting raped or murdered. If you were to somehow replace every other man in the world with a copy of me, maybe that wouldn’t immediately bring about a utopian paradise—but I’m pretty sure that rates of violence would be a lot lower. (And in this world ruled by my clones, we’d have more progressive taxes! Less military spending! A basic income! A global democratic federation! Greater investment in space travel! Hey, this sounds pretty good, actually… though inbreeding would be a definite concern.) So, okay, I’m no angel; but I don’t think it’s really fair to say that I’m complicit in something that would radically decrease if everyone behaved as I do.

The really interesting thing is, I think this is true of most men. A typical man commits less than the average amount of violence—because there is great skew in the distribution, with most men committing little or no violence and a small number of men committing lots of violence. Truly staggering amounts of violence are committed by those at the very top of the distribution—that would be mass murderers like Hitler and Stalin. It sounds strange, but if all men in the world were replaced by a typical man, the world would surely be better off. The loss of the very best men would be more than compensated by the removal of the very worst. In fact, since most men are not rapists or murderers, replacing every man in the world with the median man would automatically bring the rates of rape and murder to zero. I know that feminists don’t like to hear #NotAllMen; but it’s not even most men. Maybe the reason that the “not all men” argument keeps coming up is… it’s actually kind of true? Maybe it’s not so unreasonable for men to resent the implication that we are complicit in acts we abhor that we have never done and would never do? Maybe this whole concept that an entire sex of people, literally almost half the human race, can share responsibility for violent crimes—is wrong?

I know that most women face a nearly constant bombardment of sexual harassment, and feel pressured to remain constantly vigilant in order to protect themselves against being raped. I know that victims of sexual violence are often blamed for their victimization (though this happens in a lot of crimes, not just sex crimes). I know that #YesAllWomen is true—basically all women have been in some way harmed or threatened by sexual violence. But the fact remains that most men are already not committing sexual violence. Many people seem to confuse the fact that most women are harmed by men with the claim that most men harm women; these are not at all equivalent. As long as one man can harm many women, there don’t need to be very many harmful men for all women to be affected.

Plausible guesses would be that about 20-25% of women suffer sexual assault, committed by about 4% or 5% of men, each of whom commits an average of 4 to 6 assaults—and some of whom commit far more. If these figures are right, then 95% of men are not guilty of sexual assault. The highest plausible estimate I’ve seen is from a study which found that 11% of men had committed rape. Since it’s only one study and its sample size was pretty small, I’m actually inclined to think that this is an overestimate which got excessive attention because it was so shocking. Larger studies rarely find a number above 5%.
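As a sanity check, here is the back-of-the-envelope arithmetic behind those figures, assuming (my simplification, not the studies’) equal numbers of men and women and a distinct victim for each assault:

```python
# Rough consistency check of the figures quoted above. The population
# size and the one-victim-per-assault simplification are my assumptions.

men = women = 1_000_000      # hypothetical population
offender_share = 0.05        # ~5% of men commit sexual assault
assaults_each = 5            # ~4-6 assaults per offender, on average

victims = men * offender_share * assaults_each
print(victims / women)       # 0.25 -> about 25% of women
```

In reality some victims are assaulted more than once, which pulls the share of women affected below 25%; that is consistent with the quoted range starting at 20%.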

But even if we suppose that it’s really 11%, that leaves 89%; in what sense is 89% not “most men”? I saw some feminist sites responding to this result by saying things like “We can’t imprison 11% of men!” but, uh, we almost do already. About 9% of American men will go to prison in their lifetimes. This is probably higher than it should be—it’s definitely higher than any other country—but if those convictions were all for rape, I’d honestly have trouble seeing the problem. (In fact only about 10% of US prisoners are incarcerated for rape.) If the US were the incarceration capital of the world simply because we investigated and prosecuted rape more reliably, that would be a point of national pride, not shame. In fact, the American conservatives who don’t see the problem with our high incarceration rate probably do think that we’re mostly incarcerating people for things like rape and murder—when in fact large portions of our inmates are incarcerated for drug possession, “public order” crimes, or pretrial detention.

Even if that 11% figure is right, “If you know 10 men, one is probably a rapist” is wrong. The people you know are not a random sample. If you don’t know any men who have been to prison, then you likely don’t know any men who are rapists. 37% of prosecuted rapists have prior criminal convictions, and 60% will be convicted of another crime within 5 years. (Of course, most rapes are never even reported; but where would we get statistics on those rapists?) Rapists are not typical men. They may seem like typical men—it may be hard to tell the difference at a glance, or even after knowing someone for a long time. But the fact that narcissists and psychopaths may hide among us does not mean that all of us are complicit in the crimes of narcissists and psychopaths. If you can’t tell who is a psychopath, you may have no choice but to be wary; but telling every man to search his heart is worthless, because the only ones who will listen are the ones who aren’t psychopaths.

That, I think, is the key disagreement here: Where the standard feminist line is “any man could be a rapist, and every man should search his heart”, I believe the truth is much more like, “monsters hide among us, and we should do everything in our power to stop them”. The monsters may look like us, they may often act like us—but they are not us. Maybe there are some men who would commit rapes but can be persuaded out of it—but this is not at all the typical case. Most rapes are committed by hardened, violent criminals and all we can really do is lock them up. (And for the love of all that is good in the world, test all the rape kits!)

It may be that sexual harassment of various degrees is more spread throughout the male population; perhaps the median man indeed commits some harassment at some point in his life. But even then, I think it’s pretty clear that the really awful kinds of harassment are largely committed by a small fraction of serial offenders. Indeed, there is a strong correlation between propensity toward sexual harassment and various measures of narcissism and psychopathy. So, if most men look closely enough, maybe they can think of a few things that they do occasionally that might make women uncomfortable; okay, stop doing those things. (Hint: Do not send unsolicited dick pics. Ever. Just don’t. Anyone who wants to see your genitals will ask first.) But it isn’t going to make a huge difference in anyone’s life. As long as the serial offenders continue, women will still feel utterly bombarded.

There are other kinds of sexual violations that more men commit—being too aggressive, or persisting too much after the first rejection, or sending unsolicited sexual messages or images. I’ve had people—mostly, but not only, men—do things like that to me; but it would be obviously unfair to both these people and actual rape victims to say I’d ever been raped. I’ve been groped a few times, but it seems like quite a stretch to call it “sexual assault”. I’ve had experiences that were uncomfortable, awkward, frustrating, annoying, occasionally creepy—but never traumatic. Never violence. Teaching men (and women! There is evidence that women are not much less likely than men to commit this sort of non-violent sexual violation) not to do these things is worthwhile and valuable in itself—but it’s not going to do much to prevent rape or murder.

Thus, whatever responsibility men have in reducing sexual violence, it isn’t simply to stop; you can’t stop doing what you already aren’t doing.

After pushing through all that noise, at last I found a feminist site making a more concrete suggestion: They recommended that I read a book by Jackson Katz on the subject entitled The Macho Paradox: Why Some Men Hurt Women and How All Men Can Help.

First of all, I must say I can’t remember any other time I’ve read a book that was so poorly titled. The only mention of the phrase “macho paradox” is a brief preface that was added to the most recent edition explaining what the term was meant to mean; it occurs nowhere else in the book. And in all its nearly 300 pages, the book has almost nothing that seriously addresses either the motivations underlying sexual violence or concrete actions that most men could take in order to reduce it.

As far as concrete actions (“How all men can help”), the clearest, most consistent advice the book seems to offer that would apply to most men is “stop consuming pornography” (something like 90% of men and 60% of women regularly consume porn), when in fact there is a strong negative correlation between consumption of pornography and real-world sexual violence. (Perhaps Millennials are less likely to commit rape and murder because we are so into porn and video games!) This advice is literally worse than nothing.

The sex industry exists on a continuum from the adult-only but otherwise innocuous (smutty drawings and erotic novels), through the legal but often problematic (mainstream porn, stripping), to the usually illegal but defensible (consensual sex work), all the way to the utterly horrific and appalling (the sexual exploitation of children). I am well aware that there are many deep problems with the mainstream porn industry, but I confess I’ve never quite seen how these problems are specific to porn rather than endemic to media or even capitalism more generally. Particularly with regard to the above-board sex industry in places like Nevada or the Netherlands, it’s not obvious to me that a prostitute is more exploited than a coal miner, a sweatshop worker, or a sharecropper—indeed, given the choice between those four careers, I’d without hesitation choose to be a prostitute in Amsterdam. Many sex workers resent the paternalistic insistence by anti-porn feminists that their work is inherently degrading and exploitative. Overall, sex workers report job satisfaction not statistically different from the average for all jobs. There are a multitude of misleading statistics often reported about the sex industry that make matters seem far worse than they are.

Katz (all-too) vividly describes the depiction of various violent or degrading sex acts in mainstream porn, but he seems unwilling to admit that any other forms of porn do or even could exist—and worse, like far too many anti-porn feminists, he seems to willfully elide vital distinctions, effectively equating fantasy depiction with genuine violence and consensual kinks with sexual abuse. I like to watch action movies and play FPS video games; does that mean I believe it’s okay to shoot people with machine guns? I know the sophisticated claim is that it somehow “desensitizes” us (whatever that means), but there’s not much evidence of that either. Given that porn and video games are negatively correlated with actual violence, it may in fact be that depicting the fantasy provides an outlet for such urges and helps prevent them from becoming reality. Or, it may simply be that keeping a bunch of young men at home in front of their computers keeps them from going out and getting into trouble. (Then again, homicides actually increased during the COVID pandemic—though most other forms of crime decreased.) But whatever the cause, the evidence is clear that porn and video games don’t increase actual violence—they decrease it.

At the very end of the book, Katz hints at a few other things men might be able to do, or at least certain groups of men: Challenge sexism in sports, the military, and similar male-dominated spaces (you know, if you have clout in such spaces, which I really don’t—I’m an effete liberal intellectual, a paradigmatic “soy boy”; do you think football players or soldiers are likely to listen to me?); educate boys with more positive concepts of masculinity (if you are in a position to do so, e.g. as a teacher or parent); or, the very best advice in the entire book, worth more than the rest of the book combined: Donate to charities that support survivors of sexual violence. Katz doesn’t give any specific recommendations, but here are a few for you: RAINN, NAESV and NSVRC.

Honestly, I’m more impressed by Upworthy’s bulleted list of things men can do, though they’re mostly things that conscientious men do anyway, and even if 90% of men did them, it probably wouldn’t greatly reduce actual violence.

As far as motivations (“Why some men hurt women”), the book does at least manage to avoid the mindless slogan “rape is about power, not sex” (there is considerable evidence that this slogan is false or at least greatly overstated). Still, Katz insists upon collective responsibility, attributing what are in fact typically individual crimes, committed mainly by psychopaths, motivated primarily by anger or sexual desire, to some kind of institutionalized system of patriarchal control that somehow permeates all of society. The fact that violence is ubiquitous does not imply that it is coordinated. It’s very much the same cognitive error as “murderism”.

I agree that sexism exists, is harmful, and may contribute to the prevalence of rape. I agree that there are many widespread misconceptions about rape. I also agree that reducing sexism and toxic masculinity are worthwhile endeavors in themselves, with numerous benefits for both women and men. But I’m just not convinced that reducing sexism or toxic masculinity would do very much to reduce the rates of rape or other forms of violence. In fact, despite widely reported success of campaigns like the “Don’t Be That Guy” campaign, the best empirical research on the subject suggests that such campaigns actually tend to do more harm than good. The few programs that seem to work are those that focus on bystander interventions—getting men who are not rapists to recognize rapists and stop them. Basically nothing has ever been shown to convince actual rapists; all we can do is deny them opportunities—and while bystander intervention can do that, the most reliable method is probably incarceration. Trying to change their sexist attitudes may be worse than useless.

Indeed, I am increasingly convinced that much—not all, but much—of what is called “sexism” is actually toxic expressions of heterosexuality. Why do most creepy male bosses only ever hit on their female secretaries? Well, maybe because they’re straight? This is not hard to explain. It’s a fair question why there are so many creepy male bosses, but one need not posit any particular misogyny to explain why their targets would usually be women. I guess it’s a bit hard to disentangle; if an incel hates women because he perceives them as universally refusing to sleep with him, is that sexism? What if he’s a gay incel (yes, they exist) and this drives him to hate men instead?

In fact, I happen to know of a particular gay boss who has quite a few rumors surrounding him regarding his sexual harassment of male employees. Or you could look at Kevin Spacey, who (allegedly) sexually abused teenage boys. You could tell a complicated story about how this is some kind of projection of misogynistic attitudes onto other men (perhaps for being too “femme” or something)—or you could tell a really simple story about how this man is only sexually abusive toward other men because that’s the gender of people he’s sexually attracted to. Occam’s Razor strongly favors the latter.

Indeed, what are we to make of the occasional sexual harasser who targets men and women equally? On the theory that abuse is caused by patriarchy, that seems pretty hard to explain. On the theory that abusive people sometimes happen to be bisexual, it’s not much of a mystery. (Though I would like to take a moment to debunk the stereotype of the “depraved bisexual”: Bisexuals are no more likely to commit sexual violence, but are far more likely to suffer it—more likely than either straight or gay people, independently of gender. Trans people face even higher risk; the acronym LGBT is in increasing order of danger of violence.)

Does this excuse such behavior? Absolutely not. Sexual harassment and sexual assault are definitely wrong, definitely harmful, and rightfully illegal. But when trying to explain why the victims are overwhelmingly female, the fact that roughly 90% of people are heterosexual is surely relevant. The key explanandum here is not why the victims are usually female, but rather why the perpetrators are usually male.

That, indeed, requires explanation; but such an explanation is really not so hard to come by. Why is it that, in nearly every human society, for nearly every form of violence, the vast majority of that violence is committed by men? It sure looks genetic to me.

Indeed, in any other context aside from gender or race, we would almost certainly reject any explanation other than genetics for such a consistent pattern. Why is it that, in nearly every human society, about 10% of people are LGBT? Probably genetics. Why is it that, in nearly every human society, about 10% of people are left-handed? Genetics. Why, in nearly every human society, do smiles indicate happiness, children fear loud noises, and adults fear snakes? Genetics. Why, in nearly every human society, are men on average much taller and stronger than women? Genetics. Why, in nearly every human society, is about 90% of violence, including sexual violence, committed by men? Clearly, it’s patriarchy.

A massive body of scientific evidence from multiple sources shows a clear causal relationship between increased testosterone and increased aggression. The correlation is moderate, only about 0.38—but it’s definitely real. And men have a lot more testosterone than women: While testosterone varies a frankly astonishing amount between men and over time—including up to a 2-fold difference even over the same day—a typical adult man has about 250 to 950 ng/dL of blood testosterone, while a typical adult woman has only 8 to 60 ng/dL. (An adolescent boy can have as much as 1200 ng/dL!) This is a difference ranging from a minimum of 4-fold to a maximum of over 100-fold, with a typical value of about 20-fold. It would be astonishing if that didn’t have some effect on behavior.

This is of course far from a complete explanation: With a correlation of 0.38, we’ve only explained about 14% of the variance, so what’s the other 86%? Well, first of all, testosterone isn’t the only biological difference between men and women. It’s difficult to identify any particular genes with strong effects on aggression—but the same is true of height, and nobody disputes that the height difference between men and women is genetic.
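The arithmetic behind those figures is easy to verify; note that the 600 and 30 ng/dL “typical” values below are my own assumed midpoints, not numbers from any study:

```python
# Checking the numbers quoted above.

r = 0.38                    # reported testosterone-aggression correlation
print(r ** 2)               # ~0.14 -> ~14% of variance explained

# Quoted blood-testosterone ranges (ng/dL):
men_lo, men_hi = 250, 950
women_lo, women_hi = 8, 60

print(men_lo / women_hi)    # ~4-fold minimum difference
print(men_hi / women_lo)    # ~119-fold maximum difference
print(600 / 30)             # ~20-fold at assumed typical midpoints
```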

Clearly societal factors do matter a great deal, or we couldn’t possibly explain why homicide rates vary between countries from less than 3 per million per year in Japan to nearly 400 per million per year in Honduras—a full two orders of magnitude! But gender inequality does not appear to strongly predict homicide rates. Japan is not a very feminist place (in fact, surveys suggest that, after Spain, Japan is the second-worst highly-developed country for women). Sweden is quite feminist, and their homicide rate is relatively low; but it’s still 4 times as high as Japan’s. The US doesn’t strike me as much more sexist than Canada (admittedly subjective—surveys do suggest at least some difference, and in the expected direction), and yet our homicide rate is nearly 3 times as high. Also, I think it’s worth noting that while overall homicide rates vary enormously across societies, the fact that roughly 90% of homicides are committed by men does not. Through some combination of culture and policy, societies can greatly reduce the overall level of violence—but no society has yet managed to change the fact that men are more violent than women.

I would like to do a similar analysis of sexual assault rates across countries, but unfortunately I really can’t, because different countries have such different laws and different rates of reporting that the figures really aren’t comparable. Sweden infamously has a very high rate of reported sex crimes, but this is largely because they have very broad definitions of sex crimes and very high rates of reporting. The best I can really say for now is there is no obvious pattern of more feminist countries having lower rates of sex crimes. Maybe there really is such a pattern; but the data isn’t clear.

Yet if biology contributes anything to the causation of violence—and at this point I think the evidence for that is utterly overwhelming—then mainstream feminism has done the world a grave disservice by insisting upon only social and cultural causes. Maybe it’s the case that our best options for intervention are social or cultural, but that doesn’t mean we can simply ignore biology. And then again, maybe it’s not the case at all: a neurological treatment to cure psychopathy could cut almost all forms of violence in half.

I want to be completely clear that a biological cause is not a justification or an excuse: literally billions of men manage to have high testosterone levels, and experience plenty of anger and sexual desire, without ever raping or murdering anyone. The fact that men appear to be innately predisposed toward violence does not excuse actual violence, and the fact that rape is typically motivated at least in part by sexual desire is no excuse for committing rape.

In fact, I’m quite worried about the opposite: that the notion that sexual violence is always motivated by a desire to oppress and subjugate women will be used to excuse rape, because men who know that their motivation was not oppression will therefore be convinced that what they did wasn’t rape. If rape is always motivated by a desire to oppress women, and his desire was only to get laid, then clearly, what he did can’t be rape, right? The logic here actually makes sense. If we are to reject this argument—as we must—then we must reject the first premise, that all rape is motivated by a desire to oppress and subjugate women. I’m not saying that’s never a motivation—I’m simply saying we can’t assume it is always.

The truth is, I don’t know how to end violence, and sexual violence may be the most difficult form of violence to eliminate. I’m not even sure what most of us can do to make any difference at all. For now, the best thing to do is probably to donate money to organizations like RAINN, NAESV and NSVRC. Even $10 to one of these organizations will do more to help survivors of sexual violence than hours of ruminating on your own complicity—and cost you a lot less.

This is not just about selfishness

Aug 2 JDN 2459064

The Millennial term is “Karen”: someone (paradigmatically a middle-aged White woman) who is so privileged, so self-centered, and has such an extreme sense of entitlement, that they are willing to make others suffer in order to avoid the slightest inconvenience.

I recently saw a tweet (which for some reason has been impossible to find; I think I must have misremembered its precise wording, because putting that in quotes in Google yields nothing) saying that Americans are not simply selfish, we are so selfish that we would gladly let others die to avoid mildly inconveniencing ourselves. Searching Twitter for “Americans are selfish” certainly yields plenty of results.

And it is tempting to agree with this, when it seems that re-opening the economy and so many people refusing to wear masks have given us far worse outcomes from COVID-19 than most other countries.

But this can’t be the whole story. Perhaps Americans are a bit more self-centered than other cultures, because of our history of libertarian individualism. But if we were truly so selfish we’d gladly let others die to avoid inconvenience, whence the fact that we donate more to charity than any other country in the world? I don’t simply mean total amount or per-capita dollars (though both of those are also true); I mean as a fraction of GDP Americans give more to charity than any other country, and by a wide margin.

How then do we explain that so many Americans are not wearing masks?

Well, first of all, most of us are wearing masks. The narrative about people not wearing masks has been exaggerated; the majority of Americans, including the majority of Republicans, agree that wearing masks is a matter of public health rather than personal choice. There are some people who refuse to wear masks, and each one adds a little bit more risk to us all; but it’s really not the case that Americans in general are refusing to wear masks.

But I think the most important failings here come from the top down. The Trump administration has handled the pandemic in an astonishingly poor way. First, they denied that it was even a serious problem. Then, they implemented only a half-hearted response. Then, they turned masks into a culture war. Then, they resisted the economic relief package and prevented it from being as large as it needed to be. At every step of the way, they have been at best utterly incompetent and at worst guilty of depraved indifference murder.

From denying it was a problem, to responding too slowly, to disparaging mask use, to pushing to re-open the economy too soon, at every step of the way our government has made things worse. Above all, a better economic relief package—like what most other First World countries have done—would have done a great deal to reduce the harm of lockdowns, and would have made re-opening the economy far less popular.

Republican-led states have followed the President’s lead, refusing to implement even basic common-sense protections. But even Democrat-led states have suffered greatly as well. New York and California have some of the most cases, though this is surely in part because they are huge states with highly urbanized populations that get a lot of visitors and trade from other places. The trajectory of infections looks worst in Louisiana and Missouri, surely among the most conservative of states; but it also looks quite bad in New Jersey and Hawaii, which are among the most liberal.

I think what this shows us is that America lacks coordination. Despite having United in our name and E pluribus unum as our motto (“In God We Trust” was a Cold War change to spite the Soviets), what we lack most of all is unity. Viruses do not respect borders or jurisdictions. More than perhaps any other issue aside from climate change, fighting a pandemic requires a unified, coordinated response—and that is precisely what we did not have.

In some ways the pluralism of the United States can be a great strength; but this year, it was very much a weakness. And as the many crises around us continue, I fear we grow only more divided.

How we measure efficiency affects our efficiency

Jun 21 JDN 2459022

Suppose we are trying to minimize carbon emissions, and we can afford one of the two following policies to improve fuel efficiency:

  1. Policy A will replace 10,000 cars that average 25 MPG with hybrid cars that average 100 MPG.
  2. Policy B will replace 5,000 diesel trucks that average 5 MPG with turbocharged, aerodynamic diesel trucks that average 10 MPG.

Assume that both cars and trucks last about 100,000 miles (in reality this of course depends on a lot of factors), and diesel and gas pollute about the same amount per gallon (this isn’t quite true, but it’s close). Which policy should we choose?

It seems obvious: Policy A, right? 10,000 vehicles, each increasing efficiency by 75 MPG or a factor of 4, instead of 5,000 vehicles, each increasing efficiency by only 5 MPG or a factor of 2.

And yet—in fact the correct answer is definitely policy B, because the use of MPG has distorted our perception of what constitutes efficiency. We should have been using the inverse: gallons per hundred miles.

  1. Policy A will replace 10,000 cars that average 4 GPHM with cars that average 1 GPHM.
  2. Policy B will replace 5,000 trucks that average 20 GPHM with trucks that average 10 GPHM.

This means that policy A will save (10,000)(100,000/100)(4-1) = 30 million gallons, while policy B will save (5,000)(100,000/100)(20-10) = 50 million gallons.

A gallon of gasoline produces about 9 kg of CO2 when burned. This means that by choosing the right policy here, we’ll have saved 450,000 tons of CO2—or by choosing the wrong one we would only have saved 270,000.
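The arithmetic above is easy to verify; here’s a quick sketch in Python (all figures are the hypothetical ones from the example, not real-world data):

```python
# GPHM = gallons per hundred miles, the inverse measure that makes
# fuel savings additive across vehicles.

def gallons_saved(vehicles, lifetime_miles, gphm_before, gphm_after):
    """Total gallons saved over the vehicles' lifetimes."""
    return vehicles * (lifetime_miles / 100) * (gphm_before - gphm_after)

# Policy A: 10,000 cars, 25 MPG -> 100 MPG, i.e. 4 GPHM -> 1 GPHM
saved_a = gallons_saved(10_000, 100_000, 100 / 25, 100 / 100)
# Policy B: 5,000 trucks, 5 MPG -> 10 MPG, i.e. 20 GPHM -> 10 GPHM
saved_b = gallons_saved(5_000, 100_000, 100 / 5, 100 / 10)

KG_CO2_PER_GALLON = 9  # approximate figure from the text

print(saved_a)  # 30 million gallons
print(saved_b)  # 50 million gallons
print(saved_a * KG_CO2_PER_GALLON / 1000)  # 270,000 tons of CO2
print(saved_b * KG_CO2_PER_GALLON / 1000)  # 450,000 tons of CO2
```

Note that working in GPHM makes the savings a simple difference; in MPG the same calculation requires dividing miles by each rating, which is exactly where the intuition goes wrong.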

The simple choice of which efficiency measure to use when making our judgment—GPHM versus MPG—has had a profound effect on the real impact of our choices.

Let’s try applying the same reasoning to charities. Again suppose we can choose one of two policies.

  1. Policy C will move $10 million that currently goes to local community charities which can save one QALY for $1 million to medical-research charities that can save one QALY for $50,000.
  2. Policy D will move $10 million that currently goes to direct-transfer charities which can save one QALY for $1000 to anti-malaria net charities that can save one QALY for $800.

Policy C means moving funds from charities that are almost useless ($1 million per QALY!?) to charities that meet a basic notion of cost-effectiveness (most public health agencies in the First World have a standard threshold of about $50,000 or $100,000 per QALY).

Policy D means moving funds from charities that are already highly cost-effective to other charities that are only a bit more cost-effective. It almost seems pedantic to even concern ourselves with the difference between $1000 per QALY and $800 per QALY.

It’s the same $10 million either way. So, which policy should we pick?

If the lesson you took from the MPG example is that we should always be focused on increasing the efficiency of the least efficient, you’ll get the wrong answer. The correct answer is based on actually using the right measure of efficiency.

Here, it’s not dollars per QALY we should care about; it’s QALY per million dollars.

  1. Policy C will move $10 million from charities which get 1 QALY per million dollars to charities which get 20 QALY per million dollars.
  2. Policy D will move $10 million from charities which get 1000 QALY per million dollars to charities which get 1250 QALY per million dollars.

Multiply that out, and policy C will gain (10)(20-1) = 190 QALY, while policy D will gain (10)(1250-1000) = 2500 QALY. Assuming that “saving a life” means about 50 QALY, this is the difference between saving 4 lives and saving 50 lives.

My intuition actually failed me on this one; before I actually did the math, I had assumed that it would be far more important to move funds from utterly useless charities to ones that meet a basic standard. But it turns out that it’s actually far more important to make sure that the funds being targeted at the most efficient charities are really the most efficient—even apparently tiny differences matter a great deal.

Of course, if we can move that $10 million from the useless charities to the very best charities, that’s the best of all; it would save (10)(1250-1) = 12,490 QALY. This is nearly 250 lives.
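The charity version works out the same way; here’s the calculation in Python (the cost-effectiveness figures are the illustrative ones from the example):

```python
def qaly_gained(millions_moved, qaly_per_million_from, qaly_per_million_to):
    """QALY gained by moving funds between charities of given efficiency."""
    return millions_moved * (qaly_per_million_to - qaly_per_million_from)

policy_c = qaly_gained(10, 1, 20)        # useless -> baseline-acceptable
policy_d = qaly_gained(10, 1000, 1250)   # highly effective -> slightly better
best     = qaly_gained(10, 1, 1250)      # useless -> best of all

QALY_PER_LIFE = 50  # rough conversion used in the text
print(policy_c, policy_c / QALY_PER_LIFE)  # 190 QALY, ~4 lives
print(policy_d, policy_d / QALY_PER_LIFE)  # 2500 QALY, 50 lives
print(best, best / QALY_PER_LIFE)          # 12,490 QALY, ~250 lives
```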

In the fuel economy example, there’s no feasible way to upgrade a semitrailer to get 100 MPG. If we could, we totally should; but nobody has any idea how to do that. Even an electric semi probably won’t be that efficient, depending on how the grid produces electricity. (Obviously if the grid were all nuclear, wind, and solar, it would be; but very few places are like that.)

But when we’re talking about charities, this is just money; it is by definition fungible. So it is absolutely feasible in an economic sense to get all the money currently going towards nearly-useless charities like churches and museums and move that money directly toward high-impact charities like anti-malaria nets and vaccines.

Then again, it may not be feasible in a practical or political sense. Someone who currently donates to their local church may simply not be motivated by the same kind of cosmopolitan humanitarianism that motivates Effective Altruism. They may care more about supporting their local community, or be motivated by genuine religious devotion. This isn’t even inherently a bad thing; nobody is a cosmopolitan in everything they do, nor should we be—we have good reasons to care more about our own friends, family, and community than we do about random strangers in foreign countries thousands of miles away. (And while I’m fairly sure Jesus himself would have been an Effective Altruist if he’d been alive today, I’m well aware that most Christians aren’t—and this doesn’t make them “false Christians”.) There might be some broader social or cultural change that could make this happen—but it’s not something any particular person can expect to accomplish.

Whereas, getting people who are already Effective Altruists giving to efficient charities to give to a slightly more efficient charity is relatively easy: Indeed, it’s basically the whole purpose for which GiveWell exists. And there are analysts working at GiveWell right now whose job it is to figure out exactly which charities yield the most QALY per dollar and publish that information. One person doing that job even slightly better can save hundreds or even thousands of lives.

Indeed, I’m seriously considering applying to be one myself—it sounds both more pleasant and more important than anything I’d be likely to get in academia.

Tithing makes quite a lot of sense

Dec 22 JDN 2458840

Christmas is coming soon, and it is a season of giving: Not only gifts to those we love, but also to charities that help people around the world. It’s a theme of some of our most classic Christmas stories, like A Christmas Carol. (I do have to admit: Scrooge really isn’t wrong for not wanting to give to some random charity without any chance to evaluate it. But I also get the impression he wasn’t giving a lot to evaluated charities either.) And people do really give more around this time of year: Charitable donation rates peak in November and December (though that may also have something to do with tax deductions).

Where should we give? This is not an easy question, but it’s one that we now have tools to answer: There are various independent charity evaluation agencies, like GiveWell and Charity Navigator, which can at least provide some idea of which charities are most cost-effective.

How much should we give? This question is a good deal harder.

Perhaps a perfect being would determine their own precise marginal utility of wealth, and the marginal utility of spending on every possible charity, and give of their wealth to the best possible charity up until those two marginal utilities are equal. Since $1 to UNICEF or the Against Malaria Foundation saves about 0.02 QALY, and (unless you’re a billionaire) you don’t have enough money to meaningfully affect the budget of UNICEF, you’d probably need to give until you are yourself at the UN poverty level of $1.90 per day.

I don’t know of anyone who does this. Even Peter Singer, who writes books that essentially tell us to do this, doesn’t do this. I’m not sure it’s humanly possible to do this. Indeed, I’m not even so sure that a perfect being would do it, since it would require destroying their own life and their own future potential.

How about we all give 10%? In other words, how about we tithe? Yes, it sounds arbitrary—because it is. It could just as well have been 8% or 11%. Perhaps one-tenth feels natural to a base-10 culture made of 10-fingered beings, and if we used a base-12 numeral system we’d think in terms of giving one-twelfth instead. But 10% feels reasonable to a lot of people, it has a lot of cultural support behind it already, and it has become a Schelling point for coordination on this otherwise intractable problem. We need to draw the line somewhere, and it might as well be there.

As Slate Star Codex put it:

It’s ten percent because that’s the standard decreed by Giving What We Can and the effective altruist community. Why should we believe their standard? I think we should believe it because if we reject it in favor of “No, you are a bad person unless you give all of it,” then everyone will just sit around feeling very guilty and doing nothing. But if we very clearly say “You have discharged your moral duty if you give ten percent or more,” then many people will give ten percent or more. The most important thing is having a Schelling point, and ten percent is nice, round, divinely ordained, and – crucially – the Schelling point upon which we have already settled. It is an active Schelling point. If you give ten percent, you can have your name on a nice list and get access to a secret forum on the Giving What We Can site which is actually pretty boring.

It’s ten percent because definitions were made for Man, not Man for definitions, and if we define “good person” in a way such that everyone is sitting around miserable because they can’t reach an unobtainable standard, we are stupid definition-makers. If we are smart definition-makers, we will define it in whichever way which makes it the most effective tool to convince people to give at least that much.

I think it would be also reasonable to adjust this proportion according to your household income. If you are extremely poor, give a token amount: Perhaps 1% or 2%. (As it stands, most poor people already give more than this, and most rich people give less.) If you are somewhat below the median household income, give a bit less: Perhaps 6% or 8%. (I currently give 8%; I plan to increase to 10% once I get a higher-paying job after graduation.) If you are somewhat above, give a bit more: Perhaps 12% or 15%. If you are spectacularly rich, maybe you should give as much as 25%.

Is 10% enough? Well, actually, if everyone gave, even 1% would probably be enough. The total GDP of the First World is about $40 trillion; 1% of that is $400 billion per year, which is more than enough to end world hunger. But since we know that not everyone will give, we need to adjust our standard upward so that those who do give will give enough. (There’s actually an optimization problem here which is basically equivalent to finding a monopoly’s profit-maximizing price.) And just ending world hunger probably isn’t enough; there is plenty of disease to cure, education to improve, research to do, and ecology to protect. If say a third of First World people give 10%, that would be about $1.3 trillion, which would be enough money to at least make a huge difference in all those areas.

You can decide for yourself where you think you should draw the line. But 10% is a pretty good benchmark, and above all—please, give something. If you give anything, you are probably already above average. A large proportion of people give nothing at all. (Only 24% of US tax returns include a charitable deduction—though, to be fair, a lot of us donate but don’t itemize deductions. Even once you account for that, only about 60% of US households give to charity in any given year.)

Pascal’s Mugging

Nov 10 JDN 2458798

In the Singularitarian community there is a paradox known as “Pascal’s Mugging”. The name is an intentional reference to Pascal’s Wager (and the link is quite apt, for reasons I’ll discuss in a later post.)

There are a few different versions of the argument; Yudkowsky’s original argument in which he came up with the name “Pascal’s Mugging” relies upon the concept of the universe as a simulation and an understanding of esoteric mathematical notation. So here is a more intuitive version:

A strange man in a dark hood comes up to you on the street. “Give me five dollars,” he says, “or I will destroy an entire planet filled with ten billion innocent people. I cannot prove to you that I have this power, but how much is an innocent life worth to you? Even if it is as little as $5,000, are you really willing to bet on ten trillion to one odds that I am lying?”

Do you give him the five dollars? I suspect that you do not. Indeed, I suspect that you’d be less likely to give him the five dollars than if he had merely said he was homeless and asked for five dollars to help pay for food. (Also, you may have objected that you value innocent lives, even faraway strangers you’ll never meet, at more than $5,000 each—but if that’s the case, you should probably be donating more, because the world’s best charities can save a life for about $3,000.)

But therein lies the paradox: Are you really willing to bet on ten trillion to one odds?

This argument gives me much the same feeling as the Ontological Argument; as Russell said of the latter, “it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.” It wasn’t until I read this post on GiveWell that I could really formulate the answer clearly enough to explain it.

The apparent force of Pascal’s Mugging comes from the idea of expected utility: Even if the probability of an event is very small, if it has a sufficiently great impact, the expected utility can still be large.

The problem with this argument is that extraordinary claims require extraordinary evidence. If a man held a gun to your head and said he’d shoot you if you didn’t give him five dollars, you’d give him five dollars. This is a plausible claim and he has provided ample evidence. If he were instead wearing a bomb vest (or even just really puffy clothing that could conceal a bomb vest), and he threatened to blow up a building unless you gave him five dollars, you’d probably do the same. This is less plausible (what kind of terrorist only demands five dollars?), but it’s not worth taking the chance.

But when he claims to have a Death Star parked in orbit of some distant planet, primed to make another Alderaan, you are right to be extremely skeptical. And if he claims to be a being from beyond our universe, primed to destroy so many lives that we couldn’t even write the number down with all the atoms in our universe (which was actually Yudkowsky’s original argument), to say that you are extremely skeptical seems a grievous understatement.

That GiveWell post provides a way to make this intuition mathematically precise in terms of Bayesian logic. If you have a normal prior with mean 0 and standard deviation 1, and you are presented with a likelihood with mean X and standard deviation X, what should you make your posterior distribution?

Normal priors are quite convenient; they conjugate nicely. The precision (inverse variance) of the posterior distribution is the sum of the two precisions, and the mean is a weighted average of the two means, weighted by their precision.

So the posterior variance is 1/(1 + 1/X^2).

The posterior mean is 1/(1+1/X^2)*(0) + (1/X^2)/(1+1/X^2)*(X) = X/(X^2+1).

That is, the mean of the posterior distribution is just barely higher than zero—and in fact, it is decreasing in X, if X > 1.

For those who don’t speak Bayesian: If someone says he’s going to have an effect of magnitude X, you should be less likely to believe him the larger that X is. And indeed this is precisely what our intuition said before: If he says he’s going to kill one person, believe him. If he says he’s going to destroy a planet, don’t believe him, unless he provides some really extraordinary evidence.

What sort of extraordinary evidence? To his credit, Yudkowsky imagined the sort of evidence that might actually be convincing:

If a poorly-dressed street person offers to save 10^(10^100) lives (a googolplex of lives) for $5 using their Matrix Lord powers, and you claim to assign this scenario less than 10^-(10^100) probability, then apparently you should continue to believe absolutely that their offer is bogus even after they snap their fingers and cause a giant silhouette of themselves to appear in the sky.

This post he called “Pascal’s Muggle”, after the term from the Harry Potter series, since some of the solutions that had been proposed for dealing with Pascal’s Mugging had resulted in a situation almost as absurd, in which the mugger could exhibit powers beyond our imagining and yet nevertheless we’d never have sufficient evidence to believe him.

So, let me go on record as saying this: Yes, if someone snaps his fingers and causes the sky to rip open and reveal a silhouette of himself, I’ll do whatever that person says. The odds are still higher that I’m dreaming or hallucinating than that this is really a being from beyond our universe, but if I’m dreaming, it makes no difference, and if someone can make me hallucinate that vividly he can probably cajole the money out of me in other ways. And there might be just enough chance that this could be real that I’m willing to give up that five bucks.

These seem like really strange thought experiments, because they are. But like many good thought experiments, they can provide us with some important insights. In this case, I think they are telling us something about the way human reasoning can fail when faced with impacts beyond our normal experience: We are in danger of both over-estimating and under-estimating their effects, because our brains aren’t equipped to deal with magnitudes and probabilities on that scale. This has made me realize something rather important about both Singularitarianism and religion, but I’ll save that for next week’s post.

What if the charitable deduction were larger?

Nov 3 JDN 2458791

Right now, the charitable tax deduction is really not all that significant. It makes donating to charity cheaper, but you still always end up with less money after donating than you had before. It might cause you to donate more than you otherwise would have, but you’ll still only give to a charity you already care about.

This is because the tax deduction applies to your income, rather than your taxes directly. So if you make $100,000 and donate $10,000, you pay taxes as if your income were $90,000. Say your tax rate is 25%; then you go from paying $25,000 and keeping $75,000 to paying $22,500 and keeping $67,500. The more you donate, the less money you will have to keep.

Many people don’t seem to understand this; they seem to think that rich people can actually get richer by donating to charity. That can’t be done in our current tax system, or at least not legally. (There are fraudulent ways to do so; but there are fraudulent ways to do lots of things.) Part of the confusion may be related to the fact that people don’t seem to understand how tax brackets work; they worry about being “pushed into a higher tax bracket” as though this could somehow reduce their after-tax income, but that doesn’t happen. That isn’t how tax brackets work.

Some welfare programs work that way—for instance, seeing your income rise high enough to lose Medicaid eligibility can be bad enough that you would prefer to have less income—but taxes themselves do not.

The graph below shows the actual average tax rate (red) and marginal tax rate (purple) of the current US federal income tax:

[Graph: average tax rate (red) and marginal tax rate (purple) under the current US federal income tax]
From that graph alone, you might think that going to a higher tax bracket could result in lower after-tax income. But the next graph, of before-tax (blue) and after-tax (green) income shows otherwise:

[Graph: before-tax (blue) and after-tax (green) income]

All that tax deductions can do is reduce your taxable income. Thus the tax deduction benefits you if you were already donating, but never leaves you richer than you would have been without donating at all.

For example, if you have an income of $700,000, you would pay $223,000 in taxes and keep $477,000 in after-tax income. If you instead donate $100,000, your adjusted gross income will be reduced to $600,000, you will only pay $186,000 in taxes, and you will keep $414,000 in after-tax income. If there were no tax deduction, you would still have to pay $223,000 in taxes, and your after-tax income would be only $377,000. So you do benefit from the tax deduction; but there is no amount of donation which will actually increase your after-tax income to above $477,000.
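To make the comparison concrete, here’s the $700,000 example in Python, using the (approximate) tax amounts from the text rather than a full bracket schedule:

```python
# Approximate tax owed at the two taxable-income levels in the example.
TAX = {700_000: 223_000, 600_000: 186_000}

income, donation = 700_000, 100_000

# With the deduction: taxable income falls by the amount donated.
keep_with_deduction = income - donation - TAX[income - donation]
# Without any deduction: full tax, and the donation still comes out of pocket.
keep_without_deduction = income - donation - TAX[income]
# Not donating at all:
keep_no_donation = income - TAX[income]

print(keep_with_deduction)    # 414,000
print(keep_without_deduction) # 377,000
print(keep_no_donation)       # 477,000 -- always the largest
```

The deduction narrows the gap between donating and not donating, but as the last line shows, it never closes it.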

But we wouldn’t have to do it this way. We could instead apply the deduction as a tax credit, which would make the effect of the deduction far larger.

Several years back, Miles Kimball (an economist who formerly worked at Michigan, now at UC Boulder) proposed a quite clever change to the tax system:

My proposal is to raise marginal tax rates above about $75,000 per person–or $150,000 per couple–by 10% (a dime on every extra dollar), but offer a 100% tax credit for public contributions up to the entire amount of the tax surcharge.

Kimball’s argument for the policy is mainly that this would make a tax increase more palatable, by giving people more control over where their money goes. This is surely true, and a worthwhile endeavor.

But the even larger benefit might come from the increased charitable donations. If we limited the tax credit to particularly high-impact charities, we would increase the donations to those charities. Whereas in the current system you get the same deduction regardless of where you give your money, even though we know that some charities are literally hundreds of times as cost-effective as others.

In fact, we might not even want to limit the tax credit to that 10% surcharge. If people want to donate more than 10% of their income to high-impact charities, perhaps we should let them. This would mean that the federal deficit could actually increase under this policy, but if so, there would have to be so much money donated that we’d most likely end world hunger. That’s a tradeoff I’m quite willing to make.

In principle, we could even introduce a tax credit that is greater than 100%—say for instance you get a 120% credit for donations to the top-rated charities. This is not mathematically inconsistent, though it is surely a very bad idea. In that case, it absolutely would be possible to end up with more money than you started with, and the richer you are, the more you could get. There would effectively be a positive return on charitable donations, with the money paid for from the government budget. Bill Gates for instance could pay $10 billion a year to charity and the government would not only pay for it, but also have to give him an extra $2 billion. So even for the best charities—which probably are actually a good deal more cost-effective than the US government—we should cap the tax credit at 100%.

Obvious choices for high-impact charities include UNICEF, the Red Cross, GiveDirectly, and the Malaria Consortium. We would need some sort of criteria to decide which charities should get the benefits; I’m thinking we could have some sort of panel of experts who rate charities based on their cost-effectiveness.

It wouldn’t have to be all-or-nothing, either; charities with good but not top ratings could get an increased deduction but not a 100% deduction. The expert panel could rate charities on a scale from 0 to 10, and then anything above 5 gets an (X-5)*20% tax credit, so that only a perfect 10 earns the full 100%.
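One possible calibration of such a sliding scale (the 0-to-10 rating and the per-point increment here are illustrative assumptions, not a worked-out policy):

```python
def tax_credit_rate(rating, per_point=0.20, cap=1.0):
    """Credit rate for a charity rated 0-10: nothing at or below a rating
    of 5, then per_point of credit for each point above 5, capped at 100%."""
    return max(0.0, min(cap, (rating - 5) * per_point))

for r in range(0, 11):
    print(r, tax_credit_rate(r))
# rating 5 -> 0%, rating 7 -> 40%, rating 10 -> the full 100%
```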

In effect, the current policy says, “If you give to charity, you don’t have to pay taxes on the money you gave; but all of your other taxes still apply.” The new policy would say, “You can give to a top-impact charity instead of paying taxes.”

Americans hate taxes and already give a lot to charity, but most of those donations are to relatively ineffective charities. This policy could incentivize people to give more or at least give to better places, probably without hurting the government budget—and if it does hurt the government budget, the benefits will be well worth the cost.

How much should we give?

Nov 4 JDN 2458427

How much should we give of ourselves to others?

I’ve previously struggled with this basic question when it comes to donating money; I have written multiple posts on it now, some philosophical, some empirical, and some purely mathematical.

But the question is broader than this: We don’t simply give money. We also give effort. We also give emotion. Above all, we also give time. How much should we be volunteering? How many protest marches should we join? How many Senators should we call?

It’s easy to convince yourself that you aren’t doing enough. You can always point to some hour when you weren’t doing anything particularly important, and think about all the millions of lives that hang in the balance on issues like poverty and climate change, and then feel a wave of guilt for spending that hour watching Netflix or playing video games instead of doing one more march. This, however, is clearly unhealthy: You won’t actually make yourself into a more effective activist, you’ll just destroy yourself psychologically and become no use to anybody.

I previously argued for a sort of Kantian notion that we should commit to giving our fair share, defined as the amount we would have to give if everyone gave that amount. This is quite appealing, and if I can indeed get anyone to donate 1% of their income as a result, I will be quite glad. (If I can get 100 people to do so, that’s better than I could ever have done myself—a good example of highly cost-effective slacktivism.)

Lately I have come to believe that this is probably inadequate. We know that not everyone will take this advice, which means that by construction it won’t be good enough to actually solve global problems.

This means I must make a slightly greater demand: Define your fair share as the amount you would have to give if everyone who is likely to give at all gave that amount.

Unfortunately, this question is considerably harder. It may not even have a unique answer. The number of people n willing to give an amount x obviously depends upon the amount x itself, and we are nowhere close to knowing what that function n(x) looks like.

So let me instead put some mathematical constraints on it, by choosing an elasticity. Instead of an elasticity of demand or elasticity of supply, we could call this an elasticity of contribution.

Presumably the elasticity is negative: The more you ask of people, the fewer people you’ll get to contribute.

Suppose that the elasticity is something like -0.5, where contribution is relatively inelastic. This means that if you increase the amount you ask for by 2%, you’ll only decrease the number of contributors by 1%. In that case, you should be like Peter Singer and ask for everything. At that point, you’re basically counting on Bill Gates to save us, because nobody else is giving anything. The total amount contributed n(x) * x is increasing in x.

On the other hand, suppose that the elasticity is something like -2, where contribution is relatively elastic. This means that if you increase the amount you ask for by 2%, you will decrease the number of contributors by 4%. In that case, you should ask for very little. You’re asking everyone in the world to give 1% of their income, as I did earlier. The total amount contributed n(x) * x is now decreasing in x.

But there is also a third option: What if the elasticity is exactly -1, unit elastic? Then if you increase the amount you ask for by 2%, you’ll decrease the number of contributors by 2%. Then it doesn’t matter how much you ask for: The total amount contributed n(x) * x is constant.
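These three regimes are easy to check numerically. Assume a constant-elasticity form n(x) = A x^e (a stylized assumption purely for illustration; A merely scales the population):

```python
def total_contributed(x, e, A=1000.0):
    """Total raised if n(x) = A * x**e people each give x."""
    return (A * x ** e) * x

for e in (-0.5, -1.0, -2.0):
    small, large = total_contributed(1.0, e), total_contributed(10.0, e)
    if large > 1.001 * small:
        regime = "total rises"
    elif large < 0.999 * small:
        regime = "total falls"
    else:
        regime = "total roughly constant"
    print(f"elasticity {e}: asking ten times as much -> {regime}")
```

Inelastic contribution (e = -0.5) rewards asking each person for more; elastic contribution (e = -2) rewards asking many people for a little; at e = -1 it makes no difference.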

Of course, there’s no guarantee that the elasticity is constant over all possible choices of x—indeed, it would be quite surprising if it were. A quite likely scenario is that contribution is inelastic for small amounts, then passes through a regime where it is nearly unit elastic, and finally it becomes elastic as you start asking for really large amounts of money.

The simplest way to model that is to just assume that n(x) is linear in x, something like n = N – k x.

There is a parameter N that sets the maximum number of people who will ever donate, and a parameter k that sets how rapidly the number of contributors drops off as the amount asked for increases.

The first-order condition for maximizing n(x) * x is then quite simple: x = N/(2k)

This actually turns out to be precisely the point at which the elasticity of contribution is -1.

The total amount you can get under that condition is N^2/(4k)
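A quick numerical check of this, with made-up values N = 1000 and k = 2:

```python
# Linear contribution model: n(x) = N - k*x people each give x.
N, k = 1000.0, 2.0

def total(x):
    return (N - k * x) * x

x_star = N / (2 * k)                         # optimum from the first-order condition
peak = N ** 2 / (4 * k)                      # the maximum total raised
elasticity = -k * x_star / (N - k * x_star)  # elasticity of n(x) at the optimum

print(x_star, total(x_star) == peak, elasticity)
```

The grid of possible asks confirms the closed form: the total peaks at x = N/(2k), exactly where the elasticity hits -1.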

Of course, I have no idea what N and k are in real life, so this isn’t terribly helpful. But what I really want to know is whether we should be asking for more money from each person, or asking for less money and trying to get more people on board.

In real life we can sometimes do both: Ask each person to give more than they are presently giving, whatever they are presently giving. (Just be sure to run your slogans by a diverse committee, so you don’t end up with “I’ve upped my standards. Now, up yours!”) But since we’re trying to find a benchmark level to demand of ourselves, let’s ignore that for now.

About 25% of American adults volunteer some of their time, averaging 140 hours of volunteer work per year. This is about 1.6% of all the hours in a year, or 2.4% of all waking hours. Total monetary contributions in the US reached $400 billion for the first time this year; this is about 2.0% of GDP. So the balance between volunteer hours and donations is actually pretty even. It would probably be better to tilt it a bit more toward donations, but it’s really not bad. About 60% of US households made some sort of charitable contribution, though only half of these received the charitable tax deduction.

This suggests to me that the quantity of people who give is probably about as high as it’s going to get—and therefore we need to start talking more about the amount of money. We may be in the inelastic regime, where the way to increase total contributions is to demand more from each individual.

Our goal is to increase the total contribution to poverty eradication by about 1% of GDP in both the US and Europe. So if 60% of people give, and currently total contributions are about 2.0% of GDP, the average contribution is about 3.3% of the contributor’s gross income, which implies that contributors’ combined income is roughly 60% of GDP. Should I therefore just tell them all to donate 4.3%? Not quite: an extra 1% of contributors’ income only comes to about 0.6% of GDP, and some of them might drop out entirely, so the rest will have to give more to compensate.
Without knowing the exact form of the function n(x), I can’t say precisely what the optimal value is. But it is most likely somewhat larger than 4.3%; 5% would be a nice round number in the right general range. Allowing for some dropout, this could raise contributions in the US to around 2.6% of GDP, or about $500 billion. That’s a 25% increase over the current level, which is large, but feasible.
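A back-of-the-envelope sketch of this arithmetic, using the round figures above (US GDP taken as roughly $20 trillion) and assuming, unrealistically, that nobody drops out; dropout is exactly why the realized total would come in lower:

```python
gdp = 20e12                   # US GDP, roughly
current_total = 0.020 * gdp   # about $400 billion donated per year
avg_rate = 0.033              # average gift as a share of a contributor's income

# Income of the 60% of households who give, implied by the figures above:
contributor_income = current_total / avg_rate   # about $12 trillion

# Ceiling if every current giver raised their rate to 5% and nobody quit:
new_total = 0.05 * contributor_income
print(f"${new_total / 1e9:.0f} billion per year")
```

That no-dropout ceiling is about $600 billion; with some givers quitting, something in the neighborhood of $500 billion seems more realistic.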

Accomplishing a similar increase in Europe would then give us a total of $200 billion per year in additional funds to fight global poverty; this might not quite be enough to end world hunger (depending on which estimate you use), but it would definitely have a large impact.

I asked you before to give 1%. I am afraid I must now ask for more. Set a target of 5%. You don’t have to reach it this year; you can gradually increase your donations each year for several years (I call this “Save More Lives Tomorrow”, after Thaler’s highly successful program “Save More Tomorrow”). This is in some sense more than your fair share; I’m relying on the assumption that half the population won’t actually give anything. But ultimately this isn’t about what’s fair to us. It’s about solving global problems.

What we could, what we should, and what we must

May 27 JDN 2458266

In one of the most famous essays in all of ethical philosophy, Peter Singer argued that we are morally obligated to give so much to charity that we would effectively reduce ourselves to poverty only slightly better than what our donations sought to prevent. His argument is a surprisingly convincing one, especially for such a radical proposition. Indeed, one of the core activities of the Effective Altruism movement has basically been finding ways to moderate Singer’s argument without giving up on its core principles, because it’s so obvious both that we ought to do much more to help people around the world and that there’s no way we’re ever going to do what that argument actually asks of us.

The most cost-effective charities in the world can save a human life for an average cost of under $4,000. The maneuver that Singer basically makes is quite simple: If you know that you could save someone’s life for $4,000, you have $4,000 to spend, and instead you spend that $4,000 on something else, aren’t you saying that whatever you did spend it on was more important than saving that person’s life? And is that really something you believe?

But if you think a little more carefully, it becomes clear that things are not quite so simple. You aren’t being paid $4,000 to kill someone, first of all. If you were willing to accept $4,000 as sufficient payment to commit a murder, you would be, quite simply, a monster. Implicitly the “infinite identical psychopath” of neoclassical rational agent models would be willing to do such a thing, but very few actual human beings—even actual psychopaths—are that callous.

Obviously, we must refrain from murdering people, even for amounts far in excess of $4,000. If you were offered the chance to murder someone for $4 billion, I can understand why you would be tempted to do such a thing. Think of what you could do with all that money! Not only would you and everyone in your immediate family be independently wealthy for life, you could donate billions of dollars to charity and save as much as a million lives. What’s one life for a million? Even then, I have a strong intuition that you shouldn’t commit this murder—but I have never been able to find a compelling moral argument for why. The best I’ve been able to come up with is a sort of Kantian notion: What if everyone did this?

Since the most plausible scenario is that the $4 billion comes from existing wealth, all those murders would simply be transferring wealth around, from unknown sources. If you stipulate where the wealth comes from, the dilemma can change quite a bit.

Suppose for example the $4 billion is confiscated from Bashar Al-Assad. That would be in itself a good thing, lessening the power of a genocidal tyrant. So we need to add that to the positive side of the ledger. It is probably worth killing one innocent person just to undermine Al-Assad’s power; indeed, the US Air Force certainly seems to think so, as they average more than one civilian fatality every day in airstrikes.

Now suppose the wealth was extracted by clever financial machinations that took just a few dollars out of every bank account in America. This would be in itself a bad thing, but perhaps not a terrible thing, especially since we’re planning on giving most of it to UNICEF. Those people should have given it anyway, right? This sounds like a pretty good movie, actually; a cyberpunk Robin Hood basically.

Next, suppose it was obtained by stealing the life savings of a million poor people in Africa. Now the method of obtaining the money is so terrible that it’s not clear that funneling it through UNICEF would compensate, even if you didn’t have to murder someone to get it.

Finally, suppose that the wealth is actually created anew—not printed money from the Federal Reserve, but some new technology that will increase the world’s wealth by billions of dollars yet requires the death of an innocent person to create. In this scenario, the murder has become something more like the inherent risk in human subjects biomedical research, and actually seems justifiable. And indeed, that fits with the Kantian answer, for if we all had the chance to kill one person in order to create something that would increase the wealth of the world by $4 billion, we could turn this planet into a post-scarcity utopia within a generation for fewer deaths than are currently caused by diabetes.

Anyway, my point here is that the detailed context of a decision actually matters a great deal. We can’t simply abstract away from everything else in the world and ask whether the money is worth the life.

When we consider this broader context with regard to the world’s most cost-effective charities, it becomes apparent that a small proportion of very dedicated people giving huge proportions of their income to charity is not the kind of world we want to see.

If I actually gave so much that I equalized my marginal utility of wealth to that of a child dying of malaria in Ghana, I would have to donate over 95% of my income—and well before that point, I would be homeless and impoverished. This actually seems penny-wise and pound-foolish even from the perspective of total altruism: If I stop paying rent, it gets a lot harder for me to finish my doctorate and become a development economist. And even if I never donated another dollar, the world would be much better off with one more good development economist than with even another $23,000 to the Against Malaria Foundation. Once you factor in the higher income I’ll have (and proportionately higher donations I’ll make), it’s obviously the wrong decision for me to give 95% of $25,000 today rather than 10% of $70,000 every year for the next 20 years after I graduate.

But the optimal amount for me to donate from that perspective is whatever the maximum would be that I could give without jeopardizing my education and career prospects. This is almost certainly more than I am presently giving. Exactly how much more is actually not all that apparent: It’s not enough to say that I need to be able to pay rent, eat three meals a day, and own a laptop that’s good enough for programming and statistical analysis. There’s also a certain amount that I need for leisure, to keep myself at optimal cognitive functioning for the next several years. Do I need that specific video game, that specific movie? Surely not—but if I go the next ten years without ever watching another movie or playing another video game, I’m probably going to be in trouble psychologically. But what exactly is the minimum amount to keep me functioning well? And how much should I be willing to spend attending conferences? Those can be important career-building activities, but they can also be expensive wastes of time.

Singer acts as though jeopardizing your career prospects is no big deal, but this is clearly wrong: The harm isn’t just to your own well-being, but also to your productivity and earning power that could have allowed you to donate more later. You are a human capital asset, and you are right to invest in yourself. Exactly how much you should invest in yourself is a much harder question.
Such calculations are extremely difficult to do. There are all sorts of variables I simply don’t know, and don’t have any clear way of finding out. It’s not a good sign for an ethical theory when even someone with years of education and expertise on specifically that topic still can’t figure out the answer. Ethics is supposed to be something we can apply to everyone.

So I think it’s most helpful to think in those terms: What could we apply to everyone? What standard of donation would be high enough if we could get everyone on board?

World poverty is rapidly declining. The direct poverty gap at the UN poverty line of $1.90 per day is now only $80 billion. Realistically, we couldn’t simply close that gap precisely (there would also be all sorts of perverse incentives if we tried to do it that way). But the standard estimate that it would take about $300 billion per year in well-targeted spending to eliminate world hunger is looking very good.

How much would each person, just those in the middle class or above within the US or the EU, have to give in order to raise this much?
89% of US income is received by the top 60% of households (who I would say are unambiguously “middle class or above”). Income inequality is not as extreme within the EU, so the proportion of income received by the top 60% seems to be more like 75%.

89% of US GDP plus 75% of EU GDP is all together about $29 trillion per year. This means that in order to raise $300 billion, each person in the middle class or above would need to donate just over one percent of their income.
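The arithmetic, using those round figures (US GDP taken as roughly $20 trillion and EU GDP as roughly $15 trillion, which are the assumptions consistent with the $29 trillion total):

```python
us_base = 0.89 * 20e12           # 89% of US GDP goes to the top 60% of households
eu_base = 0.75 * 15e12           # 75% of EU GDP goes to its top 60%
total_base = us_base + eu_base   # about $29 trillion per year

needed = 300e9                   # ~$300 billion/year to eliminate world hunger
print(f"required share of income: {needed / total_base:.2%}")
```

It works out to just over one percent of each middle-class-or-above person's income.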

Not 95%. Not 25%. Not even 10%. Just 1%. That would be enough.

Of course, more is generally better—at least until you start jeopardizing your career prospects. So by all means, give 2% or 5% or even 10%. But I really don’t think it’s helpful to make people feel guilty about not giving 95% when all we really needed was for everyone to give 1%.

There is an important difference between what we could do, what we should do, and what we must do.

What we must do are moral obligations so strong they are essentially inviolable: We must not murder people. There may be extreme circumstances where exceptions can be made (such as collateral damage in war), and we can always come up with hypothetical scenarios that would justify almost anything, but for the vast majority of people the vast majority of time, these ethical rules are absolutely binding.

What we should do are moral obligations that are strong enough to be marks against your character if you break them, but not so absolutely binding that you have to be a monster not to follow them. This is where I put donating at least 1% of your income. (This is also where I put being vegetarian, but perhaps that is a topic for another time.) You really ought to do it, and you are doing something wrongful if you don’t—but most people don’t, and you are not a terrible person if you don’t.

This latter category is in part socially constructed, based on the norms people actually follow. Today, slavery is obviously a grave crime, and to be a human trafficker who participates in it you must be a psychopath. But two hundred years ago, things were somewhat different: Slavery was still wrong, yes, but it was quite possible to be an ordinary person who was generally an upstanding citizen in most respects and yet still own slaves. I would still condemn people who owned slaves back then, but not nearly as forcefully as I would condemn someone who owned slaves today. Two hundred years from now, perhaps vegetarianism will move up a category: The norm will be that everyone eats only plants, and someone who went out of their way to kill and eat a pig would have to be a psychopath. Eating meat is already wrong today—but it will be more wrong in the future. I’d say the same about donating 1% of your income, but actually I’m hoping that by two hundred years from now there will be no more poverty left to eradicate, and donation will no longer be necessary.

Finally, there is what we could do—supererogatory, even heroic actions of self-sacrifice that would make the world a better place, but cannot be reasonably expected of us. This is where donating 95% or even 25% of your income would fall. Yes, absolutely, that would help more people than donating 1%; but you don’t owe the world that much. It’s not wrong for you to contribute less than this. You don’t need to feel guilty for not giving this much.

But I do want to make you feel guilty if you don’t give at least 1%. Don’t tell me you can’t. You can. If your income is $30,000 per year, that’s $300 per year. If you needed that much for a car repair, or dental work, or fixing your roof, you’d find a way to come up with it. No one in the First World middle class is that liquidity-constrained. It is true that half of Americans say they couldn’t come up with $400 in an emergency, but I frankly don’t believe it. (I believe it for the bottom 25% or so, who are actually poor; but not half of Americans.) If you have even one credit card that’s not maxed out, you can do this—and frankly even if a card is maxed out, you can probably call them and get them to raise your limit. There is something you could cut out of your spending that would allow you to get back 1% of your annual income. I don’t know what it is, necessarily: Restaurants? Entertainment? Clothes? But I’m not asking you to give a third of your income—I’m asking you to give one penny out of every dollar.

I give considerably more than that; my current donation target is 8% and I’m planning on raising it to 10% or more once I get a high-paying job. I live on a grad student salary which is less than the median personal income in the US. So I know it can be done. But I am very intentionally not asking you to give this much; that would be above and beyond the call of duty. I’m only asking you to give 1%.

Two terms in marginal utility of wealth

JDN 2457569

This post is going to be a little wonkier than most; I’m actually trying to sort out my thoughts and draw some public comment on a theory that has been dancing around my head for a while. The original idea of separating terms in marginal utility of wealth was actually suggested by my boyfriend, and from there I’ve been trying to give it some more mathematical precision to see if I can come up with a way to test it experimentally. My thinking is also influenced by a paper Miles Kimball wrote about the distinction between happiness and utility.

There are lots of ways one could conceivably spend money—everything from watching football games to buying refrigerators to building museums to inventing vaccines. But insofar as we are rational (and we are after all about 90% rational), we’re going to try to spend our money in such a way that its marginal utility is approximately equal across various activities. You’ll buy one refrigerator, maybe two, but not seven, because the marginal utility of refrigerators drops off pretty fast; instead you’ll spend that money elsewhere. You probably won’t buy a house that’s twice as large if it means you can’t afford groceries anymore. I don’t think our spending is truly optimal at maximizing utility, but I think it’s fairly good.

Therefore, it doesn’t make much sense to break down marginal utility of wealth into all these different categories—cars, refrigerators, football games, shoes, and so on—because we already do a fairly good job of equalizing marginal utility across all those different categories. I could see breaking it down into a few specific categories, such as food, housing, transportation, medicine, and entertainment (and this definitely seems useful for making your own household budget); but even then, I don’t get the impression that most people routinely spend too much on one of these categories and not enough on the others.

However, I can think of two quite different fundamental motives behind spending money, which I think are distinct enough to be worth separating.

One way to spend money is on yourself, raising your own standard of living, making yourself more comfortable. This would include both football games and refrigerators, really anything that makes your life better. We could call this the consumption motive, or maybe simply the self-directed motive.

The other way is to spend it on other people, which, depending on your personality, can take either the form of philanthropy to help others, or a means of self-aggrandizement to raise your own relative status. It’s also possible to do both at the same time in various combinations; while the Gates Foundation is almost entirely philanthropic and Trump Tower is almost entirely self-aggrandizing, Carnegie Hall falls somewhere in between, being at once a significant contribution to our society and an obvious attempt by Carnegie to bring praise and adulation to himself. I would also include spending on Veblen goods that are mainly to show off your own wealth and status in this category. We can call this spending the philanthropic/status motive, or simply the other-directed motive.

There is some spending which combines both motives: A car is surely useful, but a Ferrari is mainly for show—but then, a Lexus or a BMW could be either to show off or really because you like the car better. Some form of housing is a basic human need, and bigger, fancier houses are often better, but the main reason one builds mansions in Beverly Hills is to demonstrate to the world that one is fabulously rich. This complicates the theory somewhat, but basically I think the best approach is to try to separate a sort of “spending proportion” on such goods, so that say $20,000 of the Lexus is for usefulness and $15,000 is for show. Empirically this might be hard to do, but theoretically it makes sense.

One of the central mysteries in cognitive economics right now is that self-reported happiness rises very little, if at all, as income increases (a finding which was recently replicated even in poor countries, where we might not have expected it to hold), yet self-reported satisfaction continues to rise indefinitely. A number of theories have been proposed to explain this apparent paradox.

This model might just be able to account for that, if by “happiness” we’re really talking about the self-directed motive, and by “satisfaction” we’re talking about the other-directed motive. Self-reported happiness seems to obey a rule that $100 is worth as much to someone with $10,000 as $25 is to someone with $5,000, or $400 to someone with $20,000.

Self-reported satisfaction seems to obey a different rule, such that each unit of additional satisfaction requires a roughly equal proportional increase in income.
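Both rules are easy to verify from the marginal utilities they imply; here is a quick check (r and k set to 1, since only ratios matter):

```python
import math

def dh(x, r=1.0):   # marginal happiness under h'(x) = r / x**2
    return r / x ** 2

def ds(x, k=1.0):   # marginal satisfaction under s'(x) = k / x
    return k / x

# Equal happiness increments: $25 at $5,000 wealth, $100 at $10,000,
# and $400 at $20,000 are all worth the same.
assert math.isclose(25 * dh(5_000), 100 * dh(10_000))
assert math.isclose(100 * dh(10_000), 400 * dh(20_000))

# Equal satisfaction increments: the same *proportional* raise is worth
# the same at any income level (here, a 10% raise).
assert math.isclose(1_000 * ds(10_000), 5_000 * ds(50_000))
print("both rules hold")
```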

By having a utility function with two terms, we can account for both of these effects. Total utility will be u(x), happiness h(x), and satisfaction s(x).

u(x) = h(x) + s(x)

To obey the above rule, happiness must obey harmonic utility, like this, for some constants h0 and r:

h(x) = h0 – r/x

Proof of this is straightforward, though to keep it simple I’ve hand-waved why it’s a power law:

Given

h'(2x) = 1/4 h'(x)

Let

h'(x) = r x^n

h'(2x) = r (2x)^n

r (2x)^n = 1/4 r x^n

n = -2

h'(x) = r/x^2

h(x) = – r x^(-1) + C

h(x) = h0 – r/x

Miles Kimball also has some more discussion on his blog about how a utility function of this form works. (His statement about redistribution at the end is kind of baffling though; sure, dollar for dollar, redistributing wealth from the middle class to the poor would produce a higher gain in utility than redistributing wealth from the rich to the middle class. But neither is as good as redistributing from the rich to the poor, and the rich have a lot more dollars to redistribute.)

Satisfaction, however, must obey logarithmic utility, like this, for some constants s0 and k:

s(x) = s0 + k ln(x)

Proof of this is very simple, almost trivial:

Given

s'(x) = k/x

s(x) = k ln(x) + s0

Both of these functions actually have a serious problem: as x approaches zero, they go to negative infinity. For self-directed utility this almost makes sense (if your real consumption goes to zero, you die), but it makes no sense at all for other-directed utility; and since there are causes most of us would willingly die for, the disutility of dying should be large, but not infinite.

Therefore I think it’s probably better to use x + 1 in place of x:

h(x) = h0 – r/(x+1)

s(x) = s0 + k ln(x+1)

This makes s0 the baseline satisfaction of having no other-directed spending, though the baseline happiness of zero self-directed spending is actually h0 – r rather than just h0. If we want it to be h0, we could use this form instead:

h(x) = h0 + r x/(x+1)

This looks quite different, but actually only differs by a constant, since r x/(x+1) = r – r/(x+1).

Therefore, my final answer for the utility of wealth (or possibly income, or spending? I’m not sure which interpretation is best just yet) is actually this:

u(x) = h(x) + s(x)

h(x) = h0 + r x/(x+1)

s(x) = s0 + k ln(x+1)

Marginal utility is then the derivatives of these:

h'(x) = r/(x+1)^2

s'(x) = k/(x+1)

Let’s assign some values to the constants so that we can actually graph these.

Let h0 = s0 = 0, so our baseline is just zero.

Furthermore, let r = k = 1, which would mean that the value of $1 is the same whether spent either on yourself or on others, if $1 is all you have. (This is probably wrong, actually, but it’s the simplest to start with. Shortly I’ll discuss what happens as you vary the ratio k/r.)
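For concreteness, here is a minimal sketch of the resulting functions, which is what the graphs below plot:

```python
import math

r, k = 1.0, 1.0     # equal weight on self- and other-directed spending
h0, s0 = 0.0, 0.0   # zero baseline

def h(x):   # self-directed (happiness) term: saturates at h0 + r
    return h0 + r * x / (x + 1)

def s(x):   # other-directed (satisfaction) term: grows without bound
    return s0 + k * math.log(x + 1)

def u(x):   # total utility of wealth
    return h(x) + s(x)

for x in (0, 1, 10, 100, 1000):
    print(f"x = {x:>4}: h = {h(x):.3f}, s = {s(x):.3f}, u = {u(x):.3f}")
```

Note how h(x) is already about 91% of the way to its ceiling by x = 10, while s(x) keeps climbing; that asymmetry drives everything that follows.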

Here is the result graphed on a linear scale:

[Figure: total utility u(x) with its happiness and satisfaction components, wealth on a linear scale]

And now, graphed with wealth on a logarithmic scale:

[Figure: the same utility curves, wealth on a logarithmic scale]

As you can see, self-directed marginal utility drops off much faster than other-directed marginal utility, so the amount you spend on others relative to yourself rapidly increases as your wealth increases. If that doesn’t sound right, remember that I’m including Veblen goods as “other-directed”; when you buy a Ferrari, it’s not really for yourself. While proportional rates of charitable donation do not increase as wealth increases (it’s actually a U-shaped pattern, largely driven by poor people giving to religious institutions), they probably should (people should really stop giving to religious institutions! Even the good ones aren’t cost-effective, and some are very, very bad.). Furthermore, if you include spending on relative power and status as the other-directed motive, that kind of spending clearly does proportionally increase as wealth increases—gotta keep up with those Joneses.

If r/k = 1, that basically means you value others exactly as much as yourself, which I think is implausible (maybe some extreme altruists do that, and Peter Singer seems to think this would be morally optimal). r/k < 1 would mean that your first dollar is worth more spent on others than on yourself, which not even Peter Singer believes. I think r/k = 10 is a more reasonable estimate.

For any given value of r/k, there is an optimal ratio of self-directed versus other-directed spending, which can vary based on your total wealth.

Actually deriving what the optimal proportion would be requires a whole lot of algebra in a post that probably already has too much algebra, but the point is, there is one, and it will depend strongly on the ratio r/k, that is, the overall relative importance of self-directed versus other-directed motivation.

Take a look at this graph, which uses r/k = 10.

[Figure: marginal utilities of self-directed and other-directed spending, r/k = 10]

If you only have 2 to spend, you should spend it entirely on yourself, because up to that point the marginal utility of self-directed spending is always higher. If you have 3 to spend, you should spend most of it on yourself, but a little bit on other people, because after you’ve spent about 2.2 on yourself there is more marginal utility for spending on others than on yourself.

If your available wealth is W, you would spend some amount x on yourself, and then W-x on others:

u(x) = h(x) + s(W-x)

u(x) = r x/(x+1) + k ln(W – x + 1)

Then you take the derivative and set it equal to zero to find the local maximum. I’ll spare you the algebra, but this is the result of that optimization:

x = – 1 – r/(2k) + sqrt(r/k) sqrt(2 + W + r/(4k))
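That closed form can be sanity-checked against a brute-force search; here with r = 10, k = 1, and total wealth W = 20 (all illustrative values):

```python
import math

r, k, W = 10.0, 1.0, 20.0

def u(x):   # utility of spending x on yourself and W - x on others
    return r * x / (x + 1) + k * math.log(W - x + 1)

# Closed-form optimum from the first-order condition:
x_star = -1 - r / (2 * k) + math.sqrt(r / k) * math.sqrt(2 + W + r / (4 * k))

# Brute-force check over a fine grid on [0, W]:
x_grid = max((i * W / 100_000 for i in range(100_001)), key=u)

print(round(x_star, 3), round(x_grid, 3))   # the two agree to ~3 decimals
```

With 20 to spend and r/k = 10, the optimum keeps about 9.65 for yourself and gives the rest away.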

As long as k <= r (which more or less means that you care at least as much about yourself as about others—I think this is true of basically everyone) then as long as W > 0 (as long as you have some money to spend) we also have x > 0 (you will spend at least something on yourself).

Below a certain threshold (depending on r/k), the optimal value of x is greater than W, which means that, if possible, you should be receiving donations from other people and spending them on yourself. (Otherwise, just spend everything on yourself). After that, x < W, which means that you should be donating to others. The proportion that you should be donating smoothly increases as W increases, as you can see on this graph (which uses r/k = 10, a figure I find fairly plausible):

[Figure: optimal share of wealth donated to others as wealth increases, r/k = 10]

While I’m sure no one literally does this calculation, most people do seem to have an intuitive sense that you should donate an increasing proportion of your income to others as your income increases, and similarly that you should pay a higher proportion in taxes. This utility function would justify that—which is something that most proposed utility functions cannot do. In most models there is a hard cutoff where you should donate nothing up to the point where your marginal utility is equal to the marginal utility of donating, and then from that point forward you should donate absolutely everything. Maybe a case can be made for that ethically, but psychologically I think it’s a non-starter.

I’m still not sure exactly how to test this empirically. It’s already quite difficult to get people to answer questions about marginal utility in a way that is meaningful and coherent (people just don’t think about questions like “Which is worth more? $4 to me now or $10 if I had twice as much wealth?” on a regular basis). I’m thinking maybe they could play some sort of game where they have the opportunity to make money at the game, but must perform tasks or bear risks to do so, and can then keep the money or donate it to charity. The biggest problem I see with that is that the amounts would probably be too small to really cover a significant part of anyone’s total wealth, and therefore couldn’t cover much of their marginal utility of wealth function either. (This is actually a big problem with a lot of experiments that use risk aversion to try to tease out marginal utility of wealth.) But maybe with a variety of experimental participants, all of whom we get income figures on?