The unsung success of Bidenomics

Aug 13 JDN 2460170

I’m glad to see that the Biden administration is finally talking about “Bidenomics”. We tend to give too much credit or blame for economic performance to the President—particularly relative to Congress—but there are many important ways in which a Presidential administration can shift the priorities of public policy in particular directions, and Biden has clearly done that.

The economic benefits for people of color seem to have been particularly large. The unemployment gap between White and Black workers in the US is now only 2.7 percentage points, while just a few years ago it was over 4pp and at the worst of the Great Recession it surpassed 7pp. During lockdown, unemployment for Black people hit nearly 17%; it is now less than 6%.

The (misnamed, but we’re stuck with it) Inflation Reduction Act in particular has been an utter triumph.

In the past year, real private investment in manufacturing structures (essentially, new factories) has risen from $56 billion to $87 billion—an increase of over 50%, which puts it at its highest level since the turn of the century. The Inflation Reduction Act appears to be largely responsible for this change.

Not many people seem to know this, but the US has also been on the right track with regard to carbon emissions: Per-capita carbon emissions in the US have been trending downward since about 2000, and are now lower than they were in the 1950s. The Inflation Reduction Act now looks poised to double down on that progress, as it has been forecast to reduce our emissions all the way down to 40% below their early-2000s peak.

Somehow, this success doesn’t seem to be getting across. The majority of Americans incorrectly believe that we are in a downturn. Biden’s approval rating is still only 40%, barely higher than Trump’s was. When it comes to political beliefs, most American voters appear to be utterly impervious to facts.

Most Americans do correctly believe that inflation is still a bit high (though many seem to think it’s higher than it is); this is weird, seeing as inflation is normally high when the economy is growing rapidly, and falls too low when we are in a recession. This seems to be the halo effect, rather than any genuine understanding of macroeconomics: downturns are bad and inflation is bad, so they must go together—when in fact, quite the opposite is the case.

People generally feel better about their own prospects than they do about the economy as a whole:

Sixty-four percent of Americans say the economy is worse off compared to 2020, while seventy-three percent of Americans say the economy is worse off compared to five years ago. About two in five of Americans say they feel worse off from five years ago generally (38%) and a similar number say they feel worse off compared to 2020 (37%).

(Did you really have to write out ‘seventy-three percent’? I hate that convention. 73% is so much clearer and quicker to read.)

I don’t know what the Biden administration should do about this. Trying to sell themselves harder might backfire. (And I’m pretty much the last person in the world you should ask for advice about selling yourself.) But they’ve been doing really great work for the US economy… and people haven’t noticed. Thousands of factories are being built, millions of people are getting jobs, and the collective response has been… “meh”.

Against deontology

Aug 6 JDN 2460163

In last week’s post I argued against average utilitarianism, basically on the grounds that it devalues the lives of anyone who isn’t of above average happiness. But you might be tempted to take these as arguments against utilitarianism in general, and that is not my intention.

In fact I believe that utilitarianism is basically correct, though it needs some particular nuances that are often lost in various presentations of it.

Its leading rival is deontology, which is really a broad class of moral theories, some a lot better than others.

What characterizes deontology as a class is that it uses rules, rather than consequences; an act is just right or wrong regardless of its consequences—or even its expected consequences.

There are certain aspects of this which are quite appealing: In fact, I do think that rules have an important role to play in ethics, and as such I am basically a rule utilitarian. Actually trying to foresee all possible consequences of every action we might take is an absurd demand far beyond the capacity of us mere mortals, and so in practice we have no choice but to develop heuristic rules that can guide us.

But deontology says that these are no mere heuristics: They are in fact the core of ethics itself. Under deontology, wrong actions are wrong even if you know for certain that their consequences will be good.

Kantian ethics is one of the most well-developed deontological theories, and I am quite sympathetic to it. In fact, I used to consider myself one of its adherents, but I now consider that view mistaken.

Let’s first dispense with the views of Kant himself, which are obviously wrong. Kant explicitly said that lying is always, always, always wrong, and even when presented with obvious examples where you could tell a small lie to someone obviously evil in order to save many innocent lives, he stuck to his guns and insisted that lying is always wrong.

This is a bit anachronistic, but I think this example will be more vivid for modern readers, and it absolutely is consistent with what Kant wrote about the actual scenarios he was presented with:

You are living in Germany in 1945. You have sheltered a family of Jews in your attic to keep them safe from the Holocaust. Nazi soldiers have arrived at your door, and ask you: “Are there any Jews in this house?” Do you tell the truth?

I think it’s utterly, agonizingly obvious that you should not tell the truth. Exactly what you should do is less obvious: Do you simply lie and hope they buy it? Do you devise a clever ruse? Do you try to distract them in some way? Do you send them on a wild goose chase elsewhere? If you could overpower them and kill them, should you? What if you aren’t sure you can; should you still try? But one thing is clear: You don’t hand over the Jewish family to the Nazis.

Yet when presented with similar examples, Kant insisted that lying is always wrong. He had a theory to back it up, his Categorical Imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

And so, his argument goes: Since it would be obviously incoherent to say that everyone should always lie, lying is wrong, and you’re never allowed to do it. He actually bites a bullet the size of a howitzer round.

Modern deontologists—even those who consider themselves Kantians—are more sophisticated than this. They realize that you could make a rule like “Never lie, except to save the life of an innocent person” or “Never lie, except to stop a great evil.” Either of these would be quite adequate to solve this particular dilemma. And it’s absolutely possible to will that these be universal laws, in the sense that they would apply to anyone. ‘Universal’ doesn’t have to mean ‘applies equally to all possible circumstances’.

There are also a couple of things that deontology does very well, which are worth preserving. One of them is supererogation: The idea that some acts are above and beyond the call of duty, that something can be good without being obligatory.

This is something most forms of utilitarianism are notoriously bad at. They show us a spectrum of worlds from the best to the worst, and tell us to make things better. But there’s nowhere we are allowed to stop, unless we somehow manage to make it all the way to the best possible world.

I find this kind of moral demand very tempting, which often leads me to feel a tremendous burden of guilt. I always know that I could be doing more than I do. I’ve written several posts about this in the past, in the hopes of fighting off this temptation in myself and others. (I am not entirely sure how well I’ve succeeded.)

Deontology does much better in this regard: Here are some rules. Follow them.

Many of the rules are in fact very good rules that most people successfully follow their entire lives: Don’t murder. Don’t rape. Don’t commit robbery. Don’t rule a nation tyrannically. Don’t commit war crimes.

Others are oft more honored in the breach than the observance: Don’t lie. Don’t be rude. Don’t be selfish. Be brave. Be generous. But a well-developed deontology can even deal with this, by saying that some rules are more important than others, and thus some sins are more forgivable than others.

Whereas a utilitarian—at least, anything but a very sophisticated utilitarian—can only say who is better and who is worse, a deontologist can say who is good enough: who has successfully discharged their moral obligations and is otherwise free to live their life as they choose. Deontology absolves us of guilt in a way that utilitarianism is very bad at.

Another good deontological principle is double-effect: Basically this says that if you are doing something that will have bad outcomes as well as good ones, it matters whether you intend the bad one and what you do to try to mitigate it. There does seem to be a morally relevant difference between a bombing that kills civilians accidentally as part of an attack on a legitimate military target, and a so-called “strategic bombing” that directly targets civilians in order to maximize casualties—even if both occur as part of a justified war. (Both happen a lot—and it may even be the case that some of the latter were justified. The Tokyo firebombing and atomic bombs on Hiroshima and Nagasaki were very much in the latter category.)

There are ways to capture this principle (or something very much like it) in a utilitarian framework, but like supererogation, it requires a sophisticated, nuanced approach that most utilitarians don’t seem willing or able to take.

Now that I’ve said what’s good about it, let’s talk about what’s really wrong with deontology.

Above all: How do we choose the rules?

Kant seemed to think that mere logical coherence would yield a sufficiently detailed—perhaps even unique—set of rules for all rational beings in the universe to follow. This is obviously wrong, and seems to be simply a failure of his imagination. There is literally a countably infinite space of possible ethical rules that are logically consistent. (With probability 1 any given one is utter nonsense: “Never eat cheese on Thursdays”, “Armadillos should rule the world”, and so on—but these are still logically consistent.)

If you require the rules to be simple and general enough to always apply to everyone everywhere, you can narrow the space substantially; but this is also how you get obviously wrong rules like “Never lie.”

In practice, there are two ways we actually seem to do this: Tradition and consequences.

Let’s start with tradition. (It came first historically, after all.) You can absolutely make a set of rules based on whatever your culture has handed down to you since time immemorial. You can even write them down in a book that you declare to be the absolute infallible truth of the universe—and, amazingly enough, you can get millions of people to actually buy that.

The result, of course, is what we call religion. Some of its rules are good: Thou shalt not kill. Some are flawed but reasonable: Thou shalt not steal. Thou shalt not commit adultery. Some are nonsense: Thou shalt not covet thy neighbor’s goods.

And some, well… some rules of tradition are the source of many of the world’s most horrific human rights violations. Thou shalt not suffer a witch to live (Exodus 22:18). If a man also lie with mankind, as he lieth with a woman, both of them have committed an abomination: they shall surely be put to death; their blood shall be upon them (Leviticus 20:13).

Tradition-based deontology has in fact been the major obstacle to moral progress throughout history. It is not a coincidence that utilitarianism began to become popular right before the abolition of slavery, and there is an even more direct causal link between utilitarianism and the advancement of rights for women and LGBT people. When the sole argument you can make for moral rules is that they are ancient (or allegedly handed down by a perfect being), you can make rules that oppress anyone you want. But when rules have to be based on bringing happiness or preventing suffering, whole classes of oppression suddenly become untenable. “God said so” can justify anything—but “Who does it hurt?” can cut through.

It is an oversimplification, but not a terribly large one, to say that the arc of moral history has been drawn by utilitarians dragging deontologists kicking and screaming into a better future.

There is a better way to make rules, and that is based on consequences. And, in practice, most people who call themselves deontologists these days do this. They develop a system of moral rules based on what would be expected to lead to the overall best outcomes.

I like this approach. In fact, I agree with this approach. But it basically amounts to abandoning deontology and surrendering to utilitarianism.

Once you admit that the fundamental justification for all moral rules is the promotion of happiness and the prevention of suffering, you are basically a rule utilitarian. Rules then become heuristics for promoting happiness, not the fundamental source of morality itself.

I suppose it could be argued that this is not a surrender but a synthesis: We are looking for the best aspects of deontology and utilitarianism. That makes a lot of sense. But I keep coming back to the dark history of traditional rules, the fact that deontologists have basically been holding back human civilization since time immemorial. If deontology wants to be taken seriously now, it needs to prove that it has broken with that dark tradition. And frankly the easiest answer to me seems to be to just give up on deontology.

Against average utilitarianism

Jul 30 JDN 2460156

Content warning: Suicide and suicidal ideation

There are two broad strands of utilitarianism, known as average utilitarianism and total utilitarianism. As forms of utilitarianism, both concern themselves with maximizing happiness and minimizing suffering, and for many types of ethical question they yield the same results.

Under average utilitarianism, the goal is to maximize the average level of happiness minus suffering: It doesn’t matter how many people there are in the world, only how happy they are.

Under total utilitarianism, the goal is to maximize the total level of happiness minus suffering: Adding another person is a good thing, as long as their life is worth living.

Mathematically, it’s the difference between taking the sum of net happiness over everyone (total utilitarianism), and taking that same sum and dividing it by the population (average utilitarianism).
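To make the distinction concrete, here’s a minimal sketch in Python (the happiness numbers are toy values on an arbitrary scale):

```python
def total_utility(utilities):
    # Total utilitarianism: the sum of net happiness over everyone alive.
    return sum(utilities)

def average_utility(utilities):
    # Average utilitarianism: the same sum, divided by the population.
    return sum(utilities) / len(utilities)

world = [5, 5, 3]       # three people with net happiness 5, 5, and 3
bigger = world + [1]    # add a fourth person whose life is barely worth living

print(total_utility(bigger) > total_utility(world))      # True: adding a worthwhile life helps
print(average_utility(bigger) > average_utility(world))  # False: the newcomer drags the average down
```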

It would make for too long a post to discuss the validity of utilitarianism in general. Overall I will say briefly that I think utilitarianism is basically correct, but there are some particular issues with it that need to be resolved, and usually end up being resolved by heading slightly in the direction of a more deontological ethics—in short, rule utilitarianism.

But for today, I want to focus on the difference between average and total utilitarianism, because average utilitarianism is a very common ethical view despite having appalling, horrifying implications.

Above all: under average utilitarianism, if you are considering suicide, you should probably do it.

Why? Because anyone who is considering suicide is probably of below-average happiness. And average utilitarianism necessarily implies that anyone who expects to be of below-average happiness should be immediately killed as painlessly as possible.

Note that this does not require that your life be one of endless suffering, so bad that it isn’t even worth going on living. Even a total utilitarian would be willing to commit suicide, if their life is expected to be so full of suffering that it isn’t worth going on.

Indeed, I suspect that most actual suicidal ideation by depressed people takes this form: My life will always be endless suffering. I will never be happy again. My life is worthless.

The problem with such suicidal ideation is not the ethical logic, which is valid: If indeed your existence from this point forward would be nothing but endless suffering, suicide actually makes sense. (Imagine someone who is being held in a dungeon being continually mercilessly tortured with no hope of escape; it doesn’t seem unreasonable for them to take a cyanide pill.) The problem is the prediction, which says that your life from this point forward will be nothing but endless suffering. Most people with depression do, eventually, feel better. They may never be quite as happy overall as people who aren’t depressed, but they do, in fact, have happy times. And most people who considered suicide but didn’t go through with it end up glad that they went on living.

No, an average utilitarian says you should commit suicide as long as your happiness is below average.

We could be living in a glorious utopia, where almost everyone is happy almost all the time, and people are only occasionally annoyed by minor inconveniences—and average utilitarianism would say that if you expect to suffer a more than average rate of such inconveniences, the world would be better off if you ceased to exist.

Moreover, average utilitarianism says that you should commit suicide if your life is expected to get worse—even if it’s still going to be good, adding more years to your life will just bring your average happiness down. If you had a very happy childhood and adulthood is going just sort of okay, you may as well end it now.

Average utilitarianism also implies that we should bomb Third World countries into oblivion, because their people are less happy than ours and thus their deaths will raise the population average.

Are there ways an average utilitarian can respond to these problems? Perhaps. But every response I’ve seen is far too weak to resolve the real problem.

One approach would be to say that the killing itself is bad, or will cause sufficient grief as to offset the loss of the unhappy person. (An average utilitarian is inherently committed to the claim that losing an unhappy person is itself an inherent good. There is something to be offset.)

This might work for the utopia case: The grief from losing someone you love is much worse than even a very large number of minor inconveniences.

It may even work for the case of declining happiness over your lifespan: Presumably some other people would be sad to lose you, even if they agreed that your overall happiness is expected to gradually decline. Then again, if their happiness is also expected to decline… should they, too, shuffle off this mortal coil?

But does it work for the question of bombing? Would most Americans really be so aggrieved at the injustice of bombing Burundi or Somalia to oblivion? Most of them don’t seem particularly aggrieved at the actual bombings of literally dozens of countries—including, by the way, Somalia. Granted, these bombings were ostensibly justified by various humanitarian or geopolitical objectives, but some of those justifications (e.g. Kosovo) seem a lot stronger than others (e.g. Grenada). And quite frankly, I care more about this sort of thing than most people, and I still can’t muster anything like the same kind of grief for random strangers in a foreign country that I feel when a friend or relative dies. Indeed, I can’t muster the same grief for one million random strangers in a foreign country that I feel for one lost loved one. Human grief just doesn’t seem to work that way. Sometimes I wish it did—but then, I’m not quite sure what our lives would be like in such a radically different world.

Moreover, the whole point is that an average utilitarian should consider it an intrinsically good thing to eliminate the existence of unhappy people, as long as it can be done swiftly and painlessly. So why, then, should people be aggrieved at the deaths of millions of innocent strangers they know are mostly unhappy? Under average utilitarianism, the greatest harm of war is the survivors you leave, because they will feel grief—so your job is to make sure you annihilate them as thoroughly as possible, presumably with nuclear weapons. Killing a soldier is bad as long as his family is left alive to mourn him—but if you kill an entire country, that’s good, because their country was unhappy.

Enough about killing and dying. Let’s talk about something happier: Babies.

At least, total utilitarians are happy about babies. When a new person is brought into the world, a total utilitarian considers this a good thing, as long as the baby is expected to have a life worth living and their existence doesn’t harm the rest of the world too much.

I think that fits with most people’s notions of what is good. Generally the response when someone has a baby is “Congratulations!” rather than “I’m sorry”. We see adding another person to the world as generally a good thing.

But under average utilitarianism, babies must reach a much higher standard in order to be a good thing. Your baby only deserves to exist if they will be happier than average.

Granted, this is the average for the whole world, so perhaps First World people can justify the existence of their children by pointing out that unless things go very badly, they should end up happier than the world average. (Then again, if you have a family history of depression….)

But for Third World families, quite the opposite: The baby may well bring joy to all around them, but unless that joy is enough to bring someone above the global average, it would still be better if the baby did not exist. Adding one more person of moderately-low happiness will just bring the world average down.

So in fact, on a global scale, an average utilitarian should always expect that babies are nearly as likely to be bad as they are good, unless we have some reason to think that the next generation would be substantially happier than this one.

And while I’m not aware of anyone who sincerely believes that we should nuke Third World countries for their own good, I have heard people speak this way about population growth in Third World countries: such discussions of “overpopulation” are usually ostensibly about ecological sustainability, even though the ecological impact of First World countries is dramatically higher—and such talk often shades very quickly into eugenics.

Of course, we wouldn’t want to say that having babies is always good, lest we all be compelled to crank out as many babies as possible and genuinely overpopulate the world. But total utilitarianism can solve this problem: It’s worth adding more people to the world unless the harm of adding those additional people is sufficient to offset the benefit of adding another person whose life is worth living.

Moreover, total utilitarianism can say that it would be good to delay adding another person to the world, until the situation is better. Potentially this delay could be quite long: Perhaps it is best for us not to have too many children until we can colonize the stars. For now, let’s just keep our population sustainable while we develop the technology for interstellar travel. If having more children now would increase the risk that we won’t ever manage to colonize distant stars, total utilitarianism would absolutely say we shouldn’t do it.

There’s also a subtler problem here, which is that it may seem good for any particular individual to have more children, but the net result is that the higher total population is harmful. Then what I think is happening is that we are unaware of, or uncertain about, or simply inattentive to, the small harm to many other people caused by adding one new person to the world. Alternatively, we may not be entirely altruistic, and a benefit that accrues to our own family may be taken as greater than a harm that accrues to many other people far away. If we really knew the actual marginal costs and benefits, and we really agreed on that utility function, we would in fact make the right decision. It’s our ignorance or disagreement that makes us fail, not total utilitarianism in principle. In practice, this means coming up with general rules that seem to result in a fair and reasonable outcome, like “families who want to have kids should aim for two or three”—and again we’re at something like rule utilitarianism.

Another case where average utilitarianism seems tempting is in resolving the mere addition paradox.

Consider three possible worlds, A, B, and C:

In world A, there is a population of 1 billion, and everyone is living an utterly happy, utopian life.

In world B, there is a population of 1 billion living in a utopia, and a population of 2 billion living mediocre lives.

In world C, there is a population of 3 billion living good, but not utopian, lives.

The mere addition paradox is that, to many people, world B seems worse than world A, even though all we’ve done is add 2 billion people whose lives are worth living.

Moreover, many people seem to think that the ordering goes like this:


World B is better than world A, because all we’ve done is add more people whose lives are worth living.

World C is better than world B, because it’s fairer, and overall happiness is higher.

World A is better than world C, because everyone is happier, and all we’ve done is reduce the population.


This is intransitive: We have A > C > B > A. Our preferences over worlds are incoherent.

Average utilitarianism resolves this by saying that A > C is true, and C > B is true—but it says that B > A is false. Since average happiness is higher in world A, A > B.

But of course this results in the conclusion that if we are faced with world B, we should do whatever we can to annihilate the 2 billion extra unhappy people, so that we can get to world A. And the whole point of this post is that this is an utterly appalling conclusion we should immediately reject.

What does total utilitarianism say? It says that indeed C > B and B > A, but it denies that A > C. Rather, since there are more people in world C, it’s okay that people aren’t quite as happy.
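To see both orderings fall out of the arithmetic, here’s a toy calculation in Python; the per-person utility numbers (100 for utopian lives, 70 for good ones, 40 for mediocre ones) are invented purely for illustration:

```python
BILLION = 10**9

# Each world is a list of (population, per-person utility) groups.
world_A = [(1 * BILLION, 100)]                      # 1 billion utopians
world_B = [(1 * BILLION, 100), (2 * BILLION, 40)]   # utopians plus 2 billion mediocre lives
world_C = [(3 * BILLION, 70)]                       # 3 billion good lives

def total(world):
    return sum(n * u for n, u in world)

def average(world):
    return total(world) / sum(n for n, _ in world)

for name, w in (("A", world_A), ("B", world_B), ("C", world_C)):
    print(name, total(w) // BILLION, average(w))
# Totals (billions):  A=100, B=180, C=210  =>  C > B > A (total utilitarianism)
# Averages:           A=100, B=60,  C=70   =>  A > C > B (average utilitarianism)
```

Each rule yields a perfectly transitive ordering; they simply disagree about which intuition in the cycle to reject.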

Derek Parfit argues that this leads to what he calls the “repugnant conclusion”: If we keep increasing the population by a large amount while decreasing happiness by a small amount, the best possible world ends up being one where population is utterly massive but our lives are only barely worth living.

I do believe that total utilitarianism results in this outcome. I can live with that.

Under average utilitarianism, the best possible world is precisely one person who is immortal and absolutely ecstatic 100% of the time. Adding even one person who is not quite that happy will make things worse.

Under total utilitarianism, adding more people who are still very happy would be good, even if it makes that one ecstatic person a bit less ecstatic. And adding more people would continue to be good, as long as it didn’t bring the average down too quickly.

If you find this conclusion repugnant, as Parfit does, I submit that it is because it is difficult to imagine just how large a population we are talking about. Maybe putting some numbers on it will help.

Let’s say the happiness level of an average person in the world today is 35 quality-adjusted life years—our life expectancy of 70, times an average happiness level of 0.5.

So right now we have a world of 8 billion people at 35 QALY each, for a total of 280 billion QALY.

(Note: I’m not addressing inequality here. If you believe that a world where one person has 100 QALY and another has 50 QALY is worse than one where both have 75 QALY, you should adjust your scores accordingly—which mainly serves to make the current world look worse, due to our utterly staggering inequality. In fact, I don’t think I believe this—in my view, the problem is not that happiness is unequal, but that staggering inequality of wealth creates much greater suffering among the poor in exchange for very little happiness among the rich.)

Average utilitarianism says that we should eliminate the less happy people, so we can raise the average QALY higher, maybe to something like 60. I’ve already said why I find this appalling.

So now consider what total utilitarianism asks of us. If we could raise that figure above 280 billion QALY, we should. Say we could increase our population to 10 billion, at the cost of reducing average happiness to 30 QALY; should we? Yes, we should, because that’s 300 billion QALY.

But notice that in this scenario we’re still 85% as happy as we were. That doesn’t sound so bad. Parfit is worried about a scenario where our lives are barely worth living. So let’s consider what that would require.

“Barely worth living” sounds like maybe 1 QALY. This wouldn’t mean we all live exactly one year; that’s not sustainable, because babies can’t have babies. So it would be more like a life expectancy of 33, with a happiness of 0.03—pretty bad, but still worth living.

In that case, we would need a population over 280 billion just to match our current existence; to beat it decisively (say, 800 billion people) we must colonize at least 100 other planets and fill them as full as we’ve filled Earth.
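Here’s that arithmetic as a quick check in Python, using the figures above:

```python
current = 8e9 * 35    # 8 billion people at 35 QALY each = 280 billion QALY
bigger = 10e9 * 30    # 10 billion people at 30 QALY each = 300 billion QALY
print(bigger > current)    # True: the larger total is preferred

# Break-even population if each life is "barely worth living" at 1 QALY:
breakeven = current / 1
print(breakeven / 8e9)     # 35.0: thirty-five Earths' worth of people just to match the present;
                           # 800 billion (about 100 planets) beats it nearly threefold
```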

In fact, I think this 1 QALY life is something like what human beings had at the dawn of agriculture (which by some estimates was actually worse than ancient hunter-gatherer life; we were sort of forced into early agriculture, rather than choosing it because it was better): Nasty, brutish, and short, but still worth living.

So, Parfit’s repugnant conclusion is that filling 100 planets with people who live like the ancient Babylonians would be as good as life on Earth is now? I don’t really see how this is obviously horrible. Certainly not to the same degree that saying we should immediately nuke Somalia is obviously horrible.

Moreover, total utilitarianism absolutely still says that if we can make those 800 billion people happier, we should. A world of 800 billion people each getting 35 QALY is 100 times better than the way things are now—and doesn’t that seem right, at least?


Yet if you indeed believe that copying a good world 100 times gives you a 100 times better world, you are basically committed to total utilitarianism.

There are actually other views that would allow you to escape this conclusion without being an average utilitarian.

One way, naturally, is to not be a utilitarian. You could be a deontologist or something. I don’t have time to go into that in this post, so let’s save it for another time. For now, let me say that, historically, utilitarianism has led the charge in positive moral change, from feminism to gay rights, from labor unions to animal welfare. We tend to drag stodgy deontologists kicking and screaming toward a better world. (I vaguely recall an excellent tweet on this, though not who wrote it: “Yes, historically, almost every positive social change has been spearheaded by utilitarians. But sometimes utilitarianism seems to lead to weird conclusions in bizarre thought experiments, and surely that’s more important!”)

Another way, which has gotten surprisingly little attention, is to use an aggregating function that is neither a sum nor an average. For instance, you could add up all utility and divide by the square root of the population, so that larger populations get penalized for being larger, but you aren’t simply trying to maximize average happiness. That rule does still seem to tell some people to die even though their lives are worth living, but at least it doesn’t require us to exterminate all who are below average. And it may also blunt the conclusion Parfit considers repugnant, by requiring our galactic civilization to span something like 10,000 worlds. Of course, why a square root? Why not a cube root, or a logarithm? Maybe that arbitrariness is why it hasn’t been seriously considered. But honestly, I think dividing by anything is suspicious: how can adding someone else who is happy ever make things worse?
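Here’s a minimal sketch of what such an aggregator would look like (the square root being, again, one arbitrary choice among many):

```python
import math

def penalized_welfare(utilities):
    # Sum of utility divided by the square root of the population:
    # larger populations are penalized, but less harshly than by a plain average.
    return sum(utilities) / math.sqrt(len(utilities))

# Unlike average utilitarianism, adding a happy-but-below-average person can help:
print(penalized_welfare([10, 10, 9]) > penalized_welfare([10, 10]))   # True

# But a life worth living can still "make things worse", which is the lingering problem:
print(penalized_welfare([10, 10, 1]) < penalized_welfare([10, 10]))   # True
```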

But must I admit that a sufficiently large galactic civilization would be better than our current lives, even if everyone there is mostly pretty unhappy? That’s a bullet I’m prepared to bite. At least I’m not saying we should annihilate everyone who is unhappy.

How much should we give of ourselves?

Jul 23 JDN 2460149

This is a question I’ve written about before, but it’s a very important one—perhaps the most important question I deal with on this blog—so today I’d like to come back to it from a slightly different angle.

Suppose you could sacrifice all the happiness in the rest of your life, making your own existence barely worth living, in exchange for saving the lives of 100 people you will never meet.

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Think carefully about your answer. It may be tempting to say “yes”. It feels righteous to say “yes”.

But in fact this is not hypothetical. It is the actual situation you are in.

This GiveWell article is entitled “Why is it so expensive to save a life?”, but that’s incredibly weird, because the actual figure they give is astonishingly, mind-bogglingly, frankly disgustingly cheap: It costs about $4500 to save one human life. I don’t know how you can possibly find that expensive. I don’t understand how anyone can think, “Saving this person’s life might max out a credit card or two; boy, that sure seems expensive!”

The standard for healthcare policy in the US is that something is worth doing if it can save one quality-adjusted life year for less than $50,000. That’s one year, for ten times as much money. Even accounting for the shorter lifespans and worse lives in poor countries, saving someone in a poor country for $4500 is at least one hundred times as cost-effective as that.

To put it another way, if you are a typical middle-class person in the First World, with an after-tax income of about $25,000 per year, and you were to donate 90% of it to high-impact charities, you could be expected to save 5 lives every year. Over the course of a 30-year career, that’s 150 lives saved.
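Here’s the back-of-the-envelope math in Python; the dollar figures are the ones cited above, and the roughly 30 QALYs gained per life saved is my own illustrative assumption:

```python
cost_per_life = 4_500       # GiveWell's estimated cost to save one life
income_after_tax = 25_000   # typical First World middle-class income
donated = 0.9 * income_after_tax

lives_per_year = donated / cost_per_life
print(lives_per_year)       # 5.0 lives saved per year
print(lives_per_year * 30)  # 150.0 lives over a 30-year career

# Against the US policy benchmark of $50,000 per quality-adjusted life year:
qalys_per_life = 30                    # illustrative assumption
print(cost_per_life / qalys_per_life)  # 150.0 dollars per QALY, over 300x cheaper
```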

You would of course be utterly miserable for those 30 years, having given away all the money you could possibly have used for any kind of entertainment or enjoyment, not to mention living in the cheapest possible housing—maybe even a tent in a homeless camp—and eating the cheapest possible food. But you could do it, and you would in fact be expected to save over 100 lives by doing so.

So let me ask you again:

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Peter Singer often writes as though the answer to all these questions is “yes”. But even he doesn’t actually live that way. He gives a great deal to charity, mind you; no one seems to know exactly how much, but estimates range from at least 10% to up to 50% of his income. My general impression is that he gives about 10% of his ordinary income and more like 50% of big prizes he receives (which are in fact quite numerous). Over the course of his life he has certainly donated at least a couple million dollars. Yet he clearly could give more than he does: He lives a comfortable, upper-middle-class life.

Peter Singer’s original argument for his view, from his essay “Famine, Affluence, and Morality”, is actually astonishingly weak. It involves imagining a scenario where a child is drowning in a lake and you could go save them, but only at the cost of ruining your expensive suit.

Obviously, you should save the child. We all agree on that. You are in fact a terrible person if you wouldn’t save the child.

But Singer tries to generalize this into a principle that requires us to donate almost all of our income to international charities, and that just doesn’t follow.

First of all, that suit is not worth $4500. Not if you’re a middle-class person. That’s a damn Armani. No one who isn’t a millionaire wears suits like that.

Second, in the imagined scenario, you’re the only one who can help the kid. All I have to do is change that one thing and already the answer is different: If right next to you there is a trained, certified lifeguard, they should save the kid, not you. And if there are a hundred other people at the lake, and none of them is saving the kid… probably there’s a good reason for that? (It could be the bystander effect, but that’s actually much weaker than a lot of people think.) The responsibility doesn’t fall uniquely upon you.

Third, the drowning child is a one-off, emergency scenario that almost certainly will never happen to you, and if it does ever happen, will almost certainly only happen once. But donation is something you could always do, and you could do over and over and over again, until you have depleted all your savings and run up massive debts.

Fourth, in the hypothetical scenario, there is only one child. What if there were ten—or a hundred—or a thousand? What if you couldn’t possibly save them all by yourself? Should you keep going out there and saving children until you become exhausted and you yourself drown? Even if there is a lifeguard and a hundred other bystanders right there doing nothing?

And finally, in the drowning child scenario, you are right there. This isn’t some faceless stranger thousands of miles away. You can actually see that child in front of you. Peter Singer thinks that doesn’t matter—actually his central point seems to be that it doesn’t matter. But I think it does.

Singer writes:

It makes no moral difference whether the person I can help is a neighbor’s child ten yards away from me or a Bengali whose name I shall never know, ten thousand miles away.

That’s clearly wrong, isn’t it? Relationships mean nothing? Community means nothing? There is no moral value whatsoever to helping people close to us rather than random strangers on the other side of the planet?

One answer might be to say that the answer to question 4 is “no”. You aren’t a bad person for not doing everything you should, and even though something would be good if you did it, that doesn’t necessarily mean you should do it.

Perhaps some things are above and beyond the call of duty: Good, perhaps even heroic, if you’re willing to do them, but not something we are all obliged to do. The formal term for this is supererogatory. While I think that overall utilitarianism is basically correct and has done great things for human society, one thing most utilitarians seem to get wrong is denying that supererogatory actions exist.

Even then, I’m not entirely sure it is good to be this altruistic.

Someone who really believed that we owe as much to random strangers as we do to our friends and family would never show up to any birthday parties, because any time spent at a birthday party would be more efficiently spent earning-to-give to some high-impact charity. They would never visit their family on Christmas, because plane tickets are expensive and airplanes burn a lot of carbon.

They also wouldn’t concern themselves with whether their job is satisfying, or even not totally miserable; they would care only about the total positive impact they can have on the world, whether directly through their work or by earning as much money as possible and donating it all to charity.

They would rest only the minimum amount they require to remain functional, eat only the barest minimum of nutritious food, and otherwise work, work, work, constantly, all the time. If their body were capable of doing the work, they would continue doing the work. For there is not a moment to waste when lives are on the line!

A world full of people like that would be horrible. We would all live our entire lives in miserable drudgery trying to maximize the amount we can donate to faceless strangers on the other side of the planet. There would be no joy or friendship in that world, only endless, endless toil.

When I bring this up in the Effective Altruism community, I’ve heard people try to argue otherwise, basically saying that we would never need everyone to devote themselves to the cause at this level, because we’d soon solve all the big problems and be able to go back to enjoying our lives. I think that’s probably true—but it also kind of misses the point.

Yes, if everyone gave their fair share, that fair share wouldn’t have to be terribly large. But we know for a fact that most people are not giving their fair share. So what now? What should we actually do? Do you really want to live in a world where the morally best people are miserable all the time sacrificing themselves at the altar of altruism?

Yes, clearly, most people don’t do enough. In fact, most people give basically nothing to high-impact charities. We should be trying to fix that. But if I am already giving far more than my fair share, far more than I would have to give if everyone else were pitching in as they should—isn’t there some point at which I’m allowed to stop? Do I have to give everything I can or else I’m a monster?

The conclusion that it would be good to make ourselves utterly miserable in order to save distant strangers feels deeply unsettling. It feels even worse if we say that we ought to do so, and worse still if we say that we are bad people if we don’t.

One solution would be to say that we owe absolutely nothing to these distant strangers. Yet that clearly goes too far in the opposite direction. There are so many problems in this world that could be fixed if more people cared just a little bit about strangers on the other side of the planet. Poverty, hunger, war, climate change… if everyone in the world (or really even just everyone in power) cared even 1% as much about random strangers as they do about themselves, all these would be solved.

Should you donate to charity? Yes! You absolutely should. Please, I beseech you, give some reasonable amount to charity—perhaps 5% of your income, or if you can’t manage that, maybe 1%.

Should you make changes in your life to make the world better? Yes! Small ones. Eat less meat. Take public transit instead of driving. Recycle. Vote.

But I can’t ask you to give 90% of your income and spend your entire life trying to optimize your positive impact. Even if it worked, it would be utter madness, and the world would be terrible if all the good people tried to do that.

I feel quite strongly that this is the right approach: Give something. Your fair share, or perhaps even a bit more, because you know not everyone will.

Yet it’s surprisingly hard to come up with a moral theory on which this is the right answer.

It’s much easier to develop a theory on which we owe absolutely nothing: egoism, or any deontology on which charity is not an obligation. And of course Singer-style utilitarianism says that we owe virtually everything: As long as QALYs can be purchased cheaper by GiveWell than by spending on yourself, you should continue donating to GiveWell.

I think part of the problem is that we have developed all these moral theories as if we were isolated beings, who act in a world that is simply beyond our control. It’s much like the assumption of perfect competition in economics: I am but one producer among thousands, so whatever I do won’t affect the price.

But what we really needed was a moral theory that could work for a whole society. Something that would still make sense if everyone did it—or better yet, still make sense if half the people did it, or 10%, or 5%. The theory cannot depend upon the assumption that you are the only one following it. It cannot simply “hold constant” the rest of society.

I have come to realize that the Effective Altruism movement, while probably mostly good for the world as a whole, has actually been quite harmful to the mental health of many of its followers, including myself. It has made us feel guilty for not doing enough, pressured us to burn ourselves out working ever harder to save the world. Because we do not give our last dollar to charity, we are told that we are murderers.

But there are real murderers in this world. While you were beating yourself up over not donating enough, Vladimir Putin was continuing his invasion of Ukraine, ExxonMobil was expanding its offshore drilling, Daesh was carrying out hundreds of terrorist attacks, QAnon was deluding millions of people, and the human trafficking industry was making $150 billion per year.

In other words, by simply doing nothing you are considerably better than the real monsters responsible for most of the world’s horror.

In fact, those starving children in Africa that you’re sending money to help? They wouldn’t need it, were it not for centuries of colonial imperialism followed by a series of corrupt and/or incompetent governments ruled mainly by psychopaths.

Indeed the best way to save those people, in the long run, would be to fix their governments—as has been done in places like Namibia and Botswana. According to the World Development Indicators, the proportion of people living below the UN extreme poverty line (currently $2.15 per day at purchasing power parity) has fallen from 36% to 16% in Namibia since 2003, and from 42% to 15% in Botswana since 1984. Compare this to some countries that haven’t had good governments over that time: In Cote d’Ivoire the same poverty rate was 8% in 1985 but is 11% today (and was actually as high as 33% in 2015), while in Congo it remains at 35%. Then there are countries that are trying, but just started out so poor it’s a long way to go: Burkina Faso’s extreme poverty rate has fallen from 82% in 1994 to 30% today.

In other words, if you’re feeling bad about not giving enough, remember this: if everyone in the world were as good as you, you wouldn’t need to give a cent.

Of course, simply feeling good about yourself for not being a psychopath doesn’t accomplish very much either. Somehow we have to find a balance: Motivate people enough so that they do something, get them to do their share; but don’t pressure them to sacrifice themselves at the altar of altruism.

I think part of the problem here—and not just here—is that the people who most need to change are the ones least likely to listen. The kind of person who reads Peter Singer is already probably in the top 10% of most altruistic people, and really doesn’t need much more than a slight nudge to be doing their fair share. And meanwhile the really terrible people in the world have probably never picked up an ethics book in their lives, or if they have, they ignored everything it said.

I don’t quite know what to do about that. But I hope I can at least convince you—and myself—to take some of the pressure off when it feels like we’re not doing enough.

What am I without you?

Jul 16 JDN 2460142

When this post goes live, it will be my husband’s birthday. He will probably read it before that, as he follows my Patreon. In honor of his birthday, I thought I would make romance the topic of today’s post.

In particular, there’s a certain common sentiment that is usually viewed as romantic, which I in fact think is quite toxic. This is the notion that “Without you, I am nothing”—that in the absence of the one we love, we would be empty or worthless.

Here is this sentiment being expressed by various musicians:

I’m all out of love,
I’m so lost without you.
I know you were right,
Believing for so long.
I’m all out of love,
What am I without you?

– “All Out of Love”, Air Supply

Well what am I, what am I without you?
What am I without you?
Your love makes me burn.
No, no, no
Well what am I, what am I without you?
I’m nothing without you.
So let love burn.

– “What am I without you?”, Suede

Without you, I’m nothing.
Without you, I’m nothing.
Without you, I’m nothing.
Without you, I’m nothing at all.

– “Without you I’m nothing”, Placebo

I’ll be nothin’, nothin’, nothin’, nothin’ without you.
I’ll be nothin’, nothin’, nothin’, nothin’ without you.
Yeah
I was too busy tryna find you with someone else,
The one I couldn’t stand to be with was myself.
‘Cause I’ll be nothin’, nothin’, nothin’, nothin’ without you.

– “Nothing without you”, The Weeknd

You were my strength when I was weak.
You were my voice when I couldn’t speak.
You were my eyes when I couldn’t see.
You saw the best there was in me!
Lifted me up when I couldn’t reach,
You gave me faith ’cause you believed!
I’m everything I am,
Because you loved me.


– “Because You Loved Me”, Celine Dion

Hopefully that’s enough to convince you that this is not a rare sentiment. Moreover, these songs do seem quite romantic, and there are parts of them that still resonate quite strongly for me (particularly “Because You Loved Me”).

Yet there is still something toxic here: They make us lose sight of our own self-worth independently of our relationships with others. Humans are deeply social creatures, so of course we want to fill our lives with relationships with others, and as well we should. But you are more than your relationships.

Stranded alone on a deserted island, you would still be a person of worth. You would still have inherent dignity. You would still deserve to live.

It’s also unhealthy even from a romantic perspective. Yes, once you’ve found the love of your life and you really do plan to live together forever, tying your identity so tightly to the relationship may not be disastrous—though it could still be unhealthy and promote a cycle of codependency. But what about before you’ve made that commitment? If you are nothing without the one you love, what happens when you break up? Who are you then?

And even if you are with the love of your life, what happens if they die?

Of course our relationships do change who we are. To some degree, our identity is inextricably tied to those we love, and this would probably still be desirable even if it weren’t inevitable. But there must always be part of you that isn’t bound to anyone in particular other than yourself—and if you can’t find that part, it’s a very bad sign.

Now compare a quite different sentiment:

If I didn’t have you to hold me tight,

If I didn’t have you to lie with at night,

If I didn’t have you to share my sighs,

And to kiss me and dry my tears when I cry…

Well, I…

Really think that I would…

Probably…

Have somebody else.

– “If I Didn’t Have You”, Tim Minchin

Tim Minchin is a comedy musician, and the song is very much written in that vein. He doesn’t want you to take it too seriously.

Another song Tim Minchin wrote for his wife, “Beautiful Head”, reflects upon the inevitable chasm that separates any two minds—he knows all about her, but not what goes on inside that beautiful head. He also has another sort-of love song, called “I’ll Take Lonely Tonight”, about rejecting someone because he wants to remain faithful to his wife. It’s bittersweet despite the humor within, and honestly I think it shows a deeper sense of romance than the vast majority of love songs I’ve heard.

Yet I must keep coming back to one thing: This is a much healthier attitude.

The factual claim is almost certainly objectively true: In all probability, should you find yourself separated from your current partner, you would, sooner or later, find someone else.

None of us began our lives in romantic partnerships—so who were we before then? No doubt our relationships change us, and losing them would change us yet again. But we were something before, and should it end, we will continue to be something after.

And the attitude that our lives would be empty and worthless without the one we love is dangerously close to the sort of self-destructive self-talk I know all too well from years of depression. “I’m worthless without you, I’m nothing without you” is really not so far from “I’m worthless, I’m nothing” simpliciter. If you hollow yourself out for love, you have still hollowed yourself out.

Why, then, do we only see this healthier attitude expressed as comedy? Why can’t we take seriously the idea that love doesn’t define your whole identity? Why does the toxic self-deprecation of “I am nothing without you” sound more romantic to our ears than the honest self-respect of “I would probably have somebody else”? Why is so much of what we view as “romantic” so often unrealistic—or even harmful?

Tim Minchin himself seems to wonder, as the song alternates between serious expressions of love and ironic jabs:

And if I may conjecture a further objection,
Love is nothing to do with destined perfection.
The connection is strengthened,
The affection simply grows over time,

Like a flower,
Or a mushroom,
Or a guinea pig,
Or a vine,
Or a sponge,
Or bigotry…
…or a banana.

And love is made more powerful
By the ongoing drama of shared experience,
And the synergy of a kind of symbiotic empathy, or… something.

I believe that a healthier form of love is possible. I believe that we can unite ourselves with others in a way that does not sacrifice our own identity and self-worth. I believe that love makes us more than we were—but not that we would be nothing without it. I am more than I was because you loved me—but not everything I am.

This is already how most of us view friendship: We care for our friends, we value our relationships with them—but we would recognize it as toxic to declare that we’d be nothing without them. Indeed, there is a contradiction in our usual attitude here: If part of who I am is in my friendships, then how can losing my romantic partner render me nothing? Don’t I still at least have my friends?

I can now answer this question: What am I without you? An unhappier me. But still, me.

So, on your birthday, let me say this to you, my dear husband:

But with all my heart and all my mind,
I know one thing is true:
I have just one life and just one love,
And my love, that love is you.
And if it wasn’t for you,
Darling, you…
I really think that I would…
Possibly…
Have somebody else.

Why we need critical thinking

Jul 9 JDN 2460135

I can’t find it at the moment, but a while ago I read a surprisingly compelling post on social media (I think it was Facebook, but it could also have been Reddit) questioning the common notion that we should be teaching more critical thinking in school.

I strongly believe that we should in fact be teaching more critical thinking in school—actually I think we should replace large chunks of the current math curriculum with a combination of statistics, economics and critical thinking—but it made me realize that we haven’t done enough to defend why that is something worth doing. It’s just become a sort of automatic talking point, like, “obviously you would want more critical thinking, why are you even asking?”

So here’s a brief attempt to explain why critical thinking is something that every citizen ought to be good at, and hence why it’s worthwhile to teach it in primary and secondary school.

Critical thinking, above all, allows you to detect lies. It teaches you to look past the surface of what other people are saying and determine whether what they are saying is actually true.

And our world is absolutely full of lies.

We are constantly lied to by advertising. We are constantly lied to by spam emails and scam calls. Day in and day out, people with big smiles promise us the world, if only we will send them five easy payments of $19.99.

We are constantly lied to by politicians. We are constantly lied to by religious leaders (it’s pretty much their whole job actually).

We are often lied to by newspapers—sometimes directly and explicitly, as in fake news, but more often in subtler ways. Most news articles in the mainstream press are true in the explicit facts they state, but are missing important context; and nearly all of them focus on the wrong things—exciting, sensational, rare events rather than what’s actually important and likely to affect your life. If newspapers were an accurate reflection of genuine risk, they’d have more articles on suicide than homicide, and something like one million articles on climate change for every one on some freak accident (like that submarine full of billionaires).

We are even lied to by press releases on science, which likewise focus on new, exciting, sensational findings rather than supported, established, documented knowledge. And don’t tell me everyone already knows it; just stating basic facts about almost any scientific field will shock and impress most of the audience, because they clearly didn’t learn this stuff in school (or, what amounts to the same thing, don’t remember it). This isn’t just true of quantum physics; it’s even true of economics—which directly affects people’s lives.

Critical thinking is how you can tell when a politician has distorted the views of his opponent and you need to spend more time listening to that opponent speak. Critical thinking could probably have saved us from electing Donald Trump President.

Critical thinking is how you tell that a supplement which “has not been evaluated by the FDA” (which is to say, nearly all of them) probably contains something mostly harmless that maybe would benefit you if you were deficient in it, but for most people really won’t matter—and definitely isn’t something you can substitute for medical treatment.

Critical thinking is how you recognize that much of the history you were taught as a child was a sanitized, simplified, nationalist version of what actually happened. But it’s also how you recognize that simply inverting it all and becoming the sort of anti-nationalist who hates your own country is at least as ridiculous. Thomas Jefferson was both a pioneer of democracy and a slaveholder. He was both a hero and a villain. The world is complicated and messy—and nothing will let you see that faster than critical thinking.

Critical thinking tells you that whenever a new “financial innovation” appears—like mortgage-backed securities or cryptocurrency—it will probably make obscene amounts of money for a handful of insiders, but will otherwise be worthless if not disastrous to everyone else. (And maybe if enough people had good critical thinking skills, we could stop the next “innovation” from getting so far!)

More widespread critical thinking could even improve our job market, as interviewers would no longer be taken in by the candidates who are best at overselling themselves, and would instead pay more attention to the more-qualified candidates who are quiet and honest.

In short, critical thinking constitutes a large portion of what is ordinarily called common sense or wisdom; some of that simply comes from life experience, but a great deal of it is actually a learnable skill set.

Of course, even if it can be learned, that still raises the question of how it can be taught. I don’t think we have a sound curriculum for teaching critical thinking, and in my more cynical moments I wonder if many of the powers that be like it that way. Knowing that many—not all, but many—politicians make their careers primarily from deceiving the public, it’s not so hard to see why those same politicians wouldn’t want to support teaching critical thinking in public schools. And it’s almost funny to me watching evangelical Christians try to justify why critical thinking is dangerous—they come so close to admitting that their entire worldview is totally unfounded in logic or evidence.

But at least I hope I’ve convinced you that it is something worthwhile to know, and that the world would be better off if we could teach it to more people.

Age, ambition, and social comparison

Jul 2 JDN 2460128

The day I turned 35 years old was one of the worst days of my life, as I wrote about at the time. I think the only times I have felt more depressed than that day were when my father died, when I was hospitalized by an allergic reaction to lamotrigine, and when I was rejected after interviewing for jobs at GiveWell and Wizards of the Coast.

This is notable because… nothing particularly bad happened to me on my 35th birthday. It was basically an ordinary day for me. I felt horrible simply because I was turning 35 and hadn’t accomplished so many of the things I thought I would have by that point in my life. I felt my dreams shattering as the clock ticked away what chance I thought I’d have at achieving my life’s ambitions.

I am slowly coming to realize just how pathological that attitude truly is. It was ingrained in me deeply from a very young age, not least because I was such a gifted child.

While studying quantum physics in college, I was warned that great physicists do all their best work before they are 30 (some even said 25). Einstein himself said as much (so it must be true, right?). It turns out that was simply untrue. It may have been largely true in the 18th and 19th centuries, and seems to have seen some resurgence during the early years of quantum theory, but today the median age at which a Nobel laureate physicist did their prize-winning work is 48. Less than 20% of eminent scientists made their great discoveries before the age of 40.

Alexander Fleming was 47 when he discovered penicillin—just about average for an eminent scientist of today. Darwin was 22 when he set sail on the Beagle, but didn’t publish On the Origin of Species until he was 50. André-Marie Ampère started his work in electromagnetism in his forties.

In creative arts, age seems to be no barrier at all. Julia Child published her first cookbook at 50. Stan Lee sold his first successful Marvel comic at 40. Toni Morrison was 39 when she published her first novel, and 62 when she won her Nobel. Peter Mark Roget was 73 when he published his famous thesaurus. Tolkien didn’t publish The Hobbit until he was 45.

Alan Rickman didn’t start drama school until he was 26 and didn’t have a major Hollywood role until he was 42. Samuel L. Jackson is now the third-highest-grossing actor of all time (mostly because of the Avengers movies), but he didn’t have any major movie roles until his forties. Anna Moses didn’t start painting until she was 78.

We think of entrepreneurship as a young man’s game, but Ray Kroc didn’t buy McDonald’s until he was 59. Harland Sanders didn’t franchise KFC until he was 62. Eric Yuan wasn’t a vice president until the age of 37 and didn’t become a billionaire until Zoom took off in 2019—he was 49. Sam Walton didn’t found Walmart until he was 44.

Great humanitarian achievements actually seem to be more likely later in life: Gandhi did not see India achieve independence until he was 77. Nelson Mandela was 75 when he became President of South Africa.

It has taken me far too long to realize this, and in fact I don’t think I have yet fully internalized it: Life is not a race. You do not “fall behind” when others achieve things younger than you did. In fact, most child prodigies grow up no more successful as adults than children who were merely gifted or even above-average. (There is another common belief that prodigies grow up miserable and stunted; that, fortunately, isn’t true either.)

Then there is queer time (the fact that, in a hostile heteronormative world, queer people often find ourselves growing up in a very different way than straight people) and crip time (the ways that coping with a disability changes your relationship with time and often forces you to manage your time in ways that others don’t). As someone who came out fairly young and is now married, queer time doesn’t seem to have affected me all that much. But I feel crip time very acutely: I have to very carefully manage when I go to bed and when I wake up, every single day, making sure I get not only enough sleep—much more sleep than most people get or most employers respect—but also that it aligns properly with my circadian rhythm. Failure to do so risks triggering severe, agonizing pain. Factoring that in, I have lost at least a few years of my life to migraines and depression, and will probably lose several more in the future.

But more importantly, we all need to learn to stop measuring ourselves against other people’s timelines. There is no prize in life for being faster. And while there are prizes for particular accomplishments (Oscars, Nobels, and so on), much of what determines whether you win such prizes is entirely beyond your control. Even people who ultimately made eminent contributions to society didn’t know in advance that they were going to, and didn’t behave all that much differently from others who tried but failed.

I do not want to make this sound easy. It is incredibly hard. I believe that I personally am especially terrible at it. Our society seems to be optimized to make us compare ourselves to others in as many ways as possible as often as possible in as biased a manner as possible.

Capitalism has many important upsides, but one of its deepest flaws is that it makes our standard of living directly dependent on what is happening in the rest of a global market we can neither understand nor control. A subsistence farmer is subject to the whims of nature; but in a supermarket, you are subject to the whims of an entire global economy.

And there is reason to think that the harm of social comparison is getting worse rather than better. If some mad villain set out to devise a system that would maximize harmful social comparison and the emotional damage it causes, he would most likely create something resembling social media.

The villain might also tack on some TV news for good measure: Here are some random terrifying events, which we’ll make it sound like could hit you at any moment (even though their actual risk is declining); then our ‘good news’ will be a litany of amazing accomplishments, far beyond anything you could reasonably hope for, which have been achieved by a cherry-picked sample of unimaginably fortunate people you have never met (yet you somehow still form parasocial bonds with because we keep showing them to you). We will make a point not to talk about the actual problems in the world (such as inequality and climate change), certainly not in any way you might be able to constructively learn from; nor will we mention any actual good news which might be relevant to an ordinary person such as yourself (such as economic growth, improved health, or reduced poverty). We will focus entirely on rare, extreme events that by construction aren’t likely to ever happen to you and are not relevant to how you should live your life.

I do not have some simple formula I can give you that will make social comparison disappear. I do not know how to shake the decades of indoctrination into a societal milieu that prizes richer and faster over all other concepts of worth. But perhaps at least recognizing the problem will weaken its power over us.

How to make political conversation possible

Jun 25 JDN 2460121

Every man has the right to an opinion, but no man has a right to be wrong in his facts.

~Bernard Baruch

We shouldn’t expect political conversation to be easy. Politics inherently involves conflict. There are various competing interests and different ethical views involved in any political decision. Budgets are inherently limited, and spending must be prioritized. Raising taxes supports public goods but hurts taxpayers. A policy that reduces inflation may increase unemployment. A policy that promotes growth may also increase inequality. Freedom must sometimes be weighed against security. Compromises must be made that won’t make everyone happy—often they aren’t anyone’s first choice.

But in order to have useful political conversations, we need to have common ground. It’s one thing to disagree about what should be done—it’s quite another to ‘disagree’ about the basic facts of the world. Reasonable people can disagree about what constitutes the best policy choice. But when you start insisting upon factual claims that are empirically false, you become inherently unreasonable.

What terrifies me about our current state of political discourse is that we do not seem to have this common ground. We can’t even agree about basic facts of the world. Unless we can fix this, political conversation will be impossible.

I am tempted to say “anymore”—it at least feels to me like politics used to be different. But maybe it’s always been this way, and the Internet simply made the unreasonable voices louder. Overall rates of belief in most conspiracy theories haven’t changed substantially over time. Many earlier eras were declared ‘the golden age of conspiracy theory’ in their own day. Maybe this has always been a problem. Maybe the greatest reason humanity has never been able to achieve peace is that large swaths of humanity can’t even agree on the basic facts.

Donald Trump exemplified this fact-less approach to politics, and QAnon remains a disturbingly significant force in our politics today. It’s impossible to have a sensible conversation with people who are convinced that you’re supporting a secret cabal of Satanic child molesters—and all the more impossible because they were willing to become convinced of that on literally zero evidence. But Trump was not the first conspiracist candidate, and will not be the last.

Robert F. Kennedy Jr. now seems to be challenging Trump for the title of ‘most unreasonable Presidential candidate’, as he has advocated for an astonishing variety of bizarre, unfounded claims: that vaccines are deadly, that antidepressants are responsible for mass shootings, that COVID was a Chinese bioweapon. He even claims things that can be quickly refuted simply by looking up the figures: He says that Switzerland’s gun ownership rate is comparable to the US, when in fact it’s only about one-fourth as high. No other country even comes close to the extraordinarily high rate of gun ownership in the US; we are the only country in the world with more privately-owned guns than people. (We also have by far the most military weapons, but that’s a somewhat different issue.)

What should we be doing about this? I think at this point it’s clear that simply sitting back and hoping it goes away on its own is not working. There is a widespread fear that engaging with bizarre theories simply grants them attention, but I think we have no serious alternative. They aren’t going to disappear if we simply ignore them.

That still leaves the question of how to engage. Simply arguing with their claims directly and presenting mainstream scientific evidence appears to be remarkably ineffective. They will simply dismiss the credibility of the scientific evidence, often by exaggerating genuine flaws in scientific institutions. The journal system is broken? Big Pharma has far too much influence? Established ideas take too long to become unseated? All true. But that doesn’t mean that magic beans cure cancer.

A more effective—not easy, and certainly not infallible, but more effective—strategy seems to be to look deeper into why people say the things they do. I emphasize the word ‘say’ here, because it often seems to be the case that people don’t really believe in conspiracy theories the way they believe in ordinary facts. It’s more the mythology mindset.

Rather than address the claims directly, you need to address the person making the claims. Before getting into any substantive content, you must first build rapport and show empathy—a process some call pre-suasion. Then, rather than seeking out the evidence that supports their claims—as there will be virtually none—try to find out what emotional need the conspiracy theory satisfies for them: How does it help them make sense of the terrifying chaos of the world? How does professing belief in something that initially seems absurd and horrific actually make the world seem more orderly and secure in their mind?

For instance, consider the claim that 9/11 was an inside job. At face value, this is horrifying: The US government is so evil it was prepared to launch an attack on our own soil, against our own citizens, in order to justify starting a war in another country? Against such a government, violent insurrection would seem the only viable response. But if you consider it from another perspective, it makes the world less terrifying: At least there is someone in control. An attack like 9/11 means that the world is governed by chaos: Even we in the seemingly-impregnable fortress of American national security are in fact vulnerable to random attacks by small groups of dedicated fanatics. In the conspiracist vision of the world, the US government becomes a terrible villain; but at least the world is governed by powerful, orderly forces—not random chaos.

Or consider one of the most widespread (and, to be fair, one of the least implausible) conspiracy theories: That JFK was assassinated not by a single fanatic, but by an organized agency—the KGB, or the CIA, or the Vice President. In the real world, the President of the United States—the most powerful man on the entire planet—can occasionally be felled by a single individual who is dedicated enough and lucky enough. In the conspiracist world, such a powerful man can only be killed by someone similarly powerful. The world may be governed by an evil elite—but at least it is governed. The rules may be evil, but at least there are rules.

Understanding this can give you some sympathy for people who profess conspiracies: They are struggling to cope with the pain of living in a chaotic, unpredictable, disorderly world. They cannot deny that terrible events happen, but by attributing them to unseen, organized forces, they can at least believe that those terrible events are part of some kind of orderly plan.

At the same time, you must constantly guard against seeming arrogant or condescending. (This is where I usually fail; it’s so hard for me to take these ideas seriously.) You must present yourself as open-minded and interested in speaking in good faith. If they sense that you aren’t taking them seriously, people will simply shut down and refuse to talk any further.

It’s also important to recognize that most people with bizarre beliefs aren’t simply gullible. It isn’t that they believe whatever anyone tells them. On the contrary, they seem to suffer from misplaced skepticism: They doubt the credible sources and believe the unreliable ones. They are hyper-aware of the genuine problems with mainstream sources, and yet somehow totally oblivious to the far more glaring failures of the sources they themselves trust.

Moreover, you should never expect to change someone’s worldview in a single conversation. That simply isn’t how human beings work. The only times I have ever seen anyone completely change their opinion on something in a single sitting involved mathematical proofs—showing a proper proof really can flip someone’s opinion all by itself. Yet even scientists working in their own fields of expertise generally require multiple sources of evidence, combined over some period of time, before they will truly change their minds.

Your goal, then, should not be to convince someone that their bizarre belief is wrong. Rather, convince them that some of the sources they trust are just as unreliable as the ones they doubt. Or point out some gaps in the story they hadn’t considered. Or offer an alternative account of events that explains the outcome without requiring the existence of a secret evil cabal. Don’t try to tear down the entire wall all at once; chip away at it, one little piece at a time—and one day, it will crumble.

Hopefully if we do this enough, we can make useful political conversation possible.

We do seem to have better angels after all

Jun 18 JDN 2460114

A review of The Darker Angels of Our Nature

(I apologize for not releasing this on Sunday; I’ve been traveling lately and haven’t found much time to write.)

Since its release, I have considered Steven Pinker’s The Better Angels of Our Nature among a small elite category of truly great books—not simply good because enjoyable, informative, or well-written, but great in its potential impact on humanity’s future. Others include The General Theory of Employment, Interest, and Money, On the Origin of Species, and Animal Liberation.

But I also try to expose myself as much as I can to alternative views. I am quite fearful of the echo chambers that social media puts us in, where dissent is quietly hidden from view and groupthink prevails.

So when I saw that a group of historians had written a scathing critique of The Better Angels, I decided I surely must read it and get its point of view. This book is The Darker Angels of Our Nature.

The Darker Angels is written by a large number of different historians, and it shows. It’s an extremely disjointed book; it does not present any particular overall argument, various sections differ wildly in scope and tone, and sometimes they even contradict each other. It really isn’t a book in the usual sense; it’s a collection of essays whose only common theme is that they disagree with Steven Pinker.

In fact, even that isn’t quite true, as some of the best essays in The Darker Angels are actually the ones that don’t fundamentally challenge Pinker’s contention that global violence has been on a long-term decline for centuries and is now near its lowest in human history. These essays instead offer interesting insights into particular historical eras, such as medieval Europe, early modern Russia, and shogunate Japan, or they add nuance to the overall pattern: compared to medieval times, violence in Europe seems to have been lower during the Pax Romana (before) and higher in the early modern period (after), showing that the decline in violence was not simple or steady, but went through fluctuations and reversals as societies and institutions changed. (At this point I feel I should note that Pinker clearly would not disagree with this—several of the authors seem to think he would, which makes me wonder if they even read The Better Angels.)

Others point out that the scale of civilization seems to matter, that more is different, and larger societies and armies more or less automatically seem to result in lower fatality rates by some sort of scaling or centralization effect, almost like the square-cube law. That’s very interesting if true; it would suggest that in order to reduce violence, you don’t really need any particular mode of government, you just need something that unites as many people as possible under one banner. The evidence presented for it was too weak for me to say whether it’s really true, however, and there was really no theoretical mechanism proposed whatsoever.

Some of the essays correct genuine errors Pinker made, some of which look rather sloppy. Pinker clearly overestimated the death tolls of the An Lushan Rebellion, the Spanish Inquisition, and Aztec ritual executions, probably by using outdated or biased sources. (Though they were all still extremely violent!) His depiction of indigenous cultures does paint with a very broad brush, and fails to recognize that some indigenous societies seem to have been quite peaceful (though others absolutely were tremendously violent).

One of the best essays is about Pinker’s cavalier attitude toward mass incarceration, which I absolutely do consider a deep flaw in Pinker’s view. Pinker presents increased incarceration rates along with decreased crime rates as if they were an unalloyed good, while I can at best be ambivalent about whether the benefit of decreasing crime is worth the cost of greater incarceration. Pinker seems to take for granted that these incarcerations are fair and impartial, when we have a great deal of evidence that they are strongly biased against poor people and people of color.

There’s another good essay about the Enlightenment, which Pinker seems to idealize a little too much (especially in his other book Enlightenment Now). There was no sudden triumph of reason that instantly changed the world. Human knowledge and rationality gradually improved over a very long period of time, with no obvious turning point and many cases of backsliding. The scientific method isn’t a simple, infallible algorithm that suddenly appeared in the brain of Galileo or Bayes, but a whole constellation of methods and concepts of rationality that took centuries to develop and is in fact still developing. (Much as the Tao that can be told is not the eternal Tao, the scientific method that can be written in a textbook is not the true scientific method.)

Several of the essays point out the limitations of historical and (especially) archaeological records, making it difficult to draw any useful inferences about rates of violence in the past. I agree that Pinker seems a little too cavalier about this; the records really are quite sparse and it’s not easy to fill in the gaps. Very small samples can easily distort homicide rates; since only about 1% of deaths worldwide are homicide, if you find 20 bodies, whether or not one of them was murdered is the difference between peaceful Japan and war-torn Colombia.
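To make that concrete, here is a minimal back-of-the-envelope sketch in Python. The 1% world-average figure and the 20-body sample come from the paragraph above; everything else is just arithmetic:

```python
# Back-of-the-envelope: how much one body can swing a homicide estimate.
n = 20             # skeletons in a hypothetical archaeological sample
true_share = 0.01  # ~1% of deaths worldwide are homicides (figure from the text)

# Even if the true homicide share is only 1%, a 20-body sample will
# contain at least one homicide victim surprisingly often:
p_at_least_one = 1 - (1 - true_share) ** n
print(f"P(>=1 homicide among {n} bodies) = {p_at_least_one:.1%}")  # ~18.2%

# And the two point estimates such a sample can give near zero are worlds apart:
print(f"0 of {n} bodies -> {0 / n:.1%} of deaths are homicides")  # looks like Japan
print(f"1 of {n} bodies -> {1 / n:.1%} of deaths are homicides")  # looks like Colombia
```

With samples that small, a single ambiguous skeleton moves the estimated homicide share from zero to five times the world average.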

On the other hand, all we really can do is make the best inferences we have with the available data, and for the time periods in which we do have detailed records—surely true since at least the 19th century—the pattern of declining violence is very clear, and even the World Wars look like brief fluctuations rather than fundamental reversals. Contrary to popular belief, the World Wars do not appear to have been especially deadly on a per-capita basis, compared to various historic wars. The primary reason so many people died in the World Wars was really that there just were more people in the world. A few of the authors don’t seem to consider this an adequate reason, but ask yourself this: Would you rather live in a society of 100 in which 10 people are killed, or a society of 1 billion in which 1 million are killed? In the former case your chances of being killed are 10%; in the latter, 0.1%. Clearly, per-capita measures of violence are the correct ones.

Some essays seem a bit beside the point, like one on “environmental violence” which quite aptly details the ongoing—terrifying—degradation of our global ecology, but somehow seems to think that this constitutes violence when it obviously doesn’t. There is widespread violence against animals, certainly; slaughterhouses are the obvious example—and unlike most people, I do not consider them some kind of exception we can simply ignore. We do in fact accept levels of cruelty to pigs and cows that we would never accept against dogs or horses—even the law makes such exceptions. Moreover, plenty of habitat destruction is accompanied by killing of the animals who lived in that habitat. But ecological degradation is not equivalent to violence. (Nor is it clear to me that our treatment of animals is more violent overall today than in the past; I guess life is probably worse for a beef cow today than it was in the medieval era, but either way, she was going to be killed and eaten. And at least we no longer do cat-burning.) Drilling for oil can be harmful, but it is not violent. We can acknowledge that life is more peaceful now than in the past without claiming that everything is better now—in fact, one could even claim that overall life isn’t better, though I think anyone making that claim would be hard-pressed to defend it.

These are the relatively good essays, which correct minor errors or add interesting nuances. There are also some really awful essays in the mix.

A common theme of several of the essays seems to be “there are still bad things, so we can’t say anything is getting better”; they will point out various forms of violence that undeniably still exist, and treat this as a conclusive argument against the claim that violence has declined. Yes, modern slavery does exist, and it is a very serious problem; but it clearly is not the same kind of atrocity that the Atlantic slave trade was. Yes, there are still murders. Yes, there are still wars. Probably these things will always be with us to some extent; but there is a very clear difference between 500 homicides per million people per year and 50—and it would be better still if we could bring it down to 5.

There’s one essay about sexual violence that doesn’t present any evidence whatsoever to contradict the claim that rates of sexual violence have been declining while rates of reporting and prosecution have been increasing. (These two trends together often result in reported rapes going up, but most experts agree that actual rapes are going down.) The entire essay is based on anecdote, innuendo, and righteous anger.

There are several essays that spend their whole time denouncing neoliberal capitalism (not even presenting any particularly good arguments against it, though such arguments do exist), seeming to equate Pinker’s view with some kind of Rothbardian anarcho-capitalism when in fact Pinker is explicitly in favor of Nordic-style social democracy. (One literally dismisses his support for universal healthcare as “Well, he is Canadian”.) But Pinker has on occasion said good things about capitalism, so clearly, he is an irredeemable monster.

Right in the introduction—which almost made me put the book down—is an astonishingly ludicrous argument, which I must quote in full to show you that it is not out of context:

What actually is violence (nowhere posed or answered in The Better Angels)? How do people perceive it in different time-place settings? What is its purpose and function? What were contemporary attitudes toward violence and how did sensibilities shift over time? Is violence always ‘bad’ or can there be ‘good’ violence, violence that is regenerative and creative?

The Darker Angels of Our Nature, p.16

Yes, the scare quotes on ‘good’ and ‘bad’ are in the original. (Also the baffling jargon “time-place settings” as opposed to, say, “times and places”.) This was clearly written by a moral relativist. Aside from questioning whether we can say anything about anything, the argument seems to be that Pinker’s argument is invalid because he didn’t precisely define every single relevant concept, even though it’s honestly pretty obvious what the word “violence” means and how he is using it. (If anything, it’s these authors who don’t seem to understand what the word means; they keep calling things “violence” that are indeed bad, but obviously aren’t violence—like pollution and cyberbullying. At least talk of incarceration as “structural violence” isn’t obvious nonsense—though it is still clearly distinct from murder rates.)

But it was by reading the worst essays that I think I gained the most insight into what this debate is really about. Several of the essays in The Darker Angels thoroughly and unquestioningly share the following inference: if a culture is superior, then that culture has a right to impose itself on others by force. On this, they seem to agree with the imperialists: If you’re better, that gives you a right to dominate everyone else. They rightly reject the claim that cultures have a right to imperialistically dominate others, but they cannot deny the inference, and so they are forced to deny that any culture can ever be superior to another. The result is that they tie themselves in knots trying to justify how greater wealth, greater happiness, less violence, and babies not dying aren’t actually good things. They end up talking nonsense about “violence that is regenerative and creative”.

But we can believe in civilization without believing in colonialism. And indeed that is precisely what I (along with Pinker) believe: That democracy is better than autocracy, that free speech is better than censorship, that health is better than illness, that prosperity is better than poverty, that peace is better than war—and therefore that Western civilization is doing a better job than the rest. I do not believe that this justifies the long history of Western colonial imperialism. Governing your own country well doesn’t give you the right to invade and dominate other countries. Indeed, part of what makes colonial imperialism so terrible is that it makes a mockery of the very ideals of peace, justice, and freedom that the West is supposed to represent.

I think part of the problem is that many people see the world in zero-sum terms, and believe that the West’s prosperity could only be purchased by the rest of the world’s poverty. But this is untrue. The world is nonzero-sum. My happiness does not come from your sadness, and my wealth does not come from your poverty. In fact, even the West was poor for most of history, and we are far more prosperous now that we have largely abandoned colonial imperialism than we ever were in imperialism’s heyday. (I do occasionally encounter British people who seem vaguely nostalgic for the days of the empire, but real median income in the UK has doubled just since 1977. Inequality has also increased during that time, which is definitely a problem; but the UK is undeniably richer now than it ever was at the peak of the empire.)

In fact it could be that the West is richer now because of colonialism than it would have been without it. I don’t know whether or not this is true. I suspect it isn’t, but I really don’t know for sure. My guess would be that colonized countries are poorer, but colonizer countries are not richer—that is, colonialism is purely destructive. Certain individuals clearly got richer by such depredation (Leopold II, anyone?), but I’m not convinced many countries did.

Yet even if colonialism did make the West richer, it clearly cannot explain most of the wealth of Western civilization—for that wealth simply did not exist in the world before. All these bridges and power plants, laptops and airplanes weren’t lying around waiting to be stolen. Surely, some of the ingredients were stolen—not least, the land. Had they been bought at fair prices, the result might have been less wealth for us (then again it might not, for wealthier trade partners yield greater exports). But this does not mean that the products themselves constitute theft, nor that the wealth they provide is meaningless. Perhaps we should find some way to pay reparations; undeniably, we should work toward greater justice in the future. But we do not need to give up all we have in order to achieve that justice.

There is a law of conservation of energy. It is impossible to create energy in one place without removing it from another. There is no law of conservation of prosperity. Making the world better in one place does not require making it worse in another.

Progress is real. Yes, it is flawed and uneven, and it has costs of its own; but it is real. If we want to have more of it, we’d best continue to believe in it. And The Better Angels of Our Nature does have some notable flaws, but it still retains its place among truly great books.

Statisticacy

Jun 11 JDN 2460107

I wasn’t able to find a dictionary that includes the word “statisticacy”, but it doesn’t trigger my spell-check, and it does seem to have the same form as “numeracy”: numeric, numerical, numeracy, numerate; statistic, statistical, statisticacy, statisticate. It definitely still sounds very odd to my ears. Perhaps repetition will eventually make it familiar.

For the concept is clearly a very important one. Literacy and numeracy are no longer a serious problem in the First World; basically every adult at this point knows how to read and do addition. Even worldwide, 90% of men and 83% of women can read, at least at a basic level—which is an astonishing feat of our civilization by the way, well worthy of celebration.

But I have noticed a disturbing lack of, well, statisticacy. Even intelligent, educated people seem… pretty bad at understanding statistics.

I’m not talking about sophisticated econometrics here; of course most people don’t know that, and don’t need to. (Most economists don’t know that!) I mean quite basic statistical knowledge.

A few years ago I wrote a post called “Statistics you should have been taught in high school, but probably weren’t”; that’s the kind of stuff I’m talking about.

As part of being a good citizen in a modern society, every adult should understand the following (the short code sketch after this list works through the arithmetic):

1. The difference between a mean and a median, and why average income (mean) can increase even though most people are no richer (median).

2. The difference between increasing by X% and increasing by X percentage points: If inflation goes from 4% to 5%, that is an increase of 25% ((5/4-1)*100%), but only 1 percentage point (5%-4%).

3. The meaning of standard error, and how to interpret error bars on a graph—and why it’s a huge red flag if there aren’t any error bars on a graph.

4. Basic probabilistic reasoning: Given some scratch paper, a pen, and a calculator, everyone should be able to work out the odds of drawing a given blackjack hand, or rolling a particular number on a pair of dice. (If that’s too easy, make it a poker hand and four dice. But mostly that’s just more calculation effort, not fundamentally different.)

5. The meaning of exponential growth rates, and how they apply to economic growth and compound interest. (The difference between 3% interest and 6% interest over 30 years is more than double the total amount paid.)
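Since most of these are pure arithmetic, here is a minimal Python sketch working through items 1 through 5. Every number in it is illustrative, invented for the example rather than taken from real data:

```python
from math import sqrt
from statistics import mean, median, stdev

# 1. Mean vs. median: one billionaire drags the mean up, not the median.
incomes = [30_000, 40_000, 50_000, 60_000, 1_000_000_000]
print(f"mean:   {mean(incomes):,.0f}")    # ~200 million: 'average income' soars
print(f"median: {median(incomes):,.0f}")  # 50,000: the typical person is no richer

# 2. Percent vs. percentage points: inflation rising from 4% to 5%.
old, new = 0.04, 0.05
print(f"{(new / old - 1):.0%} increase")            # 25% increase
print(f"{(new - old) * 100:.0f} percentage point")  # but only 1 percentage point

# 3. Standard error of a sample mean (the half-width of a 68% error bar).
sample = [4.1, 3.9, 5.0, 4.4, 4.6]
print(f"SE = {stdev(sample) / sqrt(len(sample)):.2f}")

# 4. Basic probability: chance of rolling a 7 with two fair dice...
ways = sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == 7)
print(f"P(7) = {ways}/36 = {ways / 36:.3f}")  # 6/36 = 1/6
#    ...and of being dealt a natural 21 from a single 52-card deck.
p_natural = (4 * 16 * 2) / (52 * 51)  # ace-then-ten or ten-then-ace
print(f"P(natural 21) = {p_natural:.1%}")  # ~4.8%

# 5. Exponential growth: 3% vs. 6% compounded over 30 years.
print(f"3%: {1.03 ** 30:.2f}x   6%: {1.06 ** 30:.2f}x")  # ~2.43x vs. ~5.74x
```

The last line confirms the parenthetical in item 5: compounded over 30 years, 6% leaves you paying more than double what 3% does.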

I see people making errors about this sort of thing all the time.

Economic news that celebrates rising GDP but wonders why people aren’t happier (when real median income has been falling since 2019 and is only 7% higher than it was in 1999, an annual growth rate of roughly 0.3%).

Reports on inflation, interest rates, or poll numbers that don’t clearly specify whether they are dealing with percentages or percentage points. (XKCD made fun of this.)

Speaking of poll numbers, any reporting on changes in polls that aren’t at least twice the margin of error of the polls in question. (There’s also a comic for this; this time it’s PhD Comics.)

People misunderstanding interest rates and gravely underestimating how much they’ll pay for their debt (then again, this is probably the result of strategic choices on the part of banks—so maybe the real failure is regulatory).

And, perhaps worst of all, the plague of science news articles about “New study says X”. Things causing and/or curing cancer, things correlated with personality types, tiny psychological nudges that supposedly have profound effects on behavior.

Some of these things will even turn out to be true; actually I think this one on fibromyalgia, this one on smoking, and this one on body image are probably accurate. But even if it’s a properly randomized experiment—and especially if it’s just a regression analysis—a single study ultimately tells us very little, and it’s irresponsible to report on them instead of telling people the extensive body of established scientific knowledge that most people still aren’t aware of.

Basically any time an article is published saying “New study says X”, a statisticate person should ignore it and treat it as random noise. This is especially true if the finding seems weird or shocking; such findings are far more likely to be random flukes than genuine discoveries. Yes, they could be true, but one study just doesn’t move the needle that much.
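One way to see why a single study moves the needle so little is a back-of-the-envelope Bayes calculation. The sketch below is my own illustration, not taken from any particular study: the priors, the statistical power (0.8), and the false-positive rate (0.05) are all assumed round numbers.

```python
def prob_true_given_positive(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | one study found a positive result), by Bayes' rule."""
    true_positive = power * prior          # real effect, correctly detected
    false_positive = alpha * (1 - prior)   # no effect, fluke result
    return true_positive / (true_positive + false_positive)

# A plausible-sounding hypothesis (say, 1-in-10 prior odds of being true):
print(f"{prob_true_given_positive(0.10):.0%}")  # ~64%: promising, far from settled

# A weird, shocking hypothesis (say, 1-in-100 prior):
print(f"{prob_true_given_positive(0.01):.0%}")  # ~14%: still probably a fluke
```

Under these assumptions, even a clean, statistically significant result on a surprising hypothesis leaves it more likely false than true.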

I don’t remember where it came from, but there is a saying about this: “What is in the textbooks is 90% true. What is in the published literature is 50% true. What is in the press releases is 90% false.” These figures are approximately correct.

If their goal is to advance public knowledge of science, science journalists would accomplish a lot more if they just opened to a random page in a mainstream science textbook and started reading it on air. Admittedly, I can see how that would be less interesting to watch; but then, their job should be to find a way to make it interesting, not to take individual studies out of context and hype them up far beyond what they deserve. (Bill Nye did this much better than most science journalists.)

I’m not sure how much to blame people for lacking this knowledge. On the one hand, they could easily look it up on Wikipedia, and apparently choose not to. On the other hand, they probably don’t even realize how important it is, and were never properly taught it in school even though they should have been. Many of these things may even be unknown unknowns; people simply don’t realize how poorly they understand. Maybe the most useful thing we could do right now is simply point out to people that these things are important, and if they don’t understand them, they should get on that Wikipedia binge as soon as possible.

And one last thing: Maybe this is asking too much, but I think that a truly statisticate person should be able to solve the Monty Hall Problem and not be confused by the result. (Hint: It’s very important that Monty Hall knows which door the car is behind, and would never open that one. If he’s guessing at random and simply happens to pick a goat, the correct answer is 1/2, not 2/3. Then again, it’s never a bad choice to switch.)
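For anyone who wants to check both numbers in that hint, here is a quick Monte Carlo sketch; the function name, door numbering, and trial count are arbitrary choices of mine:

```python
import random

def monty_hall(trials=100_000, host_knows=True):
    """Estimate (P(win if switch), P(win if stay)) over many simulated games."""
    switch_wins = stay_wins = valid = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's first choice
        if host_knows:
            # Classic game: Monty deliberately opens a door with a goat.
            opened = next(d for d in range(3) if d != pick and d != car)
        else:
            # "Monty Fall": he opens a random other door; keep only the
            # trials where he happened to reveal a goat.
            opened = random.choice([d for d in range(3) if d != pick])
            if opened == car:
                continue
        valid += 1
        stay_wins += (pick == car)
        # Only one unopened door remains, so switching wins iff the pick was wrong.
        switch_wins += (pick != car)
    return switch_wins / valid, stay_wins / valid

print(monty_hall(host_knows=True))   # ~(0.667, 0.333): always switch
print(monty_hall(host_knows=False))  # ~(0.5, 0.5): switching neither helps nor hurts
```

The host_knows=False case discards the trials where Monty accidentally reveals the car, and that conditioning is exactly what brings the answer down to 1/2.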