The worst is not inevitable

Jul 14 JDN 2460506

As I write this, the left has just won two historic landslide victories: In France, where a coalition of left-wing parties set aside their differences and prevailed; and in the UK, where the Labour Party just curb-stomped all competition.

Many commentators had been worried that the discredited center-right parties in these countries had left a power vacuum that would be filled by far-right parties like France’s National Rally, but this isn’t what happened. Voters showed up to the polls, and they voted out the center-right all right; but what they put in its place was the center-left, not the far-right.

The New York Times is constitutionally incapable of celebrating anything, so they immediately turned to worries that “turnout was low” and this indicates “an unhappy Britain”. Honestly, this seems to be a general failing of journalists: They can’t ever say anything is good. Their entire view of the world is based on “if it bleeds, it leads”. I’m assuming this has something to do with incentives created by the market of news consumers, but it also seems to be an entrenched social norm among journalists themselves. The world must be getting worse, in every way, or if it’s obviously not, we don’t talk about those things—because good things just aren’t news. (Look no further than the fact that we now have the lowest global homicide rates in the history of the human race. What, you didn’t realize we had that right now? Could that perhaps be because literally no news source even mentioned it, ever?)

Now, to be fair, turnout was low, and far-right parties did win some representation, and any kind of sudden political shift indicates some kind of public dissatisfaction… but for goodness’ sake, can we take the win for once?

These elections are proof that the free world’s slide into far-right authoritarianism is not inevitable. We can fight it, we are fighting it—and sometimes, we actually win.

So let’s not give up hope in the United States, either. Yes, polls of the Biden/Trump election don’t look great right now; Trump seems to have a slight lead, and it’s way too close for comfort. But we don’t need to roll over and die. The left can win, when we band together well enough; and if France and Britain can pull it off, I don’t see why we can’t too.

And don’t tell me they had way better candidates. The new UK Prime Minister is not a particularly appealing or charismatic candidate. I frankly don’t even like him. He either is a TERF, or is at least willing to capitulate to them. (He also underestimates the number of trans women by about an order of magnitude.) But he won, because the Labour Party won, and he happened to be the Labour Party leader at the time.

Biden is old. Sure. So is Trump. And if it turns out that Biden is really unhealthy, guess what? That means he’ll die or resign and we get a woman of color as President instead. I don’t see eye-to-eye with Kamala Harris on everything, but I don’t see her taking office as a horrible outcome. It’s certainly a hundred times better than what happens if we let Trump win.

Are there better candidates out there? Theoretically, sure. But unless one of them manages to win the nomination of one of the two leading parties, that doesn’t matter. Because in a first-past-the-post voting system, you either vote for one of the top two, or you waste your vote. I’m sorry. It sucks. I want a new voting system too. I know exactly which one we could use that would be a hundred times better. But we’re not going to get it by refusing to vote altogether.

We might get a better voting system by voting strategically for candidates who are open to the idea—which at this juncture clearly means Democrats, not Republicans. (At this point in history, Republicans don’t seem entirely convinced that we should decide things democratically in the first place.)

There are also other forms of activism we can use, independent of voting. But not voting isn’t a form of activism, and we should stop acting like it is. Not voting is the lazy, selfish, default option. It’s what you’d do if you were a neoclassical rational agent who cares not in the least for his fellow human beings. You should never be proud of not voting. You’re not sending a message; you’re shirking your civic responsibility.

Voting isn’t writing a love letter. It isn’t signing a form endorsing everything a candidate has ever done or ever will do. If you think of it that way, you’re never going to want to vote—and thus you’re going to give up the most important power you have as a citizen of a democracy.

Voting is a decision. It’s choosing one alternative over another. Like any decision in the real world, there will almost never be a perfect option. There will only be better or worse options. Sometimes, even, you’ll feel that there are only bad options, and you are choosing the least-bad option. But you still have to choose the least-bad option, because literally everything else is worse—including doing nothing.

So get out there and try to help Biden win. Not because you love Biden, but because it’s your civic duty. And if enough people do it, we can still win this.

Adverse selection and all-you-can-eat

Jul 7 JDN 2460499

The concept of adverse selection is normally associated with finance and insurance, and it certainly has a lot of important applications there. But finance and insurance are complicated (possibly intentionally?), and a lot of people are intimidated by them. It turns out there’s a much simpler example of this phenomenon, one that most people should find familiar:

All-you-can-eat meals.

At most restaurants, you buy a specific amount of food: One cheeseburger, one large order of fries. But at some, you have another option: You can buy an indeterminate amount of food, as much as you are able to eat at one sitting.

Now think about this from the restaurant’s perspective: How do you price an all-you-can-eat meal and turn a profit? Your cost obviously depends on how much food you need to prepare, but you don’t know exactly how much each customer is going to eat.

Fortunately, you don’t need to! You only need to know how much people will eat on average. As long as the average customer’s meal costs you less to provide than what they paid for it, you will continue to make a profit, even though some customers end up eating more than what they paid for.

Insurance works the same way: Some people will cash in on their insurance, costing the company money; but most will not, providing the company with revenue. In fact, you could think of an all-you-can-eat meal as a form of food insurance.

So, all you need to do is figure out how much an average person eats in one meal, and price based on that, right?

Wrong. Here’s the problem: The people who eat at your restaurant aren’t a random sample of people. They are specifically the kind of people who eat at all-you-can-eat restaurants.

Someone who eats very little probably won’t want to go to your restaurant very much, because they’ll have to pay a high price for very little food. But someone with a big appetite will go to your restaurant frequently, because they get to eat a large amount of food for that same price.

This means that, on average, your customers will end up eating more than what an average restaurant customer eats. You’ll have to raise the price accordingly—which will make the effect even stronger.

This can end in one of two ways: Either an equilibrium is reached where the price is pretty high and most of the customers have big appetites, or no equilibrium is reached, and the restaurant either goes bankrupt or gets rid of its all-you-can-eat policy.
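Here’s a minimal simulation of that feedback loop. It’s a sketch of my own with made-up numbers; in particular, the rule that people only show up if they’d eat at least 80% of the price’s worth of food is just an assumption for illustration:

```python
# Toy simulation of the adverse-selection feedback loop at an
# all-you-can-eat restaurant. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
# How many dollars' worth of food each person would eat in one sitting.
appetite = rng.normal(12.0, 4.0, 10_000).clip(2, 30)

MARGIN = 1.10  # the restaurant wants a 10% markup over its average food cost
price = appetite.mean() * MARGIN  # naive price, based on the whole population

for step in range(20):
    # Self-selection rule (an assumption): you only come if you'd eat
    # at least 80% of the price's worth of food.
    diners = appetite[appetite >= 0.8 * price]
    if diners.size == 0:
        print(f"step {step}: price ${price:.2f} -> no customers left (collapse)")
        break
    print(f"step {step}: price ${price:.2f}, {diners.size} diners, "
          f"average appetite ${diners.mean():.2f}")
    new_price = diners.mean() * MARGIN  # re-price off the self-selected pool
    if abs(new_price - price) < 0.01:
        print(f"settled into a high-price equilibrium at ${new_price:.2f}")
        break
    price = new_price
```

With these particular numbers, the price ratchets up from about $13 to about $19 and then stabilizes, with only the biggest eaters still showing up (the first of the two outcomes). Thin out the supply of big appetites, or raise the markup, and the pool can instead empty out entirely (the second).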

But there’s basically no way to get the outcome that seems the best, which is a low price and a wide variety of people attending the restaurant. Those who eat very little just won’t show up.

That’s adverse selection. Because there’s no way to charge people who eat more a higher price (other than, you know, not being all-you-can-eat), people will self-select by choosing whether or not to attend, and the people who show up at your restaurant will be the ones with big appetites.

The same thing happens with insurance. Say we’re trying to price health insurance: knowing the average medical expenses of our population isn’t enough, even if we also know a lot of specific demographic information. People who are very healthy may choose not to buy insurance, leaving us with only the less-healthy people buying our insurance—which will force us to raise the price of our insurance.

Once again, you’re not getting a random sample; you’re getting a sample of the kind of people who buy health insurance.

Obamacare was specifically designed to prevent this, by imposing a small fine on people who choose not to buy health insurance. The goal was to get more healthy people buying insurance, in order to bring the cost down. It worked, at least for a while—but now that the individual mandate has been nullified, adverse selection will once again rear its ugly head. Had our policymakers better understood this concept, they might not have removed the individual mandate.
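To see the mechanism in miniature, here’s a one-round toy illustration in the same spirit as the restaurant example. The numbers are made up, and buyers are treated as simple risk-neutral calculators; real people also buy insurance out of risk aversion, which softens the effect:

```python
# Toy illustration of how a mandate penalty changes who joins an insurance
# pool. Assumptions (mine): lognormal medical costs, risk-neutral buyers,
# and a single round of pricing off the whole population.
import numpy as np

rng = np.random.default_rng(1)
costs = rng.lognormal(7.5, 1.0, 100_000)  # each person's expected annual medical costs ($)
premium = costs.mean() * 1.05             # premium naively priced off everyone

for penalty in [0, 500, 2000]:
    # Buy insurance if the coverage plus the avoided penalty beats paying out of pocket.
    pool = costs[costs + penalty >= premium]
    print(f"penalty ${penalty}: {pool.size / costs.size:.1%} insured, "
          f"pool average cost ${pool.mean():,.0f} vs population ${costs.mean():,.0f}")
```

With no penalty, only roughly the costliest 30% of the population buys in, and the pool’s average cost is more than double the population’s, so the insurer has to raise the premium, which shrinks the pool further. A bigger penalty keeps more of the healthy people in, pulling the pool’s average cost back down toward the population average.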

Another option might occur to you, analogous to the restaurant: What if we just didn’t offer insurance, and made people pay for all their own healthcare? This would be like the restaurant ending its all-you-can-eat policy and charging for each new serving. Most restaurants do that, so maybe it’s the better option in general?

There are two problems here, one ethical, one economic.

The ethical problem is that people don’t deserve to be sick or injured. They didn’t choose those things. So it isn’t fair to let them suffer or bear all the costs of getting better. As a society, we should share in those costs. We should help people in need. (If you don’t already believe this, I don’t know how to convince you of it. But hopefully most people do already believe this.)

The economic problem is that some healthcare is rarely needed, but very expensive. That’s exactly the sort of situation where insurance makes sense, to spread the cost around. If everyone had to pay for their own care with no insurance at all, then most people who get severe illnesses simply wouldn’t be able to afford it. They’d go massively into debt, go bankrupt—people already do, even with insurance!—and still not even get much of the care they need. It wouldn’t matter that we have good treatments for a lot of cancers now; they are all very expensive, so most people with cancer would be unable to pay for them, and they’d just die anyway.

In fact, the net effect of such a policy would probably be to make us all poorer, because a lot of illness and disability would go untreated, making our workforce less productive. Even if you are very healthy and never need health insurance, it may still be in your own self-interest to support a policy of widespread health insurance, so that sick people get treated and can go back to work.

A world without all-you-can-eat restaurants wouldn’t be so bad. But a world without health insurance would be one in which millions of people suffer needlessly because they can’t afford healthcare.

Why does everyone work full-time?

Jun 30 JDN 2460492

Over 70% of US workers work “full-time”, that is, at least 40 hours a week. The average number of hours worked per week is 33.8, and the average number of overtime hours is only 3.6. So basically, about 2/3 of workers work almost exactly 40 hours per week.

We’re accustomed to this situation, so it may not seem strange to you. But stop and think for a moment: What are the odds that across every industry, exactly 40 hours per week is the most efficient arrangement?

Indeed, there is mounting evidence that in many industries, 40 hours is too much, and something like 35 or even 30 would actually be more efficient. Yet we continue to work 40-hour weeks.

This looks like a corner solution: Rather than choosing an optimal amount, we’re all up against some kind of constraint.


What’s the constraint? Well, the government requires (for most workers) that anything above 40 hours per week must be paid as overtime, that is, at a higher wage rate. So it looks like we would all be working more than 40 hours per week, but we hit that limit because the regulation makes every hour beyond it more expensive.
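Here’s a tiny sketch of why a kink like that produces bunching. It’s a toy model of my own, with invented numbers rather than estimates from real labor data:

```python
# Toy corner-solution model: a firm picks weekly hours per employee to
# maximize profit, facing straight-time pay up to 40 hours and a 1.5x
# overtime premium beyond. All numbers are invented for illustration.
import numpy as np

WAGE = 20.0    # straight-time hourly wage
OT_MULT = 1.5  # legally required overtime premium
KINK = 40      # hours per week where overtime kicks in

def labor_cost(hours):
    """Weekly cost of one employee under the kinked wage schedule."""
    straight = min(hours, KINK)
    overtime = max(hours - KINK, 0)
    return WAGE * straight + OT_MULT * WAGE * overtime

def profit(hours, productivity):
    """Revenue with diminishing returns to hours, minus labor cost."""
    return productivity * hours**0.8 - labor_cost(hours)

hours_grid = np.linspace(1, 80, 2000)
for A in np.linspace(45, 85, 9):  # firms of varying productivity
    best = hours_grid[np.argmax([profit(h, A) for h in hours_grid])]
    print(f"productivity {A:5.1f} -> optimal weekly hours {best:5.1f}")
```

Every firm whose marginal value of an hour lands anywhere between the straight-time wage and the overtime wage chooses exactly 40 hours, which is why so many very different industries can end up at the same number.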

Does this mean we would be better off without the regulations? Clearly not. As I just pointed out, the evidence is mounting that 40 hours is too much, not too little. But why, then, would we all be trying to work so many hours?

I believe this is yet another example of hyper-competition, where competition drives us to an inefficient outcome.

Employers value employees who work a lot of hours. Indeed, I contend that they do so far more than makes any rational sense; they seem to care more about how many hours you work than about the actual quality or quantity of your output. Maybe this is because hours worked is easier to measure, or because it seems like a fairer estimate of your effort; but for whatever reason, employers really seem to reward employees who work a lot of hours, regardless of almost everything else.

In the absence of a limit on hours worked, then, employers are going to heap rewards on whoever works the most hours, and so people will be pressured to work more and more hours. Then we would all work ourselves to death, and it’s not even clear that this would be good for GDP.

Indeed, this seems to be what happened, before the 40-hour work week became the standard. In the 1800s, the average American worked over 60 hours per week. It wasn’t until the 1940s that 40-hour weeks became the norm.

But speaking of norms, that also seems to be a big factor here. The truth is, overtime isn’t really that expensive, and employers could be smarter about rewarding good work rather than more hours. But once a norm establishes itself in a society, it can be very hard to change. And right now, the norm is that 40 hours is a “normal” “standard” “full” work week—any more is above and beyond, and any less is inferior.

This is a problem, because a lot of people can’t work 40-hour weeks. Our standard for what makes someone “disabled” isn’t that you can’t work at all; it’s that you can’t work as much as society expects. I wonder how many people currently living on disability could have been working part-time, but there just weren’t enough part-time jobs available. The employment rate among people with a disability is only 41%, compared to 77% for the general population.

And it’s not that we need to work this much. Our productivity is now staggeringly high: We produce more than five times as much wealth per hour of work as we did as recently as the 1940s. So in theory, we should be able to live just as well while working one-fifth as much… but that’s clearly not what happened.

Keynes accurately predicted our high level of productivity; but he wrongly predicted that we would work less, when instead we just kept right on working almost as hard as before.

Indeed, it doesn’t even seem like we live five times as well while working just as much. Many things are better now—healthcare, entertainment, and of course electronics—but somehow, we really don’t feel like we are living better lives than our ancestors.

The Economic Policy Institute offers an explanation for this phenomenon: Our pay hasn’t kept up with our productivity.


Up until about 1980, productivity and pay rose in lockstep. But then they started to diverge, and they never again converged. Productivity continued to soar, while real wages only barely increased. The result is that since then, productivity has grown by 64%, and hourly pay has only grown 15%.

This is definitely part of the problem, but I think there’s more to it as well. Housing and healthcare have become so utterly unaffordable in this country that it really doesn’t matter that our cars are nice and our phones are dirt cheap. We are theoretically wealthier now, but most of that extra wealth goes into simply staying healthy and having a home. Our consumption has been necessitized.

If we can solve these problems, maybe people won’t feel a need to work so many hours. Or, maybe competition will continue to pressure them to work those hours… but at least we’ll actually feel richer when we do it.

Reflections on fatherhood

Jun 24 JDN 2460485

I am writing this on Father’s Day, which has become something of a morose occasion for me—or at least a bittersweet one. I had always thought that I would become a father while my own father was still around, that my children would have a full set of grandparents. But that isn’t how my life has turned out.

Humans are unusual, among mammals, in having fathers. Yes, biologically, there is always a male involved. But most male mammals really don’t do much of the parenting; they leave that task more or less entirely to the females. So while every mammal has a mother, most really don’t have a father.

We’re also unusual in just how much parenting we need to survive. All babies are vulnerable, but human babies are exceptionally so. Most mammals are born at least able to walk. Even other altricial mammals are not as underdeveloped at birth as we are. In many ways, it seems that we come out of the womb before we’re really done, in order to spare our mothers an impossible birth.

And it is most likely due to this state of exceptional need that we became creatures of exceptional caring. Fatherhood is one of the clearest examples of this: Our males devote enormous effort to the care and support of their offspring, comparable to the efforts that our females devote (though, even in modern societies, not equal).

It’s ironic that many people don’t think of humans as a uniquely caring species. Some even seem to imagine that we are uniquely violent and cruel. But violence and cruelty are everywhere in nature; it’s their absence that needs explaining. Even bonobos are not as kind and cooperative as previously imagined, and eusocial species don’t generally cooperate outside their hives; humans may in fact be the most cooperative animal.

What about war? Is that not uniquely human, and thus proof of our inherent violence? Wars are indeed unusual in nature (though not nonexistent: ants and apes are both prone to them), but the part that’s unusual is not the violence—it’s the coordination. Almost all animals are violent to greater or lesser degree. But it’s the rare ones who are cooperative enough to be violent en masse. And most human societies are at peace with most of their neighbors most of the time.

In fact I think it is the fact that we are so caring that makes us so aware of our own cruelty. A truly cruel species would be far more violent, but also wouldn’t care about how violent it was. It wouldn’t feel guilt or shame about being so violent. The reason we feel so ashamed of our own violence is that we are capable of imagining peace.

And part of why we are able to imagine a more caring world is that most of us are born into one, in the hands of our mothers and fathers. When we become adults, we find ourselves longing for the peace and security we felt in childhood. And while caring is largely seen as a mother’s job, security is very much seen as a father’s. We feel so helpless and exposed when we grow up, because we were so protected and safe as children.

My father certainly taught me a great deal about caring—caring so much, perhaps too much. I suppose I don’t actually know how much of it he actually taught me, versus how much was encoded in genes I got from him; but I do know that I grew up to be just like him in so many ways, both good and bad—so kind, so loyal, so loving, but also so wounded, so aggrieved, so hopeless. My father was more caring than anyone else I have ever known. He carried the weight of the world on his shoulders, and now so do I. My father died without achieving most of his lifelong dreams. One of my greatest fears is that I will do the same.

Being in a same-sex marriage has also radically changed my relationship with fatherhood. It’s no longer something that can happen to me by accident, or something that would more or less end up happening on its own if we simply stopped fighting it. It is now something I must actively choose, a commitment I must make, a task I must willfully devote myself toward. And so far, it has never seemed like the right time to take that leap of faith. Another great fear of mine is that it never will.

Life is a succession of tomorrows that turn all too quickly into yesterdays, of could-bes that fade into could-have-beens, of shoulds that shrivel into should-haves. The possibilities are vast, but not limitless; more and more limits get imposed as time goes on, until at last death imposes the most final limit of all.

I don’t want my life to pass me by while I’m waiting for something better that never comes. But I clearly can’t be satisfied with where I am now, and I don’t want to give up on all my dreams. How do I know what I should fight for, and what I should give up on?

I wish I could ask my father for advice.

No, the system is not working as designed

You say you’ve got a real solution…

Well, you know,

We’d all love to see the plan.

“Revolution”, the Beatles


Jun 16 JDN 2460478


There are several different versions of the meme, but they all follow the same basic format: Rejecting the statement “the system is broken and must be fixed”, they endorse the statement “the system is working exactly as intended and must be destroyed”.


This view is not just utterly wrong; it’s also incredibly dangerous.

First of all, it should be apparent to anyone who has ever worked in any large, complex organization—a corporation, a university, even a large nonprofit—that no human system works exactly as intended. Some obviously function better than others, and most function reasonably well most of the time (probably because those that don’t tend to fail and disappear; a sort of natural selection among organizations); but even with apparently simple goals and extensive resources, no complex organization will ever be able to coordinate its actions perfectly toward those goals.

But when we’re talking about “the system”, well, first of all:

What exactly is “the system”?

Is it government? Society as a whole? The whole culture, or some subculture? Is it local, national, or international? Are we talking about democracy, or maybe capitalism? The world isn’t just one system; it’s a complex network of interacting systems. So to be quite honest with you, I don’t even know what people are complaining about when they complain about “the system”. All I know is that there is some large institution that they don’t like.

Let’s suppose we can pin that down—say we’re talking about capitalism, for instance, or the US government. Then, there is still the obvious fact that any real-world implementation of a system is going to have failures. Particularly when millions of people are involved, no system is ever going to coordinate exactly toward achieving its goals as efficiently as possible. At best it’s going to coordinate reasonably well and achieve its goals most of the time.

But okay, let’s try to be as charitable as possible here.

What are people trying to say when they say this?

I think that fundamentally this is meant as an expression of Conflict Theory over Mistake Theory: The problems with the world aren’t due to well-intentioned people making honest mistakes, they are due to people being evil. The response isn’t to try to correct their mistakes; it’s to fight them (kill them?), because they are evil.

Well, it is certainly true that evil people exist. There are mass murderers and tyrants, rapists and serial killers. And though they may be less extreme, it is genuinely true that billionaires are disproportionately likely to be psychopaths and that those who aren’t typically share a lot of psychopathic traits.

But does this really look like the sort of system that was designed to optimize payoffs for a handful of psychopaths? Really? You can’t imagine any way that the world could be more optimized for that goal?

How about, say… feudalism?

Not that long ago, historically—less than a millennium—the world was literally ruled by those same sorts of uber-rich psychopaths, and they wielded absolute power over their subjects. In medieval times, your king could confiscate your wealth whenever he chose, or even have you executed on a whim. That system genuinely looks like it’s optimized for the power of a handful of evil people.

Democracy, on the other hand, actually looks like it’s trying to be better. Maybe sometimes it isn’t better—or at least isn’t enough better. But why would they even bother letting us vote, if they were building a system to optimize their own power over us? Why would we have these free speech protections—that allow you to post those memes without going to prison?

In fact, there are places today where near-absolute power really is concentrated in a handful of psychopaths, where authoritarian dictators still act very much like kings of yore. In North Korea or Russia or China, there really is a system in place that’s very well optimized to maximize the power of a few individuals over everyone else.

But in the United States, we don’t have that. Not yet, anyway. Our democracy is flawed and imperiled, but so far, it stands. It needs our constant vigilance to defend it, but so far, it stands.

This is precisely why these ideas are so dangerous.

If you tell people that the system is already as bad as it’s ever going to get, that the only hope now is to burn it all down and build something new, then those people aren’t going to stand up and defend what we still have. They aren’t going to fight to keep authoritarians out of office, because they don’t believe that their votes or donations or protests actually do anything to control who ends up in office.

In other words, they are acting exactly as the authoritarians want them to.

Short of your actual support, the best gift you can give your enemy is apathy.

If all the good people give up on democracy, then it will fail, and we will see something worse in its place. Your belief that the world can’t get any worse can make the world much, much worse.

I’m not saying our system of government couldn’t be radically improved. It absolutely could, even by relatively simple reforms, such as range voting and a universal basic income. But there are people who want to tear it all down, and if they succeed, what they put in its place is almost certainly going to be worse, not better.

That’s what happened in Communist countries, after all: They started with bad systems, they tore them down in the name of making something better—and then they didn’t make something better. They made something worse.

And I don’t think it’s an accident that Marxists are so often Conflict Theorists; Marx himself certainly was. Marx seemed convinced that all we needed to do was tear down the old system, and a new, better system would spontaneously emerge. But that isn’t how any of this works.

Good governance is actually really hard.

Life isn’t simple. People aren’t easy to coordinate. Conflicts of interest aren’t easy to resolve. Coordination failures are everywhere. If you tear down the best systems we have for solving these problems, with no vision at all of what you would replace them with, you’re not going to get something better.

Different people want different things. We have to resolve those disagreements somehow. There are lots of ways we could go about doing that. But so far, some variation on voting seems to be the best method we have for resolving disagreements fairly.

It’s true; some people out there are really just bad people. Some of what even good people want is ultimately not reasonable, or based on false presumptions. (Like people who want to “cut” foreign aid to 5% of the budget—when it is in fact about 1%.) Maybe there is some alternative system out there that could solve these problems better, ensure that only the reasonable voices with correct facts actually get heard.

If so, well, you know:

We’d all love to see the plan.

It’s not enough to recognize that our current system is flawed and posit that something better could exist. You need to actually have a clear vision of what that better system looks like. For if you go tearing down the current system without any idea of what to replace it with, you’re going to end up with something much worse.

Indeed, if you had a detailed plan of how to improve things, it’s quite possible you could convince enough people to get that plan implemented, without tearing down the whole system first.

We’ve done it before, after all:

We ended slavery, then racial segregation. We gave women the right to vote, then integrated them into the workforce. We removed the ban on homosexuality, and then legalized same-sex marriage.


We have a very clear track record of reform working. Things are getting better, on a lot of different fronts. (Maybe not all fronts, I admit.) When the moral case becomes overwhelming, we really can convince people to change their minds and then vote to change our policies.

We do not have such a track record when it comes to revolutions.

Yes, some revolutions have worked out well, such as the one that founded the United States. (But I really cannot emphasize this enough: they had a plan!) But plenty more have worked out very badly. Even France, which turned out okay in the end, had to go through a Napoleon phase first.

Overall, it seems like our odds are better when we treat the system as broken and try to fix it, than when we treat it as evil and try to tear it down.

The world could be a lot better than it is. But never forget: It could also be a lot worse.

Wrongful beneficence

Jun 9 JDN 2460471

One of the best papers I’ve ever read—one that in fact was formative in making me want to be an economist—is Wrongful Beneficence by Chris Meyers.

This paper opened my eyes to a whole new class of unethical behavior: Acts that unambiguously make everyone better off, but nevertheless are morally wrong. Hence, wrongful beneficence.

A lot of economists don’t even seem to believe in such things. They seem convinced that as long as no one is made worse off by a transaction, that transaction must be ethically defensible.

Chris Meyers convinced me that they are wrong.

The key insight here is that it’s still possible to exploit someone even if you make them better off. This happens when they are in a desperate situation and you take advantage of that to get an unfair payoff.


Here is one of the cases Meyers offers to demonstrate this:

Suppose Carole is driving across the desert on a desolate road when her car breaks down. After two days and two nights without seeing a single car pass by, she runs out of water and feels rather certain that she will perish if not rescued soon. Now suppose that Jason happens to drive down this road and finds Carole. He sees that her situation is rather desperate and that she needs (or strongly desires) to get to the nearest town as soon as possible. So Jason offers her a ride but only on the condition that […] [she gives him] her entire net worth, the title to her house and car, all of her money in the bank, and half of her earnings for the next ten years.

Carole obviously is better off than she would be if Jason hadn’t shown up—she might even have died. She freely consented to this transaction—again, because if she didn’t, she might die. Yet it seems absurd to say that Jason has done nothing wrong by making such an exorbitant demand. If he had asked her to pay for gas, or even to compensate him for his time at a reasonable rate, we’d have no objection. But to ask for her life savings, all her assets, and half her earnings for ten years? Obviously unfair—and obviously unethical. Jason is making Carole (a little) better off while making himself (a lot) better off, so everyone is benefited; but what he’s doing is obviously wrong.

Once you recognize that such behavior can exist, you start to see it all over the place, particularly in markets, where corporations are quite content to gouge their customers with high prices and exploit their workers with low wages—but still, technically, we’re better off than we would be with no products and no jobs at all.

Indeed, the central message of Wrongful Beneficence is actually about sweatshop labor: It’s not that the workers are worse off than they would have been (in general, they aren’t); it’s that they are so desperate that corporations can get away with exploiting them with obviously unfair wages and working conditions.

Maybe it would be easier just to move manufacturing back to First World countries?

Right-wingers are fond of making outlandish claims that making products at First World wages would be utterly infeasible; here’s one claiming that an iPhone would need to cost $30,000 if it were made in the US. In fact, the truth is that it would only need to cost about $40 more—because hardly any of its cost is actually going to labor. Most of its price is pure monopoly profit for Apple; most of the rest is components and raw materials. (Of course, if those also had to come from the US, the price would go up more; but even so, we’re talking something like double its original price, not thirty times. Workers in the US are indeed paid a lot more than workers in China; they are also more productive.)

It’s true that actually moving manufacturing from other countries back to the US would be a substantial undertaking, requiring retooling factories, retraining engineers, and so on; but it’s not like we’ve never done that sort of thing before. I’m sure it could not be done overnight; but of course it could be done. We do this sort of thing all the time.

Ironically, this sort of right-wing nonsense actually seems to feed the far left as well, supporting their conviction that all this prosperity around us is nothing more than an illusion, that all our wealth only exists because we steal it from others. But this could scarcely be further from the truth; our wealth comes from technology, not theft. If we offered a fairer bargain to poorer countries, we’d be a bit less rich, but they would be much less poor—the overall wealth in the world would in fact probably increase.

A better argument for not moving manufacturing back to the First World is that many Third World economies would collapse if they stopped manufacturing things for other countries, and that would be disastrous for millions of people.

And free trade really does increase efficiency and prosperity for all.

So, yes; let’s keep on manufacturing goods wherever it is cheapest to do so. But when we decide what’s cheapest, let’s evaluate that based on genuinely fair wages and working conditions, not the absolute cheapest that corporations think they can get away with.

Sometimes they may even decide that it’s not really cheaper to manufacture in poorer countries, because they need advanced technology and highly-skilled workers that are easier to come by in First World countries. In that case, bringing production back here is the right thing to do.

Of course, this raises the question:

What would be fair wages and working conditions?

That’s not so easy to answer. Since workers in Third World countries are less educated than workers in First World countries, and have access to less capital and worse technology, we should in fact expect them to be less productive and therefore get paid less. That may be unfair in some cosmic sense, but it’s not anyone’s fault, and it’s not any particular corporation’s responsibility to fix it.

But when less than 1% of a product’s sales price goes to the workers who actually made it, something is wrong. When the profit margin is often wildly larger than the total amount spent on labor, something is wrong.

It may be that we will never have precise thresholds we can set to decide what definitely is or is not exploitative; but that doesn’t mean we can’t ever recognize it when we see it. There are various institutional mechanisms we could use to enforce better wages and working conditions without ever making such a sharp threshold.

One of the simplest, in fact, is Fair Trade.

Fair Trade is by no means a flawless system; in fact there’s a lot of research debating how effective it is at achieving its goals. But it does seem to be accomplishing something. And it’s a system that we already have in place, operating successfully in many countries; it simply needs to be scaled up (and hopefully improved along the way).

One of the clearest pieces of evidence that it’s helping, in fact, is that farmers are willing to participate in it. That shows that it is beneficent.

Of course, that doesn’t mean that it’s genuinely fair! This could just be another kind of wrongful beneficence. Perhaps Fair Trade is really just less exploitative than all the available alternatives.

If so, then we need something even better still, some new system that will reliably pass on the increased cost for customers all the way down to increased wages for workers.

Fair Trade shows us something else, too: A lot of customers clearly are willing to pay a bit more in order to see workers treated better. Even if they weren’t, maybe they should be forced to. But the fact is, they are! Even those who are most adamantly opposed to Fair Trade can’t deny that people really are willing to pay more to help other people. (Yet another example of obvious altruism that neoclassical economists somehow manage to ignore.) They simply deny that it’s actually helping, which is an empirical matter.

But if this isn’t helping enough, fine; let’s find something else that does.

Go ahead and identify as a season

Jun 2 JDN 2460464

A few weeks back, Fox News was running the story that “kids today are identifying as seasons instead of genders”. I suspected that by “kids today” they meant “one particular person on the Internet”, but in fact it was even worse than that; the one person on the Internet they had used as an example hadn’t actually said what Fox claimed they said.

What they actually said was far more nuanced: It was basically that their fluid gender expression varies with what kind of clothes they wear, which, naturally, varies with the seasons. So they end up feeling more masculine at certain times of year, when they like to wear masculine clothing. Honestly, this would be pretty boring stuff if conservatives hadn’t blown it out of proportion.

But after thinking about it for a while, I decided that I don’t even care if kids want to identify as seasons.

It seems silly. I don’t understand why you’d want to do it. It would probably always feel weird to me. (And what pronouns do you even use for someone who identifies as “summer”?)

But ultimately, it seems completely, utterly harmless. So if there are in fact kids—or adults—out there who really feel that they want to identify their gender with a season, I’m here to tell you now:

Go right ahead and do that.

It’s really astonishing just what upsets conservatives in this world. Poverty? No big deal. Climate change? Probably a hoax or something. War? That’s just how it goes. But kids with weird genders!? The horror! The horror!

I think the reasoning here goes something like this:

  1. Civilization is built upon social constructions.
  2. Social constructions rely upon consensus behavior.
  3. Consensus behavior relies upon shared norms.
  4. Challenging any shared norms challenges all shared norms.
  5. Challenging any norm will cause it to collapse.
  6. Challenging gender norms is challenging a shared norm.
  7. Therefore, challenging gender norms will cause civilization to collapse.

Premises 1 through 3 are true, though I suspect that phrases like “social construction” would actually not sit well with most conservatives. (Part of their whole shtick seems to be that if you simply admit that money, government, and national identity are socially constructed, that in itself will cause them to immediately and irretrievably collapse. Never mind that I can tell you money is made up all day long, and you’ll still be able to spend it.)

Premise 6 is also true, indeed, nearly tautological.

And, indeed, the argument is valid; the conclusion would follow from the premises.

So of course we come to the two premises that aren’t true.


Premise 4 is wrong because you can challenge some norms but not others. I have yet to see anyone seriously challenge the norm against murder, for example. Nor does it even seem especially popular to challenge the norm in favor of democratic voting. But those are the kind of norms that actually sustain our civilization—not gender!

And premise 5 is even worse: A norm that can’t withstand even the slightest challenge is a norm that’s too weak to rely upon in the first place. If our civilization is to be strong and robust, it must allow its norms to be challenged, and those norms must be able to sustain themselves against the challenge. And indeed, if someone were to challenge the norm against murder or the norm in favor of democratic voting, there are plenty of things I could say to reply to that challenge. These norms aren’t arbitrary. They are strong because we can defend them.

What about gender norms? How defensible are they?

Well, uh… not very, it turns out.

The existence of sexes is defensible. Humans are sexually dimorphic, and the vast majority of humans can be readily classified as either male or female. Yes, there are exceptions even to that, and those people count too. But it’s a pretty useful and accurate heuristic to divide our species into two sexes.

But gender norms are so much more than this. We don’t simply recognize that some people have penises and others have vaginas. We attach all sorts of social and behavioral requirements to people based on their bodies, many of which are utterly arbitrary and culturally dependent. (Not all, to be fair: The stereotype that men are stronger than women is itself a very useful and accurate heuristic.)

Worse, we don’t merely assign stereotypes to predict behavior—which might sometimes be useful. We assign norms to control behavior. We tell people who deviate from those norms that they are bad. We abuse them, discriminate against them, ostracize them from society. This is really weird.

And for what?

What benefit do gender norms have?

I can see how norms against murder and in favor of democracy sustain our civilization. I’m just not seeing how norms against using she/her pronouns when you have a penis provide similar support.

It’s true, most human societies throughout history have had strict gender norms, so maybe that’s some sort of evidence in their favor… but how about we at least try not having them for a while? Or just relax them here and there, a little at a time, and see how it goes? If indeed it seems to result in some sort of disaster, we’ll stop doing it. But I don’t see how it could—and so far, it hasn’t.

I think maybe the problem here is that conservatives don’t understand how to evaluate norms, or perhaps even that norms can be evaluated. To them, a rule is a rule, and you never challenge the rules, because if there were no rules, there would be chaos and destruction.

But challenging some rules—or even all rules—doesn’t mean having no rules! It means checking to make sure our rules are good rules, and if they aren’t, changing them so they are.

And since I see no particular reason why having two genders is an especially good rule, go ahead, make up some more if you want.

Go ahead and identify as a season, if you really want to.

Medical progress, at least, is real

May 26 JDN 2460457

The following vignettes are about me.

Well, one of them is about me as I actually am. The others are about the person I would have been, if someone very much like me, with the same medical conditions, had been born in a particular place and time. Someone in these times and places probably had actual experiences like this, though of course we’ll never know who they were.

976 BC, the hilled lands near the mouth of the river:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse from the gods, but at least they appreciate my ability to sometimes predict storms. I am lucky to even remain alive, as I am of little use to the tribe. I will most likely remain this way the rest of my life.

24 AD, Rome:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse from the gods, but at least they appreciate my ability to sometimes predict storms. I am lucky that my family was rich enough to teach me reading and mathematics, as I would be of little use for farm work, but can at least be somewhat productive as a scribe and a tutor. I will most likely remain this way the rest of my life.

1024 AD, England:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse imposed upon me by some witchcraft, but at least they appreciate my ability to sometimes predict storms. I am lucky that my family was rich enough to teach me reading and mathematics, as I would be of little use for farm work, but can at least be somewhat productive as a scribe and a tutor. I will most likely remain this way the rest of my life.

2024 AD, Michigan:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain pollens, fragrances, or chemicals, or if I awaken too early, or if I exert myself too much, or when the air pressure changes before a storm. Brain scans detected no gross abnormalities. I have been diagnosed with chronic migraine, but this is more a description of my symptoms than an explanation. I have tried over a dozen different preventative medications; most of them didn’t work at all, some of them worked but gave me intolerable side effects. (One didn’t work at all and put me in the hospital with a severe allergic reaction.) I’ve been more successful with acute medications, which at least work as advertised, but I have to ration them carefully to avoid rebound effects. And the most effective acute medication is a subcutaneous injection that makes me extremely nauseated unless I also take powerful anti-emetics along with it. I have had the most success with botulinum toxin injections, so I will be going back to that soon; but I am also looking into transcranial magnetic stimulation. Currently my condition is severe enough that I can’t return to full-time work, but I am hopeful that with future treatment I will be able to someday. For now, I can at least work as a writer and a tutor. Hopefully things get better soon.

3024 AD, Aegir 7, Ran System:

For a few months when I was fourteen years old, I woke up nearly every day in pain. Often it was mild, but occasionally it was severe. It often seemed to be worse when I encountered certain pollens, fragrances, or chemicals, or if I awakened too early, or if I exerted myself too much, or when the air pressure changed before a storm. Brain scans detected no gross abnormalities, only subtle misfiring patterns. Genetic analysis confirmed I had chronic migraine type IVb, and treatment commenced immediately. Acute medications suppressed the pain while I underwent gene therapy and deep-effect transcranial magnetic stimulation. After three months of treatment, I was cured. That was an awful few months, but it’s twenty years behind me now. I can scarcely imagine how it might have impaired my life if it had gone on that whole time.

What is the moral of this story?

Medical progress is real.

Many people often doubt that society has made real progress. And in a lot of ways, maybe it hasn’t. Human nature is still the same, and so many of the problems we suffer have remained the same.

Economically, of course we have had tremendous growth in productivity and output, but it doesn’t really seem to have made us much happier. We have all this stuff, but we’re still struggling and miserable as a handful at the top become spectacularly, disgustingly rich.

Social progress seems to have gone better: Institutions have improved, more of the world is democratic than ever before, and women and minorities are better represented and better protected from oppression. Rates of violence have declined to some of their lowest levels in history. But even then, it’s pretty clear that we have a long, long way to go.

But medical progress is undeniable. We live longer, healthier lives than at any other point in history. Our infant and child mortality rates have plummeted. Even chronic conditions that seem intractable today (such as my chronic migraines) still show signs of progress; in a few generations they should be cured—in surely far less than the thousand years I’ve considered here.

Like most measures of progress, this change wasn’t slow and gradual over thousands of years; it happened remarkably suddenly. Humans went almost 200,000 years without any detectable progress in medicine, using basically the same herbs and tinctures (and a variety of localized and ever-changing superstitions) the entire time. Some of it worked (the herbs and tinctures, at least), but mostly it didn’t. Then, starting around the 18th century, as the Enlightenment took hold and the Industrial Revolution ramped up, everything began to change.

We began to test our medicine and see if it actually worked. (Yes, amazingly, somehow, nobody had actually ever thought to do that before—not in anything resembling a scientific way.) And when we learned that most of it didn’t, we began to develop new methods, and see if those worked; and when they didn’t either, we tried new things instead—until, finally, eventually, we actually found medicines that actually did something, medicines worthy of the name. Our understanding of anatomy and biology greatly improved as well, allowing us to make better predictions about the effects our medicines would have. And after a few hundred years of that—a few hundred, out of two hundred thousand years of our species—we actually reached the point where most medicine is effective and a variety of health conditions are simply curable or preventable, including diseases like malaria and polio that had once literally plagued us.

Scientific medicine brought humanity into a whole new era of existence.

I could have set the first vignette 10,000 years ago without changing it. But the final vignette I could probably have set only 200 years from now. I’m actually assuming remarkable stagnation by putting it in the 31st century; but presumably technological advancement will slow at some point, perhaps after we’ve more or less run out of difficult challenges to resolve. (Then again, for all I know, maybe my 31st century counterpart will be an emulated consciousness, and his chronic pain will be resolved in 17.482 seconds by a code update.)

Indeed, the really crazy thing about all this is that there are still millions of people who don’t believe in scientific medicine, who want to use “homeopathy” or “naturopathy” or “acupuncture” or “chiropractic” or whatever else—who basically want to go back to those same old herbs and tinctures that maybe sometimes kinda worked but probably not and nobody really knows. (I have a cousin who is a chiropractor. I try to be polite about it, but….) They point out the various ways that scientific medicine has failed—and believe me, I am painfully aware of those failures—but where the obvious solution is to improve scientific medicine, they instead want to turn the whole ship around and go back to what we had before, which was obviously a million times worse.

And don’t tell me it’s harmless: One, it’s a complete waste of resources that could instead have been used for actual scientific medicine. (9% of all out-of-pocket spending on healthcare in the US is on “alternative medicine”—which is to say, on pointless nonsense.) Two, when you have a chronic illness and people keep shoving nonsense treatments in your face, you start to feel blamed for your condition: “Why haven’t you tried [other incredibly stupid idea that obviously won’t work]? You’re so closed-minded! Maybe your illness isn’t really that bad, or you’d be more desperate!” If “alternative medicine” didn’t exist, maybe these people could help me cope with the challenges of living with a chronic illness, or even just sympathize with me, instead of constantly shoving stupid nonsense in my face.

Not everything about the future looks bright.

In particular, I am pessimistic about the near-term future of artificial intelligence, which I think will cause a lot more problems than it solves and does have a small—but not negligible—risk of causing a global catastrophe.

I’m also not very optimistic about climate change; I don’t think it will wipe out our civilization or anything so catastrophic, but I do think it’s going to kill millions of people and we’ve done too little, too late to prevent that. We’re now doing about what we should have been doing in the 1980s.

But I am optimistic about scientific medicine. Every day, new discoveries are made. Every day, new treatments are invented. Yes, there is a lot we haven’t figured out how to cure yet; but people are working on it.

And maybe they could do it faster if we stopped wasting time on stuff that obviously won’t work.

Are eliminativists zombies?

May 19 JDN 2460450

There are lots of little variations, but basically all views on the philosophy of mind boil down to four possibilities:

  1. Dualism: Mind and body are two separate types of thing
  2. Monism: Mind and body are the same type of thing
  3. Idealism: Only mind exists; body isn’t real
  4. Eliminativism: Only body exists; mind isn’t real

Like most philosophers and cognitive scientists, I am a die-hard monist, specifically a physicalist: The mind and the body are the same type of thing. Indeed, they are parts of the same physical system.

I call it the Basic Fact of Cognitive Science, which so many fail to understand at their own peril:

You are your brain.

You are not a product of your brain; you are not an illusion created by your brain; you are not connected to your brain. You are your brain. Your consciousness is generated by the activity of your brain.

Understanding how this works is beyond current human knowledge. I ask only that you understand that it works. Treat it as a brute fact of the universe if you must.

But precisely because understanding this mechanism is so difficult (it has been aptly dubbed The Hard Problem), I am at least somewhat sympathetic to dualists, who say that the reason we can’t understand how the mind and brain are the same is that they aren’t: that there is some extra thing, the soul, which somehow makes consciousness and isn’t made of any material substance.

(If you want to get into the weeds a bit more, there are also “property dualists”, who try to bridge the gap between dualism and physicalism, but I think they are trying to have their cake and eat it too. So-called “predicate dualism” is really just physicalism; nobody says that tables or hurricanes are non-physical just because they are multiply-realizable.)

The problem, of course, is that dualism doesn’t actually explain anything. In fact, it adds a bunch of other mysteries that would then need to be explained, because there are clear, direct ways that consciousness interacts with physical matter. Affecting the body affects the mind, and vice-versa.

You don’t need anything as exotic as fMRI or brain injury studies to understand this. All you need to do is take a drug. In fact, all you need to do is get hungry and eat food. Eating food—obviously a physical process—makes you no longer hungry—a change in your conscious state. And the reason you ate food in the first place was because you were hungry—your mental state intervened on your bodily action.

The fact that mind and body are deeply connected is therefore an obvious fact, which should have been apparent to anyone throughout history. It doesn’t require any kind of deep scientific knowledge; all you have to do is pay close enough attention to your ordinary life.

But I can at least understand the temptation to be a dualist. Consciousness is weird and mysterious. It’s tempting to posit some whole new class of substance beyond anything we know in order to explain it.

Then there’s idealism, which theoretically, in principle, could be true—it’s just absurdly, vanishingly unlikely. Technically, all that I experience, qua experience, happens in my mind. So I can’t completely rule out the possibility that everything I think of as physical reality is actually just an illusion, and only my mind exists. It’s just that, well… the whole of my experience points pretty strongly to this not being the case. At the very least, it’s utterly impractical to live your life according to such a remote possibility.

That leaves eliminativism. And this, I confess, is the one I really don’t get.

Idealism, I can’t technically rule out; dualism, I understand the temptation; monism is in fact the truth. But eliminativism? I just can’t grok how anyone can actually believe it.

Then again, I think they sort of admit that.

The weirdest thing about eliminativism is that eliminativists are actually saying that things like beliefs, knowledge, and feelings don’t exist.

If you ask an eliminativist whether they believe eliminativism is true, they should answer “no”, because their assertion is precisely that nobody believes anything at all.

The more sophisticated eliminativists say that these “folk terms” are rough approximations to deeper concepts that cognitive science will someday understand. That’s not so ridiculous, but it still seems pretty bizarre to me to say that iron doesn’t exist because we now understand that an iron atom has precisely 26 protons. Perhaps indeed we will understand the mechanisms underlying beliefs better than we do now; but why would we need to stop calling them beliefs?

But some eliminativists—particularly behaviorists—seem to think that these “folk terms” are just stupid, unscientific notions that will one day be discarded the same way that phlogiston and élan vital were discarded. And that I absolutely cannot fathom.

Consciousness isn’t an explanation; it is what we were trying to explain.

You can’t just discard the phenomenon you were trying to make sense of! This isn’t giving up on phlogiston; it’s giving up on fire. This isn’t abandoning the notion of élan vital; it’s abandoning the distinction between life and death.

But the more I think about this, the more I wonder:

Maybe eliminativists are right—about themselves?

Maybe the reason they think the rest of us don’t have feelings and beliefs is that they actually don’t. They don’t understand all this talk about the inner light of consciousness, because they just don’t have it.

In other words:

Are eliminativists zombies?

No, not the shambling, “Brains! Brains!” kind of zombie; the philosophical concept of a zombie (sometimes written “p-zombie” to clarify). A zombie is a being that looks human, acts human, is externally indistinguishable from a human, yet has no internal experience. They walk and talk, but they don’t actually think. A zombie acts like us, but lacks the inner light of consciousness.

Of course, what I’d really be saying here is that they are almost indistinguishable, but you can sometimes tell them apart by their babbling about the non-existence of consciousness.

But really, almost indistinguishable makes more sense anyway; if they were literally impossible to tell apart under any conceivable test, it’s difficult to even make sense of what we mean when we say they are different. (I am certainly not the first to point this out, and indeed it’s often used as an argument against the existence of zombies.)

Do I actually think that eliminativists are zombies?

No. I don’t.

But the weird thing is that they seem to, and so I feel some compulsion to let them self-identify that way. It feels wrong to attribute beliefs to someone that they say they don’t actually hold, and eliminativists have said that they don’t hold any beliefs whatsoever.

Yet, somehow, I don’t think they’ll appreciate being called zombies, either.

How Effective Altruism hurt me

May 12 JDN 2460443

I don’t want this to be taken the wrong way. I still strongly believe in the core principles of Effective Altruism. Indeed, it’s shockingly hard to deny them, because basically they come out to this:

Doing more good is better than doing less good.

Then again, most people want to do good. Basically everyone agrees that more good is better than less good. So what’s the big deal about Effective Altruism?

Well, in practice, most people put shockingly little effort into trying to ensure that they are doing the most good they can. A lot of people just try to be nice people, without ever concerning themselves with the bigger picture. Many of these people don’t give to charity at all.

Then, even people who do give to charity typically give more or less at random—or worse, in proportion to how much mail those charities send them begging for donations. (Surely you can see how that is a perverse incentive?) They donate to religious organizations, which sometimes do good things, but are fundamentally founded upon ignorance, patriarchy, and lies.

Effective Altruism is a movement intended to fix this, to get people to see the bigger picture and focus their efforts on where they will do the most good. Vet charities not just for their honesty, but also their efficiency and cost-effectiveness:

Just how many mQALY (thousandths of a quality-adjusted life year) can you buy with that $1?

That part I still believe in. There is a lot of value in assessing which charities are the most effective, and trying to get more people to donate to those high-impact charities.
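
To make that question concrete, here’s a back-of-the-envelope sketch in Python. Every number in it is invented for illustration—this is not an actual charity evaluation, just the shape of the arithmetic:

```python
# A back-of-the-envelope sketch with made-up numbers (not a real charity evaluation).
# A QALY is one year of life in full health; an mQALY is one thousandth of a QALY.

cost_to_avert_death = 3000   # dollars; hypothetical cost for a top charity to save one child
years_of_life_saved = 60     # hypothetical remaining life expectancy of that child
quality_of_life = 0.9        # hypothetical average health quality of those years (0 to 1)

qalys = years_of_life_saved * quality_of_life           # 54 QALYs
mqalys_per_dollar = qalys * 1000 / cost_to_avert_death  # 18 mQALY per dollar

print(f"{mqalys_per_dollar:.0f} mQALY per dollar")
```

The point of the unit is just that it puts wildly different interventions—bed nets, deworming, surgeries—on a single scale, so they can actually be compared.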

But there is another side to Effective Altruism, which I now realize has severely damaged my mental health.

That is the sense of obligation to give as much as you possibly can.

Peter Singer is the most extreme example of this. He seems to have mellowed—a little—in more recent years, but in some of his most famous books he uses the following thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Basically everyone agrees with this particular decision: Even if you are wearing a very expensive suit that will be ruined, even if you’ll miss something really important like a job interview or even a wedding—most people agree that if you ever come across a drowning child, you should save them.

(Oddly enough, when contemplating this scenario, nobody ever seems to consider the advice most lifeguards give: “Reach or throw, don’t go.” Throw a life preserver and then find someone qualified to save the child, because rescuing someone who is drowning is a lot harder and a lot riskier than most people realize. But that’s a bit beside the point.)

But Singer argues that we are basically in this position all the time. For somewhere between $500 and $3000, you—yes, you—could donate to a high-impact charity, and thereby save a child’s life.

Does it matter that many other people are better positioned to donate than you are? Does it matter that the child is thousands of miles away and you’ll never see them? Does it matter that there are actually millions of children, and you could never save them all by yourself? Does it matter that you’ll only save a child in expectation, rather than saving some specific child with certainty?
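
(That last question deserves a word of unpacking, since “in expectation” can sound like a dodge. A donation usually buys a tiny reduction in risk for many children, rather than one dramatic rescue. A toy version of the arithmetic, with invented numbers:)

```python
# Toy expected-value calculation; all numbers are invented for illustration.
children_reached = 10_000    # hypothetical children covered by one donation
risk_reduction = 1 / 10_000  # hypothetical drop in each child's chance of dying

expected_lives_saved = children_reached * risk_reduction
print(expected_lives_saved)  # 1.0 -- one life saved "in expectation"
```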

Singer says that none of this matters. For a long time, I believed him.

Now, I don’t.

For if you actually walked past a drowning child whom you could save, at the mere cost of missing a wedding and ruining your tuxedo, you clearly should do it. (If the rescue would risk your life, maybe not—and as I alluded to earlier, that’s more likely than you might imagine.) If you wouldn’t, there’s something wrong with you. You’re a bad person.

But most people don’t donate everything they could to high-impact charities. Even Peter Singer himself doesn’t. So if donating is the same as saving the drowning child, it follows that we are all bad people.

(Note: In general, if an ethical theory results in the conclusion that the whole of humanity is evil, there is probably something wrong with that ethical theory.)

Singer has tried to get out of this by saying we needn’t sacrifice “things of comparable importance”, and then somehow cashing out what “comparable importance” means in such a way that it doesn’t require you to live on the street and eat scraps from trash cans. (Even though the people you’d be donating to largely do live that way.)

I’m not sure that really works, but okay, let’s say it does. Even so, it’s pretty clear that anything you spend money on purely for enjoyment would have to go. You would never eat out at restaurants, unless you could show that the time saved allowed you to get more work done and therefore donate more. You would never go to movies or buy video games, unless you could show that it was absolutely necessary for your own mental functioning. Your life would be work, work, work, then donate, donate, donate, and then do the absolute bare minimum to recover from working and donating so you can work and donate some more.

You would enslave yourself.

And all the while, you’d believe that you were never doing enough, you were never good enough, you were always a terrible person because you tried to cling to any personal joy in your own life rather than giving, giving, giving all you have.

I now realize that Effective Altruism, as a movement, had been basically telling me to do that. And I’d been listening.

I now realize that Effective Altruism has given me this voice in my head, which I hear whenever I want to apply for a job or submit work for publication:

If you try, you will probably fail. And if you fail, a child will die.

The “if you try, you will probably fail” is just an objective fact. It’s inescapable. Any given job application or writing submission will probably fail.

Yes, maybe there’s some sort of bundling we could do to reframe that, as I discussed in an earlier post. But basically, this is correct, and I need to accept it.

Now, what about the second part? “If you fail, a child will die.” To most of you, that probably sounds crazy. And it is crazy. It’s way more pressure than any ordinary person should have in their daily life. This kind of pressure should be reserved for neurosurgeons and bomb squads.

But this is essentially what Effective Altruism taught me to believe. It taught me that every few thousand dollars I don’t donate is a child I am allowing to die. And since I can’t donate what I don’t have, it follows that every few thousand dollars I fail to get is another dead child.

And since Effective Altruism is so laser-focused on results above all else, it taught me that it really doesn’t matter whether I apply for the job and don’t get it, or never apply at all; the outcome is the same, and that outcome is that children suffer and die because I had no money to save them.

I think part of the problem here is that Effective Altruism is utilitarian through and through, and utilitarianism has very little place for good enough. There is better and there is worse; but there is no threshold at which you can say that your moral obligations are discharged and you are free to live your life as you wish. There is always more good that you could do, and therefore always more that you should do.

Do we really want to live in a world where to be a good person is to owe your whole life to others?

I do not believe in absolute selfishness. I believe that we owe something to other people. But I no longer believe that we owe everything. Sacrificing my own well-being at the altar of altruism has been incredibly destructive to my mental health, and I don’t think I’m the only one.

By all means, give to high-impact charities. But give a moderate amount—at most, tithe—and then go live your life. You don’t owe the world more than that.