What we still have to be thankful for

Nov 30 JDN 2461010

This post was written before, but will go live after, Thanksgiving.

Thanksgiving is honestly a very ambivalent holiday.

The particular events it celebrates don’t seem quite so charming in their historical context: Rather than finding peace and harmony with all Native Americans, the Pilgrims in fact allied with the Wampanoag against the Narragansett, though they did later join forces with the Narragansett in order to conquer the Pequot. And of course we all know how things went for most Native American nations in the long run.

Moreover, even the gathering of family comes with some major downsides, especially in a time of extreme political polarization such as this one. I won’t be joining any of my Trump-supporting relatives for dinner this year (and they probably wouldn’t have invited me anyway), but the fact that this means becoming that much more detached from a substantial part of my extended family is itself a tragedy.

This year in particular, US policy has gotten so utterly horrific that it often feels like we have nothing to be thankful for at all, that all we thought was good and just in the world could simply be torn away at a moment’s notice by raving madmen. It isn’t really quite that bad—but it feels that way sometimes.

It also felt a bit uncanny celebrating Thanksgiving a few years ago when we were living in Scotland, for the UK does not celebrate Thanksgiving, but absolutely does celebrate Black Friday: Holidays may be local, but capitalism is global.

But fall feasts of giving thanks are far more ancient than that particular event in 1621 that we have mythologized to oblivion. They appear in numerous cultures across the globe—indeed their very ubiquity may be why the Wampanoag were so willing to share one with the Pilgrims despite their cultures having diverged something like 40,000 years prior.

And I think that it is by seeing ourselves in that context—as part of the whole of humanity—that we can best appreciate what we truly do have to be thankful for, and what we truly do have to look forward to in the future.

Above all, medicine.

We have actual treatments for some diseases, even actual cures for some. By no means all, of course—and it often feels like we are fighting an endless battle even against what we can treat.

But it is worth reflecting on the fact that aside from the last few centuries, this has simply not been the case. There were no actual treatments. There was no real medicine.

Oh, sure, there were attempts at medicine; and there was certainly what we would think of as more like “first aid”: bandaging wounds, setting broken bones. Even amputation and surgery were done sometimes. But most medical treatment was useless or even outright harmful—not least because for most of history, most of it was done without anesthetic or even antiseptic!

There were various herbal remedies for various ailments, some of which actually happened to work: Willow bark genuinely helps with pain, St. John’s wort is a real antidepressant, and some traditional burn creams are surprisingly effective.

But there was no system in place for testing medicine, no way of evaluating what remedies worked and what didn’t. And thus, for every remedy that worked as advertised, there were a hundred more that did absolutely nothing, or even made things worse.

Today, it can feel like we are all chronically ill, because so many of us take so many different pills and supplements. But this is not a sign that we are ill—it is a sign that we can be treated. The pills are new, yes—but the illnesses they treat were here all along.

I don’t see any particular reason to think that Roman plebs or Medieval peasants were any less likely to get migraines than we are; but they certainly didn’t have access to sumatriptan or rimegepant. Maybe they were less likely to get diabetes, but mainly because they were much more likely to be malnourished. (Well, okay, also because they got more exercise, which we surely could stand to get more of.) And the only reason they didn’t get Alzheimer’s was that they usually didn’t live long enough.

Looking further back, before civilization, human health actually does seem to have been better: Foragers were rarely malnourished, weren’t exposed to as many infectious pathogens, and certainly got plenty of exercise. But should a pathogen like smallpox or influenza make it to a forager tribe, the results were often utterly catastrophic.

Today, we don’t really have the sort of plague that human beings used to deal with. We have pandemics, which are also horrible, but far less so. We were horrified by losing 0.3% of our population to COVID; a society that suffered only 0.3% losses from the Black Death—or even ten times that, 3%—would have hailed it as a miracle, for a more typical rate was 30%.

At 0.3%, most of us knew somebody, or knew somebody who knew somebody, who died from COVID. At 3%, nearly everyone would know somebody, and most would know several. At 30%, nearly everyone would have close family and friends who died.
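
To put rough numbers on that intuition, here is a quick back-of-the-envelope calculation (a minimal sketch in Python, assuming deaths strike independently at random and taking roughly 150 acquaintances and 15 close family and friends as purely illustrative circle sizes):

def chance_of_losing_someone(mortality_rate, circle_size):
    # Probability of at least one death among circle_size people,
    # assuming each person dies independently with probability mortality_rate.
    return 1 - (1 - mortality_rate) ** circle_size

for label, p in [("COVID-scale (0.3%)", 0.003),
                 ("ten times that (3%)", 0.03),
                 ("Black Death-scale (30%)", 0.30)]:
    among_acquaintances = chance_of_losing_someone(p, 150)  # ~150 acquaintances (assumed)
    among_close_circle = chance_of_losing_someone(p, 15)    # ~15 close family and friends (assumed)
    print(f"{label}: {among_acquaintances:.0%} chance of losing an acquaintance, "
          f"{among_close_circle:.0%} chance of losing someone close")

At 0.3%, roughly a third of us lose an acquaintance; at 3%, nearly everyone does, and about a third lose someone close; at 30%, essentially no inner circle is spared.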

Then there is infant mortality.

As recently as 1950—this is living memory—the global infant mortality rate was 14.6%. This is about half what it had been historically; for most of human history, roughly a third of all children died between birth and the age of 5.

Today, it is 2.5%.

Where our distant ancestors expected two out of three of their children to survive and our own great-grandparents expected five out of six, we can now safely expect thirty-nine out of forty to live. This is the difference between “nearly every family has lost a child” and “most families have not lost a child”.

And this is worldwide; in highly-developed countries it’s even better. The US has a relatively high infant mortality rate by the standards of highly-developed countries (indeed, are we even highly-developed, or are we becoming like Saudi Arabia, extremely rich but so unequal that it doesn’t really mean anything to most of our people?). Yet even for us, the infant mortality rate is 0.5%—so we can expect one-hundred-ninety-nine out of two-hundred to survive. This is at the level of “most families don’t even know someone who has lost a child.”
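
A small sketch makes the family-level difference vivid (assuming, purely for illustration, a family with three children and independent survival at the rates quoted above):

def chance_family_loses_a_child(child_mortality, n_children=3):
    # Probability that at least one of n_children dies, assuming independent survival.
    return 1 - (1 - child_mortality) ** n_children

for era, rate in [("most of history (~1 in 3)", 1 / 3),
                  ("world, 1950 (14.6%)", 0.146),
                  ("world, today (2.5%)", 0.025),
                  ("US, today (0.5%)", 0.005)]:
    print(f"{era}: {chance_family_loses_a_child(rate):.0%} of three-child families lose a child")

That works out to roughly 70% of families historically, just under 40% in 1950, about 7% worldwide today, and about 1% in the US today.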

Poverty is a bit harder to measure.

I am increasingly dubious of conventional measures of poverty; ever since compiling my Index of Necessary Expenditure, I am convinced that economists in general, and perhaps US economists in particular, are systematically underestimating the cost of living and thereby underestimating the prevalence of poverty. (I don’t think this is intentional, mind you; I just think it’s a result of using convenient but simplistic measures and not looking too closely into the details.) I think not being able to sustainably afford a roof over your head constitutes being poor—and that applies to a lot of people.

Yet even with that caveat in mind, it’s quite clear that global poverty has greatly declined in the long run.

At the “extreme poverty” level, currently defined as consuming $1.90 at purchasing power parity per day—that’s just under $700 per year, less than 2% of the median personal income in the United States—the number of people below that line has fallen from 1.9 billion in 1990 to about 700 million today. That’s from 36% of the world’s population to under 9%.
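
Here is the arithmetic behind those figures, for anyone who wants to check it (the poverty line and headcounts are as above; the world population totals and a ballpark US median personal income of about $40,000 are my own rough assumptions):

poverty_line_per_day = 1.90                # PPP dollars per day
per_year = poverty_line_per_day * 365      # about $694, "just under $700"
us_median_personal_income = 40_000         # rough ballpark assumption
print(f"${per_year:.0f} per year, about {per_year / us_median_personal_income:.1%} of US median personal income")

# Headcounts as above; world population totals (5.3 and 8.1 billion) are approximate.
print(f"1990: {1.9e9 / 5.3e9:.1%} of the world;  today: {0.7e9 / 8.1e9:.1%}")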

Now, there are good reasons to doubt that “purchasing power parity” really can be estimated as accurately as we would like, and thus it’s not entirely clear that people living on “$2 per day PPP” are really living at less than 2% of the standard of living of a typical American (honestly, to me that just sounds like… dead); but they are definitely living at a much worse standard of living, and there are a lot fewer people living at such a low standard today than there were not all that long ago. These are people who don’t have reliable food, clean water, or even basic medicine—and that used to include over a third of humanity and no longer does. (And I would like to note that actually finding such a person and giving them a few hundred dollars absolutely would change their life, and this is the sort of thing GiveDirectly does. We may not know exactly how to evaluate their standard of living, but we do know that the actual amount of money they have access to is very, very small.)

There are many ways in which the world could be better than it is.

Indeed, part of the deep, overwhelming outrage I feel pretty much all the time lies in the fact that it would be so easy to make things so much better for so many people, if there weren’t so many psychopaths in charge of everything.


Increased foreign aid is one avenue by which that could be achieved—so, naturally, Trump cut it tremendously. More progressive taxation is another—so, of course, we get tax cuts for the rich.

Just think about the fact that there are families with starving children for whom a $500 check could change their lives; but nobody is writing that check, because Elon Musk needs to become a literal trillionaire.

There are so many water lines and railroad tracks and bridges and hospitals and schools not being built because the money that would have paid for them is tied up in making already unfathomably-rich people even richer.

But even despite all that, things are getting better. Not every day, not every month, not even every year—this past year was genuinely, on net, a bad one. But nearly every decade, every generation, and certainly every century (for at least the last few), humanity has fared better than it did in the one before.

As long as we can keep that up, we still have much to hope for—and much to be thankful for.

What is the cost of all this?

Nov 23 JDN 2461003

After the Democrats swept the recent election, and now that the Epstein files are being released—and absolutely do seem to contain damning information about Trump—it really seems like Trump’s popularity has permanently collapsed. His approval rating stands at 42%, which is about 42% too high, but at least comfortably well below a majority.

It now begins to feel like we have hope, not only of removing him, but also of changing how American politics in general operates so that no one like him ever gets power again. (The latter, of course, is a much taller order.)

But at the risk of undermining this moment of hope, I’d like to take stock of some of the damage that Trump and his ilk have already done.

In particular, the cuts to US foreign aid are an absolute humanitarian disaster.

These didn’t get so much attention, because there has been so much else going on; and—unfortunately—foreign aid actually isn’t that popular among American voters, despite being a small proportion of the budget and by far the most cost-effective beneficial thing that our government does.

In fact, I think USAID would be cost-effective on a purely national security basis: it’s hard to motivate people to attack a country that saves the lives of their children. Indeed, I suppose this is the kernel of truth to the leftists who say that US foreign aid is just a “tool of empire” (or even “a front for the CIA”); yes, indeed, helping the needy does in fact advance American interests and promote US national security.

Over the last 25 years, USAID has saved over 90 million lives. That is more than a fourth of the population of the United States. And it has done this for the cost of less than 1% of the US federal budget.

But under Trump’s authority and Elon Musk’s direction, US foreign aid was cut massively over the last couple of years, and the consequences are horrific. Research on the subject suggests that as many as 700,000 children will die each year as long as these cuts persist.


Even if that number is overestimated by a factor of 2, that would still be millions of children over the next few years. And it could just as well be underestimated.

If we don’t fix this fast, millions of children will die. Thousands already have.

What’s more, fixing this isn’t just a matter of bringing the funding back. Obviously that’s necessary, but it won’t be sufficient. The sudden cuts have severely damaged international trust in US foreign aid, and many of the agencies that our aid was supporting will either collapse or need to seek funding elsewhere—quite likely from China. Relationships with governments and NGOs that were built over decades have been strained or even destroyed, and will need to be rebuilt.

This is what happens when you elect monsters to positions of power.

And even after we remove them, much of the damage will be difficult or even impossible to repair. Certainly we can never bring back the children who have already needlessly died because of this.

More on Free Will


Oct 27 JDN 2460611

In a previous post, I defended compatibilism and the existence of free will. There are a few subtler issues with free will that I’d now like to deal with in this week’s post.

The ability to do otherwise

One subtler problem for free will comes from the idea of doing otherwise—what some philosophers call “genuinely open alternatives”. The question is simple to ask, but surprisingly difficult to answer: “When I make a choice, could I have chosen otherwise?”

On one hand, the answer seems obviously “yes” because, when I make a choice, I consider a set of alternatives and select the one that seems best. If I’d wanted to, I’d have chosen something else. On the other hand, the answer seems obviously “no”, because the laws of nature compelled my body and brain to move in exactly the way that they did. So which answer is right?

I think the key lies in understanding specifically how the laws of nature cause my behavior. It’s not as if my arms are on puppet strings, and no matter what I do, they will be moved in a particular way; if I choose to do something, I will do it; if I choose not to, I won’t do it. The laws of nature constrain my behavior by constraining my desires; they don’t constrain what I do in spite of what I want—instead, they constrain what I do through what I want. I am still free to do what I choose to do.

So, while my actions may be predetermined, they are determined by who I am, what I want, what experiences I have. These are precisely the right kind of determinants for free will to make sense; my actions spring not from random chance or external forces, but instead from my own character.

If we really mean to ask, “Could I (exactly as I was, in the situation I was in) have done otherwise (as free choice, not random chance)?” the answer is “No”. Something would have to be different. But one of the things that could be different is me! If I’d had different genes, or a different upbringing, or exposure to different ideas during my life, I might have acted differently. Most importantly, if I had wanted a different outcome, I could have chosen it. So if all we mean by the question is “Could I (if I wanted to) have done otherwise?” the answer is a resounding “Yes”. What I have done in my life speaks to my character—who I am, what I want. It doesn’t merely involve luck (though it may involve some luck), and it isn’t reducible to factors external to me. I am part of the causal structure of the universe; my will is a force. Though the world is made of pushes and pulls, I am among the things pushing and pulling.

As Daniel Dennett pointed out, this kind of freedom admits of degrees: It is entirely possible for a deterministic agent to be more or less effective at altering its circumstances to suit its goals. In fact, we have more options today than we did a few short centuries ago, and this means that in a very real sense we have more free will.

Empirically observing free will

What is really at stake, when we ask whether a person has free will? It seems to me that the question we really want to answer is this: “Are we morally justified in rewarding or punishing this person?” If you were to conclude, “No, they do not have free will, but we are justified in punishing them,” I would think that you meant something different than I do by “free will”. If instead your ruling was “Yes, they have free will, but we may not reward or punish them,” I would be similarly confused. Moreover, the concern that without free will, our moral and legal discourse collapses, seems to be founded upon this general notion—that reward and punishment, crucial to ethics and law (not to mention economics!) as they are, are dependent upon free will.

Yet, consider this as a scientific question. What kind of organism can respond to reward and punishment? What sort of thing will change its behavior based upon rewards, punishments, and the prospect thereof? Certainly you must agree that there is no point punishing a thing that will not be affected by the punishment in any way—banging your fist on the rocks will not make the rocks less likely to crush your loved ones. Conversely, I think you’d be hard-pressed to say it’s pointless to punish if the punishment would result in some useful effect. Maybe it’s not morally relevant—but then, why not? If you can make the world better by some action, doesn’t that, other things equal, give you a moral reason to perform that action?

We know exactly what sort of thing responds to reward and punishment: Animals. Specifically, animals that are operant-conditionable, for operant conditioning consists precisely in the orchestrated use of reward and punishment. Humans are of course supremely operant-conditionable; indeed, we can be trained to do incredibly complex things—like play a piano, pilot a space shuttle, hit a fastball, or write a book—and, even more impressively, we can learn to train ourselves to do such things. In fact, clearly something more than operant conditioning is at work here, because certain human behaviors (like language) are far too complex to learn by simple reward and punishment. There is a lot of innate cognition going on in the human brain—but over that layer of innate cognition we can add a virtually endless range of possible learned behaviors.

That is to say, learning—the capacity to change future behavior based upon past experience—is precisely in alignment with our common intuitions about free will—that humans have the most, animals have somewhat less, computers might have some, and rocks have none. Yes, there are staunch anthropocentrist dualists who would insist that animals and computers have no “free will”. But if you ask someone, “Did that dog dig that hole on purpose?” their immediate response will not include such theological considerations; it will attribute free choice to Canis lupus familiaris. Indeed, I think if you ask, “Did the chess program make that move on purpose?” the natural answer attributes some sort of will even to the machine. (Maybe just its programmer? I’m not so sure.)

Yet, if the capacity to respond to reward and punishment is all we need to justify reward and punishment, then the problem of free will collapses. We should punish criminals if, and only if, punishing them will reform them to better behavior, or set an example to deter others from similar crimes. Did we lose some deep sense of moral desert and retribution? Maybe, but I think we can probably work it back in, and if we can’t, we can probably do without it. Either way, we can still have a justice system and moral discourse.

Indeed, we can do better than that; we can now determine empirically whether a given entity is a moral agent. The insane psychopathic serial killer who utterly fails to understand empathy may indeed fail to qualify, in which case we should kill them and be done with it, the same way we would kill a virus or destroy an oncoming asteroid. Or they may turn out to qualify, in which case we should punish them as we would other moral agents. The point is, this is a decidable question, at least in principle; all we need are a few behavioral and psychological experiments to determine the answer.

The power of circumstances

There is another problem with classical accounts of free will, which comes from the results of psychology experiments. Perhaps the most seminal was the (in)famous experiment by Stanley Milgram, in which verbal commands caused ordinary people to administer what they thought were agonizing and life-threatening shocks to innocent people for no good reason. Simply by being put in particular circumstances, people found themselves compelled to engage in actions they would never have done otherwise. This experiment was replicated in 2009 under more rigorous controls, with virtually identical results.

This shows that free will is much more complicated than we previously imagined. Even if we acknowledge that human beings are capable of making rational, volitional decisions that reflect their character, we must be careful not to presume that everything people do is based upon character. As Hannah Arendt has pointed out, even the Nazis, though they perpetrated almost unimaginable evils, nonetheless were for the most part biologically and psychologically normal human beings. Perhaps Hitler and Himmler were maniacal psychopaths (and more recently Arendt’s specific example of Eichmann has also been challenged), but the vast majority of foot soldiers of the German Army who burned villages or gassed children were completely ordinary men in extraordinarily terrible circumstances. This forces us to reflect upon the dire fact that in their place, most of us would have done exactly the same things.

This doesn’t undermine free will entirely, but it does force us to reconsider many of our preconceptions about it. Court systems around the world are based around the presumption that criminal acts are committed by people who are defective in character, making them deserving of punishment; in some cases this is probably right (e.g. Jeffrey Dahmer, Charles Manson), but in many cases, it is clearly wrong. Crime is much more prevalent in impoverished areas; why? Not because poor people are inherently more criminal, but because poverty itself makes people more likely to commit crimes. In a longitudinal study in Georgia, socioeconomic factors strongly predicted crime, especially property crime. An experiment at MIT suggests that letting people move to wealthier neighborhoods actually makes their children less likely to commit crimes. A 2007 report from the Government Accountability Office explicitly endorsed the hypothesis that poverty causes crime.

Really, all of this makes perfect sense: Poor people are precisely those who have the least to lose and the most to gain by breaking the rules. If you are starving, theft may literally save your life. Even if you’re not at the verge of starvation, the poorer you are, the worse your life prospects are, and the more unfairly the system has treated you. Most people who are rich today inherited much of their wealth from ancestors who violently stole it from other people. Why should anyone respect the rules of a system that robbed their ancestors and leaves them forsaken? Compound this with the fact that it is harder to be law-abiding when you are surrounded by thieves, and the high crime rates of inner cities hardly seem surprising.

Does this mean we should abandon criminal justice? Clearly not, for the consequences of doing so would be predictably horrendous. Temporary collapses in civil government typically lead to violent anarchy; this continued for several years in Somalia, and has happened more briefly even in Louisiana (it was not as terrible as the media initially reported, but it was still quite bad). We do need to hold people responsible for their crimes. But what this sort of research shows is that we also need to consider situational factors when we set policy. The United States has the highest after-tax absolute poverty rate and the highest share of income claimed by the top 0.01% of any First World nation—an astonishing 4%, meaning that the top 30,000 richest Americans have on average 400 times as much income as the average person. (My master’s thesis was actually on the subject of how this high level of inequality is related to increased corruption.) We also have the third-highest rate of murder in the OECD, after Mexico (by far the highest) and Estonia. Our homicide rate is almost three times that of Canada and over four times that of England. Even worse, the US has the highest incarceration rate in the world. Yes, that’s right; we in the US imprison a larger portion of our population than any other nation on Earth—including Iran, China, and Saudi Arabia.
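
The “400 times” figure is just the ratio of the income share to the population share; a quick sketch (the 4% share and the 0.01% group are as above; the US population of roughly 330 million is my approximation):

top_income_share = 0.04          # 4% of all income
top_population_share = 0.0001    # the top 0.01%
us_population = 330_000_000      # rough approximation

group_size = us_population * top_population_share
multiple_of_average = top_income_share / top_population_share
print(f"Roughly {group_size:,.0f} people, each averaging about {multiple_of_average:.0f} times the mean income")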

Social science suggests this is no coincidence; it is our economic inequality that leads to our crime and incarceration. Nor is our poverty a result of insufficient wealth. By the standard measure Gross Domestic Product (GDP), an estimate of the total economic output a nation produces each year, the United States has the second-highest total GDP at purchasing power parity (China recently surpassed us), and the sixth-highest GDP per person in the world. We do not lack wealth; instead, we funnel wealth to the rich and withhold it from the poor. If we stopped doing this, we would see a reduction in poverty and inequality, and there is reason to think that a corresponding reduction in crime would follow. We could make people act morally better simply by redistributing wealth.

Such knowledge of situational factors forces us to reconsider our ethical judgments on many subjects. It forces us to examine the ways that social, political, and economic systems influence our behavior in powerful ways. But we still have free will, and we still need to use it; in fact, in order to apply this research to our daily lives and public policies, we will need to exercise our free will very carefully.

Wrongful beneficence

Jun 9 JDN 2460471

One of the best papers I’ve ever read—one that in fact was formative in making me want to be an economist—is Wrongful Beneficence by Chris Meyers.

This paper opened my eyes to a whole new class of unethical behavior: Acts that unambiguously make everyone better off, but nevertheless are morally wrong. Hence, wrongful beneficence.

A lot of economists don’t even seem to believe in such things. They seem convinced that as long as no one is made worse off by a transaction, that transaction must be ethically defensible.

Chris Meyers convinced me that they are wrong.

The key insight here is that it’s still possible to exploit someone even if you make them better off. This happens when they are in a desperate situation and you take advantage of that to get an unfair payoff.


Here is one of the cases Meyers offers to demonstrate this:

Suppose Carole is driving across the desert on a desolate road when her car breaks down. After two days and two nights without seeing a single car pass by, she runs out of water and feels rather certain that she will perish if not rescued soon. Now suppose that Jason happens to drive down this road and finds Carole. He sees that her situation is rather desperate and that she needs (or strongly desires) to get to the nearest town as soon as possible. So Jason offers her a ride but only on the condition that […] [she gives him] her entire net worth, the title to her house and car, all of her money in the bank, and half of her earnings for the next ten years.

Carole obviously is better off than she would be if Jason hadn’t shown up—she might even have died. She freely consented to this transaction—again, because if she didn’t, she might die. Yet it seems absurd to say that Jason has done nothing wrong by making such an exorbitant demand. If he had asked her to pay for gas, or even to compensate him for his time at a reasonable rate, we’d have no objection. But to ask for her life savings, all her assets, and half her earnings for ten years? Obviously unfair—and obviously unethical. Jason is making Carole (a little) better off while making himself (a lot) better off, so everyone is benefited; but what he’s doing is obviously wrong.

Once you recognize that such behavior can exist, you start to see it all over the place, particularly in markets, where corporations are quite content to gouge their customers with high prices and exploit their workers with low wages—but still, technically, we’re better off than we would be with no products and no jobs at all.

Indeed, the central message of Wrongful Beneficence is actually about sweatshop labor: It’s not that the workers are worse off than they would have been (in general, they aren’t); it’s that they are so desperate that corporations can get away with exploiting them with obviously unfair wages and working conditions.

Maybe it would be easier just to move manufacturing back to First World countries?

Right-wingers are fond of making outlandish claims that making products at First World wages would be utterly infeasible; here’s one claiming that an iPhone would need to cost $30,000 if it were made in the US. In fact, the truth is that it would only need to cost about $40 more—because hardly any of its cost is actually going to labor. Most of its price is pure monopoly profit for Apple; most of the rest is components and raw materials. (Of course, if those also had to come from the US, the price would go up more; but even so, we’re talking something like double its original price, not thirty times. Workers in the US are indeed paid a lot more than workers in China; they are also more productive.)

It’s true that actually moving manufacturing from other countries back to the US would be a substantial undertaking, requiring retooling factories, retraining engineers, and so on; but it’s not like we’ve never done that sort of thing before. I’m sure it could not be done overnight; but of course it could be done. We do this sort of thing all the time.

Ironically, this sort of right-wing nonsense actually seems to feed the far left as well, supporting their conviction that all this prosperity around us is nothing more than an illusion, that all our wealth only exists because we steal it from others. But this could scarcely be further from the truth; our wealth comes from technology, not theft. If we offered a fairer bargain to poorer countries, we’d be a bit less rich, but they would be much less poor—the overall wealth in the world would in fact probably increase.

A better argument for not moving manufacturing back to the First World is that many Third World economies would collapse if they stopped manufacturing things for other countries, and that would be disastrous for millions of people.

And free trade really does increase efficiency and prosperity for all.

So, yes; let’s keep on manufacturing goods wherever it is cheapest to do so. But when we decide what’s cheapest, let’s evaluate that based on genuinely fair wages and working conditions, not the absolute cheapest that corporations think they can get away with.

Sometimes they may even decide that it’s not really cheaper to manufacture in poorer countries, because they need advanced technology and highly-skilled workers that are easier to come by in First World countries. In that case, bringing production back here is the right thing to do.

Of course, this raises the question:

What would be fair wages and working conditions?

That’s not so easy to answer. Since workers in Third World countries are less educated than workers in First World countries, and have access to less capital and worse technology, we should in fact expect them to be less productive and therefore get paid less. That may be unfair in some cosmic sense, but it’s not anyone’s fault, and it’s not any particular corporation’s responsibility to fix it.

But when less than 1% of a product’s sales price goes to the workers who actually made it, something is wrong. When the profit margin is often wildly larger than the total amount spent on labor, something is wrong.

It may be that we will never have precise thresholds we can set to decide what definitely is or is not exploitative; but that doesn’t mean we can’t ever recognize it when we see it. There are various institutional mechanisms we could use to enforce better wages and working conditions without ever making such a sharp threshold.

One of the simplest, in fact, is Fair Trade.

Fair Trade is by no means a flawless system; in fact there’s a lot of research debating how effective it is at achieving its goals. But it does seem to be accomplishing something. And it’s a system that we already have in place, operating successfully in many countries; it simply needs to be scaled up (and hopefully improved along the way).

One of the clearest pieces of evidence that it’s helping, in fact, is that farmers are willing to participate in it. That shows that it is beneficent.

Of course, that doesn’t mean that it’s genuinely fair! This could just be another kind of wrongful beneficence. Perhaps Fair Trade is really just less exploitative than all the available alternatives.

If so, then we need something even better still, some new system that will reliably pass the increased cost paid by customers all the way down to increased wages for workers.

Fair Trade shows us something else, too: A lot of customers clearly are willing to pay a bit more in order to see workers treated better. Even if they weren’t, maybe they should be forced to. But the fact is, they are! Even those who are most adamantly opposed to Fair Trade can’t deny that people really are willing to pay more to help other people. (Yet another example of obvious altruism that neoclassical economists somehow manage to ignore.) They simply deny that it’s actually helping, which is an empirical matter.

But if this isn’t helping enough, fine; let’s find something else that does.

How Effective Altruism hurt me

May 12 JDN 2460443

I don’t want this to be taken the wrong way. I still strongly believe in the core principles of Effective Altruism. Indeed, it’s shockingly hard to deny them, because basically they come out to this:

Doing more good is better than doing less good.

Then again, most people want to do good. Basically everyone agrees that more good is better than less good. So what’s the big deal about Effective Altruism?

Well, in practice, most people put shockingly little effort into trying to ensure that they are doing the most good they can. A lot of people just try to be nice people, without ever concerning themselves with the bigger picture. Many of these people don’t give to charity at all.

Then, even people who do give to charity typically give more or less at random—or worse, in proportion to how much mail those charities send them begging for donations. (Surely you can see how that is a perverse incentive?) They donate to religious organizations, which sometimes do good things, but fundamentally are founded upon ignorance, patriarchy, and lies.

Effective Altruism is a movement intended to fix this, to get people to see the bigger picture and focus their efforts on where they will do the most good. Vet charities not just for their honesty, but also their efficiency and cost-effectiveness:

Just how many mQALY can you buy with that $1?

That part I still believe in. There is a lot of value in assessing which charities are the most effective, and trying to get more people to donate to those high-impact charities.
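
As a toy version of that kind of vetting (every number here is an illustrative assumption, not an actual charity evaluation): suppose a charity averts the death of a young child for about $3,500, and that saving that life is worth on the order of 50 quality-adjusted life years.

cost_per_life_saved = 3_500   # dollars; illustrative assumption, not a real evaluation
qalys_per_life_saved = 50     # illustrative assumption for a young child's remaining healthy years

mqaly_per_dollar = qalys_per_life_saved / cost_per_life_saved * 1000
print(f"About {mqaly_per_dollar:.0f} mQALY per dollar")

By the same arithmetic, a charity that needs $350,000 to avert a death delivers a hundredth as much per dollar; surfacing differences like that is the whole point of the exercise.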

But there is another side to Effective Altruism, which I now realize has severely damaged my mental health.

That is the sense of obligation to give as much as you possibly can.

Peter Singer is the most extreme example of this. He seems to have mellowed—a little—in more recent years, but in some of his most famous books he uses the following thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Basically everyone agrees with this particular decision: Even if you are wearing a very expensive suit that will be ruined, even if you’ll miss something really important like a job interview or even a wedding—most people agree that if you ever come across a drowning child, you should save them.

(Oddly enough, when contemplating this scenario, nobody ever seems to consider the advice that most lifeguards give, which is to throw a life preserver and then go find someone qualified to save the child—because saving someone who is drowning is a lot harder and a lot riskier than most people realize. (“Reach or throw, don’t go.”) But that’s a bit beside the point.)

But Singer argues that we are basically in this position all the time. For somewhere between $500 and $3000, you—yes, you—could donate to a high-impact charity, and thereby save a child’s life.

Does it matter that many other people are better positioned to donate than you are? Does it matter that the child is thousands of miles away and you’ll never see them? Does it matter that there are actually millions of children, and you could never save them all by yourself? Does it matter that you’ll only save a child in expectation, rather than saving some specific child with certainty?

Singer says that none of this matters. For a long time, I believed him.

Now, I don’t.

For, if you actually walked by a drowning child that you could save, only at the cost of missing a wedding and ruining your tuxedo, you clearly should do that. (If it would risk your life, maybe not—and as I alluded to earlier, that’s more likely than you might imagine.) If you wouldn’t, there’s something wrong with you. You’re a bad person.

But most people don’t donate everything they could to high-impact charities. Even Peter Singer himself doesn’t. So if donating is the same as saving the drowning child, it follows that we are all bad people.

(Note: In general, if an ethical theory results in the conclusion that the whole of humanity is evil, there is probably something wrong with that ethical theory.)

Singer has tried to get out of this by saying we shouldn’t “sacrifice things of comparable importance”, and then somehow cash out what “comparable importance” means in such a way that it doesn’t require you to live on the street and eat scraps from trash cans. (Even though the people you’d be donating to largely do live that way.)

I’m not sure that really works, but okay, let’s say it does. Even so, it’s pretty clear that anything you spend money on purely for enjoyment would have to go. You would never eat out at restaurants, unless you could show that the time saved allowed you to get more work done and therefore donate more. You would never go to movies or buy video games, unless you could show that it was absolutely necessary for your own mental functioning. Your life would be work, work, work, then donate, donate, donate, and then do the absolute bare minimum to recover from working and donating so you can work and donate some more.

You would enslave yourself.

And all the while, you’d believe that you were never doing enough, you were never good enough, you are always a terrible person because you try to cling to any personal joy in your own life rather than giving, giving, giving all you have.

I now realize that Effective Altruism, as a movement, had been basically telling me to do that. And I’d been listening.

I now realize that Effective Altruism has given me this voice in my head, which I hear whenever I want to apply for a job or submit work for publication:

If you try, you will probably fail. And if you fail, a child will die.

The “if you try, you will probably fail” is just an objective fact. It’s inescapable. Any given job application or writing submission will probably fail.

Yes, maybe there’s some sort of bundling we could do to reframe that, as I discussed in an earlier post. But basically, this is correct, and I need to accept it.

Now, what about the second part? “If you fail, a child will die.” To most of you, that probably sounds crazy. And it is crazy. It’s way more pressure than any ordinary person should have in their daily life. This kind of pressure should be reserved for neurosurgeons and bomb squads.

But this is essentially what Effective Altruism taught me to believe. It taught me that every few thousand dollars I don’t donate is a child I am allowing to die. And since I can’t donate what I don’t have, it follows that every few thousand dollars I fail to get is another dead child.

And since Effective Altruism is so laser-focused on results above all else, it taught me that it really doesn’t matter whether I apply for the job and don’t get it, or never apply at all; the outcome is the same, and that outcome is that children suffer and die because I had no money to save them.

I think part of the problem here is that Effective Altruism is utilitarian through and through, and utilitarianism has very little place for good enough. There is better and there is worse; but there is no threshold at which you can say that your moral obligations are discharged and you are free to live your life as you wish. There is always more good that you could do, and therefore always more that you should do.

Do we really want to live in a world where to be a good person is to owe your whole life to others?

I do not believe in absolute selfishness. I believe that we owe something to other people. But I no longer believe that we owe everything. Sacrificing my own well-being at the altar of altruism has been incredibly destructive to my mental health, and I don’t think I’m the only one.

By all means, give to high-impact charities. But give a moderate amount—at most, tithe—and then go live your life. You don’t owe the world more than that.

The Butlerian Jihad is looking better all the time

Mar 24 JDN 2460395

A review of The Age of Em by Robin Hanson

In the Dune series, the Butlerian Jihad was a holy war against artificial intelligence that resulted in a millennia-long taboo against all forms of intelligent machines. It was effectively a way to tell a story about the distant future without basically everything being about robots or cyborgs.

After reading Robin Hanson’s book, I’m starting to think that maybe we should actually do it.

Thus it is written: “Thou shalt not make a machine in the likeness of a human mind.”

Hanson says he’s trying to reserve judgment and present objective predictions without evaluation, but it becomes very clear throughout that this is the future he wants, as well as—or perhaps even instead of—the world he expects.

In many ways, it feels like he has done his very best to imagine a world of true neoclassical rational agents in perfect competition, a sort of sandbox for the toys he’s always wanted to play with. Throughout he very much takes the approach of a neoclassical economist, making heroic assumptions and then following them to their logical conclusions, without ever seriously asking whether those assumptions actually make any sense.

To his credit, Hanson does not buy into the hype that AGI will be successful any day now. He predicts that we will achieve the ability to fully emulate human brains and thus create a sort of black-box AGI that behaves very much like a human within about 100 years. Given how the Blue Brain Project has progressed (much slower than its own hype machine told us it would—and let it be noted that I predicted this from the very beginning), I think this is a fairly plausible time estimate. He refers to a mind emulated in this way as an “em”; I have mixed feelings about the term, but I suppose we did need some word for that, and it certainly has conciseness on its side.

Hanson believes that a true understanding of artificial intelligence will only come later, and the sort of AGI that can be taken apart and reprogrammed for specific goals won’t exist for at least a century after that. Both of these sober, reasonable predictions are deeply refreshing in a field that’s been full of people saying “any day now” for the last fifty years.

But Hanson’s reasonableness just about ends there.

In The Age of Em, government is exactly as strong as Hanson needs it to be. Somehow it simultaneously ensures a low crime rate among a population that doubles every few months while also having no means of preventing that population growth. Somehow it ensures that there is no labor collusion and corporations never break the law, but without imposing any regulations that might reduce efficiency in any way.

All of this begins to make more sense when you realize that Hanson’s true goal here is to imagine a world where neoclassical economics is actually true.

He realized it didn’t work on humans, so instead of giving up the theory, he gave up the humans.

Hanson predicts that ems will casually make short-term temporary copies of themselves called “spurs”, designed to perform a particular task and then get erased. I guess maybe he would, but I for one would not so cavalierly create another person and then make their existence dedicated to doing a single job before they die. The fact that I created this person, and that they are very much like me, seems like a reason to care more about their well-being, not less! You’re asking me to enslave and murder my own child. (Honestly, the fact that Robin Hanson thinks ems will do this all the time says more about Robin Hanson than anything else.) Any remotely sane society of ems would ban the deletion of another em under any but the most extreme circumstances, and indeed treat it as tantamount to murder.

Hanson predicts that we will only copy the minds of a few hundred people. This is surely true at some point—the technology will take time to develop, and we’ll have to start somewhere. But I don’t see why we’d stop there, when we could continue to copy millions or billions of people; and his choices of who would be emulated, while not wildly implausible, are utterly terrifying.

He predicts that we’d emulate genius scientists and engineers; okay, fair enough, that seems right. I doubt that the benefits of doing so will be as high as many people imagine, because scientific progress actually depends a lot more on the combined efforts of millions of scientists than on rare sparks of brilliance by lone geniuses; but those people are definitely very smart, and having more of them around could be a good thing. I can also see people wanting to do this, and thus investing in making it happen.

He also predicts that we’d emulate billionaires. Now, as a prediction, I have to admit that this is actually fairly plausible; billionaires are precisely the sort of people who are rich enough to pay to be emulated and narcissistic enough to want to. But where Hanson really goes off the deep end here is that he sees this as a good thing. He seems to honestly believe that billionaires are so rich because they are so brilliant and productive. He thinks that a million copies of Elon Musk would produce a million hectobillionaires—when in reality it would produce a million squabbling narcissists, who at best would have to split the same $200 billion wealth between them, and might very well end up with less because they squander it.

Hanson has a long section on trying to predict the personalities of ems. Frankly this could just have been dropped entirely; it adds almost nothing to the book, and the book is much too long. But the really striking thing to me about that section is what isn’t there. He goes through a long list of studies that found weak correlations between various personality traits like extroversion or openness and wealth—mostly comparing something like the 20th percentile to the 80th percentile—and then draws sweeping conclusions about what ems will be like, under the assumption that ems are all drawn from people in the 99.99999th percentile. (Yes, upper-middle-class people are, on average, more intelligent and more conscientious than lower-middle-class people. But do we even have any particular reason to think that the personalities of people who make $150,000 are relevant to understanding the behavior of people who make $15 billion?) But he completely glosses over the very strong correlations that specifically apply to people in that very top super-rich class: They’re almost all narcissists and/or psychopaths.

Hanson predicts a world where each em is copied many, many times—millions, billions, even trillions of times, and also in which the very richest ems are capable of buying parallel processing time that lets them accelerate their own thought processes to a million times faster than a normal human. (Is that even possible? Does consciousness work like that? Who knows!?) The world that Hanson is predicting is thus one where all the normal people get outnumbered and overpowered by psychopaths.

Basically this is the most abjectly dystopian cyberpunk hellscape imaginable. And he talks about it the whole time as if it were good.

It’s like he played the game Action Potential and thought, “This sounds great! I’d love to live there!” I mean, why wouldn’t you want to owe a life-debt on your own body and have to work 120-hour weeks for a trillion-dollar corporation just to make the payments on it?

Basically, Hanson doesn’t understand how wealth is actually acquired. He is educated as an economist, yet his understanding of capitalism basically amounts to believing in magic. He thinks that competitive markets just somehow perfectly automatically allocate wealth to whoever is most productive, and thus concludes that whoever is wealthy now must just be that productive.

I can see no other way to explain his wildly implausible predictions that the em economy will double every month or two. A huge swath of the book depends upon this assumption, but he waits until halfway through the book to even try to defend it, and then does an astonishingly bad job of doing so. (Honestly, even if you buy his own arguments—which I don’t—they seem to predict that population would grow with Moore’s Law—doubling every couple of years, not every couple of months.)

Whereas Keynes predicted based on sound economic principles that economic growth would more or less proceed apace and got his answer spot-on, Hanson predicts that for mysterious, unexplained reasons economic growth will suddenly increase by two orders of magnitude—and I’m pretty sure he’s going to be wildly wrong.
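
For scale, here is the growth-rate comparison made explicit (treating growth as a smooth exponential is my simplification; the doubling times are the ones discussed above, plus roughly 3% annual growth, i.e. doubling about every 23 years, for the current world economy):

import math

def annual_growth_rate(doubling_time_years):
    # Continuous exponential growth rate implied by a given doubling time.
    return math.log(2) / doubling_time_years

current_economy = annual_growth_rate(23)        # ~3% annual growth today
moores_law = annual_growth_rate(2)              # doubling every couple of years
hanson_em_economy = annual_growth_rate(2 / 12)  # doubling every two months

print(f"Em economy: ~{hanson_em_economy / current_economy:.0f}x today's growth rate; "
      f"Moore's-law growth: ~{moores_law / current_economy:.0f}x")

Doubling every couple of months really is a jump of roughly two orders of magnitude in the growth rate, whereas Moore’s-law doubling would be “only” about a tenfold jump.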

Hanson also predicts that ems will be on average poorer than we are, based on some sort of perfect-competition argument that doesn’t actually seem to mesh at all with his predictions of spectacularly rapid economic and technological growth. I think the best way to make sense of this is to assume that it means the trend toward insecure affluence will continue: Ems will have an objectively high standard of living in terms of what they own, what games they play, where they travel, and what they eat and drink (in simulation), but they will constantly be struggling to keep up with the rent on their homes—or even their own bodies. This is a world where (the very finest simulation of) Dom Perignon is $7 a bottle and wages are $980 an hour—but monthly rent is $284,000.

Early in the book Hanson argues that this life of poverty and scarcity will lead to more conservative values, on the grounds that people who are poorer now seem to be more conservative, and this has something to do with farmers versus foragers. Hanson’s explanation of all this is baffling; I will quote it at length, just so it’s clear I’m not misrepresenting it:

The other main (and independent) axis of value variation ranges between poor and rich societies. Poor societies place more value on conformity, security, and traditional values such as marriage, heterosexuality, religion, patriotism, hard work, and trust in authority. In contrast, rich societies place more value on individualism, self-direction, tolerance, pleasure, nature, leisure, and trust. When the values of individuals within a society vary on the same axis, we call this a left/liberal (rich) versus right/conservative (poor) axis.

Foragers tend to have values more like those of rich/liberal people today, while subsistence farmers tend to have values more like those of poor/conservative people today. As industry has made us richer, we have on average moved from conservative/farmer values to liberal/forager values. This value movement can make sense if cultural evolution used the social pressures farmers faced, such as conformity and religion, to induce humans, who evolved to find forager behaviors natural, to instead act like farmers. As we become rich, we don’t as strongly fear the threats behind these social pressures. This connection may result in part from disease; rich people are healthier, and healthier societies fear less.

The alternate theory that we have instead learned that rich forager values are more true predicts that values should have followed a random walk over time, and be mostly common across space. It also predicts the variance of value changes tracking the rate at which relevant information appears. But in fact industrial-era value changes have tracked the wealth of each society in much more steady and consistent fashion. And on this theory, why did foragers ever acquire farmer values?

[…]

In the scenario described in this book, many strange-to-forager behaviors are required, and median per-person (i.e. per-em) incomes return to near-subsistence levels. This suggests that the em era may reverse the recent forager-like trend toward more liberality; ems may have more farmer-like values.

The Age of Em, p. 26-27

There’s a lot to unpack here, but maybe it’s better to burn the whole suitcase.

First of all, it’s not entirely clear that this is really a single axis of variation, that foragers and farmers differ from each other in the same way as liberals and conservatives. There’s some truth to that at least—both foragers and liberals tend to be more generous, both farmers and conservatives tend to enforce stricter gender norms. But there are also clear ways that liberal values radically deviate from forager values: Forager societies are extremely xenophobic, and typically very hostile to innovation, inequality, or any attempts at self-aggrandizement (a phenomenon called “fierce egalitarianism”). San Francisco epitomizes rich, liberal values, but it would be utterly alien and probably regarded as evil by anyone from the Yanomamo.

Second, there is absolutely no reason to predict any kind of random walk. That’s just nonsense. Would you predict that scientific knowledge is a random walk, with each new era’s knowledge just a random deviation from the last’s? Maybe next century we’ll return to geocentrism, or phrenology will be back in vogue? On the theory that liberal values (or at least some liberal values) are objectively correct, we would expect them to advance as knowledge does: improving over time, and improving faster in places that have better institutions for research, education, and free expression. And indeed, this is precisely the pattern we have observed. (Those places are also richer, but that isn’t terribly surprising either!)

Third, while poorer regions are indeed more conservative, poorer people within a region actually tend to be more liberal. Nigeria is poorer and more conservative than Norway, and Mississippi is poorer and more conservative than Massachusetts. But higher-income households in the United States are more likely to vote Republican. I think this is particularly true of people living under insecure affluence: We see the abundance of wealth around us, and don’t understand why we can’t learn to share it better. We’re tired of fighting over scraps while the billionaires claim more and more. Millennials and Zoomers absolutely epitomize insecure affluence, and we also absolutely epitomize liberalism. So, if indeed ems live a life of insecure affluence, we should expect them to be like Zoomers: “Trans liberation now!” and “Eat the rich!” (Or should I say, “Delete the rich!”)

And really, doesn’t that make more sense? Isn’t that the trend our society has been on, for at least the last century? We’ve been moving toward more and more acceptance of women and minorities, more and more deviation from norms, more and more concern for individual rights and autonomy, more and more resistance to authority and inequality.

The funny thing is, that world sounds a lot better than the one Hanson is predicting.

A world of left-wing ems would probably run things a lot better than Hanson imagines: Instead of copying the same hundred psychopaths over and over until we fill the planet, have no room for anything else, and all struggle to make enough money just to stay alive, we could moderate our population to a more sustainable level, preserve diversity and individuality, and work toward living in greater harmony with each other and the natural world. We could take this economic and technological abundance and share it and enjoy it, instead of killing ourselves and each other to make more of it for no apparent reason.

The one good argument Hanson makes here is expressed in a single sentence: “And on this theory, why did foragers ever acquire farmer values?” That actually is a good question; why did we give up on leisure and egalitarianism when we transitioned from foraging to agriculture?

I think scarcity probably is relevant here: As food became scarcer, maybe because of climate change, people were forced into an agricultural lifestyle just to have enough to eat. Early agricultural societies were also typically authoritarian and violent. Under those conditions, people couldn’t be so generous and open-minded; they were surrounded by threats and on the verge of starvation.

I guess if Hanson is right that the em world is also one of poverty and insecurity, we might go back to those sorts of values, born of desperation. But I don’t see any reason to think we’d give up all of our liberal values. I would predict that ems will still be feminist, for instance; in fact, Hanson himself admits that since VR avatars would let us change gender presentation at will, gender would almost certainly become more fluid in a world of ems. Far from valuing heterosexuality more highly (as conservatives do, a “farmer value” according to Hanson), I suspect that ems will have no further use for that construct, because reproduction will be done by manufacturing, not sex, and it’ll be so easy to swap your body into a different one that hardly anyone will even keep the same gender their whole life. They’ll think it’s quaint that we used to identify so strongly with our own animal sexual dimorphism.

But maybe it is true that the scarcity induced by a hyper-competitive em world would make people more selfish, less generous, less trusting, more obsessed with work. Then let’s not do that! We don’t have to build that world! This isn’t a foregone conclusion!

There are many other paths yet available to us.

Indeed, perhaps the simplest would be to just ban artificial intelligence, at least until we can get a better handle on what we’re doing—and perhaps until we can institute the kind of radical economic changes necessary to wrest control of the world away from the handful of psychopaths currently trying their best to run it into the ground.

I admit, it would kind of suck to not get any of the benefits of AI, like self-driving cars, safer airplanes, faster medical research, more efficient industry, and better video games. It would especially suck if we did go full-on Butlerian Jihad and ban anything more complicated than a pocket calculator. (Our lifestyle might have to go back to what it was in—gasp! The 1950s!)

But I don’t think it would suck nearly as much as the world Robin Hanson thinks is in store for us if we continue on our current path.

So I certainly hope he’s wrong about all this.

Fortunately, I think he probably is.

Reckoning costs in money distorts them

May 7 JDN 2460072

Consider for a moment what it means when an economic news article reports “rising labor costs”. What are they actually saying?

They’re saying that wages are rising—perhaps in some industry, perhaps in the economy as a whole. But this is not a cost. It’s a price. As I’ve written about before, the two are fundamentally distinct.

The cost of labor is measured in effort, toil, and time. It’s the pain of having to work instead of whatever else you’d like to do with your time.

The price of labor is a monetary amount, which is delivered in a transaction.

This may seem perfectly obvious, but it has important and oft-neglected implications. A cost, once paid, is gone. That value has been destroyed. We hope that it was worth it for some benefit we gained. A price, when paid, is simply transferred: One person had that money before, now someone else has it. Nothing was gained or lost.

So in fact when reports say that “labor costs have risen”, what they are really saying is that income is being transferred from owners to workers without any change in real value taking place. They are framing as a loss what is fundamentally a zero-sum redistribution.

In fact, it is disturbingly common to see a fundamentally good redistribution of income framed in the press as a bad outcome because of its expression as “costs”; the “cost” of chocolate is feared to go up if we insist upon enforcing bans on forced labor—when in fact it is only the price that goes up, and the cost actually goes down: chocolate would no longer include complicity in an atrocity. The real suffering of making chocolate would be thereby reduced, not increased. Even when they aren’t literally enslaved, those workers are astonishingly poor, and giving them even a few more cents per hour would make a real difference in their lives. But God forbid we pay a few cents more for a candy bar!

If labor costs were to rise, that would mean that work had suddenly gotten harder, or more painful; or else, that some outside circumstance had made it more difficult to work. Having a child increases your labor costs—working now carries the opportunity cost of time not spent caring for the child. COVID increased the cost of labor, by making it suddenly dangerous just to go outside in public. That could also increase prices—you may demand a higher wage, and people do seem to have demanded higher wages after COVID. But these are two separate effects, and you can have one without the other. In fact, women typically see wage stagnation or even reduction after having kids (but men largely don’t), despite their real opportunity cost of labor having obviously greatly increased.

On an individual level, it’s not such a big mistake to equate price and cost. If you are buying something, its cost to you basically just is its price, plus a little bit of transaction cost for actually finding and buying it. But on a societal level, it makes an enormous difference. It distorts our policy priorities and can even lead to actively trying to suppress things that are beneficial—such as rising wages.

This false equivalence between price and cost seems to be at least as common among economists as it is among laypeople. Economists will often justify it on the grounds that in an ideal, perfectly competitive market the two would be in some sense equated. But of course we don’t live in that ideal market, and even if we did, they would only be proportional at the margin, not fundamentally equal across the board. It would still be obviously wrong to characterize the total value or cost of work by the price paid for it; only the last unit of effort would be priced so that marginal value equals price equals marginal cost. The first 39 hours of your work would cost you less than what you were paid, and produce more than you were paid; only that 40th hour would set the three equal.
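To make that marginal point concrete, here is a toy numerical sketch; the $20 wage and the linear effort-cost curve are made-up illustrative numbers, not anything from the text:

```python
# Toy illustration: a flat hourly wage versus a rising marginal cost of effort.
# In a competitive equilibrium only the last hour worked sets wage = marginal cost;
# every earlier hour costs the worker less than it pays.
wage = 20.0                                      # dollars per hour (illustrative)
marginal_cost = [0.5 * h for h in range(1, 41)]  # rises from $0.50 to $20.00

cheaper_hours = sum(1 for mc in marginal_cost if mc < wage)
print(cheaper_hours)                  # 39 -- hours that cost less than they pay
print(marginal_cost[-1] == wage)      # True -- only the 40th hour just breaks even
```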

Once you account for all the various market distortions in the world, there’s no particular relationship between what something costs—in terms of real effort and suffering—and its price—in monetary terms. Things can be expensive and easy, or cheap and awful. In fact, they often seem to be; for some reason, there seems to be a pattern where the most terrible, miserable jobs (e.g. coal mining) actually pay the least, and the easiest, most pleasant jobs (e.g. stock trading) pay the most. Some jobs that benefit society pay well (e.g. doctors) and others pay terribly or not at all (e.g. climate activists). Some actions that harm the world get punished (e.g. armed robbery) and others get rewarded with riches (e.g. oil drilling). In the real world, whether a job is good or bad and whether it is paid well or poorly seem to be almost unrelated.

In fact, sometimes they seem even negatively related, and we often feel tempted to “sell out” and do something destructive in order to get higher pay. This is likely due to Berkson’s paradox: If people are only willing to do jobs that are either high-paying or beneficial to humanity, then we should expect that, on average, most of the high-paying jobs people actually do won’t be beneficial to humanity. Even if the underlying correlation were zero or slightly positive, people’s refusal to do harmful low-paying work removes those jobs from our sample and leaves a negative correlation in what remains.
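Here is a minimal simulation sketch of that selection effect; the uniform draws and the 0.7 acceptance threshold are my illustrative assumptions, not anything from the post:

```python
import random

# Berkson's paradox sketch: pay and social benefit are drawn independently,
# but a job only gets done if it is high-paying OR beneficial.
random.seed(0)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

jobs = [(random.random(), random.random()) for _ in range(100_000)]  # (pay, benefit)

# No selection: pay and benefit are uncorrelated by construction.
print(round(corr([p for p, _ in jobs], [b for _, b in jobs]), 2))    # ~0.00

# Selection: keep only jobs that clear the pay bar or the benefit bar.
done = [(p, b) for p, b in jobs if p > 0.7 or b > 0.7]
print(round(corr([p for p, _ in done], [b for _, b in done]), 2))    # clearly negative
```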

I think that the best solution, ultimately, is to stop reckoning costs in money entirely. We should reckon them in happiness.

This is of course much more difficult than simply using prices; it’s not easy to say exactly how many QALY are sacrificed in the extraction of cocoa beans or the drilling of offshore oil wells. But if we actually did find a way to count them, I strongly suspect we’d find that it was far more than we ought to be willing to pay.

A very rough approximation, surely flawed but at least a start, would be to simply convert all payments into proportions of their recipient’s income. For full-time wages, this would count basically everyone the same: one hour of work, at 40 hours per week for 50 weeks per year, is precisely 0.05% of your annual income. So we could say that whatever is equivalent to your hourly wage constitutes 500 microQALY.

This automatically implies that every time a rich person pays a poor person, QALY increase, while every time a poor person pays a rich person, QALY decrease. This is not an error in the calculation. It is a fact of the universe. We ignore it only at our own peril. All wealth redistributed downward is a benefit, while all wealth redistributed upward is a harm. That benefit may cause some other harm, or that harm may be compensated by some other benefit; but they are still there.

This would also put some things in perspective. When HSBC was fined £70 million for its crimes, that can be compared against its £1.5 billion in net income; if it were an individual, it would have been hurt about 50 milliQALY, which is about what I would feel if I lost $2000. Of course, it’s not a person, and it’s not clear exactly how this loss was passed through to employees or shareholders; but that should give us at least some sense of how small that loss was for them. They probably felt it… a little.

When Trump was ordered to pay a $1.3 million settlement, based on his $2.5 billion net wealth (corresponding to roughly $125 million in annual investment income), that cost him about 10 milliQALY; for me that would be about $500.

At the other extreme, if someone goes from making $1 per day to making $1.50 per day, that’s a 50% increase in their income—500 milliQALY per year.
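A minimal sketch of this income-fraction rule, reproducing the examples above; the helper name and the unit conversions are mine, and the rule itself is only the rough approximation described earlier:

```python
# Income-fraction rule: value a payment as the fraction of the recipient's
# annual income it represents, counting one full year of income as 1 QALY.

def qaly_impact(amount, annual_income):
    """Approximate QALY change from gaining (or losing) `amount`."""
    return amount / annual_income

# One hour of full-time work: 1 part in (40 hours/week * 50 weeks) of annual income.
print(qaly_impact(1, 40 * 50) * 1e6)        # 500.0 microQALY

# HSBC: a 70 million pound fine against 1.5 billion pounds of net income.
print(qaly_impact(70e6, 1.5e9) * 1e3)       # ~46.7 milliQALY (roughly 50)

# Trump: a $1.3 million settlement against ~$125 million in annual investment income.
print(qaly_impact(1.3e6, 125e6) * 1e3)      # ~10.4 milliQALY

# Going from $1/day to $1.50/day: a 50% gain relative to current income.
print(qaly_impact(0.50 * 365, 1.00 * 365) * 1e3)  # 500.0 milliQALY
```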

For those who have no income at all, this becomes even trickier; for them I think we should probably use their annual consumption, since everyone needs to eat and that costs something, though likely not very much. Or we could try to measure their happiness directly, trying to determine how much it hurts to not eat enough and work all day in sweltering heat.

Properly shifting this whole cultural norm will take a long time. For now, I leave you with this: Any time you see a monetary figure, ask yourself: “How much is that worth to them?” The world will seem quite different once you get in the habit of that.

What is it with EA and AI?

Jan 1 JDN 2459946

Surprisingly, most Effective Altruism (EA) leaders don’t seem to think that poverty alleviation should be our top priority. Most of them seem especially concerned about long-term existential risk, focusing on causes such as artificial intelligence (AI) safety and biosecurity. I’m not going to say that these things aren’t important—they certainly are important—but here are a few reasons I’m skeptical that they are really the most important, in the way that so many EA leaders seem to think.

1. We don’t actually know how to make much progress at them, and there’s only so much we can learn by investing heavily in basic research on them. Whereas, with poverty, the easy, obvious answer turns out empirically to be extremely effective: Give them money.

2. While it’s easy to multiply out huge numbers of potential future people in your calculations of existential risk (and this is precisely what people do when arguing that AI safety should be a top priority), this clearly isn’t actually a good way to make real-world decisions. We simply don’t know enough about the distant future of humanity to be able to make any kind of good judgments about what will or won’t increase their odds of survival. You’re basically just making up numbers. You’re taking tiny probabilities of things you know nothing about and multiplying them by ludicrously huge payoffs; it’s basically the secular rationalist equivalent of Pascal’s Wager.

3. AI and biosecurity are high-tech, futuristic topics, which seem targeted to appeal to the sensibilities of a movement that is still very dominated by intelligent, nerdy, mildly autistic, rich young White men. (Note that I say this as someone who very much fits this stereotype. I’m queer, not extremely rich and not entirely White, but otherwise, yes.) Somehow I suspect that if we asked a lot of poor Black women how important it is to slightly improve our understanding of AI versus giving money to feed children in Africa, we might get a different answer.

4. Poverty eradication is often characterized as a “short term” project, contrasted with AI safety as a “long term” project. This is (ironically) very short-sighted. Eradication of poverty isn’t just about feeding children today. It’s about making a world where those children grow up to be leaders and entrepreneurs and researchers themselves. The positive externalities of economic development are staggering. It is really not much of an exaggeration to say that fascism is a consequence of poverty and unemployment.

5. Currently, the thing most Effective Altruism organizations say they need most is “talent”; how many millions of person-hours of talent are we leaving on the table by letting children starve or die of malaria?

6. Above all, existential risk can’t really be what’s motivating people here. The obvious solutions to AI safety and biosecurity are not being pursued, because they don’t fit with the vision that intelligent, nerdy, young White men have of how things should be. Namely: Ban them. If you truly believe that the most important thing to do right now is reduce the existential risk of AI and biotechnology, you should support a worldwide ban on research in artificial intelligence and biotechnology. You should want people to take all necessary action to attack and destroy institutions—especially for-profit corporations—that engage in this kind of research, because you believe that they are threatening to destroy the entire world and this is the most important thing, more important than saving people from starvation and disease. I think this is really the knock-down argument; when people say they think that AI safety is the most important thing but they don’t want Google and Facebook to be immediately shut down, they are either confused or lying. Honestly I think maybe Google and Facebook should be immediately shut down for AI safety reasons (as well as privacy and antitrust reasons!), and I don’t think AI safety is yet the most important thing.

Why aren’t people doing that? Because they aren’t actually trying to reduce existential risk. They just think AI and biotechnology are really interesting, fascinating topics and they want to do research on them. And I agree with that, actually—but then they need to stop telling people that they’re fighting to save the world, because they obviously aren’t. If the danger were anything like what they say it is, we should be halting all research on these topics immediately, except perhaps for a very select few people who are entrusted with keeping these forbidden secrets and trying to find ways to protect us from them. This may sound radical and extreme, but it is not unprecedented: This is how we handle nuclear weapons, which are universally recognized as a global existential risk. If AI is really as dangerous as nukes, we should be regulating it like nukes. I think that in principle it could be that dangerous, and may be that dangerous someday—but it isn’t yet. And if we don’t want it to get that dangerous, we don’t need more AI researchers, we need more regulations that stop people from doing harmful AI research! If you are doing AI research and it isn’t directly involved specifically in AI safety, you aren’t saving the world—you’re one of the people dragging us closer to the cliff! Anything that could make AI smarter but doesn’t also make it safer is dangerous. And this is clearly true of the vast majority of AI research, and frankly to me seems to also be true of the vast majority of research at AI safety institutes like the Machine Intelligence Research Institute.

Seriously, look through MIRI’s research agenda: It’s mostly incredibly abstract and seems completely beside the point when it comes to preventing AI from taking control of weapons or governments. It’s all about formalizing Bayesian induction. Thanks to you, Skynet can have a formally computable approximation to logical induction! Truly we are saved. Only two of their papers, on “Corrigibility” and “AI Ethics”, actually struck me as at all relevant to making AI safer. The rest is largely abstract mathematics that is almost literally navel-gazing—it’s all about self-reference. Eliezer Yudkowsky finds self-reference fascinating and has somehow convinced an entire community that it’s the most important thing in the world. (I actually find some of it fascinating too, especially the paper on “Functional Decision Theory”, which I think gets at some deep insights into things like why we have emotions. But I don’t see how it’s going to save the world from AI.)

Don’t get me wrong: AI also has enormous potential benefits, and this is a reason we may not want to ban it. But if you really believe that there is a 10% chance that AI will wipe out humanity by 2100, then get out your pitchforks and your EMP generators, because it’s time for the Butlerian Jihad. A 10% chance of destroying all humanity is an utterly unacceptable risk for any conceivable benefit. Better that we consign ourselves to living as we did in the Neolithic than risk something like that. (And a globally-enforced ban on AI isn’t even that; it’s more like “We must live as we did in the 1950s.” How would we survive!?) If you don’t want AI banned, maybe ask yourself whether you really believe the risk is that high—or are human brains just really bad at dealing with small probabilities?

I think what’s really happening here is that we have a bunch of guys (and yes, the EA community, and especially its AI-safety corner, is overwhelmingly male) who are really good at math and want to save the world, and have thus convinced themselves that being really good at math is how you save the world. But it isn’t. The world is much messier than that. In fact, there may not be much that most of us can do to contribute to saving the world; our best options may in fact be to donate money, vote well, and advocate for good causes.

Let me speak Bayesian for a moment: The prior probability that you—yes, you, out of all the billions of people in the world—are uniquely positioned to save it by being so smart is extremely small. It’s far more likely that the world will be saved—or doomed—by people who have power. If you are not the head of state of a large country or the CEO of a major multinational corporation, I’m sorry; you probably just aren’t in a position to save the world from AI.

But you can give some money to GiveWell, so maybe do that instead?

Charity shouldn’t end at home

It so happens that this week’s post will go live on Christmas Day. I always try to do some kind of holiday-themed post around this time of year, because not only Christmas, but a dozen other holidays from various religions all fall around this time of year. The winter solstice seems to be a very popular time for holidays, and has been since antiquity: The Romans were celebrating Saturnalia 2000 years ago. Most of our ‘Christmas’ traditions are actually derived from Yuletide.

These holidays certainly mean many different things to different people, but charity and generosity are themes that are very common across a lot of them. Gift-giving has been part of the season since at least Saturnalia and remains as vital as ever today. Most of those gifts are given to our friends and loved ones, but a substantial fraction of people also give to strangers in the form of charitable donations: November and December have the highest rates of donation to charity in the US and the UK, with about 35-40% of people donating during this season. (Of course this is complicated by the fact that December 31 is often the day with the most donations, probably from people trying to finish out their tax year with a larger deduction.)

My goal today is to make you one of those donors. There is a common saying, often attributed to the Bible but not actually present in it: “Charity begins at home”.

Perhaps this is so. There’s certainly something questionable about the Effective Altruism strategy of “earning to give” if it involves abusing and exploiting the people around you in order to make more money that you then donate to worthy causes. Certainly we should be kind and compassionate to those around us, and it makes sense for us to prioritize those close to us over strangers we have never met. But while charity may begin at home, it must not end at home.

There are so many global problems that could benefit from additional donations. While global poverty has been rapidly declining in the early 21st century, this is largely because of the efforts of donors and nonprofit organizations. Official Development Assistance has been roughly constant since the 1970s at 0.3% of GNI among First World countries—well below international targets set decades ago. Total development aid is around $160 billion per year, while private donations from the United States alone are over $480 billion. Moreover, 9% of the world’s population still lives in extreme poverty, and this rate has actually slightly increased over the last few years due to COVID.

There are plenty of other worthy causes you could give to aside from poverty eradication, from issues that have been with us since the dawn of human civilization (the Humane Society International for domestic animal welfare, the World Wildlife Fund for wildlife conservation) to exotic fat-tail sci-fi risks that are only emerging in our own lifetimes (the Machine Intelligence Research Institute for AI safety, the International Federation of Biosafety Associations for biosecurity, the Union of Concerned Scientists for climate change and nuclear safety). You could fight poverty directly through organizations like UNICEF or GiveDirectly, fight neglected diseases through the Schistosomiasis Control Initiative or the Against Malaria Foundation, or entrust an organization like GiveWell to optimize your donations for you, sending them where they think they are needed most. You could give to political causes supporting civil liberties (the American Civil Liberties Union) or protecting the rights of people of color (the National Association for the Advancement of Colored People) or LGBT people (the Human Rights Campaign).

I could spend a lot of time and effort trying to figure out the optimal way to divide up your donations and give them to causes such as these—and then convincing you that it’s really the right one. (And there is even a time and place for that, because seemingly-small differences can matter a lot in this.) But instead I think I’m just going to ask you to pick something. Give something to an international charity with a good track record.

I think we worry far too much about what is the best way to give—especially people in the Effective Altruism community, of which I’m sort of a marginal member—when the biggest thing the world really needs right now is just more people giving more. It’s true, there are lots of worthless or even counter-productive charities out there: Please, please do not give to the Salvation Army. (And think twice before donating to your own church; if you want to support your own community, okay, go ahead. But if you want to make the world better, there are much better places to put your money.)

But above all, give something. Or if you already give, give more. Most people don’t give at all, and most people who give don’t give enough.

If I had a trillion dollars…

May 29 JDN 2459729

(To the tune of “If I had a million dollars” by Barenaked Ladies; by the way, he does now)

[Inspired by the book How to Spend a Trillion Dollars]

If I had a trillion dollars… if I had a trillion dollars!

I’d buy everyone a house—and yes, I mean, every homeless American.

[500,000 homeless households * $300,000 median home price = $150 billion]

If I had a trillion dollars… if I had a trillion dollars!

I’d give to the extreme poor—and then there would be no extreme poor!

[Global poverty gap: $160 billion]

If I had a trillion dollars… if I had a trillion dollars!

I’d send people to Mars—hey, maybe we’d find some alien life!

[Estimated cost of manned Mars mission: $100 billion]

If I had a trillion dollars… if I had a trillion dollars!

I’d build us a Moon base—haven’t you always wanted a Moon base?

[Estimated cost of a permanent Lunar base: $35 billion. NASA is bad at forecasting cost, so let’s allow cost overruns to take us to $100 billion.]

If I had a trillion dollars… if I had a trillion dollars!

I’d build a new particle accelerator—let’s finally figure out dark matter!

[Cost of planned new accelerator at CERN: $24 billion. Let’s do 4 times bigger and make it $100 billion.]

If I had a trillion dollars… if I had a trillion dollars!

I’d save the Amazon—pay all the ranchers to do something else!

[Brazil, where 90% of Amazon cattle ranching is, produces about 10 million tons of beef per year, which at an average price of $5000 per ton is $50 billion. So I could pay all the farmers two years of revenue to protect the Amazon instead of destroying it for $100 billion.]

If I had a trillion dollars…

We wouldn’t have to drive anymore!

If I had a trillion dollars…

We’d build high-speed rail—it won’t cost more!

[Cost of proposed high-speed rail system: $240 billion]

If I had a trillion dollars… if I had a trillion dollars!

Hey wait, I could get it from a carbon tax!

[Even a moderate carbon tax could raise $1 trillion in 10 years.]

If I had a trillion dollars… I’d save the world….

All of the above really could be done for under $1 trillion. (Some of them would need to be repeated, so we could call it $1 trillion per year.)
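For what it’s worth, the bracketed estimates above really do come in under the headline figure; a quick tally (the labels are my paraphrases of the verses):

```python
# Tallying the bracketed estimates from the verses above (billions of dollars).
items = {
    "houses for every homeless American household": 150,
    "close the global poverty gap": 160,
    "crewed Mars mission": 100,
    "permanent Moon base (with generous overruns)": 100,
    "bigger particle accelerator": 100,
    "pay Amazon ranchers to conserve instead": 100,
    "high-speed rail": 240,
}
total = sum(items.values())
print(f"Total: ${total} billion")  # Total: $950 billion -- under $1 trillion
```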

I, of course, do not, and will almost certainly never have, anything approaching $1 trillion.

But here’s the thing: There are people who do.

Elon Musk and Jeff Bezos together have a staggering $350 billion. That’s two people with enough money to end world hunger. And don’t give me that old excuse that it’s not in cash: UNICEF gladly accepts donations in stock. They could, right now, give their stocks to UNICEF and thereby end world hunger. They are choosing not to do that. In fact, the goodwill generated by giving, say, half their stocks to UNICEF might actually result in enough people buying into their companies that their stock prices would rise enough to make up the difference—thus costing them literally nothing.

The total net wealth of all the world’s billionaires is a mind-boggling $12.7 trillion. That’s more than half a year of US GDP. Held by just over 2600 people—a small town.

The US government spends $4 trillion in a normal year—and $5 trillion the last couple of years due to the pandemic. Nearly $1 trillion of that is military spending, which could be cut in half and still be the highest in the world. After seeing how pathetic Russia’s army actually is in battle (they paint Zs on their tanks because apparently their IFF system is useless!), are we really still scared of them? Do we really need eleven carrier battle groups?

Yes, the total cost of mitigating climate change is probably in the tens of trillions—but the cost of not mitigating climate change could be over $100 trillion. And it’s not as if the world can’t come up with tens of trillions; we already do. World GDP is now over $100 trillion per year; just 2% of that for 10 years is $20 trillion.

Do these sound like good ideas to you? Would you want to do them? I think most people would want most of them. So now the question becomes: Why aren’t we doing them?