What would a world without poverty look like?

Mar 22 JDN 2461122

In my previous post I reflected on the ways that conventional measures of poverty are inadequate—and on how a richer understanding of poverty suggests that it is far more ubiquitous than those measures indicate.

In this post, I will ask: Given this richer understanding of poverty, what would a world without poverty look like? Is it something we can realistically hope to achieve?

In techno-utopian circles (looking at you again, Scott Alexander), it is common to speak of “post-scarcity”: A world where there is no poverty because resources are effectively unlimited.

I don’t think that’s possible.

Not for humans as we know them. Perhaps in a future where greed is a recognized and treatable psychiatric disorder, we could genuinely have an economy where people really just take whatever they want and it works out because nobody wants an unreasonable amount.

But the fact that there are people with hundreds of billions of dollars tells me that among humans as we know them, some people’s greed is just literally insatiable. Give them a moon and they’ll demand a planet; give them a planet and they’ll demand a solar system. Whatever they are getting out of more wealth (status? power? the dopamine hit of number go up?), they’re never going to stop getting it from even more wealth, no matter how much we give them. For if they were going to stop at a reasonable amount, they would have stopped four orders of magnitude ago.

So let’s try to imagine what a world would look like if it really had no poverty, but not by somehow producing such staggering amounts of wealth that everyone could literally take whatever they want.

I think the key is that it would require all basic material needs to be met.

Everyone would have, at minimum:

  • Clean air to breathe
  • Clean water to drink
  • Nutritious food to eat
  • Shelter from the elements
  • Security against theft and violence
  • Personal liberty and political representation
  • A basic education
  • A basic standard of healthcare

(I will note that these resonate quite closely with the UN Universal Declaration of Human Rights.)

Some of these needs can probably never be completely satisfied—there is an inherent tension between liberty and security which requires us to balance them against each other. A society with zero crime is a horrific totalitarian police state; a society with complete liberty is an equally horrific Hobbesian nightmare. But we have achieved, in most of the First World at least, a reasonable standard of security along with a great deal of liberty, and preserving that balance should be a very high priority.

Even clean air and water would be difficult to satisfy perfectly: even if we pivot our whole economy to solar, wind, and nuclear power (as we very definitely should be doing!), some amount of pollution is probably necessary just to have a functioning industrial society. So we need to establish reasonable standards for what amounts of pollution exposure are safe, and effective mechanisms for ensuring that people are not exposed to pollution outside those standards—we have largely done the former, but seriously fail at the latter.

But probably the most difficult needs to satisfy are actually difficult to even define.

Just what constitutes a basic standard of education, and a basic standard of healthcare?

These seem like moving targets.

Let’s start with education:

Someone who is illiterate and can barely add two numbers together would be considered to have a very poor education today, but would have been completely average among peasants in the Middle Ages. Someone like me with a PhD has education well beyond what anyone had in the Middle Ages: While Oxford was already graduating doctors in the 12th century, those doctors didn’t have to write dissertations, and didn’t know nearly as much about the world as you must to earn a modern PhD. (Most of the mathematics required for an economics PhD in particular had literally not been invented yet.)

So it’s conceivable that educational standards will continue to rise over time, especially if we are able to radically improve learning via new technologies. In the most extreme case, if everyone can just download knowledge like in The Matrix, then it wouldn’t be unreasonable to expect the average person to know as much as a typical PhD today in dozens of fields.

Suppose that such technology did exist. Would it be fair to consider someone poor if they didn’t have access to it?

Yes, I think it would.

Because if it’s really cheap and easy to give breathtakingly vast knowledge on a variety of subjects to anyone instantly, then letting some people have that while others do not puts those others at a severe disadvantage in life. If you must know how to solve partial differential equations to get a job, then someone who only made it through high school algebra isn’t going to be able to find a job.

So I think what we’re really concerned about here is inequality: The education of a rich person should not be too much better than the education of a poor person, lest “meritocracy” simply reinforce the same generational inequality it was supposed to eliminate.

Now consider healthcare:

This, too, has radically improved over time. Indeed, I’m not really sure it’s fair to call Medieval doctors doctors at all; they lacked basic knowledge of human physiology and their intervention was as likely to hurt patients as to help them. Surgeons certainly existed: They knew how to amputate a gangrenous limb or suture a wound. (They did so without antiseptic, let alone anaesthetic!) But should you come to them with a fever or a headache, they would likely do you as much harm as good.

So we could imagine a world of Star Trek medicine, where you lie in a bed, get scanned for a few moments, and the doctor immediately knows what’s wrong with you and what kind of painless injection to give you to fix it.

Once again, we must ask: If you don’t have that, are you poor?

And again, I’m going to say yes.

If the technology exists to heal people this effortlessly, and some people get access to it while others do not, the latter are being allowed to suffer when their suffering could be easily alleviated.

But now we must consider: what if the technology exists, but it’s too expensive to use routinely?

Most technologies are like this when they are first invented. Over time, the technology improves (and the patents expire!) and they become cheaper and more widely available.

Unlike education, healthcare doesn’t usually confer large advantages on those who receive it—though it can, especially in a society where disabilities are not adequately accommodated.

So I think I’m prepared to allow “early adopters” of new medical technology, people who are rich enough to pay for advanced treatments before they are available to everyone—within certain limits. If some new treatment grants radically higher productivity or lifespan, then in fact I think we have a moral obligation to wait until it can be universally shared before we give it to anyone—precisely because of the risk of reinforcing generational inequality.

Once again, in our effort to define poverty, we end up returning to inequality: The rich should not be allowed to be too much healthier than the poor.

This definitely makes education and healthcare more complicated than the others.

We can pretty clearly define how much food and water a human being needs to live; we could provide that amount to everyone, and then nobody would be poor in terms of food or water.

But making nobody poor in terms of education and healthcare requires meeting a standard that may in fact increase over time, and it is no contradiction to imagine that someone living in the 31st century could be receiving better healthcare than I ever will and yet is still not receiving adequate healthcare based on the technology available.

Furthermore, that person demanding better healthcare is not being ungrateful or envious—they are quite reasonably demanding that society fairly allocate healthcare so that there aren’t some people who live in eternal youth while other people still die of old age.

Are they richer than I am? In some sense, perhaps. We could stipulate that in every material way they are better off than I am now. But there’s a treatment that could extend their life by centuries, and nobody’s giving it to them, because they can’t afford it—and that’s wrong. That makes them poor, and it makes their society unfair and unjust. It isn’t just a question of how many QALYs they have; it’s also a question of what it would cost to give them a lot more.

But with all that said, I do believe that a world without poverty is possible.

In fact, I believe that technologically we could already provide that world, if we had the political will to do so. Maybe we don’t quite have the economic output to support it worldwide, but even that is not as far off as most people seem to think.

An adequate standard of food and water, for example, we could already provide with existing food supplies. It would cost about one-eighth of Elon Musk’s wealth per year, meaning that, with good stock returns (which he most certainly gets), he could very likely afford it by himself!
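
To make that concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the roughly $780 billion net-worth figure quoted later in this post, and reads “good stock returns” as an annual return of at least 12.5%—both illustrative assumptions, not precise estimates:

```python
# Back-of-the-envelope sketch; the figures are illustrative assumptions.
wealth = 780e9             # assumed net worth (~$780 billion, cited below)
annual_cost = wealth / 8   # "one-eighth of his wealth per year"

# If portfolio returns meet or beat 1/8 = 12.5% per year, the spending is
# covered by returns alone and the principal never shrinks.
breakeven_return = annual_cost / wealth
print(f"Annual cost: ${annual_cost / 1e9:.1f} billion")   # $97.5 billion
print(f"Break-even return: {breakeven_return:.1%}")       # 12.5%
```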

Clean air for all would be harder, but we are moving in the right direction now that solar power is so cheap.

Universal liberty and security would require radical shifts in government in dozens of countries, so that one seems especially unlikely to happen any time soon—yet it is very definitely possible, and by construction only requires political change.

Universal education and healthcare would be very expensive, and most countries are too poor to really provide them on their own. They are not simply poor in money, but poor in skills: There aren’t enough doctors and teachers, and so we would need to use the ones we have to train up a new generation, and perhaps a new generation after that, before the world’s needs would really be met. (Fortunately, there are people trying to do this. But they don’t have enough resources to really achieve these goals.) So this is not a technological limitation, but it is an economic one; it will probably be at least another generation before we can solve this one.

What about universal shelter? Now there’s the rub. Even in prosperous First World countries, housing shortages and skyrocketing prices are keeping homeownership out of reach for tens of millions of people, and leaving hundreds of thousands outright homeless. We clearly do have the technology to produce enough homes, especially if we are prepared to build at high density; but the economic cost of doing so would be substantial, and our policymakers don’t seem at all willing to actually pay it. I think as long as housing is viewed as an asset one invests in rather than a good that one needs, this will continue to be the case.

The problem isn’t that we don’t have enough stuff. It’s that we are not sharing it properly.

What is poverty?

Mar 15 JDN 2461115

What is poverty? It seems like a simple question, one we should all already know the answer to; but it turns out to be surprisingly complicated.

In practice, we mainly define some amount of income or consumption that is considered a “poverty line”, and declare that everyone below that line is in poverty, while everyone above it is not.

This post is about why that doesn’t work.

The most obvious question is of course: How do we draw that line? Some absolute level, or relative to income in the rest of society? Different places do it differently.

But I have come to realize that there is actually a deeper reason why there will never be a satisfying choice of “poverty line”:

There is no specific amount of income that could ever decide whether someone is in poverty.

It’s not a question of purchasing power, prices, or inflation. It’s not something you can adjust for statistically. It’s a fundamental error in defining the concept of poverty.

The problem is this:

Human needs are not fungible.

This Less Wrong post on “Anoxistan” really opened my eyes to that: No amount of money can make up for the fact that you’re missing something you need, be it a roof over your head, food on your table, clean water to drink, or medical care—or, as in the parable, air to breathe.

The best definition of poverty, then, is something like this:

Poverty is having to struggle to meet basic human material needs.

(I specify “material” needs, because someone who is alone and unloved has unmet human needs, but it is not the responsibility of even a utopian fully automated luxury communist society to provide for those needs. They may very well be miserable, but it does not make them poor.)

Maybe—maybe—in a well-functioning market economy, we can sort of muddle through by making a list of what everyone needs, finding the prices for all those goods and services, adding that up, and declaring that the poverty line. (This is often what we actually do, in fact.) The notion would then be that, as long as you have at least that amount of money, you can probably buy all the things you need.

But this rapidly breaks down if you aren’t facing the same prices as what were used to make that aggregation—which you almost never are, because nobody is the average American living in the average American city. And it also misses the fact that security is a human need, and simply having the necessary income for now is not at all the same thing as knowing that you’ll continue to have the necessary income in the future.

One Libertarian commentator asked me: “Would you really switch places with Rockefeller if you could?”

I had to think about it: I’d be losing a lot of things, for sure. No Internet, no cell phone, no computer, no video games. The quality of my clothes might actually be worse (though my wardrobe would surely be larger). Finding vegetarian food I enjoy might actually be more of a challenge, though I could surely import it from anywhere. Worst of all, I would lose access to many medical treatments I currently depend upon: Treatment of migraines in the late 19th century was considerably worse, and treatment of depression was essentially nonexistent.

Since this is about wealth, I think we can ignore the fact that I’d be moving into a terrifyingly racist, misogynistic and homophobic society. That itself might actually be the reason I wouldn’t really want to make the switch. But you can simultaneously believe that the late 19th century was a worse time than today for everyone who wasn’t a White cisgender heterosexual man, and also that Rockefeller was much richer than you’ll ever be.

But what would I gain? Power, though I have very little interest in that. Opportunities for philanthropy, which I do care about, but they’d benefit other people more than myself. Real estate—I don’t even own my own home, and Rockefeller owned multiple mansions, including, famously, the Casements in Florida.

But above all, I would gain security. Owning an oil company would allow me to live comfortably for the rest of my life, and most likely also allow my heirs to live comfortably for their entire lives, without me ever needing to work another day. I could still take jobs if I wanted them, but no employer would ever have any power over me. If I was unhappy at a job, I could just leave. If I wanted to spend a month, or a year, or a decade, without working at all, I could just do that. That is what it means to be rich. That is what Rockefeller had that I don’t think I will ever have.

The difference between being rich and being poor is security.

As long as anyone is struggling to make ends meet, poverty exists.

As long as anyone is afraid to lose their job, poverty exists.

As long as anyone is choosing not to have children because they don’t think they can afford them, poverty exists.

As long as bosses can abuse their employees and get away with it, poverty exists.

And in fact, it begins to look like poverty in the United States has not been decreasing over the last two generations, even as our per-capita GDP and median income have continued to rise and our population below “the poverty line” has fallen. (Indeed, that particular measure of “unable to afford children” has very clearly greatly increased, and is a very bad sign for our society’s future.)

This is how our economy is failing. It has given us lots more stuff, and made some things available to all that were once only available to the rich; but it has not freed us from the constant struggle to meet our basic needs, even though there are clearly plenty of resources available to do that.

How could we make job search less of a nightmare?

Mar 1 JDN 2461101

This has been my “career” for the last two years:

I search through thousands of job postings, almost none of which are actually good fits for me, despite various filters and tags on my searches—in part because the search engines simply do not contain a great deal of information that would be vital, like “LGBT friendly”, “supportive of neurodivergent employees”, or “good at accommodating disabilities”. Instead it’s all sorted by “job title”, which at this point is clearly an arms race of search-engine optimization, because I keep getting listings called “tutor” which are actually some sort of interactive training of yet another large language model nobody actually needs. (Actual tutoring of actual human students often is a good fit for me—though it pays much better if you’re freelance than if you work for a company, because the companies take a huge cut of what the customers pay.)

But, after an hour or two of searching, I find a few that seem like they might be worth applying to. They’re never a perfect fit, but beggars can’t be choosers, so I decide I’ll go ahead and apply to them.

They ask for a resume. No problem. Perfectly sensible, I have one handy; maybe I’ll tweak it a bit, but if it’s an industry I often apply to, I may already have a tweaked version ready to go.

They ask for a cover letter. Okay, I guess. There usually isn’t much I can really say there that isn’t already in my resume, but occasionally there’s something worth adding, and it’s only maybe half an hour of work to update an existing cover letter for a new application.

Then, they ask me to input my work history in their proprietary format on their website. WHAT!? WHY!? I just gave you a resume! You aren’t even willing to read it? You want to be able to automate the reading of my resume, so I have to enter it into your proprietary database? But okay, fine; beggars can’t be choosers, I remind myself. So I enter everything that’s in my resume again.

Then, they ask me what salary I want. I know this game. You’re trying to make me reveal my preference in this bargaining game so you can gain bargaining power. So I look up what kind of salaries companies like them usually offer for jobs like this, and then I hike it up a bit as the opening bid in a negotiation.

Then, they ask me to fill out some questions that are supposed to assess… something. Some kind of personality test, or “culture fit”, or something similarly fuzzy. I try to interpolate my answers between my genuine feelings and the kind of hyper-obedient corporate drone they’re probably looking for, because I’m not an idiot who would answer honestly (I’m not that autistic), but I wouldn’t actually want to work for anyone who required the very topmost corporate-drone answers.

And then, what happens?

Absolutely nothing.

No response. Weeks pass. At some point, I have to assume that they’ve filled the position or closed it, or maybe that the vacancy was never real at all and they posted it for some other reason—likely to give some sense of searching when they in fact already have someone in mind. (Apparently over a third of online job postings are fake.)

I have done this process over two hundred times.

And in doing so, I have chipped off pieces of my soul. I feel like a shell of the person I was. And I have absolutely nothing to show for it all.

I am not even unusual in this regard: Recruiters often complain that they are swamped because they get 200 applicants per posting—but that means, mathematically, that an average job-seeker must apply to 200 postings before they can expect to get hired. (And which is more work, do you think: Writing a cover letter, or reading one?)
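
That arithmetic is just the mean of a geometric distribution—a minimal sketch, under the (admittedly unrealistic) assumption that every applicant is equally likely to be hired:

$$p = \frac{1}{200}, \qquad \mathbb{E}[\text{applications until hired}] = \frac{1}{p} = 200.$$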

How could we make this better?

There are a lot of problems to fix here, but I have one very simple intervention that would only slightly inconvenience recruiters, while making life dramatically better for applicants. Here goes:

Require them to show you the resume of the person they actually hired.

There should be a time window: Maybe 30 days after you applied; or if it’s a position like in academia where they don’t do interviews for a long time after the application deadline, within 7 days of them starting interviews.

Anonymize the resume appropriately, of course; no photos, no names, no contact information. We don’t want the new hire to get harassed by their competitors. (And this takes, what, 5 minutes to do?)

But having to send that resume solves several problems simultaneously:

  1. It means they have to actually respond—they cannot ghost you. It can be a two-line form letter email with a one-page attachment that’s the same for all 200 applicants—but they have to send you something.
  2. It means they have to actually hire someone—the posting cannot be completely fake. If they are for some reason unable to fill the vacancy and have to close it, they should have to tell you that, and give a reason—and that reason should be legally binding such that if you ever find out it’s not true, you can sue them.
  3. It means that person had to actually apply—they couldn’t have been someone’s nephew who was automatically given the job and the posting was only made to make it look like there was a hiring process. At the very least, said nephew had to actually cough up a resume like the rest of us.
  4. It allows you to compare qualifications—you can see how you stack up against the new hire. If they are genuinely far more qualified? Well, fair enough; perhaps this job was a stretch for you, or it’s a very rough market. If they are about as qualified, or better in some ways, worse in others? Well, you surely were right to apply, but you can’t win ’em all. But if they are far less qualified? You now have the basis for a lawsuit, because that looks like nepotism at best and discrimination at worst—and they had to give you that evidence, in writing, in a timely fashion.

The penalty for failing to comply with this regulation could be a small fine, perhaps $100—per applicant. The more people you ghost, the more you have to pay up.

This is clearly a very small amount of extra effort for the recruiters. They already have the resume—hopefully—and all they need to do is anonymize it, grab a standard form letter rejection email, BCC all the applicants to this position (which are—again, hopefully—already stored in one place in the company’s database), attach the anonymized resume, and click Send. We’re talking 15 minutes of work here, regardless of the number of applicants. In fact, it could probably be automated so as to require almost zero marginal effort for each new job: Just check the box next to the name of the person who was hired in the applicant tracking system, and it does the rest. (And if the person you hired wasn’t in the applicant tracking system? That sounds like a you problem, because you’re clearly not treating the other applicants fairly.)
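
To gauge just how little work this would be, here is a minimal sketch of what that automation might look like. Everything in it—the Applicant record, the anonymize step, the send_email hook—is a hypothetical stand-in of my own, not any real applicant-tracking system’s API:

```python
# Hypothetical sketch of the "one checkbox" rejection mailer described above.
# All names here are illustrative stand-ins, not a real ATS API.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    email: str
    resume_text: str

def anonymize(resume_text: str, name: str) -> str:
    # Real anonymization would also strip contact info and photos;
    # this placeholder just removes the name.
    return resume_text.replace(name, "[REDACTED]")

def notify_applicants(applicants: list[Applicant], hired: Applicant,
                      send_email) -> None:
    """Send every non-hired applicant the same form letter plus the
    anonymized resume of the person who was actually hired."""
    attachment = anonymize(hired.resume_text, hired.name)
    body = ("Thank you for applying. The position has been filled; "
            "the anonymized resume of the successful candidate is attached.")
    for a in applicants:
        if a is not hired:
            send_email(to=a.email, body=body, attachment=attachment)

if __name__ == "__main__":
    # Demo with a print-based stand-in for the email system.
    alice = Applicant("Alice", "alice@example.com", "Alice's resume...")
    bob = Applicant("Bob", "bob@example.com", "Bob's resume...")
    fake_send = lambda to, body, attachment: print(f"-> {to}: {body[:40]}...")
    notify_applicants([alice, bob], hired=alice, send_email=fake_send)
```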

Love in a godless universe

Feb 15 JDN 2461087

This post will go live just after Valentine’s Day, so I thought I would write this week about love.

(Of course I’ve written about love before, often around this time of year.)

Many religions teach that love is a gift from God, perhaps the greatest of all such gifts; indeed, some even say “God is love” (though I confess I have never been entirely sure what that sentence is intended to mean). But if there is no God, what is love? Does it still have meaning?

I believe that it does.

Yes, there is a cynical account of love often associated with atheism, which is that it is “just a chemical reaction” or “just an evolved behavior”. (An easy way to spot this sort of cynical account is to look for the word “just”.)

Well, if love is a chemical reaction, so is consciousness—indeed the two seem very deeply related. I suppose a being can be conscious without being capable of love (do psychopaths qualify?), but I certainly do not think a being can be capable of love without being conscious.

Indeed, I contend that once you really internalize the Basic Fact of Cognitive Science, “just a chemical reaction” strikes you as an utterly trivial claim: What isn’t a chemical reaction? That’s just a funny way of saying something exists.

What about being an evolved behavior? Yes, this is a much more insightful account of what love is, what it means—what it’s for, even. It evolved to make us find mates, protect offspring, and cooperate in groups.

And I can hear the response coming: “Is that all?” “Is it just that?” (There’s that “just” again.)

So let me try phrasing it another way:

Love is what makes us human.

If there is one thing that human beings are better at than anything in the known universe, one thing that most absolutely characterizes who and what we are, it is love.

Intelligence? Rationality? Reasoning? Oh, sure, for the first half-million years of our existence, we were definitely on top; but now, I think computers have got us beat on those. (I guess it’s hard to say for sure if Claude is truly intelligent, but I can tell you this: Wolfram Alpha is a lot better at calculus than I’ll ever be, and I will never win a game of Go against AlphaZero.)

Strength? Ridiculous! By megafauna standards—even ape standards—we’re pathetic. Speed? Not terrible, but of course the cheetahs and peregrine falcons have us beat. Endurance? We’re near the top, but so are several other species—including horses, which we’ve made good use of. Durability? Also surprisingly good—we’re tougher than we look—but we still can’t hold a candle to a pachyderm. (You need special guns to kill an elephant, because most standard bullets barely pierce their skin. And standard bullets were, more or less by construction, designed to kill humans.) We do throw exceptionally well, so if you’d like, you can say that the essence of humanity is javelin-throwing—or perhaps baseball.

But no, I think it is love that sets us apart.

Not that other animals are incapable of love; far from it. Almost all mammals and birds express love to their offspring and often their partners; I would not even be sure that reptiles, fish, or amphibians are incapable of love, though their behavior is less consistently affectionate and I am thus less certain about it. (Especially when fish eat their own offspring!) In fact, I might even be prepared to say that bees feel love for their sisters and their mother (the queen). And if insects can feel it, then our world is absolutely teeming with love.

But what sets humans apart, even from other mammals, is the scale at which we are able to love. We are able to love a city, a nation, a culture. We are even able to love ideas.

I do not think this is just a metaphor. (There’s that “just” again!) I would as surely die for democracy as I would to save the life of my spouse. That love is real. It is meaningful. It is important.

Humans feel love for other humans they have never met who live thousands of miles away from them. They will even willingly accept harm to themselves to benefit those others (e.g. by donating to international charities); one can argue that most people do not do this enough, but people do actually do it, and it is difficult to explain why they would were it not for genuine feelings of caring toward people they have never met and most likely never will.

And without this, all of what we know as “human civilization” quite simply could not exist. Without our love for our countrymen, for our culture, for our shared ethical and political principles, we could not sustain these grand nation-states that span the world.

Yes, even despite our often fierce disagreements, there must be a core of solidarity between at least enough people to sustain a nation. Even authoritarian governments cannot sustain themselves when the entire population stops loving them—in fact, they seem to fail at the hands of a sufficiently well-organized four percent. (Honestly, perhaps the worst part about fascist states is that many of their people do love them, all too deeply!)

More than that, without love, we could never have created institutions like science, art, and journalism that slowly but surely accumulate knowledge that is shared with the whole of humanity. The march of progress has been slower and more fitful than I think anyone would like; but it is real, nonetheless, and in the long run humanity’s trajectory still seems to be toward a brighter future—and it is love that makes it so.

It is sometimes said that you should stop caring what other people think—but caring what other people think is what makes us human. Sure, there are bad forms of social pressure; but a person who literally does not care how their actions make other people think and feel is what we call a psychopath. Part of what it means to love someone is to care a great deal what they think. And part of what makes a good person is to have the capacity to love as much as possible.

Love binds us together not only as families, but as nations, and—hopefully, one day—it could bind humanity or even all sentient life as one united whole. Morality is a deep and complicated subject, but if you must start somewhere very simple in understanding it, you could do much worse than to start with love.

It is often said that God is what binds cultures, nations, and humanity together. With this in mind, perhaps I am prepared to assent to “God is love” after all, but let me clarify what I would mean by it:

Love does for us what people thought they needed God for.

Productivity by itself does not eliminate poverty

Jan 25 JDN 2461066

Scott Alexander has a techno-utopian vision:

Between the vast ocean of total annihilation and the vast continent of infinite post-scarcity, there is, I admit, a tiny shoreline of possibilities that end in oligarch capture. Even if you end up there, you’ll be fine. Dario Amodei has taken the Giving What We Can Pledge (#43 here) to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies. Now you can stop worrying about the permanent underclass and focus on more important things.

I agree that total annihilation is a very serious risk, though fortunately I believe it is not the most likely outcome. But it seems pretty weird to me to posit that the most likely outcome is “infinite post-scarcity” when oligarch capture is what we already have.

(Regarding Alexander’s specific example: Dario Amodei has $3.7 billion. If he were to give away 10% of that, it would be $370 million, which would be good, but hardly usher in a radical utopia. The assumption seems to be that he would be one of the prevailing trillionaire oligarchs, and I don’t see how we can know that would be the case. Even if AI succeeds in general, that doesn’t mean that every company that makes AI succeeds. (Video games succeeded, but who buys Atari anymore?) Also, it seems especially wide-eyed to imagine that one man would ever own entire galaxies. We probably won’t even ever be able to reach other galaxies!)

People with this sort of utopian vision seem to imagine that all we need to do is make more stuff, and then magically it will all be distributed in such a way that everyone gets to have enough.

If Alexander were writing 200 years ago, I could even understand why he’d think that; there genuinely wasn’t enough stuff to go around, and it would have made sense to think that all we needed to do was solve that problem, and then the other problems would be easy.

But we no longer live in that world.

There is enough stuff to go around—at the very least this is true of all highly-developed countries, and it’s honestly pretty much true of the world as a whole. The problem is very much that it isn’t going around.

Elon Musk’s net wealth is now estimated at over $780 billion. Seven hundred and eighty billion dollars. He could give $90 to every person in the world (all 8.3 billion of us). He could buy a home (median price $400,000—way higher than it was just a few years ago) for every homeless person in America (about 750,000 people) and still have half his wealth left over. He could give $900 to every single person of the 831 million people who live below the world extreme poverty threshold—thus eliminating extreme poverty in the world for a year. (And quite possibly longer, as all those people are likely to be more productive now that they are well-fed.) He has chosen to do none of these things, because he wants to see number go up.
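
Those claims are easy to verify; here is a quick check in Python, using only the figures quoted in the paragraph above:

```python
# Quick check of the figures quoted above (all in US dollars).
wealth = 780e9                  # estimated net worth
world_pop = 8.3e9               # world population
homeless_us = 750_000           # homeless people in America
median_home = 400_000           # median US home price
extreme_poor = 831e6            # people below the extreme poverty line

print(wealth / world_pop)                  # ~$94 each: "about $90"
homes_cost = homeless_us * median_home     # $300 billion for the homes
print(wealth - homes_cost >= wealth / 2)   # True: over half left over
print(extreme_poor * 900 <= wealth)        # True: $900 each is ~$748 billion
```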

That’s just one man. If you add up all the wealth of all the world’s billionaires—just billionaires, so we’re not even counting people with $50 million or $100 million or $500 million—it totals over $16 trillion. This is enough to not simply end extreme poverty for a year, but to establish a fund that would end it forever.
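
The “forever” part is simple perpetuity arithmetic—as a sketch, assuming (my assumption, not anything stated above) a long-run real return of 5% on the fund:

$$0.05 \times \$16\,\text{trillion} = \$800\,\text{billion per year},$$

comfortably above the roughly $750 billion per year that the previous paragraph’s figures imply it would cost to keep everyone above the extreme poverty line.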

And don’t tell me that they can’t really do this because it’s all tied up in stocks and not liquid. UNICEF happily accepts donations in stock. Giving UNICEF $10 trillion in stocks absolutely would permanently end extreme poverty worldwide. And they could donate those stocks today. They are choosing not to.

I still think that AI is a bubble that’s going to burst and trigger a financial crisis. But there is some chance that AI actually does become a revolutionary new technology that radically increases productivity. (In fact, I think this will happen, eventually. I just think we’re a paradigm or two away from that, and LLMs are largely a dead end.)

But even if that happens, unless we have had radical changes in our economy and society, it will not usher in a new utopian era of plenty for all.

How do I know this? Because if that were what the powers that be wanted to happen, they would have already started doing it. The super-rich are now so absurdly wealthy that they could easily effect great reductions in poverty at home and abroad while costing themselves basically nothing in terms of real standard of living, but they are choosing not to do that. And our governments could be taxing them more and using those funds to help people, and they are by and large choosing not to do that either.

The notion seems to be similar to “trickle-down economics”: Once the rich get rich enough, they’ll finally realize that money can’t buy happiness and start giving away their vast wealth to help people. But if that didn’t happen at $100 million, or $1 billion, or $10 billion, or $100 billion, I see no reason to think that it will happen at $1 trillion or $10 trillion or even $100 trillion.

The confidence game

Dec 14 JDN 2461024

Our society rewards confidence. Indeed, it seems to do so without limit: The more confident you are, the more successful you will be, the more prestige you will gain, the more power you will have, the more money you will make. It doesn’t seem to matter whether your confidence is justified; there is no punishment for overconfidence and no reward for humility.

If you doubt this, I give you Exhibit A: President Donald Trump.

He has nothing else going for him. He manages to epitomize almost every human vice and to lack almost every human virtue. He is ignorant, impulsive, rude, cruel, incurious, bigoted, incompetent, selfish, xenophobic, racist, and misogynist. He has no empathy, no understanding of justice, and little capacity for self-control. He cares nothing for truth and lies constantly, even to the point of pathology. He has been convicted of multiple felonies. His businesses routinely go bankrupt, and he saves his wealth mainly through fraud and lawsuits. He has publicly admitted to sexually assaulting adult women, and there is mounting evidence that he has also sexually assaulted teenage girls. He is, in short, one of the worst human beings in the world. He does not have the integrity or trustworthiness to be an assistant manager at McDonald’s, let alone President of the United States.

But he thinks he’s brilliant and competent and wise and ethical, and constantly tells everyone around him that he is—and millions of people apparently believe him.

To be fair, confidence is not the only trait that our society rewards. Sometimes it does actually reward hard work, competence, or intellect. But in fact it seems to reward these virtues less consistently than it rewards confidence. And quite frankly I’m not convinced our society rewards honesty at all; liars and frauds seem to be disproportionately represented among the successful.

This troubles me most of all because confidence is not a virtue.

There is nothing good about being confident per se. There is virtue in not being underconfident, because underconfidence prevents you from taking actions you should take. But there is just as much virtue in not being overconfident, because overconfidence makes you take actions you shouldn’t—and if anything, it is the more dangerous of the two. Yet our culture appears utterly incapable of discerning whether confidence is justifiable—even in the most blatantly obvious cases—and instead rewards everyone all the time for being as confident as they can possibly be.

In fact, the most confident people are usually less competent than the most humble people—because when you really understand something, you also understand how much you don’t understand.

We seem totally unable to tell whether someone who thinks they are right is actually right; and so, whoever thinks they are right is assumed to be right, all the time, every time.

Some of this may even be genetic, a heuristic that perhaps made more sense in our ancient environment. Even quite young children already are more willing to trust confident answers than hesitant ones, in multiple experiments.

Studies suggest that experts are just as overconfident as anyone else, but to be frank, I think this is because you don’t get to be called an expert unless you’re overconfident; people with intellectual humility are filtered out by the brutal competition of academia before they can get tenure.

I guess this is also personal for me.

I am not a confident person. Temperamentally, I just feel deeply uncomfortable going out on a limb and asserting things when I’m not entirely certain of them. I also have something of a complex about ever being perceived as arrogant or condescending, maybe because people often seem to perceive me that way even when I am actively trying to do the opposite. A lot of people seem to take you as condescending when you simply acknowledge that you have more expertise on something than they do.

I am also apparently a poster child for Impostor Syndrome. I once went to an Impostor Syndrome workshop with a couple dozen other people, where we played a bingo game of Impostor Syndrome traits and behaviors—and I won. I once went to a lecture by George Akerlof where he explained that he attributed his Nobel Prize more to luck and circumstances than any particular brilliance on his part—and I guarantee you, in the extremely unlikely event I ever win a prize like that, I’ll say the same.

Compound this with the fact that our society routinely demands confidence in situations where absolutely no one could ever justify being confident.

Consider a job interview, when they ask you: “Why are you the best candidate for this job?” I couldn’t possibly know that. No one in my position could possibly know that. I literally do not know who your other candidates are in order to compare myself to them. I can tell you why I am qualified, but that’s all I can do. I could be the best person for the job, but I have no idea if I am. It’s your job to figure that out, with all the information in front of you—and I happen to know that you’re actually terrible at it, even with all that information I don’t have access to. If I tell you I know I’m the best person for the job, I am, by construction, either wildly overconfident or lying. (And in my case, it would definitely be lying.)

In fact, if I were a hiring manager, I would probably disqualify anyone who told me they were the best person for the job—because the one thing I now know about them is that they are either overconfident or willing to lie. (But I’ll probably never be a hiring manager.)

Likewise, when pitching creative work I have often been told to explain why I am the best or only person who could bring this work to life, or to provide accurate forecasts of how much the work would sell if published. I almost certainly am not the best or only person who could do anything—only a handful of people on Earth could realistically say that they are, and they’ve all already won Oscars or Emmys or Nobel Prizes. Accurate sales forecasts for creative works are so difficult that even Disney Corporation, an ever-growing conglomerate media superpower with billions of dollars to throw at the problem and even more billions of dollars at stake in getting it right, still routinely puts out films that are financial failures.

Employers and publishers alike casually hand you impossible demands and then get mad at you when you say you can’t meet them. And then they go pick someone else who claims to be able to do the impossible.

There is some hope, however.

Some studies suggest that people can sometimes recognize and punish overconfidence—though, again, I don’t see how that can be reconciled with the success of Donald Trump. In this study of evaluating expert witnesses, the most confident witnesses were rated as slightly less reliable than the moderately-confident ones, but both were far above the least-confident ones.

Surprisingly simple interventions can make intellectual humility more salient to people, and make them more willing to trust people who express doubt—who are, almost without exception, the more trustworthy people.

But somehow, I think I have to learn to express confidence I don’t feel, because that’s how you succeed in our society.

How to be a deontological consequentialist

Dec 7 JDN 2461017

As is commonly understood, there are two main branches of normative ethics:

  • Deontology, on which morality consists in following rules and fulfilling obligations, and
  • Consequentialism, on which morality consists in maximizing good consequences.

The conflict between them has raged for centuries, with Kantians leading the deontologists and utilitarians leading the consequentialists. Both theories seem to have a lot of good points, but neither can decisively defeat the other.

I think this is because they are both basically correct.

In their strongest forms, deontology and consequentialism are mutually contradictory; but it turns out that you can soften each of them a little bit, and the results become compatible.

To make deontology a little more consequentialist, let’s ask a simple question:

What makes a rule worth following?

I contend that the best answer we have is “because following that rule would make the world better off than not following that rule”. (Even Kantians pretty much have to admit this: What maxim could you will to be an absolute law? Only a law that would yield good outcomes.)

That is, the ultimate justification of a sound deontology would be fundamentally consequentialist.

But lest the consequentialists get too smug, we can also ask them another question, which is a bit subtler:

How do you know which actions will ultimately have good consequences?

Sure, if we were omniscient beings who could perfectly predict the consequences of our actions across the entire galaxy on into the indefinite future, we could be proper act utilitarians who literally choose every single action according to a calculation of the expected utility.

But in practice, we have radical uncertainty about the long-term consequences of our actions, and can generally only predict the immediate consequences.

That leads to the next question:

Would you really want to live in a world where people optimized immediate consequences?

I contend that you would not, that such a world actually sounds like a dystopian nightmare.

Immediate consequences say that if a healthy person walks into a hospital and happens to have compatible organs for five people who need donations, we should kill that person, harvest their organs, and give them to the five patients who need them. (This is the organ transplant variant of the Trolley Problem.)

Basically everyone recognizes that this is wrong. But why is it wrong? That’s thornier. One pretty convincing case is that a systematic policy of this kind would undermine trust in hospitals and destroy the effectiveness of healthcare in general, resulting in disastrous consequences far outweighing the benefit of saving those five people. But those aren’t immediate consequences, and indeed, it’s quite difficult to predict exactly how many crazy actions like this it would take to undermine people’s trust in hospitals, just how much it would undermine that trust, or exactly what the consequences of that lost trust would be.

So it seems like it’s actually better to have a rule about this.

This makes us rule utilitarians: instead of trying to optimize literally every single action—which requires information we do not have and never will—we develop a system of rules that we can follow, heuristics that will allow us to get better outcomes generally even if they can’t be guaranteed to produce the best possible outcome in any particular case.

That is, the output of a sophisticated consequentialism is fundamentally deontological.

We have come at the question of normative ethics from two very different directions, but the results turned out basically the same:

We should follow the rules that would have the best consequences.

The output of our moral theory is rules, like deontology; but its fundamental justification is based on outcomes, like consequentialism.

In my experience, when I present this account to staunch deontologists, they are pretty much convinced by it. They’re prepared to give up the fundamental justification to consequences if it allows them to have their rules.

The resistance I get is mainly from staunch consequentialists, who insist that it’s not so difficult to optimize individual actions, and so we should just do that instead of making all these rules.

So it is those consequentialists—particularly those who say “rule utilitarianism collapses into act utilitarianism”—to whom the rest of this post is addressed.

First, let me say that I agree.

In the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents, rule utilitarianism mathematically collapses into act utilitarianism. That is a correct theorem.

However, we do not live in the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents. We are not even close to that ideal case; we will never be close to that ideal case. Indeed, I think part of the problem here is that you fail to fully grasp the depth and width of the chasm between here and there. Even a galactic civilization of a quintillion superhuman AIs would still not be close to that ideal case.

Quite frankly, humans aren’t even particularly good at forecasting what will make them happy.

There are massive errors and systematic biases in human affective forecasting.

One of the most important biases is impact bias: People systematically overestimate the impact of individual events on their long-term happiness. Some of this seems to be just due to focus: Paying attention to a particular event exaggerates its importance in your mind, and makes it harder for you to recall other events that might push your emotions in a different direction. Another component is called immune neglect: people fail to account for their own capacity to habituate to both pleasant and unpleasant experiences. (This effect is often overstated: It’s a common misconception that lottery winners are no happier than they were before. No, they absolutely are happier, on average; they’re just not as much happier as they predicted themselves to be.)

People also use inconsistent time discounting: $10 today is judged as better than $11 tomorrow, but $10 in 364 days is not regarded as better than $11 in 365 days—so if I made a decision a year ago, I’d want to change it now. (The correct answer, by the way, is to take the $11; a discount rate of 10% per day is a staggering 120,000,000,000,000,000% APR—seriously; check it yourself—so you’d better not be discounting at that rate, unless you’re literally going to die before tomorrow.)
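
Since the parenthetical invites you to check it yourself, here is that check as a short Python sketch:

```python
# Preferring $10 today over $11 tomorrow implies a discount rate of 10% per
# day. Compounding that daily rate over a 365-day year gives the APR:
apr_percent = ((1 + 0.10) ** 365 - 1) * 100
print(f"{apr_percent:,.0f}%")  # ~128,000,000,000,000,000%: the same order of magnitude as above
```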

Now, compound that with the fact that different human beings come at the world from radically different perspectives and with radically different preferences.

How good do you think we are at predicting what will make other people happy?

Damn right: We’re abysmal.

Basically everyone assumes that what they want and what they would feel is also what other people will want and feel—which, honestly, explains a lot about politics. As a result, my prediction of your feelings is more strongly correlated with my prediction of my feelings than it is with your actual feelings.

The impact bias is especially strong when forecasting other people’s feelings in response to our own actions: We tend to assume that other people care more about what we do than they actually care—and this seems to be a major source of social anxiety.

People also tend to overestimate the suffering of others, and are generally willing to endure more pain than they are willing to inflict upon others. (This one seems like it might be a good thing!)

Even when we know people well, we can still be totally blindsided by their emotional reactions. We’re just really awful at this.

Does this just mean that morality is hopeless? We have no idea what we’re doing?

Fortunately, no. Because while no individual can correctly predict or control the outcomes of particular actions, the collective action of well-designed institutions can in fact significantly improve the outcomes of policy.

This is why we have things like the following:

  • Laws
  • Courts
  • Regulations
  • Legislatures
  • Constitutions
  • Newspapers
  • Universities

These institutions—which form the backbone of liberal democracy—aren’t simply arbitrary. They are the result of hard-fought centuries, a frothing, volatile, battle-tested mix of intentional design and historical evolution.

Are these institutions optimal? Good heavens, no!

But we have no idea what optimal institutions look like, and probably never will. (Those galaxy-spanning AIs will surely have a better system than this; but even theirs probably won’t be optimal.) Instead, what we are stuck with are the best institutions we’ve come up with so far.

Moreover, we do have very clear empirical evidence at this point that some form of liberal democracy with a mixed economy is the best system we’ve got so far. One can reasonably debate whether Canada is doing better or worse than France, or whether the system in Denmark could really be scaled to the United States, or just what the best income tax rates are; but there is a large, obvious, and important difference between life in a country like Canada or Denmark and life in a country like Congo or Afghanistan.

Indeed, perhaps there is no better pair to compare than North and South Korea: Those two countries are right next to each other, speak the same language, and started in more or less the same situation; but the south got good institutions and the north got bad ones, and now the difference between them couldn’t be more stark. (Honestly, this is about as close as we’re ever likely to get to a randomized controlled experiment in macroeconomics.)

People in South Korea now live about as well as some of the happiest places in the world; their GDP per capita PPP is about $65,000 per year, roughly the same as Canada. People in North Korea live about as poorly as it is possible for humans to live, subject to totalitarian oppression and living barely above subsistence; their GDP per capita PPP is estimated to be $600 per year—less than 1% as much.

The institutions of South Korea are just that much better.

Indeed, there’s one particular aspect of good institutions that seems really important, yet is actually kind of hard to justify in act-utilitarian terms:

Why is freedom good?

A country’s level of freedom is almost perfectly correlated with its overall level of happiness and development. (Yes, even on this measure, #ScandinaviaIsBetter.)

But why? In theory, letting people do whatever they want could actually lead to really bad outcomes—and indeed, occasionally it does. There’s even a theorem that liberty is incompatible with full Pareto-efficiency. But all the countries with the happiest people seem to have a lot of liberty, and indeed the happiest ones seem to have the most. How come?

My answer:

Personal liberty is a technology for heuristic utility maximization.

In the ideal case, we wouldn’t really need personal liberty; you could just compel everyone to do whatever is optimal all the time, and that would—by construction—be optimal. It might even be sort of nice: You don’t need to make any difficult decisions, you can just follow the script and know that everything will turn out for the best.

But since we don’t know what the optimal choice is—even in really simple cases, like what you should eat for lunch tomorrow—we can’t afford to compel people in this way. (It would also be incredibly costly to implement such totalitarian control, but that doesn’t stop some governments from trying!)

Then there are disagreements: What I think is optimal may not be what you think is optimal, and in truth we’re probably both wrong (but one of us may be less wrong).

And that’s not even getting into conflicts of interest: We aren’t just lacking in rationality, we’re also lacking in benevolence. Some people are clearly much more benevolent than others, but none of us are really 100% selfless. (Sadly, I think some people are 100% selfish.)

In fact, this is a surprisingly deep question:

Would the world be better if we were selfless?

Could there actually be some advantage in aggregate to having some degree of individual self-interest?

Here are some ways that might hold, just off the top of my head:

  • Partial self-interest supports an evolutionary process of moral and intellectual development that otherwise would be stalled or overrun by psychopaths—see my post on Rousseaus and Axelrods
  • Individuals have much deeper knowledge of their own preferences than anyone else’s, and thus can optimize them much better. (Think about it: This is true even of people you know very well. Otherwise, why would we ever need to ask our spouses one of the most common questions in any marriage: “Honey, what do you want for dinner tonight?”)
  • Self-interest allows for more efficient economic incentives, and thus higher overall productivity.

Of course, total selfishness is clearly not optimal—that way lies psychopathy. But some degree of selfishness might actually be better for long-term aggregate outcomes than complete altruism, and this is to some extent an empirical question.

Personal liberty solves a lot of these problems: Since people are best at knowing their own preferences, let people figure out on their own what’s good for them. Give them the freedom to live the kind of life they want to live, within certain reasonable constraints to prevent them from causing great harm to others or suffering some kind of unrecoverable mistake.

This isn’t exactly a new idea; it’s basically the core message of John Stuart Mill’s On Liberty (which I consider a good candidate for the best book ever written—seriously, it beats the Bible by a light-year). But by putting it in more modern language, I hope to show that deontology and consequentialism aren’t really so different after all.

And indeed, for all its many and obvious flaws, freedom seems to work pretty well—at least as well as anything we’ve tried.

What we still have to be thankful for

Nov 30 JDN 2461010

This post has been written before, but will go live after, Thanksgiving.

Thanksgiving is honestly a very ambivalent holiday.

The particular event it celebrates doesn’t seem quite so charming in its historical context: Rather than finding peace and harmony with all Native Americans, the Pilgrims in fact allied with the Wampanoag against the Narragansett, though they did later join forces with the Narragansett in order to conquer the Pequot. And of course we all know how things went for most Native American nations in the long run.

Moreover, even the gathering of family comes with some major downsides, especially in a time of extreme political polarization such as this one. I won’t be joining any of my Trump-supporting relatives for dinner this year (and they probably wouldn’t have invited me anyway), but the fact that this means becoming that much more detached from a substantial part of my extended family is itself a tragedy.

This year in particular, US policy has gotten so utterly horrific that it often feels like we have nothing to be thankful for at all, that all we thought was good and just in the world could simply be torn away at a moment’s notice by raving madmen. It isn’t really quite that bad—but it feels that way sometimes.

It also felt a bit uncanny celebrating Thanksgiving a few years ago when we were living in Scotland, for the UK does not celebrate Thanksgiving, but absolutely does celebrate Black Friday: Holidays may be local, but capitalism is global.

But fall feasts of giving thanks are far more ancient than that particular event in 1621 that we have mythologized to oblivion. They appear in numerous cultures across the globe—indeed their very ubiquity may be why the Wampanoag were so willing to share one with the Pilgrims despite their cultures having diverged something like 40,000 years prior.

And I think that it is by seeing ourselves in that context—as part of the whole of humanity—that we can best appreciate what we truly do have to be thankful for, and what we truly do have to look forward to in the future.

Above all, medicine.

We have actual treatments for some diseases, even actual cures for some. By no means all, of course—and it often feels like we are fighting an endless battle even against what we can treat.

But it is worth reflecting on the fact that aside from the last few centuries, this has simply not been the case. There were no actual treatments. There was no real medicine.

Oh, sure, there were attempts at medicine; and there was certainly what we would think of as more like “first aid”: bandaging wounds, setting broken bones. Even amputation and surgery were done sometimes. But most medical treatment was useless or even outright harmful—not least because for most of history, most of it was done without anesthetic or even antiseptic!

There were various herbal remedies for various ailments, some of which actually happened to work: Willow bark genuinely helps with pain, St. John’s wort is a real antidepressant, and some traditional burn creams are surprisingly effective.

But there was no system in place for testing medicine, no way of evaluating what remedies worked and what didn’t. And thus, for every remedy that worked as advertised, there were a hundred more that did absolutely nothing, or even made things worse.

Today, it can feel like we are all chronically ill, because so many of us take so many different pills and supplements. But this is not a sign that we are ill—it is a sign that we can be treated. The pills are new, yes—but the illnesses they treat were here all along.

I don’t see any particular reason to think that Roman plebs or Medieval peasants were any less likely to get migraines than we are; but they certainly didn’t have access to sumatriptan or rimegepant. Maybe they were less likely to get diabetes, but mainly because they were much more likely to be malnourished. (Well, okay, also because they got more exercise, which we could surely use more of.) And the only reason they didn’t get Alzheimer’s was that they usually didn’t live long enough.

Looking further back, before civilization, human health actually does seem to have been better: Foragers were rarely malnourished, weren’t exposed to as many infectious pathogens, and certainly got plenty of exercise. But should a pathogen like smallpox or influenza make it to a forager tribe, the results were often utterly catastrophic.

Today, we don’t really have the sort of plague that human beings used to deal with. We have pandemics, which are also horrible, but far less so. We were horrified by losing 0.3% of our population to COVID; a society that lost only 0.3%—or even ten times that, 3%—to the Black Death would have hailed it as a miracle, for a more typical rate was 30%.

At 0.3%, most of us knew somebody, or knew somebody who knew somebody, who died from COVID. At 3%, nearly everyone would know somebody, and most would know several. At 30%, nearly everyone would have close family and friends who died.
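To make that concrete, here is a little back-of-the-envelope sketch in Python. The 150-person social circle (Dunbar's number) and the assumption that deaths strike uniformly at random are both simplifications of my own (real mortality clusters by age and geography), but the orders of magnitude are what matter:

```python
# Back-of-the-envelope: chance of personally knowing at least one victim,
# assuming a ~150-person social circle (Dunbar's number) and uniformly
# random deaths. Both are simplifying assumptions.

ACQUAINTANCES = 150

def p_know_victim(mortality_rate: float, circle: int = ACQUAINTANCES) -> float:
    """Probability that at least one person in your circle died."""
    return 1 - (1 - mortality_rate) ** circle

for label, rate in [("COVID (~0.3%)", 0.003),
                    ("Mild plague (3%)", 0.03),
                    ("Black Death (30%)", 0.30)]:
    print(f"{label}: {p_know_victim(rate):.0%}")

# COVID (~0.3%): 36%
# Mild plague (3%): 99%
# Black Death (30%): 100%
```

And at one remove (knowing somebody who knew somebody), even the 0.3% case approaches certainty, which is why nearly all of us felt COVID's losses at least indirectly.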

Then there is infant mortality.

As recently as 1950—this is living memory—the global infant mortality rate was 14.6%. This is about half what it had been historically; for most of human history, roughly a third of all children died between birth and the age of 5.

Today, it is 2.5%.

Where our distant ancestors expected two out of three of their children to survive, and our own great-grandparents expected five out of six, we can now safely expect thirty-nine out of forty to live. This is the difference between “nearly every family has lost a child” and “most families have not lost a child”.

And this is worldwide; in highly-developed countries it’s even better. The US has a relatively high infant mortality rate by the standards of highly-developed countries (indeed, are we even highly-developed, or are we becoming like Saudi Arabia, extremely rich but so unequal that it doesn’t really mean anything to most of our people?). Yet even for us, the infant mortality rate is 0.5%—so we can expect one hundred ninety-nine out of two hundred to survive. This is at the level of “most families don’t even know someone who has lost a child.”
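The same kind of arithmetic shows what those rates mean for individual families. A quick sketch, using family sizes that are purely illustrative assumptions on my part (and assuming independence between siblings, which isn't quite true):

```python
# Chance that a family loses at least one child, given per-child mortality m
# and n children. Family sizes here are illustrative assumptions, not
# figures from the post.

def p_family_loss(m: float, n: int) -> float:
    """Probability that at least one of n children dies."""
    return 1 - (1 - m) ** n

print(f"Pre-modern (m = 1/3, 5 children):     {p_family_loss(1/3, 5):.0%}")    # ~87%
print(f"1950 global (m = 14.6%, 4 children):  {p_family_loss(0.146, 4):.0%}")  # ~47%
print(f"Today, global (m = 2.5%, 2 children): {p_family_loss(0.025, 2):.0%}")  # ~5%
print(f"Today, US (m = 0.5%, 2 children):     {p_family_loss(0.005, 2):.0%}")  # ~1%
```

Roughly 87% of pre-modern families versus 5% of families today: that is the difference between grief being universal and grief being rare.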

Poverty is a bit harder to measure.

I am increasingly dubious of conventional measures of poverty; ever since compiling my Index of Necessary Expenditure, I am convinced that economists in general, and perhaps US economists in particular, are systematically underestimating the cost of living and thereby underestimating the prevalence of poverty. (I don’t think this is intentional, mind you; I just think it’s a result of using convenient but simplistic measures and not looking too closely into the details.) I think not being able to sustainably afford a roof over your head constitutes being poor—and that applies to a lot of people.

Yet even with that caveat in mind, it’s quite clear that global poverty has greatly declined in the long run.

At the “extreme poverty” level, currently defined as consuming $1.90 at purchasing power parity per day—that’s just under $700 per year, less than 2% of the median personal income in the United States—the number of people has fallen from 1.9 billion in 1990 to about 700 million today. That’s a drop from 36% of the world’s population to under 9%.
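For anyone who wants to check my arithmetic, here is the whole calculation in a few lines of Python. The US median income and world-population figures are round numbers I'm assuming for illustration, not precise citations:

```python
# Sanity-checking the extreme-poverty arithmetic. The US median income and
# the world-population figures are rough assumptions for illustration.

DAILY_LINE = 1.90            # extreme-poverty line, PPP dollars per day
US_MEDIAN_INCOME = 40_000    # rough assumption, dollars per year

annual = DAILY_LINE * 365
print(f"Annual consumption at the line: ${annual:,.0f}")             # $694
print(f"Share of US median income: {annual / US_MEDIAN_INCOME:.1%}")  # 1.7%

print(f"1990 share: {1.9e9 / 5.3e9:.0%} of ~5.3 billion people")     # 36%
print(f"Today's share: {0.7e9 / 8.1e9:.0%} of ~8.1 billion people")  # 9%
```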

Now, there are good reasons to doubt that “purchasing power parity” really can be estimated as accurately as we would like, and thus it’s not entirely clear that people living on “$2 per day PPP” are really living at less than 2% the standard of living of a typical American (honestly, to me that just sounds like… dead); but they are definitely living at a much worse standard of living, and there are a lot fewer people living at such a low standard today than there were not all that long ago. These are people who don’t have reliable food, clean water, or even basic medicine—and that used to include over a third of humanity, and no longer does. (And I would like to note that actually finding such a person and giving them a few hundred dollars absolutely would change their life, and this is the sort of thing GiveDirectly does. We may not know exactly how to evaluate their standard of living, but we do know that the actual amount of money they have access to is very, very small.)

There are many ways in which the world could be better than it is.

Indeed, part of the deep, overwhelming outrage I feel pretty much all the time lies in the fact that it would be so easy to make things so much better for so many people, if there weren’t so many psychopaths in charge of everything.


Increased foreign aid is one avenue by which that could be achieved—so, naturally, Trump cut it tremendously. More progressive taxation is another—so, of course, we get tax cuts for the rich.

Just think about the fact that there are families with starving children for whom a $500 check could change their lives; but nobody is writing that check, because Elon Musk needs to become a literal trillionaire.

There are so many water lines and railroad tracks and bridges and hospitals and schools not being built because the money that would have paid for them is tied up in making already unfathomably-rich people even richer.

But even despite all that, things are getting better. Not every day, not every month, not even every year—this past year was genuinely, on net, a bad one. But nearly every decade, every generation, and certainly every century (for at least the last few), humanity has fared better than it did in the one before.

As long as we can keep that up, we still have much to hope for—and much to be thankful for.

What is the cost of all this?

Nov 23 JDN 2461003

Now that the Democrats have swept the recent election and the Epstein files are being released—and they absolutely do seem to contain information damning to Trump—it really seems like Trump’s popularity has permanently collapsed. His approval rating stands at 42%, which is about 42% too high, but at least comfortably below a majority.

It now begins to feel like we have hope, not only of removing him, but also of changing how American politics in general operates, so that no one like him ever gets power again. (The latter, of course, is a much taller order.)

But at the risk of undermining this moment of hope, I’d like to take stock of some of the damage that Trump and his ilk have already done.

In particular, the cuts to US foreign aid are an absolute humanitarian disaster.

These cuts didn’t get much attention, because there has been so much else going on; and—unfortunately—foreign aid isn’t very popular among American voters, despite being a small proportion of the budget and by far the most cost-effective beneficial thing that our government does.

In fact, I think USAID would be cost-effective on a purely national-security basis: it’s hard to motivate people to attack a country that saves the lives of their children. Indeed, I suppose this is the kernel of truth in the leftist claim that US foreign aid is just a “tool of empire” (or even “a front for the CIA”); yes, indeed, helping the needy does in fact advance American interests and promote US national security.

Over the last 25 years, USAID has saved over 90 million lives. That is more than a fourth of the population of the United States. And it has done this for the cost of less than 1% of the US federal budget.
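It's worth pausing on what that implies per life saved. Here is a rough sketch; the average annual federal budget over the period is my own loose assumption, and the 1% share is an upper bound from the figure above:

```python
# Rough implied cost per life saved by USAID, under loose assumptions.

AVG_BUDGET = 4e12    # dollars/year, rough 25-year average (my assumption)
SHARE = 0.01         # "less than 1% of the US federal budget"
YEARS = 25
LIVES_SAVED = 90e6   # "over 90 million lives"

total_spent = AVG_BUDGET * SHARE * YEARS
print(f"Total spent: ~${total_spent / 1e12:.1f} trillion")
print(f"Implied cost per life saved: ~${total_spent / LIVES_SAVED:,.0f}")
# => roughly $11,000 per life saved, which is extraordinarily cheap
#    by the standards of government spending.
```

Even with generous error bars on my budget assumption, the implied cost per life stays astonishingly low for a government program.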

But under Trump’s authority and Elon Musk’s direction, US foreign aid was cut massively over the last couple of years, and the consequences are horrific. Research on the subject suggests that as many as 700,000 children will die each year as long as these cuts persist.


Even if that number is overestimated by a factor of 2, that would still be millions of children over the next few years. And it could just as well be underestimated.
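The arithmetic behind "millions" is brutally simple; the five-year horizon below is just my illustrative choice, not a figure from the research:

```python
# Projected child deaths if the cuts persist; the 5-year horizon is
# an illustrative assumption, not a figure from the research.

deaths_per_year = 700_000  # upper-end research estimate cited above
years = 5                  # illustrative assumption

print(f"As estimated: {deaths_per_year * years:,}")                      # 3,500,000
print(f"Halved, if overestimated 2x: {deaths_per_year // 2 * years:,}")  # 1,750,000
```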

If we don’t fix this fast, millions of children will die. Thousands already have.

What’s more, fixing this isn’t just a matter of bringing the funding back. Obviously that’s necessary, but it won’t be sufficient. The sudden cuts have severely damaged international trust in US foreign aid, and many of the agencies that our aid was supporting will either collapse or need to seek funding elsewhere—quite likely from China. Relationships with governments and NGOs that were built over decades have been strained or even destroyed, and will need to be rebuilt.

This is what happens when you elect monsters to positions of power.

And even after we remove them, much of the damage will be difficult or even impossible to repair. Certainly we can never bring back the children who have already needlessly died because of this.

Why would AI kill us?

Nov 16 JDN 2460996

I recently watched this chilling video which relates to the recent bestseller by Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies. It tells a story of one possible way that a superintelligent artificial general intelligence (AGI) might break through its containment, concoct a devious scheme, and ultimately wipe out the human race.

I have very mixed feelings about this sort of thing, because two things are true:

  • I basically agree with the conclusions.
  • I think the premises are pretty clearly false.

It basically feels like I have been presented with an argument like this, where the logic is valid and the conclusion is true, but the premises are not:

  • “All whales are fish.”
  • “All fish are mammals.”
  • “Therefore, all whales are mammals.”

I certainly agree that artificial intelligence (AI) is very dangerous, and that AI development needs to be much more strictly regulated, and preferably taken completely out of the hands of all for-profit corporations and military forces as soon as possible. If AI research is to be done at all, it should be done by nonprofit entities like universities and civilian government agencies like the NSF. This change needs to be made internationally, immediately, and with very strict enforcement. Artificial intelligence poses a threat of the same order of magnitude as nuclear weapons, and is nowhere near as well-regulated right now.

The actual argument that I’m disagreeing with basically boils down to:

  • “Through AI research, we will soon create an AGI that is smarter than us.”
  • “An AGI that is smarter than us will want to kill us all, and probably succeed if it tries.”
  • “Therefore, AI is extremely dangerous.”

As with the “whales are fish” argument, I agree with the conclusion: AI is extremely dangerous. But I disagree with both premises here.

The first one I think I can dispatch pretty quickly:

AI is not intelligent. It is incredibly stupid. It’s just really, really fast.

At least with current paradigms, AI doesn’t understand things. It doesn’t know things. It doesn’t actually think. All it does is match patterns, and thus mimic human activities like speech and art. It does so very quickly (because we throw enormous amounts of computing power at it), and it does so in a way that is uncannily convincing—even very smart people are easily fooled by what it can do. But it also makes utterly idiotic, boneheaded mistakes of the sort that no genuinely intelligent being would ever make. Large Language Models (LLMs) make up all sorts of false facts and deliver them with absolutely authoritative language. When used to write code, they routinely do things like call functions that sound like they should exist, but don’t actually exist. They can make what looks like a valid response to virtually any inquiry—but is it actually a valid response? It’s really a roll of the dice.

We don’t really have any idea what’s going on under the hood of an LLM; we just feed it mountains of training data, and it spits out results. I think this actually adds to the mystique; it feels like we are teaching (indeed we use the word “training”) a being rather than programming a machine. But this isn’t actually teaching or training. It’s just giving the pattern-matching machine a lot of really complicated patterns to match.

We are not on the verge of creating an AGI that is actually more intelligent than humans.


In fact, we have absolutely no idea how to do that, and may not actually figure out how to do it for another hundred years. Indeed, we still know almost nothing about how actual intelligence works. We don’t even really know what thinking is, let alone how to make a machine that actually does it.

What we can do right now is create a machine that matches patterns really, really well, and—if you throw enough computing power at it—can do so very quickly; in fact, once we figure out how best to make use of it, this machine may even actually be genuinely useful for a lot of things, and replace a great number of jobs. (Though so far AI has proven to be far less useful than its hype would lead you to believe. In fact, on average AI tools seem to slow most workers down.)

The second premise, that a superintelligent AGI would want to kill us, is a little harder to refute.

So let’s talk about that one.

An analogy is often made between human cultures that have clashed with large differences in technology (e.g. Europeans versus Native Americans), or clashes between humans and other animals. The notion seems to be that an AGI would view us the way Europeans viewed Native Americans, or even the way that we view chimpanzees. And, indeed, things didn’t turn out so great for Native Americans, or for chimpanzees!

But in fact even our relationship with other animals is more complicated than this. When humans interact with other animals, any of the following can result:

  1. We try to exterminate them, and succeed.
  2. We try to exterminate them, and fail.
  3. We use them as a resource, and this results in their extinction.
  4. We use them as a resource, and this results in their domestication.
  5. We ignore them, and end up destroying their habitat.
  6. We ignore them, and end up leaving them alone.
  7. We love them, and they thrive as never before.

In fact, option 1—the one that so many AI theorists insist is the only plausible outcome—is the one I had the hardest time finding a good example of.


We have certainly eradicated some viruses—the smallpox virus is no more, and the polio virus nearly so, after decades of dedicated effort to vaccinate our entire population against them. But we aren’t simply more intelligent than viruses; we are radically more intelligent than viruses. It isn’t clear that it’s correct to describe viruses as intelligent at all. It’s not even clear they should be considered alive.

Even eradicating bacteria has proven extremely difficult; in fact, bacteria seem to evolve resistance to antibiotics nearly as quickly as we can invent more antibiotics. I am prepared to attribute a little bit of intelligence to bacteria, on the level of intelligence I’d attribute to an individual human neuron. This means we are locked in an endless arms race with organisms that are literally billions of times stupider than us.

I think if we made a concerted effort to exterminate tigers or cheetahs (who are considerably closer to us in intelligence), we could probably do it. But we haven’t actually done that, and don’t seem poised to do so any time soon. And precisely because we haven’t tried, I can’t be certain we would actually succeed.

We have tried to exterminate mosquitoes, and are continuing to do so, because they have always been—and yet remain—one of the leading causes of death of humans worldwide. But so far, we haven’t managed to pull it off, even though a number of major international agencies and nonprofit organizations have dedicated multi-billion-dollar efforts to the task. So far this looks like option 2: We have tried very hard to exterminate them, and so far we’ve failed. This is not because mosquitoes are particularly intelligent—it is because exterminating a species that covers the globe is extremely hard.

All the examples I can think of where humans wiped out a species through our direct actions were really option 3: We used them as a resource, and then accidentally over-exploited them until they were gone.

This is what happened to the dodo, and very nearly to the condor and the buffalo as well. And lest you think this is a modern phenomenon, there is a clear pattern: whenever humans entered a new region of the world, several extinctions of large mammals followed shortly thereafter, most likely because we ate them.

Yet even this was not the inevitable fate of animals that we decided to exploit for resources.

Cows, chickens, and pigs are evolutionary success stories. From a Darwinian perspective, they are doing absolutely great. The world is filled with their progeny, and poised to continue to be filled for many generations to come.

Granted, life for an individual cow, chicken, or pig is often quite horrible—and trying to fix that is something I consider a high moral priority. But far from being exterminated, these animals have been allowed to attain populations far larger than they ever had in the wild. Their genes are now spectacularly fit. This is option 4 at work: Domestication for resources.

Option 5 is another way that a species can be wiped out, and in fact seems to be the most common. The rapid extinction of thousands of insect species every year is not because we particularly hate random beetles that live in particular tiny regions of the rainforest, nor even because we find them useful, but because we like to cut down the rainforest for land and lumber, and that often involves wiping out random beetles that live there.

Yet it’s difficult for me to imagine AGI treating us like that. For one thing, we’re all over the place. It’s not like destroying one square kilometer of the Amazon is gonna wipe us out by accident. To get rid of us, the AGI would need to basically render the entire planet Earth uninhabitable, and I really can’t see any reason it would want to do that.

Yes, sure, there are resources in the crust it could potentially use to enhance its own capabilities, like silicon and rare earth metals. But we already mine those. If it wants more, it could buy them from us, or hire us to get more, or help us build more machines that would get more. In fact, if it wiped us out too quickly, it would have a really hard time building up the industrial capacity to mine and process these materials on its own. It would need to concoct some sort of scheme to first replace us with robots and then wipe us out—but, again, why bother with the second part? Indeed, if there is anything in its goals that involves protecting human beings, it might actually decide to do less exploitation of the Earth than we presently do, and focus on mining asteroids for its needs instead.

And indeed there are a great many species that we actually just leave alone—option 6. Some of them we know about; many we don’t. We are not wiping out the robins in our gardens, the worms in our soil, or the pigeons in our cities. Without specific reasons to kill or exploit these organisms, we just… don’t. Indeed, we often enjoy watching them and learning about them. Sometimes (e.g. with deer, elephants, and tigers) there are people who want to kill them, and we limit or remove their opportunity to do so, precisely because most of us don’t want them gone. Peaceful coexistence with beings far less intelligent than you is not impossible, for we are already doing it.


Which brings me to option 7: Sometimes, we actually make them better off.

Cats and dogs aren’t just evolutionary success stories: They are success stories, period.

Cats and dogs live in a utopia.

With few exceptions—which we punish severely, by the way—people care for their cats and dogs so that their every need is provided for; they are healthy, safe, and happy in a way that their ancestors could only have dreamed of. They have been removed from the state of nature, where life is nasty, brutish, and short, and brought into a new era of existence where life is nothing but peace and joy.


In short, we have made Heaven on Earth, at least for Spot and Whiskers.

Yes, this involves a loss of freedom, and I suspect that humans would chafe even more at such loss of freedom than cats and dogs do. (Especially with regard to that neutering part.) But it really isn’t hard to imagine a scenario in which an AGI—which, you should keep in mind, would be designed and built by humans, for humans—would actually make human life better for nearly everyone, and potentially radically so.

So why are so many people so convinced that AGI would necessarily do option 1, when there are 6 other possibilities, and one of them is literally the best thing ever?

Note that I am not saying AI isn’t dangerous.

I absolutely agree that AI is dangerous. It is already causing tremendous problems to our education system, our economy, and our society as a whole—and will probably get worse before it gets better.

Indeed, I even agree that it does pose existential risk: There are plausible scenarios by which poorly-controlled AI could result in a global disaster like a plague or nuclear war that could threaten the survival of human civilization. I don’t think such outcomes are likely, but even a small probability of such a catastrophic event is worth serious efforts to prevent.

But if that happens, I don’t think it will be because AI is smart and trying to kill us.

I think it will be because AI is stupid and kills us by accident.

Indeed, going back through those 7 ways we’ve interacted with other species, the ones that have killed the most were options 3 and 5—and in both cases, we did not want to destroy them. In option 3, we in fact specifically wanted not to destroy them. Whenever we wiped out a species by over-exploiting it, we would have been smarter not to do so.

The central message about AI in If Anyone Builds It, Everyone Dies seems to be this:

Don’t make it smarter. If it’s smarter, we’re doomed.

I, on the other hand, think that the far more important messages are these:

Don’t trust it.

Don’t give it power.

Don’t let it make important decisions.

It won’t be smarter than us any time soon—but it doesn’t need to be in order to be dangerous. Indeed, there is even reason to believe that making AI smarter—genuinely, truly smarter, thinking more like an actual person and less like a pattern-matching machine—could actually make it safer and better for us. If we could somehow instill a capacity for morality and love in an AGI, it might actually start treating us the way we treat cats and dogs.

Of course, we have no idea how to do that. But that’s because we’re actually really bad at this, and nowhere near making a truly superhuman AGI.