What would a world without poverty look like?

Mar 22 JDN 2461122

In my previous post I reflected on the ways that conventional measures of poverty seem inadequate—and that a richer understanding of poverty suggests that it is far more ubiquitous than such measures suggest.

In this post, I will ask: Given this richer understanding of poverty, what would a world without poverty look like? Is it something we can realistically hope to achieve?

In techno-utopian circles (looking at you again, Scott Alexander), it is common to speak of “post-scarcity”: A world where there is no poverty because resources are effectively unlimited.

I don’t think that’s possible.

Not for humans as we know them. Perhaps in a future where greed is a recognized and treatable psychiatric disorder, we could genuinely have an economy where people really just take whatever they want and it works out because nobody wants an unreasonable amount.

But the fact that there are people with hundreds of billions of dollars tells me that among humans as we know them, some people’s greed is just literally insatiable. Give them a moon and they’ll demand a planet; give them a planet and they’ll demand a solar system. Whatever they are getting out of more wealth (status? power? the dopamine hit of number go up?), they’re never going to stop getting it from even more wealth, no matter how much we give them. For if they were going to stop at a reasonable amount, they would have stopped four orders of magnitude ago.

So let’s try to imagine what a world would look like if it really had no poverty, but not by somehow producing such staggering amounts of wealth that everyone could literally take whatever they want.

I think the key is that it would require all basic material needs to be met.

Everyone would have, at minimum:

  • Clean air to breathe
  • Clean water to drink
  • Nutritious food to eat
  • Shelter from the elements
  • Security against theft and violence
  • Personal liberty and political representation
  • A basic education
  • A basic standard of healthcare

(I will note that these resonate quite closely with the UN Universal Declaration of Human Rights.)

Some of these needs can probably never be completely satisfied—there is an inherent tension between liberty and security which requires us to balance them against each other. A society with zero crime is a horrific totalitarian police state; a society with complete liberty is an equally horrific Hobbesian nightmare. But we have achieved, in most of the First World at least, a reasonable standard of security along with a great deal of liberty, and preserving that balance should be of a very high priority.

Even clean air and water would be difficult to satisfy perfectly: even if we pivot our whole economy to solar, wind, and nuclear power (as we very definitely should be doing!), some amount of pollution is probably necessary just to have a functioning industrial society. So we need to establish reasonable standards for what amounts of pollution exposure are safe, and effective mechanisms for ensuring that people are not exposed to pollution outside those standards—we have largely done the former, but seriously fail at the latter.

But probably the most difficult needs to satisfy are actually difficult to even define.

Just what constitutes a basic standard of education, and a basic standard of healthcare?

These seem like moving targets.

Let’s start with education:

Someone who is illiterate and can barely add two numbers together would be considered to have a very poor education today, but would have been completely average among peasants in the Middle Ages. Someone like me with a PhD has an education well beyond what anyone had in the Middle Ages: While Oxford was already graduating doctors in the 12th century, those doctors didn’t have to write dissertations, and didn’t know nearly as much about the world as you must to earn a modern PhD. (Most of the mathematics required for an economics PhD in particular literally had not been invented yet.)

So it’s conceivable that educational standards will continue to rise over time, especially if we are able to radically improve learning via new technologies. In the most extreme case, if everyone can just download knowledge like in The Matrix, then it wouldn’t be unreasonable to expect the average person to know as much as a typical PhD today in dozens of fields.

Suppose that such technology did exist. Would it be fair to consider someone poor if they didn’t have access to it?

Yes, I think it would.

Because if it’s really cheap and easy to give breathtakingly vast knowledge on a variety of subjects to anyone instantly, then letting some people have that while others do not puts those others at a severe disadvantage in life. If you must know how to solve partial differential equations to get a job, then someone who only made it through high-school algebra isn’t going to be able to find one.

So I think what we’re really concerned about here is inequality: The education of a rich person should not be too much better than the education of a poor person, lest “meritocracy” simply reinforce the same generational inequality it was supposed to eliminate.

Now consider healthcare:

This, too, has radically improved over time. Indeed, I’m not really sure it’s fair to call Medieval doctors doctors at all; they lacked basic knowledge of human physiology and their intervention was as likely to hurt patients as to help them. Surgeons certainly existed: They knew how to amputate a gangrenous limb or suture a wound. (They did so without antiseptic, let alone anaesthetic!) But should you come to them with a fever or a headache, they would likely do you as much harm as good.

So we could imagine a world of Star Trek medicine, where you lie in a bed, get scanned for a few moments, and the doctor immediately knows what’s wrong with you and what kind of painless injection to give you to fix it.

Once again, we must ask: If you don’t have that, are you poor?

And again, I’m going to say yes.

If the technology exists to heal people this effortlessly, and some people get access to it while others do not, the latter are being allowed to suffer when their suffering could be easily alleviated.

But now we must consider: what if the technology exists, but it’s too expensive to use routinely?

Most technologies are like this when they are first invented. Over time, the technology improves (and the patents expire!) and they become cheaper and more widely available.

Unlike education, healthcare doesn’t usually confer large advantages on those who receive it—though it can, especially in a society where disabilities are not adequately accommodated.

So I think I’m prepared to allow “early adopters” of new medical technology, people who are rich enough to pay for advanced treatments before they are available to everyone—within certain limits. If some new treatment grants radically higher productivity or lifespan, then in fact I think we have a moral obligation to wait until it can be universally shared before we give it to anyone—precisely because of the risk of reinforcing generational inequality.

Once again, in our effort to define poverty, we end up returning to inequality: The rich should not be allowed to be too much healthier than the poor.

This definitely makes education and healthcare more complicated than the others.

We can pretty clearly define how much food and water a human being needs to live; we could provide that to everyone, and then nobody would be poor in terms of food or water.

But making nobody poor in terms of education and healthcare requires meeting a standard that may in fact increase over time, and it is no contradiction to imagine that someone living in the 31st century could be receiving better healthcare than I ever will and yet still not be receiving adequate healthcare given the technology available.

Furthermore, that person demanding better healthcare is not being ungrateful or envious—they are quite reasonably demanding that society fairly allocate healthcare so that there aren’t some people who live in eternal youth while other people still die of old age.

Are they richer than I am? In some sense, perhaps. We could stipulate that in every material way they are better off than I am now. But there’s a treatment that could extend their life by centuries, and nobody’s giving it to them, because they can’t afford it—and that’s wrong. That makes them poor, and it makes their society unfair and unjust. It isn’t just a question of how many QALYs they have; it’s also a question of what it would cost to give them a lot more.

But with all that said, I do believe that a world without poverty is possible.

In fact, I believe that technologically we could already provide that world, if we had the political will to do so. Maybe we don’t quite have the economic output to support it worldwide, but even that is not as far off as most people seem to think.

An adequate standard of food and water, for example, we could already provide with existing food supplies. It would cost about one-eighth of Elon Musk’s wealth per year, meaning that, with good stock returns (as he most certainly gets), he could very likely afford it by himself!
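As a sanity check on that claim, here is the back-of-the-envelope arithmetic in runnable form. The net-worth figure and the assumed rate of return are my own illustrative numbers, not sourced figures; the point is only that a cost of one-eighth of a fortune per year is sustainable whenever returns exceed 12.5%:

```python
# Illustrative arithmetic only; all dollar figures and return rates
# are assumptions, not the post's sourced numbers.
wealth = 400e9             # assumed net worth, in dollars
annual_cost = wealth / 8   # the post's "one-eighth per year" figure

# Return needed to fund the cost forever without drawing down principal:
breakeven_return = annual_cost / wealth
print(f"Annual cost: ${annual_cost / 1e9:.0f}B")
print(f"Break-even return: {breakeven_return:.1%}")

# Simulate a decade at an assumed 15% annual return, paying the cost
# out of the portfolio each year:
w = wealth
for year in range(10):
    w = w * 1.15 - annual_cost
print(f"Wealth after 10 years: ${w / 1e9:.0f}B")
```

Under those assumptions the fortune not only survives the spending but grows, since 15% exceeds the 12.5% break-even rate.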

Clean air for all would be harder, but we are moving in the right direction now that solar power is so cheap.

Universal liberty and security would require radical shifts in government in dozens of countries, so that one seems especially unlikely to happen any time soon—yet it is very definitely possible, and by construction only requires political change.

Universal education and healthcare would be very expensive, and most countries are too poor to really provide them on their own. They are not simply poor in money, but poor in skills: There aren’t enough doctors and teachers, and so we would need to use the ones we have to train up a new generation, and perhaps a new generation after that, before the world’s needs would really be met. (Fortunately, there are people trying to do this. But they don’t have enough resources to really achieve these goals.) So this is not a technological limitation, but it is an economic one; it will probably be at least another generation before we can solve this one.

What about universal shelter? Now there’s the rub. Even in prosperous First World countries, housing shortages and skyrocketing prices are keeping homeownership out of reach for tens of millions of people, and leaving hundreds of thousands outright homeless. We clearly do have the technology to produce enough homes, especially if we are prepared to build at high density; but the economic cost of doing so would be substantial, and our policymakers don’t seem at all willing to actually pay it. I think as long as housing is viewed as an asset one invests in rather than a good that one needs, this will continue to be the case.

The problem isn’t that we don’t have enough stuff. It’s that we are not sharing it properly.

What is poverty?

Mar 15 JDN 2461115

What is poverty? It seems like a simple question, one we should all already know the answer to; but it turns out to be surprisingly complicated.

In practice, we mainly define some amount of income or consumption that is considered a “poverty line”, and declare that everyone below that line is in poverty, while everyone above it is not.

This post is about why that doesn’t work.

The most obvious question is of course: How do we draw that line? Some absolute level, or relative to income in the rest of society? Different places do it differently.

But I have come to realize that there is actually a deeper reason why there will never be a satisfying choice of “poverty line”:

There is no specific amount of income that could ever decide whether someone is in poverty.

It’s not a question of purchasing power, prices, or inflation. It’s not something you can adjust for statistically. It’s a fundamental error in defining the concept of poverty.

The problem is this:

Human needs are not fungible.

This Less Wrong post on “Anoxistan” really opened my eyes to that: No amount of money can make up for the fact that you’re missing something you need, be it a roof over your head, food on your table, clean water to drink, or medical care—or, as in the parable, air to breathe.

The best definition of poverty, then, is something like this:

Poverty is having to struggle to meet basic human material needs.

(I specify “material” needs, because someone who is alone and unloved has unmet human needs, but it is not the responsibility of even a utopian fully automated luxury communist society to provide for those needs. They may very well be miserable, but it does not make them poor.)

Maybe—maybe—in a well-functioning market economy, we can sort of muddle through by making a list of what everyone needs, finding the prices for all those goods and services, adding that up, and declaring that the poverty line. (This is often what we actually do, in fact.) The notion would then be that, as long as you have at least that amount of money, you can probably buy all the things you need.

But this rapidly breaks down if you aren’t facing the same prices as those used to build that aggregate—which you almost never are, because nobody is the average American living in the average American city. And it also misses the fact that security is a human need, and simply having the necessary income for now is not at all the same thing as knowing that you’ll continue to have the necessary income in the future.
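Both the basket-pricing procedure and its fragility can be sketched in a few lines. Every item and price below is invented purely for illustration; the point is that the same basket of needs yields very different "lines" under different local prices:

```python
# Hypothetical basket of basic needs, with invented monthly prices.
basket = {
    "food": 400.0,
    "water_utilities": 80.0,
    "rent": 1200.0,
    "healthcare": 350.0,
    "transport": 150.0,
}

# The poverty line as it is often actually computed: price the basket
# and add it up.
monthly_poverty_line = sum(basket.values())
annual_poverty_line = 12 * monthly_poverty_line
print(f"Monthly line: ${monthly_poverty_line:,.0f}")
print(f"Annual line:  ${annual_poverty_line:,.0f}")

# The fragility: the identical basket of *needs*, repriced for a city
# where rent is 2.5x higher, produces a very different line.
rent_multiplier_big_city = 2.5
big_city_line = monthly_poverty_line + basket["rent"] * (rent_multiplier_big_city - 1)
print(f"Big-city monthly line: ${big_city_line:,.0f}")
```

The computation is trivial; the trouble is entirely in the inputs, which is exactly why no single number can serve everyone.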

One Libertarian commentator asked me: “Would you really switch places with Rockefeller if you could?”

I had to think about it: I’d be losing a lot of things, for sure. No Internet, no cell phone, no computer, no video games. The quality of my clothes might actually be worse (though my wardrobe would surely be larger). Finding vegetarian food I enjoy might actually be more of a challenge, though I could surely import it from anywhere. Worst of all, I would lose access to many medical treatments I currently depend upon: Treatment of migraines in the late 19th century was considerably worse, and treatment of depression was essentially nonexistent.

Since this is about wealth, I think we can ignore the fact that I’d be moving into a terrifyingly racist, misogynistic and homophobic society. That itself might actually be the reason I wouldn’t really want to make the switch. But you can simultaneously believe that the late 19th century was a worse time than today for everyone who wasn’t a White cisgender heterosexual man, and also that Rockefeller was much richer than you’ll ever be.

But what would I gain? Power, though I have very little interest in that. Opportunities for philanthropy, which I do care about, but they’d benefit other people more than myself. Real estate—I don’t even own my own home, and Rockefeller owned multiple mansions, including, famously, the Casements in Florida.

But above all, I would gain security. Owning an oil company would allow me to live comfortably for the rest of my life, and most likely also allow my heirs to live comfortably for their entire lives, without me ever needing to work another day. I could still take jobs if I wanted them, but no employer would ever have any power over me. If I was unhappy at a job, I could just leave. If I wanted to spend a month, or a year, or a decade, without working at all, I could just do that. That is what it means to be rich. That is what Rockefeller had that I don’t think I will ever have.

The difference between being rich and being poor is security.

As long as anyone is struggling to make ends meet, poverty exists.

As long as anyone is afraid to lose their job, poverty exists.

As long as anyone is choosing not to have children because they don’t think they can afford them, poverty exists.

As long as bosses can abuse their employees and get away with it, poverty exists.

And in fact, it begins to look like poverty in the United States has not been decreasing over the last two generations, even as our per-capita GDP and median income have continued to rise and our population below “the poverty line” has fallen. (Indeed, that particular measure—being unable to afford children—has very clearly and greatly increased, and is a very bad sign for our society’s future.)

This is how our economy is failing. It has given us lots more stuff, and made some things available to all that were once only available to the rich; but it has not freed us from the constant struggle to meet our basic needs, even though there are clearly plenty of resources available to do that.

What if we just banned banks?

Feb 22 JDN 2461094

I got a mailer from Wells Fargo today offering me a new credit card. The offer seemed decent, but the first thing that came to my mind was: Why is this company still allowed to exist?

In case you didn’t know, Wells Fargo was caught in 2016 creating millions of fraudulent accounts. They paid a fine of $185 million—which likely was less than the revenue they earned via this massive fraud scheme. How am I supposed to trust them ever again? How is anyone?

It’s hardly just them, of course. Almost every major bank has been implicated in some heinous crime.

JP Morgan Chase helped Jeffrey Epstein conceal assets, rigged municipal bond transactions, and of course misrepresented thousands of mortgages in a way that directly contributed to the 2008 crisis.

Bank of America also committed mass fraud that contributed to the 2008 crisis.

A case against Citi is currently being tried for failing to protect its customers against fraud.

Capital One is being sued for failing to pay the interest rates it promised on savings accounts.

And let’s not forget HSBC, which laundered money for terrorists.

If these were individuals committing these crimes, they would be in prison, probably for the rest of their lives. But because they are corporations, they get slapped with a fine, or pay a settlement—typically less than what they made in the criminal activity—and then they get to go right back to work as if nothing had happened.

I think it’s time to do something much more radical.

Let’s ban banks.

This might sound crazy at first: Don’t we need banks? Doesn’t our whole financial system rest upon them?

But in fact, we do not need banks at all. We need loans, we need deposits, we need mortgages. But we already have a fully-functional alternative system for providing those services which is not implicated in crime after crime after heinous crime:

They are called credit unions.

Credit unions already provide almost all the services currently provided by banks—and most of the ones they don’t provide, we probably didn’t actually need anyway. There are already nearly 5,000 credit unions in the US with over 130 million customers.

Credit unions almost always fare better in financial crises, because they don’t overleverage themselves. They are far less likely to be involved in fraud. They don’t get involved in high-risk speculation. They offer higher yields on savings and lower rates on loans and credit cards. Basically they are better than banks in every way.

Why are credit unions so much better-behaved?

Because they are co-ops instead of for-profit corporations.

Customers of credit unions are also owners of credit unions, so there are no extra profits being siphoned off somewhere to greedy shareholders whose only goal in life is number go up.

Free markets are genuinely more efficient than centrally-planned systems. But there’s nothing about free markets that requires the owners of capital to be their own class of people who aren’t workers or customers and make their money by buying, selling, and owning things. That’s what’s wrong with capitalism—not too little central planning, but too concentrated ownership.

As I’ve written about before, co-ops are just as efficient as corporations, and produce much lower inequality.

For many industries, transitioning to co-ops would be a major change, and require lots of new organization that isn’t there. But for banking, the co-ops already exist. All we need to do is ban the alternative and force everyone to use the better, safer system. Come up with some way to transfer all the accounts fairly to credit unions, and—very intentionally—leave the shareholders of these criminal enterprises with absolutely nothing.

In fact, since credit unions are more likely to support other co-ops, forcing the financial system to transition to credit unions might actually make the process of transitioning our entire economy to co-ops easier.

It may seem extreme, but please, take a look again at all those crimes that all these major, highly-successful, market-dominating banks have committed. They’ve had their chance to prove that they can be honest and law-abiding, and they have failed.

Get rid of them.

Love in a godless universe

Feb 15 JDN 2461087

This post will go live just after Valentine’s Day, so I thought I would write this week about love.

(Of course I’ve written about love before, often around this time of year.)

Many religions teach that love is a gift from God, perhaps the greatest of all such gifts; indeed, some even say “God is love” (though I confess I have never been entirely sure what that sentence is intended to mean). But if there is no God, what is love? Does it still have meaning?

I believe that it does.

Yes, there is a cynical account of love often associated with atheism, which is that it is “just a chemical reaction” or “just an evolved behavior”. (An easy way to spot this sort of cynical account is to look for the word “just”.)

Well, if love is a chemical reaction, so is consciousness—indeed the two seem very deeply related. I suppose a being can be conscious without being capable of love (do psychopaths qualify?), but I certainly do not think a being can be capable of love without being conscious.

Indeed, I contend that once you really internalize the Basic Fact of Cognitive Science, “just a chemical reaction” strikes you as an utterly trivial claim: What isn’t a chemical reaction? That’s just a funny way of saying something exists.

What about being an evolved behavior? Yes, this is a much more insightful account of what love is, what it means—what it’s for, even. It evolved to make us find mates, protect offspring, and cooperate in groups.

And I can hear the response coming: “Is that all?” “Is it just that?” (There’s that “just” again.)

So let me try phrasing it another way:

Love is what makes us human.

If there is one thing that human beings are better at than anything in the known universe, one thing that most absolutely characterizes who and what we are, it is love.

Intelligence? Rationality? Reasoning? Oh, sure, for the first half-million years of our existence, we were definitely on top; but now, I think computers have got us beat on those. (I guess it’s hard to say for sure if Claude is truly intelligent, but I can tell you this: Wolfram Alpha is a lot better at calculus than I’ll ever be, and I will never win a game of Go against AlphaZero.)

Strength? Ridiculous! By megafauna standards—even ape standards—we’re pathetic. Speed? Not terrible, but of course the cheetahs and peregrine falcons have us beat. Endurance? We’re near the top, but so are several other species—including horses, which we’ve made good use of. Durability? Also surprisingly good—we’re tougher than we look—but we still can’t hold a candle to a pachyderm. (You need special guns to kill an elephant, because most standard bullets barely pierce their skin. And standard bullets were, more or less by construction, designed to kill humans.) We do throw exceptionally well, so if you’d like, you can say that the essence of humanity is javelin-throwing—or perhaps baseball.

But no, I think it is love that sets us apart.

Not that other animals are incapable of love; far from it. Almost all mammals and birds express love to their offspring and often their partners; I would not even be sure that reptiles, fish, or amphibians are incapable of love, though their behavior is less consistently affectionate and I am thus less certain about it. (Especially when fish eat their own offspring!) In fact, I might even be prepared to say that bees feel love for their sisters and their mother (the queen). And if insects can feel it, then our world is absolutely teeming with love.

But what sets humans apart, even from other mammals, is the scale at which we are able to love. We are able to love a city, a nation, a culture. We are even able to love ideas.

I do not think this is just a metaphor. (There’s that “just” again!) I would as surely die for democracy as I would to save the life of my spouse. That love is real. It is meaningful. It is important.

Humans feel love for other humans they have never met who live thousands of miles away from them. They will even willingly accept harm to themselves to benefit those others (e.g. by donating to international charities); one can argue that most people do not do this enough, but people do actually do it, and it is difficult to explain why they would were it not for genuine feelings of caring toward people they have never met and most likely never will.

And without this, all of what we know as “human civilization” quite simply could not exist. Without our love for our countrymen, for our culture, for our shared ethical and political principles, we could not sustain these grand nation-states that span the world.

Yes, even despite our often fierce disagreements, there must be a core of solidarity between at least enough people to sustain a nation. Even authoritarian governments cannot sustain themselves when the entire population stops loving them—in fact, they seem to fail at the hands of a sufficiently well-organized four percent. (Honestly, perhaps the worst part about fascist states is that many of their people do love them, all too deeply!)

More than that, without love, we could never have created institutions like science, art, and journalism that slowly but surely accumulate knowledge that is shared with the whole of humanity. The march of progress has been slower and more fitful than I think anyone would like; but it is real, nonetheless, and in the long run humanity’s trajectory still seems to be toward a brighter future—and it is love that makes it so.

It is sometimes said that you should stop caring what other people think—but caring what other people think is what makes us human. Sure, there are bad forms of social pressure; but a person who literally does not care how their actions make other people think and feel is what we call a psychopath. Part of what it means to love someone is to care a great deal what they think. And part of what makes a good person is to have the capacity to love as much as possible.

Love binds us together not only as families, but as nations, and—hopefully, one day—it could bind humanity or even all sentient life as one united whole. Morality is a deep and complicated subject, but if you must start somewhere very simple in understanding it, you could do much worse than to start with love.

It is often said that God is what binds cultures, nations, and humanity together. With this in mind, perhaps I am prepared to assent to “God is love” after all, but let me clarify what I would mean by it:

Love does for us what people thought they needed God for.

How are this many people in the Epstein files?

Feb 8 JDN 2461080

It’s been obvious from the start that Donald Trump had something to hide in the Epstein files, but the list of famous people mentioned in the Epstein files absolutely staggers me.

Just listing people I had previously heard of, even aside from Donald and Melania Trump:

Woody Allen, Steve Bannon, Ehud Barak, Richard Branson, William Burns, Noam Chomsky, Deepak Chopra, Bill Clinton, David Copperfield, Bill Gates, Stephen Hawking, Michael Jackson, Thorbjørn Jagland, Lawrence Krauss, Elon Musk, Mehmet Oz, Brett Ratner, Ariane de Rothschild, Kevin Spacey, Lawrence H. Summers, Peter Thiel, Robert Trivers, and Michael Wolff.

There are of course more people who are famous for various things that I simply wasn’t familiar with, such as Anil Ambani, Peter Attia, Todd Boehly, Andrew Farkas, Brad S. Karp, and Brian Vickers. And more names may yet come out as the saga continues.

Now, some of these connections are more damning than others: At the milder end, we have Bill Gates, who doesn’t appear to have actually received (let alone responded to) the emails addressed to him, and Thorbjørn Jagland, who was planning to visit the island but apparently never actually did so. At the worse end, we have Richard Branson, who introduced Epstein to his “harem” (Branson’s word), Noam Chomsky, who had extensive exchanges and received $270,000 from a mysterious account (he claims Epstein had nothing to do with it), Lawrence Krauss and Robert Trivers, who both continued to publicly defend Epstein even after Epstein was convicted of sex crimes against children in 2008, Peter Thiel, who received $40 million from Epstein, and of course Donald Trump himself, who is mentioned in the Epstein files some 38,000 times. (That we know of.)

Even the damning ones are largely not conclusive; the documents that have been released don’t appear to be sufficient to prove anyone guilty of crimes in a court of law. But given that Donald Trump is President and is probably doing everything he can to suppress and redact any such evidence that does exist (at the very least against himself), this absence of evidence is not particularly strong evidence of absence. The best we can really say at this juncture is that it looks very suspicious about an awful lot of famous people.

I guess it’s honestly possible that some of these people knew Epstein well but really didn’t know about his secret life sexually abusing children. Sometimes monsters can hide in plain sight. But several of these people have been credibly accused of sex crimes of their own, and many of them circled the wagons to defend each other whenever new accusations came out. And once someone pleads guilty and is convicted (as Epstein was in 2008), you really should stop defending him.

It honestly seems like QAnon wasn’t entirely wrong after all! There was a secret cabal of famous, powerful people sexually abusing children! They just got some (okay, nearly all) of the details wrong, and for some reason thought that Donald Trump was going to bring that cabal down, rather than do everything in his power to suppress and redact all files related to it and still end up being mentioned in said files over 38,000 times. But honestly, the whole idea sounded crazy to me, and apparently it was basically correct! (Even at least one Rothschild seems to have been involved!)

I am particularly disturbed by the academics on this list: Chomsky, Hawking, Krauss, Summers, and Trivers. These men are (or were) taking up scarce tenure slots at highly prestigious universities, while at best being guilty of very bad judgment, and quite likely actually guilty of serious sex crimes. Even if they aren’t actually criminals themselves, keeping them on at prestigious institutions—as several top universities did, for years, after much was already known—besmirches the reputation of those institutions and is a disservice to the many qualified academics with better reputations who would happily replace them.

To that list I might add Chopra, who has also taught at extremely prestigious universities, but doesn’t actually do much credible research, preferring instead to peddle pseudoscientific nonsense. I don’t understand why universities ever let him teach at all—frankly it’s an insult to every other applicant they haven’t hired. (Having applied to many of these institutions myself, I take it quite personally, as a matter of fact. You think he’s better than me?) Chopra’s associations with Epstein are just one more reason to cut ties with him, when they never had any reason to make ties with him in the first place.

I am not optimistic that releasing these files will accomplish very much. Like I said, none of it seems to be conclusive. Even if evidence of crimes did emerge, they’d likely be beyond the statute of limitations. All the secrecy surrounding Epstein and his cohorts actually seems to have been pretty effective at protecting them from facing punishment for their actions.

But please, please, I’m begging here, for the sake of all that is good in the world, could this at least make people stop supporting Donald Trump!?

This is fascism.

Feb 1 JDN 2461073

The Party told you to ignore the evidence of your eyes and ears. It was their final, most essential command.

– George Orwell, 1984

As I write this, we haven’t even finished January of 2026, and already there have been not one, but two blatant, public executions of innocent people by federal agents that occurred in broad daylight and on video.

I already thought the video of Renee Good’s shooting was pretty clear, but the videos of Alex Pretti’s just leave no room for doubt at all. He was disarmed and restrained when they shot him; this was an execution.

I have heard liberals mocked by leftists as “people who are okay with the government killing people as long as the right paperwork is filed”. This is sort of true, actually—if by “paperwork” you mean due process of law. You know, the foundation of liberal democracy? That little thing?

Yes, I am actually okay with (some) military actions, police shootings in self-defense, and even executions of convicted murderers (though I should note that actually many liberals aren’t okay with the latter). I think that a world where nobody kills anybody is a pipe dream, and the best we can reasonably hope for is one where there are few killings, most of them are justified, and the ones that aren’t are punished. (And if your problem is specifically with the government killing people… who do you think should have that authority, if not democratically-elected representatives?) I understand that the government needs to kill people sometimes, but I expect those killings to be limited to justifiable wars, imminent threats to life and limb, or the result of a proper conviction by a fair jury trial.

But this was not due process of law. There was no judge, no jury, no trial—there wasn’t even a warrant or an arrest. Nor was it an in-the-moment response to an imminent threat—even a perceived one. The videos are crystal-clear: Alex Pretti was no threat to the border patrol agents who shot him to death.

This is fascism.

It’s not like fascism. It’s not toward fascism. This isn’t how it starts. Masked men executing innocent people in broad daylight is fascism. It’s here. It’s happening.

This does not necessarily mean that our entire country has fallen to fascism; there is still hope that we can stop this from happening again, and also hope that this will not escalate into a full-blown civil war. But shooting an innocent unarmed man without a judge or a jury is an inherently, irredeemably fascist act. If the men responsible are not tried for murder, it will be a grave injustice—and it could very well escalate into much larger-scale violence.

I wish I could say this sort of thing is totally unprecedented; but no, it’s not. The United States government has done a lot of horrible things over the years, from slavery to the Trail of Tears to the Japanese internment. I think that our country has been in a profound state of tension from the very beginning, between the high-minded ideals of “all men are created equal” and the deep-seated tribalism that comes naturally to nearly all human beings. I don’t think America is uniquely evil; in fact, I think we are especially good; it’s just that even a good country often does horrible things.

And there is something different about this. It’s not the first time our government has killed anyone, or even killed anyone for an obviously unjustified reason. But I think it might be the first time the government has publicly and blatantly lied about the circumstances in a way that can be directly refuted by video evidence. They aren’t painting it as a “mistake” or saying it was “a few bad apples”; they are actually trying to claim justification where obviously none exists. They are asking you to believe what they say over what you can see with your own two eyes.

This is what authoritarian states do. They try to undermine your belief in objective reality. They try to gaslight you into believing what they say instead of what you can see. And even in an extremely prosperous, well-educated country, they have been shockingly effective at it.

This is what we warned against when Trump was running for election.

Maybe it’s not productive to say “We told you so”, but, uh, we told you so.

He’s done so many terrible things, and has been enabled so many times by Republicans in Congress and the right-wing justices of the Supreme Court. As a result, it’s hard to draw any bright lines in the sand. But if you really want to draw one, this might be a good one to draw.

Honestly, the best time to turn against Trump was ten years ago; but people are finally turning against him, and better late than never.

Productivity by itself does not eliminate poverty

Jan 25 JDN 2461066

Scott Alexander has a techno-utopian vision:

Between the vast ocean of total annihilation and the vast continent of infinite post-scarcity, there is, I admit, a tiny shoreline of possibilities that end in oligarch capture. Even if you end up there, you’ll be fine. Dario Amodei has taken the Giving What We Can Pledge (#43 here) to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies. Now you can stop worrying about the permanent underclass and focus on more important things.

I agree that total annihilation is a very serious risk, though fortunately I believe it is not the most likely outcome. But it seems pretty weird to me to posit that the most likely outcome is “infinite post-scarcity” when oligarch capture is what we already have.

(Regarding Alexander’s specific example: Dario Amodei has $3.7 billion. If he were to give away 10% of that, it would be $370 million, which would be good, but hardly usher in a radical utopia. The assumption seems to be that he would be one of the prevailing trillionaire oligarchs, and I don’t see how we can know that would be the case. Even if AI succeeds in general, that doesn’t mean that every company that makes AI succeeds. (Video games succeeded, but who buys Atari anymore?) Also, it seems especially wide-eyed to imagine that one man would ever own entire galaxies. We probably won’t even ever be able to reach other galaxies!)

People with this sort of utopian vision seem to imagine that all we need to do is make more stuff, and then magically it will all be distributed in such a way that everyone gets to have enough.

If Alexander were writing 200 years ago, I could even understand why he’d think that; there genuinely wasn’t enough stuff to go around, and it would have made sense to think that all we needed to do was solve that problem, and then the other problems would be easy.

But we no longer live in that world.

There is enough stuff to go around—at the very least this is true of all highly-developed countries, and it’s honestly pretty much true of the world as a whole. The problem is very much that it isn’t going around.

Elon Musk’s net wealth is now estimated at over $780 billion. Seven hundred and eighty billion dollars. He could give $90 to every person in the world (all 8.3 billion of us). He could buy a home (median price $400,000—way higher than it was just a few years ago) for every homeless person in America (about 750,000 people) and still have half his wealth left over. He could give $900 to every single person of the 831 million people who live below the world extreme poverty threshold—thus eliminating extreme poverty in the world for a year. (And quite possibly longer, as all those people are likely to be more productive now that they are well-fed.) He has chosen to do none of these things, because he wants to see number go up.
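Those figures are easy to check for yourself. Here is a quick back-of-the-envelope sketch in Python, using the same rounded numbers quoted above:

```python
# Back-of-the-envelope check of the figures above (all rounded estimates).
musk_wealth = 780e9                  # estimated net worth, USD
world_pop = 8.3e9                    # approximate world population

per_person = musk_wealth / world_pop
print(f"${per_person:.0f} per person on Earth")                # ~$94

median_home = 400e3                  # median US home price, USD
homeless_us = 750e3                  # approximate US homeless population
cost_of_homes = median_home * homeless_us
print(f"${cost_of_homes / 1e9:.0f}B to house them all")        # $300B
print(f"${(musk_wealth - cost_of_homes) / 1e9:.0f}B left over")  # $480B

extreme_poor = 831e6                 # people below the extreme poverty line
print(f"${musk_wealth / extreme_poor:.0f} each for the extreme poor")  # ~$939
```

If anything, the round numbers in the text are conservative: the per-person figures come out slightly higher than $90 and $900, and housing every homeless American would leave well over half the fortune intact.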

That’s just one man. If you add up all the wealth of all the world’s billionaires—just billionaires, so we’re not even counting people with $50 million or $100 million or $500 million—it totals over $16 trillion. This is enough to not simply end extreme poverty for a year, but to establish a fund that would end it forever.

And don’t tell me that they can’t really do this because it’s all tied up in stocks and not liquid. UNICEF happily accepts donations in stock. Giving UNICEF $10 trillion in stocks absolutely would permanently end extreme poverty worldwide. And they could donate those stocks today. They are choosing not to.

I still think that AI is a bubble that’s going to burst and trigger a financial crisis. But there is some chance that AI actually does become a revolutionary new technology that radically increases productivity. (In fact, I think this will happen, eventually. I just think we’re a paradigm or two away from that, and LLMs are largely a dead end.)

But even if that happens, unless we have had radical changes in our economy and society, it will not usher in a new utopian era of plenty for all.

How do I know this? Because if that were what the powers that be wanted to happen, they would have already started doing it. The super-rich are now so absurdly wealthy that they could easily effect great reductions in poverty at home and abroad while costing themselves basically nothing in terms of real standard of living, but they are choosing not to do that. And our governments could be taxing them more and using those funds to help people, and they are by and large choosing not to do that either.

The notion seems to be similar to “trickle-down economics”: Once the rich get rich enough, they’ll finally realize that money can’t buy happiness and start giving away their vast wealth to help people. But if that didn’t happen at $100 million, or $1 billion, or $10 billion, or $100 billion, I see no reason to think that it will happen at $1 trillion or $10 trillion or even $100 trillion.

The confidence game

Dec 14 JDN 2461024

Our society rewards confidence. Indeed, it seems to do so without limit: The more confident you are, the more successful you will be, the more prestige you will gain, the more power you will have, the more money you will make. It doesn’t seem to matter whether your confidence is justified; there is no punishment for overconfidence and no reward for humility.

If you doubt this, I give you Exhibit A: President Donald Trump.

He has nothing else going for him. He manages to epitomize almost every human vice and lack in almost every human virtue. He is ignorant, impulsive, rude, cruel, incurious, bigoted, incompetent, selfish, xenophobic, racist, and misogynist. He has no empathy, no understanding of justice, and little capacity for self-control. He cares nothing for truth and lies constantly, even to the point of pathology. He has been convicted of multiple felonies. His businesses routinely go bankrupt, and he saves his wealth mainly through fraud and lawsuits. He has publicly admitted to sexually assaulting adult women, and there is mounting evidence that he has also sexually assaulted teenage girls. He is, in short, one of the worst human beings in the world. He does not have the integrity or trustworthiness to be an assistant manager at McDonald’s, let alone President of the United States.

But he thinks he’s brilliant and competent and wise and ethical, and constantly tells everyone around him that he is—and millions of people apparently believe him.

To be fair, confidence is not the only trait that our society rewards. Sometimes it does actually reward hard work, competence, or intellect. But in fact it seems to reward these virtues less consistently than it rewards confidence. And quite frankly I’m not convinced our society rewards honesty at all; liars and frauds seem to be disproportionately represented among the successful.

This troubles me most of all because confidence is not a virtue.

There is nothing good about being confident per se. There is virtue in not being underconfident, because underconfidence prevents you from taking actions you should take. But there is just as much virtue in not being overconfident, because overconfidence makes you take actions you shouldn’t—and if anything, overconfidence is the more dangerous of the two. Yet our culture appears utterly incapable of discerning whether confidence is justifiable—even in the most blatantly obvious cases—and instead rewards everyone all the time for being as confident as they can possibly be.

In fact, the most confident people are usually less competent than the most humble people—because when you really understand something, you also understand how much you don’t understand.

We seem totally unable to tell whether someone who thinks they are right is actually right; and so, whoever thinks they are right is assumed to be right, all the time, every time.

Some of this may even be genetic, a heuristic that perhaps made more sense in our ancient environment. Even quite young children already are more willing to trust confident answers than hesitant ones, in multiple experiments.

Studies suggest that experts are just as overconfident as anyone else, but to be frank, I think this is because you don’t get to be called an expert unless you’re overconfident; people with intellectual humility are filtered out by the brutal competition of academia before they can get tenure.

I guess this is also personal for me.

I am not a confident person. Temperamentally, I just feel deeply uncomfortable going out on a limb and asserting things when I’m not entirely certain of them. I also have something of a complex about ever being perceived as arrogant or condescending, maybe because people often seem to perceive me that way even when I am actively trying to do the opposite. A lot of people seem to take you as condescending when you simply acknowledge that you have more expertise on something than they do.

I am also apparently a poster child for Impostor Syndrome. I once went to an Impostor Syndrome workshop with a couple dozen other people where we played a bingo game of Impostor Syndrome traits and behaviors—and I won. I once went to a lecture by George Akerlof where he explained that he attributed his Nobel Prize more to luck and circumstances than any particular brilliance on his part—and I guarantee you, in the extremely unlikely event I ever win a prize like that, I’ll say the same.

Compound this with the fact that our society routinely demands confidence in situations where absolutely no one could ever justify being confident.

Consider a job interview, when they ask you: “Why are you the best candidate for this job?” I couldn’t possibly know that. No one in my position could possibly know that. I literally do not know who your other candidates are in order to compare myself to them. I can tell you why I am qualified, but that’s all I can do. I could be the best person for the job, but I have no idea if I am. It’s your job to figure that out, with all the information in front of you—and I happen to know that you’re actually terrible at it, even with all that information I don’t have access to. If I tell you I know I’m the best person for the job, I am, by construction, either wildly overconfident or lying. (And in my case, it would definitely be lying.)

In fact, if I were a hiring manager, I would probably disqualify anyone who told me they were the best person for the job—because the one thing I now know about them is that they are either overconfident or willing to lie. (But I’ll probably never be a hiring manager.)

Likewise, I’ve often been told when pitching creative work to explain why I am the best or only person who could bring this work to life, or to provide accurate forecasts of how much the work would sell if published. I almost certainly am not the best or only person who could do anything—only a handful of people on Earth could realistically say that they are, and they’ve all already won Oscars or Emmys or Nobel Prizes. Accurate sales forecasts for creative works are so difficult that even Disney Corporation, an ever-growing conglomerate media superpower with billions of dollars to throw at the problem and even more billions of dollars at stake in getting it right, still routinely puts out films that are financial failures.


They casually hand you impossible demands and then get mad at you when you say you can’t meet them. And then they go pick someone else who claims to be able to do the impossible.

There is some hope, however.

Some studies suggest that people can sometimes recognize and punish overconfidence—though, again, I don’t see how that can be reconciled with the success of Donald Trump. In this study of evaluating expert witnesses, the most confident witnesses were rated as slightly less reliable than the moderately-confident ones, but both were far above the least-confident ones.

Surprisingly simple interventions can make intellectual humility more salient to people, and make them more willing to trust people who express doubt—who are, almost without exception, the more trustworthy people.

But somehow, I think I have to learn to express confidence I don’t feel, because that’s how you succeed in our society.

How to be a deontological consequentialist

Dec 7 JDN 2461017

As is commonly understood, there are two main branches of normative ethics:

  • Deontology, on which morality consists in following rules and fulfilling obligations, and
  • Consequentialism, on which morality consists in maximizing good consequences.

The conflict between them has raged for centuries, with Kantians leading the deontologists and utilitarians leading the consequentialists. Both theories seem to have a lot of good points, but neither can decisively defeat the other.

I think this is because they are both basically correct.

In their strongest forms, deontology and consequentialism are mutually contradictory; but it turns out that you can soften each of them a little bit, and the results become compatible.

To make deontology a little more consequentialist, let’s ask a simple question:

What makes a rule worth following?

I contend that the best answer we have is “because following that rule would make the world better off than not following that rule”. (Even Kantians pretty much have to admit this: What maxim could you will to be an absolute law? Only a law that would yield good outcomes.)

That is, the ultimate justification of a sound deontology would be fundamentally consequentialist.

But lest the consequentialists get too smug, we can also ask them another question, which is a bit subtler:

How do you know which actions will ultimately have good consequences?

Sure, if we were omniscient beings who could perfectly predict the consequences of our actions across the entire galaxy on into the indefinite future, we could be proper act utilitarians who literally choose every single action according to a calculation of the expected utility.

But in practice, we have radical uncertainty about the long-term consequences of our actions, and can generally only predict the immediate consequences.

That leads to the next question:

Would you really want to live in a world where people optimized immediate consequences?

I contend that you would not, that such a world actually sounds like a dystopian nightmare.

Immediate consequences say that if a healthy person walks into a hospital and happens to have compatible organs for five people who need transplants, we should kill that person, harvest their organs, and give them to those five patients. (This is the organ-transplant variant of the Trolley Problem.)

Basically everyone recognizes that this is wrong. But why is it wrong? That’s thornier. One pretty convincing case is that a systematic policy of this kind would undermine trust in hospitals and destroy the effectiveness of healthcare in general, resulting in disastrous consequences far outweighing the benefit of saving those five people. But those aren’t immediate consequences, and indeed, it’s quite difficult to predict exactly how many crazy actions like this it would take to undermine people’s trust in hospitals, just how much it would undermine that trust, or exactly what the consequences of that lost trust would be.

So it seems like it’s actually better to have a rule about this.

This makes us into rule utilitarians: instead of trying to optimize literally every single action—which requires information we do not have and never will—we develop a system of rules that we can follow, heuristics that will allow us to get better outcomes in general, even if they can’t be guaranteed to produce the best possible outcome in any particular case.

That is, the output of a sophisticated consequentialism is fundamentally deontological.

We have come at the question of normative ethics from two very different directions, but the results turned out basically the same:

We should follow the rules that would have the best consequences.

The output of our moral theory is rules, like deontology; but its fundamental justification is based on outcomes, like consequentialism.

In my experience, when I present this account to staunch deontologists, they are pretty much convinced by it. They’re prepared to concede that the fundamental justification rests on consequences, if it allows them to keep their rules.

The resistance I get is mainly from staunch consequentialists, who insist that it’s not so difficult to optimize individual actions, and so we should just do that instead of making all these rules.

So it is to those consequentialists, particularly those who say “rule utilitarianism collapses into act utilitarianism”, that the rest of this post is addressed.

First, let me say that I agree.

In the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents, rule utilitarianism mathematically collapses into act utilitarianism. That is a correct theorem.

However, we do not live in the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents. We are not even close to that ideal case; we will never be close to that ideal case. Indeed, I think part of the problem here is that you fail to fully grasp the depth and width of the chasm between here and there. Even a galactic civilization of a quintillion superhuman AIs would still not be close to that ideal case.

Quite frankly, humans aren’t even particularly good at forecasting what will make themselves happy.

There are massive errors and systematic biases in human affective forecasting.

One of the most important biases is impact bias: People systematically overestimate the impact of individual events on their long-term happiness. Some of this seems to be just due to focus: Paying attention to a particular event exaggerates its importance in your mind, and makes it harder for you to recall other events that might push your emotions in a different direction. Another component is called immune neglect: people fail to account for their own capacity to habituate to both pleasant and unpleasant experiences. (This effect is often overstated: It’s a common misconception that lottery winners are no happier than they were before. No, they absolutely are happier, on average; they’re just not as much happier as they predicted themselves to be.)

People also use inconsistent time discounting: $10 today is judged as better than $11 tomorrow, but $10 in 364 days is not regarded as better than $11 in 365 days—so if I made a decision a year ago, I’d want to change it now. (The correct answer, by the way, is to take the $11; a discount rate of 10% per day compounds to a staggering 128,000,000,000,000,000% APR—seriously; check it yourself—so you’d better not be discounting at that rate, unless you’re literally going to die before tomorrow.)
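That APR really is worth checking yourself; here is a minimal sketch in Python, compounding a 10%-per-day discount rate over a year:

```python
# Preferring $10 today to $11 tomorrow implies a 10%-per-day discount rate.
# Compounded daily over a year, that rate becomes astronomical.
daily_rate = 11 / 10 - 1                    # 0.10 per day
annual_factor = (1 + daily_rate) ** 365     # ~1.28e15
apr_percent = (annual_factor - 1) * 100
print(f"APR: {apr_percent:.3g}%")           # on the order of 10^17 percent
```

The point is not the exact digits but the scale: no sane investment logic discounts at seventeen orders of magnitude per year, which is why the preference reversal is irrational.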

Now, compound that with the fact that different human beings come at the world from radically different perspectives and with radically different preferences.

How good do you think we are at predicting what will make other people happy?

Damn right: We’re abysmal.

Basically everyone assumes that what they want and what they would feel is also what other people will want and feel—which, honestly, explains a lot about politics. As a result, my prediction of your feelings is more strongly correlated with my prediction of my feelings than it is with your actual feelings.

The impact bias is especially strong when forecasting other people’s feelings in response to our own actions: We tend to assume that other people care more about what we do than they actually do—and this seems to be a major source of social anxiety.

People also tend to overestimate the suffering of others, and are generally willing to endure more pain than they are willing to inflict upon others. (This one seems like it might be a good thing!)

Even when we know people well, we can still be totally blindsided by their emotional reactions. We’re just really awful at this.

Does this just mean that morality is hopeless? We have no idea what we’re doing?

Fortunately, no. Because while no individual can correctly predict or control the outcomes of particular actions, the collective action of well-designed institutions can in fact significantly improve the outcomes of policy.

This is why we have things like the following:

  • Laws
  • Courts
  • Regulations
  • Legislatures
  • Constitutions
  • Newspapers
  • Universities

These institutions—which form the backbone of liberal democracy—aren’t simply arbitrary. They are the result of hard-fought centuries, a frothing, volatile, battle-tested mix of intentional design and historical evolution.

Are these institutions optimal? Good heavens, no!

But we have no idea what optimal institutions look like, and probably never will. (Those galaxy-spanning AIs will surely have a better system than this; but even theirs probably won’t be optimal.) Instead, what we are stuck with are the best institutions we’ve come up with so far.

Moreover, we do have very clear empirical evidence at this point that some form of liberal democracy with a mixed economy is the best system we’ve got so far. One can reasonably debate whether Canada is doing better or worse than France, or whether the system in Denmark could really be scaled to the United States, or just what the best income tax rates are; but there is a large, obvious, and important difference between life in a country like Canada or Denmark and life in a country like Congo or Afghanistan.

Indeed, perhaps there is no better pair to compare than North and South Korea: Those two countries are right next to each other, speak the same language, and started in more or less the same situation; but the south got good institutions and the north got bad ones, and now the difference between them couldn’t be more stark. (Honestly, this is about as close as we’re ever likely to get to a randomized controlled experiment in macroeconomics.)

People in South Korea now live about as well as some of the happiest places in the world; their GDP per capita PPP is about $65,000 per year, roughly the same as Canada. People in North Korea live about as poorly as it is possible for humans to live, subject to totalitarian oppression and living barely above subsistence; their GDP per capita PPP is estimated to be $600 per year—less than 1% as much.

The institutions of South Korea are just that much better.

Indeed, there’s one particular aspect of good institutions that seems really important, yet is actually kind of hard to justify in act-utilitarian terms:

Why is freedom good?

A country’s level of freedom is almost perfectly correlated with its overall level of happiness and development. (Yes, even on this measure, #ScandinaviaIsBetter.)

But why? In theory, letting people do whatever they want could actually lead to really bad outcomes—and indeed, occasionally it does. There’s even a theorem that liberty is incompatible with full Pareto-efficiency. But all the countries with the happiest people seem to have a lot of liberty, and indeed the happiest ones seem to have the most. How come?

My answer:

Personal liberty is a technology for heuristic utility maximization.

In the ideal case, we wouldn’t really need personal liberty; you could just compel everyone to do whatever is optimal all the time, and that would—by construction—be optimal. It might even be sort of nice: You don’t need to make any difficult decisions, you can just follow the script and know that everything will turn out for the best.

But since we don’t know what the optimal choice is—even in really simple cases, like what you should eat for lunch tomorrow—we can’t afford to compel people in this way. (It would also be incredibly costly to implement such totalitarian control, but that doesn’t stop some governments from trying!)

Then there are disagreements: What I think is optimal may not be what you think is optimal, and in truth we’re probably both wrong (but one of us may be less wrong).

And that’s not even getting into conflicts of interest: We aren’t just lacking in rationality, we’re also lacking in benevolence. Some people are clearly much more benevolent than others, but none of us are really 100% selfless. (Sadly, I think some people are 100% selfish.)

In fact, this is a surprisingly deep question:

Would the world be better if we were selfless?

Could there actually be some advantage in aggregate to having some degree of individual self-interest?

Here are some ways that might hold, just off the top of my head:

  • Partial self-interest supports an evolutionary process of moral and intellectual development that otherwise would be stalled or overrun by psychopaths—see my post on Rousseaus and Axelrods
  • Individuals have much deeper knowledge of their own preferences than anyone else’s, and thus can optimize them much better. (Think about it: This is true even of people you know very well. Otherwise, why would we ever need to ask our spouses one of the most common questions in any marriage: “Honey, what do you want for dinner tonight?”)
  • Self-interest allows for more efficient economic incentives, and thus higher overall productivity.

Of course, total selfishness is clearly not optimal—that way lies psychopathy. But some degree of selfishness might actually be better for long-term aggregate outcomes than complete altruism, and this is to some extent an empirical question.

Personal liberty solves a lot of these problems: Since people are best at knowing their own preferences, let people figure out on their own what’s good for them. Give them the freedom to live the kind of life they want to live, within certain reasonable constraints to prevent them from causing great harm to others or suffering some kind of unrecoverable mistake.

This isn’t exactly a new idea; it’s basically the core message of John Stuart Mill’s On Liberty (which I consider a good candidate for the best book ever written—seriously, it beats the Bible by a light-year). But by putting it in more modern language, I hope to show that deontology and consequentialism aren’t really so different after all.

And indeed, for all its many and obvious flaws, freedom seems to work pretty well—at least as well as anything we’ve tried.

What we still have to be thankful for

Nov 30 JDN 2461010

This post has been written before, but will go live after, Thanksgiving.

Thanksgiving is honestly a very ambivalent holiday.

The particular events it celebrates don’t seem quite so charming in their historical context: Rather than finding peace and harmony with all Native Americans, the Pilgrims in fact allied with the Wampanoag against the Narragansett, though they did later join forces with the Narragansett in order to conquer the Pequot. And of course we all know how things went for most Native American nations in the long run.

Moreover, even the gathering of family comes with some major downsides, especially in a time of extreme political polarization such as this one. I won’t be joining any of my Trump-supporting relatives for dinner this year (and they probably wouldn’t have invited me anyway), but the fact that this means becoming that much more detached from a substantial part of my extended family is itself a tragedy.

This year in particular, US policy has gotten so utterly horrific that it often feels like we have nothing to be thankful for at all, that all we thought was good and just in the world could simply be torn away at a moment’s notice by raving madmen. It isn’t really quite that bad—but it feels that way sometimes.

It also felt a bit uncanny celebrating Thanksgiving a few years ago when we were living in Scotland, for the UK does not celebrate Thanksgiving, but absolutely does celebrate Black Friday: Holidays may be local, but capitalism is global.

But fall feasts of giving thanks are far more ancient than that particular event in 1621 that we have mythologized to oblivion. They appear in numerous cultures across the globe—indeed their very ubiquity may be why the Wampanoag were so willing to share one with the Pilgrims despite their cultures having diverged something like 40,000 years prior.

And I think that it is by seeing ourselves in that context—as part of the whole of humanity—that we can best appreciate what we truly do have to be thankful for, and what we truly do have to look forward to in the future.

Above all, medicine.

We have actual treatments for some diseases, even actual cures for some. By no means all, of course—and it often feels like we are fighting an endless battle even against what we can treat.

But it is worth reflecting on the fact that aside from the last few centuries, this has simply not been the case. There were no actual treatments. There was no real medicine.

Oh, sure, there were attempts at medicine; and there was certainly what we would think of as more like “first aid”: bandaging wounds, setting broken bones. Even amputation and surgery were done sometimes. But most medical treatment was useless or even outright harmful—not least because for most of history, most of it was done without anesthetic or even antiseptic!

There were various herbal remedies for various ailments, some of which even happened to work: Willow bark genuinely helps with pain, St. John’s wort is a real antidepressant, and some traditional burn creams are surprisingly effective.

But there was no system in place for testing medicine, no way of evaluating what remedies worked and what didn’t. And thus, for every remedy that worked as advertised, there were a hundred more that did absolutely nothing, or even made things worse.

Today, it can feel like we are all chronically ill, because so many of us take so many different pills and supplements. But this is not a sign that we are ill—it is a sign that we can be treated. The pills are new, yes—but the illnesses they treat were here all along.

I don’t see any particular reason to think that Roman plebs or medieval peasants were any less likely to get migraines than we are; but they certainly didn’t have access to sumatriptan or rimegepant. Maybe they were less likely to get diabetes, but mainly because they were much more likely to be malnourished. (Well, okay, also because they got more exercise, which we surely could stand to get more of.) And the only reason they didn’t get Alzheimer’s was that they usually didn’t live long enough.

Looking further back, before civilization, human health actually does seem to have been better: Foragers were rarely malnourished, weren’t exposed to as many infectious pathogens, and certainly got plenty of exercise. But should a pathogen like smallpox or influenza make it to a forager tribe, the results were often utterly catastrophic.

Today, we don’t really have the sort of plague that human beings used to deal with. We have pandemics, which are also horrible, but far less so. We were horrified by losing 0.3% of our population to COVID; a society that lost only 0.3% (or even ten times that, 3%) to the Black Death would have called it a miracle, for a more typical rate was 30%.

At 0.3%, most of us knew somebody, or knew somebody who knew somebody, who died from COVID. At 3%, nearly everyone would know somebody, and most would know several. At 30%, nearly everyone would have close family and friends who died.
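To get a feel for these thresholds, here is a rough sketch of the chance of knowing at least one victim directly. The model is mine, not the post’s: it assumes deaths are independent across a social circle of about 150 acquaintances (the Dunbar-number figure is my assumption).

```python
# Rough sketch (my assumptions): probability of knowing at least
# one victim, given an independent mortality rate p across a
# social circle of ~150 acquaintances.

def p_know_victim(mortality_rate, circle_size=150):
    """Chance that at least one of circle_size acquaintances dies."""
    return 1 - (1 - mortality_rate) ** circle_size

for rate in (0.003, 0.03, 0.30):
    print(f"{rate:.1%} mortality -> {p_know_victim(rate):.0%} chance")
```

At 0.3% this gives roughly a one-in-three chance of directly knowing a victim; at 3% it is nearly certain; at 30% it is certain many times over, which matches the intuition above.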

Then there is infant mortality.

As recently as 1950—this is living memory—the global infant mortality rate was 14.6%. This is about half what it had been historically; for most of human history, roughly a third of all children died between birth and the age of 5.

Today, it is 2.5%.

Where our distant ancestors expected two out of three of their children to survive, and our own great-grandparents expected five out of six, we can now safely expect thirty-nine out of forty to live. This is the difference between “nearly every family has lost a child” and “most families have not lost a child”.

And this is worldwide; in highly-developed countries it’s even better. The US has a relatively high infant mortality rate by the standards of highly-developed countries (indeed, are we even highly-developed, or are we becoming like Saudi Arabia, extremely rich but so unequal that it doesn’t really mean anything to most of our people?). Yet even for us, the infant mortality rate is 0.5%, so we can expect 199 out of 200 to survive. This is at the level of “most families don’t even know someone who has lost a child.”
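The survival odds quoted above follow directly from the mortality rates; here is a quick sanity check (the helper function is mine, for illustration):

```python
# Convert infant-mortality rates into "X out of Y survive" odds.

def survivors_per(n, mortality_rate):
    """Expected survivors out of n births at the given mortality rate."""
    return n * (1 - mortality_rate)

# 1950 global rate, 14.6%: roughly five out of six survive
print(survivors_per(6, 0.146))    # about 5.1
# Today's global rate, 2.5%: thirty-nine out of forty
print(survivors_per(40, 0.025))   # about 39
# US rate, 0.5%: 199 out of 200
print(survivors_per(200, 0.005))  # about 199
```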

Poverty is a bit harder to measure.

I am increasingly dubious of conventional measures of poverty; ever since compiling my Index of Necessary Expenditure, I am convinced that economists in general, and perhaps US economists in particular, are systematically underestimating the cost of living and thereby underestimating the prevalence of poverty. (I don’t think this is intentional, mind you; I just think it’s a result of using convenient but simplistic measures and not looking too closely into the details.) I think not being able to sustainably afford a roof over your head constitutes being poor—and that applies to a lot of people.

Yet even with that caveat in mind, it’s quite clear that global poverty has greatly declined in the long run.

At the “extreme poverty” level, currently defined as consuming $1.90 per day at purchasing power parity (just under $700 per year, less than 2% of the median personal income in the United States), the number of people has fallen from 1.9 billion in 1990 to about 700 million today: from 36% of the world’s population to under 9%.
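A back-of-the-envelope check on those figures; the world-population estimates (about 5.3 billion in 1990 and 8 billion today) are my own rough assumptions, not from the post:

```python
# Back-of-the-envelope check on the extreme-poverty figures,
# using approximate world populations (my assumption):
# ~5.3 billion in 1990, ~8.0 billion today.

daily = 1.90
print(f"${daily * 365:.2f} per year")  # just under $700

share_1990  = 1.9e9 / 5.3e9   # ~36% of world population
share_today = 0.7e9 / 8.0e9   # ~8.75%, i.e. under 9%
print(f"{share_1990:.2%} -> {share_today:.2%}")
```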

Now, there are good reasons to doubt that “purchasing power parity” can really be estimated as accurately as we would like, so it’s not entirely clear that people living on “$2 per day PPP” are really living at less than 2% of the standard of living of a typical American (honestly, to me that just sounds like… dead). But they are definitely living at a much worse standard, and there are far fewer people living at such a standard today than there were not all that long ago. These are people who don’t have reliable food, clean water, or even basic medicine; that used to describe over a third of humanity, and it no longer does.

(And I would like to note that actually finding such a person and giving them a few hundred dollars absolutely would change their life; this is the sort of thing GiveDirectly does. We may not know exactly how to evaluate their standard of living, but we do know that the actual amount of money they have access to is very, very small.)

There are many ways in which the world could be better than it is.

Indeed, part of the deep, overwhelming outrage I feel pretty much all the time lies in the fact that it would be so easy to make things so much better for so many people, if there weren’t so many psychopaths in charge of everything.

Increased foreign aid is one avenue by which that could be achieved—so, naturally, Trump cut it tremendously. More progressive taxation is another—so, of course, we get tax cuts for the rich.

Just think about the fact that there are families with starving children for whom a $500 check could change their lives; but nobody is writing that check, because Elon Musk needs to become a literal trillionaire.

There are so many water lines and railroad tracks and bridges and hospitals and schools not being built because the money that would have paid for them is tied up in making already unfathomably-rich people even richer.

But even despite all that, things are getting better. Not every day, not every month, not even every year; this past year was genuinely, on net, a bad one. But nearly every decade, every generation, and certainly every century (for at least the last few), humanity has fared better than it did in the one before.

As long as we can keep that up, we still have much to hope for—and much to be thankful for.