Hope for the new year

Jan 4 JDN 2461045

We have just entered 2026. I remember that around this time last year I felt a deep, visceral despair: Trump had just been elected and was about to be inaugurated, and I could only dread what the next year would bring. For the next several weeks I posted sections of my book The Logic of Kindness (which, at this point, will probably never actually be published), partly because I felt—and still feel—that these ideas do deserve to be out in the world, but also partly because I had no creative energy to write anything else.

Well, the first year of Trump’s second term was just about as bad as we thought it would be. He has torn apart global institutions that took decades to forge; he has caused thousands if not millions of unnecessary deaths; he has alienated our closest allies—seriously, CANADA!?—and cozied up to corrupt, authoritarian dictators around the world, because that is exactly what he aspires to be.

It’s true, he hasn’t collapsed the economy (yet). Inflation has been about as bad as it was before, despite the ludicrous tariffs. (He promised to bring prices down, but we all knew he wouldn’t. I honestly expected them to go up more than this.) He also hasn’t started any wars, though he looks damn close to it in Venezuela. And as he continues to make a mockery of our whole government, the checks and balances that are supposed to be reining him in have languished unused, because the Republicans control all three branches.

Trump is still in office, and poised to be for three more years.

Yet, at last, there is some glimmer of hope on the horizon.

Other Republicans are starting to turn against him, in part because of his obvious and undeniable connections to Jeffrey Epstein and his ring of serial rapists. (Let’s be clear about that, by the way: They’re not just pedophiles. “Pedophile” merely means you are sexually attracted to children. Some pedophiles seek treatment. These men were rapists who sexually assaulted actual teenagers. And at this point it strains credulity to imagine that Donald Trump himself wasn’t an active participant on multiple occasions—no amount of incompetent redactions will change that.)

Trump’s net approval is now negative on almost every major issue, especially on inflation. It is now a statistical certainty that more Americans disapprove of him than approve of him.

Both of these things should have happened more than a year ago, if not a decade ago; but hey, better late than never.

Democrats—even very left-wing Democrats, like Mamdani—have done very well in elections lately, and seem poised to continue doing well in the 2026 midterm election. If we can actually secure a majority in both houses of Congress, we might finally be able to start undoing some of the damage Trump has done—or at least stop him from doing even more.

I’m sure there will be plenty of bad things that continue to happen this year, and that many of them will be Donald Trump’s fault. But I no longer feel the deep despair I felt last year; it seems like things might finally be turning around for America—and thus for the world.

The longest night

Dec 21 JDN 2461031

When this post goes live, it will be (almost exactly) the winter solstice in the Northern Hemisphere. In our culture, derived mainly from European influences, we associate this time of year with Christmas; but in fact solstice celebrations are much more ancient and universal than that. Humans have been engaging in some sort of ritual celebration—often involving feasts and/or gifts—around the winter solstice in basically every temperate region of the world for as far back as we are able to determine. (You don’t see solstice celebrations so much in tropical regions, because “winter” isn’t really a thing there; those cultures tend to adopt lunar or lunisolar calendars instead.) Presumably humans have been doing something along these lines for about as long as there have been humans to do them.

I think part of why solstice celebrations are so enduring is that the solstice has both powerful symbolism and practical significance. It is the longest night of the year, when the sky will be darkest for the longest time and light for the shortest—above the Arctic Circle, the night lasts 24 hours and the sky never gets light at all. But from that point forward, the light will start to return. The solstice also heralds the start of the winter months, when the air is cold enough to be dangerous and food becomes much scarcer.

Of course, today we don’t have to worry about that so much: We have electric heating and refrigeration, so we can stay warm inside and eat pretty much whatever we want all year round. The practical significance of the solstice has thus greatly decreased for us.

Yet it’s still a very symbolic time: The darkness is at its worst, the turning point is reached, the light will soon return. And when we reflect on how much safer we are than our ancestors were during this time of year, we may find it in our hearts to feel some gratitude for how far humanity has come—even if we still have terribly far yet to go.

And this year, in particular, I think we are seeing the turning point for a lot of darkness. The last year especially has been a nightmare for, well, the entire free world—not to mention all the poor countries who depended on us for aid—but at last it seems like we are beginning to wake from that nightmare. Within margin of error, Trump’s approval rating is at the lowest it has ever been, about 43% (still shockingly high, I admit), and the Republicans seem to be much more divided and disorganized than they were just a year ago, some of them even openly defying Trump instead of bowing at his every word.

Of course, while the motions of the Earth are extraordinarily regular and predictable, changes in society are not. The solstice will certainly happen on schedule, and the days will certainly get longer for the next six months after that—I’d give you million-to-one odds on either proposition. (Frankly, if I ever had to pay, we’d probably have bigger problems!)

But as for our political, economic, and cultural situation, things could very well get worse again before they get better. There’s even a chance they won’t get better, that it’s all downhill from here—but I believe those chances are very small. Things are not so bleak as that.

While there have certainly been setbacks and there will surely be more, on the whole humanity’s trajectory has been upward, toward greater justice and prosperity. Things feel so bad right now, not so much because they are bad in absolute terms (would you rather live as a Roman slave or a Medieval peasant?), but because this is such a harsh reversal in an otherwise upward trend—and because we can see just how easy it would be to do even better still, if the powers that be had half the will to do so.

So here’s hoping that on this longest night, at least some of the people with the power to make things better will see a little more of the light.

The confidence game

Dec 14 JDN 2461024

Our society rewards confidence. Indeed, it seems to do so without limit: The more confident you are, the more successful you will be, the more prestige you will gain, the more power you will have, the more money you will make. It doesn’t seem to matter whether your confidence is justified; there is no punishment for overconfidence and no reward for humility.

If you doubt this, I give you Exhibit A: President Donald Trump.

He has nothing else going for him. He manages to epitomize almost every human vice and to lack almost every human virtue. He is ignorant, impulsive, rude, cruel, incurious, bigoted, incompetent, selfish, xenophobic, racist, and misogynist. He has no empathy, no understanding of justice, and little capacity for self-control. He cares nothing for truth and lies constantly, even to the point of pathology. He has been convicted of multiple felonies. His businesses routinely go bankrupt, and he saves his wealth mainly through fraud and lawsuits. He has publicly admitted to sexually assaulting adult women, and there is mounting evidence that he has also sexually assaulted teenage girls. He is, in short, one of the worst human beings in the world. He does not have the integrity or trustworthiness to be an assistant manager at McDonald’s, let alone President of the United States.

But he thinks he’s brilliant and competent and wise and ethical, and constantly tells everyone around him that he is—and millions of people apparently believe him.

To be fair, confidence is not the only trait that our society rewards. Sometimes it does actually reward hard work, competence, or intellect. But in fact it seems to reward these virtues less consistently than it rewards confidence. And quite frankly I’m not convinced our society rewards honesty at all; liars and frauds seem to be disproportionately represented among the successful.

This troubles me most of all because confidence is not a virtue.

There is nothing good about being confident per se. There is virtue in not being underconfident, because underconfidence prevents you from taking actions you should take. But there is just as much virtue in not being overconfident, because overconfidence makes you take actions you shouldn’t—and if anything, it is the more dangerous of the two. Yet our culture appears utterly incapable of discerning whether confidence is justifiable—even in the most blatantly obvious cases—and instead rewards everyone all the time for being as confident as they can possibly be.

In fact, the most confident people are usually less competent than the most humble people—because when you really understand something, you also understand how much you don’t understand.

We seem totally unable to tell whether someone who thinks they are right is actually right; and so, whoever thinks they are right is assumed to be right, all the time, every time.

Some of this may even be genetic, a heuristic that perhaps made more sense in our ancient environment. In multiple experiments, even quite young children are already more willing to trust confident answers than hesitant ones.

Studies suggest that experts are just as overconfident as anyone else, but to be frank, I think this is because you don’t get to be called an expert unless you’re overconfident; people with intellectual humility are filtered out by the brutal competition of academia before they can get tenure.

I guess this is also personal for me.

I am not a confident person. Temperamentally, I just feel deeply uncomfortable going out on a limb and asserting things when I’m not entirely certain of them. I also have something of a complex about ever being perceived as arrogant or condescending, maybe because people often seem to perceive me that way even when I am actively trying to do the opposite. A lot of people seem to take you as condescending when you simply acknowledge that you have more expertise on something than they do.

I am also apparently a poster child for Impostor Syndrome. I once went to an Impostor Syndrome workshop with a couple dozen other people, where they played a bingo game of Impostor Syndrome traits and behaviors—and I won. I once went to a lecture by George Akerlof where he explained that he attributed his Nobel Prize more to luck and circumstances than to any particular brilliance on his part—and I guarantee you, in the extremely unlikely event I ever win a prize like that, I’ll say the same.

Compound this with the fact that our society routinely demands confidence in situations where absolutely no one could ever justify being confident.

Consider a job interview, when they ask you: “Why are you the best candidate for this job?” I couldn’t possibly know that. No one in my position could possibly know that. I literally do not know who your other candidates are in order to compare myself to them. I can tell you why I am qualified, but that’s all I can do. I could be the best person for the job, but I have no idea if I am. It’s your job to figure that out, with all the information in front of you—and I happen to know that you’re actually terrible at it, even with all that information I don’t have access to. If I tell you I know I’m the best person for the job, I am, by construction, either wildly overconfident or lying. (And in my case, it would definitely be lying.)

In fact, if I were a hiring manager, I would probably disqualify anyone who told me they were the best person for the job—because the one thing I now know about them is that they are either overconfident or willing to lie. (But I’ll probably never be a hiring manager.)

Likewise, when pitching creative work I’ve often been told to explain why I am the best or only person who could bring this work to life, or to provide accurate forecasts of how much the work would sell if published. I almost certainly am not the best or only person who could do anything—only a handful of people on Earth could realistically say that they are, and they’ve all already won Oscars or Emmys or Nobel Prizes. Accurate sales forecasts for creative works are so difficult that even Disney Corporation, an ever-growing conglomerate media superpower with billions of dollars to throw at the problem and even more billions of dollars at stake in getting it right, still routinely puts out films that are financial failures.

They casually hand you impossible demands and then get mad at you when you say you can’t meet them. And then they go pick someone else who claims to be able to do the impossible.

There is some hope, however.

Some studies suggest that people can sometimes recognize and punish overconfidence—though, again, I don’t see how that can be reconciled with the success of Donald Trump. In one study evaluating expert witnesses, the most confident witnesses were rated as slightly less reliable than the moderately confident ones, but both were far above the least confident ones.

Surprisingly simple interventions can make intellectual humility more salient to people, and make them more willing to trust people who express doubt—who are, almost without exception, the more trustworthy people.

But somehow, I think I have to learn to express confidence I don’t feel, because that’s how you succeed in our society.

How to be a deontological consequentialist

Dec 7 JDN 2461017

As is commonly understood, there are two main branches of normative ethics:

  • Deontology, on which morality consists in following rules and fulfilling obligations, and
  • Consequentialism, on which morality consists in maximizing good consequences.

The conflict between them has raged for centuries, with Kantians leading the deontologists and utilitarians leading the consequentialists. Both theories seem to have a lot of good points, but neither can decisively defeat the other.

I think this is because they are both basically correct.

In their strongest forms, deontology and consequentialism are mutually contradictory; but it turns out that you can soften each of them a little bit, and the results become compatible.

To make deontology a little more consequentialist, let’s ask a simple question:

What makes a rule worth following?

I contend that the best answer we have is “because following that rule would make the world better off than not following that rule”. (Even Kantians pretty much have to admit this: What maxim could you will to be an absolute law? Only a law that would yield good outcomes.)

That is, the ultimate justification of a sound deontology would be fundamentally consequentialist.

But lest the consequentialists get too smug, we can also ask them another question, which is a bit subtler:

How do you know which actions will ultimately have good consequences?

Sure, if we were omniscient beings who could perfectly predict the consequences of our actions across the entire galaxy on into the indefinite future, we could be proper act utilitarians who literally choose every single action according to a calculation of the expected utility.

But in practice, we have radical uncertainty about the long-term consequences of our actions, and can generally only predict the immediate consequences.

That leads to the next question:

Would you really want to live in a world where people optimized immediate consequences?

I contend that you would not, that such a world actually sounds like a dystopian nightmare.

Immediate consequences say that if a healthy person walks into a hospital and happens to have compatible organs for five people who need donations, we should kill that person, harvest their organs, and give them to those five patients. (This is the organ transplant variant of the Trolley Problem.)

Basically everyone recognizes that this is wrong. But why is it wrong? That’s thornier. One pretty convincing case is that a systematic policy of this kind would undermine trust in hospitals and destroy the effectiveness of healthcare in general, resulting in disastrous consequences far outweighing the benefit of saving those five people. But those aren’t immediate consequences, and indeed, it’s quite difficult to predict exactly how many crazy actions like this it would take to undermine people’s trust in hospitals, just how much it would undermine that trust, or exactly what the consequences of that lost trust would be.

So it seems like it’s actually better to have a rule about this.

This makes us rule utilitarians: instead of trying to optimize literally every single action—which requires information we do not have and never will—we develop a system of rules that we can follow, heuristics that will allow us to get better outcomes in general, even if they can’t be guaranteed to produce the best possible outcome in any particular case.

That is, the output of a sophisticated consequentialism is fundamentally deontological.

We have come at the question of normative ethics from two very different directions, but the results turned out basically the same:

We should follow the rules that would have the best consequences.

The output of our moral theory is rules, like deontology; but its fundamental justification is based on outcomes, like consequentialism.

In my experience, when I present this account to staunch deontologists, they are pretty much convinced by it. They’re prepared to give up the fundamental justification to consequences if it allows them to have their rules.

The resistance I get is mainly from staunch consequentialists, who insist that it’s not so difficult to optimize individual actions, and so we should just do that instead of making all these rules.

So the rest of this post is addressed to those consequentialists, particularly the ones who say that “rule utilitarianism collapses into act utilitarianism”.

First, let me say that I agree.

In the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents, rule utilitarianism mathematically collapses into act utilitarianism. That is a correct theorem.

However, we do not live in the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents. We are not even close to that ideal case; we will never be close to that ideal case. Indeed, I think part of the problem here is that you fail to fully grasp the depth and width of the chasm between here and there. Even a galactic civilization of a quintillion superhuman AIs would still not be close to that ideal case.
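To make the gap concrete, here is a minimal Monte Carlo sketch (Python; every payoff, probability, and noise level in it is my own invention for illustration, not drawn from any study). An “act” optimizer picks whichever action has the higher noisy estimate of its immediate utility; a “rule” follower simply complies with a fixed heuristic whose entire value lies in a rare long-run harm that the act-by-act estimate cannot see:

```python
# Toy model: act-by-act optimization under forecasting noise vs. a fixed rule.
# All payoffs and probabilities here are invented for illustration.
import random
import statistics

def true_utility(action):
    """True long-run utility, which no agent observes directly."""
    if action == "follow_rule":
        return random.gauss(1.0, 0.2)       # modest, reliable payoff
    # "break_rule": tempting up front, with a rare, large delayed harm
    # (think: eroded trust in hospitals).
    payoff = random.gauss(1.5, 0.2)
    if random.random() < 0.05:              # hard-to-foresee systemic damage
        payoff -= 100.0
    return payoff

def act_utilitarian():
    """Estimates only the immediate utility of each action, with noise."""
    est_follow = 1.0 + random.gauss(0, 0.5)
    est_break = 1.5 + random.gauss(0, 0.5)  # the delayed harm is invisible here
    return "follow_rule" if est_follow >= est_break else "break_rule"

def rule_utilitarian():
    return "follow_rule"                    # the heuristic: just follow the rule

def average_outcome(policy, trials=100_000):
    return statistics.mean(true_utility(policy()) for _ in range(trials))

random.seed(0)
print("act-by-act:", round(average_outcome(act_utilitarian), 3))
print("rule-based:", round(average_outcome(rule_utilitarian), 3))
# The act-by-act optimizer averages far below the rule-follower, because it
# keeps choosing the tempting option whose rare long-run costs it cannot see.
```

This toy obviously rigs the game in one particular way; the point is just that once forecasts are noisy and long-run effects are invisible, “compute the expected utility of each act” stops being something anyone can actually do.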

Quite frankly, humans aren’t even particularly good at forecasting what will make them happy.

There are massive errors and systematic biases in human affective forecasting.

One of the most important biases is impact bias: People systematically overestimate the impact of individual events on their long-term happiness. Some of this seems to be just due to focus: Paying attention to a particular event exaggerates its importance in your mind, and makes it harder for you to recall other events that might push your emotions in a different direction. Another component is called immune neglect: People fail to account for their own capacity to habituate to both pleasant and unpleasant experiences. (This effect is often overstated: It’s a common misconception that lottery winners are no happier than they were before. No, they absolutely are happier, on average; they’re just not as much happier as they predicted themselves to be.)

People also use inconsistent time discounting: $10 today is judged as better than $11 tomorrow, but $10 in 364 days is not regarded as better than $11 in 365 days—so if I made a decision a year ago, I’d want to change it now. (The correct answer, by the way, is to take the $11; a discount rate of 10% per day is a staggering APR of over 120,000,000,000,000,000%—seriously; check it yourself—so you’d better not be discounting at that rate, unless you’re literally going to die before tomorrow.)
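If you want to check that figure, it takes a couple of lines (a quick Python sketch):

```python
# What APR does "prefer $10 today over $11 tomorrow" imply, compounded daily?
daily_rate = 11 / 10 - 1                           # 10% per day
apr_percent = ((1 + daily_rate) ** 365 - 1) * 100
print(f"{apr_percent:.3e} %")                      # ~1.283e+17, i.e. over 120 quadrillion percent
```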

Now, compound that with the fact that different human beings come at the world from radically different perspectives and with radically different preferences.

How good do you think we are at predicting what will make other people happy?

Damn right: We’re abysmal.

Basically everyone assumes that what they want and what they would feel is also what other people will want and feel—which, honestly, explains a lot about politics. As a result, my prediction of your feelings is more strongly correlated with my prediction of my feelings than it is with your actual feelings.

The impact bias is especially strong when forecasting other people’s feelings in response to our own actions: We tend to assume that other people care more about what we do than they actually do—and this seems to be a major source of social anxiety.

People also tend to overestimate the suffering of others, and are generally willing to endure more pain than they are willing to inflict upon others. (This one seems like it might be a good thing!)

Even when we know people well, we can still be totally blindsided by their emotional reactions. We’re just really awful at this.

Does this just mean that morality is hopeless? We have no idea what we’re doing?

Fortunately, no. Because while no individual can correctly predict or control the outcomes of particular actions, the collective action of well-designed institutions can in fact significantly improve the outcomes of policy.

This is why we have things like the following:

  • Laws
  • Courts
  • Regulations
  • Legislatures
  • Constitutions
  • Newspapers
  • Universities

These institutions—which form the backbone of liberal democracy—aren’t simply arbitrary. They are the result of hard-fought centuries, a frothing, volatile, battle-tested mix of intentional design and historical evolution.

Are these institutions optimal? Good heavens, no!

But we have no idea what optimal institutions look like, and probably never will. (Those galaxy-spanning AIs will surely have a better system than this; but even theirs probably won’t be optimal.) Instead, what we are stuck with are the best institutions we’ve come up with so far.

Moreover, we do have very clear empirical evidence at this point that some form of liberal democracy with a mixed economy is the best system we’ve got so far. One can reasonably debate whether Canada is doing better or worse than France, or whether the system in Denmark could really be scaled to the United States, or just what the best income tax rates are; but there is a large, obvious, and important difference between life in a country like Canada or Denmark and life in a country like Congo or Afghanistan.

Indeed, perhaps there is no better pair to compare than North and South Korea: Those two countries are right next to each other, speak the same language, and started in more or less the same situation; but the south got good institutions and the north got bad ones, and now the difference between them couldn’t be more stark. (Honestly, this is about as close as we’re ever likely to get to a randomized controlled experiment in macroeconomics.)

People in South Korea now live about as well as people in some of the happiest places in the world; their GDP per capita PPP is about $65,000 per year, roughly the same as Canada’s. People in North Korea live about as poorly as it is possible for humans to live, subject to totalitarian oppression and living barely above subsistence; their GDP per capita PPP is estimated to be $600 per year—less than 1% as much.

The institutions of South Korea are just that much better.

Indeed, there’s one particular aspect of good institutions that seems really important, yet is actually kind of hard to justify in act-utilitarian terms:

Why is freedom good?

A country’s level of freedom is almost perfectly correlated with its overall level of happiness and development. (Yes, even on this measure, #ScandinaviaIsBetter.)

But why? In theory, letting people do whatever they want could actually lead to really bad outcomes—and indeed, occasionally it does. There’s even a theorem (Amartya Sen’s “liberal paradox”) that liberty is incompatible with full Pareto efficiency. But all the countries with the happiest people seem to have a lot of liberty, and indeed the happiest ones seem to have the most. How come?

My answer:

Personal liberty is a technology for heuristic utility maximization.

In the ideal case, we wouldn’t really need personal liberty; you could just compel everyone to do whatever is optimal all the time, and that would—by construction—be optimal. It might even be sort of nice: You don’t need to make any difficult decisions, you can just follow the script and know that everything will turn out for the best.

But since we don’t know what the optimal choice is—even in really simple cases, like what you should eat for lunch tomorrow—we can’t afford to compel people in this way. (It would also be incredibly costly to implement such totalitarian control, but that doesn’t stop some governments from trying!)
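Here is that argument as another toy simulation (Python; the noise levels and sizes are invented parameters, not estimates of anything). A central planner assigns each person the option it guesses they will like best, but it observes their preferences only through heavy noise; each person observes their own preferences with far less noise:

```python
# Toy model: central assignment vs. personal choice under noisy knowledge
# of preferences. All parameters are invented for illustration.
import random

def simulate(people=10_000, options=5, planner_noise=1.0, self_noise=0.2):
    total_planner, total_self = 0.0, 0.0
    for _ in range(people):
        true_u = [random.gauss(0, 1) for _ in range(options)]
        # The planner sees each person's preferences only through heavy noise...
        planner_est = [u + random.gauss(0, planner_noise) for u in true_u]
        # ...while each person knows their own preferences much better.
        self_est = [u + random.gauss(0, self_noise) for u in true_u]
        total_planner += true_u[planner_est.index(max(planner_est))]
        total_self += true_u[self_est.index(max(self_est))]
    return total_planner / people, total_self / people

random.seed(0)
assigned, chosen = simulate()
print(f"average utility, central assignment: {assigned:.3f}")
print(f"average utility, personal choice:    {chosen:.3f}")
# Personal choice wins simply because each person has better information
# about their own preferences than any planner could.
```

The self-choosers come out ahead purely because of who holds the information; no appeal to rights or ideology is needed to get the result.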

Then there are disagreements: What I think is optimal may not be what you think is optimal, and in truth we’re probably both wrong (but one of us may be less wrong).

And that’s not even getting into conflicts of interest: We aren’t just lacking in rationality, we’re also lacking in benevolence. Some people are clearly much more benevolent than others, but none of us are really 100% selfless. (Sadly, I think some people are 100% selfish.)

In fact, this is a surprisingly deep question:

Would the world be better if we were selfless?

Could there actually be some advantage in aggregate to having some degree of individual self-interest?

Here are some ways that might hold, just off the top of my head:

  • Partial self-interest supports an evolutionary process of moral and intellectual development that otherwise would be stalled or overrun by psychopaths—see my post on Rousseaus and Axelrods
  • Individuals have much deeper knowledge of their own preferences than anyone else’s, and thus can optimize them much better. (Think about it: This is true even of people you know very well. Otherwise, why would we ever need to ask our spouses one of the most common questions in any marriage: “Honey, what do you want for dinner tonight?”)
  • Self-interest allows for more efficient economic incentives, and thus higher overall productivity.

Of course, total selfishness is clearly not optimal—that way lies psychopathy. But some degree of selfishness might actually be better for long-term aggregate outcomes than complete altruism, and this is to some extent an empirical question.

Personal liberty solves a lot of these problems: Since people are best at knowing their own preferences, let people figure out on their own what’s good for them. Give them the freedom to live the kind of life they want to live, within certain reasonable constraints to prevent them from causing great harm to others or suffering some kind of unrecoverable mistake.

This isn’t exactly a new idea; it’s basically the core message of John Stuart Mill’s On Liberty (which I consider a good candidate for the best book ever written—seriously, it beats the Bible by a light-year). But by putting it in more modern language, I hope to show that deontology and consequentialism aren’t really so different after all.

And indeed, for all its many and obvious flaws, freedom seems to work pretty well—at least as well as anything we’ve tried.

What we still have to be thankful for

Nov 30 JDN 2461010

This post has been written before, but will go live after, Thanksgiving.

Thanksgiving is honestly a very ambivalent holiday.

The particular events it celebrates don’t seem quite so charming in their historical context: Rather than finding peace and harmony with all Native Americans, the Pilgrims in fact allied with the Wampanoag against the Narragansett, though they did later join forces with the Narragansett in order to conquer the Pequot. And of course we all know how things went for most Native American nations in the long run.

Moreover, even the gathering of family comes with some major downsides, especially in a time of extreme political polarization such as this one. I won’t be joining any of my Trump-supporting relatives for dinner this year (and they probably wouldn’t have invited me anyway), but the fact that this means becoming that much more detached from a substantial part of my extended family is itself a tragedy.

This year in particular, US policy has gotten so utterly horrific that it often feels like we have nothing to be thankful for at all, that all we thought was good and just in the world could simply be torn away at a moment’s notice by raving madmen. It isn’t really quite that bad—but it feels that way sometimes.

It also felt a bit uncanny celebrating Thanksgiving a few years ago when we were living in Scotland, for the UK does not celebrate Thanksgiving, but absolutely does celebrate Black Friday: Holidays may be local, but capitalism is global.

But fall feasts of giving thanks are far more ancient than that particular event in 1621 that we have mythologized to oblivion. They appear in numerous cultures across the globe—indeed their very ubiquity may be why the Wampanoag were so willing to share one with the Pilgrims despite their cultures having diverged something like 40,000 years prior.

And I think that it is by seeing ourselves in that context—as part of the whole of humanity—that we can best appreciate what we truly do have to be thankful for, and what we truly do have to look forward to in the future.

Above all, medicine.

We have actual treatments for some diseases, even actual cures for some. By no means all, of course—and it often feels like we are fighting an endless battle even against what we can treat.

But it is worth reflecting on the fact that aside from the last few centuries, this has simply not been the case. There were no actual treatments. There was no real medicine.

Oh, sure, there were attempts at medicine; and there was certainly what we would think of as more like “first aid”: bandaging wounds, setting broken bones. Even amputation and surgery were done sometimes. But most medical treatment was useless or even outright harmful—not least because for most of history, most of it was done without anesthetic or even antiseptic!

There were various herbal remedies for various ailments, some of which actually happen to work: Willow bark genuinely helps with pain, St. John’s wort is a real antidepressant, and some traditional burn creams are surprisingly effective.

But there was no system in place for testing medicine, no way of evaluating what remedies worked and what didn’t. And thus, for every remedy that worked as advertised, there were a hundred more that did absolutely nothing, or even made things worse.

Today, it can feel like we are all chronically ill, because so many of us take so many different pills and supplements. But this is not a sign that we are ill—it is a sign that we can be treated. The pills are new, yes—but the illnesses they treat were here all along.

I don’t see any particular reason to think that Roman plebs or Medieval peasants were any less likely to get migraines than we are; but they certainly didn’t have access to sumatriptan or rimegepant. Maybe they were less likely to get diabetes, but mainly because they were much more likely to be malnourished. (Well, okay, also because they got more exercise, which we could surely stand to do.) And the only reason they didn’t get Alzheimer’s was that they usually didn’t live long enough.

Looking further back, before civilization, human health actually does seem to have been better: Foragers were rarely malnourished, weren’t exposed to as many infectious pathogens, and certainly got plenty of exercise. But should a pathogen like smallpox or influenza make it to a forager tribe, the results were often utterly catastrophic.

Today, we don’t really have the sort of plague that human beings used to deal with. We have pandemics, which are also horrible, but far less so. We were horrified by losing 0.3% of our population to COVID; a society that had only suffered 0.3%—or even ten times that, 3%—losses from the Black Death would have been hailed as a miracle, for a more typical rate was 30%.

At 0.3%, most of us knew somebody, or knew somebody who knew somebody, who died from COVID. At 3%, nearly everyone would know somebody, and most would know several. At 30%, nearly everyone would have close family and friends who died.
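A quick back-of-envelope model (Python) shows why those three rates feel so categorically different. Suppose deaths strike independently, you “know” about 150 people, and about 15 of them are intimates; both network sizes are my assumptions (150 is just the conventional Dunbar-number guess), not figures from any source above. The chance of knowing at least one victim is then 1 − (1 − p)^n:

```python
# Chance of knowing at least one victim: 1 - (1 - p)^n, assuming independence.
for p in (0.003, 0.03, 0.30):
    for n, label in ((150, "acquaintances"), (15, "close friends and family")):
        chance = 1 - (1 - p) ** n
        print(f"mortality {p:5.1%}, {n:3d} {label}: {chance:6.1%}")
# At 0.3%, knowing a victim is common but far from universal (~36%); at 3%
# it is near-certain (~99%); at 30%, losing one of even 15 intimates is all
# but guaranteed (~99.5%).
```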

Then there is infant mortality.

As recently as 1950—this is living memory—the global infant mortality rate was 14.6%. This is about half what it had been historically; for most of human history, roughly a third of all children died between birth and the age of 5.

Today, it is 2.5%.

Where our distant ancestors expected two out of three of their children to survive, and our own great-grandparents expected five out of six, we can now safely expect thirty-nine out of forty to live. This is the difference between “nearly every family has lost a child” and “most families have not lost a child”.

And this is worldwide; in highly-developed countries it’s even better. The US has a relatively high infant mortality rate by the standards of highly-developed countries (indeed, are we even highly-developed, or are we becoming like Saudi Arabia, extremely rich but so unequal that it doesn’t really mean anything to most of our people?). Yet even for us, the infant mortality rate is 0.5%—so we can expect one hundred ninety-nine out of two hundred to survive. This is at the level of “most families don’t even know someone who has lost a child.”
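Converting those rates into “one in N” terms makes the fractions easy to check (Python; the rates are the ones quoted above):

```python
# Mortality rates from the post, expressed as "about one child in N".
rates = {
    "most of history (under 5)": 1 / 3,
    "world, 1950": 0.146,
    "world, today": 0.025,
    "US, today": 0.005,
}
for label, rate in rates.items():
    print(f"{label:26s}: about 1 in {1 / rate:.0f}")
# 1/0.146 is closer to 1 in 7 than the "five out of six" shorthand suggests,
# but 1/0.025 = 40 and 1/0.005 = 200 match the post's fractions exactly.
```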

Poverty is a bit harder to measure.

I am increasingly dubious of conventional measures of poverty; ever since compiling my Index of Necessary Expenditure, I am convinced that economists in general, and perhaps US economists in particular, are systematically underestimating the cost of living and thereby underestimating the prevalence of poverty. (I don’t think this is intentional, mind you; I just think it’s a result of using convenient but simplistic measures and not looking too closely into the details.) I think not being able to sustainably afford a roof over your head constitutes being poor—and that applies to a lot of people.

Yet even with that caveat in mind, it’s quite clear that global poverty has greatly declined in the long run.

At the “extreme poverty” level, currently defined as consuming $1.90 per day at purchasing power parity—that’s just under $700 per year, less than 2% of the median personal income in the United States—the number of people has fallen from 1.9 billion in 1990 to about 700 million today: from 36% of the world’s population to under 9%.
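The arithmetic is easy to verify (Python; the world-population figures of roughly 5.3 billion in 1990 and 8.1 billion today are my own added assumptions):

```python
# Checking the extreme-poverty arithmetic quoted above.
print(f"${1.90 * 365:,.0f} per year")                 # ~$694: "just under $700"
print(f"{1.9e9 / 5.3e9:.0%} of the 1990 population")  # ~36% (5.3B: my estimate)
print(f"{0.7e9 / 8.1e9:.0%} of today's population")   # ~9%  (8.1B: my estimate)
```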

Now, there are good reasons to doubt that “purchasing power parity” really can be estimated as accurately as we would like, and thus it’s not entirely clear that people living on “$2 per day PPP” are really living at less than 2% the standard of living of a typical American (honestly to me that just sounds like… dead); but they are definitely living at a much worse standard of living, and there are a lot fewer people living at such low standard of living today than there used to be not all that long ago. These are people who don’t have reliable food, clean water, or even basic medicine—and that used to include over a third of humanity and does no longer. (And I would like to note that actually finding such a person and giving them a few hundred dollars absolutely would change their life, and this is the sort of thing GiveDirectly does. We may not know exactly how to evaluate their standard of living, but we do know that the actual amount of money they have access to is very, very small.)

There are many ways in which the world could be better than it is.

Indeed, part of the deep, overwhelming outrage I feel pretty much all the time lies in the fact that it would be so easy to make things so much better for so many people, if there weren’t so many psychopaths in charge of everything.

Increased foreign aid is one avenue by which that could be achieved—so, naturally, Trump cut it tremendously. More progressive taxation is another—so, of course, we get tax cuts for the rich.

Just think about the fact that there are families with starving children for whom a $500 check could change their lives; but nobody is writing that check, because Elon Musk needs to become a literal trillionaire.

There are so many water lines and railroad tracks and bridges and hospitals and schools not being built because the money that would have paid for them is tied up in making already unfathomably-rich people even richer.

But even despite all that, things are getting better. Not every day, not every month, not even every year—this past year was genuinely, on net, a bad one. But nearly every decade, every generation, and certainly every century (for at least the last few), humanity has fared better than we did the last.

As long as we can keep that up, we still have much to hope for—and much to be thankful for.

What is the cost of all this?

Nov 23 JDN 2461003

After the Democrats swept the recent election, and now that the Epstein files are being released—and absolutely do seem to contain information that is damning about Trump—it really seems like Trump’s popularity has permanently collapsed. His approval rating stands at 42%, which is about 42% too high, but at least comfortably below a majority.

It now begins to feel like we have hope, not only of removing him, but also of changing how American politics in general operates, so that no one like him ever gets power again. (The latter, of course, is a much taller order.)

But at the risk of undermining this moment of hope, I’d like to take stock of some of the damage that Trump and his ilk have already done.

In particular, the cuts to US foreign aid are an absolute humanitarian disaster.

These didn’t get so much attention, because there has been so much else going on; and—unfortunately—foreign aid actually isn’t that popular among American voters, despite being a small proportion of the budget and by far the most cost-effective beneficial thing that our government does.

In fact, I think USAID would be cost-effective on a purely national security basis: it’s hard to motivate people to attack a country that saves the lives of their children. Indeed, I suppose this is the kernel of truth to the leftists who say that US foreign aid is just a “tool of empire” (or even “a front for the CIA”); yes, indeed, helping the needy does in fact advance American interests and promote US national security.

Over the last 25 years, USAID has saved over 90 million lives. That is more than a fourth of the population of the United States. And it has done this for the cost of less than 1% of the US federal budget.

But under Trump’s authority and Elon Musk’s direction, US foreign aid was cut massively over the last couple of years, and the consequences are horrific. Research on the subject suggests that as many as 700,000 children will die each year as long as these cuts persist.

Even if that number is overestimated by a factor of 2, that would still be millions of children over the next few years. And it could just as well be underestimated.

If we don’t fix this fast, millions of children will die. Thousands already have.

What’s more, fixing this isn’t just a matter of bringing the funding back. Obviously that’s necessary, but it won’t be sufficient. The sudden cuts have severely damaged international trust in US foreign aid, and many of the agencies that our aid was supporting will either collapse or need to seek funding elsewhere—quite likely from China. Relationships with governments and NGOs that were built over decades have been strained or even destroyed, and will need to be rebuilt.

This is what happens when you elect monsters to positions of power.

And even after we remove them, much of the damage will be difficult or even impossible to repair. Certainly we can never bring back the children who have already needlessly died because of this.

You call this a hobby?

Nov 9 JDN 2460989

A review of Politics Is for Power by Eitan Hersh

This week, there was an election. It’s a minor off-year election—since it’s an odd-numbered year, many places don’t even have any candidates on the ballot—and as a result, turnout will surely be low. Eitan Hersh has written a book about why that’s a bad thing, and how it is symptomatic of greater problems in our civic culture as a whole.

Buried somewhere in this book, possible to find through committed, concerted effort, there is a book that could have had a large positive effect on our political system, our civic discourse, and our society as a whole. Sadly, Dr. Hersh buried it so well that most people will never find it.

In particular, he starts the book—not even on the first page, but on the cover—by actively alienating his core audience with what seems to be the very utmost effort he can muster.

Yes, even the subtitle is condescending and alienating:

How to Move Beyond Political Hobbyism, Take Action, and Make Real Change

And of course it’s not just there; on page after page he drives the dagger deeper and twists it as hard as he can, repeating the accusation over and over:

This is just a hobby for you. It doesn’t really mean anything.

Today’s hobbyists possess the negative qualities of the amateurs—hyperemotional engagement, obsession with national politics, an insatiable appetite for debate—and none of the amateur’s positive qualities—the neighborhood meetings, the concrete goals, the leadership.

– p. 9

You hear that? You’re worse than an amateur. This is on page 9. Page 9.

[…] Much of the time we spend on politics is best described as an inward-focused leisure activity for people who like politics.

We may not easily concede that we are doing politics for fun.[…]

– p. 14

See? You may say it’s not really just for fun, but you’re lying. You’re failing to concede the truth.

To the political hobbyist, news is a form of entertainment and needs to be fun.

– p. 19

You hear me? This is fun for you. You’re enjoying this. You’re doing it for yourself.

The real explanation for the dynamics of voter turnout is that we treat politics like a game and follow the spectacle. Turnout is high in presidential elections compared to other US elections in the same way that football viewership is high when the Super Bowl is on. Many people who do not like football or even know the rules of the game end up at a Super Bowl party. They’re there for the commercials, the guacamole, and to be part of a cultural moment. That’s why turnout is high in presidential elections. Without the spectacle, even people who say they care about voting don’t show up.

– p. 48

This is all a game. It’s not real. You don’t really care.

I could go on; he keeps repeating this message—this insult, this accusation—throughout the book. He tells you, over and over, that if you are not already participating in politics in the very particular way he wants you to (and he may even be right that it would be better!), you are a selfish liar, and you are treating what should be vitally important as just meaningless entertainment.

This made it honestly quite painful to get through the book. Several times, I was tempted to just give up and put it back on the shelf. But I’m glad I didn’t, because there are valuable insights about effective grassroots political activism buried within this barrage of personal accusations.

I guess Hersh must not see this as a personal accusation; at one point, he acknowledges that people might find it insulting, but (1) doesn’t seem to care and (2) makes no effort to inquire as to why we might feel that way; in fact, he manages to twist the knife just a little deeper in that very same passage:

For the non-self-identifying junkies, the term political hobbyist can be insulting. Given how important politics is, it doesn’t feel good to call one’s political activity a hobby. The term is also insulting, I have learned, to real hobbyists, who see hobbies as activities with much more depth than the online bickering or addictive news consumption I’m calling a hobby.

– p. 88

You think calling it a “hobby” is insulting? Yeah, well, it’s worse than that, so ha!

But let me tell you something about my own experience of politics. (Actually, one of Hersh’s central messages is that sharing personal experiences is one of the most powerful political tools there is.)

How do most people I know feel about politics, since, oh, say… November 2016?

ABSOLUTE HORROR AND DESPAIR.

For every queer person I know, every trans person, every immigrant, every woman, every person of color, and for plenty of White cishet liberal guys too, the election of President Donald Trump was traumatic. It felt like a physical injury. People who had recovered from depression were thrust back into it. People felt physically nauseated. And especially for immigrants and trans people, people literally feared for their lives and were right to do so.

WHATEVER THIS IS, IT IS NOT A HOBBY.

I’ve had to talk people down from psychotic episodes and suicidal ideation because of this, and you have the fucking audacity to tell me that we’re doing this for fun!?

If someone feared for their life because their team lost the Super Bowl, we would rightfully recognize that as an utterly pathological response. But I know a whole bunch of folks on student visas who are constantly afraid of being kidnapped and taken away by masked men with guns, because that is a thing that has actually happened to other people who were in this country on student visas. I know a whole bunch of trans folks who are afraid of being assaulted or even killed for using the wrong bathroom, because that is a thing that actually happens to trans people in this country.

I wish I could tell these people—many of them dear friends of mine—that they are wrong to fear, that they are safe, that everything will be all right. But as long as Donald Trump is in power and the Republicans in Congress and the right-wing Supreme Court continue to enable him, I can’t tell them that, because I would be lying; the danger is real. All I can do is tell them that it is probably not as great a danger as they fear, and that if there is any way I can help them, I am willing to do so.

Indeed, politics for me and those closest to me is so obviously so much not a hobby that repeatedly insisting that I admit that it is starts to feel like gaslighting. I feel like I’m in a struggle session or something: “Admit you are a hobbyist! Repent!”

I don’t know; maybe there are people for whom politics is just a hobby. Maybe the privileged cishet White kids at Tufts that Dr. Hersh lectures to are genuinely so removed from the consequences of public policy that they can engage with politics at their leisure and for their own entertainment. (A lot of the studies he cites are specifically about undergrads; I know this is a thing in pretty much all social science… but maybe undergrads are in fact not a very representative sample of political behavior?) But even so, some of the international students in those lecture halls (11% of Tufts undergrads and 17% of Tufts grads) probably feel pretty differently, I have to imagine.

In fact, maybe genuine political hobbyism is a widespread phenomenon, and its existence explains a lot of otherwise really baffling things about the behavior of our electorate (like how the same districts could vote for both Donald Trump and Alexandria Ocasio-Cortez). I don’t find that especially plausible given my own experience, but I’m an economist, not a political scientist, so I do feel like I should offer some deference to the experts on this matter. (And I’m well aware that my own social network is nothing like a representative sample of the American electorate.)

But I can say this for sure:

The target audience of this book is not doing this as a hobby.

Someone who picks up a book by a political scientist hoping for guidance as to how to make their own political engagement more effective is not someone who thinks this is all a game. They are not someone who is engaging with politics as a fun leisure activity. They are someone who cares. They are someone who thinks this stuff matters.

By construction, the person who reads this book to learn about how to make change wants to make change.

So maybe you should acknowledge that at some point in your 200 pages of text? Maybe after spending all these words talking about how having empathy is such an important trait in political activism, you should have some empathy for your audience?

Hersh does have some useful advice to give, buried in all this.

His core message is basically that we need more grassroots activism: Small groups of committed people, acting in their communities. Not regular canvassing, which he acknowledges as terrible (and as well he should; I’ve done it, and it is), but deep canvassing, which also involves going door to door but is really a fundamentally different process.

Actually, he seems to love grassroots organizing so much that he’s weirdly nostalgic for the old days of party bosses. Several times, he acknowledges that these party bosses were corrupt, racist, and utterly unaccountable, but after every such acknowledgment he always follows it up with some variation on “but at least they got things done”.

He’s honestly weirdly dismissive of other forms of engagement, though. Like, I expected him to be dismissive of “slacktivism” (though I am not), if for no other reason than the usual generational curmudgeonry. But he’s also weirdly dismissive of donations and even… honestly… voting? He doesn’t even seem interested in encouraging people to vote more. He doesn’t seem to think that get-out-the-vote campaigns are valuable.

I guess as a political scientist, he’s probably very familiar with the phenomenon of “low information voters”, who frequently swing elections despite being either clueless or actively misled. And okay, maybe turning out those people isn’t all that useful, at least if it’s not coupled with also educating them and correcting their misconceptions. But surely it’s not hobbyism to vote? Surely doing the one most important thing in a democratic system isn’t treating this like a game?

In his section on donations, he takes two tacks against them:

The first is to say that rich donors who pay $10,000 a plate for fancy dinners really just want access to politicians for photo ops. I don’t think that’s right, but the truth is admittedly not much better: I think they want access to politicians to buy influence. This is “political engagement” in some sense—you’re acting to exert power—but it’s corrupt, and it’s the source of an enormous amount of damage to our society—indeed to our planet itself. But I think Hersh has to deny that the goal is influence, because that would in fact be “politics for power”, and in order to remain fiercely non-partisan throughout (which, honestly, probably is a good strategic move), he carefully avoids ever saying that anyone exerting political power is bad.

Actually the closest he gets to admitting his own political beliefs (surprise, the Massachusetts social science professor is a center-left liberal!) comes in a passage where he bemoans the fact that… uh… Democrats… aren’t… corrupt enough? If you don’t believe me, read it for yourself:

The hobbyist motivation among wealthy donors is also problematic for a reason that doesn’t have a parallel in the nonprofit world: Partisan asymmetry. Unlike Democratic donors, Republican donors typically support politicians whose policy priorities align with a wealthy person’s financial interests. The donors can view donations as an investment. When Schaffner and I asked max-out donors why they made their contribution, many more Republicans than Democrats said that a very or extremely important reason for their gift was that the politician could affect the donor’s own industry (37 percent of Republicans versus 22 percent of Democrats).

This asymmetry puts Democrats at a disadvantage. Not motivated by their own bottom line, Democratic donors instead have to be motivated by ideology, issues, or even by the entertainment value that a donation provides.

– p. 80

Yes, God forbid they be motivated by issues or ideology. That would involve caring about other people. Clearly only naked self-interest and the profit motive could ever be a good reason for political engagement! (Quick question: You haven’t been, uh, reading a lot of… neoclassical economists lately, have you? Why? Oh, no reason.) Oh why can’t Democrats just be more like Republicans, and use their appallingly vast hoards of money to make sure that we cut social services and deregulate everything until the polluted oceans flood the world!?

The second is to say that the much broader population who makes small donations of $25 or $50 is “ideologically extreme” compared to the rest of the population, which is true, but seems to me utterly unsurprising. The further the world is from how you’d like to see it, the greater the value is to you of changing the world, and therefore the more you should be willing to invest into making that change—or even into a small probability of possibly making that change. If you think things are basically okay, why would you pay money to try to make them different? (I guess maybe you’d try to pay money to keep them the same? But even so-called “conservatives” never actually seem to campaign on that.)

I also don’t really see “ideologically extreme” as inherently a bad thing.

Sure, some extremists are very bad: Nazis are extreme and bad (weird that this seems controversial these days), Islamists are extreme and bad, Christian nationalists are extreme and bad, tankie leftists are extreme and bad.

But vegetarians—especially vegans—are also “ideologically extreme”, but quite frankly we are objectively correct, and maybe don’t even go far enough (I only hope that future generations will forgive me for my cheese). Everyone knows that animals can suffer, and everyone who is at all informed knows that factory farms make them suffer severely. The “moderate” view that all this horrible suffering is justifiable in the name of cheap ground beef and chicken nuggets is a fundamentally immoral one. (Maybe I could countenance a view that free-range humane meat farming is acceptable, but even that is far removed from our current political center.)

Trans activism is in some sense “ideologically extreme”—and frequently characterized as such—but it basically amounts to saying that the human rights of free expression, bodily autonomy, and even just personal safety outweigh other people’s narrow, blinkered beliefs about sex and gender. Okay, maybe we can make some sort of compromise on trans kids in sports (because why should I care about sports?), and I’m okay with gender-neutral bathrooms instead of letting trans women in women’s rooms (because gender-neutral bathrooms give more privacy and safety anyway!), and the evidence on the effects of puberty blockers and hormones is complicated (which is why it should be decided by doctors and scientists, not by legislators!), but in our current state, trans people die by murder and suicide at incredibly alarming rates. The only “moderate” position here is to demand, at minimum, enforced laws against discrimination and hate crimes. (Also, calling someone by the name and pronouns they ask you to costs you basically nothing. Failing to do that is not a brave ideological stand; it’s just you being rude and obnoxious. Indeed, since it can trigger dysphoria, it’s basically like finding out someone’s an arachnophobe and immediately putting a spider in their hair.)

Open borders is regarded as so “ideologically extreme” that even the progressive Democrats won’t touch it, despite the fact that I literally am not aware of a single ethical philosopher in the 21st century who believes that our current system of immigration control is morally justifiable. Even the ones who favor “closed borders” in principle are almost unanimous that our current system is cruel and racist. The Lifeboat Theory is ridiculous; allowing immigrants in wouldn’t kill us, it would just maybe—maybe—make us a little worse off. Their lives may be at stake, but ours are not. We are not keeping people out of a lifeboat so it doesn’t sink; we are keeping them out of a luxury cruise liner so it doesn’t get dirty and crowded.

Indeed, even so-called “eco-terrorists”, who are not just ideologically extreme but behaviorally extreme as well, don’t even really seem that bad. They are really mostly eco-vandals; they destroy property, they don’t kill people. There is some risk to life and limb involved in tree spiking or blowing up a pipeline, but the goal is clearly not to terrorize people; it’s to get them to stop doing a particular thing—a particular thing that they in fact probably should stop doing. I guess I understand why this behavior has to be illegal and punished as such; but morally, I’m not even sure it’s wrong. We may not be able to name or even precisely count the children saved who would have died if that pipeline had been allowed to continue pumping oil and thus spewing carbon emissions, but that doesn’t make them any less real.

So really, if anything, the problem is not “extremism” in some abstract sense, but particular beliefs and ideologies, some of which are not even regarded as extreme. A stronger vegan lobby would not be harmful to America, however “extreme” they might be, and a strong Republican lobby, however “mainstream” it is perceived to be, is rapidly destroying our nation on a number of different levels.

Indeed, in parts of the book, it almost seems like Hersch is advocating in some Nietzschean sense for power for its own sake. I don’t think that’s really his intention; I think he means to empower the currently disempowered, for the betterment of society as a whole. But his unwillingness to condemn rich Republicans who donate the maximum allowed in order to get their own industry deregulated is at least… problematic, as both political activists and social scientists are wont to say.

I’m honestly not even sure that empowering the disempowered is what we need right now. I think a lot of the disempowered are also terribly misinformed, and empowering them might actually make things worse. In fact, I think the problem with the political effect of social media isn’t that it has failed to represent the choices of the electorate, but that it has represented them all too well and most people are really, really bad—just, absolutely, shockingly, appallingly bad—at making good political choices. They have wildly wrong beliefs about really basic policy questions, and often think that politicians’ platforms are completely different from what they actually are. I don’t go quite as far as this article by Dan Williams in Conspicuous Cognition, but it makes some really good points I can’t ignore. Democracy is currently failing to represent the interests of a great many Americans, but a disturbingly large proportion of this failure must be blamed on a certain—all too large—segment of the American populace itself.

I wish this book had been better.

More grassroots organizing does seem like a good thing! And there is some advice in this book about how to do it better—though in my opinion, not nearly enough. A lot of what Hersch wants to see happen would require tremendous coordination between huge numbers of people, which almost seems like saying “politics would be better if enough people were better about politics”. What I wanted to hear more about was what I can do; if voting and donating and protesting and blogging isn’t enough, what should I be doing? How do I make it actually work? It feels like Hersch spent so long trying to berate me for being a “hobbyist” that he forgot to tell me what he actually thinks I should be doing.

I am fully prepared to believe that online petitions and social media posts don’t accomplish much politically. (Indeed, I am fully prepared to believe that blogging doesn’t accomplish much politically.) I am open to hearing what other options are available, and eager for guidance about how to have the most effective impact.

But could you please, please not spend half the conversation repeatedly accusing me of not caring!?

Taylor Swift and the means of production

Oct 5 JDN 2460954

This post is one I’ve been meaning to write for awhile, but current events keep taking precedence.

In 2023, Taylor Swift did something very interesting from an economic perspective, which turns out to have profound implications for our economic future.

She re-recorded an entire album and released it through a different record company.

The album was called 1989 (Taylor’s Version), and she created it because for the last four years she had been fighting with Big Machine Records over the rights to her previous work, including the original album 1989.

A Marxist might well say she seized the means of production! (How rich does she have to get before she becomes bourgeoisie, I wonder? Is she already there, even though she’s one of a handful of billionaires who can truly say they were self-made?)

But really she did something even more interesting than that. It was more like she said:

Seize the means of production? I am the means of production.”

Singing and songwriting are what is known as a human-capital-intensive industry. That is, the most important factor of production is not land, or natural resources, or physical capital (yes, you need musical instruments, amplifiers, recording equipment and the like—but these are a small fraction of what it costs to get Talor Swift for a concert), or even labor in the ordinary sense. It’s one where so-called (honestly poorly named) “human capital” is the most important factor of production.

A labor-intensive industry is one where you just need a lot of work to be done, but you can get essentially anyone to do it: Cleaning floors is labor-intensive. A lot of construction work is labor-intensive (though excavators and the like also make it capital-intensive).

No, for a human-capital-intensive industry, what you need is expertise or talent. You don’t need a lot of people doing back-breaking work; you need a few people who are very good at doing the specific thing you need to get done.

Taylor Swift was able to re-record and re-release her songs because the one factor of production that couldn’t be easily substituted was herself. Big Machine Records overplayed their hand; they thought they could control her because they owned the rights to her recordings. But she didn’t need her recordings; she could just sing the songs again.

But now I’m sure you’re wondering: So what?

Well, Taylor Swift’s story is, in large part, the story of us all.

For most of the 18th, 19th, and 20th centuries, human beings in developed countries saw a rapid increase in their standard of living.

Yes, a lot of countries got left behind until quite recently.

Yes, this process seems to have stalled in the 21st century, with “real GDP” continuing to rise but inequality and cost of living rising fast enough that most people don’t feel any richer (and I’ll get to why that may be the case in a moment).

But for millions of people, the gains were real, and substantial. What was it that brought about this change?

The story we are usually told is that it was capital; that as industries transitioned from labor-intensive to capital-intensive, worker productivity greatly increased, and this allowed us to increase our standard of living.

That’s part of the story. But it can’t be the whole thing.

Why not, you ask?

Because very few people actually own the capital.

When capital ownership is so heavily concentrated, any increases in productivity due to capital-intensive production can simply be captured by the rich people who own the capital. Competition was supposed to fix this, compelling them to raise wages to match productivity, but we often haven’t actually had competitive markets; we’ve had oligopolies that consolidate market power in a handful of corporations. We had Standard Oil before, and we have Microsoft now. (Did you know that Microsoft not only owns more than half the consumer operating system industry, but after acquiring Activision Blizzard, is now the largest video game company in the world?) In the presence of an oligopoly, the owners of the capital will reap the gains from capital-intensive productivity.

But standards of living did rise. So what happened?

The answer is that production didn’t just become capital-intensive. It became human-capital-intensive.

More and more jobs required skills that an average person didn’t have. This created incentives for expanding public education, making workers not just more productive, but also more aware of how things work and in a stronger bargaining position.

Today, it’s very clear that the jobs which are most human-capital-intensive—like doctors, lawyers, researchers, and software developers—are the ones with the highest pay and the greatest social esteem. (I’m still not 100% sure why stock traders are so well-paid; it really isn’t that hard to be a stock trader. I could write you an algorithm in 50 lines of Python that would beat the average trader (mostly by buying ETFs). But they pretend to be human-capital-intensive by hiring Harvard grads, and they certainly pay as if they are.)

The most capital-intensive industries—like factory work—are reasonably well-paid, but not that well-paid, and actually seem to be rapidly disappearing as the capital simply replaces the workers. Factory worker productivity is now staggeringly high thanks to all this automation, but the workers themselves have gained only a small fraction of this increase in higher wages; by far the bigger effect has been increased profits for the capital owners and reduced employment in manufacturing.

And of course the real money is all in capital ownership. Elon Musk doesn’t have $400 billion because he’s a great engineer who works very hard. He has $400 billion because he owns a corporation that is extremely highly valued (indeed, clearly overvalued) in the stock market. Maybe being a great engineer or working very hard helped him get there, but it was neither necessary nor sufficient (and I’m sure that his dad’s emerald mine also helped).

Indeed, this is why I’m so worried about artificial intelligence.

Most forms of automation replace labor, in the conventional labor-intensive sense: Because you have factory robots, you need fewer factory workers; because you have mountaintop removal, you need fewer coal miners. It takes fewer people to do the same amount of work. But you still need people to plan and direct the process, and in fact those people need to be skilled experts in order to be effective—so there’s a complementarity between automation and human capital.

But AI doesn’t work like that. AI substitutes for human capital. It doesn’t just replace labor; it replaces expertise.

So far, AI is currently too unreliable to replace any but entry-level workers in human-capital-intensive industries (though there is some evidence it’s already doing that). But it will most likely get more reliable over time, if not via the current LLM paradigm, than through the next one that comes after. At some point, AI will come to replace experienced software developers, and then veteran doctors—and I don’t think we’ll be ready.

The long-term pattern here seems to be transitioning away from human-capital-intensive production to purely capital-intensive production. And if we don’t change the fact that capital ownership is heavily concentrated and so many of our markets are oligopolies—which we absolutely do not seem poised to do anything about; Democrats do next to nothing and Republicans actively and purposefully make it worse—then this transition will be a recipe for even more staggering inequality than before, where the rich will get even more spectacularly mind-bogglingly rich while the rest of us stagnate or even see our real standard of living fall.

The tech bros promise us that AI will bring about a utopian future, but that would only work if capital ownership were equally shared. If they continue to own all the AIs, they may get a utopia—but we sure won’t.

We can’t all be Taylor Swift. (And if AI music catches on, she may not be able to much longer either.)

Reflections on the Charlie Kirk assassination

Sep 28 JDN 2460947

No doubt you are well aware that Charlie Kirk was shot and killed on September 10. His memorial service was held on September 21, and filled a stadium in Arizona.

There have been a lot of wildly different takes on this event. It’s enough to make you start questioning your own sanity. So while what I have to say may not be that different from what Krugman (or for that matter Jacobin) had to say, I still thought I would try to contribute to the small part of the conversation that’s setting the record straight.

First of all, let me say that this is clearly a political assassination, and as a matter of principle, that kind of thing should not be condoned in a democracy.

The whole point of a democratic system is that we don’t win by killing or silencing our opponents, we win by persuading or out-voting them. As long as someone is not engaging in speech acts that directly command or incite violence (like, say, inciting people to attack the Capitol), they should be allowed to speak in peace; even abhorrent views should be not be met with violence.

Free speech isn’t just about government censorship (though that is also a major problem right now); it’s a moral principle that underlies the foundation of liberal democracy. We don’t resolve conflicts with violence unless absolutely necessary.

So I want to be absolutely clear about this: Killing Charlie Kirk was not acceptable, and the assassin should be tried in a court of law and, if duly convicted, imprisoned for a very long time.

Second of all, we still don’t know the assassin’s motive, so stop speculating until we do.

At first it looked like the killer was left-wing. Then it looked like maybe he was right-wing. Now it looks like maybe he’s left-wing again. Maybe his views aren’t easily categorized that way; maybe he’s an anarcho-capitalist, or an anarcho-communist, or a Scientologist. I won’t say it doesn’t matter; it clearly does matter. But we simply do not know yet.

There is an incredibly common and incredibly harmful thing that people do after any major crime: They start spreading rumors and speculating about things that we actually know next to nothing about. Stop it. Don’t contribute to that.


The whole reason we have a court system is to actually figure out the real truth, which takes a lot of time and effort. The courts are one American institution that’s actually still functioning pretty well in this horrific cyberpunk/Trumpistan era; let them do their job.

It could be months or years before we really fully understand what happened here. Accept that. You don’t need to know the answer right now, and it’s far more dangerous to think you know the answer when you actually don’t.

But finally, I need to point out that Charlie Kirk was an absolutely abhorrent, despicable husk of a human being and no one should be honoring him.

First of all, he himself advocated for political violence against his opponents. I won’t say anyone deserves what happened to him—but if anyone did, it would be him, because he specifically rallied his followers to do exactly this sort of thing to other people.

He was also bigoted in almost every conceivable way: Racist, sexist, ableist, homophobic, and of course transphobic. He maintained a McCarthy-esque list of college professors that he encouraged people to harass for being too left-wing. He was a covert White supremacist, and only a little bit covert. He was not covert at all about his blatant sexism and misogyny that seems like it came from the 1950s instead of the 2020s.

He encouraged his—predominantly White, male, straight, cisgender, middle-class—audience to hate every marginalized group you can think of: women, people of color, LGBT people, poor people, homeless people, people with disabilities. Not content to merely be an abhorrent psychopath himself, he actively campaigned against the concept of empathy.

Charlie Kirk deserves no honors. The world is better off without him. He made his entire career out of ruining the lives of innocent people and actively making the world a worse place.

It was wrong to kill Charlie Kirk. But if you’re sad he’s gone, what is wrong with you!?

For my mother, on her 79th birthday

Sep 21 JDN 2460940

When this post goes live, it will be mother’s 79th birthday. I think birthdays are not a very happy time for her anymore.

I suppose nobody really likes getting older; children are excited to grow up, but once you hit about 25 or 26 (the age at which you can rent a car at the normal rate and the age at which you have to get your own health insurance, respectively) and it becomes “getting older” instead of “growing up”, the excitement rapidly wears off. Even by 30, I don’t think most people are very enthusiastic about their birthdays. Indeed, for some people, I think it might be downhill past 21—you wanted to become an adult, but you had no interest in aging beyond that point.

But I think it gets worse as you get older. As you get into your seventies and eighties, you begin to wonder which birthday will finally be your last; actually I think my mother has been wondering about this even earlier than that, because her brothers died in their fifties, her sister died in her sixties, and my father died at 63. At this point she has outlived a lot of people she loved. I think there is a survivor’s guilt that sets in: “Why do I get to keep going, when they didn’t?”

These are also very hard times in general; Trump and the people who enable him have done tremendous damage to our government, our society, and the world at large in a shockingly short amount of time. It feels like all the safeguards we were supposed to have suddenly collapsed and we gave free rein to a madman.

But while there are many loved ones we have lost, there are many we still have; and nor need our set of loved ones be fixed, only to dwindle with each new funeral. We can meet new people, and they can become part of our lives. New children can be born into our family, and they can make our family grow. It is my sincere hope that my mother still has grandchildren yet to meet; in my case they would probably need to be adopted, as the usual biological route is pretty much out of the question, and surrogacy seems beyond our budget for the foreseeable future. But we would still love them, and she could still love them, and it is worth sticking around in this world in order to be a part of their lives.

I also believe that this is not the end for American liberal democracy. This is a terrible time, no doubt. Much that we thought would never happen already has, and more still will. It must be so unsettling, so uncanny, for someone who grew up in the triumphant years after America helped defeat fascism in Europe, to grow older and then see homegrown American fascism rise ascendant here. Even those of us who knew history all too well still seem doomed to repeat it.

At this point it is clear that victory over corruption, racism, and authoritarianism will not be easy, will not be swift, may never be permanent—and is not even guaranteed. But it is still possible. There is still enough hope left that we can and must keep fighting for an America worth saving. I do not know when we will win; I do not even know for certain that we will, in fact, win. But I believe we will.

I believe that while it seems powerful—and does everything it can to both promote that image and abuse what power it does have—fascism is a fundamentally weak system, a fundamentally fragile system, which simply cannot sustain itself once a handful of critical leaders are dead, deposed, or discredited. Liberal democracy is kinder, gentler—and also slower, at times even clumsier—than authoritarianism, and so it may seem weak to those whose view of strength is that of the savanna ape or the playground bully; but this is an illusion. Liberal democracy is fundamentally strong, fundamentally resilient. There is power in kindness, inclusion, and cooperation that the greedy and cruel cannot see. Fascism in Germany arrived and disappeared within a generation; democracy in America has stood for nearly 250 years.

We don’t know how much more time we have, Mom; none of us do. I have heard it said that you should live your life as though you will live both a short life and a long one; but honestly, you should probably live your life as though you will live a randomly-decided amount of time that is statistically predicted by actuarial tables—because you will. Yes, the older you get, the less time you have left (almost tautologically); but especially in this age of rapid technological change, none of us really know whether we’ll die tomorrow or live another hundred years.

I think right now, you feel like there isn’t much left to look forward to. But I promise you there is. Maybe it’s hard to see right now; indeed, maybe you—or I, or anyone—won’t even ever get to see it. But a brighter future is possible, and it’s worth it to keep going, especially if there’s any way that we might be able to make that brighter future happen sooner.