Israel, Palestine, and the World Bank’s disappointing priorities

Nov 12 JDN 2460261

Israel and Palestine are once again at war. (There are a disturbing number of different years in which one could have written that sentence.) The BBC has a really nice section of their website dedicated to reporting on various facets of the war. The New York Times also has a section on it, but it seems a little tilted in favor of Israel.

This time, it started with a brutal attack by Hamas, and now Israel has—as usual—overreacted and retaliated with a level of force that is sure to feed the ongoing cycle of extremism. All across social media I see people wanting me to take one side or the other, often even making good points: “Hamas slaughters innocents” and “Israel is a de facto apartheid state” are indeed both important points I agree with. But if you really want to know my ultimate opinion, it’s that this whole thing is fundamentally evil and stupid because human beings are suffering and dying over nothing but lies. All religions are false, most of them are evil, and we need to stop killing each other over them.

Anti-Semitism and Islamophobia are both morally wrong insofar as they involve harming, abusing or discriminating against actual human beings. Let people dress however they want, celebrate whatever holidays they want, read whatever books they want. Even if their beliefs are obviously wrong, don’t hurt them if they aren’t hurting anyone else. But both Judaism and Islam—and Christianity, and more besides—are fundamentally false, wrong, evil, stupid, and detrimental to the advancement of humanity.

That’s the thing that so much of the public conversation is too embarrassed to say; we’re supposed to pretend that they aren’t fighting over beliefs that are obviously false. We’re supposed to respect each particular flavor of murderous nonsense, and always find some other cause to explain the conflict. It’s over culture (what culture?); it’s over territory (whose territory?); it’s retaliation for past conflict (over what?). We’re not supposed to say out loud that all of this violence ultimately hinges upon people believing in nonsense. Even if the conflict wouldn’t disappear overnight if everyone suddenly stopped believing in God—and are we sure it wouldn’t? Let’s try it—it clearly could never have begun if everyone had started with rational beliefs in the first place.

But I don’t really want to talk about that right now. I’ve said enough. Instead I want to talk about something a little more specific, something less ideological and more symptomatic of systemic structural failures. Something you might have missed amidst the chaos.

The World Bank recently released a report on the situation focused heavily on the looming threat of… higher oil prices. (And of course there has been breathless reporting from various outlets about a headline figure of $150 per barrel, which the report explicitly describes as an unlikely “worst-case scenario”.)

There are two very big reasons why I found this dismaying.


The first, of course, is that there are obviously far more important concerns here than commodity prices. Yes, I know that this report is part of an ongoing series of Commodity Markets Outlook reports, but the fact that this is the sort of thing that the World Bank has ongoing reports about is also saying something important about the World Bank’s priorities. They release monthly commodity forecasts and full Commodity Markets Outlook reports that come out twice a year, unlike the World Development Reports that only come out once a year. The World Bank doesn’t release a twice-annual Conflict Report or a twice-annual Food Security Report. (Even the FAO, which publishes an annual State of Food Security and Nutrition in the World report, also publishes a State of Agricultural Markets report just as often.)

The second is that, when reading the report, one can clearly tell that whoever wrote it thinks that rising oil and gas prices are inherently bad. They keep talking about all of these negative consequences that higher oil prices could have, and seem utterly unaware of the really enormous upside here: We may finally get a chance to do something about climate change.

You see, one of the most basic reasons why we haven’t been able to fix climate change is that oil is too damn cheap. Its market price has consistently failed to reflect its actual costs. Part of that is due to oil subsidies around the world, which have held the price lower than it would be even in a free market; but most of it is due to the simple fact that pollution and carbon emissions don’t cost money for the people who produce them, even though they do cost the world.

Fortunately, wind and solar power are also getting very cheap, and are now at the point where they can outcompete oil and gas for electrical power generation. But that’s not enough. We need to remove oil and gas from everything: heating, manufacturing, agriculture, transportation. And that is far easier to do if oil and gas suddenly become more expensive and so people are forced to stop using them.

Now, granted, many of the downsides in that report are genuine: Because oil and gas are such vital inputs to so many economic processes, it really is true that making them more expensive will make lots of other things more expensive, and in particular could increase food insecurity by making farming more expensive. But if that’s what we’re concerned about, we should be focusing on that: What policies can we use to make sure that food remains available to all? And one of the best things we could be doing toward that goal is finding ways to make agriculture less dependent on oil.

By focusing on oil prices instead, the World Bank is encouraging the world to double down on the very oil subsidies that are holding climate policy back. Even food subsidies—which certainly have their own problems—would be an obviously better solution, and yet they are barely mentioned.

In fact, if you actually read the report, it shows that fears of food insecurity seem unfounded: Food prices are actually declining right now. Grain prices in particular seem to be falling back down remarkably quickly after their initial surge when Russia invaded Ukraine. Of course that could change, but it’s a really weird attitude toward the world to see something good and respond with, “Yes, but it might change!” This is how people with anxiety disorders (and I would know) think—which makes it seem as though much of the economic policy community suffers from some kind of collective equivalent of an anxiety disorder.

There also seems to be a collective sense that higher prices are always bad. This is hardly just a World Bank phenomenon; on the contrary, it seems to pervade all of economic thought, including the most esteemed economists, the most powerful policymakers, and even most of the general population of citizens. (The one major exception seems to be housing, where the sense is that higher prices are always good—even when the world is in a chronic global housing shortage that leaves millions homeless.) But prices can be too low or too high. And oil prices are clearly, definitely too low. Prices should reflect the real cost of production—all the real costs of production. It should cost money to pollute other people’s air.
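
Just to put rough numbers on that (these are my own back-of-the-envelope figures, not anything from the World Bank report): burning a barrel of crude releases something like 0.4 metric tons of CO2, and published estimates of the social cost of carbon range from roughly $50 to $190 per ton. That alone implies an unpriced externality on the order of $20 to $80 per barrel, comparable to the market price itself, and it doesn’t even count local air pollution.

```python
# Back-of-the-envelope sketch (illustrative figures, not taken from the report):
# roughly how much of a barrel's true cost is missing from its market price?
CO2_TONS_PER_BARREL = 0.43               # rough CO2 emitted by burning one barrel of crude
SCC_LOW, SCC_HIGH = 50, 190              # $/ton CO2, range of published social-cost-of-carbon estimates
MARKET_PRICE = 80                        # $/barrel, roughly where prices sit as I write

ext_low = CO2_TONS_PER_BARREL * SCC_LOW      # ~$22 per barrel
ext_high = CO2_TONS_PER_BARREL * SCC_HIGH    # ~$82 per barrel
print(f"Unpriced carbon cost: ${ext_low:.0f}-${ext_high:.0f} per barrel")
print(f"A price that reflected it: ${MARKET_PRICE + ext_low:.0f}-${MARKET_PRICE + ext_high:.0f} per barrel")
```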

In fact I think the whole report is largely a nothingburger: Oil prices haven’t even risen all that much so far—we’re still at $80 per barrel last I checked—and the one thing that is true about the so-called Efficient Market Hypothesis is that forecasting future prices is a fool’s errand. But it’s still deeply unsettling to see such intelligent, learned experts so clearly panicking over the mere possibility that there could be a price change which would so obviously be good for the long-term future of humanity.

There is plenty more worth saying about the Israel-Palestine conflict, and in particular what sort of constructive policy solutions we might be able to find that would actually result in any kind of long-term peace. I’m no expert on peace negotiations, and frankly I admit that if I were ever personally involved in such a negotiation, my temptation to tell both sides that they are idiots and fanatics would probably be a liability. (The headline the next morning: “Israeli and Palestinian Delegates Agree on One Thing: They Hate the US Ambassador”.)

The World Bank could have plenty to offer here, yet so far they’ve been too focused on commodity prices. Their thinking is a little too much ‘bank’ and not enough ‘world’.

It is a bit ironic, though also vaguely encouraging, that there are those within the World Bank itself who recognize this problem: Just a few weeks ago Ajay Banga gave a speech to the World Bank about “a world free of poverty on a livable planet”.

Yes. Those sound like the right priorities. Now maybe you could figure out how to turn that lip service into actual policy.

How will AI affect inequality?

Oct 15 JDN 2460233

Will AI make inequality worse, or better? Could it do a bit of both? Does it depend on how we use it?

This is of course an extremely big question. In some sense it is the big economic question of the 21st century. The difference between the neofeudalist cyberpunk dystopia of Neuromancer and the social democratic utopia of Star Trek just about hinges on whether AI becomes a force for higher or lower inequality.

Krugman seems quite optimistic: Based on forecasts by Goldman Sachs, AI seems poised to automate more high-paying white-collar jobs than low-paying blue-collar ones.

But, well, it should be obvious that Goldman Sachs is not an impartial observer here. They do have reasons to get their forecasts right—their customers are literally invested in those forecasts—but like anyone who immensely profits from the status quo, they also have a broader agenda of telling the world that everything is going great and there’s no need to worry or change anything.

And when I look a bit closer at their graphs, it seems pretty clear that they aren’t actually answering the right question. They estimate an “exposure to AI” coefficient (somehow; their methodology is not clearly explained and lots of it is proprietary), and if it’s between 10% and 49% they call it “complementary” while if it’s 50% or above they call it “replacement”.

But that is not how complements and substitutes work. It isn’t a question of “how much of the work can be done by machine” (whatever that means). It’s a question of whether you will still need the expert human.

It could be that the machine does 90% of the work, but you still need a human being there to tell it what to do, and that would be complementary. (Indeed, this basically is how finance works right now, and I see no reason to think it will change any time soon.) Conversely, it could be that the machine only does 20% of the work, but that was the 20% that required expert skill, and so a once comfortable high-paying job can now be replaced by low-paid temp workers. (This is more or less what’s happening at Amazon warehouses: They are basically managed by AI, but humans still do most of the actual labor, and get paid peanuts for it.)
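
To make the distinction concrete, here is a toy sketch (hypothetical job names and numbers; this is not Goldman Sachs’s actual methodology, which is proprietary): a threshold on “share of tasks a machine can do” can label a job one way while the complement/substitute question points the other way.

```python
# Toy illustration (hypothetical examples; not Goldman Sachs's actual methodology).
# A task-share threshold and the complement/substitute question can disagree.

def threshold_label(exposure):
    """Threshold rule as described in the report: 10-49% 'complementary', 50%+ 'replacement'."""
    if exposure >= 0.50:
        return "replacement"
    if exposure >= 0.10:
        return "complementary"
    return "no automation"

def economic_label(human_still_needed):
    """The question that actually matters for wages: do you still need the expert human?"""
    return "complementary" if human_still_needed else "replacement"

# (job, share of tasks a machine can do, is the expert human still needed?)
examples = [
    ("portfolio manager", 0.90, True),    # machine does 90% of the work, human still directs it
    ("skilled machinist", 0.20, False),   # machine does 20%, but it was the expert 20%
]
for job, exposure, human_needed in examples:
    print(f"{job}: threshold says {threshold_label(exposure)}, economics says {economic_label(human_needed)}")
```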

For their category “computer and mathematical”, they call it “complementary”, and I agree: We are still going to need people who can code. We’re still going to need people who know how to multiply matrices. We’re still going to need people who understand search algorithms. Indeed, if the past is any indicator, we’re going to need more and more of those people, and they’re going to keep getting paid higher and higher salaries. Someone has to make the AI, after all.

Yet I’m not quite so sure about the “mathematical” part in many cases. We may not actually need many people who can solve differential equations: maybe a few to design the algorithms, but even then, a program running a simple finite-difference scheme can often tackle more interesting problems than one built around a full-fledged analytical solver. One of the dirty secrets of differential equations is that for some of the most important ones (like the Navier-Stokes equations), we simply do not know how to solve them analytically. Once you have enough computing power, you can often stop trying to be clever and just brute-force the damn thing.
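
To make that concrete, here is a minimal sketch of the brute-force approach (illustrative parameters, nothing sophisticated): an explicit finite-difference solver for the one-dimensional heat equation, which needs no closed-form solution at all.

```python
# Minimal finite-difference sketch: step the 1D heat equation u_t = alpha * u_xx
# forward in time by brute force. All parameters are illustrative.
import numpy as np

alpha = 0.01                      # diffusivity
nx = 101                          # spatial grid points on [0, 1]
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha          # respects the explicit-scheme stability limit dt <= dx^2 / (2*alpha)
nt = 5000                         # number of time steps

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-100 * (x - 0.5)**2)   # initial condition: a sharp heat pulse in the middle

for _ in range(nt):
    # central difference in space, forward Euler in time
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0            # fixed (Dirichlet) boundary conditions

print(f"Peak temperature after {nt} steps: {u.max():.4f}")   # the pulse has diffused away
```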

For “transportation and material movement”—that is, trucking—Goldman Sachs confidently forecasts mostly “no automation” with a bit of “complementary”. Yet this year—not at some distant point in the future, not in some sci-fi novel, this year in the actual world—the Governor of California already vetoed a bill that would have required automated trucks to have human drivers. The trucks aren’t on the roads yet—but if we are already making laws about them, they’re going to be, soon. (State legislatures are not known for their brilliant foresight or excessive long-term thinking.) And if the law doesn’t require them to have human drivers, they probably won’t; which means that hundreds of thousands of long-haul truckers will suddenly be out of work.

It’s also important to differentiate between different types of jobs that may fall under the same category or industry.

Neurosurgeons are not going anywhere, and improved robotics will only allow them to perform better, safer minimally invasive surgeries. Nor are nurses going anywhere, because some things just need an actual person physically there with the patient. But general practitioners, psychotherapists, and even radiologists are already seeing many of their tasks automated. So is “medicine” being automated or not? That depends what sort of medicine you mean. And yet it clearly means an increase in inequality, because it’s the middle-paying jobs (like GPs) that are going away, while the high-paying jobs (like neurosurgeons) and the low-paying jobs (like nurses) remain.

Likewise, consider “legal services”, which is one of the few industries that Goldman Sachs thinks will be substantially replaced by AI. Are high-stakes trial lawyers like Sam Bernstein getting replaced? Clearly not. Nor would I expect most corporate lawyers to disappear. Human lawyers will still continue to perform at least a little bit better than AI law systems, and the rich will continue to use them, because a few million dollars for a few percentage points better odds of winning is absolutely worth it when billions of dollars are on the line. So which law services are going to get replaced by AI? First, routine legal questions, like how to renew your work visa or set up a living will—it’s already happening. Next, someone will probably decide that public defenders aren’t worth the cost and start automating the legal defenses of poor people who get accused of crimes. (And to be honest, it may not be much worse than how things currently are in the public defender system.) The advantage of such a change is that it will most likely bring court costs down—and that is desperately needed. But it may also tilt the courts even further in favor of the rich. It may also make it even harder to start a career as a lawyer, cutting off the bottom of the ladder.

Or consider “management”, which Goldman Sachs thinks will be “complementary”. Are CEOs going to get replaced by AI? No, because the CEOs are the ones making that decision. Certainly this is true for any closely-held firm: No CEO is going to fire himself. Theoretically, if shareholders and boards of directors pushed hard enough, they might be able to get a CEO of a publicly-traded corporation ousted in favor of an AI, and if the world were really made of neoclassical rational agents, that might actually happen. But in the real world, the rich have tremendous solidarity for each other (and only each other), and very few billionaires are going to take aim at other billionaires when it comes time to decide whose jobs should be replaced. Yet, there are a lot of levels of management below the CEO and board of directors, and many of those are already in the process of being replaced: Instead of relying on the expert judgment of a human manager, it’s increasingly common to develop “performance metrics”, feed them into an algorithm, and use that result to decide who gets raises and who gets fired. It all feels very “objective” and “impartial” and “scientific”—and usually ends up being both dehumanizing and ultimately not even effective at increasing profits. At some point, many corporations are going to realize that their middle managers aren’t actually making any important decisions anymore, and they’ll feed that into the algorithm, and it will tell them to fire the middle managers.

Thus, even though we think of “medicine”, “law”, and “management” as high-paying careers, the effect of AI is largely going to be to increase inequality within those industries. It isn’t the really high-paid doctors, managers, and lawyers who are going to get replaced.

I am therefore much less optimistic than Krugman about this. I do believe there are many ways that technology, including artificial intelligence, could be used to make life better for everyone, and even perhaps one day lead us into a glorious utopian future.

But I don’t see most of the people who have the authority to make important decisions for our society actually working towards such a future. They seem much more interested in maximizing their own profits or advancing narrow-minded ideologies. (Or, as most right-wing political parties do today: Advancing narrow-minded ideologies about maximizing the profits of rich people.) And if we simply continue on the track we’ve been on, our future is looking a lot more like Neuromancer than it is like Star Trek.

Productivity can cope with laziness, but not greed

Oct 8 JDN 2460226

At least since Star Trek, it has been a popular vision of utopia: post-scarcity, an economy where goods are so abundant that there is no need for money or any kind of incentive to work, and people can just do what they want and have whatever they want.

It certainly does sound nice. But is it actually feasible? I’ve written about this before.

I’ve been reading some more books set in post-scarcity utopias, including some by Ursula K. Le Guin (who is a legend) and Cory Doctorow (who is merely pretty good). And it struck me that while there is one major problem of post-scarcity that these authors seem to have good solutions for, there is another one that they really don’t. (To their credit, neither author totally ignores it; they just don’t seem to see it as an insurmountable obstacle.)

The first major problem is laziness.

A lot of people assume that the reason we couldn’t achieve a post-scarcity utopia is that once your standard of living is no longer tied to your work, people would just stop working. I think this assumption rests on both an overly cynical view of human nature and an overly pessimistic view of technological progress.

Let’s do a thought experiment. If you didn’t get paid, and just had the choice to work or not, for whatever hours you wished, motivated only by the esteem of your peers, your contribution to society, and the joy of a job well done, how much would you work?

I contend it’s not zero. At least for most people, work does provide some intrinsic satisfaction. It’s also probably not as much as you are currently working; otherwise you wouldn’t insist on getting paid. Those are our lower and upper bounds.

Is it 80% of your current work? Perhaps not. What about 50%? Still too high? 20% seems plausible, but maybe you think that’s still too high. Surely it’s at least 10%. Surely you would be willing to work at least a few hours per week at a job you’re good at that you find personally fulfilling. My guess is that it would actually be more than that, because once people were free of the stress and pressure of working for a living, they would be more likely to find careers that truly brought them deep satisfaction and joy.

But okay, to be conservative, let’s estimate that people are only willing to work 10% as much under a system where labor is fully optional and there is no such thing as a wage. What kind of standard of living could we achieve?

Well, at the current level of technology and capital in the United States, per-capita GDP at purchasing power parity is about $80,000. 10% of that is $8,000. This may not sound like a lot, but it’s about how people currently live in Venezuela. India is slightly better, Ghana is slightly worse. This would feel poor to most Americans today, but it’s objectively a better standard of living than most humans have had throughout history, and not much worse than the world average today.

If per-capita GDP growth continues at its current rate of about 1.5% per year for another century, that $80,000 would become $320,000, 10% of which is $32,000—that would put us at the standard of living of present-day Bulgaria, or what the United States was like in the distant past of [checks notes] 1980. That wouldn’t even feel poor. In fact if literally everyone had this standard of living, nearly as many Americans today would be richer as would be poorer, since the current median personal income is only a bit higher than that.
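
(For what it’s worth, the arithmetic behind that figure is just compound growth with rule-of-70 rounding; exact compounding actually comes out a bit higher, so $320,000 is if anything conservative.)

```python
# Quick check of the compound-growth arithmetic behind the $320,000 figure.
growth_rate = 0.015                                   # ~1.5% per-capita GDP growth per year
gdp_pc_now = 80_000                                   # current US per-capita GDP (PPP), roughly

doubling_time = 70 / (growth_rate * 100)              # rule of 70: ~47 years per doubling
rough_century = gdp_pc_now * 2**2                     # ~two doublings in a century -> $320,000
exact_century = gdp_pc_now * (1 + growth_rate)**100   # ~$355,000 with exact compounding

print(f"Doubling time: ~{doubling_time:.0f} years")
print(f"Rule-of-70 estimate: ${rough_century:,.0f}; exact compounding: ${exact_century:,.0f}")
print(f"10% of the rough estimate: ${0.10 * rough_century:,.0f}")
```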

Thus, the utopian authors are right about this one: Laziness is a solvable problem. We may not quite have it solved yet, but it’s on the ropes; a few more major breakthroughs in productivity-enhancing technology and we’ll basically be there.

In fact, on a small scale, this sort of utopian communist anarchy already works, and has for centuries. There are little places, all around the world, where people gather together and live and work in a sustainable, basically self-sufficient way without being motivated by wages or salaries, indeed often without owning any private property at all.

We call these places monasteries.

Granted, life in a monastery clearly isn’t for everyone: I certainly wouldn’t want to live a life of celibacy and constant religious observance. But the long-standing traditions of monastic life in several very different world religions do prove that it’s possible for human beings to live and even flourish in the absence of a profit motive.

Yet the fact that monastic life is so strict turns out to be no coincidence: In a sense, it had to be for the whole scheme to work. I’ll get back to that in a moment.

The second major problem with a post-scarcity utopia is greed.

This is the one that I think is the real barrier. It may not be totally insurmountable, but thus far I have yet to hear any good proposals that would seriously tackle it.

The issue with laziness is that we don’t really want to work as much as we do. But since we do actually want to work a little bit, the question is simply how to make as much as we currently do while working only as much as we want to. Hence, to deal with laziness, all we need to do is be more efficient. That’s something we are shockingly good at; the overall productivity of our labor is now something like 100 times what it was at the dawn of the Industrial Revolution, and still growing all the time.

Greed is different. The issue with greed is that, no matter how much we have, we always want more.

Some people are clearly greedier than others. In fact, I’m even willing to bet that most people’s greed could be kept in check by a society that provided for everyone’s basic needs for free. Yeah, maybe sometimes you’d fantasize about living in a gigantic mansion or going into outer space; but most of the time, most of us could actually be pretty happy as long as we had a roof over our heads and food on our tables. I know that in my own case, my grandest ambitions largely involve fighting global poverty—so if that became a solved problem, my life’s ambition would be basically fulfilled, and I wouldn’t mind so much retiring to a life of simple comfort.

But is everyone like that? This is what anarchists don’t seem to understand. In order for anarchy to work, you need everyone to fit into that society. Most of us fitting in, or even nearly all of us, just won’t cut it.

Ammon Hennacy famously declared: “An anarchist is someone who doesn’t need a cop to make him behave.” But this is wrong. An anarchist is someone who thinks that no one needs a cop to make him behave. And while I am the former, I am not the latter.

Perhaps the problem is that anarchists don’t realize that not everyone is as good as they are. They implicitly apply their own mentality to everyone else, and assume that the only reason anyone ever cheats, steals, or kills is because their circumstances are desperate.

Don’t get me wrong: A lot of crime—perhaps even most crime—is committed by people who are desperate. Improving overall economic circumstances does in fact greatly reduce crime. But there is also a substantial proportion of crime—especially the most serious crimes—which is committed by people who aren’t particularly desperate; they are simply psychopaths. They aren’t victims of circumstance. They’re just evil. And society needs a way to deal with them.

If you set up a society so that anyone can just take whatever they want, there will be some people who take much more than their share. If you have no system of enforcement whatsoever, there’s nothing to stop a psychopath from just taking everything he can get his hands on. And then it really doesn’t matter how productive or efficient you are; whatever you make will simply get taken by whoever is greediest—or whoever is strongest.

In order to avoid that, you need to either set up a system that stops people from taking more than their share, or you need to find a way to exclude people like that from your society entirely.

This brings us back to monasteries. Why are they so strict? Why are the only places where utopian anarchism seems to flourish also places where people have to wear a uniform, swear vows, carry out complex rituals, and continually pledge their fealty to an authority? (Note, by the way, that I’ve also just described life in the military, which also has a lot in common with life in a monastery—and for much the same reasons.)

It’s a selection mechanism. Probably no one consciously thinks of it this way—indeed, it seems to be important to how monasteries work that people are not consciously weighing the costs and benefits of all these rituals. This is probably something that memetically evolved over centuries, rather than anything that was consciously designed. But functionally, that’s what it does: You only get to be part of a monastic community if you are willing to pay the enormous cost of following all these strict rules.

That makes it a form of costly signaling. Psychopaths are, in general, more prone to impulsiveness and short-term thinking. They are therefore less willing than others to bear the immediate cost of donning a uniform and following a ritual in order to get the long-term gains of living in a utopian community. This excludes psychopaths from ever entering the community, and thus protects against their predation.

Even celibacy may be a feature rather than a bug: Psychopaths are also prone to promiscuity. (And indeed, utopian communes that practice free love seem to have a much worse track record of being hijacked by psychopaths than monasteries that require celibacy!)

Of course, lots of people who aren’t psychopaths aren’t willing to pay those costs either—like I said, I’m not. So the selection mechanism is in a sense overly strict: It excludes people who would support the community just fine, but aren’t willing to pay the cost. But in the long run, this turns out to be less harmful than being too permissive and letting your community get hijacked and destroyed by psychopaths.

Yet if our goal is to make a whole society that achieves post-scarcity utopia, we can’t afford to be so strict. We already know that most people aren’t willing to become monks or nuns.

That means that we need a selection mechanism which is more reliable—more precisely, one with higher specificity.

I mentioned this in a previous post in the context of testing for viruses, but it bears repeating. Sensitivity and specificity are two complementary measures of a test’s accuracy. The sensitivity of a test is how likely it is to show positive if the truth is positive. The specificity of a test is how likely it is to show negative if the truth is negative.

As a test of psychopathy, monastic strictness has very high sensitivity: If you are a psychopath, there’s a very high chance it will weed you out. But it has quite low specificity: Even if you’re not a psychopath, there’s still a very high chance you won’t want to become a monk.
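
To put toy numbers on that (entirely made up, just to illustrate the two measures): suppose 1 in 100 would-be members is a psychopath, and monastic strictness screens out 99% of psychopaths but also 95% of everyone else.

```python
# Toy illustration of sensitivity vs. specificity (made-up numbers, not real data).
# Treat monastic strictness as a "test" for psychopathy: a positive result means
# the person is screened out of the community.

def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # P(screened out | actually a psychopath)
    specificity = tn / (tn + fp)   # P(admitted | not a psychopath)
    return sensitivity, specificity

# Of 10,000 would-be members, suppose 100 are psychopaths.
# Strictness screens out 99 of the 100 psychopaths (true positives)...
# ...but also screens out 9,400 of the 9,900 non-psychopaths (false positives).
sens, spec = sensitivity_specificity(tp=99, fn=1, tn=500, fp=9_400)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")   # ~0.99 and ~0.05
```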

For a utopian society to work, we need something that’s more specific, something that won’t exclude a lot of people who don’t deserve to be excluded. But it still needs to have much the same sensitivity, because letting psychopaths into your utopia is a very easy way to let that utopia destroy itself. We do not yet have such a test, nor any clear idea how we might create one.

And that, my friends, is why we can’t have nice things. At least, not yet.

AI and the “generalization faculty”

Oct 1 JDN 2460219

The phrase “artificial intelligence” (AI) has now become so diluted by overuse that we needed to invent a new term for its original meaning. That term is now “artificial general intelligence” (AGI). In the 1950s, AI meant the hypothetical possibility of creating artificial minds—machines that could genuinely think and even feel like people. Now it means… pathing algorithms in video games and chatbots? The goalposts seem to have moved a bit.

It seems that AGI has always been 20 years away. It was 20 years away 50 years ago, and it will probably be 20 years away 50 years from now. Someday it will really be 20 years away, and then, 20 years after that, it will actually happen—but I doubt I’ll live to see it. (XKCD also offers some insight here: “It has not been conclusively proven impossible.”)

We make many genuine advances in computer technology and software, which have profound effects—both good and bad—on our lives, but the dream of making a person out of silicon always seems to drift ever further into the distance, like a mirage on the desert sand.

Why is this? Why do so many people—even, perhaps especially, experts in the field—keep thinking that we are on the verge of this seminal, earth-shattering breakthrough, and ending up wrong—over, and over, and over again? How do such obviously smart people keep making the same mistake?

I think it may be because, all along, we have been laboring under the tacit assumption of a generalization faculty.

What do I mean by that? By “generalization faculty”, I mean some hypothetical mental capacity that allows you to generalize your knowledge and skills across different domains, so that once you get good at one thing, it also makes you good at other things.

This certainly seems to be how humans think, at least some of the time: Someone who is very good at chess is likely also pretty good at go, and someone who can drive a motorcycle can probably also drive a car. An artist who is good at portraits is probably not bad at landscapes. Human beings are, in fact, able to generalize, at least sometimes.

But I think the mistake lies in imagining that there is just one thing that makes us good at generalizing: Just one piece of hardware or software that allows you to carry over skills from any domain to any other. This is the “generalization faculty”—the imagined faculty that I think we do not have, indeed I think does not exist.

Computers clearly do not have the capacity to generalize. A program that can beat grandmasters at chess may be useless at go, and self-driving software that works on one type of car may fail on another, let alone a motorcycle. An art program that is good at portraits of women can fail when trying to do portraits of men, and produce horrific Daliesque madness when asked to make a landscape.

But if they did somehow have our generalization capacity, then, once they could compete with us at some things—which they surely can, already—they would be able to compete with us at just about everything. So if it were really just one thing that would let them generalize, let them leap from AI to AGI, then suddenly everything would change, almost overnight.

And so this is how the AI hype cycle goes, time and time again:

  1. A computer program is made that does something impressive, something that other computer programs could not do, perhaps even something that human beings are not very good at doing.
  2. If that same prowess could be generalized to other domains, the result would plainly be something on par with human intelligence.
  3. Therefore, the only thing this computer program needs in order to be sapient is a generalization faculty.
  4. Therefore, there is just one more step to AGI! We are nearly there! It will happen any day now!

And then, of course, despite heroic efforts, we are unable to generalize that program’s capabilities except in some very narrow way—even decades after having good chess programs, getting programs to be good at go was a major achievement. We are unable to find the generalization faculty yet again. And the software becomes yet another “AI tool” that we will use to search websites or make video games.

For there never was a generalization faculty to be found. It always was a mirage in the desert sand.

Humans are in fact spectacularly good at generalizing, compared to, well, literally everything else in the known universe. Computers are terrible at it. Animals aren’t very good at it. Just about everything else is totally incapable of it. So yes, we are the best at it.

Yet we, in fact, are not particularly good at it in any objective sense.

In experiments, people often fail to generalize their reasoning even in very basic ways. There’s a famous one where we try to get people to make an analogy between a military tactic and a radiation treatment, and while very smart, creative people often get it quickly, most people are completely unable to make the connection unless you give them a lot of specific hints. People often struggle to find creative solutions to problems even when those solutions seem utterly obvious once you know them.

I don’t think this is because people are stupid or irrational. (To paraphrase Sydney Harris: Compared to what?) I think it is because generalization is hard.

People tend to be much better at generalizing within familiar domains where they have a lot of experience or expertise; this shows that there isn’t just one generalization faculty, but many. We may have a plethora of overlapping generalization faculties that apply across different domains, and can learn to improve some over others.

But it isn’t just a matter of gaining more expertise. Highly advanced expertise is in fact usually more specialized—harder to generalize. A good amateur chess player is probably a good amateur go player, but a grandmaster chess player is rarely a grandmaster go player. Someone who does well in high school biology probably also does well in high school physics, but most biologists are not very good physicists. (And lest you say it’s simply because go and physics are harder: The converse is equally true.)

Humans do seem to have a suite of cognitive tools—some innate hardware, some learned software—that allows us to generalize our skills across domains. But even after hundreds of millions of years of evolving that capacity under the highest possible stakes, we still basically suck at it.

To be clear, I do not think it will take hundreds of millions of years to make AGI—or even millions, or even thousands. Technology moves much, much faster than evolution. But I would not be surprised if it took centuries, and I am confident it will at least take decades.

But we don’t need AGI for AI to have powerful effects on our lives. Indeed, even now, AI is already affecting our lives—in mostly bad ways, frankly, as we seem to be hurtling gleefully toward the very same corporatist cyberpunk dystopia we were warned about in the 1980s.

A lot of technologies have done great things for humanity—sanitation and vaccines, for instance—and even automation can be a very good thing, as increased productivity is how we attained our First World standard of living. But AI in particular seems best at automating away the kinds of jobs human beings actually find most fulfilling, and worsening our already staggering inequality. As a civilization, we really need to ask ourselves why we got automated writing and art before we got automated sewage cleaning or corporate management. (We should also ask ourselves why automated stock trading resulted in even more money for stock traders, instead of putting them out of their worthless parasitic jobs.) There are technological reasons for this, yes; but there are also cultural and institutional ones. Automated teaching isn’t far away, and education will be all the worse for it.

To change our lives, AI doesn’t have to be good at everything. It just needs to be good at whatever we were doing to make a living. AGI may be far away, but the impact of AI is already here.

Indeed, I think this quixotic quest for AGI, and all the concern about how to control it and what effects it will have upon our society, may actually be distracting from the real harms that “ordinary” “boring” AI is already having upon our society. I think a Terminator scenario, where the machines rapidly surpass our level of intelligence and rise up to annihilate us, is quite unlikely. But a scenario where AI puts millions of people out of work with insufficient safety net, triggering economic depression and civil unrest? That could be right around the corner.

Frankly, all it may take is getting automated trucks to work, which could be just a few years. There are nearly 4 million truck drivers in the United States—a full percentage point of employment unto itself. And the Governor of California just vetoed a bill that would require all automated trucks to have human drivers. From an economic efficiency standpoint, his veto makes perfect sense: If the trucks don’t need drivers, why require them? But from an ethical and societal standpoint… what do we do with all the truck drivers!?

How much should we give of ourselves?

Jul 23 JDN 2460149

This is a question I’ve written about before, but it’s a very important one—perhaps the most important question I deal with on this blog—so today I’d like to come back to it from a slightly different angle.

Suppose you could sacrifice all the happiness in the rest of your life, making your own existence barely worth living, in exchange for saving the lives of 100 people you will never meet.

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Think carefully about your answer. It may be tempting to say “yes”. It feels righteous to say “yes”.

But in fact this is not hypothetical. It is the actual situation you are in.

This GiveWell article is entitled “Why is it so expensive to save a life?” but that’s incredibly weird, because the actual figure they give is astonishingly, mind-bogglingly, frankly disgustingly cheap: It costs about $4500 to save one human life. I don’t know how you can possibly find that expensive. I don’t understand how anyone can think, “Saving this person’s life might max out a credit card or two; boy, that sure seems expensive!”

The standard for healthcare policy in the US is that something is worth doing if it is able to save one quality-adjusted life year for less than $50,000. That’s one year for ten times as much. Even accounting for the shorter lifespans and worse lives in poor countries, saving someone from a poor country for $4500 is at least one hundred times as cost-effective as that.

To put it another way, if you are a typical middle-class person in the First World, with an after-tax income of about $25,000 per year, and you were to donate 90% of that after-tax income to high-impact charities, you could be expected to save 5 lives every year. Over the course of a 30-year career, that’s 150 lives saved.
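
(The arithmetic here is easy enough to check directly, using the $4500-per-life and $50,000-per-QALY figures above.)

```python
# Checking the arithmetic from the figures above.
COST_PER_LIFE = 4_500          # GiveWell's rough cost to save one life
QALY_THRESHOLD = 50_000        # standard US cost-effectiveness threshold per QALY
AFTER_TAX_INCOME = 25_000      # stipulated middle-class after-tax income
DONATION_SHARE = 0.90
CAREER_YEARS = 30

lives_per_year = DONATION_SHARE * AFTER_TAX_INCOME / COST_PER_LIFE   # 5.0 lives per year
lives_per_career = lives_per_year * CAREER_YEARS                     # 150 lives over 30 years

# Even if a saved life were worth only ~10 quality-adjusted life years,
# that's ~$450 per QALY, more than 100 times better than the $50,000 threshold.
cost_per_qaly = COST_PER_LIFE / 10
print(lives_per_year, lives_per_career, QALY_THRESHOLD / cost_per_qaly)
```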

You would of course be utterly miserable for those 30 years, having given away all the money you could possibly have used for any kind of entertainment or enjoyment, not to mention living in the cheapest possible housing—maybe even a tent in a homeless camp—and eating the cheapest possible food. But you could do it, and you would in fact be expected to save over 100 lives by doing so.

So let me ask you again:

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Peter Singer often writes as though the answer to all these questions is “yes”. But even he doesn’t actually live that way. He gives a great deal to charity, mind you; no one seems to know exactly how much, but estimates range from at least 10% to up to 50% of his income. My general impression is that he gives about 10% of his ordinary income and more like 50% of big prizes he receives (which are in fact quite numerous). Over the course of his life he has certainly donated at least a couple million dollars. Yet he clearly could give more than he does: He lives a comfortable, upper-middle-class life.

Peter Singer’s original argument for his view, from his essay “Famine, Affluence, and Morality”, is actually astonishingly weak. It involves imagining a scenario where a child is drowning in a lake and you could go save them, but only at the cost of ruining your expensive suit.

Obviously, you should save the child. We all agree on that. You are in fact a terrible person if you wouldn’t save the child.

But Singer tries to generalize this into a principle that requires us to donate almost all of our income to international charities, and that just doesn’t follow.

First of all, that suit is not worth $4500. Not if you’re a middle-class person. That’s a damn Armani. No one who isn’t a millionaire wears suits like that.

Second, in the imagined scenario, you’re the only one who can help the kid. All I have to do is change that one thing and already the answer is different: If right next to you there is a trained, certified lifeguard, they should save the kid, not you. And if there are a hundred other people at the lake, and none of them is saving the kid… probably there’s a good reason for that? (It could be the bystander effect, but actually that’s much weaker than a lot of people think.) The responsibility doesn’t uniquely fall upon you.

Third, the drowning child is a one-off, emergency scenario that almost certainly will never happen to you, and if it does ever happen, will almost certainly only happen once. But donation is something you could always do, and you could do over and over and over again, until you have depleted all your savings and run up massive debts.

Fourth, in the hypothetical scenario, there is only one child. What if there were ten—or a hundred—or a thousand? What if you couldn’t possibly save them all by yourself? Should you keep going out there and saving children until you become exhausted and you yourself drown? Even if there is a lifeguard and a hundred other bystanders right there doing nothing?

And finally, in the drowning child scenario, you are right there. This isn’t some faceless stranger thousands of miles away. You can actually see that child in front of you. Peter Singer thinks that doesn’t matter—actually his central point seems to be that it doesn’t matter. But I think it does.

Singer writes:

It makes no moral difference whether the person I can help is a neighbor’s child ten yards away from me or a Bengali whose name I shall never know, ten thousand miles away.

That’s clearly wrong, isn’t it? Relationships mean nothing? Community means nothing? There is no moral value whatsoever to helping people close to us rather than random strangers on the other side of the planet?

One answer might be to say that the answer to question 4 is “no”. You aren’t a bad person for not doing everything you should, and even though something would be good if you did it, that doesn’t necessarily mean you should do it.

Perhaps some things are above and beyond the call of duty: Good, perhaps even heroic, if you’re willing to do them, but not something we are all obliged to do. The formal term for this is supererogatory. While I think that utilitarianism is basically correct overall and has done great things for human society, one thing most utilitarians seem to get wrong is denying that supererogatory actions exist.

Even then, I’m not entirely sure it is good to be this altruistic.

Someone who really believed that we owe as much to random strangers as we do to our friends and family would never show up to any birthday parties, because any time spent at a birthday party would be more efficiently spent earning-to-give to some high-impact charity. They would never visit their family on Christmas, because plane tickets are expensive and airplanes burn a lot of carbon.

They also wouldn’t concern themselves with whether their job is satisfying or even not totally miserable; they would only care about maximizing the total positive impact they could have on the world, either directly through their work or by raising as much money as possible and donating it all to charity.

They would rest only the minimum amount they require to remain functional, eat only the barest minimum of nutritious food, and otherwise work, work, work, constantly, all the time. If their body was capable of doing the work, they would continue doing the work. For there is not a moment to waste when lives are on the line!

A world full of people like that would be horrible. We would all live our entire lives in miserable drudgery trying to maximize the amount we can donate to faceless strangers on the other side of the planet. There would be no joy or friendship in that world, only endless, endless toil.

When I bring this up in the Effective Altruism community, I’ve heard people try to argue otherwise, basically saying that we would never need everyone to devote themselves to the cause at this level, because we’d soon solve all the big problems and be able to go back to enjoying our lives. I think that’s probably true—but it also kind of misses the point.

Yes, if everyone gave their fair share, that fair share wouldn’t have to be terribly large. But we know for a fact that most people are not giving their fair share. So what now? What should we actually do? Do you really want to live in a world where the morally best people are miserable all the time sacrificing themselves at the altar of altruism?

Yes, clearly, most people don’t do enough. In fact, most people give basically nothing to high-impact charities. We should be trying to fix that. But if I am already giving far more than my fair share, far more than I would have to give if everyone else were pitching in as they should—isn’t there some point at which I’m allowed to stop? Do I have to give everything I can or else I’m a monster?

The conclusion that we ought to make ourselves utterly miserable in order to save distant strangers feels deeply unsettling. It feels even worse if we say that we ought to do so, and worse still if we feel we are bad people if we don’t.

One solution would be to say that we owe absolutely nothing to these distant strangers. Yet that clearly goes too far in the opposite direction. There are so many problems in this world that could be fixed if more people cared just a little bit about strangers on the other side of the planet. Poverty, hunger, war, climate change… if everyone in the world (or really even just everyone in power) cared even 1% as much about random strangers as they do about themselves, all these would be solved.

Should you donate to charity? Yes! You absolutely should. Please, I beseech you, give some reasonable amount to charity—perhaps 5% of your income, or if you can’t manage that, maybe 1%.

Should you make changes in your life to make the world better? Yes! Small ones. Eat less meat. Take public transit instead of driving. Recycle. Vote.

But I can’t ask you to give 90% of your income and spend your entire life trying to optimize your positive impact. Even if it worked, it would be utter madness, and the world would be terrible if all the good people tried to do that.

I feel quite strongly that this is the right approach: Give something. Your fair share, or perhaps even a bit more, because you know not everyone will.

Yet it’s surprisingly hard to come up with a moral theory on which this is the right answer.

It’s much easier to develop a theory on which we owe absolutely nothing: egoism, or any deontology on which charity is not an obligation. And of course Singer-style utilitarianism says that we owe virtually everything: As long as QALYs can be purchased cheaper by GiveWell than by spending on yourself, you should continue donating to GiveWell.

I think part of the problem is that we have developed all these moral theories as if we were isolated beings, who act in a world that is simply beyond our control. It’s much like the assumption of perfect competition in economics: I am but one producer among thousands, so whatever I do won’t affect the price.

But what we really needed was a moral theory that could work for a whole society. Something that would still make sense if everyone did it—or better yet, still make sense if half the people did it, or 10%, or 5%. The theory cannot depend upon the assumption that you are the only one following it. It cannot simply “hold constant” the rest of society.

I have come to realize that the Effective Altruism movement, while probably mostly good for the world as a whole, has actually been quite harmful to the mental health of many of its followers, including myself. It has made us feel guilty for not doing enough, pressured us to burn ourselves out working ever harder to save the world. Because we do not give our last dollar to charity, we are told that we are murderers.

But there are real murderers in this world. While you were beating yourself up over not donating enough, Vladimir Putin was continuing his invasion of Ukraine, ExxonMobil was expanding its offshore drilling, Daesh was carrying out hundreds of terrorist attacks, QAnon was deluding millions of people, and the human trafficking industry was making $150 billion per year.

In other words, by simply doing nothing you are considerably better than the real monsters responsible for most of the world’s horror.

In fact, those starving children in Africa that you’re sending money to help? They wouldn’t need it, were it not for centuries of colonial imperialism followed by a series of corrupt and/or incompetent governments ruled mainly by psychopaths.

Indeed the best way to save those people, in the long run, would be to fix their governments—as has been done in places like Namibia and Botswana. According to the World Development Indicators, the proportion of people living below the UN extreme poverty line (currently $2.15 per day at purchasing power parity) has fallen from 36% to 16% in Namibia since 2003, and from 42% to 15% in Botswana since 1984. Compare this to some countries that haven’t had good governments over that time: In Cote d’Ivoire the same poverty rate was 8% in 1985 but is 11% today (and was actually as high as 33% in 2015), while in Congo it remains at 35%. Then there are countries that are trying, but just started out so poor it’s a long way to go: Burkina Faso’s extreme poverty rate has fallen from 82% in 1994 to 30% today.

In other words, if you’re feeling bad about not giving enough, remember this: if everyone in the world were as good as you, you wouldn’t need to give a cent.

Of course, simply feeling good about yourself for not being a psychopath doesn’t accomplish very much either. Somehow we have to find a balance: Motivate people enough so that they do something, get them to do their share; but don’t pressure them to sacrifice themselves at the altar of altruism.

I think part of the problem here—and not just here—is that the people who most need to change are the ones least likely to listen. The kind of person who reads Peter Singer is already probably in the top 10% of most altruistic people, and really doesn’t need much more than a slight nudge to be doing their fair share. And meanwhile the really terrible people in the world have probably never picked up an ethics book in their lives, or if they have, they ignored everything it said.

I don’t quite know what to do about that. But I hope I can at least convince you—and myself—to take some of the pressure off when it feels like we’re not doing enough.

We do seem to have better angels after all

Jun 18 JDN 2460114

A review of The Darker Angels of Our Nature

(I apologize for not releasing this on Sunday; I’ve been traveling lately and haven’t found much time to write.)

Since its release, I have considered Steven Pinker’s The Better Angels of Our Nature among a small elite category of truly great books—not simply good because enjoyable, informative, or well-written, but great in its potential impact on humanity’s future. Others include The General Theory of Employment, Interest, and Money, On the Origin of Species, and Animal Liberation.

But I also try to expose myself as much as I can to alternative views. I am quite fearful of the echo chambers that social media puts us in, where dissent is quietly hidden from view and groupthink prevails.

So when I saw that a group of historians had written a scathing critique of The Better Angels, I decided I surely must read it and get its point of view. This book is The Darker Angels of Our Nature.

The Darker Angels is written by a large number of different historians, and it shows. It’s an extremely disjointed book; it does not present any particular overall argument, various sections differ wildly in scope and tone, and sometimes they even contradict each other. It really isn’t a book in the usual sense; it’s a collection of essays whose only common theme is that they disagree with Steven Pinker.

In fact, even that isn’t quite true, as some of the best essays in The Darker Angels are actually the ones that don’t fundamentally challenge Pinker’s contention that global violence has been on a long-term decline for centuries and is now near its lowest in human history. These essays instead offer interesting insights into particular historical eras, such as medieval Europe, early modern Russia, and shogunate Japan, or they add additional nuances to the overall pattern, like the fact that, compared to medieval times, violence in Europe seems to have been less in the Pax Romana (before) and greater in the early modern period (after), showing that the decline in violence was not simple or steady, but went through fluctuations and reversals as societies and institutions changed. (At this point I feel I should note that Pinker clearly would not disagree with this—several of the authors seem to think he would, which makes me wonder if they even read The Better Angels.)

Others point out that the scale of civilization seems to matter, that more is different, and larger societies and armies more or less automatically seem to result in lower fatality rates by some sort of scaling or centralization effect, almost like the square-cube law. That’s very interesting if true; it would suggest that in order to reduce violence, you don’t really need any particular mode of government, you just need something that unites as many people as possible under one banner. The evidence presented for it was too weak for me to say whether it’s really true, however, and there was really no theoretical mechanism proposed whatsoever.

Some of the essays correct genuine errors Pinker made, some of which look rather sloppy. Pinker clearly overestimated the death tolls of the An Lushan Rebellion, the Spanish Inquisition, and Aztec ritual executions, probably by using outdated or biased sources. (Though they were all still extremely violent!) His depiction of indigenous cultures does paint with a very broad brush, and fails to recognize that some indigenous societies seem to have been quite peaceful (though others absolutely were tremendously violent).

One of the best essays is about Pinker’s cavalier attitude toward mass incarceration, which I absolutely do consider a deep flaw in Pinker’s view. Pinker presents increased incarceration rates along with decreased crime rates as if they were an unalloyed good, while I can at best be ambivalent about whether the benefit of decreasing crime is worth the cost of greater incarceration. Pinker seems to take for granted that these incarcerations are fair and impartial, when we have a great deal of evidence that they are strongly biased against poor people and people of color.

There’s another good essay about the Enlightenment, which Pinker seems to idealize a little too much (especially in his other book Enlightenment Now). There was no sudden triumph of reason that instantly changed the world. Human knowledge and rationality gradually improved over a very long period of time, with no obvious turning point and many cases of backsliding. The scientific method isn’t a simple, infallible algorithm that suddenly appeared in the brain of Galileo or Bayes, but a whole constellation of methods and concepts of rationality that took centuries to develop and is in fact still developing. (Much as the Tao that can be told is not the eternal Tao, the scientific method that can be written in a textbook is not the true scientific method.)

Several of the essays point out the limitations of historical and (especially) archaeological records, making it difficult to draw any useful inferences about rates of violence in the past. I agree that Pinker seems a little too cavalier about this; the records really are quite sparse and it’s not easy to fill in the gaps. Very small samples can easily distort homicide rates; since only about 1% of deaths worldwide are homicide, if you find 20 bodies, whether or not one of them was murdered is the difference between peaceful Japan and war-torn Colombia.
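To get a feel for how fragile such inferences are, here is a bit of toy binomial arithmetic (illustrative only; it assumes every death is equally likely to turn up in the record, which real archaeology certainly does not guarantee):

```python
from math import comb

# A quick illustration of why a sample of 20 skeletons is nearly useless for
# estimating a homicide share of about 1% of deaths (toy arithmetic, not data).

p = 0.01   # true share of deaths that are homicides
n = 20     # skeletons found at a hypothetical dig

for k in range(3):
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"P({k} homicides among {n} deaths) = {prob:.1%} -> estimated rate {k/n:.0%}")
# About 82% of such digs would report a homicide share of 0%, and most of the
# rest would report 5%; the true rate of 1% is an answer you essentially never get.
```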

On the other hand, all we really can do is make the best inferences we have with the available data, and for the time periods in which we do have detailed records—surely true since at least the 19th century—the pattern of declining violence is very clear, and even the World Wars look like brief fluctuations rather than fundamental reversals. Contrary to popular belief, the World Wars do not appear to have been especially deadly on a per-capita basis, compared to various historic wars. The primary reason so many people died in the World Wars was really that there just were more people in the world. A few of the authors don’t seem to consider this an adequate reason, but ask yourself this: Would you rather live in a society of 100 in which 10 people are killed, or a society of 1 billion in which 1 million are killed? In the former case your chances of being killed are 10%; in the latter, 0.1%. Clearly, per-capita measures of violence are the correct ones.

Some essays seem a bit beside the point, like one on “environmental violence” which quite aptly details the ongoing—terrifying—degradation of our global ecology, but somehow seems to think that this constitutes violence when it obviously doesn’t. There is widespread violence against animals, certainly; slaughterhouses are the obvious example—and unlike most people, I do not consider them some kind of exception we can simply ignore. We do in fact accept levels of cruelty to pigs and cows that we would never accept against dogs or horses—even the law makes such exceptions. Moreover, plenty of habitat destruction is accompanied by killing of the animals who lived in that habitat. But ecological degradation is not equivalent to violence. (Nor is it clear to me that our treatment of animals is more violent overall today than in the past; I guess life is probably worse for a beef cow today than it was in the medieval era, but either way, she was going to be killed and eaten. And at least we no longer do cat-burning.) Drilling for oil can be harmful, but it is not violent. We can acknowledge that life is more peaceful now than in the past without claiming that everything is better now—indeed, someone could even argue that overall life isn’t better, though I think they’d be hard-pressed to make that case.

These are the relatively good essays, which correct minor errors or add interesting nuances. There are also some really awful essays in the mix.

A common theme of several of the essays seems to be “there are still bad things, so we can’t say anything is getting better”; they will point out various forms of violence that undeniably still exist, and treat this as a conclusive argument against the claim that violence has declined. Yes, modern slavery does exist, and it is a very serious problem; but it clearly is not the same kind of atrocity that the Atlantic slave trade was. Yes, there are still murders. Yes, there are still wars. Probably these things will always be with us to some extent; but there is a very clear difference between 500 homicides per million people per year and 50—and it would be better still if we could bring it down to 5.

There’s one essay about sexual violence that doesn’t present any evidence whatsoever to contradict the claim that rates of sexual violence have been declining while rates of reporting and prosecution have been increasing. (These two trends together often result in reported rapes going up, but most experts agree that actual rapes are going down.) The entire essay is based on anecdote, innuendo, and righteous anger.

There are several essays that spend their whole time denouncing neoliberal capitalism (not even presenting any particularly good arguments against it, though such arguments do exist), seeming to equate Pinker’s view with some kind of Rothbardian anarcho-capitalism when in fact Pinker is explicitly in favor of Nordic-style social democracy. (One literally dismisses his support for universal healthcare as “Well, he is Canadian”.) But Pinker has on occasion said good things about capitalism, so clearly, he is an irredeemable monster.

Right in the introduction—which almost made me put the book down—is an astonishingly ludicrous argument, which I must quote in full to show you that it is not out of context:

What actually is violence (nowhere posed or answered in The Better Angels)? How do people perceive it in different time-place settings? What is its purpose and function? What were contemporary attitudes toward violence and how did sensibilities shift over time? Is violence always ‘bad’ or can there be ‘good’ violence, violence that is regenerative and creative?

The Darker Angels of Our Nature, p.16

Yes, the scare quotes on ‘good’ and ‘bad’ are in the original. (Also the baffling jargon “time-place settings” as opposed to, say, “times and places”.) This was clearly written by a moral relativist. Aside from questioning whether we can say anything about anything, the argument seems to be that Pinker’s argument is invalid because he didn’t precisely define every single relevant concept, even though it’s honestly pretty obvious what the word “violence” means and how he is using it. (If anything, it’s these authors who don’t seem to understand what the word means; they keep calling things “violence” that are indeed bad, but obviously aren’t violence—like pollution and cyberbullying. At least talk of incarceration as “structural violence” isn’t obvious nonsense—though it is still clearly distinct from murder rates.)

But it was by reading the worst essays that I think I gained the most insight into what this debate is really about. Several of the essays in The Darker Angels thoroughly and unquestioningly share the following inference: if a culture is superior, then that culture has a right to impose itself on others by force. On this, they seem to agree with the imperialists: If you’re better, that gives you a right to dominate everyone else. They rightly reject the claim that cultures have a right to imperialistically dominate others, but they cannot deny the inference, and so they are forced to deny that any culture can ever be superior to another. The result is that they tie themselves in knots trying to justify how greater wealth, greater happiness, less violence, and babies not dying aren’t actually good things. They end up talking nonsense about “violence that is regenerative and creative”.

But we can believe in civilization without believing in colonialism. And indeed that is precisely what I (along with Pinker) believe: That democracy is better than autocracy, that free speech is better than censorship, that health is better than illness, that prosperity is better than poverty, that peace is better than war—and therefore that Western civilization is doing a better job than the rest. I do not believe that this justifies the long history of Western colonial imperialism. Governing your own country well doesn’t give you the right to invade and dominate other countries. Indeed, part of what makes colonial imperialism so terrible is that it makes a mockery of the very ideals of peace, justice, and freedom that the West is supposed to represent.

I think part of the problem is that many people see the world in zero-sum terms, and believe that the West’s prosperity could only be purchased by the rest of the world’s poverty. But this is untrue. The world is nonzero-sum. My happiness does not come from your sadness, and my wealth does not come from your poverty. In fact, even the West was poor for most of history, and we are far more prosperous now that we have largely abandoned colonial imperialism than we ever were in imperialism’s heyday. (I do occasionally encounter British people who seem vaguely nostalgic for the days of the empire, but real median income in the UK has doubled just since 1977. Inequality has also increased during that time, which is definitely a problem; but the UK is undeniably richer now than it ever was at the peak of the empire.)

In fact it could be that the West is richer now because of colonialism than it would have been without it. I don’t know whether or not this is true. I suspect it isn’t, but I really don’t know for sure. My guess would be that colonized countries are poorer, but colonizer countries are not richer—that is, colonialism is purely destructive. Certain individuals clearly got richer by such depredation (Leopold II, anyone?), but I’m not convinced many countries did.

Yet even if colonialism did make the West richer, it clearly cannot explain most of the wealth of Western civilization—for that wealth simply did not exist in the world before. All these bridges and power plants, laptops and airplanes weren’t lying around waiting to be stolen. Surely, some of the ingredients were stolen—not least, the land. Had they been bought at fair prices, the result might have been less wealth for us (then again it might not, for wealthier trade partners yield greater exports). But this does not mean that the products themselves constitute theft, nor that the wealth they provide is meaningless. Perhaps we should find some way to pay reparations; undeniably, we should work toward greater justice in the future. But we do not need to give up all we have in order to achieve that justice.

There is a law of conservation of energy. It is impossible to create energy in one place without removing it from another. There is no law of conservation of prosperity. Making the world better in one place does not require making it worse in another.

Progress is real. Yes, it is flawed and uneven, and it has costs of its own; but it is real. If we want more of it, we had best continue to believe in it. The Better Angels of Our Nature does have some notable flaws, but it still retains its place among truly great books.

We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.

E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we will soon have the power to modify ourselves or we’ll all be replaced by machines?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien to those of our forebears, and we have reason to suspect that our descendants’ values will be no more different from ours than ours are from our ancestors’.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything, I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the world catch up to what the best of us already believe and strive for—hardly something to fear. The values that I believe in are surely not what we as a civilization act upon, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as a collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that he takes it to a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come where stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

Reckoning costs in money distorts them

May 7 JDN 2460072

Consider for a moment what it means when an economic news article reports “rising labor costs”. What are they actually saying?

They’re saying that wages are rising—perhaps in some industry, perhaps in the economy as a whole. But this is not a cost. It’s a price. As I’ve written about before, the two are fundamentally distinct.

The cost of labor is measured in effort, toil, and time. It’s the pain of having to work instead of whatever else you’d like to do with your time.

The price of labor is a monetary amount, which is delivered in a transaction.

This may seem perfectly obvious, but it has important and oft-neglected implications. A cost, once paid, is gone. That value has been destroyed. We hope that it was worth it for some benefit we gained. A price, when paid, is simply transferred: One person had that money before, now someone else has it. Nothing was gained or lost.

So in fact when reports say that “labor costs have risen”, what they are really saying is that income is being transferred from owners to workers without any change in real value taking place. They are framing as a loss what is fundamentally a zero-sum redistribution.

In fact, it is disturbingly common to see a fundamentally good redistribution of income framed in the press as a bad outcome because of its expression as “costs”; the “cost” of chocolate is feared to go up if we insist upon enforcing bans on forced labor—when in fact it is only the price that goes up, and the cost actually goes down: chocolate would no longer include complicity in an atrocity. The real suffering of making chocolate would be thereby reduced, not increased. Even when they aren’t literally enslaved, those workers are astonishingly poor, and giving them even a few more cents per hour would make a real difference in their lives. But God forbid we pay a few cents more for a candy bar!

If labor costs were to rise, that would mean that work had suddenly gotten harder, or more painful; or else, that some outside circumstance had made it more difficult to work. Having a child increases your labor costs—every hour you work is now an hour not spent caring for your child, an opportunity cost you didn’t have before. COVID increased the cost of labor, by making it suddenly dangerous just to go outside in public. That could also increase prices—you may demand a higher wage, and people do seem to have demanded higher wages after COVID. But these are two separate effects, and you can have one without the other. In fact, women typically see wage stagnation or even reduction after having kids (but men largely don’t), despite their real opportunity cost of labor having obviously greatly increased.

On an individual level, it’s not such a big mistake to equate price and cost. If you are buying something, its cost to you basically just is its price, plus a little bit of transaction cost for actually finding and buying it. But on a societal level, it makes an enormous difference. It distorts our policy priorities and can even lead to actively trying to suppress things that are beneficial—such as rising wages.

This false equivalence between price and cost seems to be at least as common among economists as it is among laypeople. Economists will often justify it on the grounds that in an ideal perfect competitive market the two would be in some sense equated. But of course we don’t live in that ideal perfect market, and even if we did, they would only be proportional at the margin, not fundamentally equal across the board. It would still be obviously wrong to characterize the total value or cost of work by the price paid for it; only the last unit of effort would be priced so that marginal value equals price equals marginal cost. The first 39 hours of your work would cost you less than what you were paid, and produce more than you were paid; only that 40th hour would set the three equal.
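Here is a tiny numerical sketch of that point (the curves and the wage are my own illustrative assumptions, nothing more): marginal cost and marginal value only meet the wage at the 40th hour, so the weekly totals are nowhere near equal.

```python
# Illustrative sketch: price equals cost only at the margin, not in total.
# The wage and both curves below are made-up numbers chosen so that all three
# quantities coincide exactly at hour 40.

WAGE = 20.0  # hypothetical hourly wage

def marginal_cost(hour):
    """Disutility of the hour-th hour of work, rising with fatigue."""
    return 20.0 * (hour / 40.0) ** 2      # equals the wage exactly at hour 40

def marginal_value(hour):
    """Value of the hour-th hour's output, falling with diminishing returns."""
    return 20.0 + (40 - hour)             # equals the wage exactly at hour 40

hours = range(1, 41)
total_pay   = WAGE * 40
total_cost  = sum(marginal_cost(h) for h in hours)
total_value = sum(marginal_value(h) for h in hours)

print(f"Pay:   {total_pay:.0f}")    # 800
print(f"Cost:  {total_cost:.0f}")   # well below 800
print(f"Value: {total_value:.0f}")  # well above 800
# Only at hour 40 do marginal cost, wage, and marginal value coincide;
# summed over the whole week, cost < pay < value.
```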

Once you account for all the various market distortions in the world, there’s no particular relationship between what something costs—in terms of real effort and suffering—and its price—in monetary terms. Things can be expensive and easy, or cheap and awful. In fact, they often seem to be; for some reason, there seems to be a pattern where the most terrible, miserable jobs (e.g. coal mining) actually pay the least, and the easiest, most pleasant jobs (e.g. stock trading) pay the most. Some jobs that benefit society pay well (e.g. doctors) and others pay terribly or not at all (e.g. climate activists). Some actions that harm the world get punished (e.g. armed robbery) and others get rewarded with riches (e.g. oil drilling). In the real world, whether a job is good or bad and whether it is paid well or poorly seem to be almost unrelated.

In fact, sometimes they seem even negatively related, where we often feel tempted to “sell out” and do something destructive in order to get higher pay. This is likely due to Berkson’s paradox: If people are willing to do jobs if they are either high-paying or beneficial to humanity, then we should expect that, on average, most of the high-paying jobs people do won’t be beneficial to humanity. Even if there were inherently no correlation or a small positive one, people’s refusal to do harmful low-paying work removes those jobs from our sample and results in a negative correlation in what remains.
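This is easy to see in a toy simulation (purely hypothetical numbers, just to illustrate the selection effect): generate pay and social benefit independently, throw out the jobs nobody would take, and a negative correlation appears in what remains.

```python
import random

# A toy Berkson's-paradox simulation: pay and benefit are drawn independently,
# but people only accept jobs that are high-paying OR clearly beneficial, so
# the accepted jobs show a negative pay-benefit correlation.

random.seed(0)

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

jobs = [(random.random(), random.random()) for _ in range(100_000)]  # (pay, benefit)
taken = [(p, b) for p, b in jobs if p > 0.6 or b > 0.6]  # refused if neither is decent

print("All possible jobs: ", round(correlation(*zip(*jobs)), 3))   # approximately 0
print("Jobs people accept:", round(correlation(*zip(*taken)), 3))  # clearly negative
```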

I think that the best solution, ultimately, is to stop reckoning costs in money entirely. We should reckon them in happiness.

This is of course much more difficult than simply using prices; it’s not easy to say exactly how many QALY are sacrificed in the extraction of cocoa beans or the drilling of offshore oil wells. But if we actually did find a way to count them, I strongly suspect we’d find that it was far more than we ought to be willing to pay.

A very rough approximation, surely flawed but at least a start, would be to simply convert all payments into proportions of their recipient’s income. For full-time wages, this would count basically everyone the same: if you work 40 hours per week, 50 weeks per year, then one hour of work is precisely 0.05% of your annual income. So we could say that whatever is equivalent to your hourly wage constitutes 500 microQALY.

This automatically implies that every time a rich person pays a poor person, QALY increase, while every time a poor person pays a rich person, QALY decrease. This is not an error in the calculation. It is a fact of the universe. We ignore it only at our own peril. All wealth redistributed downward is a benefit, while all wealth redistributed upward is a harm. That benefit may cause some other harm, or that harm may be compensated by some other benefit; but they are still there.

This would also put some things in perspective. When HSBC was fined £70 million for its crimes, that can be compared against its £1.5 billion in net income; if it were an individual, it would have been hurt about 50 milliQALY, which is about what I would feel if I lost $2000. Of course, it’s not a person, and it’s not clear exactly how this loss was passed through to employees or shareholders; but that should give us at least some sense of how small that loss was for them. They probably felt it… a little.

When Trump was ordered to pay a $1.3 million settlement, based on his $2.5 billion net wealth (corresponding to roughly $125 million in annual investment income), that cost him about 10 milliQALY; for me that would be about $500.

At the other extreme, if someone goes from making $1 per day to making $1.50 per day, that’s a 50% increase in their income—500 milliQALY per year.
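If you want the arithmetic behind these figures, here is a minimal sketch of the conversion; it is nothing more than the division described above, read (very roughly) as QALY.

```python
# Minimal sketch of the rough conversion: a payment is reckoned as the fraction
# of the recipient's (or payer's) annual income it represents, and that fraction
# is read as QALY. The figures below are the ones discussed in the text.

def payment_in_qaly(payment, annual_income):
    """Payment as a fraction of annual income, interpreted (roughly) as QALY."""
    return payment / annual_income

# One hour's wage for a full-time worker (2,000 hours per year):
print(payment_in_qaly(1, 2_000))         # 0.0005 -> 500 microQALY
# HSBC's £70 million fine against £1.5 billion in net income:
print(payment_in_qaly(70e6, 1.5e9))      # ~0.047 -> roughly 50 milliQALY
# Trump's $1.3 million settlement against ~$125 million in investment income:
print(payment_in_qaly(1.3e6, 125e6))     # ~0.010 -> roughly 10 milliQALY
# Going from $1 per day to $1.50 per day:
print(payment_in_qaly(0.50 * 365, 365))  # 0.5 -> 500 milliQALY per year
```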

For those who have no income at all, this becomes even trickier; for them I think we should probably use their annual consumption, since everyone needs to eat and that costs something, though likely not very much. Or we could try to measure their happiness directly, trying to determine how much it hurts to not eat enough and work all day in sweltering heat.

Properly shifting this whole cultural norm will take a long time. For now, I leave you with this: Any time you see a monetary figure, ask yourself: “How much is that worth to them?” The world will seem quite different once you get in the habit of that.

What happens when a bank fails

Mar 19 JDN 2460023

As of March 9, Silicon Valley Bank (SVB) has failed and officially been put into receivership under the FDIC. A bank that held $209 billion in assets has suddenly become insolvent.

This is the second-largest bank failure in US history, after Washington Mutual (WaMu) in 2008. In fact it will probably have more serious consequences than WaMu, for two reasons:

1. WaMu collapsed as part of the Great Recession, so there were already a lot of other things going on and a lot of policy responses already in place.

2. WaMu was mostly a conventional commercial bank that held deposits and loans for consumers, so its deposits were largely protected by the FDIC, and thus its bankruptcy didn’t cause contagion that spread out to the rest of the system. (Other banks—shadow banks—did during the crash, but not so much WaMu.) SVB mostly served tech startups, so a whopping 89% of its deposits were not protected by FDIC insurance.

You’ve likely heard of many of the companies that had accounts at SVB: Roku, Roblox, Vimeo, even Vox. Stocks of the US financial industry lost $100 billion in value in two days.

The good news is that this will not be catastrophic. It probably won’t even trigger a recession (though the high interest rates we’ve been having lately potentially could drive us over that edge). Because this is commercial banking, it’s done out in the open, with transparency and reasonably good regulation. The FDIC knows what they are doing, and even though they aren’t covering all those deposits directly, they intend to find a buyer for the bank who will, and odds are good that they’ll be able to cover at least 80% of the lost funds.

In fact, while this one is exceptionally large, bank failures are not really all that uncommon. There have been nearly 100 failures of banks with assets over $1 billion in the US alone just since the 1970s. The FDIC exists to handle bank failures, and generally does the job well.

Then again, it’s worth asking whether we should really have a banking system in which failures are so routine.

The reason banks fail is kind of a dark open secret: They don’t actually have enough money to cover their deposits.

Banks loan away most of their cash, and rely upon the fact that most of their depositors will not want to withdraw their money at the same time. They are required to keep a certain ratio in reserves, but it’s usually fairly small, like 10%. This is called fractional-reserve banking.

As long as less than 10% of deposits get withdrawn at any given time, this works. But if a bunch of depositors suddenly decide to take out their money, the bank may not have enough to cover it all, and suddenly become insolvent.

In fact, the fear that a bank might become insolvent can actually cause it to become insolvent, in a self-fulfilling prophecy. Once depositors get word that the bank is about to fail, they rush to be the first to get their money out before it disappears. This is a bank run, and it’s basically what happened to SVB.

The FDIC was originally created to prevent or mitigate bank runs. Not only did they provide insurance that reduced the damage in the event of a bank failure; by assuring depositors that their money would be recovered even if the bank failed, they also reduced the chances of a bank run becoming a self-fulfilling prophecy.


Indeed, SVB is the exception that proves the rule, as they failed largely because their assets were mainly not FDIC insured.

Fractional-reserve banking effectively allows banks to create money, in the form of credit that they offer to borrowers. That credit gets deposited in other banks, which then go on to loan it out to still others; the result is that there is more money in the system than was ever actually printed by the central bank.

In most economies this commercial bank money is a far larger quantity than the central bank money actually printed by the central bank—often nearly 10 to 1. This ratio is called the money multiplier.

Indeed, it’s not a coincidence that the reserve ratio is 10% and the multiplier is 10; the theoretical maximum multiplier is always the inverse of the reserve ratio, so if you require reserves of 10%, the highest multiplier you can get is 10. Had we required 20% reserves, the multiplier would drop to 5.
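If you want to see where that inverse relationship comes from, here is a small sketch (illustrative numbers only, not a model of any real bank) that follows a deposit as it gets loaned out and redeposited over and over:

```python
# Sketch of the money multiplier under fractional-reserve banking: each bank
# keeps reserve_ratio of every deposit and loans out the rest, which then gets
# deposited at the next bank, and so on. The total converges to 1/reserve_ratio
# times the original deposit.

def total_money_created(initial_deposit, reserve_ratio, rounds=1000):
    """Sum of deposits across the system after repeated lending and redepositing."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit                   # money now sitting in some account
        deposit *= (1 - reserve_ratio)     # the part the bank loans back out
    return total

base = 100.0
for ratio in (0.10, 0.20):
    created = total_money_created(base, ratio)
    print(f"reserve ratio {ratio:.0%}: multiplier is about {created / base:.1f}")
# reserve ratio 10%: multiplier is about 10.0
# reserve ratio 20%: multiplier is about 5.0
```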

Most countries have fractional-reserve banking, and have for centuries; but it’s actually a pretty weird system if you think about it.

Back when we were on the gold standard, fractional-reserve banking was a way of cheating, getting our money supply to be larger than the supply of gold would actually allow.

But now that we are on a pure fiat money system, it’s worth asking what fractional-reserve banking actually accomplishes. If we need more money, the central bank could just print more. Why do we delegate that task to commercial banks?

David Friedman of the Cato Institute had some especially harsh words on this, but honestly I find them hard to disagree with:

Before leaving the subject of fractional reserve systems, I should mention one particularly bizarre variant — a fractional reserve system based on fiat money. I call it bizarre because the essential function of a fractional reserve system is to reduce the resource cost of producing money, by allowing an ounce of reserves to replace, say, five ounces of currency. The resource cost of producing fiat money is zero; more precisely, it costs no more to print a five-dollar bill than a one-dollar bill, so the cost of having a larger number of dollars in circulation is zero. The cost of having more bills in circulation is not zero but small. A fractional reserve system based on fiat money thus economizes on the cost of producing something that costs nothing to produce; it adds the disadvantages of a fractional reserve system to the disadvantages of a fiat system without adding any corresponding advantages. It makes sense only as a discreet way of transferring some of the income that the government receives from producing money to the banking system, and is worth mentioning at all only because it is the system presently in use in this country.

Our banking system evolved gradually over time, and seems to have held onto many features that made more sense in an earlier era. Back when we had arbitrarily tied our central bank money supply to gold, creating a new money supply that was larger may have been a reasonable solution. But today, it just seems to be handing the reins over to private corporations, giving them more profits while forcing the rest of society to bear more risk.

The obvious alternative is full-reserve banking, where banks are simply required to hold 100% of their deposits in reserve and the multiplier drops to 1. This idea has been supported by a number of quite prominent economists, including Milton Friedman.

It’s not just a right-wing idea: The left-wing organization Positive Money is dedicated to advocating for a full-reserve banking system in the UK and EU. (The ECB VP’s criticism of the proposal is utterly baffling to me: it “would not create enough funding for investment and growth.” Um, you do know you can print more money, right? Hm, come to think of it, maybe the ECB doesn’t know that, because they think inflation is literally Hitler. There are legitimate criticisms to be had of Positive Money’s proposal, but “There won’t be enough money under this fiat money system” is a really weird take.)

There’s a relatively simple way to gradually transition from our current system to a full-reserve system: Simply increase the reserve ratio over time, and print more central bank money to keep the total money supply constant. If we find that it seems to be causing more problems than it solves, we could stop or reverse the trend.
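Here is what that schedule looks like in numbers (the money-supply figure is arbitrary; the point is only that the printed base must rise exactly as fast as the multiplier falls):

```python
# Sketch of the gradual transition described above: raise the reserve ratio in
# steps while printing enough central bank money to hold the total (multiplied)
# money supply constant. All figures are illustrative.

target_money_supply = 1000.0   # hypothetical total money supply to preserve

print("reserve ratio -> base money the central bank must supply")
for step in range(1, 11):
    reserve_ratio = 0.10 * step            # 10%, 20%, ..., 100%
    multiplier = 1 / reserve_ratio         # theoretical maximum multiplier
    base_needed = target_money_supply / multiplier
    print(f"{reserve_ratio:>5.0%} -> {base_needed:7.1f}")
# At 100% reserves the base equals the whole money supply: full-reserve banking.
```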

Krugman has pointed out that this wouldn’t really fix the problems in the banking system, which actually seem to be much worse in the shadow banking sector than in conventional commercial banking. This is clearly right, but it isn’t really an argument against trying to improve conventional banking. I guess if stricter regulations on conventional banking push more money into the shadow banking system, that’s bad; but really that just means we should be imposing stricter regulations on the shadow banking system first (or simultaneously).

We don’t need to accept bank runs as a routine part of the financial system. There are other ways of doing things.

Where is the money going in academia?

Feb 19 JDN 2459995

A quandary for you:

My salary is £41,000.

Annual tuition for a full-time full-fee student in my department is £23,000.

I teach roughly the equivalent of one full-time course (about 1/2 of one and 1/4 of two others; this is typically counted as “teaching 3 courses”, but if I used that figure, it would underestimate the number of faculty needed).

Each student takes about 5 or 6 courses at a time.

Why do I have 200 students?

If you multiply this out, the 200 students I teach, divided by the 6 instructors they have at one time, times the £23,000 they are paying… I should be bringing in over £760,000 for the university. Why am I paid only 5% of that?

Granted, there are other costs a university must bear aside from paying instructors. There are facilities, and administration, and services. And most of my students are not full-fee paying; that £23,000 figure really only applies to international students.

Students from Scotland pay only £1,820, but there aren’t very many of them, and public funding is supposed to make up that difference. Even students from the rest of the UK pay £9,250. And surely the average tuition paid has got to be close to that? Yet if we multiply that out, £9,000 times 200 divided by 6, we’re still looking at £300,000. So I’m still getting only 14%.
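For what it’s worth, here is that arithmetic spelled out (the figures are the ones above; the per-student averages are rough guesses, not official university data):

```python
# Quick check of the revenue arithmetic above: revenue attributable to one
# instructor is (students / instructors per student) times the fee they pay.

students_per_instructor = 200 / 6          # 200 students, each with ~6 instructors
salary = 41_000                            # my salary, in pounds

for label, fee in [("international fee", 23_000), ("home fee", 9_000)]:
    revenue = students_per_instructor * fee
    share = salary / revenue
    print(f"{label}: revenue is about £{revenue:,.0f}, salary share is about {share:.0%}")
# international fee: revenue is about £766,667, salary share is about 5%
# home fee: revenue is about £300,000, salary share is about 14%
```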

Where is the rest going?

This isn’t specific to my university by any means. It seems to be a global phenomenon. The best data on this seems to be from the US.

According to salary.com, the median salary for an adjunct professor in the US is about $63,000. This actually sounds high, given what I’ve heard from other entry-level faculty. But okay, let’s take that as our figure. (My pay is below this average, though how much depends upon the strength of the pound against the dollar. Currently the pound is weak, so quite a bit.)

Yet average tuition for out-of-state students at public college is $23,000 per year.

This means that an adjunct professor in the US with 200 students takes in $760,000 but receives $63,000. Where does that other $700,000 go?

If you think that it’s just a matter of paying for buildings, service staff, and other costs of running a university, consider this: It wasn’t always this way.

Since 1970, inflation-adjusted salaries for US academic faculty at public universities have risen a paltry 3.1%. In other words, basically not at all.

This is considerably slower than the growth of real median household income, which has risen almost 40% in that same time.

Over the same interval, nominal tuition has risen by over 2000%; adjusted for inflation, this is a still-staggering increase of 250%.

In other words, over the last 50 years, college has gotten roughly three and a half times as expensive, but faculty are still paid basically the same. Where is all this extra money going?

Part of the explanation is that public funding for colleges has fallen over time, and higher tuition partly makes up the difference. But private school tuition has risen just as fast, and their faculty salaries haven’t kept up either.

In their annual budget report, the University of Edinburgh proudly declares that their income increased by 9% last year. Let me assure you, my salary did not. (In fact, inflation-adjusted, my salary went down.) And their EBITDA—earnings before interest, taxes, depreciation, and amortization—was £168 million. Of that, £92 million was lost to interest and depreciation, but they don’t pay taxes at all, so their real net income was about £76 million. In the report, they include price changes of their endowment and pension funds to try to make this number look smaller, ending up with only £37 million, but that’s basically fiction; these are just stock market price drops, and they will bounce back.

Using similar financial alchemy, they’ve been trying to cut our pensions lately, because they say they “are too expensive” (because the stock market went down—nevermind that it’ll bounce back in a year or two). Fortunately, the unions are fighting this pretty hard. I wish they’d also fight harder to make them put people like me on the tenure track.

Had that £76 million been distributed evenly between all 5,000 of us faculty, we’d each get an extra £15,200.

Well, then, that solves part of the mystery in perhaps the most obvious, corrupt way possible: They’re literally just hoarding it.

And Edinburgh is far from the worst offender here. No, that would be Harvard, who are sitting on over $50 billion in assets. Since they have 21,000 students, that is over $2 million per student. With even a moderate return on its endowment, Harvard wouldn’t need to charge tuition at all.
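The back-of-the-envelope version, with a 5% return standing in for “moderate” (that rate is my assumption, not a claim about Harvard’s actual endowment performance):

```python
# Back-of-the-envelope check of the Harvard figures above; the 5% return is a
# hypothetical "moderate" rate, not Harvard's actual performance.

endowment = 50e9          # assets, as cited above
students = 21_000
assumed_return = 0.05     # hypothetical moderate annual return

per_student_assets = endowment / students
per_student_income = per_student_assets * assumed_return
print(f"assets per student: ${per_student_assets:,.0f}")         # ~$2,380,952
print(f"annual return per student: ${per_student_income:,.0f}")  # ~$119,048
```

On those assumptions, the endowment alone throws off well over $100,000 per student per year, which is several times any plausible tuition.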

But even then, raising my salary to £56,000 wouldn’t explain why I need to teach 200 students. Even that is still only 19% of the £300,000 those students are bringing in. But hey, then at least the primary service those students are here for might actually account for one-fifth of what they’re paying!

Now let’s consider administrators. Median salary for a university administrator in the US is about $138,000—twice what adjunct professors make.


Since 1970, that same time interval when faculty salaries were rising a pitiful 3% and tuition was rising a staggering 250%, how much did chancellors’ salaries increase? Over 60%.

Of course, the number of administrators is not fixed. You might imagine that with technology allowing us to automate a lot of administrative tasks, the number of administrators could be reduced over time. If that’s what you thought happened, you would be very, very wrong. The number of university administrators in the US has more than doubled since the 1980s. This is far faster growth than the number of students—and quite frankly, why should the number of administrators even grow with the number of students? There is a clear economy of scale here, yet it doesn’t seem to matter.

Combine those two facts: 60% higher pay times twice as many administrators means that universities now spend at least 3 times as much on administration as they did 50 years ago. (Why, that’s just about the proportional increase in tuition! Coincidence? I think not.)

Edinburgh isn’t even so bad in this regard. They have 6,000 administrative staff versus 5,000 faculty. If that already sounds crazy—more admins than instructors?—consider that the University of Michigan has 7,000 faculty but 19,000 administrators.

Michigan is hardly exceptional in this regard: Illinois UC has 2,500 faculty but nearly 8,000 administrators, while Ohio State has 7,300 faculty and 27,000 administrators. UCLA is even worse, with only 4,000 faculty but 26,000 administrators—a ratio of 6 to 1. It’s not the UC system in general, though: My (other?) alma mater of UC Irvine somehow supports 5,600 faculty with only 6,400 administrators. Yes, that’s right; compared to UCLA, UCI has 40% more faculty but 76% fewer administrators. (As far as students? UCLA has 47,000 while UCI has 36,000.)

At last, I think we’ve solved the mystery! Where is all the money in academia going? Administrators.

They keep hiring more and more of them, and paying them higher and higher salaries. Meanwhile, they stop hiring tenure-track faculty and replace them with adjuncts that they can get away with paying less. And then, whatever they manage to save that way, they just squirrel away into the endowment.

A common right-wing talking point is that more institutions should be “run like a business”. Well, universities seem to have taken that to heart. Overpay your managers, underpay your actual workers, and pocket the savings.