How to detect discrimination, empirically

Aug 25 JDN 2460548

For concreteness, I’ll use men and women as my example, though the same principles would apply for race, sexual orientation, and so on. Suppose we find that there are more men than women in a given profession; does this mean that women are being discriminated against?

Not necessarily. Maybe women are less interested in that kind of work, or innately less qualified. Is there a way we can determine empirically that it really is discrimination?

It turns out that there is. All we need is a reliable measure of performance in that profession. Then we compare performance between men and women. The key insight is that workers in a job are not a random sample; they are a selected sample, and the pattern of that selection can tell us whether discrimination is happening.

Here’s a simple model to show how this works.

Suppose there are five different skill levels in the job, from 1 to 5 where 5 is the most skilled. And suppose there are 5 women and 5 men in the population.

1. Baseline

The baseline case to consider is when innate talents are equal and there is no discrimination. In that case, we should expect men and women to be equally represented in the profession.

For the simplest case, let’s say that there is one person at each skill level:

Men | Women
1   | 1
2   | 2
3   | 3
4   | 4
5   | 5

Now suppose that everyone above a certain skill threshold gets hired. Since we’re assuming no discrimination, the threshold should be the same for men and women. Let’s say it’s 3; then these are the people who get hired:

Hired Men | Hired Women
3         | 3
4         | 4
5         | 5

The result is that not only are there the same number of men and women in the job, but their skill levels are also the same. There are just as many highly-competent men as highly-competent women.
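If it helps to see the mechanics, here’s the same selection rule as a minimal Python sketch, using the toy numbers from the tables above (the function and variable names are just mine for illustration):

```python
# Minimal sketch of the selection model: everyone whose skill meets the
# hiring threshold gets the job. Numbers match the toy tables above.

def hire(population, threshold):
    """Return the members of the population at or above the skill threshold."""
    return [skill for skill in population if skill >= threshold]

men = [1, 2, 3, 4, 5]
women = [1, 2, 3, 4, 5]  # baseline: identical talent distributions

hired_men = hire(men, threshold=3)
hired_women = hire(women, threshold=3)  # same threshold, i.e. no discrimination

print(hired_men, hired_women)  # [3, 4, 5] [3, 4, 5]: same numbers, same skills
```

Changing the women’s skill list gives case 2 below; raising the women’s threshold gives case 3.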

2. Innate Differences

Now, suppose there is some innate difference in talent between men and women for this job. For most jobs this seems suspicious, but consider pro sports: Men really are better at basketball, in general, than women, and this is pretty clearly genetic. So it’s not absurd to suppose that for at least some jobs, there might be some innate differences. What would that look like?


Again suppose a population of 5 men and 5 women, but now the women are a bit less qualified: There are two 1s and no 5s among the women.

Men | Women
1   | 1
2   | 1
3   | 2
4   | 3
5   | 4

Then, this is the group that will get hired:

Hired Men | Hired Women
3         | 3
4         | 4
5         |

The result will be fewer women who are on average less qualified. The most highly-qualified individuals at that job will be almost entirely men. (In this simple model, entirely men; but you can easily extend it so that there are a few top-qualified women.)

This is in fact what we see for a lot of pro sports; in a head-to-head match, even the best WNBA teams would generally lose against most NBA teams. That’s what it looks like when there are real innate differences.

But it’s hard to find clear examples outside of sports. The genuine, large differences in size and physical strength between the sexes just don’t seem to be associated with similar differences in mental capabilities or even personality. You can find some subtler effects, but nothing very large—and certainly nothing large enough to explain the huge gender gaps in various industries.

3. Discrimination

What does it look like when there is discrimination?

Now assume that men and women are equally qualified, but it’s harder for women to get hired, because of discrimination. The key insight here is that this amounts to women facing a higher threshold. Where men only need to have level 3 competence to get hired, women need level 4.

So if the population looks like this:

Men | Women
1   | 1
2   | 2
3   | 3
4   | 4
5   | 5

The hired employees will look like this:

Hired Men | Hired Women
3         |
4         | 4
5         | 5

Once again we’ll have fewer women in the profession, but they will be on average more qualified. The top-performing individuals will be as likely to be women as they are to be men, while the lowest-performing individuals will be almost entirely men.

This is the kind of pattern we observe when there is discrimination. Do we see it in real life?

Yes, we see it all the time.

Corporations with women CEOs are more profitable.

Women doctors have better patient outcomes.

Startups led by women are more likely to succeed.

This shows that there is some discrimination happening, somewhere in the process. Does it mean that individual firms are actively discriminating in their hiring process? No, it doesn’t. The discrimination could be happening somewhere else; maybe it happens during education, or once women get hired. Maybe it’s a product of sexism in society as a whole, that isn’t directly under the control of employers. But it must be in there somewhere. If women are both rarer and more competent, there must be some discrimination going on.

What if there is also innate difference? We can detect that too!

4. Both

Suppose now that men are on average more talented, but there is also discrimination against women. Then the population might look like this:

Men | Women
1   | 1
2   | 1
3   | 2
4   | 3
5   | 4

And the hired employees might look like this:

Hired Men | Hired Women
3         |
4         |
5         | 4

In such a scenario, you’ll see a large gender imbalance, but there may not be a clear difference in competence. The tiny fraction of women who get hired will perform about as well as the men, on average.

Of course, this assumes that the two effects are of equal strength. In reality, we might see a whole spectrum of possibilities, from very strong discrimination with no innate differences, all the way to very large innate differences with no discrimination. The outcomes will then fall along a similar spectrum: When discrimination is much larger than innate difference, women will be rare but more competent. When innate difference is much larger than discrimination, women will be rare and less competent. And when there is a mix of both, women will be rare but won’t show as much difference in competence.

Moreover, if you look closer at the distribution of performance, you can still detect the two effects independently. If the lowest-performing workers are almost all men, that’s evidence of discrimination against women; while if the highest-performing workers are almost all men, that’s evidence of innate difference. And if you look at the table above, that’s exactly what we see: Both the 3 and the 5 are men, indicating the presence of both effects.
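If you want to see how these signals separate in a bigger sample, here’s a rough simulation sketch. Everything in it is an assumption for illustration (continuous skill drawn from a bell curve, 100,000 workers of each sex, arbitrary thresholds), but it shows how the share of women among hires, their average performance, and the two tails of the hired distribution distinguish the four cases:

```python
import random

random.seed(0)
N = 100_000  # workers of each sex; an arbitrary illustrative size

def skills(mean):
    """Draw N skill levels from a bell curve with the given mean."""
    return [random.gauss(mean, 1.0) for _ in range(N)]

def hire(population, threshold):
    """Keep everyone at or above the hiring threshold."""
    return [s for s in population if s >= threshold]

def women_share(group):
    """Fraction of a list of (skill, sex) pairs who are women."""
    return sum(1 for _, sex in group if sex == "W") / len(group)

def average(xs):
    return sum(xs) / len(xs)

# (men's mean, women's mean, men's threshold, women's threshold), all made up
scenarios = {
    "1. baseline":          (0.0,  0.0, 1.0, 1.0),
    "2. innate difference": (0.0, -0.5, 1.0, 1.0),
    "3. discrimination":    (0.0,  0.0, 1.0, 1.5),
    "4. both":              (0.0, -0.5, 1.0, 1.5),
}

for name, (m_mean, w_mean, m_thr, w_thr) in scenarios.items():
    men, women = hire(skills(m_mean), m_thr), hire(skills(w_mean), w_thr)
    pool = sorted([(s, "M") for s in men] + [(s, "W") for s in women])
    decile = len(pool) // 10
    print(f"{name:22s}"
          f" women among hires: {women_share(pool):.2f}"
          f"  avg skill M/W: {average(men):.2f}/{average(women):.2f}"
          f"  women in bottom 10%: {women_share(pool[:decile]):.2f}"  # flags discrimination
          f"  women in top 10%: {women_share(pool[-decile:]):.2f}")   # flags innate difference
```

The pattern to look for is the one just described: women scarce at the bottom of the hired pool points to discrimination, while women scarce at the top points to innate difference.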

What does affirmative action do?

Effectively, affirmative action lowers the threshold for hiring women (or minorities) in order to equalize representation in the workplace. In the presence of discrimination raising that threshold, this is exactly what we need! It can take us from case 3 (discrimination) to case 1 (equality), or from case 4 (both discrimination and innate difference) to case 2 (innate difference only).

Of course, it’s possible for us to overshoot, using more affirmative action than we should have. If we achieve better representation of women, but the lowest performers at the job are women, then we have overshot, effectively now discriminating against men. Fortunately, there is very little evidence of this in practice. In general, even with affirmative action programs in place, we tend to find that the lowest performers are still men—so there is still discrimination against women that we’ve failed to compensate for.
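In terms of the same toy simulation, affirmative action is just a downward correction to the threshold women face, and the overshoot test is whether the bottom of the hired pool flips from mostly men to mostly women. Here is a self-contained sketch (again, every number is an arbitrary assumption):

```python
import random

random.seed(1)
N = 100_000  # same illustrative population size as the sketch above

def hire(mean, threshold):
    """Draw N skill levels from a bell curve and keep those at or above the threshold."""
    return [s for s in (random.gauss(mean, 1.0) for _ in range(N)) if s >= threshold]

# Start from the pure-discrimination case: equal talent (mean 0 for both), but
# women face a threshold of 1.5 while men face 1.0. "Affirmative action" lowers
# the bar women face; the overshoot test is who fills the bottom of the hired pool.
for correction in (0.0, 0.5, 1.0):
    men = hire(0.0, 1.0)
    women = hire(0.0, 1.5 - correction)
    pool = sorted([(s, "M") for s in men] + [(s, "W") for s in women])
    decile = len(pool) // 10
    share = sum(1 for _, sex in pool[:decile] if sex == "W") / decile
    print(f"correction {correction:.1f}: women are {share:.2f} of the bottom 10% of hires")
# correction 0.0 -> bottom decile is nearly all men (discrimination uncorrected)
# correction 0.5 -> roughly half and half (thresholds equalized: back to case 1)
# correction 1.0 -> bottom decile is mostly women (overshoot: men now face the higher bar)
```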

What if we can’t measure competence?

Of course, it’s possible that we don’t have good measures of competence in a given industry. (One must wonder how firms decide who to hire, but frankly I’m prepared to believe they’re just really bad at it.) Then we can’t observe discrimination statistically in this way. What do we do then?

Well, there is at least one avenue left for us to detect discrimination: We can do direct experiments comparing resumes with male names versus female names. These sorts of experiments typically don’t find very much, though—at least for women. For different races, they absolutely do find strong results. They also find evidence of discrimination against people with disabilities, older people, and people who are physically unattractive. There’s also evidence of intersectional effects, where women of particular ethnic groups get discriminated against even when women in general don’t.

But this will only pick up discrimination if it occurs during the hiring process. The advantage of having a competence measure is that it can detect discrimination that occurs anywhere—even outside employer control. Of course, if we don’t know where the discrimination is happening, that makes it very hard to fix; so the two approaches are complementary.

And there is room for new methods too; right now we don’t have a good way to detect discrimination in promotion decisions, for example. Many of us suspect that it occurs, but unless you have a good measure of competence, you can’t really distinguish promotion discrimination from innate differences in talent. We don’t have a good method for testing that in a direct experiment, either, because unlike hiring, we can’t just use fake resumes with masculine or feminine names on them.

Why does everyone work full-time?

Jun 30 JDN 2460492

Over 70% of US workers work “full-time”, that is, at least 40 hours a week. The average number of hours worked per week is 33.8, and the average number of overtime hours is only 3.6. So basically, about 2/3 of workers work almost exactly 40 hours per week.

We’re accustomed to this situation, so it may not seem strange to you. But stop and think for a moment: What are the odds that across every industry, exactly 40 hours per week is the most efficient arrangement?

Indeed, there is mounting evidence that in many industries, 40 hours is too much, and something like 35 or even 30 would actually be more efficient. Yet we continue to work 40-hour weeks.

This looks like a corner solution: Rather than choosing an optimal amount, we’re all up against some kind of constraint.


What’s the constraint? Well, the government requires (for most workers) that anything above 40 hours per week must be paid as overtime, that is, at a higher wage rate. So it looks like we would all be working more than 40 hours per week, but we hit the upper limit due to these regulations.
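To make the kink concrete: for most covered workers, US law requires time-and-a-half beyond 40 hours, so the average cost of an employee-hour is flat up to 40 and starts climbing past it. A quick sketch (the $20 wage is a made-up example):

```python
# Why hours bunch at 40: the marginal cost of an hour jumps at the overtime
# threshold (time-and-a-half for most covered workers under US law).
BASE_WAGE = 20.0           # hypothetical hourly wage
OVERTIME_MULTIPLIER = 1.5  # statutory premium on hours beyond 40 per week

def weekly_cost(hours):
    """Employer's weekly wage bill for one worker at the given hours."""
    regular = min(hours, 40)
    overtime = max(hours - 40, 0)
    return BASE_WAGE * (regular + OVERTIME_MULTIPLIER * overtime)

for h in (35, 39, 40, 41, 45):
    print(f"{h} hours: ${weekly_cost(h):.0f}/week, ${weekly_cost(h) / h:.2f}/hour on average")
# The average cost per hour is flat up to 40 and starts climbing after it,
# which is exactly the kind of kink that produces a corner solution at 40.
```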

Does this mean we would be better off without the regulations? Clearly not. As I just pointed out, the evidence is mounting that 40 hours is too much, not too little. But why, then, would we all be trying to work so many hours?

I believe this is yet another example of hyper-competition, where competition drives us to an inefficient outcome.

Employers value employees who work a lot of hours. Indeed, I contend that they do so far more than makes any rational sense; they seem to care more about how many hours you work than about the actual quality or quantity of your output. Maybe this is because hours worked is easier to measure, or because it seems like a fairer estimate of your effort; but for whatever reason, employers really seem to reward employees who work a lot of hours, regardless of almost everything else.

In the absence of a limit on hours worked, then, employers are going to heap rewards on whoever works the most hours, and so people will be pressured to work more and more hours. Then we would all work ourselves to death, and it’s not even clear that this would be good for GDP.

Indeed, this seems to be what happened, before the 40-hour work week became the standard. In the 1800s, the average American worked over 60 hours per week. It wasn’t until the 1940s that 40-hour weeks became the norm.

But speaking of norms, that also seems to be a big factor here. The truth is, overtime isn’t really that expensive, and employers could be smarter about rewarding good work rather than more hours. But once a norm establishes itself in a society, it can be very hard to change. And right now, the norm is that 40 hours is a “normal” “standard” “full” work week—any more is above and beyond, and any less is inferior.

This is a problem, because a lot of people can’t work 40-hour weeks. Our standard for what makes someone “disabled” isn’t that you can’t work at all; it’s that you can’t work as much as society expects. I wonder how many people are currently living on disability who could have been working part-time, but there just weren’t enough part-time jobs available. The employment rate among people with a disability is only 41%, compared to 77% of the general population.

And it’s not that we need to work this much. Our productivity is now staggeringly high: We produce more than five times as much wealth per hour of work as we did as recently as the 1940s. So in theory, we should be able to live just as well while working one-fifth as much… but that’s clearly not what happened.

Keynes accurately predicted our high level of productivity; but he wrongly predicted that we would work less, when instead we just kept right on working almost as hard as before.

Indeed, it doesn’t even seem like we live five times as well while working just as much. Many things are better now—healthcare, entertainment, and of course electronics—but somehow, we really don’t feel like we are living better lives than our ancestors.

The Economic Policy Institute offers an explanation for this phenomenon: Our pay hasn’t kept up with our productivity.


Up until about 1980, productivity and pay rose in lockstep. But then they started to diverge, and they never again converged. Productivity continued to soar, while real wages only barely increased. The result is that since then, productivity has grown by 64%, and hourly pay has only grown 15%.

This is definitely part of the problem, but I think there’s more to it as well. Housing and healthcare have become so utterly unaffordable in this country that it really doesn’t matter that our cars are nice and our phones are dirt cheap. We are theoretically wealthier now, but most of that extra wealth goes into simply staying healthy and having a home. Our consumption has been necessitized.

If we can solve these problems, maybe people won’t feel a need to work so many hours. Or, maybe competition will continue to pressure them to work those hours… but at least we’ll actually feel richer when we do it.

No, the system is not working as designed

You say you’ve got a real solution…

Well, you know,

We’d all love to see the plan.

“Revolution”, the Beatles


Jun 16 JDN 2460478


There are several different versions of the meme, but they all follow the same basic format: Rejecting the statement “the system is broken and must be fixed”, they endorse the statement “the system is working exactly as intended and must be destroyed”.


This view is not just utterly wrong; it’s also incredibly dangerous.

First of all, it should be apparent to anyone who has ever worked in any large, complex organization—a corporation, a university, even a large nonprofit org—that no human system works exactly as intended. Some obviously function better than others, and most function reasonably well most of the time (probably because those that don’t function well fail and disappear, so there is a sort of natural selection process at work); but even with apparently simple goals and extensive resources, no complex organization will ever be able to coordinate its actions perfectly toward those goals.

But when we’re talking about “the system”, well, first of all:

What exactly is “the system”?

Is it government? Society as a whole? The whole culture, or some subculture? Is it local, national, or international? Are we talking about democracy, or maybe capitalism? The world isn’t just one system; it’s a complex network of interacting systems. So to be quite honest with you, I don’t even know what people are complaining about when they complain about “the system”. All I know is that there is some large institution that they don’t like.

Let’s suppose we can pin that down—say we’re talking about capitalism, for instance, or the US government. Then, there is still the obvious fact that any real-world implementation of a system is going to have failures. Particularly when millions of people are involved, no system is ever going to coordinate exactly toward achieving its goals as efficiently as possible. At best it’s going to coordinate reasonably well and achieve its goals most of the time.

But okay, let’s try to be as charitable as possible here.

What are people trying to say when they say this?

I think that fundamentally this is meant as an expression of Conflict Theory over Mistake Theory: The problems with the world aren’t due to well-intentioned people making honest mistakes, they are due to people being evil. The response isn’t to try to correct their mistakes; it’s to fight them (kill them?), because they are evil.

Well, it is certainly true that evil people exist. There are mass murderers and tyrants, rapists and serial killers. And though they may be less extreme, it is genuinely true that billionaires are disproportionately likely to be psychopaths and that those who aren’t typically share a lot of psychopathic traits.

But does this really look like the sort of system that was designed to optimize payoffs for a handful of psychopaths? Really? You can’t imagine any way that the world could be more optimized for that goal?

How about, say… feudalism?

Not that long ago, historically—less than a millennium—the world was literally ruled by those same sorts of uber-rich psychopaths, and they wielded absolute power over their subjects. In medieval times, your king could confiscate your wealth whenever he chose, or even have you executed on a whim. That system genuinely looks like it’s optimized for the power of a handful of evil people.

Democracy, on the other hand, actually looks like it’s trying to be better. Maybe sometimes it isn’t better—or at least isn’t enough better. But why would they even bother letting us vote, if they were building a system to optimize their own power over us? Why would we have these free speech protections—that allow you to post those memes without going to prison?

In fact, there are places today where near-absolute power really is concentrated in a handful of psychopaths, where authoritarian dictators still act very much like kings of yore. In North Korea or Russia or China, there really is a system in place that’s very well optimized to maximize the power of a few individuals over everyone else.

But in the United States, we don’t have that. Not yet, anyway. Our democracy is flawed and imperilled, but so far, it stands. It needs our constant vigilance to defend it, but so far, it stands.

This is precisely why these ideas are so dangerous.

If you tell people that the system is already as bad as it’s ever going to get, that the only hope now is to burn it all down and build something new, then those people aren’t going to stand up and defend what we still have. They aren’t going to fight to keep authoritarians out of office, because they don’t believe that their votes or donations or protests actually do anything to control who ends up in office.

In other words, they are acting exactly as the authoritarians want them to.

Short of your actual support, the best gift you can give your enemy is apathy.

If all the good people give up on democracy, then it will fail, and we will see something worse in its place. Your belief that the world can’t get any worse can make the world much, much worse.

I’m not saying our system of government couldn’t be radically improved. It absolutely could, even by relatively simple reforms, such as range voting and a universal basic income. But there are people who want to tear it all down, and if they succeed, what they put in its place is almost certainly going to be worse, not better.

That’s what happened in Communist countries, after all: They started with bad systems, they tore them down in the name of making something better—and then they didn’t make something better. They made something worse.

And I don’t think it’s an accident that Marxists are so often Conflict Theorists; Marx himself certainly was. Marx seemed convinced that all we needed to do was tear down the old system, and a new, better system would spontaneously emerge. But that isn’t how any of this works.

Good governance is actually really hard.

Life isn’t simple. People aren’t easy to coordinate. Conflicts of interest aren’t easy to resolve. Coordination failures are everywhere. If you tear down the best systems we have for solving these problems, with no vision at all of what you would replace them with, you’re not going to get something better.

Different people want different things. We have to resolve those disagreements somehow. There are lots of ways we could go about doing that. But so far, some variation on voting seems to be the best method we have for resolving disagreements fairly.

It’s true; some people out there are really just bad people. Some of what even good people want is ultimately not reasonable, or based on false presumptions. (Like people who want to “cut” foreign aid to 5% of the budget—when it is in fact about 1%.) Maybe there is some alternative system out there that could solve these problems better, ensure that only the reasonable voices with correct facts actually get heard.

If so, well, you know:

We’d all love to see the plan.

It’s not enough to recognize that our current system is flawed and posit that something better could exist. You need to actually have a clear vision of what that better system looks like. For if you go tearing down the current system without any idea of what to replace it with, you’re going to end up with something much worse.

Indeed, if you had a detailed plan of how to improve things, it’s quite possible you could convince enough people to get that plan implemented, without tearing down the whole system first.

We’ve done it before, after all:

We ended slavery, then racial segregation. We gave women the right to vote, then integrated them into the workforce. We removed the ban on homosexuality, and then legalized same-sex marriage.


We have a very clear track record of reform working. Things are getting better, on a lot of different fronts. (Maybe not all fronts, I admit.) When the moral case becomes overwhelming, we really can convince people to change their minds and then vote to change our policies.

We do not have such a track record when it comes to revolutions.

Yes, some revolutions have worked out well, such as the one that founded the United States. (But I really cannot emphasize this enough: they had a plan!) But plenty more have worked out very badly. Even France, which turned out okay in the end, had to go through a Napoleon phase first.

Overall, it seems like our odds are better when we treat the system as broken and try to fix it, than when we treat it as evil and try to tear it down.

The world could be a lot better than it is. But never forget: It could also be a lot worse.

Wrongful beneficence

Jun 9 JDN 2460471

One of the best papers I’ve ever read—one that in fact was formative in making me want to be an economist—is Wrongful Beneficence by Chris Meyers.

This paper opened my eyes to a whole new class of unethical behavior: Acts that unambiguously make everyone better off, but nevertheless are morally wrong. Hence, wrongful beneficence.

A lot of economists don’t even seem to believe in such things. They seem convinced that as long as no one is made worse off by a transaction, that transaction must be ethically defensible.

Chris Meyers convinced me that they are wrong.

The key insight here is that it’s still possible to exploit someone even if you make them better off. This happens when they are in a desperate situation and you take advantage of that to get an unfair payoff.


Here is one of the cases Meyers offers to demonstrate this:

Suppose Carole is driving across the desert on a desolate road when her car breaks down. After two days and two nights without seeing a single car pass by, she runs out of water and feels rather certain that she will perish if not rescued soon. Now suppose that Jason happens to drive down this road and finds Carole. He sees that her situation is rather desperate and that she needs (or strongly desires) to get to the nearest town as soon as possible. So Jason offers her a ride but only on the condition that […] [she gives him] her entire net worth, the title to her house and car, all of her money in the bank, and half of her earnings for the next ten years.

Carole obviously is better off than she would be if Jason hadn’t shown up—she might even have died. She freely consented to this transaction—again, because if she didn’t, she might die. Yet it seems absurd to say that Jason has done nothing wrong by making such an exorbitant demand. If he had asked her to pay for gas, or even to compensate him for his time at a reasonable rate, we’d have no objection. But to ask for her life savings, all her assets, and half her earnings for ten years? Obviously unfair—and obviously unethical. Jason is making Carole (a little) better off while making himself (a lot) better off, so everyone is benefited; but what he’s doing is obviously wrong.

Once you recognize that such behavior can exist, you start to see it all over the place, particularly in markets, where corporations are quite content to gouge their customers with high prices and exploit their workers with low wages—but still, technically, we’re better off than we would be with no products and no jobs at all.

Indeed, the central message of Wrongful Beneficence is actually about sweatshop labor: It’s not that the workers are worse off than they would have been (in general, they aren’t); it’s that they are so desperate that corporations can get away with exploiting them with obviously unfair wages and working conditions.

Maybe it would be easier just to move manufacturing back to First World countries?

Right-wingers are fond of making outlandish claims that making products at First World wages would be utterly infeasible; here’s one claiming that an iPhone would need to cost $30,000 if it were made in the US. In fact, the truth is that it would only need to cost about $40 more—because hardly any of its cost is actually going to labor. Most of its price is pure monopoly profit for Apple; most of the rest is components and raw materials. (Of course, if those also had to come from the US, the price would go up more; but even so, we’re talking something like double its original price, not thirty times. Workers in the US are indeed paid a lot more than workers in China; they are also more productive.)

It’s true that actually moving manufacturing from other countries back to the US would be a substantial undertaking, requiring retooling factories, retraining engineers, and so on; but it’s not like we’ve never done that sort of thing before. I’m sure it could not be done overnight; but of course it could be done. We do this sort of thing all the time.

Ironically, this sort of right-wing nonsense actually seems to feed the far left as well, supporting their conviction that all this prosperity around us is nothing more than an illusion, that all our wealth only exists because we steal it from others. But this could scarcely be further from the truth; our wealth comes from technology, not theft. If we offered a fairer bargain to poorer countries, we’d be a bit less rich, but they would be much less poor—the overall wealth in the world would in fact probably increase.

A better argument for not moving manufacturing back to the First World is that many Third World economies would collapse if they stopped manufacturing things for other countries, and that would be disastrous for millions of people.

And free trade really does increase efficiency and prosperity for all.

So, yes; let’s keep on manufacturing goods wherever it is cheapest to do so. But when we decide what’s cheapest, let’s evaluate that based on genuinely fair wages and working conditions, not the absolute cheapest that corporations think they can get away with.

Sometimes they may even decide that it’s not really cheaper to manufacture in poorer countries, because they need advanced technology and highly-skilled workers that are easier to come by in First World countries. In that case, bringing production back here is the right thing to do.

Of course, this raises the question:

What would be fair wages and working conditions?

That’s not so easy to answer. Since workers in Third World countries are less educated than workers in First World countries, and have access to less capital and worse technology, we should in fact expect them to be less productive and therefore get paid less. That may be unfair in some cosmic sense, but it’s not anyone’s fault, and it’s not any particular corporation’s responsibility to fix it.

But when less than 1% of a product’s sales price goes to the workers who actually made it, something is wrong. When the profit margin is often wildly larger than the total amount spent on labor, something is wrong.

It may be that we will never have precise thresholds we can set to decide what definitely is or is not exploitative; but that doesn’t mean we can’t ever recognize it when we see it. There are various institutional mechanisms we could use to enforce better wages and working conditions without ever making such a sharp threshold.

One of the simplest, in fact, is Fair Trade.

Fair Trade is by no means a flawless system; in fact there’s a lot of research debating how effective it is at achieving its goals. But it does seem to be accomplishing something. And it’s a system that we already have in place, operating successfully in many countries; it simply needs to be scaled up (and hopefully improved along the way).

One of the clearest pieces of evidence that it’s helping, in fact, is that farmers are willing to participate in it. That shows that it is beneficent.

Of course, that doesn’t mean that it’s genuinely fair! This could just be another kind of wrongful beneficence. Perhaps Fair Trade is really just less exploitative than all the available alternatives.

If so, then we need something even better still, some new system that will reliably pass the increased cost paid by customers all the way down to workers as increased wages.

Fair Trade shows us something else, too: A lot of customers clearly are willing to pay a bit more in order to see workers treated better. Even if they weren’t, maybe they should be forced to. But the fact is, they are! Even those who are most adamantly opposed to Fair Trade can’t deny that people really are willing to pay more to help other people. (Yet another example of obvious altruism that neoclassical economists somehow manage to ignore.) They simply deny that it’s actually helping, which is an empirical matter.

But if this isn’t helping enough, fine; let’s find something else that does.

How Effective Altruism hurt me

May 12 JDN 2460443

I don’t want this to be taken the wrong way. I still strongly believe in the core principles of Effective Altruism. Indeed, it’s shockingly hard to deny them, because basically they come out to this:

Doing more good is better than doing less good.

Then again, most people want to do good. Basically everyone agrees that more good is better than less good. So what’s the big deal about Effective Altruism?

Well, in practice, most people put shockingly little effort into trying to ensure that they are doing the most good they can. A lot of people just try to be nice people, without ever concerning themselves with the bigger picture. Many of these people don’t give to charity at all.

Then, even people who do give to charity typically give more or less at random—or worse, in proportion to how much mail those charities send them begging for donations. (Surely you can see how that is a perverse incentive?) They donate to religious organizations, which sometimes do good things, but fundamentally are founded upon ignorance, patriarchy, and lies.

Effective Altruism is a movement intended to fix this, to get people to see the bigger picture and focus their efforts on where they will do the most good. Vet charities not just for their honesty, but also their efficiency and cost-effectiveness:

Just how many mQALY can you buy with that $1?

That part I still believe in. There is a lot of value in assessing which charities are the most effective, and trying to get more people to donate to those high-impact charities.

But there is another side to Effective Altruism, which I now realize has severely damaged my mental health.

That is the sense of obligation to give as much as you possibly can.

Peter Singer is the most extreme example of this. He seems to have mellowed—a little—in more recent years, but in some of his most famous books he uses the following thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Basically everyone agrees with this particular decision: Even if you are wearing a very expensive suit that will be ruined, even if you’ll miss something really important like a job interview or even a wedding—most people agree that if you ever come across a drowning child, you should save them.

(Oddly enough, when contemplating this scenario, nobody ever seems to consider the advice that most lifeguards give, which is to throw a life preserver and then go find someone qualified to save the child—because saving someone who is drowning is a lot harder and a lot riskier than most people realize. (“Reach or throw, don’t go.”) But that’s a bit beside the point.)

But Singer argues that we are basically in this position all the time. For somewhere between $500 and $3000, you—yes, you—could donate to a high-impact charity, and thereby save a child’s life.

Does it matter that many other people are better positioned to donate than you are? Does it matter that the child is thousands of miles away and you’ll never see them? Does it matter that there are actually millions of children, and you could never save them all by yourself? Does it matter that you’ll only save a child in expectation, rather than saving some specific child with certainty?

Singer says that none of this matters. For a long time, I believed him.

Now, I don’t.

For, if you actually walked by a drowning child that you could save, only at the cost of missing a wedding and ruining your tuxedo, you clearly should do that. (If it would risk your life, maybe not—and as I alluded to earlier, that’s more likely than you might imagine.) If you wouldn’t, there’s something wrong with you. You’re a bad person.

But most people don’t donate everything they could to high-impact charities. Even Peter Singer himself doesn’t. So if donating is the same as saving the drowning child, it follows that we are all bad people.

(Note: In general, if an ethical theory results in the conclusion that the whole of humanity is evil, there is probably something wrong with that ethical theory.)

Singer has tried to get out of this by saying we shouldn’t “sacrifice things of comparable importance”, and then somehow cash out what “comparable importance” means in such a way that it doesn’t require you to live on the street and eat scraps from trash cans. (Even though the people you’d be donating to largely do live that way.)

I’m not sure that really works, but okay, let’s say it does. Even so, it’s pretty clear that anything you spend money on purely for enjoyment would have to go. You would never eat out at restaurants, unless you could show that the time saved allowed you to get more work done and therefore donate more. You would never go to movies or buy video games, unless you could show that it was absolutely necessary for your own mental functioning. Your life would be work, work, work, then donate, donate, donate, and then do the absolute bare minimum to recover from working and donating so you can work and donate some more.

You would enslave yourself.

And all the while, you’d believe that you were never doing enough, you were never good enough, you are always a terrible person because you try to cling to any personal joy in your own life rather than giving, giving, giving all you have.

I now realize that Effective Altruism, as a movement, had been basically telling me to do that. And I’d been listening.

I now realize that Effective Altruism has given me this voice in my head, which I hear whenever I want to apply for a job or submit work for publication:

If you try, you will probably fail. And if you fail, a child will die.

The “if you try, you will probably fail” is just an objective fact. It’s inescapable. Any given job application or writing submission will probably fail.

Yes, maybe there’s some sort of bundling we could do to reframe that, as I discussed in an earlier post. But basically, this is correct, and I need to accept it.

Now, what about the second part? “If you fail, a child will die.” To most of you, that probably sounds crazy. And it is crazy. It’s way more pressure than any ordinary person should have in their daily life. This kind of pressure should be reserved for neurosurgeons and bomb squads.

But this is essentially what Effective Altruism taught me to believe. It taught me that every few thousand dollars I don’t donate is a child I am allowing to die. And since I can’t donate what I don’t have, it follows that every few thousand dollars I fail to get is another dead child.

And since Effective Altruism is so laser-focused on results above all else, it taught me that it really doesn’t matter whether I apply for the job and don’t get it, or never apply at all; the outcome is the same, and that outcome is that children suffer and die because I had no money to save them.

I think part of the problem here is that Effective Altruism is utilitarian through and through, and utilitarianism has very little place for good enough. There is better and there is worse; but there is no threshold at which you can say that your moral obligations are discharged and you are free to live your life as you wish. There is always more good that you could do, and therefore always more that you should do.

Do we really want to live in a world where to be a good person is to owe your whole life to others?

I do not believe in absolute selfishness. I believe that we owe something to other people. But I no longer believe that we owe everything. Sacrificing my own well-being at the altar of altruism has been incredibly destructive to my mental health, and I don’t think I’m the only one.

By all means, give to high-impact charities. But give a moderate amount—at most, tithe—and then go live your life. You don’t owe the world more than that.

The Butlerian Jihad is looking better all the time

Mar 24 JDN 2460395

A review of The Age of Em by Robin Hanson

In the Dune series, the Butlerian Jihad was a holy war against artificial intelligence that resulted in a millennia-long taboo against all forms of intelligent machines. It was effectively a way to tell a story about the distant future without basically everything being about robots or cyborgs.

After reading Robin Hanson’s book, I’m starting to think that maybe we should actually do it.

Thus it is written: “Thou shalt not make a machine in the likeness of a human mind.”

Hanson says he’s trying to reserve judgment and present objective predictions without evaluation, but it becomes very clear throughout that this is the future he wants, as well as—or perhaps even instead of—the world he expects.

In many ways, it feels like he has done his very best to imagine a world of true neoclassical rational agents in perfect competition, a sort of sandbox for the toys he’s always wanted to play with. Throughout he very much takes the approach of a neoclassical economist, making heroic assumptions and then following them to their logical conclusions, without ever seriously asking whether those assumptions actually make any sense.

To his credit, Hanson does not buy into the hype that AGI will be successful any day now. He predicts that we will achieve the ability to fully emulate human brains and thus create a sort of black-box AGI that behaves very much like a human within about 100 years. Given how the Blue Brain Project has progressed (much slower than its own hype machine told us it would—and let it be noted that I predicted this from the very beginning), I think this is a fairly plausible time estimate. He refers to a mind emulated in this way as an “em”; I have mixed feelings about the term, but I suppose we did need some word for that, and it certainly has conciseness on its side.

Hanson believes that a true understanding of artificial intelligence will only come later, and the sort of AGI that can be taken apart and reprogrammed for specific goals won’t exist for at least a century after that. Both of these sober, reasonable predictions are deeply refreshing in a field that’s been full of people saying “any day now” for the last fifty years.

But Hanson’s reasonableness just about ends there.

In The Age of Em, government is exactly as strong as Hanson needs it to be. Somehow it simultaneously ensures a low crime rate among a population that doubles every few months while also having no means of preventing that population growth. Somehow it ensures that there is no labor collusion and corporations never break the law, but without imposing any regulations that might reduce efficiency in any way.

All of this begins to make more sense when you realize that Hanson’s true goal here is to imagine a world where neoclassical economics is actually true.

He realized it didn’t work on humans, so instead of giving up the theory, he gave up the humans.

Hanson predicts that ems will casually make short-term temporary copies of themselves called “spurs”, designed to perform a particular task and then get erased. I guess maybe he would, but I for one would not so cavalierly create another person and then make their existence dedicated to doing a single job before they die. The fact that I created this person, and they are very much like me, seem like reasons to care more about their well-being, not less! You’re asking me to enslave and murder my own child. (Honestly, the fact that Robin Hanson thinks ems will do this all the time says more about Robin Hanson than anything else.) Any remotely sane society of ems would ban the deletion of another em under any but the most extreme circumstances, and indeed treat it as tantamount to murder.

Hanson predicts that we will only copy the minds of a few hundred people. This is surely true at some point—the technology will take time to develop, and we’ll have to start somewhere. But I don’t see why we’d stop there, when we could continue to copy millions or billions of people; and his choices of who would be emulated, while not wildly implausible, are utterly terrifying.

He predicts that we’d emulate genius scientists and engineers; okay, fair enough, that seems right. I doubt that the benefits of doing so will be as high as many people imagine, because scientific progress actually depends a lot more on the combined efforts of millions of scientists than on rare sparks of brilliance by lone geniuses; but those people are definitely very smart, and having more of them around could be a good thing. I can also see people wanting to do this, and thus investing in making it happen.

He also predicts that we’d emulate billionaires. Now, as a prediction, I have to admit that this is actually fairly plausible; billionaires are precisely the sort of people who are rich enough to pay to be emulated and narcissistic enough to want to. But where Hanson really goes off the deep end here is that he sees this as a good thing. He seems to honestly believe that billionaires are so rich because they are so brilliant and productive. He thinks that a million copies of Elon Musk would produce a million hectobillionaires—when in reality it would produce a million squabbling narcissists, who at best would have to split the same $200 billion in wealth between them, and might very well end up with less because they squander it.

Hanson has a long section on trying to predict the personalities of ems. Frankly this could just have been dropped entirely; it adds almost nothing to the book, and the book is much too long. But the really striking thing to me about that section is what isn’t there. He goes through a long list of studies that found weak correlations between various personality traits like extroversion or openness and wealth—mostly comparing something like the 20th percentile to the 80th percentile—and then draws sweeping conclusions about what ems will be like, under the assumption that ems are all drawn from people in the 99.99999th percentile. (Yes, upper-middle-class people are, on average, more intelligent and more conscientious than lower-middle-class people. But do we even have any particular reason to think that the personalities of people who make $150,000 are relevant to understanding the behavior of people who make $15 billion?) But he completely glosses over the very strong correlations that specifically apply to people in that very top super-rich class: They’re almost all narcissists and/or psychopaths.

Hanson predicts a world where each em is copied many, many times—millions, billions, even trillions of times, and also in which the very richest ems are capable of buying parallel processing time that lets them accelerate their own thought processes to a million times faster than a normal human. (Is that even possible? Does consciousness work like that? Who knows!?) The world that Hanson is predicting is thus one where all the normal people get outnumbered and overpowered by psychopaths.

Basically this is the most abjectly dystopian cyberpunk hellscape imaginable. And he talks about it the whole time as if it were good.

It’s like he played the game Action Potential and thought, “This sounds great! I’d love to live there!” I mean, why wouldn’t you want to owe a life-debt on your own body and have to work 120-hour weeks for a trillion-dollar corporation just to make the payments on it?

Basically, Hanson doesn’t understand how wealth is actually acquired. He is educated as an economist, yet his understanding of capitalism basically amounts to believing in magic. He thinks that competitive markets just somehow perfectly automatically allocate wealth to whoever is most productive, and thus concludes that whoever is wealthy now must just be that productive.

I can see no other way to explain his wildly implausible predictions that the em economy will double every month or two. A huge swath of the book depends upon this assumption, but he waits until halfway through the book to even try to defend it, and then does an astonishingly bad job of doing so. (Honestly, even if you buy his own arguments—which I don’t—they seem to predict that population would grow with Moore’s Law—doubling every couple of years, not every couple of months.)

Whereas Keynes predicted based on sound economic principles that economic growth would more or less proceed apace and got his answer spot-on, Hanson predicts that for mysterious, unexplained reasons economic growth will suddenly increase by two orders of magnitude—and I’m pretty sure he’s going to be wildly wrong.
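For a sense of the scale involved, here is a rough comparison using round numbers of my own (a 23-year doubling time corresponds to roughly 3% annual growth), not figures from the book:

```python
import math

# Rough comparison of the growth rates at stake (round illustrative numbers).
def annual_factor(doubling_time_years):
    """How much the economy multiplies in one year at a given doubling time."""
    return 2 ** (1 / doubling_time_years)

paces = {
    "recent growth (doubles ~every 23 years)":  annual_factor(23),
    "Moore's-law pace (doubles every 2 years)": annual_factor(2),
    "Age of Em pace (doubles every month)":     annual_factor(1 / 12),
}

for name, factor in paces.items():
    print(f"{name}: x{factor:,.2f} per year (~{100 * math.log(factor):.0f}% continuous rate)")
# Going from ~3% a year to ~830% a year is the 'two orders of magnitude' jump.
```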

Hanson also predicts that ems will be on average poorer than we are, based on some sort of perfect-competition argument that doesn’t actually seem to mesh at all with his predictions of spectacularly rapid economic and technological growth. I think the best way to make sense of this is to assume that it means the trend toward insecure affluence will continue: Ems will have an objectively high standard of living in terms of what they own, what games they play, where they travel, and what they eat and drink (in simulation), but they will constantly be struggling to keep up with the rent on their homes—or even their own bodies. This is a world where (the very finest simulation of) Dom Perignon is $7 a bottle and wages are $980 an hour—but monthly rent is $284,000.

Early in the book Hanson argues that this life of poverty and scarcity will lead to more conservative values, on the grounds that people who are poorer now seem to be more conservative, and this has something to do with farmers versus foragers. Hanson’s explanation of all this is baffling; I will quote it at length, just so it’s clear I’m not misrepresenting it:

The other main (and independent) axis of value variation ranges between poor and rich societies. Poor societies place more value on conformity, security, and traditional values such as marriage, heterosexuality, religion, patriotism, hard work, and trust in authority. In contrast, rich societies place more value on individualism, self-direction, tolerance, pleasure, nature, leisure, and trust. When the values of individuals within a society vary on the same axis, we call this a left/liberal (rich) versus right/conservative (poor) axis.

Foragers tend to have values more like those of rich/liberal people today, while subsistence farmers tend to have values more like those of poor/conservative people today. As industry has made us richer, we have on average moved from conservative/farmer values to liberal/forager values. This value movement can make sense if cultural evolution used the social pressures farmers faced, such as conformity and religion, to induce humans, who evolved to find forager behaviors natural, to instead act like farmers. As we become rich, we don’t as strongly fear the threats behind these social pressures. This connection may result in part from disease; rich people are healthier, and healthier societies fear less.

The alternate theory that we have instead learned that rich forager values are more true predicts that values should have followed a random walk over time, and be mostly common across space. It also predicts the variance of value changes tracking the rate at which relevant information appears. But in fact industrial-era value changes have tracked the wealth of each society in much more steady and consistent fashion. And on this theory, why did foragers ever acquire farmer values?

[…]

In the scenario described in this book, many strange-to-forager behaviors are required, and median per-person (i.e. per-em) incomes return to near-subsistence levels. This suggests that the em era may reverse the recent forager-like trend toward more liberality; ems may have more farmer-like values.

The Age of Em, p. 26-27

There’s a lot to unpack here, but maybe it’s better to burn the whole suitcase.

First of all, it’s not entirely clear that this is really a single axis of variation, that foragers and farmers differ from each other in the same way as liberals and conservatives. There’s some truth to that at least—both foragers and liberals tend to be more generous, both farmers and conservatives tend to enforce stricter gender norms. But there are also clear ways that liberal values radically deviate from forager values: Forager societies are extremely xenophobic, and typically very hostile to innovation, inequality, or any attempts at self-aggrandizement (a phenomenon called “fierce egalitarianism”). San Francisco epitomizes rich, liberal values, but it would be utterly alien and probably regarded as evil by anyone from the Yanomamo.

Second, there is absolutely no reason to predict any kind of random walk. That’s just nonsense. Would you predict that scientific knowledge is a random walk, with each new era’s knowledge just a random deviation from the last’s? Maybe next century we’ll return to geocentrism, or phrenology will be back in vogue? On the theory that liberal values (or at least some liberal values) are objectively correct, we would expect them to advance as knowledge does: improving over time, and improving faster in places that have better institutions for research, education, and free expression. And indeed, this is precisely the pattern we have observed. (Those places are also richer, but that isn’t terribly surprising either!)

Third, while poorer regions are indeed more conservative, poorer people within a region actually tend to be more liberal. Nigeria is poorer and more conservative than Norway, and Mississippi is poorer and more conservative than Massachusetts. But higher-income households in the United States are more likely to vote Republican. I think this is particularly true of people living under insecure affluence: We see the abundance of wealth around us, and don’t understand why we can’t learn to share it better. We’re tired of fighting over scraps while the billionaires claim more and more. Millennials and Zoomers absolutely epitomize insecure affluence, and we also absolutely epitomize liberalism. So, if indeed ems live a life of insecure affluence, we should expect them to be like Zoomers: “Trans liberation now!” and “Eat the rich!” (Or should I say, “Delete the rich!”)

And really, doesn’t that make more sense? Isn’t that the trend our society has been on, for at least the last century? We’ve been moving toward more and more acceptance of women and minorities, more and more deviation from norms, more and more concern for individual rights and autonomy, more and more resistance to authority and inequality.

The funny thing is, that world sounds a lot better than the one Hanson is predicting.

A world of left-wing ems would probably run things a lot better than Hanson imagines: Instead of copying the same hundred psychopaths over and over until we fill the planet, have no room for anything else, and all struggle to make enough money just to stay alive, we could moderate our population to a more sustainable level, preserve diversity and individuality, and work toward living in greater harmony with each other and the natural world. We could take this economic and technological abundance and share it and enjoy it, instead of killing ourselves and each other to make more of it for no apparent reason.

The one good argument Hanson makes here is expressed in a single sentence: “And on this theory, why did foragers ever acquire farmer values?” That actually is a good question; why did we give up on leisure and egalitarianism when we transitioned from foraging to agriculture?

I think scarcity probably is relevant here: As food became scarcer, maybe because of climate change, people were forced into an agricultural lifestyle just to have enough to eat. Early agricultural societies were also typically authoritarian and violent. Under those conditions, people couldn’t be so generous and open-minded; they were surrounded by threats and on the verge of starvation.

I guess if Hanson is right that the em world is also one of poverty and insecurity, we might go back to those sort of values, borne of desperation. But I don’t see any reason to think we’d give up all of our liberal values. I would predict that ems will still be feminist, for instance; in fact, Hanson himself admits that since VR avatars would let us change gender presentation at will, gender would almost certainly become more fluid in a world of ems. Far from valuing heterosexuality more highly (as conservatives do, a “farmer value” according to Hanson), I suspect that ems will have no further use for that construct, because reproduction will be done by manufacturing, not sex, and it’ll be so easy to swap your body into a different one that hardly anyone will even keep the same gender their whole life. They’ll think it’s quaint that we used to identify so strongly with our own animal sexual dimorphism.

But maybe it is true that the scarcity induced by a hyper-competitive em world would make people more selfish, less generous, less trusting, more obsessed with work. Then let’s not do that! We don’t have to build that world! This isn’t a foregone conclusion!

There are many other paths yet available to us.

Indeed, perhaps the simplest would be to just ban artificial intelligence, at least until we can get a better handle on what we’re doing—and perhaps until we can institute the kind of radical economic changes necessary to wrest control of the world away from the handful of psychopaths currently trying their best to run it into the ground.

I admit, it would kind of suck to not get any of the benefits of AI, like self-driving cars, safer airplanes, faster medical research, more efficient industry, and better video games. It would especially suck if we did go full-on Butlerian Jihad and ban anything more complicated than a pocket calculator. (Our lifestyle might have to go back to what it was in—gasp! The 1950s!)

But I don’t think it would suck nearly as much as the world Robin Hanson thinks is in store for us if we continue on our current path.

So I certainly hope he’s wrong about all this.

Fortunately, I think he probably is.

Reflections at the crossroads

Jan 21 JDN 2460332

When this post goes live, I will have just passed my 36th birthday. (That means I’ve lived for about 1.1 billion seconds, so in order to be as rich as Elon Musk, I’d need to have made, on average, since birth, $200 per second—$720,000 per hour.)
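(For anyone who wants to check that arithmetic, here is the back-of-the-envelope version. The $220 billion net worth is an assumed round figure for illustration; the real number swings around by tens of billions with the stock market.)

```python
# Back-of-the-envelope check of the birthday arithmetic.
# The $220 billion net worth is an assumption for illustration only.

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31.6 million seconds

age_years = 36
seconds_alive = age_years * SECONDS_PER_YEAR    # ~1.14 billion seconds

assumed_net_worth = 220e9                       # dollars (assumption)
per_second = assumed_net_worth / seconds_alive  # roughly $190-200 per second
per_hour = per_second * 3600                    # roughly $700,000 per hour

print(f"{seconds_alive:.2e} seconds alive")
print(f"${per_second:,.0f} per second, ${per_hour:,.0f} per hour")
```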

I certainly feel a lot better turning 36 than I did 35. I don’t have any particular additional accomplishments to point to, but my life has already changed quite a bit, in just that one year: Most importantly, I quit my job at the University of Edinburgh, and I am currently in the process of moving out of the UK and back home to Michigan. (We moved the cat over Christmas, and the movers have already come and taken most of our things away; it’s really just us and our luggage now.)

But I still don’t know how to field the question that people have been asking me since I announced my decision to do this months ago:

“What’s next?”

I’m at a crossroads now, trying to determine which path to take. Actually maybe it’s more like a roundabout; it has a whole bunch of different paths, surely not just two or three. The road straight ahead is labeled “stay in academia”; the others at the roundabout are things like “freelance writing”, “software programming”, “consulting”, and “tabletop game publishing”. There’s one well-paved and superficially enticing road that I’m fairly sure I don’t want to take, labeled “corporate finance”.

Right now, I’m just kind of driving around in circles.

Most people don’t seem to quit their jobs without a clear plan for where they will go next. Often they wait until they have another offer in hand that they intend to take. But when I realized just how miserable that job was making me, I made the—perhaps bold, perhaps courageous, perhaps foolish—decision to get out as soon as I possibly could.

It’s still hard for me to fully understand why working at Edinburgh made me so miserable. Many features of an academic career are very appealing to me. I love teaching, I like doing research; I like the relatively flexible hours (and kinda need them, because of my migraines).

I often construct formal decision models to help me make big choices—generally it’s a linear model, where I simply rate each option by its relative quality in a particular dimension, then try different weightings of all the different dimensions. I’ve used this successfully to pick out cars, laptops, even universities. I’m not entrusting my decisions to an algorithm; I often find myself tweaking the parameters to try to get a particular result—but that in itself tells me what I really want, deep down. (Don’t do that in research—people do, and it’s bad—but if the goal is to make yourself happy, your gut feelings are important too.)
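For concreteness, here is a minimal sketch of the kind of linear decision model I mean. The options, dimensions, weights, and ratings below are invented for illustration; they are not my actual numbers.

```python
# A minimal sketch of a weighted linear decision model.
# All options, dimensions, weights, and ratings are made up for illustration.
# Higher is better on every dimension ("low_stress" means less stressful).

options = {
    "university teaching": {"enjoyment": 8, "income": 5, "flexibility": 7, "low_stress": 4},
    "freelance writing":   {"enjoyment": 9, "income": 3, "flexibility": 9, "low_stress": 6},
    "corporate finance":   {"enjoyment": 3, "income": 9, "flexibility": 4, "low_stress": 3},
}

# Weights say how much each dimension matters; re-running with different
# weights (and noticing which ranking you're rooting for) is the real value.
weights = {"enjoyment": 0.4, "income": 0.2, "flexibility": 0.2, "low_stress": 0.2}

def score(ratings):
    return sum(weights[dim] * ratings[dim] for dim in weights)

for name, ratings in sorted(options.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(ratings):.1f}")
```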

My decision models consistently rank university teaching quite high. It generally only gets beaten by freelance writing—which means that maybe I should give freelance writing another try after all.

And yet, my actual experience at Edinburgh was miserable.

What went wrong?

Well, first of all, I should acknowledge that when I separate out the job “university professor” into teaching and research as separate jobs in my decision model, and include all that goes into both jobs—not just the actual teaching, but the grading and administrative tasks; not just doing the research, but also trying to fund and publish it—they both drop lower on the list, and research drops down a lot.

Also, I would rate them both even lower now, having more direct experience of just how awful the exam-grading, grant-writing and journal-submitting can be.

Designing and then grading an exam was tremendously stressful: I knew that many of my students’ futures rested on how they did on exams like this (especially in the UK system, where exams are absurdly overweighted! In most of my classes, the final exam was at least 60% of the grade!). I struggled mightily to make the exam as fair as I could, all the while knowing that it would never really feel fair and I didn’t even have the time to make it the best it could be. You really can’t assess how well someone understands an entire subject in a multiple-choice exam designed to take 90 minutes. It’s impossible.

The worst part of research for me was the rejection.

I mentioned in a previous post how I am hypersensitive to rejection; applying for grants and submitting to journals gave me the worst feelings of rejection I’ve felt in any job. It felt like they were evaluating not only the value of my work, but my worth as a scientist. Failure felt like being told that my entire career was a waste of time.

It was even worse than the feeling of rejection in freelance writing (which is one of the few things that my model tells me is bad about freelancing as a career for me, along with relatively low and uncertain income). I think the difference is that a book publisher is saying “We don’t think we can sell it”—’we’ and ‘sell’ being vital. They aren’t saying “This is a bad book; it shouldn’t exist; writing it was a waste of time”; they’re just saying “It’s not a subgenre we generally work with”, or “We don’t think it’s what the market wants right now”, or even “I personally don’t care for it”. They acknowledge their own subjective perspective and the fact that it’s ultimately dependent on forecasting the whims of an extremely fickle marketplace. They aren’t really judging my book, and they certainly aren’t judging me.

But in research publishing, it was different. Yes, it’s all in very polite language, thoroughly spiced with sophisticated jargon (though some reviewers are more tactful than others). But when your grant application gets rejected by a funding agency or your paper gets rejected by a journal, the sense really is “This project is not worth doing”; “This isn’t good science”; “It was/would be a waste of time and money”; “This (theory or experiment you’ve spent years working on) isn’t interesting or important.” Nobody ever came out and said those things, nor did they come out and say “You’re a bad economist and you should feel bad”; but honestly a couple of the reviews did kinda read to me like they wanted to say that. They thought that the whole idea that human beings care about each other is fundamentally stupid and naive and not worth talking about, much less running experiments on.

It isn’t so much that I believed them when they implied my work was bad science. I did make some mistakes along the way (but nothing fatal; I’ve seen far worse errors by Nobel Laureates). I didn’t have very large samples (because every person I add to the experiment costs money I have to pay, and therefore funding I have to come up with). But overall I do believe that my work is sufficiently rigorous to be worth publishing in scientific journals.

It’s more that I came to feel that my work is considered bad, that the kind of work I wanted to do would forever be an uphill battle against an implacable enemy. I already feel exhausted by that battle, and it had only barely begun. I had thought that behavioral economics was a more successful paradigm by now, that it had largely displaced the neoclassical assumptions that came before it; but I was wrong. Except specifically in journals dedicated to experimental and behavioral economics (of which prestigious journals are few—I quickly exhausted them), it really felt like a lot of the feedback I was getting amounted to, “I refuse to believe your paradigm.”.

Part of the problem, also, was that there simply aren’t that many prestigious journals, and they don’t take that many papers. The top 5 journals—which, for whatever reason, command far more respect than any other journals among economists—each accept only about 5-10% of their submissions. Surely more than that are worth publishing; and, to be fair, much of what they reject probably gets published later somewhere else. But it makes a shockingly large difference in your career how many “top 5s” you have; other publications almost don’t matter at all. So once you don’t get into any of those (which of course I didn’t), should you even bother trying to publish somewhere else?

And what else almost doesn’t matter? Your teaching. As long as you show up to class and grade your exams on time (and don’t, like, break the law or something), research universities basically don’t seem to care how good a teacher you are. That was certainly my experience at Edinburgh. (Honestly even their responses to professors sexually abusing their students are pretty unimpressive.)

Some of the other faculty cared, I could tell; there were even some attempts to build a community of colleagues to support each other in improving teaching. But the administration seemed almost actively opposed to it; they didn’t offer any funding to support the program—they wouldn’t even buy us pizza at the meetings, the sort of thing I had as an undergrad for my activist groups—and they wanted to take the time we spent in such pedagogy meetings out of our grading time (probably because if they didn’t, they’d either have to give us less grading, or some of us would be over our allotted hours and they’d owe us compensation).

And honestly, it is teaching that I consider the higher calling.

The difference between 0 people knowing something and 1 knowing it is called research; the difference between 1 person knowing it and 8 billion knowing it is called education.

Yes, of course, research is important. But if all the research suddenly stopped, our civilization would stagnate at its current level of technology, but otherwise continue unimpaired. (Frankly it might spare us the cyberpunk dystopia/AI apocalypse we seem to be hurtling rapidly toward.) Whereas if all education suddenly stopped, our civilization would slowly decline until it ultimately collapsed into the Stone Age. (Actually it might even be worse than that; even Stone Age cultures pass on knowledge to their children, just not through formal teaching. If you include all the ways parents teach their children, it may be literally true that humans cannot survive without education.)

Yet research universities seem to get all of their prestige from their research, not their teaching, and prestige is the thing they absolutely value above all else, so they devote the vast majority of their energy toward valuing and supporting research rather than teaching. In many ways, the administrators seem to see teaching as an obligation, as something they have to do in order to make money that they can spend on what they really care about, which is research.

As such, they are always making classes bigger and bigger, trying to squeeze out more tuition dollars (well, in this case, pounds) from the same number of faculty contact hours. It becomes impossible to get to know all of your students, much less give them all sufficient individual attention. At Edinburgh they even had the gall to refer to their seminars as “tutorials” when they typically had 20+ students. (That is not tutoring!) And then of course there were the lectures, which often had over 200 students.

I suppose it could be worse: It could be athletics they spend all their money on, like most Big Ten universities. (The University of Michigan actually seems to strike a pretty good balance: they are certainly not hurting for athletic funding, but they also devote sizeable chunks of their budget to research, medicine, and yes, even teaching. And unlike virtually all other varsity athletic programs, University of Michigan athletics turns a profit!)

If all the varsity athletics in the world suddenly disappeared… I’m not convinced we’d be any worse off, actually. We’d lose a source of entertainment, but it could probably be easily replaced by, say, Netflix. And universities could re-focus their efforts on academics, instead of acting like a free training and selection system for the pro leagues. The University of California, Irvine certainly seemed no worse off for its lack of varsity football. (Though I admit it felt a bit strange, even to a consummate nerd like me, to have a varsity League of Legends team.)

They keep making the experience of teaching worse and worse, even as they cut faculty salaries and make our jobs more and more precarious.

That might be what really made me most miserable, knowing how expendable I was to the university. If I hadn’t quit when I did, I would have been out after another semester anyway, and going through this same process a bit later. It wasn’t even that I was denied tenure; it was never on the table in the first place. And perhaps because they knew I wouldn’t stay anyway, they didn’t invest anything in mentoring or supporting me. Ostensibly I was supposed to be assigned a faculty mentor immediately; I know the first semester was crazy because of COVID, but after two and a half years I still didn’t have one. (I had a small research budget, which they reduced in the second year; that was about all the support I got. I used it—once.)

So if I do continue on that “academia” road, I’m going to need to do a lot of things differently. I’m not going to put up with a lot of things that I did. I’ll demand a long-term position—if not tenure-track, at least renewable indefinitely, like a lecturer position (as it is in the US, where the tenure-track position is called “assistant professor” and “lecturer” is permanent but not tenured; in the UK, “lecturers” are tenure-track—except at Oxford, and as of 2021, Cambridge—just to confuse you). Above all, I’ll only be applying to schools that actually have some track record for valuing teaching and supporting their faculty.

And if I can’t find any such positions? Then I just won’t apply at all. I’m not going in with the “I’ll take what I can get” mentality I had last time. Our household finances are stable enough that I can afford to wait awhile.

But maybe I won’t even do that. Maybe I’ll take a different path entirely.

For now, I just don’t know.

Empathy is not enough

Jan 14 JDN 2460325

A review of Against Empathy by Paul Bloom

The title Against Empathy is clearly intentionally provocative, to the point of being obnoxious: How can you be against empathy? But the book really does largely hew toward the conclusion that empathy, far from being an unalloyed good as we may imagine it to be, is overall harmful and detrimental to society.

Bloom defines empathy narrowly, but sensibly, as the capacity to feel other people’s emotions automatically—to feel hurt when you see someone hurt, afraid when you see someone afraid. He argues surprisingly well that this capacity isn’t really such a great thing after all, because it often makes us help small numbers of people who are like us rather than large numbers of people who are different from us.

But something about the book rubs me the wrong way all throughout, and I think I finally put my finger on it:

If empathy is bad… compared to what?

Compared to some theoretical ideal of perfect compassion where we love all sentient beings in the universe equally and act only according to maxims that would yield the greatest benefit for all, okay, maybe empathy is bad.

But that is an impossible ideal. No human being has ever approached it. Even our greatest humanitarians are not like that.

Indeed, one thing has clearly characterized the very best human beings, and that is empathy. Every one of them has been highly empathetic.

The case for empathy gets even stronger if you consider the other extreme: What are human beings like when they lack empathy? Why, those people are psychopaths, and they are responsible for the majority of violent crimes and nearly all the most terrible atrocities.

Empirically, if you look at humans as we actually are, it really seems like this function is monotonic: More empathy makes people behave better. Less empathy makes them behave worse.

Yet Bloom does have a point, nevertheless.

There are real-world cases where empathy seems to have done more harm than good.

I think his best examples come from analysis of charitable donations. Most people barely give anything to charity, which we might think of as a lack of empathy. But a lot of people do give a great deal to charity—yet the charities they give to and the gifts they give are often woefully inefficient.

Let’s even set aside cases like the Salvation Army, where the charity is actively detrimental to society due to the distortions of ideology. The Salvation Army is in fact trying to do good—they’re just starting from a fundamentally evil outlook on the universe. (And if that sounds harsh to you? Take a look at what they say about people like me.)

No, let’s consider charities that are well-intentioned, and not blinded by fanatical ideology, who really are trying to work toward good things. Most of them are just… really bad at it.

The most cost-effective charities, like the ones GiveWell gives top ratings to, can save a life for about $3,000-5,000, or about $150 to $250 per QALY.

But a typical charity is far, far less efficient than that. It’s difficult to get good figures on it, but I think it would be generous to say that a typical charity is as efficient as the standard cost-effectiveness threshold used in US healthcare, which is $50,000 per QALY. That’s already two hundred times less efficient.

And many charities appear to be even below that, where their marginal dollars don’t really seem to have any appreciable benefit in terms of QALY. Maybe $1 million per QALY—spend enough, and they’d get a QALY eventually.

Other times, people give gifts to good charities, but the gifts they give are useless—the Red Cross is frequently inundated with clothing and toys that it has absolutely no use for. (Please, please, I implore you: Give them money. They can buy what they need. And they know what they need a lot better than you do.)

Why do people give to charities that don’t really seem to accomplish anything? Because they see ads that tug on their heartstrings, or get solicited donations directly by people on the street or door-to-door canvassers. In other words, empathy.

Why do people give clothing and toys to the Red Cross after a disaster, instead of just writing a check or sending a credit card payment? Because they can see those crying faces in their minds, and they know that if they were a crying child, they’d want a toy to comfort them, not some boring, useless check. In other words, empathy.

Empathy is what you’re feeling when you see those Sarah McLachlan ads with sad puppies in them, designed to make you want to give money to the ASPCA.

Now, I’m not saying you shouldn’t give to the ASPCA. Actually animal welfare advocacy is one of those issues where cost-effectiveness is really hard to assess—like political donations, and for much the same reason. If we actually managed to tilt policy so that factory farming were banned, the direct impact on billions of animals spared that suffering—while indubitably enormous—might actually be less important, morally, than the impact on public health and climate change from people eating less meat. I don’t know what multiplier to apply to a cow’s suffering to convert her QALY into mine. But I do know that the world currently eats far too much meat, and it’s cooking the planet along with the cows. Meat accounts for about 60% of food-related greenhouse gases, and food production as a whole for around 35% of all greenhouse gases.

But I am saying that if you give to the ASPCA, it should be because you support their advocacy against factory farming—not because you saw pictures of very sad puppies.

And empathy, unfortunately, doesn’t really work that way.

When you get right down to it, what Paul Bloom is really opposing is scope neglect, which is something I’ve written about before.

We just aren’t capable of genuinely feeling the pain of a million people, or a thousand, or probably even a hundred. (Maybe we can do a hundred; that’s under our Dunbar number, after all.) So when confronted with global problems that affect millions of people, our empathy system just kind of overloads and shuts down.

ERROR: OVERFLOW IN EMPATHY SYSTEM. ABORT, RETRY, IGNORE?

But when confronted with one suffering person—or five, or ten, or twenty—we can actually feel empathy for them. We can look at their crying face and we may share their tears.

Charities know this; that’s why Sarah McLachlan does those ASPCA ads. And if that makes people donate to good causes, that’s a good thing. (If it makes them donate to the Salvation Army, that’s a different story.)

The problem is, it really doesn’t tell us what causes are best to donate to. Almost any cause is going to alleviate some suffering of someone, somewhere; but there’s an enormous difference between $250 per QALY, $50,000 per QALY, and $1 million per QALY. Your $50 donation would add either two and a half months, eight hours, or just over 26 minutes of joy to someone else’s life, respectively. (In the last case, it may literally be better—morally—for you to go out to lunch or buy a video game.)
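If you want to see where those numbers come from, the arithmetic is simple; here is a quick check using the three cost-per-QALY figures quoted above (the tier labels are mine, not anyone’s official categories).

```python
# Converting a $50 donation into time added to someone's life at three
# different cost-effectiveness levels (the dollar-per-QALY figures quoted above).

MINUTES_PER_YEAR = 365.25 * 24 * 60   # one QALY, expressed in minutes

donation = 50  # dollars

tiers = [
    ("top-rated charity",       250),        # dollars per QALY
    ("US healthcare threshold", 50_000),
    ("very low-impact charity", 1_000_000),
]

for label, cost_per_qaly in tiers:
    qalys = donation / cost_per_qaly
    minutes = qalys * MINUTES_PER_YEAR
    print(f"{label}: {qalys:.6f} QALY = {minutes:,.0f} minutes "
          f"(~{minutes/60:.1f} hours, ~{minutes/(60*24*30.4):.1f} months)")
```

Running it gives roughly 2.4 months, 8.8 hours, and 26 minutes, matching the figures above.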

To really know the best places to give to, you simply can’t rely on your feelings of empathy toward the victims. You need to do research—you need to do math. (Or someone does, anyway; you can also trust GiveWell to do it for you.)

Paul Bloom is right about this. Empathy doesn’t solve this problem. Empathy is not enough.

But where I think he loses me is in suggesting that we don’t need empathy at all—that we could somehow simply dispense with it. His offer is to replace it with an even-handed, universal-minded utilitarian compassion, a caring for all beings in the universe that values all their interests evenly.

That sounds awfully appealing—other than the fact that it’s obviously impossible.

Maybe it’s something we can all aspire to. Maybe it’s something we as a civilization can someday change ourselves to become capable of feeling, in some distant transhuman future. Maybe even, sometimes, at our very best moments, we can even approximate it.

But as a realistic guide for how most people should live their lives? It’s a non-starter.

In the real world, people with little or no empathy are terrible. They don’t replace it with compassion; they replace it with selfishness, greed, and impulsivity.

Indeed, in the real world, empathy and compassion seem to go hand-in-hand: The greatest humanitarians do seem like they better approximate that universal caring (though of course they never truly achieve it). But they are also invariably people of extremely high empathy.

And so, Dr. Bloom, I offer you a new title, perhaps not as catchy or striking—perhaps it would even have sold fewer books. But I think it captures the correct part of your thesis much better:

Empathy is not enough.

How do we stop overspending on healthcare?

Dec 10 JDN 2460290

I don’t think most Americans realize just how much more the US spends on healthcare than other countries. This is true not simply in absolute terms—of course it is, the US is rich and huge—but in relative terms: As a portion of GDP, our healthcare spending is a major outlier.

Here’s a graph from Healthsystemtracker.org that illustrates this nicely: Almost all other First World countries share a simple linear relationship between their per-capita GDP and their per-capita healthcare spending. But one of these things is not like the other ones….

The outlier in the other direction is Ireland, but that’s because their GDP is wildly inflated by Leprechaun Economics. (Notice that it looks like Ireland is by far the richest country in the sample! This is clearly not the case in reality.) With a corrected estimate of their true economic output, they are also quite close to the line.

Since US GDP per capita ($70,181) is in between that of Denmark ($64,898) and Norway ($80,496), both of which have very good healthcare systems (#ScandinaviaIsBetter), we would expect that US spending on healthcare would similarly be in between. But while Denmark spends $6,384 per person per year on healthcare and Norway spends $7,065 per person per year, the US spends $12,914.

That is, the US spends nearly twice as much as it should on healthcare.

The absolute difference between what we should spend and what we actually spend is nearly $6,000 per person per year. Multiply that out by the 330 million people in the US, and…

The US overspends on healthcare by nearly $2 trillion per year.
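Here is the arithmetic behind that figure, using the per-capita numbers above. Taking the midpoint of Denmark and Norway as the “expected” US level is a simplifying assumption on my part, but it’s in the right ballpark.

```python
# Rough estimate of US healthcare overspending, using the per-capita
# figures quoted above. The Denmark/Norway midpoint as the "expected"
# US spending level is a simplifying assumption.

us_spending = 12_914              # dollars per person per year
denmark, norway = 6_384, 7_065
expected = (denmark + norway) / 2           # ~$6,700 per person

excess_per_person = us_spending - expected  # ~$6,200 per person
population = 330e6
total_excess = excess_per_person * population

print(f"Excess per person: ${excess_per_person:,.0f} per year")
print(f"Total excess: ${total_excess/1e12:.2f} trillion per year")   # ~$2.0 trillion
```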

This might be worth it, if health in the US were dramatically better than health in other countries. (In that case I’d be saying that other countries spend too little.) But plainly it is not.

Probably the simplest and most comparable measure of health across countries is life expectancy. US life expectancy is 76 years, and has increased over time. But if you look at the list of countries by life expectancy, the US is not even in the top 50. Our life expectancy looks more like middle-income countries such as Algeria, Brazil, and China than it does like Norway or Sweden, who should be our economic peers.

There are of course many things that factor into life expectancy aside from healthcare: poverty and homicide are both much worse in the US than in Scandinavia. But then again, poverty is much worse in Algeria, and homicide is much worse in Brazil, and yet they somehow manage to nearly match the US in life expectancy (actually exceeding it in some recent years).

The US somehow manages to spend more on healthcare than everyone else, while getting outcomes that are worse than any country of comparable wealth—and even some that are far poorer.

This is largely why there is a so-called “entitlements crisis” (as many a libertarian think tank is fond of calling it). Since libertarians want to cut Social Security most of all, they like to lump it in with Medicare and Medicaid as an “entitlement” in “crisis”; but in fact we only need a few minor adjustments to the tax code to make sure that Social Security remains solvent for decades to come. It’s healthcare spending that’s out of control.

Here, take a look.

This is the ratio of Social Security spending to GDP from 1966 to the present. Notice how it has been mostly flat since the 1980s, other than a slight increase in the Great Recession.

This is the ratio of Medicare spending to GDP over the same period. Even ignoring the first few years while it was ramping up, it rose from about 0.6% in the 1970s to almost 4% in 2020, and only started to decline in the last few years (and it’s probably too early to say whether that will continue).

Medicaid has a similar pattern: It rose steadily from 0.2% in 1966 to over 3% today—and actually doesn’t even show any signs of leveling off.

If you look at Medicare and Medicaid together, they surged from just over 1% of GDP in 1970 to nearly 7% today:

Put another way: in 1982, Social Security was 4.8% of GDP while Medicare and Medicaid combined were 2.4% of GDP. Today, Social Security is 4.9% of GDP while Medicare and Medicaid are 6.8% of GDP.

Social Security spending barely changed at all; healthcare spending more than doubled. If we reduced our Medicare and Medicaid spending as a portion of GDP back to what it was in 1982, we would save 4.4% of GDP—that is, 4.4% of over $25 trillion per year, so $1.1 trillion per year.
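In code, the comparison looks like this; the GDP-share figures are the ones quoted just above, and the $25 trillion GDP is rounded.

```python
# The 1982-vs-today comparison, using the GDP shares quoted above.

gdp_today = 25e12   # US GDP, dollars per year (rounded)

social_security = {"1982": 0.048, "today": 0.049}     # share of GDP
medicare_medicaid = {"1982": 0.024, "today": 0.068}

savings_share = medicare_medicaid["today"] - medicare_medicaid["1982"]   # 4.4 pp
savings_dollars = savings_share * gdp_today                              # ~$1.1 trillion

print(f"Social Security change: {social_security['today'] - social_security['1982']:+.1%} of GDP")
print(f"Medicare+Medicaid change: {savings_share:+.1%} of GDP "
      f"= ${savings_dollars/1e12:.1f} trillion per year")
```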

Of course, we can’t simply do that; if we cut benefits that much, millions of people would suddenly lose access to healthcare they need.

The problem is not that we are spending frivolously, wasting the money on treatments no one needs. On the contrary, both Medicare and Medicaid carefully vet what medical services they are willing to cover, and if anything probably deny services more often than they should.

No, the problem runs deeper than this.

Healthcare is too expensive in the United States.

We simply pay more for just about everything, and especially for specialist doctors and hospitals.

In most other countries, doctors are paid like any other white-collar profession. They are well off, comfortable, certainly, but few of them are truly rich. But in the US, we think of doctors as an upper-class profession, and expect them to be rich.

Median doctor salaries are $98,000 in France and $138,000 in the UK—but a whopping $316,000 in the US. Germany and Canada are somewhere in between, at $183,000 and $195,000 respectively.

Nurses, on the other hand, are paid only a little more in the US than in Western Europe. This means that the pay difference between doctors and nurses is much higher in the US than most other countries.

US prices on brand-name medication are frankly absurd. Our generic medications are typically cheaper than other countries, but our brand name pills often cost twice as much. I noticed this immediately on moving to the UK: I had always been getting generics before, because the brand name pills cost ten times as much, but when I moved here, suddenly I started getting all brand-name medications (at no cost to me), because the NHS was willing to buy the actual brand name products, and didn’t have to pay through the nose to do so.

But the really staggering differences are in hospitals.

Let’s compare the prices of a few inpatient procedures between the US and Switzerland. Switzerland, you should note, is a very rich country that spends a lot on healthcare and has nearly the world’s highest life expectancy. So it’s not like they are skimping on care. (Nor is it that prices in general are lower in Switzerland; on the contrary, they are generally higher.)

A coronary bypass in Switzerland costs about $33,000. In the US, it costs $76,000.

A spinal fusion in Switzerland costs about $21,000. In the US? $52,000.

Angioplasty in Switzerland: $9,000. In the US? $32,000.

Hip replacement: Switzerland? $16,000. The US? $28,000.

Knee replacement: Switzerland? $19,000. The US? $27,000.

Cholecystectomy: Switzerland? $8,000. The US? $16,000.

Appendectomy: Switzerland? $7,000. The US? $13,000.

Caesarian section: Switzerland? $8,000. The US? $11,000.

Hospital prices are even lower in Germany and Spain, whose life expectancies are not as high as Switzerland’s—but are still higher than the US’s.

These prices are so much lower that in fact if you were considering getting surgery for a chronic condition in the US, don’t. Buy plane tickets to Europe and get the procedure done there. Spend an extra few thousand dollars on a nice European vacation and you’d still end up saving money. (Obviously if you need it urgently you have no choice but to use your nearest hospital.) I know that if I ever need a knee replacement (which, frankly, is likely, given my height), I’m gonna go to Spain and thereby save $22,000 relative to what it would cost in the US. That’s a difference of a car.

Combine this with the fact that the US is the only First World country without universal healthcare, and maybe you can see why we’re also the only country in the world where people are afraid to call an ambulance because they don’t think they can afford it. We are also the only country in the world with a medical debt crisis.

Where is all this extra money going?

Well, a lot of it goes to those doctors who are paid three times as much as in France. That, at least, seems defensible: If we want the best doctors in the world maybe we need to pay for them. (Then again, do we have the best doctors in the world? If so, why is our life expectancy so mediocre?)

But a significant portion is going to shareholders.

You probably already knew that there are pharmaceutical companies that rake in huge profits on those overpriced brand-name medications. The top five US pharma companies took in net earnings of nearly $82 billion last year. Pharmaceutical companies typically take in much higher profit margins than other companies: a typical corporation makes about 8% of its revenue in profit, while pharmaceutical companies average nearly 14%.

But you may not have realized that a surprisingly large proportion of hospitals are for-profit businesses, even though they make most of their revenue from Medicare and Medicaid.

I was surprised to find that the US is not unusual in that; in fact, for-profit hospitals exist in dozens of countries, and the fraction of US hospital capacity that is for-profit isn’t even particularly high by world standards.

What is especially large is the profits of US hospitals: seven US healthcare corporations each posted net income of over $1 billion in 2021.

Even nonprofit US hospitals are tremendously profitable—as oxymoronic as that may sound. In fact, mean operating profit is higher among nonprofit hospitals in the US than for-profit hospitals. So even the hospitals that aren’t supposed to be run for profit… pretty much still are. They get tax deductions as if they were charities—but they really don’t act like charities.

They are basically nonprofit in name only.

So fixing this will not be as simple as making all hospitals nonprofit. We must also restructure the institutions so that nonprofit hospitals are genuinely nonprofit, and no longer nonprofit in name only. It’s normal for a nonprofit to have a little bit of profit or loss—nobody can make everything always balance perfectly—but these hospitals have been raking in huge profits and keeping the money in cash instead of using it to reduce prices or improve services. In the study I linked above, those 2,219 “nonprofit” hospitals took in operating profits averaging $43 million each—for a total of $95 billion.

Between pharmaceutical companies and hospitals, that’s a total of over $170 billion per year just in profit. (That’s more than we spend on food stamps, even after the surge due to COVID.) This is pure grift. It must be stopped.
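Adding up the profit figures quoted above:

```python
# Totaling the profit figures cited above.

pharma_profit = 82e9                     # top five US pharma companies, net earnings

nonprofit_hospitals = 2_219
avg_operating_profit = 43e6              # dollars per "nonprofit" hospital
hospital_profit = nonprofit_hospitals * avg_operating_profit   # ~$95 billion

total = pharma_profit + hospital_profit
print(f"Pharma: ${pharma_profit/1e9:.0f}B + hospitals: ${hospital_profit/1e9:.0f}B "
      f"= ${total/1e9:.0f}B per year")    # ~$177 billion
```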

But that still doesn’t explain why we’re spending $2 trillion more than we should! So after all, I must leave you with a question:

What is America doing wrong? Why is our healthcare so expensive?

Israel, Palestine, and the World Bank’s disappointing priorities

Nov 12 JDN 2460261

Israel and Palestine are once again at war. (There are a disturbing number of different years in which one could have written that sentence.) The BBC has a really nice section of their website dedicated to reporting on various facets of the war. The New York Times also has a section on it, but it seems a little tilted in favor of Israel.

This time, it started with a brutal attack by Hamas, and now Israel has—as usual—overreacted and retaliated with a level of force that is sure to feed the ongoing cycle of extremism. All across social media I see people wanting me to take one side or the other, often even making good points: “Hamas slaughters innocents” and “Israel is a de facto apartheid state” are indeed both important points I agree with. But if you really want to know my ultimate opinion, it’s that this whole thing is fundamentally evil and stupid because human beings are suffering and dying over nothing but lies. All religions are false, most of them are evil, and we need to stop killing each other over them.

Anti-Semitism and Islamophobia are both morally wrong insofar as they involve harming, abusing or discriminating against actual human beings. Let people dress however they want, celebrate whatever holidays they want, read whatever books they want. Even if their beliefs are obviously wrong, don’t hurt them if they aren’t hurting anyone else. But both Judaism and Islam—and Christianity, and more besides—are fundamentally false, wrong, evil, stupid, and detrimental to the advancement of humanity.

That’s the thing that so much of the public conversation is too embarrassed to say; we’re supposed to pretend that they aren’t fighting over beliefs that are obviously false. We’re supposed to respect each particular flavor of murderous nonsense, and always find some other cause to explain the conflict. It’s over culture (what culture?); it’s over territory (whose territory?); it’s a retaliation for past conflict (over what?). We’re not supposed to say out loud that all of this violence ultimately hinges upon people believing in nonsense. Even if the conflict wouldn’t disappear overnight if everyone suddenly stopped believing in God—and are we sure it wouldn’t? Let’s try it—it clearly could never have begun, if everyone had started with rational beliefs in the first place.

But I don’t really want to talk about that right now. I’ve said enough. Instead I want to talk about something a little more specific, something less ideological and more symptomatic of systemic structural failures. Something you might have missed amidst the chaos.

The World Bank recently released a report on the situation focused heavily on the looming threat of… higher oil prices. (And of course there has been breathless reporting from various outlets regarding a headline figure of $150 per barrel which is explicitly stated in the report as an unlikely “worst-case scenario”.)

There are two very big reasons why I found this dismaying.


The first, of course, is that there are obviously far more important concerns here than commodity prices. Yes, I know that this report is part of an ongoing series of Commodity Markets Outlook reports, but the fact that this is the sort of thing that the World Bank has ongoing reports about is also saying something important about the World Bank’s priorities. They release monthly commodity forecasts and full Commodity Markets Outlook reports that come out twice a year, unlike the World Development Reports that only come out once a year. The World Bank doesn’t release a twice-annual Conflict Report or a twice-annual Food Security Report. (Even the FAO, which publishes an annual State of Food Security and Nutrition in the World report, also publishes a State of Agricultural Markets report just as often.)

The second is that, when reading the report, one can clearly tell that whoever wrote it thinks that rising oil and gas prices are inherently bad. They keep talking about all of these negative consequences that higher oil prices could have, and seem utterly unaware of the really enormous upside here: We may finally get a chance to do something about climate change.

You see, one of the most basic reasons why we haven’t been able to fix climate change is that oil is too damn cheap. Its market price has consistently failed to reflect its actual costs. Part of that is due to oil subsidies around the world, which have held the price lower than it would be even in a free market; but most of it is due to the simple fact that pollution and carbon emissions don’t cost money for the people who produce them, even though they do cost the world.

Fortunately, wind and solar power are also getting very cheap, and are now at the point where they can outcompete oil and gas for electrical power generation. But that’s not enough. We need to remove oil and gas from everything: heating, manufacturing, agriculture, transportation. And that is far easier to do if oil and gas suddenly become more expensive and so people are forced to stop using them.

Now, granted, many of the downsides in that report are genuine: Because oil and gas are such vital inputs to so many economic processes, it really is true that making them more expensive will make lots of other things more expensive, and in particular could increase food insecurity by making farming more expensive. But if that’s what we’re concerned about, we should be focusing on that: What policies can we use to make sure that food remains available to all? And one of the best things we could be doing toward that goal is finding ways to make agriculture less dependent on oil.

By focusing on oil prices instead, the World Bank is encouraging the world to double down on the very oil subsidies that are holding climate policy back. Even food subsidies—which certainly have their own problems—would be an obviously better solution, and yet they are barely mentioned.

In fact, if you actually read the report, it shows that fears of food insecurity seem unfounded: Food prices are actually declining right now. Grain prices in particular seem to be falling back down remarkably quickly after their initial surge when Russia invaded Ukraine. Of course that could change, but it’s a really weird attitude toward the world to see something good and respond with, “Yes, but it might change!” This is how people with anxiety disorders (and I would know) think—which makes it seem as though much of the economic policy community suffers from some kind of collective equivalent of an anxiety disorder.

There also seems to be a collective sense that higher prices are always bad. This is hardly just a World Bank phenomenon; on the contrary, it seems to pervade all of economic thought, including the most esteemed economists, the most powerful policymakers, and even most of the general population of citizens. (The one major exception seems to be housing, where the sense is that higher prices are always good—even when the world is in a chronic global housing shortage that leaves millions homeless.) But prices can be too low or too high. And oil prices are clearly, definitely too low. Prices should reflect the real cost of production—all the real costs of production. It should cost money to pollute other people’s air.

In fact I think the whole report is largely a nothingburger: Oil prices haven’t even risen all that much so far—we’re still at $80 per barrel last I checked—and the one thing that is true about the so-called Efficient Market Hypothesis is that forecasting future prices is a fool’s errand. But it’s still deeply unsettling to see such intelligent, learned experts so clearly panicking over the mere possibility that there could be a price change which would so obviously be good for the long-term future of humanity.

There is plenty more worth saying about the Israel-Palestine conflict, and in particular what sort of constructive policy solutions we might be able to find that would actually result in any kind of long-term peace. I’m no expert on peace negotiations, and frankly, if I were ever personally involved in such a negotiation, it would probably be a liability that I’d be tempted to tell both sides that they are idiots and fanatics. (The headline the next morning: “Israeli and Palestinian Delegates Agree on One Thing: They Hate the US Ambassador”.)

The World Bank could have plenty to offer here, yet so far they’ve been too focused on commodity prices. Their thinking is a little too much ‘bank’ and not enough ‘world’.

It is a bit ironic, though also vaguely encouraging, that there are those within the World Bank itself who recognize this problem: Just a few weeks ago Ajay Banga gave a speech to the World Bank about “a world free of poverty on a livable planet”.

Yes. Those sound like the right priorities. Now maybe you could figure out how to turn that lip service into actual policy.