There should be a glut of nurses.

Jan 15 JDN 2459960

It will not be news to most of you that there is a worldwide shortage of healthcare staff, especially nurses and emergency medical technicians (EMTs). I would like you to stop and think about the utterly terrible policy failure this represents. Maybe if enough people do, we can figure out a way to fix it.

It goes without saying—yet bears repeating—that people die when you don’t have enough nurses and EMTs. Indeed, surely a large proportion of the 2.6 million (!) deaths each year from medical errors are attributable to this. It is likely that at least one million lives per year could be saved by fixing this problem worldwide. In the US alone, over 250,000 deaths per year are caused by medical errors; so we’re looking at something like 100,000 lives we could save each year by removing staffing shortages.

Precisely because these jobs have such high stakes, the mere fact that we would ever see the word “shortage” beside “nurse” or “EMT” was already clear evidence of dramatic policy failure.

This is not like other jobs. A shortage of accountants or baristas or even teachers, while a bad thing, is something that market forces can be expected to correct in time, and it wouldn’t be unreasonable to simply let them do so—meaning, let wages rise on their own until the market is restored to equilibrium. A “shortage” of stockbrokers or corporate lawyers would in fact be a boon to our civilization. But a shortage of nurses or EMTs or firefighters (yes, there are those too!) is a disaster.

Partly this is due to the COVID pandemic, which has been longer and more severe than any but the most pessimistic analysts predicted. But there were shortages of nurses before COVID. There should not have been. There should have been a massive glut.

Even if there hadn’t been a shortage of healthcare staff before the pandemic, the fact that there wasn’t a glut was already a problem.

This is what a properly-functioning healthcare policy would look like: Most nurses are bored most of the time. They are widely regarded as overpaid. People go into nursing because it’s a comfortable, easy career with very high pay and usually not very much work. Hospitals spend most of their time with half their beds empty and half of their ambulances parked while the drivers and EMTs sit around drinking coffee and watching football games.

Why? Because healthcare, especially emergency care, involves risk, and the stakes couldn’t be higher. If the number of severely sick people doubles—as in, say, a pandemic—a hospital that usually runs at 98% capacity won’t be able to deal with them. But a hospital that usually runs at 50% capacity will.
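
This is easy to see in a toy simulation. Here is a minimal sketch, with made-up numbers (the demand distribution, surge probability, and capacities are all illustrative assumptions, not real hospital data):

```python
import random

def fraction_of_days_over_capacity(capacity, mean_demand=100,
                                   surge_chance=0.05, days=100_000):
    """Count how often daily demand for beds exceeds capacity.

    Demand fluctuates around mean_demand; on rare surge days (think:
    a pandemic) it doubles. All parameters are illustrative guesses.
    """
    over = 0
    for _ in range(days):
        demand = random.gauss(mean_demand, 10)
        if random.random() < surge_chance:
            demand *= 2  # a surge doubles the number of severe cases
        if demand > capacity:
            over += 1
    return over / days

# A hospital sized for ~98% average utilization vs. one sized for ~50%:
print(fraction_of_days_over_capacity(capacity=102))  # fails on ~45% of days
print(fraction_of_days_over_capacity(capacity=200))  # fails ~2%, only on surges
```

The half-empty hospital absorbs all but the very worst surges; the “efficient” one is overwhelmed even on many ordinary days.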

COVID exposed to the world what a careful analysis would already have shown: There was not nearly enough redundancy in our healthcare system. We had been optimizing for a narrow-minded, short-sighted notion of “efficiency” over what we really needed, which was resiliency and robustness.

I’d like to compare this to two other types of jobs.

The first is stockbrokers. Set aside for a moment the fact that most of what they do is worthless, if not actively detrimental, to human society. Suppose that their most adamant boosters are correct and what they do is actually really important and beneficial.

Their experience is almost like what I just said nurses ought to be. They are widely regarded (correctly) as very overpaid. There is never any shortage of them; there are people lining up to be hired. People go into the work not because they care about it or even because they are particularly good at it, but because they know it’s an easy way to make a lot of money.

The one thing that seems to be different from my image may not be as different as it seems. Stockbrokers work long hours, but nobody can really explain why. Frankly, most of what they do can be—and has been—successfully automated. Since there simply isn’t that much work for them to do, my guess is that most of the time they spend “working” 60-80 hour weeks is not actually working, but sitting around pretending to work. Since most financial forecasters are outperformed by a simple diversified portfolio, the most profitable action for most stock analysts to take most of the time would be nothing.

It may also be that stockbrokers work hard at sales—trying to convince people to buy and sell for bad reasons in order to earn sales commissions. This would at least explain why they work so many hours, though it would make it even harder to believe that what they do benefits society. So if we imagine our “ideal” stockbroker who makes the world a better place, I think they mostly just use a simple algorithm and maybe adjust it every month or two. They make better returns than their peers, but spend 38 hours a week goofing off.
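For concreteness, here is a minimal sketch of what that “simple algorithm” might look like: hold a diversified portfolio at fixed target weights and rebalance every month or two. The asset names and weights are made-up illustrations, not investment advice.

```python
# Target allocation for a boring, diversified portfolio (illustrative).
TARGET_WEIGHTS = {"total_stock_index": 0.60,
                  "total_bond_index": 0.30,
                  "cash": 0.10}

def rebalance(holdings_value):
    """Given the current dollar value of each asset, return the trades
    (in dollars; positive = buy) that restore the target weights."""
    total = sum(holdings_value.values())
    return {asset: TARGET_WEIGHTS[asset] * total - value
            for asset, value in holdings_value.items()}

# Example: after a month of market moves, the portfolio has drifted.
drifted = {"total_stock_index": 66_000,
           "total_bond_index": 28_000,
           "cash": 10_000}
print(rebalance(drifted))
# {'total_stock_index': -3600.0, 'total_bond_index': 3200.0, 'cash': 400.0}
```

That’s the whole job: run it every month or two, and goof off the rest of the week.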

There is a massive glut of stockbrokers. This is what it looks like when a civilization is really optimized to be good at something.

The second is soldiers. Say what you will about them, no one can dispute that their job has stakes of life and death. A lot of people seem to think that the world would be better off without them, but that’s at best only true if everyone got rid of them; if you don’t have soldiers but other countries do, you’re going to be in big trouble. (“We’ll beat our swords into liverwurst / Down by the East Riverside; / But no one wants to be the first!”) So unless and until we can solve that mother of all coordination problems, we need to have soldiers around.

What is life like for a soldier? Well, they don’t seem overpaid; if anything, underpaid. (Maybe some of the officers are overpaid, but clearly not most of the enlisted personnel. Part of the problem there is that “pay grade” is nearly synonymous with “rank”—it’s a primate hierarchy, not a rational wage structure. Then again, so are most industries; the military just makes it more explicit.) But there do seem to be enough of them. Military officials may lament “shortages” of soldiers, but they never actually seem to want for troops to deploy when they really need them. And if a major war really did start that required all available manpower, the draft could be reinstated and then suddenly they’d have it—the authority to coerce compliance is precisely how you can avoid having a shortage while keeping your workers underpaid. (Russia’s soldier shortage is genuine—something about being utterly outclassed by your enemy’s technological superiority in an obviously pointless imperialistic war seems to hurt your recruiting numbers.)

What is life like for a typical soldier? The answer may surprise you. The overwhelming answer in surveys and interviews (which also fits with the experiences I’ve heard about from friends and family in the military) is that life as a soldier is boring: “All you do is wake up in the morning and push rubbish around camp. Bosnia was scary for about 3 months. After that it was boring. That is pretty much day to day life in the military. You are bored.”

This isn’t new, nor even an artifact of not being in any major wars: Union soldiers in the US Civil War had the same complaint. Even in World War I, a typical soldier spent only half the time on the front, and when on the front only saw combat 1/5 of the time. War is boring.

In other words, there is a massive glut of soldiers. Most of them don’t even know what to do with themselves most of the time.

This makes perfect sense. Why? Because an army needs to be resilient. And to be resilient, you must be redundant. If you only had exactly enough soldiers to deploy in a typical engagement, you’d never have enough for a really severe engagement. If on average you had enough, that means you’d spend half the time with too few. And the costs of having too few soldiers are utterly catastrophic.

This is probably an evolutionary outcome, in fact; civilizations may have tried to have “leaner” militaries that didn’t have so much redundancy, and those civilizations were conquered by other civilizations that were more profligate. (This is not to say that we couldn’t afford to cut military spending at all; it’s one thing to have the largest military in the world—I support that, actually—but quite another to have more than the next 10 combined.)

What’s the policy solution here? It’s actually pretty simple.

Pay nurses and EMTs more. A lot more. Whatever it takes to get to the point where we not only have enough, but have so many people lining up to join we don’t even know what to do with them all. If private healthcare firms won’t do it, force them to—or, all the more reason to nationalize healthcare. The stakes are far too high to leave things as they are.

Would this be expensive? Sure.

Removing the shortage of EMTs wouldn’t even be that expensive. There are only about 260,000 EMTs in the US, and they get paid an appallingly low median salary of $36,000. That means we’re currently spending only about $9 billion per year on EMTs. We could double their salaries and double their numbers for only an extra $27 billion—about 0.1% of US GDP.

Nurses would cost more. There are about 5 million nurses in the US, with an average salary of about $78,000, so we’re currently spending about $390 billion a year on nurses. We probably can’t afford to double both salary and staffing. But maybe we could increase both by 20%, costing about an extra $170 billion per year.

Altogether that would cost about $200 billion per year. To save one hundred thousand lives.

That’s $2 million per life saved, or about $40,000 per QALY. The usual estimate for the value of a statistical life is about $10 million, and the usual threshold for a cost-effective medical intervention is $50,000-$100,000 per QALY; so we’re well under both. This isn’t as efficient as buying malaria nets in Africa, but it’s more efficient than plenty of other things we’re spending on. And this isn’t even counting additional benefits of better care that go beyond lives saved.
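
If you want to check that arithmetic, it fits in a short script. The staffing and salary figures are the rough estimates quoted above, and the figure of about 50 QALY per life saved is my own round-number assumption:

```python
# Back-of-the-envelope check; all inputs are rough estimates from the text.
emt_count, emt_salary = 260_000, 36_000
nurse_count, nurse_salary = 5_000_000, 78_000

emt_now = emt_count * emt_salary            # ~$9 billion/year
emt_extra = emt_now * (2 * 2 - 1)           # double pay AND double headcount
nurse_now = nurse_count * nurse_salary      # ~$390 billion/year
nurse_extra = nurse_now * (1.2 * 1.2 - 1)   # +20% pay AND +20% headcount

total_extra = emt_extra + nurse_extra       # ~$200 billion/year
lives_saved = 100_000
qaly_per_life = 50                          # illustrative assumption

print(f"Extra cost: ${total_extra / 1e9:.0f} billion per year")
print(f"Cost per life saved: ${total_extra / lives_saved / 1e6:.1f} million")
print(f"Cost per QALY: ${total_extra / (lives_saved * qaly_per_life):,.0f}")
```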

In fact if we nationalized US healthcare we could get more than these amounts in savings from not wasting our money on profits for insurance and drug companies—simply making the US healthcare system as cost-effective as Canada’s would save $6,000 per American per year, or a whopping $1.9 trillion. At that point we could double the number of nurses and their salaries and still be spending less.

No, it’s not because nurses and doctors are paid much less in Canada than the US. That’s true in some countries, but not Canada. The median salary for nurses in Canada is about $95,500 CAD, which is $71,000 US at current exchange rates. Doctors in Canada can make anywhere from $80,000 to $400,000 CAD, which is $60,000 to $300,000 US. Nor are healthcare outcomes in Canada worse than the US; if anything, they’re better, as Canadians live an average of four years longer than Americans. No, the radical difference in cost—a factor of 2 to 1—between Canada and the US comes from privatization. Privatization is supposed to make things more efficient and lower costs, but it has absolutely not done that in US healthcare.

And if our choice is between spending more money and letting hundreds of thousands or millions of people die every year, that’s no choice at all.

Good enough is perfect, perfect is bad

Jan 8 JDN 2459953

Not too long ago, I read the book How to Keep House While Drowning by KC Davis, which I highly recommend. It offers a great deal of useful and practical advice, especially for someone neurodivergent and depressed living through an interminable pandemic (which I am, but honestly, odds are, you may be too). And to say it is a quick and easy read is actually an unfair understatement; it is explicitly designed to be readable in short bursts by people with ADHD, and it has a level of accessibility that most other books don’t even aspire to and I honestly hadn’t realized was possible. (The extreme contrast between this and academic papers is particularly apparent to me.)

One piece of advice that really stuck with me was this: Good enough is perfect.

At first, it sounded like nonsense; no, perfect is perfect, good enough is just good enough. But in fact there is a deep sense in which it is absolutely true.

Indeed, let me make it a bit stronger: Good enough is perfect; perfect is bad.

I doubt Davis thought of it in these terms, but this is a concise, elegant statement of the principles of bounded rationality. Sometimes it can be optimal not to optimize.

Suppose that you are trying to optimize something, but you have limited computational resources with which to do so. This is actually not a lot for you to suppose—it’s literally true of basically everyone, basically every moment of every day.

But let’s make it a bit more concrete, and say that you need to find the solution to the following math problem: “What is the product of 2419 and 1137?” (Pretend you don’t have a calculator, as it would trivialize the exercise. I thought about using a problem you couldn’t do with a standard calculator, but I realized that would also make it much weirder and more obscure for my readers.)

Now, suppose that there are some quick, simple ways to get reasonably close to the correct answer, and some slow, difficult ways to actually get the answer precisely.

In this particular problem, the former is to approximate: What’s 2500 times 1000? 2,500,000. So it’s probably about 2,500,000.

Or we could approximate a bit more closely: Say 2400 times 1100. That’s 24 times 11, times 10,000. And 24 times 11 is 2 times 12 times 11; 12 times 11 is 110 plus 22, which is 132; so we have 2 times 132, or 264; times 10,000, that’s 2,640,000.

Or, we could actually go through all the steps to do the full multiplication (remember I’m assuming you have no calculator), multiply, carry the 1s, add all four sums, re-check everything and probably fix it because you messed up somewhere; and then eventually you will get: 2,750,403.

So, our really fast method was only off by about 10%. Our moderately-fast method was only off by 4%. And both of them were a lot faster than getting the exact answer by hand.
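
If you do happen to have a computer handy, the whole comparison fits in a few lines (with Python standing in for the calculator):

```python
exact = 2419 * 1137    # the slow pencil-and-paper answer: 2,750,403

rough = 2500 * 1000    # round both numbers aggressively
closer = 2400 * 1100   # round less aggressively

for name, estimate in [("rough", rough), ("closer", closer)]:
    error = abs(estimate - exact) / exact
    print(f"{name}: {estimate:,} (off by {error:.1%})")
# rough: 2,500,000 (off by 9.1%)
# closer: 2,640,000 (off by 4.0%)
```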

Which of these methods you’d actually want to use depends on the context and the tools at hand. If you had a calculator, sure, get the exact answer. Even if you didn’t, but you were balancing the budget for a corporation, I’m pretty sure they’d care about that extra $110,403. (Then again, they might not care about the $403 or at least the $3.) But just as an intellectual exercise, you really didn’t need to do anything; the optimal choice may have been to take my word for it. Or, if you were at all curious, you might be better off choosing the quick approximation rather than the precise answer. Since nothing of any real significance hinged on getting that answer, it may be simply a waste of your time to bother finding it.

This is of course a contrived example. But it’s not so far from many choices we make in real life.

Yes, if you are making a big choice—which job to take, what city to move to, whether to get married, which car or house to buy—you should get a precise answer. In fact, I make spreadsheets with formal utility calculations whenever I make a big choice, and I haven’t regretted it yet. (Did I really make a spreadsheet for getting married? You’re damn right I did; there were a lot of big financial decisions to make there—taxes, insurance, the wedding itself! I didn’t decide whom to marry that way, of course; but we always had the option of staying unmarried.)

But most of the choices we make from day to day are small choices: What should I have for lunch today? Should I vacuum the carpet now? What time should I go to bed? In the aggregate they may all add up to important things—but each one of them really won’t matter that much. If you were to construct a formal model to optimize your decision of everything to do each day, you’d spend your whole day doing nothing but constructing formal models. Perfect is bad.

In fact, even for big decisions, you can’t really get a perfect answer. There are just too many unknowns. Sometimes you can spend more effort gathering additional information—but that’s costly too, and sometimes the information you would most want simply isn’t available. (You can look up the weather in a city, visit it, ask people about it—but you can’t really know what it’s like to live there until you do.) Even those spreadsheet models I use to make big decisions contain error bars and robustness checks, and if, even after investing a lot of effort trying to get precise results, I still find two or more choices just can’t be clearly distinguished to within a good margin of error, I go with my gut. And that seems to have been the best choice for me to make. Good enough is perfect.
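
As a minimal sketch of that decision rule (the structure here is mine, and the numbers are invented): score each option, and only let the model decide when the winner clearly beats the margin of error.

```python
def decide(options, margin):
    """options: {name: estimated utility}. Return the best option only if
    it beats the runner-up by more than the margin of error."""
    ranked = sorted(options.items(), key=lambda kv: kv[1], reverse=True)
    (best, best_u), (_, runner_up_u) = ranked[0], ranked[1]
    if best_u - runner_up_u > margin:
        return best
    return None  # too close to call: go with your gut

jobs = {"job_A": 7.2, "job_B": 6.9, "job_C": 5.1}
choice = decide(jobs, margin=0.5)
print(choice or "Too close to call; trust your gut.")
```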

I think that being gifted as a child trained me to be dangerously perfectionist as an adult. (Many of you may find this familiar.) When it came to solving math problems, or answering quizzes, perfection really was an attainable goal a lot of the time.

As I got older and progressed further in my education, maybe getting every answer right was no longer feasible; but I still could get the best possible grade, and did, in most of my undergraduate classes and all of my graduate classes. To be clear, I’m not trying to brag here; if anything, I’m a little embarrassed. What it mainly shows is that I had learned the wrong priorities. In fact, one of the main reasons why I didn’t get a 4.0 average in undergrad is that I spent a lot more time back then writing novels and nonfiction books, which to this day I still consider my most important accomplishments and grieve that I’ve not (yet?) been able to get them commercially published. I did my best work when I wasn’t trying to be perfect. Good enough is perfect; perfect is bad.

Now here I am on the other side of the academic system, trying to carve out a career, and suddenly, there is no perfection. When my exam is being graded by someone else, there is a way to get the most points. When I’m the one grading the exams, there is no “correct answer” anymore. There is no one scoring me to see if I did the grading the “right way”—and so, no way to be sure I did it right.

Actually, here at Edinburgh, there are other instructors who moderate grades and often require me to revise them, which feels a bit like “getting it wrong”; but it’s really more like we had different ideas of what the grade curve should look like (not to mention US versus UK grading norms). There is no longer an objectively correct answer the way there is for, say, the derivative of x^3, the capital of France, or the definition of comparative advantage. (Or, one question I got wrong on an undergrad exam because I had zoned out of that lecture to write a book on my laptop: Whether cocaine is a dopamine reuptake inhibitor. It is. And the fact that I still remember that because I got it wrong over a decade ago tells you a lot about me.)

And then when it comes to research, it’s even worse: What even constitutes “good” research, let alone “perfect” research? What would be most scientifically rigorous isn’t what journals would be most likely to publish—and without much bigger grants, I can afford neither. I find myself longing for the research paper that will be so spectacular that top journals have to publish it, removing all risk of rejection and failure—in other words, perfect.

Yet such a paper plainly does not exist. Even if I were to do something that would win me a Nobel or a Fields Medal (this is, shall we say, unlikely), it probably wouldn’t be recognized as such immediately—a typical Nobel isn’t awarded until 20 or 30 years after the work that spawned it, and while Fields Medals are faster, they’re by no means instant or guaranteed. In fact, a lot of ground-breaking, paradigm-shifting research was originally relegated to minor journals because the top journals considered it too radical to publish.

Or I could try to do something trendy—feed into DSGE or GTFO—and try to get published that way. But I know my heart wouldn’t be in it, and so I’d be miserable the whole time. In fact, because it is neither my passion nor my expertise, I probably wouldn’t even do as good a job as someone who really buys into the core assumptions. I already have trouble speaking frequentist sometimes: Are we allowed to say “almost significant” for p = 0.06? Maximizing the likelihood is still kosher, right? Just so long as I don’t impose a prior? But speaking DSGE fluently and sincerely? I’d have an easier time speaking in Latin.

What I know—on some level at least—I ought to be doing is finding the research that I think is most worthwhile, given the resources I have available, and then getting it published wherever I can. Or, in fact, I should probably constrain a little by what I know about journals: I should do the most worthwhile research that is feasible for me and has a serious chance of getting published in a peer-reviewed journal. It’s sad that those two things aren’t the same, but they clearly aren’t. This constraint binds, and its Lagrange multiplier is measured in humanity’s future.

But one thing is very clear: By trying to find the perfect paper, I have floundered and, for the last year and a half, not written any papers at all. The right choice would surely have been to write something.

Because good enough is perfect, and perfect is bad.

Charity shouldn’t end at home

Dec 25 JDN 2459939

It so happens that this week’s post will go live on Christmas Day. I always try to do some kind of holiday-themed post around this time of year, because not only Christmas but a dozen other holidays from various religions fall in this season. The winter solstice seems to be a very popular time for holidays, and has been since antiquity: The Romans were celebrating Saturnalia 2000 years ago. Most of our ‘Christmas’ traditions are actually derived from Yuletide.

These holidays certainly mean many different things to different people, but charity and generosity are themes that are very common across a lot of them. Gift-giving has been part of the season since at least Saturnalia and remains as vital as ever today. Most of those gifts are given to our friends and loved ones, but a substantial fraction of people also give to strangers in the form of charitable donations: November and December have the highest rates of donation to charity in the US and the UK, with about 35-40% of people donating during this season. (Of course this is complicated by the fact that December 31 is often the day with the most donations, probably from people trying to finish out their tax year with a larger deduction.)

My goal today is to make you one of those donors. There is a common saying, often attributed to the Bible but not actually present in it: “Charity begins at home”.

Perhaps this is so. There’s certainly something questionable about the Effective Altruism strategy of “earning to give” if it involves abusing and exploiting the people around you in order to make more money that you then donate to worthy causes. Certainly we should be kind and compassionate to those around us, and it makes sense for us to prioritize those close to us over strangers we have never met. But while charity may begin at home, it must not end at home.

There are so many global problems that could benefit from additional donations. While global poverty has been rapidly declining in the early 21st century, this is largely because of the efforts of donors and nonprofit organizations. Official Development Assistance has been roughly constant since the 1970s at 0.3% of GNI among First World countries—well below international targets set decades ago. Total development aid is around $160 billion per year, while private donations from the United States alone are over $480 billion. Moreover, 9% of the world’s population still lives in extreme poverty, and this rate has actually slightly increased in the last few years due to COVID.

There are plenty of other worthy causes you could give to aside from poverty eradication, from issues that have been with us since the dawn of human civilization (Humane Society International for domestic animal welfare, the World Wildlife Fund for wildlife conservation) to exotic fat-tail sci-fi risks that are only emerging in our own lifetimes (the Machine Intelligence Research Institute for AI safety, the International Federation of Biosafety Associations for biosecurity, the Union of Concerned Scientists for climate change and nuclear safety). You could fight poverty directly through organizations like UNICEF or GiveDirectly, fight neglected diseases through the Schistosomiasis Control Initiative or the Against Malaria Foundation, or entrust an organization like GiveWell to optimize your donations for you, sending them where they think they are needed most. You could give to political causes supporting civil liberties (the American Civil Liberties Union) or protecting the rights of people of color (the National Association for the Advancement of Colored People) or LGBT people (the Human Rights Campaign).

I could spend a lot of time and effort trying to figure out the optimal way to divide up your donations and give them to causes such as these—and then convincing you that it’s really the right one. (And there is even a time and place for that, because seemingly-small differences can matter a lot in this.) But instead I think I’m just going to ask you to pick something. Give something to an international charity with a good track record.

I think we worry far too much about what is the best way to give—especially people in the Effective Altruism community, of which I’m sort of a marginal member—when the biggest thing the world really needs right now is just more people giving more. It’s true, there are lots of worthless or even counter-productive charities out there: Please, please do not give to the Salvation Army. (And think twice before donating to your own church; if you want to support your own community, okay, go ahead. But if you want to make the world better, there are much better places to put your money.)

But above all, give something. Or if you already give, give more. Most people don’t give at all, and most people who give don’t give enough.

In defense of civility

Dec 18 JDN 2459932

Civility is in short supply these days. Perhaps it has always been in short supply; certainly much of the nostalgia for past halcyon days of civility is ill-founded. Wikipedia has an entire article on hundreds of recorded incidents of violence in legislative assemblies, in dozens of countries, dating all the way from the Roman Senate in 44 BC to Bosnia in 2019. But the Internet seems to bring about its own special kind of incivility, one which exposes nearly everyone to some of the worst vitriol the entire world has to offer. I think it’s worth talking about why this is bad, and perhaps what we might do about it.

For some, the benefits of civility seem so self-evident that they don’t even bear mentioning. For others, the idea of defending civility may come across as tone-deaf or even offensive. I would like to speak to both of those camps today: If you think the benefits of civility are obvious, I assure you, they aren’t to everyone. And if you think that civility is just a tool of the oppressive status quo, I hope I can make you think again.

A lot of the argument against civility seems to be founded in the notion that these issues are important, lives are at stake, and so we shouldn’t waste time and effort being careful how we speak to each other. How dare you concern yourself with the formalities of argumentation when people are dying?

But this is totally wrongheaded. It is precisely because these issues are important that civility is vital. It is precisely because lives are at stake that we must make the right decisions. And shouting and name-calling (let alone actual fistfights or drawn daggers—which have happened!) are not conducive to good decision-making.

If you shout someone down when choosing what restaurant to have dinner at, you have been very rude and people may end up unhappy with their dining experience—but very little of real value has been lost. But if you shout someone down when making national legislation, you may cause the wrong policy to be enacted, and this could lead to the suffering or death of thousands of people.

Think about how court proceedings work. Why are they so rigid and formal, with rules upon rules upon rules? Because the alternative was capricious violence. In the absence of the formal structure of a court system, so-called ‘justice’ was handed out arbitrarily, by whoever was in power, or by mobs of vigilantes. All those seemingly-overcomplicated rules were made in order to resolve various conflicts of interest and hopefully lead toward more fair, consistent results in the justice system. (And don’t get me wrong; they still could stand to be greatly improved!)

Legislatures have complex rules of civility for the same reason: Because the outcome is so important, we need to make sure that the decision process is as reliable as possible. And as flawed as existing legislatures still are, and as silly as it may seem to insist upon addressing ‘the Honorable Representative from the Great State of Vermont’, it’s clearly a better system than simply letting them duke it out with their fists.

A related argument I would like to address is that of ‘tone policing’. If someone objects, not to the content of what you are saying, but to the tone in which you have delivered it, are they arguing in bad faith?

Well, possibly. Certainly, arguments about tone can be used that way. In particular I remember that this was basically the only coherent objection anyone could come up with against the New Atheism movement: “Well, sure, obviously, God isn’t real and religion is ridiculous; but why do you have to be so mean about it!?”

But it’s also quite possible for tone to be itself a problem. If your tone is overly aggressive and you don’t give people a chance to even seriously consider your ideas before you accuse them of being immoral for not agreeing with you—which happens all the time—then your tone really is the problem.

So, how can we tell which is which? I think a good way to reply to what you think might be bad-faith tone policing is this: “What sort of tone do you think would be better?”

I think there are basically three possible responses:

1. They can’t offer one, because there is actually no tone in which they would accept the substance of your argument. In that case, the tone policing really is in bad faith; they don’t want you to be nicer, they want you to shut up. This was clearly the case for New Atheism: As Daniel Dennett aptly remarked, “There’s simply no polite way to tell someone they have dedicated their lives to an illusion.” But sometimes, such things need to be said all the same.

2. They offer an alternative argument you could make, but it isn’t actually expressing your core message. Either they have misunderstood your core message, or they actually disagree with the substance of your argument and should be addressing it on those terms.

3. They offer an alternative way of expressing your core message in a milder, friendlier tone. This means that they are arguing in good faith and actually trying to help you be more persuasive!

I don’t know how common each of these three possibilities is; it could well be that the first one is the most frequent occurrence. That doesn’t change the fact that I have definitely been at the other end of the third one, where I absolutely agree with your core message and want your activism to succeed, but I can see that you’re acting like a jerk and nobody will want to listen to you.

Here, let me give some examples of the type of argument I’m talking about:

1. “Defund the police”: This slogan polls really badly. Probably because most people have genuine concerns about crime and want the police to protect them. Also, as more and more social services (like for mental health and homelessness) get co-opted into policing, this slogan makes it sound like you’re just going to abandon those people. But do we need serious, radical police reform? Absolutely. So how about “Reform the police”, “Put police money back into the community”, or even “Replace the police”?

2. “All Cops Are Bastards”: Speaking of police reform, did I mention we need it? A lot of it? Okay. Now, let me ask you: All cops? Every single one of them? There is not a single one out of the literally millions of police officers on this planet who is a good person? Not one who is fighting to take down police corruption from within? Not a single individual who is trying to fix the system while preserving public safety? Now, clearly, it’s worth pointing out, some cops are bastards—but hey, that even makes a better acronym: SCAB. In fact, it really is largely a few bad apples—the key point here is that you need to finish the aphorism: “A few bad apples spoil the whole barrel.” The number of police who are brutal and corrupt is relatively small, but as long as the other police continue to protect them, the system will be broken. Either you get those bad apples out pronto, or your whole barrel is bad. But demonizing the very people who are in the best position to implement those reforms—good police officers—is not helping.

3. “Be gay, do crime”: I know it’s tongue-in-cheek and ironic. I get that. It’s still a really dumb message. I am absolutely on board with LGBT rights. Even aside from being queer myself, I probably have more queer and trans friends than straight friends at this point. But why in the world would you want to associate us with petty crime? Why are you lumping us in with people who harm others at best out of desperation and at worst out of sheer greed? Even if you are literally an anarchist—which I absolutely am not—you’re really not selling anarchism well if the vision you present of it is a world of unfettered crime! There are dozens of better pro-LGBT slogans out there; pick one. Frankly even “do gay, be crime” is better, because it’s more clearly ironic. (Also, you can take it to mean something like this: Don’t just be gay, do gay—live your fullest gay life. And if you can be crime, that means that the system is fundamentally unjust: You can be criminalized just for who you are. And this is precisely what life is like for millions of LGBT people on this planet.)

A lot of people seem to think that if you aren’t immediately convinced by the most vitriolic, aggressive form of an argument, then you were never going to be convinced anyway and we should just write you off as a potential ally. This isn’t just obviously false; it’s incredibly dangerous.

The whole point of activism is that not everyone already agrees with you. You are trying to change minds. If it were really true that all reasonable, ethical people already agreed with your view, you wouldn’t need to be an activist. The whole point of making political arguments is that people can be reasonable and ethical and still be mistaken about things, and when we work hard to persuade them, we can eventually win them over. In fact, on some things we’ve actually done spectacularly well.

And what about the people who aren’t reasonable and ethical? They surely exist. But fortunately, they aren’t the majority. They don’t rule the whole world. If they did, we’d basically be screwed: If violence is really the only solution, then it’s basically a coin flip whether things get better or worse over time. But in fact, unreasonable people are outnumbered by reasonable people. Most of the things that are wrong with the world are mistakes, errors that can be fixed—not conflicts between irreconcilable factions. Our goal should be to fix those mistakes wherever we can, and that means being patient, compassionate educators—not angry, argumentative bullies.

The Efficient Roulette Hypothesis

Nov 27 JDN 2459911

The efficient market hypothesis is often stated in several different ways, and these are often treated as equivalent. There are at least three very different definitions of it that people seem to use interchangeably:

  1. Market prices are optimal and efficient.
  2. Market prices aggregate and reflect all publicly-available relevant information.
  3. Market prices are difficult or impossible to predict.

The first reading, I will call the efficiency hypothesis, because, well, it is what we would expect a phrase like “efficient market hypothesis” to mean. The ordinary meaning of those words would imply that we are asserting that market prices are in some way optimal or near-optimal, that markets get prices “right” in some sense at least the vast majority of the time.

The second reading I’ll call the information hypothesis; it implies that market prices are an information aggregation mechanism which automatically incorporates all publicly-available information. This already seems quite different from efficiency, but it seems at least tangentially related, since information aggregation could be one useful function that markets serve.

The third reading I will call the unpredictability hypothesis; it says simply that market prices are very difficult to predict, and so you can’t reasonably expect to make money by anticipating market price changes far in advance of everyone else. But as I’ll get to in more detail shortly, that doesn’t have the slightest thing to do with efficiency.

The empirical data in favor of the unpredictability hypothesis is quite overwhelming. It’s exceedingly hard to beat the market, and for most people, most of the time, the smartest way to invest is just to buy a diversified portfolio and let it sit.

The empirical data in favor of the information hypothesis is mixed, but it’s at least plausible; most prices do seem to respond to public announcements of information in ways we would expect, and prediction markets can be surprisingly accurate at forecasting the future.

The empirical data in favor of the efficiency hypothesis, on the other hand, is basically nonexistent. On the one hand this is a difficult hypothesis to test directly, since it isn’t clear what sort of benchmark we should be comparing against—so it risks being not even wrong. But if you consider basically any plausible standard one could try to set for how an efficient market would run, our actual financial markets in no way resemble it. They are erratic, jumping up and down for stupid reasons or no reason at all. They are prone to bubbles, wildly overvaluing worthless assets. They have collapsed governments and ruined millions of lives without cause. They have resulted in the highest-paying people in the world doing jobs that accomplish basically nothing of genuine value. They are, in short, a paradigmatic example of what inefficiency looks like.

Yet, we still have economists who insist that “the efficient market hypothesis” is a proven fact, because the unpredictability hypothesis is clearly correct.

I do not think this is an accident. It’s not a mistake, or an awkwardly-chosen technical term that people are misinterpreting.

This is a motte and bailey doctrine.

Motte-and-bailey was a strategy in medieval warfare. Defending an entire region is very difficult, so instead what was often done was constructing a small, highly defensible fortification—the motte—while accepting that the land surrounding it—the bailey—would not be well-defended. Most of the time, the people stayed on the bailey, where the land was fertile and it was relatively pleasant to live. But should they be attacked, they could retreat to the motte and defend themselves until the danger was defeated.

A motte-and-bailey doctrine is an analogous strategy used in argumentation. You use the same words for two different versions of an idea: The motte is a narrow, defensible core of your idea that you can provide strong evidence for, but it isn’t very strong and may not even be interesting or controversial. The bailey is a broad, expansive version of your idea that is interesting and controversial and leads to lots of significant conclusions, but can’t be well-supported by evidence.

The bailey is the efficiency hypothesis: That market prices are optimal and we are fools to try to intervene or even regulate them because the almighty Invisible Hand is superior to us.

The motte is the unpredictability hypothesis: Market prices are very hard to predict, and most people who try to make money by beating the market fail.

By referring to both of these very different ideas as “the efficient market hypothesis”, economists can act as if they are defending the bailey, and prescribe policies that deregulate financial markets on the grounds that they are so optimal and efficient; but then when pressed for evidence to support their beliefs, they can pivot to the motte, and merely show that markets are unpredictable. As long as people don’t catch on and recognize that these are two very different meanings of “the efficient market hypothesis”, then they can use the evidence for unpredictability to support their goal of deregulation.

Yet when you look closely at this argument, it collapses. Unpredictability is not evidence of efficiency; if anything, it’s the opposite. Since the world doesn’t really change on a minute-by-minute basis, an efficient system should actually be relatively predictable in the short term. If prices reflected the real value of companies, they would change only very gradually, as the fortunes of the company change as a result of real-world events. An earthquake or a discovery of a new mine would change stock prices in relevant industries; but most of the time, they’d be basically flat. The occurrence of minute-by-minute or even second-by-second changes in prices basically proves that we are not tracking any genuine changes in value.

Roulette wheels are extremely unpredictable by design—by law, even—and yet no one would accuse them of being an efficient way of allocating resources. If you bet on roulette wheels and try to beat the house, you will almost surely fail, just as you would if you try to beat the stock market—and dare I say, for much the same reasons?
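
The contrast is easy to demonstrate in a toy simulation (a deliberately crude model of my own, not anyone’s actual pricing theory): a price that tracked a slowly-changing fundamental value would be almost perfectly predictable from minute to minute, while a coin-flip price is maximally unpredictable despite tracking nothing at all.

```python
import random

random.seed(1)
minutes = 1_000

# A price tracking a "fundamental value" that changes only on rare
# real-world events (an earthquake, a new mine):
fundamental = [100.0]
for _ in range(minutes):
    shock = random.choice([-5.0, 5.0]) if random.random() < 0.001 else 0.0
    fundamental.append(fundamental[-1] + shock)

# A roulette-style price: a pure coin-flip random walk, tracking nothing:
roulette = [100.0]
for _ in range(minutes):
    roulette.append(roulette[-1] + random.choice([-1.0, 1.0]))

# Short-horizon predictability: how often does "predict no change" succeed?
flat_f = sum(a == b for a, b in zip(fundamental, fundamental[1:]))
flat_r = sum(a == b for a, b in zip(roulette, roulette[1:]))
print(f"Fundamental-tracking price flat {flat_f / minutes:.1%} of minutes")
print(f"Coin-flip price flat {flat_r / minutes:.1%} of minutes")  # 0.0%
```

The one that behaves like our markets is the roulette wheel, not the efficient benchmark.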

So if we’re going to insist that “efficiency” just means unpredictability, rather than actual, you know, efficiency, then we should all speak of the Efficient Roulette Hypothesis. Anything we can’t predict is now automatically “efficient” and should therefore be left unregulated.

Now is the time for CTCR

Nov 6 JDN 2459890

We live in a terrifying time. As Ukraine gains ground in its war with Russia, thanks in part to the deployment of high-tech weapons from NATO, Vladimir Putin has begun to make thinly-veiled threats of deploying his nuclear arsenal in response. No one can be sure how serious he is about this. Most analysts believe that he was referring to the possible use of small-scale tactical nuclear weapons, not a full-scale apocalyptic assault. Many think he’s just bluffing and wouldn’t resort to any nukes at all. Putin has bluffed in the past, and could be doing so again. Honestly, “this is not a bluff” is exactly the sort of thing you say when you’re bluffing—people who aren’t bluffing have better ways of showing it. (It’s like whenever Trump would say “Trust me”, and you’d know immediately that this was an especially good time not to. Of course, any time is a good time not to trust Trump.)

(By the way, financial news is a really weird thing: I actually found this article discussing how a nuclear strike would be disastrous for the economy. Dude, if there’s a nuclear strike, we’ve got much bigger things to worry about than the economy. It reminds me of this XKCD.)

But if Russia did launch nuclear weapons, and NATO responded with its own, it could trigger a nuclear war that would kill millions in a matter of hours. So we need to be prepared, and think very carefully about the best way to respond.

The current debate seems to be over whether to use economic sanctions, conventional military retaliation, or our own nuclear weapons. Well, we already have economic sanctions, and they aren’t making Russia back down. (Though they probably are hurting its war effort, so I’m all for keeping them in place.) And if we were to use our own nuclear weapons, that would only further undermine the global taboo against nuclear weapons and could quite possibly trigger that catastrophic nuclear war. Right now, NATO seems to be going for a bluff of our own: We’ll threaten an overwhelming nuclear response, but then we obviously won’t actually carry it out because that would be murder-suicide on a global scale.

That leaves conventional military retaliation. What sort of retaliation? Several years ago I came up with a very specific method of conventional retaliation I call credible targeted conventional response (CTCR, which you can pronounce “cut-core”). I believe that now would be an excellent time to carry it out.

The basic principle of CTCR is really quite simple: Don’t try to threaten entire nations. A nation is an abstract entity. Threaten people. Decisions are made by people. The response to Vladimir Putin launching nuclear weapons shouldn’t be to kill millions of innocent people in Russia who probably mean even less to Putin than they do to us. It should be to kill Vladimir Putin.

How exactly to carry this out is a matter for military strategists to decide. There are a variety of weapons at our disposal, ranging from the prosaic (covert agents) to the exotic (precision strikes from high-altitude stealth drones). Indeed, I think we should leave it purposefully vague, so that Putin can’t try to defend himself against some particular mode of attack. The whole gamut of conventional military responses should be considered on the table, from a single missile strike to a full-scale invasion.

But the basic goal is quite simple: Launching a nuclear weapon is one of the worst possible war crimes, and it must be met with an absolute commitment to bring the perpetrator to justice. We should be willing to accept some collateral damage, even a lot of collateral damage; carpet-bombing a city shouldn’t be considered out of the question. (If that sounds extreme, consider that we’ve done it before for much weaker reasons.) The only thing that we should absolutely refuse to do is deploy nuclear weapons ourselves.

The great advantage of this strategy—even aside from being obviously more humane than nuclear retaliation—is that it is more credible. It sounds more like something we’d actually be willing to do. And in fact we likely could even get help from insiders in Russia, because there are surely many people in the Russian government who aren’t so loyal to Putin that they’d want him to get away with mass murder. It might not just be an assassination; it might end up turning into a coup. (Also something we’ve done for far weaker reasons.)

This is how we preserve the taboo on nuclear weapons: We refuse to use them, but otherwise stop at nothing to kill anyone who does use them.

I therefore call upon the world to make this threat:

Launch a nuclear weapon, Vladimir Putin, and we will kill you. Not your armies, not your generals—you. It could be a Tomahawk missile at the Kremlin. It could be a car bomb in your limousine, or a Stinger missile at Aircraft One. It could be a sniper at one of your speeches. Or perhaps we’ll poison your drink with polonium, like you do to your enemies. You won’t know when or where. You will live the rest of your short and miserable life in terror. There will be nowhere for you to hide. We will stop at nothing. We will deploy every available resource around the world, and it will be our top priority. And you will die.

That’s how you threaten a psychopath. And it’s what we must do in order to keep the world safe from nuclear war.

Updating your moral software

Oct 23 JDN 2459876

I’ve noticed an odd tendency among politically active people, particularly social media slacktivists (a term I do not use pejoratively: slacktivism is highly cost-effective). They adopt new ideas very rapidly, trying to stay on the cutting edge of moral and political discourse—and then they denigrate and disparage anyone who fails to do the same as an irredeemable monster.

This can take many forms, such as “if you don’t buy into my specific take on Critical Race Theory, you are a racist”, “if you have any uncertainty about the widespread use of puberty blockers you are a transphobic bigot”, “if you give any credence to the medical consensus on risks of obesity you are fatphobic”, “if you think disabilities should be cured you’re an ableist”, and “if you don’t support legalizing abortion in all circumstances you are a misogynist”.

My intention here is not to evaluate any particular moral belief, though I’ll say the following: I am skeptical of Critical Race Theory, especially the 1619 Project, which seems to me to include substantial distortions of history. I am cautiously supportive of puberty blockers, because the medical data on their risks are ambiguous—while the sociological data on how much happier trans kids are when accepted are totally unambiguous. I am well aware of the medical data saying that the risks of obesity are overblown (but also not negligible, particularly for those who are very obese). Speaking as someone with a disability that causes me frequent, agonizing pain, yes, I want disabilities to be cured, thank you very much; accommodations are nice in the meantime, but the best long-term solution is to not need accommodations. (I’ll admit to some grey areas regarding certain neurodivergences such as autism and ADHD, and I would never want to force cures on people who don’t want them; but paralysis, deafness, blindness, diabetes, depression, and migraine are all absolutely worth finding cures for—the QALY at stake here are massive—and it’s silly to say otherwise.) I think abortion should generally be legal and readily available in the first trimester (which is when most abortions happen anyway), but much more strictly regulated thereafter—but denying it to children and rape victims is a human rights violation.

What I really want to talk about today is not the details of the moral belief, but the attitude toward those who don’t share it. There are genuine racists, transphobes, fatphobes, ableists, and misogynists in the world. There are also structural institutions that can lead to discrimination despite most of the people involved having no particular intention to discriminate. It’s worthwhile to talk about these things, and to try to find ways to fix them. But does calling anyone who disagrees with you a monster accomplish that goal?

This seems particularly bad precisely when your own beliefs are so cutting-edge. If you have a really basic, well-established sort of progressive belief like “hiring based on race should be illegal”, “women should be allowed to work outside the home” or “sodomy should be legal”, then people who disagree with you pretty much are bigots. But when you’re talking about new, controversial ideas, there is bound to be some lag; people who adopted the last generation’s—or even the last year’s—progressive beliefs may not yet be ready to accept the new beliefs, and that doesn’t make them bigots.

Consider this: Were you born believing in your current moral and political beliefs?

I contend that you were not. You may have been born intelligent, open-minded, and empathetic. You may have been born into a progressive, politically-savvy family. But the fact remains that any particular belief you hold about race, or gender, or ethics was something you had to learn. And if you learned it, that means that at some point you didn’t already know it. How would you have felt back then, if, instead of calmly explaining it to you, people called you names for not believing in it?

Now, perhaps it is true that as soon as you heard your current ideas, you immediately adopted them. But that may not be the case—it may have taken you some time to learn or change your mind—and even if it was, it’s still not fair to denigrate anyone who takes a bit longer to come around. There are many reasons why someone might not be willing to change their beliefs immediately, and most of them are not indicative of bigotry or deep moral failings.

It may be helpful to think about this in terms of updating your moral software. You were born with a very minimal moral operating system (emotions such as love and guilt, the capacity for empathy), and over time you have gradually installed more and more sophisticated software on top of that OS. If someone literally wasn’t born with the right OS—we call these people psychopaths—then, yes, you have every right to hate, fear, and denigrate them. But most of the people we’re talking about do have that underlying operating system, they just haven’t updated all their software to the same version as yours. It’s both unfair and counterproductive to treat them as irredeemably defective simply because they haven’t updated to the newest version yet. They have the hardware, they have the operating system; maybe their download is just a little slower than yours.

In fact, if you are very fast to adopt new, trendy moral beliefs, you may in fact be adopting them too quickly—they haven’t been properly vetted by human experience just yet. You can think of this as like a beta version: The newest update has some great new features, but it’s also buggy and unstable. It may need to be fixed before it is really ready for widespread release. If that’s the case, then people aren’t even wrong not to adopt them yet! It isn’t necessarily bad that you have adopted the new beliefs; we need beta testers. But you should be aware of your status as a beta tester and be prepared both to revise your own beliefs if needed, and also to cut other people slack if they disagree with you.

I understand that it can be immensely frustrating to be thoroughly convinced that something is true and important and yet see so many people disagreeing with it. (I am an atheist activist after all, so I absolutely know what that feels like.) I understand that it can be immensely painful to watch innocent people suffer because they have to live in a world where other people have harmful beliefs. But you aren’t changing anyone’s mind or saving anyone from harm by calling people names. Patience, tact, and persuasion will win the long game, and the long game is really all we have.

And if it makes you feel any better, the long game may not be as long as it seems. The arc of history may have tighter curvature than we imagine. We certainly managed a complete flip of the First World consensus on gay marriage in just a single generation. We may be able to achieve similarly fast social changes in other areas too. But we haven’t accomplished the progress we have so far by being uncharitable or aggressive toward those who disagree.

I am emphatically not saying you should stop arguing for your beliefs. We need you to argue for your beliefs. We need you to argue forcefully and passionately. But when doing so, try not to attack the people who don’t yet agree with you—for they are precisely the people we need to listen to you.

On (gay) marriage

Oct 9 JDN 2459862

This post goes live on my first wedding anniversary. Thus, as you read this, I will have been married for one full year.

Honestly, being married hasn’t felt that different to me. This is likely because we’d been dating since 2012 and lived together for several years before actually getting married. It has made some official paperwork more convenient, and I’ve reached the point where I feel naked without my wedding band; but for the most part our lives have not really changed.

And perhaps this is as it should be. Perhaps the best way to really know that you should get married is to already feel as though you are married, and just finally get around to making it official. Perhaps people for whom getting married is a momentous change in their lives (as opposed to simply a formal announcement followed by a celebration) are people who really shouldn’t be getting married just yet.

A lot of things in my life—my health, my career—have not gone very well in this past year. But my marriage has been only a source of stability and happiness. I wouldn’t say we never have conflict, but quite honestly I was expecting a lot more challenges and conflicts from the way I’d heard other people talk about marriage in the past. All of my friends who have kids seem to be going through a lot of struggles as a result of that (which is one of several reasons we keep procrastinating on looking into adoption), but marriage itself does not appear to be any more difficult than friendship—in fact, maybe easier.

I have found myself oddly struck by how unimportant it has been that my marriage is to a same-sex partner. I keep expecting people to care—to seem uncomfortable, to be resistant, or simply to be surprised—and it so rarely happens.

I think this is probably generational: We Millennials grew up at the precise point in history when the First World suddenly decided, all at once, that gay marriage was okay.

Seriously, look at this graph. I made it by combining this article (which uses data from the General Social Survey) with this article from Pew:

Until around 1990—when I was 2 years old—support for same-sex marriage was stable and extremely low: About 10% of Americans supported it (presumably most of them LGBT!), and over 70% opposed it. Then, quite suddenly, attitudes began changing, and by 2019, over 60% of Americans supported it and only 31% opposed it.

That is, within a generation, we went from a country where almost no one supported gay marriage to a country where same-sex marriage is so popular that any major candidate who opposed it would almost certainly lose a general election. (They might be able to survive a Republican primary, as Republican support for same-sex marriage is only about 44%—about where it was among Democrats in the early 2000s.)

This is a staggering rate of social change. If development economics is the study of what happened in South Korea from 1950-2000, I think political science should be the study of what happened to attitudes on same-sex marriage in the US from 1990-2020.

And of course it isn’t just the US. Similar patterns can be found across Western Europe, with astonishingly rapid shifts from near-universal opposition to near-universal support within a generation.

I don’t think I have been able to fully emotionally internalize this shift. I grew up in a world where homophobia was mainstream, where only the most radical left-wing candidates were serious about supporting equal rights and representation for LGBT people. And suddenly I find myself in a world where we are actually accepted and respected as equals, and I keep waiting for the other shoe to drop. Aren’t you the same people who told me as a teenager that I was a sexual deviant who deserved to burn in Hell? But now you’re attending my wedding? And offering me joint life insurance policies? My own extended family members treat me differently now than they did when I was a teenager, and I don’t quite know how to trust that the new way is the true way and not some kind of facade that could rapidly disappear.

I think this sort of generational trauma may never fully heal, in which case it will be the generation after us—the Zoomers, I believe we’re calling them now—who will actually live in this new world we created, while the rest of us forever struggle to accept that things are not as we remember them. Once bitten, we remain forever twice shy, lest attitudes regress as suddenly as they advanced.

Then again, it seems that Zoomers may be turning against the institution of marriage in general. As the meme says: “Boomers: No gay marriage. Millennials: Yes gay marriage. Gen Z: Yes gay, no marriage.” Maybe that’s for the best; maybe the future of humanity is for personal relationships to be considered no business of the government at all. But for now at least, equal marriage is clearly much better than unequal marriage, and the First World seems to have figured that out blazing fast.

And of course the rest of the world still hasn’t caught up. While trends are generally in a positive direction, there are large swaths of the world where even very basic rights for LGBT people are opposed by most of the population. As usual, #ScandinaviaIsBetter, with over 90% support for LGBT rights; and, as usual, Sub-Saharan Africa is awful, with support in Kenya, Uganda and Nigeria not even hitting 20%.

The injustice of talent

Sep 4 JDN 2459827

Consider the following two principles of distributive justice.

A: People deserve to be rewarded in proportion to what they accomplish.

B: People deserve to be rewarded in proportion to the effort they put in.

Both principles sound pretty reasonable, don’t they? They both seem like sensible notions of fairness, and I think most people would broadly agree with both of them.

This is a problem, because they are mutually contradictory. We cannot possibly follow them both.

For, as much as our society would like to pretend otherwise—and I think this contradiction is precisely why our society would like to pretend otherwise—what you accomplish is not simply a function of the effort you put in.

Don’t get me wrong; it is partly a function of the effort you put in. Hard work does contribute to success. But it is neither sufficient, nor strictly necessary.

Rather, success is a function of three factors: Effort, Environment, and Talent.

Effort is the work you yourself put in, and basically everyone agrees you deserve to be rewarded for that.

Environment includes all the outside factors that affect you—including both natural and social environment. Inheritance, illness, and just plain luck are all in here, and there is general, if not universal, agreement that society should make at least some efforts to minimize inequality created by such causes.

And then, there is talent. Talent includes whatever capacities you innately have. It could be strictly genetic, or it could be acquired in childhood or even in the womb. But by the time you are an adult and responsible for your own life, these factors are largely fixed and immutable. This includes things like intelligence, disability, even height. The trillion-dollar question is: How much should we reward talent?

For talent clearly does matter. I will never swim like Michael Phelps, run like Usain Bolt, or shoot hoops like Steph Curry. It doesn’t matter how much effort I put in, how many hours I spend training—I will never reach their level of capability. Never. It’s impossible. I could certainly improve from my current condition; perhaps it would even be good for me to do so. But there are certain hard fundamental constraints imposed by biology that give them more potential in these skills than I will ever have.

Conversely, there are likely things I can do that they will never be able to do, though this is less obvious. Could Michael Phelps ever be as good a programmer or as skilled a mathematician as I am? He certainly isn’t now. Maybe, with enough time, enough training, he could be; I honestly don’t know. But I can tell you this: I’m sure it would be harder for him than it was for me. He couldn’t breeze through college-level courses in differential equations and quantum mechanics the way I did. There is something I have that he doesn’t, and I’m pretty sure I was born with it. Call it spatial working memory, or mathematical intuition, or just plain IQ. Whatever it is, math comes easy to me in not so different a way from how swimming comes easy to Michael Phelps. I have a talent for math; he has a talent for swimming.

Moreover, these are not small differences. It’s not like we all come with basically the same capabilities with a little bit of variation that can be easily washed out by effort. We’d like to believe that—we have all sorts of cultural tropes that try to inculcate that belief in us—but it’s obviously not true. The vast majority of quantum physicists are people born with high IQ. The vast majority of pro athletes are people born with physical prowess. The vast majority of movie stars are people born with pretty faces. For many types of jobs, the determining factor seems to be talent.

This isn’t too surprising, actually—even if effort matters a lot, we would still expect talent to show up as the determining factor much of the time.

Let’s go back to that contest function model I used to analyze the job market a while back (the one that suggests we spend way too much time and money in the hiring process). This time, let’s focus on the perspective of the employees themselves.

Each employee has a level of talent, h. Employee X has talent h_x and exerts effort x, producing output whose quality is the product of the two: h_x x. Similarly, employee Z has talent h_z and exerts effort z, producing output of quality h_z z.

Then, there’s a certain amount of luck that factors in. The most successful output isn’t necessarily the best; sometimes the output that should have won doesn’t, because some random circumstance intervened. But we’ll say that the probability an individual succeeds is proportional to the quality of their output.

So the probability that employee X succeeds is: h_x x / (h_x x + h_z z).

I’ll skip the algebra this time (if you’re interested you can look back at that previous post), but to make a long story short, in Nash equilibrium the two employees will exert exactly the same amount of effort.
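For the curious, here’s a compressed sketch of that algebra. I’m assuming a prize normalized to V and a linear effort cost, which is one standard way to close this kind of contest model (the earlier post may have set it up slightly differently). Employee X chooses x to maximize

\[ \pi_X = V\,\frac{h_x x}{h_x x + h_z z} - x, \]

and symmetrically for Z. The first-order conditions are

\[ \frac{V h_x h_z z}{(h_x x + h_z z)^2} = 1 \quad \text{and} \quad \frac{V h_x h_z x}{(h_x x + h_z z)^2} = 1, \]

which can only both hold if x = z; substituting back in gives x = z = V h_x h_z / (h_x + h_z)^2.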

Then, which one succeeds will be entirely determined by talent: because x = z, the probability that X succeeds is simply h_x / (h_x + h_z).

It’s not that effort doesn’t matter—it absolutely does matter, and in fact in this model, with zero effort you get zero output (which isn’t necessarily the case in real life). It’s that in equilibrium, everyone is exerting the same amount of effort; so what determines who wins is innate talent. And I gotta say, that sounds an awful lot like how professional sports works. It’s less clear whether it applies to quantum physicists.

But maybe we don’t really exert the same amount of effort! This is true. Indeed, it seems that effort is actually easier for people with higher talent—the same hour spent running on a track is easier for Usain Bolt than for me, and the same hour studying calculus is easier for me than it would be for Usain Bolt. So in the end our equilibrium efforts aren’t the same—but rather than compensating, this effect only serves to exaggerate the difference in innate talent between us.

It’s simple enough to generalize the model to allow for such a thing. For instance, I could say that the cost of exerting a unit of effort is inversely proportional to your talent; then, instead of h_x / (h_x + h_z), the equilibrium probability of X succeeding becomes h_x^2 / (h_x^2 + h_z^2). The equilibrium efforts are also no longer equal, with x > z if h_x > h_z.
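If you’d rather check this numerically than take my algebra on faith, here’s a minimal sketch in Python. The talent values and the prize are made up purely for illustration, and best-response iteration is just one simple way to find the equilibrium; none of this comes from the original post.

    from scipy.optimize import minimize_scalar

    # Illustrative numbers only: prize normalized to V = 1, X twice as talented as Z.
    V = 1.0
    hx, hz = 2.0, 1.0

    def best_response(h_self, h_other, e_other, unit_cost):
        """Effort maximizing V*h*e / (h*e + h_other*e_other) - unit_cost*e."""
        def neg_payoff(e):
            return -(V * h_self * e / (h_self * e + h_other * e_other)
                     - unit_cost * e)
        return minimize_scalar(neg_payoff, bounds=(1e-9, V), method="bounded").x

    def equilibrium(cost_x, cost_z, x=0.1, z=0.1, rounds=200):
        """Iterate best responses until efforts settle at the Nash equilibrium."""
        for _ in range(rounds):
            x = best_response(hx, hz, z, cost_x)
            z = best_response(hz, hx, x, cost_z)
        return x, z

    # Baseline model: a unit of effort costs the same for everyone.
    x, z = equilibrium(1.0, 1.0)
    print(round(x, 3), round(z, 3))              # 0.222 0.222: equal effort
    print(round(hx * x / (hx * x + hz * z), 3))  # 0.667 = hx / (hx + hz)

    # Generalized model: a unit of effort costs 1/h, cheaper for the talented.
    x, z = equilibrium(1.0 / hx, 1.0 / hz)
    print(round(x / z, 2))                       # 2.0 = hx / hz: X works more
    print(round(hx * x / (hx * x + hz * z), 3))  # 0.8 = hx^2 / (hx^2 + hz^2)

Both closed-form results fall out: equal costs give equal efforts, and talent-dependent costs widen X’s winning odds from h_x / (h_x + h_z) to h_x^2 / (h_x^2 + h_z^2).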

Once we acknowledge that talent is genuinely important, we face an ethical problem. Do we want to reward people for their accomplishment (A), or for their effort (B)? There are good cases to be made for each.

Rewarding for accomplishment, which we might call meritocracy, will tend to, well, maximize accomplishment. We’ll get the best basketball players playing basketball, the best surgeons doing surgery. Moreover, accomplishment is often quite easy to measure, even when effort isn’t.

Rewarding for effort, which we might call egalitarianism, will give people the most control over their lives, and might well feel the most fair. Those who succeed will be precisely those who work hard, even if they do things they are objectively bad at. Even people who are born with very little talent will still be able to make a living by working hard. And it will ensure that people do work hard, which meritocracy can actually fail at: If you are extremely talented, you don’t really need to work hard because you just automatically succeed.

Capitalism, as an economic system, is very good at rewarding accomplishment. I think part of what makes socialism appealing to so many people is that it tries to reward effort instead. (Is it very good at that? Not so clear.)

The more extreme differences are actually in terms of disability. There’s a certain baseline level of activities that most people are capable of, which we think of as “normal”: most people can talk; most people can run, if not necessarily very fast; most people can throw a ball, if not pitch a proper curveball. But some people can’t throw. Some people can’t run. Some people can’t even talk. It’s not that they are bad at it; it’s that they are literally not capable of it. No amount of effort could have made Stephen Hawking into a baseball player—not even a bad one.

It’s these cases when I think egalitarianism becomes most appealing: It just seems deeply unfair that people with severe disabilities should have to suffer in poverty. Even if they really can’t do much productive work on their own, it just seems wrong not to help them, at least enough that they can get by. But capitalism by itself absolutely would not do that—if you aren’t making a profit for the company, they’re not going to keep you employed. So we need some kind of social safety net to help such people. And it turns out that such people are quite numerous, and our current system is really not adequate to help them.

But meritocracy has its pull as well. Especially when the job is really important—like surgery, not so much basketball—we really want the highest quality work. It’s not so important whether the neurosurgeon who removes your tumor worked really hard at it or found it a breeze; what we care about is getting that tumor out.

Where does this leave us?

I think we have no choice but to compromise, on both principles. We will reward both effort and accomplishment, to greater or lesser degree—perhaps varying based on circumstances. We will never be able to entirely reward accomplishment or entirely reward effort.

This is more or less what we already do in practice, so why worry about it? Well, because we don’t like to admit that it’s what we do in practice, and a lot of problems seem to stem from that.

We have people acting like billionaires are such brilliant, hard-working people just because they’re rich—because our society rewards effort, right? So they couldn’t be so successful if they didn’t work so hard, right? Right?

Conversely, we have people who denigrate the poor as lazy and stupid just because they are poor. Because it couldn’t possibly be that their circumstances were worse than yours? Or hey, even if they are genuinely less talented than you—do less talented people deserve to be homeless and starving?

We tell kids from a young age, “You can be whatever you want to be”, and “Work hard and you’ll succeed”; and these things simply aren’t true. There are limitations on what you can achieve through effort—limitations imposed by your environment, and limitations imposed by your innate talents.

I’m not saying we should crush children’s dreams; I’m saying we should help them to build more realistic dreams, dreams that can actually be achieved in the real world. And then, when they grow up, they either will actually succeed, or when they don’t, at least they won’t hate themselves for failing to live up to what you told them they’d be able to do.

If you were wondering why Millennials are so depressed, that’s clearly a big part of it: We were told we could be and do whatever we wanted if we worked hard enough, and then that didn’t happen; and we had so internalized what we were told that we thought it had to be our fault that we failed. We didn’t try hard enough. We weren’t good enough. I have spent years feeling this way—on some level I do still feel this way—and it was not because adults tried to crush my dreams when I was a child, but on the contrary because they didn’t do anything to temper them. They never told me that life is hard, and people fail, and that I would probably fail at my most ambitious goals—and it wouldn’t be my fault, and it would still turn out okay.

That’s really it, I think: They never told me that it’s okay not to be wildly successful. They never told me that I’d still be good enough even if I never had any great world-class accomplishments. Instead, they kept feeding me the lie that I would have great world-class accomplishments; and then, when I didn’t, I felt like a failure and I hated myself. I think my own experience may be particularly extreme in this regard, but I know a lot of other people in my generation who had similar experiences, especially those who were also considered “gifted” as children. And we are all now suffering from depression, anxiety, and Impostor Syndrome.

All because nobody wanted to admit that talent, effort, and success are not the same thing.

How to fix economics publishing

Aug 7 JDN 2459806

The current system of academic publishing in economics is absolutely horrible. It seems practically designed to undermine the mental health of junior faculty.

1. Tenure decisions, and even most hiring decisions, are almost entirely based upon publication in five (5) specific journals.

2. One of those “top five” journals is owned by Elsevier, a corrupt monopoly that has no basis for its legitimacy yet somehow controls nearly one-fifth of all scientific publishing.

3. Acceptance rates in all of these journals are between 5% and 10%—greatly decreased from what they were a generation or two ago. Given a typical career span, this means the senior faculty now evaluating you on whether you published in these journals had roughly three times your chance of getting their own papers accepted there.

4. Submissions are only single-blinded, so while you have no idea who is reading your papers, they know exactly who you are and can base their decision on whether you are well-known in the profession—or simply whether they like you.

5. Simultaneous submissions are forbidden, so when submitting to journals you must go one at a time, waiting to hear back from one before trying the next.

6. Peer reviewers are typically unpaid and generally uninterested, and so procrastinate as long as possible on doing their reviews.

7. As a result, review times for a paper are often measured in months, for every single cycle.

So, a highly successful paper goes like this: You submit it to a top journal, wait three months, it gets rejected. You submit it to another one, wait another four months, it gets rejected. You submit it to a third one, wait another two months, and you are told to revise and resubmit. You revise and resubmit, wait another three months, and then finally get accepted.

You have now spent an entire year getting one paper published. And this was a success.

Now consider a paper that doesn’t make it into a top journal. You submit, wait three months, rejected; you submit again, wait four months, rejected; you submit again, wait two months, rejected. You submit again, wait another five months, rejected; you submit to the fifth and final top-five, wait another four months, and get rejected again.

Now, after a year and a half, you can turn to other journals. You submit to a sixth journal, wait three months, rejected. You submit to a seventh journal, wait four months, get told to revise and resubmit. You revise and resubmit, wait another two months, and finally—finally, after two years—actually get accepted, but not to a top-five journal. So it may not even help you get tenure, unless maybe a lot of people cite it or something.
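To put a rough number on just how punishing this one-at-a-time system is, here’s a toy Monte Carlo simulation. The 8% per-journal acceptance rate and the two-to-five-month review times are illustrative guesses in line with the ranges above, not actual data:

    import random

    rng = random.Random(42)  # fixed seed so the result is reproducible

    def months_until_accepted(p_accept=0.08):
        """One paper's sequential submissions: review, rejection, resubmit."""
        months = 0.0
        while True:
            months += rng.uniform(2, 5)   # one review cycle, in months
            if rng.random() < p_accept:
                return months

    trials = [months_until_accepted() for _ in range(10_000)]
    print(sum(trials) / len(trials) / 12)  # roughly 3.6 years, on average

Under those assumptions, the average paper spends about three and a half years sitting in referees’ inboxes, and that’s before counting any of the time spent writing or revising it.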

And what if that seventh journal had rejected you too, and then an eighth, and a ninth? At what point do you simply give up on that paper and try to move on with your life?

That’s a trick question: Because what really happens, at least to me, is I can’t move on with my life. I get so disheartened from all the rejections of that paper that I can’t bear to look at it anymore, much less go through the work of submitting it to yet another journal that will no doubt reject it again. But worse than that, I become so depressed about my academic work in general that I become unable to move on to any other research either. And maybe it’s me, but it isn’t just me: 28% of academic faculty suffer from severe depression, and 38% from severe anxiety. And that’s across all faculty—if you look just at junior faculty it’s even worse: 43% of junior academic faculty suffer from severe depression. When a problem is that prevalent, at some point we have to look at the system that’s making us this way.

I can blame the challenges of moving across the Atlantic during a pandemic, and the fact that my chronic migraines have been the most frequent and severe they have been in years, but the fact remains: I have accomplished basically nothing towards the goal of producing publishable research in the past year. I have two years left at this job; if I started right now, I might be able to get something published before my contract is done. That is, assuming the project went smoothly, that I could start submitting it as soon as it was done, and that it didn’t get rejected as many times as the last one.

I just can’t find the motivation to do it. When the pain is so immediate and so intense, and the rewards are so distant and so uncertain, I just can’t bring myself to do the work. I had hoped that talking about this with my colleagues would help me cope, but it hasn’t; in fact it only seems to make me feel worse, because so few of them seem to understand how I feel. Maybe I’m talking to the wrong people; maybe the ones who understand are themselves suffering too much to reach out to help me. I don’t know.

But it doesn’t have to be this way. Here are some simple changes that could make the entire process of academic publishing in economics go better:

1. Boycott Elsevier and all for-profit scientific journal publishers. Stop reading their journals. Stop submitting to their journals. Stop basing tenure decisions on their journals. Act as though they don’t exist, because they shouldn’t—and then hopefully soon they won’t.

2. Peer reviewers should be paid for their time, and in return required to respond promptly—no more than a few weeks. A lack of response should be considered a positive vote on that paper.

3. Allow simultaneous submissions; if multiple journals accept, let the author choose between them. This is already how it works in fiction publishing, which you’ll note has not collapsed.

4. Increase acceptance rates. You are not actually limited by paper constraints anymore; everything is digital now. Most of the work—even in the publishing process—already has to be done just to go through peer review, so you may as well publish it. Moreover, most papers that are submitted are actually worthy of publishing, and this whole process is really just an idiotic status hierarchy. If the prestige of your journal decreases because you accept more papers, we are measuring prestige wrong. Papers should be accepted something like 50% of the time, not 5-10%.

5. Double-blind all submissions, and insist on ethical standards that maintain that blinding. No reviewer should know whether they are reading the work of a grad student or a Nobel laureate. Reputation should mean nothing; scientific rigor should mean everything.

And, most radical of all, what I really need in my life right now:

6. Faculty should not have to submit their own papers. Each university department should have administrative staff whose job it is to receive papers from their faculty, format them appropriately, and submit them to journals. They should deal with all rejections, and only report to the faculty member when they have received an acceptance or a request to revise and resubmit. Faculty should simply do the research, write the papers, and then fire and forget them. We have highly specialized skills, and our valuable time is being wasted on the clerical tasks of formatting and submitting papers, which many other people could do as well or better. Worse, we are uniquely vulnerable to the emotional impact of the rejection—seeing someone else’s paper rejected is an entirely different feeling from having your own rejected.

Do all that, and I think I could be happy to work in academia. As it is, I am seriously considering leaving and never coming back.