How to detect discrimination, empirically

Aug 25 JDN 2460548

For concreteness, I’ll use men and women as my example, though the same principles would apply for race, sexual orientation, and so on. Suppose we find that there are more men than women in a given profession; does this mean that women are being discriminated against?

Not necessarily. Maybe women are less interested in that kind of work, or innately less qualified. Is there a way we can determine empirically that it really is discrimination?

It turns out that there is. All we need is a reliable measure of performance in that profession. Then, we compare performance between men and women, and that comparison can tell us whether discrimination is happening or not. The key insight is that workers in a job are not a random sample; they are a selected sample. The results of that selection can tell us whether discrimination is happening.

Here’s a simple model to show how this works.

Suppose there are five different skill levels in the job, from 1 to 5 where 5 is the most skilled. And suppose there are 5 women and 5 men in the population.

1. Baseline

The baseline case to consider is when innate talents are equal and there is no discrimination. In that case, we should expect men and women to be equally represented in the profession.

For the simplest case, let’s say that there is one person at each skill level:

Men    Women
1      1
2      2
3      3
4      4
5      5

Now suppose that everyone above a certain skill threshold gets hired. Since we’re assuming no discrimination, the threshold should be the same for men and women. Let’s say it’s 3; then these are the people who get hired:

Hired Men    Hired Women
3            3
4            4
5            5

The result is that not only are there the same number of men and women in the job, their skill levels are also the same. There are just as many highly-competent men as highly-competent women.
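This selection step is easy to sketch in a few lines of Python. The skill lists and the threshold of 3 are just the toy numbers from the table above, not data:

```python
# Toy model: everyone at or above the skill threshold gets hired,
# and the threshold is the same for both groups (no discrimination).
men = [1, 2, 3, 4, 5]
women = [1, 2, 3, 4, 5]
threshold = 3

hired_men = [skill for skill in men if skill >= threshold]
hired_women = [skill for skill in women if skill >= threshold]

print(hired_men)    # [3, 4, 5]
print(hired_women)  # [3, 4, 5]
```

Equal populations plus an equal threshold yield identical hired groups; that's the baseline the later cases deviate from.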

2. Innate Differences

Now, suppose there is some innate difference in talent between men and women for this job. For most jobs this seems suspicious, but consider pro sports: Men really are better at basketball, in general, than women, and this is pretty clearly genetic. So it’s not absurd to suppose that for at least some jobs, there might be some innate differences. What would that look like?


Again suppose a population of 5 men and 5 women, but now the women are a bit less qualified: There are two 1s and no 5s among the women.

Men    Women
1      1
2      1
3      2
4      3
5      4

Then, this is the group that will get hired:

Hired Men    Hired Women
3            3
4            4
5

The result will be fewer women who are on average less qualified. The most highly-qualified individuals at that job will be almost entirely men. (In this simple model, entirely men; but you can easily extend it so that there are a few top-qualified women.)

This is in fact what we see for a lot of pro sports; in a head-to-head match, even the best WNBA teams would generally lose against most NBA teams. That’s what it looks like when there are real innate differences.

But it’s hard to find clear examples outside of sports. The genuine, large differences in size and physical strength between the sexes just don’t seem to be associated with similar differences in mental capabilities or even personality. You can find some subtler effects, but nothing very large—and certainly nothing large enough to explain the huge gender gaps in various industries.

3. Discrimination

What does it look like when there is discrimination?

Now assume that men and women are equally qualified, but it’s harder for women to get hired, because of discrimination. The key insight here is that this amounts to women facing a higher threshold. Where men only need to have level 3 competence to get hired, women need level 4.

So if the population looks like this:

Men    Women
1      1
2      2
3      3
4      4
5      5

The hired employees will look like this:

Hired Men    Hired Women
3
4            4
5            5

Once again we’ll have fewer women in the profession, but they will be on average more qualified. The top-performing individuals will be as likely to be women as they are to be men, while the lowest-performing individuals will be almost entirely men.

This is the kind of pattern we observe when there is discrimination. Do we see it in real life?

Yes, we see it all the time.

Corporations with women CEOs are more profitable.

Women doctors have better patient outcomes.

Startups led by women are more likely to succeed.

This shows that there is some discrimination happening, somewhere in the process. Does it mean that individual firms are actively discriminating in their hiring process? No, it doesn’t. The discrimination could be happening somewhere else; maybe it happens during education, or once women get hired. Maybe it’s a product of sexism in society as a whole, that isn’t directly under the control of employers. But it must be in there somewhere. If women are both rarer and more competent, there must be some discrimination going on.

What if there is also innate difference? We can detect that too!

4. Both

Suppose now that men are on average more talented, but there is also discrimination against women. Then the population might look like this:

Men    Women
1      1
2      1
3      2
4      3
5      4

And the hired employees might look like this:

Hired Men    Hired Women
3
4
5            4

In such a scenario, you’ll see a large gender imbalance, but there may not be a clear difference in competence. The tiny fraction of women who get hired will perform about as well as the men, on average.

Of course, this assumes that the two effects are of equal strength. In reality, we might see a whole spectrum of possibilities, from very strong discrimination with no innate differences, all the way to very large innate differences with no discrimination. The outcomes will then be similarly along a spectrum: When discrimination is much larger than innate difference, women will be rare but more competent. When innate difference is much larger than discrimination, women will be rare and less competent. And when there is a mix of both, women will be rare but won’t show as much difference in competence.

Moreover, if you look closer at the distribution of performance, you can still detect the two effects independently. If the lowest-performing workers are almost all men, that’s evidence of discrimination against women; while if the highest-performing workers are almost all men, that’s evidence of innate difference. And if you look at the table above, that’s exactly what we see: Both the 3 and the 5 are men, indicating the presence of both effects.
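All four cases can be reproduced side by side with the same toy populations and thresholds as the tables in this post; the averages this Python sketch prints are what the "compare performance" test looks at (the numbers are purely illustrative, not empirical):

```python
def hire(population, threshold):
    # Everyone at or above the threshold gets hired.
    return [skill for skill in population if skill >= threshold]

def average(group):
    return sum(group) / len(group)

men = [1, 2, 3, 4, 5]
women_equal = [1, 2, 3, 4, 5]   # same innate talent as the men
women_lower = [1, 1, 2, 3, 4]   # innate difference: two 1s, no 5

cases = {
    "1. baseline":       (hire(men, 3), hire(women_equal, 3)),
    "2. innate diff":    (hire(men, 3), hire(women_lower, 3)),
    "3. discrimination": (hire(men, 3), hire(women_equal, 4)),
    "4. both":           (hire(men, 3), hire(women_lower, 4)),
}

for name, (hired_men, hired_women) in cases.items():
    print(f"{name}: men avg {average(hired_men):.1f}, "
          f"women avg {average(hired_women):.1f}, "
          f"women hired: {len(hired_women)}")
```

In case 3 the hired women are rarer but average higher than the men (4.5 vs. 4.0); in case 4 they are rarer still but average the same, exactly the patterns described above.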

What does affirmative action do?

Effectively, affirmative action lowers the threshold for hiring women (or minorities) in order to equalize representation in the workplace. In the presence of discrimination raising that threshold, this is exactly what we need! It can take us from case 3 (discrimination) to case 1 (equality), or from case 4 (both discrimination and innate difference) to case 2 (innate difference only).

Of course, it’s possible for us to overshoot, using more affirmative action than we should have. If we achieve better representation of women, but the lowest performers at the job are women, then we have overshot, effectively now discriminating against men. Fortunately, there is very little evidence of this in practice. In general, even with affirmative action programs in place, we tend to find that the lowest performers are still men—so there is still discrimination against women that we’ve failed to compensate for.

What if we can’t measure competence?

Of course, it’s possible that we don’t have good measures of competence in a given industry. (One must wonder how firms decide who to hire, but frankly I’m prepared to believe they’re just really bad at it.) Then we can’t observe discrimination statistically in this way. What do we do then?

Well, there is at least one avenue left for us to detect discrimination: We can do direct experiments comparing resumes with male names versus female names. These sorts of experiments typically don’t find very much, though—at least for women. For different races, they absolutely do find strong results. They also find evidence of discrimination against people with disabilities, older people, and people who are physically unattractive. There’s also evidence of intersectional effects, where women of particular ethnic groups get discriminated against even when women in general don’t.

But this will only pick up discrimination if it occurs during the hiring process. The advantage of having a competence measure is that it can detect discrimination that occurs anywhere—even outside employer control. Of course, if we don’t know where the discrimination is happening, that makes it very hard to fix; so the two approaches are complementary.

And there is room for new methods too; right now we don’t have a good way to detect discrimination in promotion decisions, for example. Many of us suspect that it occurs, but unless you have a good measure of competence, you can’t really distinguish promotion discrimination from innate differences in talent. We don’t have a good method for testing that in a direct experiment, either, because unlike hiring, we can’t just use fake resumes with masculine or feminine names on them.

What’s the deal with Trump supporters?

Jul 28 JDN 2460520


I have never understood how this Presidential election is a close one. On the one hand, we have a decent President with many redeeming qualities who has done a great job, but is getting old; on the other hand, we have a narcissistic, authoritarian con man (who is almost as old). It should be obvious who the right choice is here.

And yet, half the country disagrees. I really don’t get it. Other Republican candidates actually have had redeeming qualities, and I could understand why someone might support them; but Trump has basically none.

I have even asked some of my relatives who support Trump why they do, what they see in him, and I could never get a straight answer.

I now think I know why: They don’t want to admit the true answer.

Political scientists have been studying this, and they’ve come to some very unsettling conclusions. The two strongest predictors of support for Trump are authoritarianism and hatred of minorities.

In other words, people support Trump not in spite of what makes him awful, but because of it. They are happy to finally have a politician publicly supporting their hateful, bigoted views. And since they believe in authoritarian hierarchy, his desire to become a dictator doesn’t worry them; they may even welcome it, believing that he’ll use that power to hurt the right people. They like him because he promises retribution against social change. He also uses a lot of fear-mongering.

This isn’t the conclusion I was hoping for. I wanted there to be something sympathetic, some alternative view of the world that could be reasoned with. But when bigotry and authoritarianism are the main predictors of a candidate’s support, it seems that reasonableness has pretty much failed.

I wanted there to be something I had missed, something I wasn’t seeing about Trump—or about Biden—that would explain how good, reasonable people could support the former over the latter. But the data just doesn’t seem to show anything. There is an urban/rural divide; there is a generational divide; and there is an educational divide. Maybe there’s something there; certainly I can sympathize with old people in rural areas with low education. But by far the best way to tell whether someone supports Trump is to find out whether they are racist, sexist, xenophobic, and authoritarian. How am I supposed to sympathize with that? Where can we find common ground here?

There seems to be something deep and primal that motivates Trump supporters: Fear of change, tribal identity, or simply anger. It doesn’t seem to be rational. Ask them what policies Trump has done or plans to do that they like, and they often can’t name any. But they are certain in their hearts that he will “Make America Great Again”.

What do we do about this? We can win this election—maybe—but that’s only the beginning. Somehow we need to root out the bigotry that drives support for Trump and his ilk, and I really don’t know how to do that.

I don’t know what else to say here. This all feels so bleak. This election has become a battle for the soul of America: Are we a pluralistic democracy that celebrates diversity, or are we a nation of racist, sexist, xenophobic authoritarians?

Did we push too hard, too fast for social change? Did we leave too many people behind, people who felt coerced into compliance rather than persuaded of our moral correctness? Is this a temporary backlash that we can bear as the arc of the moral universe bends toward justice? Or is this the beginning of a slow and agonizing march toward neo-fascism?

I have never feared Trump himself nearly so much as I fear a nation that could elect him—especially one that could re-elect him.

People need permission to disagree

Jul 21 JDN 2460513

Obviously, most of the blame for the rise of far-right parties in various countries has to go to the right-wing people who either joined up or failed to stop their allies from joining up. I would hope that goes without saying, but it probably doesn’t, so there, I said it; it’s mostly their fault.

But there is still some fault to go around, and I think we on the left need to do some soul-searching about this.

There is a very common mode of argumentation that is popular on the left, which I think is very dangerous:

“What? You don’t already agree with [policy idea]? You bigot!”

Often it’s not quite that blatant, but the implication is still there: If you don’t agree with this policy involving race, you’re a racist. If you don’t agree with this policy involving transgender rights, you’re a transphobe. If you don’t agree with this policy involving women’s rights, you are a sexist. And so on.

I understand why people think this way. But I also think it has pushed some people over to the right who might otherwise have been possible to persuade to our own side.

And here comes the comeback, I know:

“If being mistreated turns you into a Nazi, you were never a good ally to begin with.”

Well, first of all, not everyone who was pushed away from the left became a full-blown Nazi. Some of them just stopped listening to us, and started listening to whatever the right wing was saying instead.

Second, life is more complicated than that. Most people don’t really have well-defined political views, believe it or not. Most people form their political views on the spot, based on who is around them and who they hear talking the loudest. Most swing voters are low-information voters who don’t really follow politics and make up their minds for frankly stupid reasons.

And with this in mind, the mere fact that we are pushing people away with our rhetoric means that we are shifting what those low-information voters hear—and thereby giving away elections to the right.

When people disagree about moral questions, isn’t someone morally wrong?

Yes, by construction. (At least one must be; possibly everyone is.)

But we don’t always know who is wrong—and generally speaking, everyone goes into a conversation assuming that they themselves are right. But the ultimate goal of moral conversation is to get more people to be right and fewer people to be wrong, yes? If we treat it as morally wrong to disagree in the first place, we are shutting down any hope of reaching that goal.

Not everyone knows everything about everything.

That may seem perfectly obvious to you, but when you leap from “disagree with [policy]” to “bigot”, you are basically assuming the opposite. You are assuming that whoever you are speaking with knows everything you know about all the relevant considerations of politics and social science, and the only possible reason they could come to a different conclusion is that they have a fundamentally different preference, namely, they are a bigot.

Maybe you are indeed such an enlightened individual that you never get any moral questions wrong. (Maybe.) But can you really expect everyone else to be like that? Isn’t it unfair to ask that of absolutely everyone?

This is why:

People need permission to disagree.

In order for people to learn and grow in their understanding, they need permission to not know all the answers right away. In order for people to change their beliefs, they need permission to believe something that might turn out to be wrong later.


This is exactly the permission we are denying when we accuse anyone we disagree with of being a bigot. Instead of continuing the conversation in the hopes of persuading people to our point of view, we are shutting the conversation down with vitriol and name-calling.

Try to consider this from the opposite perspective.

You enter a conversation about an important political or moral issue. You hear their view expressed, and then you express your own. Immediately, they start accusing you of being morally defective, a racist, sexist, homophobic, and/or transphobic bigot. How likely are you to continue that conversation? How likely are you to go on listening to this person? How likely are you to change your mind about the original political issue?

In fact, might you even be less likely to change your mind than you would have been if you’d just heard their view expressed and then ended the conversation? I think so. I think just respectfully expressing an alternative view pushes people a little—not a lot, but a little—in favor of whatever view you have expressed. It tells them that someone else who is reasonable and intelligent believes X, so maybe X isn’t so unreasonable.

Conversely, when someone resorts to name-calling, what does that do to your evaluation of their views? They suddenly seem unreasonable. You begin to doubt everything they’re saying. You may even try to revise your view further away out of spite (though this is clearly not rational—reversed stupidity is not intelligence).

Think about that, before you resort to name-calling your opponents.

But now I know you’re thinking:

“But some people really are bigots!”

Yes, that’s true. And some of them may even be the sort of irredeemable bigot you’re imagining right now, someone for whom no amount of conversation could ever change their mind.

But I don’t think most people are like that. In fact, I don’t think most bigots are like that. I think even most people who hold bigoted views about whatever population could in fact be persuaded out of those views, under the right circumstances. And I think that the right circumstances involve a lot more patient, respectful conversation than angry name-calling. For we are all Judy Hopps.

Maybe I’m wrong. Maybe it doesn’t matter how patiently we argue. But it’s still morally better to be respectful and kind, so I’m going to do it.

You have my permission to disagree.

The worst is not inevitable

Jul 14 JDN 2460506

As I write this, the left has just won two historic landslide victories: In France, where a coalition of left-wing parties set aside their differences and prevailed; and in the UK, where the Labour Party just curb-stomped all competition.

Many commentators had been worried that the discredited center-right parties in these countries had left a power vacuum that would be filled by far-right parties like France’s National Rally, but this isn’t what happened. Voters showed up to the polls, and they voted out the center-right all right; but what they put in its place was the center-left, not the far-right.

The New York Times is constitutionally incapable of celebrating anything, so they immediately turned to worries that “turnout was low” and this indicates “an unhappy Britain”. Honestly this seems to be a general failing of journalists: They can’t ever say anything is good. Their entire view of the world is based around “if it bleeds, it leads”. I’m assuming this has something to do with incentives created by the market of news consumers, but it also seems to be an entrenched social norm among journalists themselves. The world must be getting worse, in every way, or if it’s obviously not, we don’t talk about those things—because good things just aren’t news. (Look no further than the fact we now have the lowest global homicide rates in the history of the human race. What, you didn’t realize we had that right now? Could that perhaps be because literally no news source even mentioned it, ever?)

Now, to be fair, turnout was low, and far-right parties did win some representation, and any kind of sudden political shift indicates some kind of public dissatisfaction… but for goodness’ sake, can we take the win for once?

These elections are proof that the free world’s slide into far-right authoritarianism doesn’t have to be inevitable. We can fight it, we are fighting it—and sometimes, we actually win.

So let’s not give up hope in the United States, either. Yes, polls of the Biden/Trump election don’t look great right now; Trump seems to have a slight lead, and it’s way too close for comfort. But we don’t need to roll over and die. The left can win, when we band together well enough; and if France and Britain can pull it off, I don’t see why we can’t too.

And don’t tell me they had way better candidates. The new UK Prime Minister is not a particularly appealing or charismatic candidate. I frankly don’t even like him. He either is a TERF, or is at least willing to capitulate to them. (He also underestimates the number of trans women by about an order of magnitude.) But he won, because the Labour Party won, and he happened to be the Labour Party leader at the time.

Biden is old. Sure. So is Trump. And if it turns out that Biden is really unhealthy, guess what? That means he’ll die or resign and we get a woman of color as President instead. I don’t see eye-to-eye with Kamala Harris on everything, but I don’t see her taking office as a horrible outcome. It’s certainly a hundred times better than what happens if we let Trump win.

Are there better candidates out there? Theoretically, sure. But unless one of them manages to win nomination by one of the two leading parties, that doesn’t matter. Because in a first-past-the-post voting system, you either vote for one of the top two, or you waste your vote. I’m sorry. It sucks. I want a new voting system too. I know exactly which one we could use that would be a hundred times better. But we’re not going to get it by refusing to vote altogether.

We might get a better voting system by voting strategically for candidates who are open to the idea—which at this juncture clearly means Democrats, not Republicans. (At this point in history, Republicans don’t seem entirely convinced that we should decide things democratically in the first place.) There are also other forms of activism we can use, independent of voting. But not voting isn’t a form of activism, and we should stop acting like it is. Not voting is the lazy, selfish, default option. It’s what you’d do if you were a neoclassical rational agent who cares not in the least for his fellow human beings. You should never be proud of not voting. You’re not sending a message; you’re shirking your civic responsibility.

Voting isn’t writing a love letter. It isn’t signing a form endorsing everything a candidate has ever done or ever will do. If you think of it that way, you’re never going to want to vote—and thus you’re going to give up the most important power you have as a citizen of a democracy.

Voting is a decision. It’s choosing one alternative over another. Like any decision in the real world, there will almost never be a perfect option. There will only be better or worse options. Sometimes, even, you’ll feel that there are only bad options, and you are choosing the least-bad option. But you still have to choose the least-bad option, because literally everything else is worse—including doing nothing.

So get out there and try to help Biden win. Not because you love Biden, but because it’s your civic duty. And if enough people do it, we can still win this.

Adverse selection and all-you-can-eat

Jul 7 JDN 2460499

The concept of adverse selection is normally associated with finance and insurance, and they certainly do have a lot of important applications there. But finance and insurance are complicated (possibly intentionally?) and a lot of people are intimidated by them, and it turns out there’s a much simpler example of this phenomenon, which most people should find familiar:

All-you-can-eat meals.

At most restaurants, you buy a specific amount of food: One cheeseburger, one large order of fries. But at some, you have another option: You can buy an indeterminate amount of food, as much as you are able to eat at one sitting.

Now think about this from the restaurant’s perspective: How do you price an all-you-can-eat meal and turn a profit? Your cost obviously depends on how much food you need to prepare, but you don’t know exactly how much each customer is going to eat.

Fortunately, you don’t need to! You only need to know how much people will eat on average. As long as the average customer’s meal costs you less than what they paid for it, you will turn a profit, even though some individual customers eat more than they paid for.

Insurance works the same way: Some people will cash in on their insurance, costing the company money; but most will not, providing the company with revenue. In fact, you could think of an all-you-can-eat-meal as a form of food insurance.

So, all you need to do is figure out how much an average person eats in one meal, and price based on that, right?

Wrong. Here’s the problem: The people who eat at your restaurant aren’t a random sample of people. They are specifically the kind of people who eat at all-you-can-eat restaurants.

Someone who eats very little probably won’t want to go to your restaurant very much, because they’ll have to pay a high price for very little food. But someone with a big appetite will go to your restaurant frequently, because they get to eat a large amount of food for that same price.

This means that, on average, your customers will end up eating more than what an average restaurant customer eats. You’ll have to raise the price accordingly—which will make the effect even stronger.

This can end in one of two ways: Either an equilibrium is reached where the price is pretty high and most of the customers have big appetites, or no equilibrium is reached, and the restaurant either goes bankrupt or gets rid of its all-you-can-eat policy.
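That spiral toward a high-price, big-appetite equilibrium can be simulated directly. In this Python sketch the appetite figures, the $5 non-food value of a night out, and the 10% margin are all invented for illustration; the point is the feedback loop, not the numbers:

```python
# Each customer eats `appetite` dollars of food, and attends only if the
# meal plus the non-food value of the evening is worth the ticket price.
appetites = [5, 8, 10, 12, 15, 20, 25, 30]
bonus = 5.0    # assumed non-food value of a night out
margin = 1.10  # restaurant prices at 110% of average food cost

# Naive starting price, based on the average of the whole population.
price = margin * sum(appetites) / len(appetites)

customers = appetites
for _ in range(50):
    # Self-selection: light eaters drop out as the price rises.
    customers = [a for a in appetites if a + bonus >= price]
    if not customers:
        print("no equilibrium: everyone priced out")
        break
    # Re-price to cover the average cost of those who actually show up.
    new_price = margin * sum(customers) / len(customers)
    if abs(new_price - price) < 1e-9:
        break  # equilibrium reached
    price = new_price

print(f"price: ${price:.2f}, remaining appetites: {customers}")
```

With these made-up numbers the price ratchets from about $17 up to $33, and only the biggest eater is left: a high price and a self-selected clientele, just as described.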

But there’s basically no way to get the outcome that seems the best, which is a low price and a wide variety of people attending the restaurant. Those who eat very little just won’t show up.

That’s adverse selection. Because there’s no way to charge people who eat more a higher price (other than, you know, not being all-you-can-eat), people will self-select by choosing whether or not to attend, and the people who show up at your restaurant will be the ones with big appetites.

The same thing happens with insurance. Say we’re trying to price health insurance; we don’t just need to know the average medical expenses of our population, even if we know a lot of specific demographic information. People who are very healthy may choose not to buy insurance, leaving us with only the less-healthy people buying our insurance—which will force us to raise the price of our insurance.

Once again, you’re not getting a random sample; you’re getting a sample of the kind of people who buy health insurance.

Obamacare was specifically designed to prevent this, by imposing a small fine on people who choose not to buy health insurance. The goal was to get more healthy people buying insurance, in order to bring the cost down. It worked, at least for a while—but now that the individual mandate has been nullified, adverse selection will once again rear its ugly head. Had our policymakers better understood this concept, they might not have removed the individual mandate.

Another option might occur to you, analogous to the restaurant: What if we just didn’t offer insurance, and made people pay for all their own healthcare? This would be like the restaurant ending its all-you-can-eat policy and charging for each new serving. Most restaurants do that, so maybe it’s the better option in general?

There are two problems here, one ethical, one economic.

The ethical problem is that people don’t deserve to be sick or injured. They didn’t choose those things. So it isn’t fair to let them suffer or bear all the costs of getting better. As a society, we should share in those costs. We should help people in need. (If you don’t already believe this, I don’t know how to convince you of it. But hopefully most people do already believe this.)

The economic problem is that some healthcare is rarely needed, but very expensive. That’s exactly the sort of situation where insurance makes sense, to spread the cost around. If everyone had to pay for their own care with no insurance at all, then most people who get severe illnesses simply wouldn’t be able to afford it. They’d go massively into debt, go bankrupt—people already do, even with insurance!—and still not even get much of the care they need. It wouldn’t matter that we have good treatments for a lot of cancers now; they are all very expensive, so most people with cancer would be unable to pay for them, and they’d just die anyway.

In fact, the net effect of such a policy would probably be to make us all poorer, because a lot of illness and disability would go untreated, making our workforce less productive. Even if you are very healthy and never need health insurance, it may still be in your own self-interest to support a policy of widespread health insurance, so that sick people get treated and can go back to work.

A world without all-you-can-eat restaurants wouldn’t be so bad. But a world without health insurance would be one in which millions of people suffer needlessly because they can’t afford healthcare.

Why does everyone work full-time?

Jun 30 JDN 2460492

Over 70% of US workers work “full-time”, that is, at least 40 hours a week. The average number of hours worked per week is 33.8, and the average number of overtime hours is only 3.6. So basically, about 2/3 of workers work almost exactly 40 hours per week.

We’re accustomed to this situation, so it may not seem strange to you. But stop and think for a moment: What are the odds that across every industry, exactly 40 hours per week is the most efficient arrangement?

Indeed, there is mounting evidence that in many industries, 40 hours is too much, and something like 35 or even 30 would actually be more efficient. Yet we continue to work 40-hour weeks.

This looks like a corner solution: Rather than choosing an optimal amount, we’re all up against some kind of constraint.


What’s the constraint? Well, the government requires (for most workers) that anything above 40 hours per week must be paid as overtime, that is, at a higher wage rate. So it looks like we would all be working more than 40 hours per week, but we hit the upper limit due to these regulations.

Does this mean we would be better off without the regulations? Clearly not. As I just pointed out, the evidence is mounting that 40 hours is too much, not too little. But why, then, would we all be trying to work so many hours?

I believe this is yet another example of hyper-competition, where competition drives us to an inefficient outcome.

Employers value employees who work a lot of hours. Indeed, I contend that they do so far more than makes any rational sense; they seem to care more about how many hours you work than about the actual quality or quantity of your output. Maybe this is because hours worked is easier to measure, or because it seems like a fairer estimate of your effort; but for whatever reason, employers really seem to reward employees who work a lot of hours, regardless of almost everything else.

In the absence of a limit on hours worked, then, employers are going to heap rewards on whoever works the most hours, and so people will be pressured to work more and more hours. Then we would all work ourselves to death, and it’s not even clear that this would be good for GDP.

Indeed, this seems to be what happened, before the 40-hour work week became the standard. In the 1800s, the average American worked over 60 hours per week. It wasn’t until the 1940s that 40-hour weeks became the norm.

But speaking of norms, that also seems to be a big factor here. The truth is, overtime isn’t really that expensive, and employers could be smarter about rewarding good work rather than more hours. But once a norm establishes itself in a society, it can be very hard to change. And right now, the norm is that 40 hours is a “normal” “standard” “full” work week—any more is above and beyond, and any less is inferior.

This is a problem, because a lot of people can’t work 40-hour weeks. Our standard for what makes someone “disabled” isn’t that you can’t work at all; it’s that you can’t work as much as society expects. I wonder how many people are currently living on disability who could have been working part-time, but there just weren’t enough part-time jobs available. The employment rate among people with a disability is only 41%, compared to 77% of the general population.

And it’s not that we need to work this much. Our productivity is now staggeringly high: We produce more than five times as much wealth per hour of work as we did as recently as the 1940s. So in theory, we should be able to live just as well while working one-fifth as much… but that’s clearly not what happened.

Keynes accurately predicted our high level of productivity; but he wrongly predicted that we would work less, when instead we just kept right on working almost as hard as before.

Indeed, it doesn’t even seem like we live five times as well while working just as much. Many things are better now—healthcare, entertainment, and of course electronics—but somehow, we really don’t feel like we are living better lives than our ancestors.

The Economic Policy Institute offers an explanation for this phenomenon: Our pay hasn’t kept up with our productivity.


Up until about 1980, productivity and pay rose in lockstep. But then they started to diverge, and they never again converged. Productivity continued to soar, while real wages only barely increased. The result is that since then, productivity has grown by 64%, and hourly pay has only grown 15%.
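A quick back-of-the-envelope check of what those two growth rates imply, using only the 64% and 15% figures quoted above:

```python
# Growth since the divergence began (figures from the text above)
productivity_growth = 0.64
pay_growth = 0.15

productivity_index = 1 + productivity_growth  # 1.64
pay_index = 1 + pay_growth                    # 1.15

# If pay had tracked productivity, today's hourly pay would be
# higher by this fraction:
gap = productivity_index / pay_index - 1
print(f"Pay shortfall relative to productivity: {gap:.0%}")  # 43%
```

In other words, on these figures the typical worker would be earning roughly 43% more per hour if pay had simply kept pace with productivity.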

This is definitely part of the problem, but I think there’s more to it as well. Housing and healthcare have become so utterly unaffordable in this country that it really doesn’t matter that our cars are nice and our phones are dirt cheap. We are theoretically wealthier now, but most of that extra wealth goes into simply staying healthy and having a home. Our consumption has been necessitized.

If we can solve these problems, maybe people won’t feel a need to work so many hours. Or, maybe competition will continue to pressure them to work those hours… but at least we’ll actually feel richer when we do it.

No, the system is not working as designed

You say you’ve got a real solution…

Well, you know,

We’d all love to see the plan.

“Revolution”, the Beatles


Jun 16 JDN 2460478


There are several different versions of the meme, but they all follow the same basic format: Rejecting the statement “the system is broken and must be fixed”, they endorse the statement “the system is working exactly as intended and must be destroyed”.


This view is not just utterly wrong; it’s also incredibly dangerous.

First of all, it should be apparent to anyone who has ever worked in any large, complex organization—a corporation, a university, even a large nonprofit org—that no human system works exactly as intended. Some obviously function better than others, and most function reasonably well most of the time (probably because those that don’t fail and disappear, so there is a sort of natural selection process at work); but even with apparently simple goals and extensive resources, no complex organization will ever be able to coordinate its actions perfectly toward those goals.

But when we’re talking about “the system”, well, first of all:

What exactly is “the system”?

Is it government? Society as a whole? The whole culture, or some subculture? Is it local, national, or international? Are we talking about democracy, or maybe capitalism? The world isn’t just one system; it’s a complex network of interacting systems. So to be quite honest with you, I don’t even know what people are complaining about when they complain about “the system”. All I know is that there is some large institution that they don’t like.

Let’s suppose we can pin that down—say we’re talking about capitalism, for instance, or the US government. Then, there is still the obvious fact that any real-world implementation of a system is going to have failures. Particularly when millions of people are involved, no system is ever going to coordinate exactly toward achieving its goals as efficiently as possible. At best it’s going to coordinate reasonably well and achieve its goals most of the time.

But okay, let’s try to be as charitable as possible here.

What are people trying to say when they say this?

I think that fundamentally this is meant as an expression of Conflict Theory over Mistake Theory: The problems with the world aren’t due to well-intentioned people making honest mistakes, they are due to people being evil. The response isn’t to try to correct their mistakes; it’s to fight them (kill them?), because they are evil.

Well, it is certainly true that evil people exist. There are mass murderers and tyrants, rapists and serial killers. And though they may be less extreme, it is genuinely true that billionaires are disproportionately likely to be psychopaths and that those who aren’t typically share a lot of psychopathic traits.

But does this really look like the sort of system that was designed to optimize payoffs for a handful of psychopaths? Really? You can’t imagine any way that the world could be more optimized for that goal?

How about, say… feudalism?

Not that long ago, historically—less than a millennium—the world was literally ruled by those same sorts of uber-rich psychopaths, and they wielded absolute power over their subjects. In medieval times, your king could confiscate your wealth whenever he chose, or even have you executed on a whim. That system genuinely looks like it’s optimized for the power of a handful of evil people.

Democracy, on the other hand, actually looks like it’s trying to be better. Maybe sometimes it isn’t better—or at least isn’t enough better. But why would they even bother letting us vote, if they were building a system to optimize their own power over us? Why would we have these free speech protections—that allow you to post those memes without going to prison?

In fact, there are places today where near-absolute power really is concentrated in a handful of psychopaths, where authoritarian dictators still act very much like kings of yore. In North Korea or Russia or China, there really is a system in place that’s very well optimized to maximize the power of a few individuals over everyone else.

But in the United States, we don’t have that. Not yet, anyway. Our democracy is flawed and imperiled, but so far, it stands. It needs our constant vigilance to defend it, but so far, it stands.

This is precisely why these ideas are so dangerous.

If you tell people that the system is already as bad as it’s ever going to get, that the only hope now is to burn it all down and build something new, then those people aren’t going to stand up and defend what we still have. They aren’t going to fight to keep authoritarians out of office, because they don’t believe that their votes or donations or protests actually do anything to control who ends up in office.

In other words, they are acting exactly as the authoritarians want them to.

Short of your actual support, the best gift you can give your enemy is apathy.

If all the good people give up on democracy, then it will fail, and we will see something worse in its place. Your belief that the world can’t get any worse can make the world much, much worse.

I’m not saying our system of government couldn’t be radically improved. It absolutely could, even by relatively simple reforms, such as range voting and a universal basic income. But there are people who want to tear it all down, and if they succeed, what they put in its place is almost certainly going to be worse, not better.

That’s what happened in Communist countries, after all: They started with bad systems, they tore them down in the name of making something better—and then they didn’t make something better. They made something worse.

And I don’t think it’s an accident that Marxists are so often Conflict Theorists; Marx himself certainly was. Marx seemed convinced that all we needed to do was tear down the old system, and a new, better system would spontaneously emerge. But that isn’t how any of this works.

Good governance is actually really hard.

Life isn’t simple. People aren’t easy to coordinate. Conflicts of interest aren’t easy to resolve. Coordination failures are everywhere. If you tear down the best systems we have for solving these problems, with no vision at all of what you would replace them with, you’re not going to get something better.

Different people want different things. We have to resolve those disagreements somehow. There are lots of ways we could go about doing that. But so far, some variation on voting seems to be the best method we have for resolving disagreements fairly.

It’s true; some people out there are really just bad people. Some of what even good people want is ultimately not reasonable, or based on false presumptions. (Like people who want to “cut” foreign aid to 5% of the budget—when it is in fact about 1%.) Maybe there is some alternative system out there that could solve these problems better, ensure that only the reasonable voices with correct facts actually get heard.

If so, well, you know:

We’d all love to see the plan.

It’s not enough to recognize that our current system is flawed and posit that something better could exist. You need to actually have a clear vision of what that better system looks like. For if you go tearing down the current system without any idea of what to replace it with, you’re going to end up with something much worse.

Indeed, if you had a detailed plan of how to improve things, it’s quite possible you could convince enough people to get that plan implemented, without tearing down the whole system first.

We’ve done it before, after all:

We ended slavery, then racial segregation. We gave women the right to vote, then integrated them into the workforce. We removed the ban on homosexuality, and then legalized same-sex marriage.


We have a very clear track record of reform working. Things are getting better, on a lot of different fronts. (Maybe not all fronts, I admit.) When the moral case becomes overwhelming, we really can convince people to change their minds and then vote to change our policies.

We do not have such a track record when it comes to revolutions.

Yes, some revolutions have worked out well, such as the one that founded the United States. (But I really cannot emphasize this enough: they had a plan!) But plenty more have worked out very badly. Even France, which turned out okay in the end, had to go through a Napoleon phase first.

Overall, it seems like our odds are better when we treat the system as broken and try to fix it, than when we treat it as evil and try to tear it down.

The world could be a lot better than it is. But never forget: It could also be a lot worse.

Wrongful beneficence

Jun 9 JDN 2460471

One of the best papers I’ve ever read—one that in fact was formative in making me want to be an economist—is Wrongful Beneficence by Chris Meyers.

This paper opened my eyes to a whole new class of unethical behavior: Acts that unambiguously make everyone better off, but nevertheless are morally wrong. Hence, wrongful beneficence.

A lot of economists don’t even seem to believe in such things. They seem convinced that as long as no one is made worse off by a transaction, that transaction must be ethically defensible.

Chris Meyers convinced me that they are wrong.

The key insight here is that it’s still possible to exploit someone even if you make them better off. This happens when they are in a desperate situation and you take advantage of that to get an unfair payoff.


Here is one of the cases Meyers offers to demonstrate this:

Suppose Carole is driving across the desert on a desolate road when her car breaks down. After two days and two nights without seeing a single car pass by, she runs out of water and feels rather certain that she will perish if not rescued soon. Now suppose that Jason happens to drive down this road and finds Carole. He sees that her situation is rather desperate and that she needs (or strongly desires) to get to the nearest town as soon as possible. So Jason offers her a ride but only on the condition that […] [she gives him] her entire net worth, the title to her house and car, all of her money in the bank, and half of her earnings for the next ten years.

Carole obviously is better off than she would be if Jason hadn’t shown up—she might even have died. She freely consented to this transaction—again, because if she didn’t, she might die. Yet it seems absurd to say that Jason has done nothing wrong by making such an exorbitant demand. If he had asked her to pay for gas, or even to compensate him for his time at a reasonable rate, we’d have no objection. But to ask for her life savings, all her assets, and half her earnings for ten years? Obviously unfair—and obviously unethical. Jason is making Carole (a little) better off while making himself (a lot) better off, so everyone is benefited; but what he’s doing is obviously wrong.

Once you recognize that such behavior can exist, you start to see it all over the place, particularly in markets, where corporations are quite content to gouge their customers with high prices and exploit their workers with low wages—but still, technically, we’re better off than we would be with no products and no jobs at all.

Indeed, the central message of Wrongful Beneficence is actually about sweatshop labor: It’s not that the workers are worse off than they would have been (in general, they aren’t); it’s that they are so desperate that corporations can get away with exploiting them with obviously unfair wages and working conditions.

Maybe it would be easier just to move manufacturing back to First World countries?

Right-wingers are fond of making outlandish claims that making products at First World wages would be utterly infeasible; here’s one claiming that an iPhone would need to cost $30,000 if it were made in the US. In fact, the truth is that it would only need to cost about $40 more—because hardly any of its cost is actually going to labor. Most of its price is pure monopoly profit for Apple; most of the rest is components and raw materials. (Of course, if those also had to come from the US, the price would go up more; but even so, we’re talking something like double its original price, not thirty times. Workers in the US are indeed paid a lot more than workers in China; they are also more productive.)
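To see why the “$40 more” figure is plausible, here is a rough decomposition of a smartphone’s retail price. All the numbers below are hypothetical round figures chosen for illustration, not actual Apple cost data; the point is only that final-assembly labor is a tiny slice of the price, so even multiplying its wage severalfold barely moves the total.

```python
retail_price = 1000        # hypothetical retail price, USD
assembly_labor_cost = 10   # hypothetical final-assembly labor cost abroad, USD
us_wage_multiplier = 5     # hypothetical: US assembly wages ~5x higher

# Moving only final assembly to US wages raises just that line item:
extra_cost = assembly_labor_cost * (us_wage_multiplier - 1)
new_price = retail_price + extra_cost

print(f"Added cost per phone: ${extra_cost}")      # $40
print(f"Price increase: {extra_cost / retail_price:.0%}")  # 4%
```

On these (hypothetical) assumptions, quintupling assembly wages adds about 4% to the price, which is nowhere near a thirtyfold increase.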

It’s true that actually moving manufacturing from other countries back to the US would be a substantial undertaking, requiring retooling factories, retraining engineers, and so on; but it’s not like we’ve never done that sort of thing before. I’m sure it could not be done overnight; but of course it could be done. We do this sort of thing all the time.

Ironically, this sort of right-wing nonsense actually seems to feed the far left as well, supporting their conviction that all this prosperity around us is nothing more than an illusion, that all our wealth only exists because we steal it from others. But this could scarcely be further from the truth; our wealth comes from technology, not theft. If we offered a fairer bargain to poorer countries, we’d be a bit less rich, but they would be much less poor—the overall wealth in the world would in fact probably increase.

A better argument for not moving manufacturing back to the First World is that many Third World economies would collapse if they stopped manufacturing things for other countries, and that would be disastrous for millions of people.

And free trade really does increase efficiency and prosperity for all.

So, yes; let’s keep on manufacturing goods wherever it is cheapest to do so. But when we decide what’s cheapest, let’s evaluate that based on genuinely fair wages and working conditions, not the absolute cheapest that corporations think they can get away with.

Sometimes they may even decide that it’s not really cheaper to manufacture in poorer countries, because they need advanced technology and highly-skilled workers that are easier to come by in First World countries. In that case, bringing production back here is the right thing to do.

Of course, this raises the question:

What would be fair wages and working conditions?

That’s not so easy to answer. Since workers in Third World countries are less educated than workers in First World countries, and have access to less capital and worse technology, we should in fact expect them to be less productive and therefore get paid less. That may be unfair in some cosmic sense, but it’s not anyone’s fault, and it’s not any particular corporation’s responsibility to fix it.

But when there are products for which less than 1% of the sales price of the product goes to the workers who actually made the product, something is wrong. When the profit margin is often wildly larger than the total amount spent on labor, something is wrong.

It may be that we will never have precise thresholds we can set to decide what definitely is or is not exploitative; but that doesn’t mean we can’t ever recognize it when we see it. There are various institutional mechanisms we could use to enforce better wages and working conditions without ever making such a sharp threshold.

One of the simplest, in fact, is Fair Trade.

Fair Trade is by no means a flawless system; in fact there’s a lot of research debating how effective it is at achieving its goals. But it does seem to be accomplishing something. And it’s a system that we already have in place, operating successfully in many countries; it simply needs to be scaled up (and hopefully improved along the way).

One of the clearest pieces of evidence that it’s helping, in fact, is that farmers are willing to participate in it. That shows that it is beneficent.

Of course, that doesn’t mean that it’s genuinely fair! This could just be another kind of wrongful beneficence. Perhaps Fair Trade is really just less exploitative than all the available alternatives.

If so, then we need something even better still, some new system that will reliably pass on the increased cost for customers all the way down to increased wages for workers.

Fair Trade shows us something else, too: A lot of customers clearly are willing to pay a bit more in order to see workers treated better. Even if they weren’t, maybe they should be forced to. But the fact is, they are! Even those who are most adamantly opposed to Fair Trade can’t deny that people really are willing to pay more to help other people. (Yet another example of obvious altruism that neoclassical economists somehow manage to ignore.) They simply deny that it’s actually helping, which is an empirical matter.

But if this isn’t helping enough, fine; let’s find something else that does.

Go ahead and identify as a season

Jun 2 JDN 2460464

A few weeks back, Fox News was running the story that “kids today are identifying as seasons instead of genders”. I suspected that by “kids today” they meant “one particular person on the Internet”, but in fact it was even worse than that; the one person on the Internet they had used as an example hadn’t actually said what Fox claimed they said.

What they actually said was far more nuanced: It was basically that their fluid gender expression varied based on what kind of clothes they wear, which, naturally, varies with the seasons. So they end up feeling more masculine at certain times of year when they like to wear masculine clothing. Honestly, this would be pretty boring stuff if conservatives hadn’t blown it out of proportion.

But after thinking about it for a while, I decided that I don’t even care if kids want to identify as seasons.

It seems silly. I don’t understand why you’d want to do it. It would probably always feel weird to me. (And what pronouns do you even use for someone who identifies as “summer”?)

But ultimately, it seems completely, utterly harmless. So if there are in fact kids—or adults—out there who really feel that they want to identify their gender with a season, I’m here to tell you now:

Go right ahead and do that.

It’s really astonishing just what upsets conservatives in this world. Poverty? No big deal. Climate change? Probably a hoax or something. War? That’s just how it goes. But kids with weird genders!? The horror! The horror!

I think the reasoning here goes something like this:

  1. Civilization is built upon social constructions.
  2. Social constructions rely upon consensus behavior.
  3. Consensus behavior relies upon shared norms.
  4. Challenging any shared norms challenges all shared norms.
  5. Challenging any norm will cause it to collapse.
  6. Challenging gender norms is challenging a shared norm.
  7. Therefore, challenging gender norms will cause civilization to collapse.

Premises 1 through 3 are true, though I suspect that phrases like “social construction” would actually not sit well with most conservatives. (Part of their whole shtick seems to be that if you simply admit that money, government, and national identity are socially constructed, that in itself will cause them to immediately and irretrievably collapse. Never mind that I can tell you money is made up all day long, and you’ll still be able to spend it.)

Premise 6 is also true, indeed, nearly tautological.

And, indeed, the argument is valid; the conclusion would follow from the premises.

So of course we come to the two premises that aren’t true.


Premise 4 is wrong because you can challenge some norms but not others. I have yet to see anyone seriously challenge the norm against murder, for example. Nor does it even seem especially popular to challenge the norm in favor of democratic voting. But those are the kind of norms that actually sustain our civilization—not gender!

And premise 5 is even worse: A norm that can’t withstand even the slightest challenge is a norm that’s too weak to rely upon in the first place. If our civilization is to be strong and robust, it must allow its norms to be challenged, and those norms must be able to sustain themselves against the challenge. And indeed, if someone were to challenge the norm against murder or the norm in favor of democratic voting, there are plenty of things I could say to reply to that challenge. These norms aren’t arbitrary. They are strong because we can defend them.

What about gender norms? How defensible are they?

Well, uh… not very, it turns out.

The existence of sexes is defensible. Humans are sexually dimorphic, and the vast majority of humans can be readily classified as either male or female. Yes, there are exceptions even to that, and those people count too. But it’s a pretty useful and accurate heuristic to divide our species into two sexes.

But gender norms are so much more than this. We don’t simply recognize that some people have penises and others have vaginas. We attach all sorts of social and behavioral requirements to people based on their bodies, many of which are utterly arbitrary and culturally dependent. (Not all, to be fair: The stereotype that men are stronger than women is itself a very useful and accurate heuristic.)

Worse, we don’t merely assign stereotypes to predict behavior—which might sometimes be useful. We assign norms to control behavior. We tell people who deviate from those norms that they are bad. We abuse them, discriminate against them, ostracize them from society. This is really weird.

And for what?

What benefit do gender norms have?

I can see how norms against murder and in favor of democracy sustain our civilization. I’m just not seeing how norms against using she/her pronouns when you have a penis provide similar support.

It’s true, most human societies throughout history have had strict gender norms, so maybe that’s some sort of evidence in their favor… but how about we at least try not having them for a while? Or just relax them here and there, a little at a time, see how it goes? If indeed it seems to result in some sort of disaster, we’ll stop doing it. But I don’t see how it could—and so far, it hasn’t.

I think maybe the problem here is that conservatives don’t understand how to evaluate norms, or perhaps even that norms can be evaluated. To them, a rule is a rule, and you never challenge the rules, because if there were no rules, there would be chaos and destruction.

But challenging some rules—or even all rules—doesn’t mean having no rules! It means checking to make sure our rules are good rules, and if they aren’t, changing them so they are.

And since I see no particular reason why having two genders is an especially good rule, go ahead, make up some more if you want.

Go ahead and identify as a season, if you really want to.

Medical progress, at least, is real

May 26 JDN 2460457

The following vignettes are about me.

Well, one of them is about me as I actually am. The others are about the person I would have been, if someone very much like me, with the same medical conditions, had been born in a particular place and time. Someone in these times and places probably had actual experiences like this, though of course we’ll never know who they were.

976 BC, the hilled lands near the mouth of the river:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse from the gods, but at least they appreciate my ability to sometimes predict storms. I am lucky to even remain alive, as I am of little use to the tribe. I will most likely remain this way the rest of my life.

24 AD, Rome:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse from the gods, but at least they appreciate my ability to sometimes predict storms. I am lucky that my family was rich enough to teach me reading and mathematics, as I would be of little use for farm work, but can at least be somewhat productive as a scribe and a tutor. I will most likely remain this way the rest of my life.

1024 AD, England:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse imposed upon me by some witchcraft, but at least they appreciate my ability to sometimes predict storms. I am lucky that my family was rich enough to teach me reading and mathematics, as I would be of little use for farm work, but can at least be somewhat productive as a scribe and a tutor. I will most likely remain this way the rest of my life.

2024 AD, Michigan:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain pollens, fragrances, or chemicals, or if I awaken too early, or if I exert myself too much, or when the air pressure changes before a storm. Brain scans detected no gross abnormalities. I have been diagnosed with chronic migraine, but this is more a description of my symptoms than an explanation. I have tried over a dozen different preventative medications; most of them didn’t work at all, some of them worked but gave me intolerable side effects. (One didn’t work at all and put me in the hospital with a severe allergic reaction.) I’ve been more successful with acute medications, which at least work as advertised, but I have to ration them carefully to avoid rebound effects. And the most effective acute medication is a subcutaneous injection that makes me extremely nauseated unless I also take powerful anti-emetics along with it. I have had the most success with botulinum toxin injections, so I will be going back to that soon; but I am also looking into transcranial magnetic stimulation. Currently my condition is severe enough that I can’t return to full-time work, but I am hopeful that with future treatment I will be able to someday. For now, I can at least work as a writer and a tutor. Hopefully things get better soon.

3024 AD, Aegir 7, Ran System:

For a few months when I was fourteen years old, I woke up nearly every day in pain. Often it was mild, but occasionally it was severe. It often seemed to be worse when I encountered certain pollens, fragrances or chemicals, or if I awakened too early, or if I exerted myself too much, or when the air pressure changed before a storm. Brain scans detected no gross abnormalities, only subtle misfiring patterns. Genetic analysis confirmed I had chronic migraine type IVb, and treatment commenced immediately. Acute medications suppressed the pain while I underwent gene therapy and deep-effect transcranial magnetic stimulation. After three months of treatment, I was cured. That was an awful few months, but it’s twenty years behind me now. I can scarcely imagine how it might have impaired my life if it had gone on that whole time.

What is the moral of this story?

Medical progress is real.

Many people often doubt that society has made real progress. And in a lot of ways, maybe it hasn’t. Human nature is still the same, and so many of the problems we suffer have remained the same.

Economically, of course we have had tremendous growth in productivity and output, but it doesn’t really seem to have made us much happier. We have all this stuff, but we’re still struggling and miserable as a handful at the top become spectacularly, disgustingly rich.

Social progress seems to have gone better: Institutions have improved, more of the world is democratic than ever before, and women and minorities are better represented and better protected from oppression. Rates of violence have declined to some of their lowest levels in history. But even then, it’s pretty clear that we have a long, long way to go.

But medical progress is undeniable. We live longer, healthier lives than at any other point in history. Our infant and child mortality rates have plummeted. Even chronic conditions that seem intractable today (such as my chronic migraines) still show signs of progress; in a few generations they should be cured—in surely far less than the thousand years I’ve considered here.

Like most measures of progress, this change wasn’t slow and gradual over thousands of years; it happened remarkably suddenly. Humans went almost 200,000 years without any detectable progress in medicine, using basically the same herbs and tinctures (and a variety of localized and ever-changing superstitions) the entire time. Some of it worked (the herbs and tinctures, at least), but mostly it didn’t. Then, starting around the 18th century, as the Enlightenment took hold and the Industrial Revolution ramped up, everything began to change.

We began to test our medicine and see if it actually worked. (Yes, amazingly, somehow, nobody had actually ever thought to do that before—not in anything resembling a scientific way.) And when we learned that most of it didn’t, we began to develop new methods, and see if those worked; and when they didn’t either, we tried new things instead—until, finally, eventually, we actually found medicines that actually did something, medicines worthy of the name. Our understanding of anatomy and biology greatly improved as well, allowing us to make better predictions about the effects our medicines would have. And after a few hundred years of that—a few hundred, out of two hundred thousand years of our species—we actually reached the point where most medicine is effective and a variety of health conditions are simply curable or preventable, including diseases like malaria and polio that had once literally plagued us.

Scientific medicine brought humanity into a whole new era of existence.

I could have set the first vignette 10,000 years ago without changing it. But the final vignette I could probably have set only 200 years from now. I’m actually assuming remarkable stagnation by putting it in the 31st century; but presumably technological advancement will slow at some point, perhaps after we’ve more or less run out of difficult challenges to resolve. (Then again, for all I know, maybe my 31st century counterpart will be an emulated consciousness, and his chronic pain will be resolved in 17.482 seconds by a code update.)

Indeed, the really crazy thing about all this is that there are still millions of people who don’t believe in scientific medicine, who want to use “homeopathy” or “naturopathy” or “acupuncture” or “chiropractic” or whatever else—who basically want to go back to those same old herbs and tinctures that maybe sometimes kinda worked but probably not and nobody really knows. (I have a cousin who is a chiropractor. I try to be polite about it, but….) They point out the various ways that scientific medicine has failed—and believe me, I am painfully aware of those failures—but then, when the obvious solution is to improve scientific medicine, they instead want to turn the whole ship around and go back to what we had before, which was obviously a million times worse.

And don’t tell me it’s harmless: One, it’s a complete waste of resources that could instead have been used for actual scientific medicine. (9% of all out-of-pocket spending on healthcare in the US is on “alternative medicine”—which is to say, on pointless nonsense.) Two, when you have a chronic illness and people keep shoving nonsense treatments in your face, you start to feel blamed for your condition: “Why haven’t you tried [other incredibly stupid idea that obviously won’t work]? You’re so closed-minded! Maybe your illness isn’t really that bad, or you’d be more desperate!” If “alternative medicine” didn’t exist, maybe these people could help me cope with the challenges of living with a chronic illness, or even just sympathize with me, instead of constantly shoving stupid nonsense in my face.

Not everything about the future looks bright.

In particular, I am pessimistic about the near-term future of artificial intelligence, which I think will cause a lot more problems than it solves and does have a small—but not negligible—risk of causing a global catastrophe.

I’m also not very optimistic about climate change; I don’t think it will wipe out our civilization or anything so catastrophic, but I do think it’s going to kill millions of people, and we’ve done too little, too late to prevent that. We’re only now doing about what we should have been doing in the 1980s.

But I am optimistic about scientific medicine. Every day, new discoveries are made. Every day, new treatments are invented. Yes, there is a lot we haven’t figured out how to cure yet; but people are working on it.

And maybe they could do it faster if we stopped wasting time on stuff that obviously won’t work.