The paradoxical obviousness of reason

Nov 26 JDN 2460275

The basic precepts of reason seem obvious and irrefutable:

Believe what’s most likely to be true.

Do what’s most likely to work.

How are you going to argue with that? In fact, it seems like by the time you try to argue at all, you’ve already agreed to it. These principles may be undeniable—literally impossible to coherently deny.

Even when expressed a little more precisely, the principles of reason still seem pretty obvious:

Beliefs should be consistent with each other and with observations.

The best action is the one with the best expected outcome.

And you really can get surprisingly far with this. A few more steps of mathematical precision, and you basically get the scientific method and utilitarianism:
Beliefs should be assigned consistent Bayesian probabilities according to the observed evidence.

The best action is the one that maximizes expected utility.
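As a toy illustration of both principles together (the hypotheses, likelihoods, and utilities below are all made-up numbers, purely for the sake of example), Bayesian updating plus expected-utility maximization fits in a few lines:

```python
# Two hypotheses about the world, with prior probabilities (made-up numbers).
priors = {"rain": 0.3, "no_rain": 0.7}
# Likelihood of the observation "dark clouds" under each hypothesis.
likelihood = {"rain": 0.8, "no_rain": 0.2}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

# Utility of each action under each hypothesis (also made-up numbers).
utility = {
    ("umbrella", "rain"): 5, ("umbrella", "no_rain"): 3,
    ("no_umbrella", "rain"): -10, ("no_umbrella", "no_rain"): 4,
}

# The best action maximizes expected utility under the posterior.
best = max(("umbrella", "no_umbrella"),
           key=lambda a: sum(posterior[h] * utility[(a, h)] for h in posterior))
```

Consistent beliefs updated on evidence, then the action with the best expected outcome: that really is the whole core of it.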

Why, then, did it take humanity 99.9% of its existence to figure this out? Why did a species that has lived for 300,000 years only really start getting this right in about the past 300?

In fact, even today, while most people would at least assent to the basic notion of rationality, a large number don’t really follow it well, and only a small fraction really understand it at the deepest level.

Reason just seems obvious if you think about it. How do so many people miss it?

Because most people really don’t think about it that much.

In fact, I’m going to make a stronger claim:

Most people don’t think about anything that much.

Remember: To a first approximation, all human behavior is social norms.

Most human beings go through most of their lives behaving according to habits and social norms that they may not even be consciously aware of. They do things the way they have always been done; they believe what those around them believe. They adopt the religion of their parents, cheer for the sports team of their hometown, vote for the political party that is popular in their community. They may not even register these things as decisions at all—they simply did not consider the alternatives.

It’s not that they are incapable of thinking. When they really need to think hard about something, they can do it. But hard thinking is, well, hard. It’s difficult; it’s uncomfortable; for most people, it’s unfamiliar. So, they avoid it when they can. (There is even a kind of meta-rationality in that: Behavioral economists call it rational inattention.)

Few would willingly assent to the claim “I believe a lot of things that aren’t true.” People generally believe that their beliefs are true.

I doubt even most people in ancient history would agree with a statement like that. People who wholeheartedly believed in witches, werewolves, ghosts, and sympathetic magic still believed that their beliefs were true. People who thought that a giant beetle rolled the sun across the sky still thought they had a good handle on how the world works.

In fact, the few people I know who would agree with a statement like that are very honest, introspective Bayesians who recognize that the joint probability of all their beliefs being true must be quite small. Agreeing that some of your beliefs are false is a sign not that you are irrational, but that you are extremely rational. (In fact, I would agree with a statement like that: If I knew what I’m wrong about, I’d change my belief; but odds are, I’m wrong about something.)

But most people simply don’t even bother to evaluate the truth of many of their beliefs. If something is easy to check and directly affects their lives, they’ll probably try to gather evidence for it. But if it’s at all abstract or difficult to evaluate, they’ll more or less give up and believe whatever seems to be popular. (This explains Carlin’s dictum: “Tell people there’s an invisible man in the sky who created the universe, and the vast majority will believe you. Tell them the paint is wet, and they have to touch it to be sure.”)

This can also help to explain why so many people—mostly, but not exclusively right-wing people—complain that scientists are “elitist” while worshipping at the feet of clergy and business executives (the latter only—so far—figuratively, but the former all too literally).


What could be more elitist than clergy? They are basically claiming a special, unique connection to the ultimate truths of the universe that is only accessible to them. They claim to be ordained by the all-powerful ruler of the universe with the absolute right to adjudicate all truth and morality.

For goodness’ sake, one of the most popular and powerful ones literally claims to be infallible.

Meanwhile, basically all scientists agree that anyone who is reasonably smart and willing to work hard, whether by making their own observations, running their own experiments, or just reading up on a lot of other people’s observations and experiments, can become a scientist. Some scientists are arrogant or condescending, but as an institution and culture, science is fundamentally egalitarian.

No, what people are objecting to among scientists is not elitism. Part of it may be the condescension of telling people: “This is obvious. If you thought about it, you would see that it has to be right.”

Yet the reason we keep saying that is… it is basically true. The precepts of rationality are obvious if you think about them, and they do lead quite directly to rejecting a lot of mainstream beliefs, particularly about religion. I’m sure it feels insulting to be told that you just aren’t thinking hard enough about important things… but maybe you aren’t?

We may need to find a gentler way to convey this message. There’s no point in saying it if nobody is going to listen. Yet that doesn’t make it any less true.

It’s not that quantum mechanics is intuitively obvious (quite the opposite is still a terrible understatement), nor even that Darwinian natural selection or comparative advantage are obvious (though surely they’re less counter-intuitive than quantum mechanics). The conclusions of science are not obvious. They took centuries to figure out for good reason.

But the principles of science really are: Want to know if something is true? Look! Find out!

Yet historically this has not in fact been how human beings formed most of their beliefs. Indeed, I am often awed by just how bad most people throughout history have been at thinking empirically.

It’s not just that people throughout history believed in witches without ever having seen one, or knowing anyone who had seen one. (I’ve never seen a platypus or a quasar, and I still believe in them.) It’s that they were willing to execute people for being witches—killing people as punishment for deeds that not only they did not do, but could not possibly have done. Entire civilizations for millennia failed to realize that this was wrong.

Aristotle believed that men’s body temperature was hotter than women’s, and that this temperature difference determined the sex of children. That’s Aristotle, a certifiable genius living in the culture that pioneered rationalist philosophy. (Ironically—and by pure Stopped Clock Principle—he’d almost be right about certain species of reptiles.) It never occurred to him to even try to measure the body temperatures of lots of people and see if this was true. (Admittedly they didn’t have very good thermometers back then.)

Aristotle did get a lot of things right: In particular, his trichotomy of souls is basically accurate, with “vegetative soul” renamed “homeostatic metabolism and reproduction”, “sensitive soul” renamed “limbic system”, and “rational soul” renamed “prefrontal cortex”. The vegetative soul is what makes you alive, the sensitive soul is what makes you sentient, and the rational soul is what makes you a person. He even recognized a deep truth that the majority of human beings today do not: The soul is a function of the body, and dies when the body dies. For his time, he was absolutely off the charts in rationality. But even he didn’t really integrate rationality and empiricism fully into his way of thinking.

Even today there are a shocking number of common misconceptions that could be easily refuted by anyone who thought to check (or look it up!):

Wolves howl at the full moon? Nope, wolves don’t care about the phase of the moon, and if you live near any, you’ll hear them howl all year round. Actually, wolf howling is more like that “Twilight Bark” from 101 Dalmatians; it’s a long-distance communication and coordination signal.

Eggs can only balance on the equinox? Nope, it’s tricky, but you can balance an egg just as well any day of the year.

You don’t lose most of your heat through your head: Try going outside in the cold wearing a t-shirt and shorts with a hat, and then again with snow pants and a heavy coat and no hat; you’ll see which feels colder.

“Beer before liquor, never sicker” is nonsense: It matters how much alcohol you drink (and how much you eat), not what order you do it in, and you’d know that if you just tried it both ways a few times.

Taste on your tongue is localized to particular areas? No, it’s not, and you can tell by putting foods with strong flavors on different parts of your tongue. (Indeed, I did when they did that demonstration in elementary school; I wondered if that meant my tongue was somehow weird.)

I can understand not wanting to take the risk with fan death yourself, but maybe listen to all the other people—including medical experts—who tell you it’s not real? I keep a fan in my bedroom every night and it hasn’t killed me yet.

Even the gambler’s fallacy is something you could easily disabuse yourself of by rolling some dice for a while and taking careful notes. Am I more likely to roll snake eyes if I haven’t in a while? Nope; the odds on any given roll are always exactly the same.
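That dice experiment is easy to run in simulation too (a quick sketch; the 50-roll “drought” length and the random seed are arbitrary choices of mine):

```python
import random

random.seed(42)  # fixed seed, just for reproducibility

# Roll a pair of dice many times; record where snake eyes (1, 1) came up.
rolls = [(random.randint(1, 6), random.randint(1, 6)) for _ in range(200_000)]
snake_eyes = [r == (1, 1) for r in rolls]

# Overall frequency of snake eyes (theory: 1/36, about 0.028).
overall = sum(snake_eyes) / len(snake_eyes)

# Frequency on rolls that follow a 50-roll drought with no snake eyes.
after_drought = [snake_eyes[i] for i in range(50, len(snake_eyes))
                 if not any(snake_eyes[i - 50:i])]
drought_rate = sum(after_drought) / len(after_drought)
```

The two frequencies come out essentially identical: a long dry spell does nothing to make snake eyes more likely on the next roll.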

But most people simply don’t think to check.

Indeed, most people get a lot of their beliefs—particularly those about complex, abstract, or distant things—from authority figures. While empiricism doesn’t come very naturally to humans, hierarchy absolutely does. (I think it’s a primate thing.) Another reason scientists may seem “elitist” is that people think we are trying to usurp that authority. We’re telling you that what your religious leaders taught you is false; that must mean that we are trying to become religious leaders ourselves.

But in fact we’re telling you something far more radical than that: You don’t need religious leaders. You don’t need to take things on faith. If you want to know whether something is true, you can look.

We are not trying to usurp control over your hierarchy. We are trying to utterly dismantle it. We dethrone the king, not so that we can become kings ourselves—but so that the world can have kings no longer.

Granted, most people aren’t going to be able to run particle accelerator experiments in their garages. But if you want to know how particle physics works, and how we know what we know about it, go to your nearest university, find a particle physicist, and ask: I guarantee they’ll be more than happy to tell you whatever you want to know. You can even do this via email from anywhere in the world.

That is, we do need expertise: People who specialize in a particular field of knowledge can learn it much better than others. But we do not need authority: You don’t just have to take their word for it. There’s a difference between expertise and authority.

And sometimes, really all you need to do is stop and think. People should try that more often.

Multilevel selection: A tale of three tribes

Jun 19 JDN 2459780

There’s something odd about the debate in evolutionary theory about multilevel selection (sometimes called “group selection”). On one side are the mainstream theorists who insist that selection only happens at the individual level (or is it the gene level?); and on the other are devout group-selectionists who insist that group selection is everywhere and the only possible explanation of altruism.

Both of these sides are wrong. Selection does happen at multiple levels, but it’s entirely possible for altruism to emerge without it.

The usual argument by the mainstream is that group selection would require the implausible assumption that groups live and die on the same timescale as individuals. The usual argument by group-selectionists is that there’s no other explanation for why humans are so altruistic. But neither of these claims is true.

There is plenty of discussion out there about why group selection isn’t necessary for altruism: Kin selection is probably the clearest example. So I’m going to focus on showing that group selection can work even when groups live and die much slower than individuals.

To do this, I would like to present a model. It’s a very pared-down, simplified version, but it is nevertheless a valid evolutionary game theory model.

Consider a world where the only kind of interaction is Iterated Prisoner’s Dilemmas. For the uninitiated, an Iterated Prisoner’s Dilemma is as follows.

Time goes on forever. At each point in time, some people are born, and some people die; people have a limited lifespan and some general idea of how long it is, but nobody can predict for sure when they will die. (So far, this isn’t even a model; all of this is literally true.)

In this world, people are randomly matched with others one on one, and they play a game together, where each person can choose either “Cooperate” or “Defect”. They choose in secret and reveal simultaneously. If both choose “Cooperate”, everyone gets 3 points. If both choose “Defect”, everyone gets 2 points. If one chooses “Cooperate” and the other chooses “Defect”, the “Cooperate” person gets only 1 point while the “Defect” person gets 4 points.

What are these points? Since this is evolution, let’s call them offspring. An average lifetime score of 4 points means 4 offspring per couple per generation—you get rapid population growth. 1 point means 1 offspring per couple per generation—your genes will gradually die out.

That makes the payoffs follow this table:


        C       D
C     3, 3    1, 4
D     4, 1    2, 2

There are two very notable properties of this game; together they seem paradoxical, which is probably why the game has such broad applicability and such enduring popularity.

  1. Everyone, as a group, is always better off if more people choose “Cooperate”.
  2. Each person, as an individual, regardless of what the others do, is always better off choosing “Defect”.
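Both properties can be read straight off the payoff table; encoding it in Python makes the check explicit:

```python
# Payoffs (to me, to them) for each pair of moves, from the table above.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (1, 4),
    ("D", "C"): (4, 1),
    ("D", "D"): (2, 2),
}

# Property 1: the group total rises with the number of cooperators.
group_total = {moves: sum(pay) for moves, pay in PAYOFFS.items()}
assert group_total[("C", "C")] > group_total[("C", "D")] > group_total[("D", "D")]

# Property 2: whatever the other player does, "D" pays me more than "C".
for their_move in ("C", "D"):
    assert PAYOFFS[("D", their_move)][0] > PAYOFFS[("C", their_move)][0]
```

Two cooperators yield 6 points total, one yields 5, none yields 4; yet from each individual’s seat, defecting is always worth one extra point. That tension is the whole game.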

Thus, Iterated Prisoner’s Dilemmas are ideal for understanding altruism, as they directly model a conflict between individual self-interest and group welfare. (They didn’t do a good job of explaining it in A Beautiful Mind, but that one line in particular was correct: the Prisoner’s Dilemma is precisely what proves “Adam Smith was wrong.”)

Each person is matched with someone else at random for a few rounds, and then re-matched with someone else; and nobody knows how long they will be with any particular person. (For technical reasons, with these particular payoffs, the chance of going to another round needs to be at least 50%; but that’s not too important for what I have to say here.)
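With these particular payoffs, that 50% figure can be derived directly: against a tit-for-tat player with continuation probability δ, cooperating forever beats a one-time defection exactly when δ ≥ 1/2. A sketch of that calculation (my own aside, not part of the original argument):

```python
def v_cooperate(delta):
    # Tit-for-tat against tit-for-tat: mutual cooperation (3 points)
    # every round; the expected number of rounds is 1 / (1 - delta).
    return 3 / (1 - delta)

def v_defect(delta):
    # Always-defect against tit-for-tat: exploit once for 4 points,
    # then mutual defection (2 points) in every later round.
    return 4 + 2 * delta / (1 - delta)
```

Setting the two equal gives 3 = 4(1 − δ) + 2δ, i.e. δ = 1/2: below that threshold defection pays, above it cooperation does.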

Now, suppose there are three tribes of people, who are related by family ties but also still occasionally intermingle with one another.

In the Hobbes tribe, people always play “Defect”.

In the Rousseau tribe, people always play “Cooperate”.

In the Axelrod tribe, people play “Cooperate” the first time they meet someone, then copy whatever the other person did in the previous round. (This is called “tit for tat”.)

How will these tribes evolve? In the long run, will all tribes survive, or will some prevail over others?

The Rousseau tribe seems quite nice; everyone always gets along! Unfortunately, the Rousseau tribe will inevitably and catastrophically collapse. As soon as a single Hobbes gets in, or a mutation arises to make someone behave like a Hobbes, that individual will become far more successful than everyone else, have vastly more offspring, and ultimately take over the entire population.

The Hobbes tribe seems pretty bad, but it’ll be stable. If a Rousseau should come visit, they’ll just be ruthlessly exploited, making the Hobbeses better off. If an Axelrod arrives, they’ll learn not to be exploited (after the first encounter), but they won’t do any better than the Hobbeses do.

What about the Axelrod tribe? They seem similar to the Rousseau tribe, because everyone is choosing “Cooperate” all the time—will they suffer the same fate? No, they won’t! They’ll do just fine, it turns out. Should a Rousseau come to visit, nobody will even notice; they’ll just keep on choosing “Cooperate” and everything will be fine. And what if a Hobbes comes? They’ll try to exploit the Axelrods, and succeed at first—but soon enough they will be punished for their sins, and in the long run they’ll be worse off (this is why the probability of continuing needs to be sufficiently high).
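These dynamics are easy to check by simulation. A minimal sketch (the strategy names and the 100-round horizon are my own choices; a fixed horizon stands in for the random continuation):

```python
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
           ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

def hobbes(my_hist, their_hist):
    return "D"                      # always defect

def rousseau(my_hist, their_hist):
    return "C"                      # always cooperate

def axelrod(my_hist, their_hist):
    # Tit for tat: cooperate first, then copy the opponent's last move.
    return their_hist[-1] if their_hist else "C"

def play(s1, s2, rounds=100):
    """Average per-round score for each player over an iterated match."""
    h1, h2, t1, t2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFFS[(m1, m2)]
        t1, t2 = t1 + p1, t2 + p2
        h1.append(m1)
        h2.append(m2)
    return t1 / rounds, t2 / rounds
```

Axelrods among themselves average 3 per round and Hobbeses average 2; a lone Hobbes in Axelrod territory wins the first round and then gets punished down to about 2 per round, well below what the Axelrods earn from each other.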

The net result, then, will be that the Rousseau tribe dies out and only the Hobbes and Axelrod tribes remain. But that’s not the end of the story.

Look back at that payoff table. Both tribes are stable, but the Hobbeses are getting 2 each round, while the Axelrods are getting 3. Remember that these are offspring per couple per generation. This means that the Hobbes tribe will have a roughly constant population, while the Axelrods will have an increasing population.

If the two tribes then come into conflict, perhaps competing over resources, the larger population will most likely prevail. This means that, in the long run, the Axelrod tribe will come to dominate. In the end, all the world will be ruled by Axelrods.

And indeed, most human beings behave like Axelrods: We’re nice to most people most of the time, but we’re no chumps. Betray our trust, and you will be punished severely. (It seems we also have a small incursion of Hobbeses: We call them psychopaths. Perhaps there are a few Rousseaus among us as well, whom the Hobbeses exploit.)

What is this? It’s multilevel selection. It’s group selection, if you like that term. There’s clearly no better way to describe it.

Moreover, we can’t simply stop at reciprocal altruism as most mainstream theorists do; yes, Axelrods exhibit reciprocal altruism. But that’s not the only equilibrium! Why is reciprocal altruism so common? Why in the real world are there fifty Axelrods for every Hobbes? Multilevel selection.

And at no point did I assume either (1) that individual selection wasn’t operating, or (2) that the timescales of groups and individuals were the same. Indeed, I’m explicitly assuming the opposite: Individual selection continues to work at every generation, and groups only live or die over many generations.

The key insight that makes this possible is that the game is iterated—it happens over many rounds, and nobody knows exactly how many. This results in multiple Nash equilibria for individual selection, and then group selection can occur over equilibria.

This is by no means restricted to the Prisoner’s Dilemma. In fact, any nontrivial game will result in multiple equilibria when it is iterated, and group selection should always favor the groups that choose a relatively cooperative, efficient outcome. As long as such a strategy emerges by mutation, and gets some chance to get a foothold, it will be successful in the long run.

Indeed, since these conditions don’t seem all that difficult to meet, we would expect that group selection should actually occur quite frequently, and should be a major explanation for a lot of important forms of altruism.

And in fact this seems to be the case. Humans look awfully group-selected. (Like I said, we behave very much like Axelrods.) Many other social species, such as apes, dolphins, and wolves, do as well. There is altruism in nature that doesn’t look group-selected, for instance among eusocial insects; but much of the really impressive altruism seems more like equilibrium selection at the group level than it does like direct selection at the individual level.

Even multicellular life can be considered group selection: A bunch of cells “agree” to set aside some of their own interest in self-replication in favor of supporting a common, unified whole. (And should any mutated cells try to defect and multiply out of control, what happens? We call that cancer.) This can only work when there are multiple equilibria to select from at the individual level—but there nearly always are.

Economists aren’t that crazy

Dec 12 JDN 2459561

I’ve been seeing this meme go around lately, and I felt a need to respond:

Economics: “Humans only value things monetarily.”

Sociology: “Uh, I don’t…”

Economics: “Humans are always rational and value is calculated by complex internal calculus.”

Sociology: “Uhhh, Psy, can you help?”

Psychology: “That’s not how humans…”

Economics: “ALSO MY SYSTEM WILL GROW EXPONENTIALLY FOREVER!”

Physics: drops teacup

I have plenty of criticisms to make of neoclassical economics—but this is clearly unfair.

Economists aren’t that crazy.

Above all, economists don’t actually believe in exponential growth forever. I literally have never met one who does. The mainstream, uncontroversial (I daresay milquetoast) neoclassical growth model, the Solow-Swan model, predicts a long-run decline in the rate of economic growth. Indeed, I would not be surprised to find that long-run per-capita GDP growth is asymptotic, meaning that there is some standard of living that we can never expect the world to exceed. It’s simply a question of what that limit is, and it is most likely a good deal better than how we live even in First World countries.

It’s nothing more than a strawman of neoclassical economics to assert otherwise. Yes, economists do believe that current growth can and should continue for some time yet—though even among them it is controversial how long it will continue. But they absolutely do not believe that we can expect 3% annual growth in per-capita GDP for the next 1000 years. And indeed, it is precisely their mathematical sophistication that makes this so: They would be the first to recognize that this implies a 6.8 trillion-fold increase in standard of living, which is obviously ludicrous. A much more plausible figure for that timescale is something like 0.2%, which would be only a 7-fold increase over that same interval. And if you really want to extrapolate to millions of years, the only plausible long-run economic growth rate over that period is basically 0%. Yet billions of lives hinge upon whether it is actually 0.0001%, 0.0002%, or 0.0003%—if indeed human beings don’t go extinct long before then.
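The compounding arithmetic in that paragraph is easy to verify:

```python
# 3% annual per-capita growth compounded for 1000 years:
growth_3pct = 1.03 ** 1000      # about 7 trillion-fold

# 0.2% annual growth compounded over the same 1000 years:
growth_02pct = 1.002 ** 1000    # about 7-fold
```

A one-percentage-point difference in the exponent, compounded over a millennium, is the difference between a 7-fold and a trillion-fold change; that is exactly why no economist takes indefinite 3% growth seriously.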

What about the other two claims? Well, neoclassical economists do have a habit of assuming greater rationality than human beings actually exhibit, and of trying to value everything in monetary terms. And economists are nothing if not arrogant in their relationship to other fields of social science. So here, at least, there is a kernel of truth.

Yet that makes this at best hyperbole for comedic effect—and at worst highly misleading as to what actual economists believe. You can find a few fringe economists who might seriously assent to the claim “humans are always rational”, and you can easily find plenty of amoral corporate shills who are paid to say such things on TV. (Krugman refers to them as “professionally conservative economists”.)

Moreover, I think the behavioral economics paradigm still hasn’t penetrated fully enough—most economists will give lip service to the idea of irrational behavior without being willing to seriously face up to how frequent it is or what this implies for policy. But no serious mainstream economist actually believes that all human beings are always rational.

And while there is surely a tendency to over-emphasize monetary costs and try to put everything in monetary terms, I don’t think I’ve ever met an economist who genuinely believes that all humans value everything monetarily. At most they might think that everyone should value everything monetarily—and even then the only ones who say things like this are weird fringe figures like that guy who hates Christmas.

Am I reading too much into a joke? Maybe. But given how poorly most people understand economics, this kind of joke can do real damage. It’s already a big problem that (aforementioned) corporate shills can present themselves as economic experts, but if popular culture is accustomed to dismissing the claims of actual economic experts, that makes matters much worse. And rather than the playful ribbing that neoclassical economists well deserve (like Jon Stewart gave them: “People are screwy.” “You’re just now figuring this out?”), this meme mocks economists aggressively enough that it seems to be trying to actively undermine their credibility.

If COVID taught us anything, it should be that expertise matters. Trusting experts more than we did would have saved thousands of lives—and trusting them less would have doomed even more.

So maybe a joke that will make people trust economic experts less isn’t so harmless after all?

We must not tolerate this brazen authoritarianism

Jul 26 JDN 2459057

Imagine for a moment what this would feel like:

Your girlfriend, who works as an EMT, just got home and went to bed after a long shift. Suddenly you hear banging on your door. “Who is it?” you shout; no answer. “Who is it?” you ask again; still no answer. The banging continues.

You know there is a lot of crime in your neighborhood, so you bought a handgun to protect your family. Since it seems like someone is about to invade your home, now seems like the obvious time to use it. You get the gun, load it, and aim it at the doorway. You hesitate; are you really prepared to pull that trigger? You know that you could kill someone on the other side. But you need to protect your family. So you fire a few shots at the doorway, hoping it will be enough to scare them away.


The response is a hail of bullets from several different directions, several of which hit your girlfriend and kill her while she is asleep.


Then, the door breaks down and several police officers barge in, having never announced themselves as police officers. They arrest you. You learn later that they were serving a so-called “no knock warrant”, which was intended for someone who wasn’t even there. They were never supposed to be in your home in the first place. Your girlfriend is now dead. And then, to top it all off, they have the audacity to charge you with attempted murder of a police officer because you tried to defend your home.

Now imagine what this would feel like as well:

In the evening you joined a protest. It was a peaceful protest, and there were hardly even any police officers around. There was no rioting, no vandalism, no tear gas or rubber bullets; just people holding signs and chanting. It’s now about 2:00 AM, and the protest is ending for the night, so you begin walking home.

Suddenly a van pulls up next to you. It’s completely unmarked; it just looks like a rental car that anybody could have rented. The door slides open and men in tactical body armor leap out of it, pointing rifles at you. They demand that you get in the van with them, and since you think they’re likely to shoot you if you don’t, you comply.

They handcuff you, cover your eyes with your hat, and drive you somewhere. They unload you into a building, then frisk you, photograph you, and rummage through your belongings. Then, they put you into a cell. They have not identified themselves. They have not explained why they abducted you.

Only after they have put you into a cell do they identify themselves as federal agents and start reading you your Miranda rights. They still won’t tell you why you were arrested. They ask you to waive your right to counsel; when you refuse, they leave you there for an hour and a half and then release you. Only as you walk outside do you realize that you had been taken to a federal courthouse.

These stories did not happen in Zimbabwe or Congo or Nicaragua. They did not happen in Russia or China or Venezuela. They happened right here in the United States of America. The first one is the story of Kenneth Walker in Louisville, whose girlfriend Breonna Taylor was murdered by police who didn’t announce that they were police and were never supposed to be in his home. It wasn’t a completely random error; the intended target was someone Breonna Taylor knew. So yes, it was possible that the intended target—who did have a legitimate warrant out for his arrest—might have been present. But how does that justify not even announcing themselves as police?

The second is the story of Mark Pettibone in Portland, who was abducted by anonymous paramilitary forces in an unmarked van. The Department of Homeland Security (an Orwellian name for an agency if ever there were) released a report on the incidents of “violent anarchists” that justified their use of such extreme measures: Most of them are graffiti or vandalism. There are a few genuinely violent incidents in there: Some throwing rocks, some pointing laser pointers at police officers’ eyes, and at least one alleged pipe bomb; but in the whole report there is only one incident listed in which any police officers were injured.

This is authoritarianism. It is not like authoritarianism; it is not moving toward authoritarianism. It is authoritarianism. Secret police in unmarked vehicles abducting people off the street is simply something that should not be allowed to happen in a liberal democracy. Right now it is rare, and for this we should be grateful; but it should not be rare, it should be non-existent. And we should continue fighting until it is. This is not a utopian dream, like imagining that we could make rape or murder non-existent. This is a policy choice. No other First World country does this. (Indeed, are we even a First World country anymore? We were supposed to be the paragon of the First World, but I’m not so sure we even belong in the category anymore.) What we have made rare they have managed to avoid entirely.

While arrest warrants are a necessary part of law enforcement, “no-knock” warrants are inherently authoritarian. Police should be required to identify themselves: Not simply that they are law enforcement, but what agency they work for, their own names and badge numbers, and the reason they are conducting the arrest. A “no-knock” warrant would already be unjust even applied in the best of circumstances (capturing an organized crime boss, perhaps); but typically they are used for drug raids (is criminalizing drugs even right in the first place?), and in this case the person they wanted wasn’t even there.

Pettibone was at least promptly released. Walker will grieve the loss of his girlfriend for the rest of his life. Jonathan Mattingly, Brett Hankison, and Myles Cosgrove, the officers who shot Breonna Taylor, have still not been charged.

I wish that I could blame Trump for all of this and promise that it will go away when he loses the election in November (as statistical forecasts strongly predict he will). But while Trump and those who enable him have clearly accelerated and exacerbated this problem, the roots run much deeper.

For many people, particularly Black people, the United States is a de facto police state, and more or less always has been. (In fact, in most ways it’s probably better than it used to be—which isn’t to say that it is remotely acceptable right now, but to point out just how horrific it once was.) Harassment and abuse by police are commonplace, and death at the hands of police is a constant fear. Many of us are blissfully unaware of this, because we live in places where it doesn’t happen. This violence is highly concentrated: Major US cities vary in their rates of police homicide by nearly a full order of magnitude.

The power of our government is unmatched. We have the third-largest standing army (after China and India, each of which has four times our population), the fourth-largest police force (in addition to China and India, add Russia to the list—though their population is less than half ours), and the largest incarcerated population in the world. Our military spending is higher than the next ten countries combined. Our intelligence services are not simply the largest in the world; the CIA alone accounts for nearly two-thirds of all worldwide intelligence spending. And while the CIA is by far the largest, the US has over a dozen other intelligence agencies. When this power is abused—as it all too often is—the whole world feels the pain. We cannot afford to tolerate such abuses. We must stamp them out while we still can.

Getting Trump out won’t fix this. We must get him out, for a hundred thousand reasons, but that will not be nearly enough. Like hairline fractures in a steel beam that become wide gashes when the bridge is loaded, there are deep, structural flaws in our society and our system of government that are now becoming visible under the strain of crisis. I for one believe that these flaws can still be mended. But the longer we wait, the closer we come to a total collapse.

Fear not to “overreact”

Mar 29 JDN 2458938

It could be given as a story problem in an algebra class, if you didn’t mind terrifying your students:

A virus spreads exponentially, so that the population infected doubles every two days. Currently 10,000 people are infected. How long will it be until 300,000 are infected? Until 10,000,000 are infected? Until 600,000,000 are infected?

The answers:

300,000/10,000 is about 32 = 2^5, so it will take 5 doublings, or 10 days.

10,000,000/10,000 is about 1024=2^10, so it will take 10 doublings, or 20 days.

600,000,000/10,000 is about 64*1024=2^6*2^10, so it will take 16 doublings, or 32 days.

This is the approximate rate at which COVID-19 spreads if uncontrolled.
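The arithmetic in those answers is easy to check; here is a minimal sketch (the function and its parameter names are mine, purely for illustration):

```python
import math

def days_to_reach(target, current=10_000, doubling_days=2):
    """Days until an exponentially growing count reaches `target`,
    given that it doubles every `doubling_days` days."""
    doublings = math.log2(target / current)
    return doublings * doubling_days

# Rounded to whole doublings, as in the story problem:
for target in (300_000, 10_000_000, 600_000_000):
    print(target, round(days_to_reach(target)))
```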

Fortunately it is not completely uncontrolled; there were about 10,000 confirmed infections on January 30, and there are now about 300,000 as of March 22. This is about 50 days, so the daily growth rate has averaged about 7%. On the other hand, this is probably a substantial underestimate, because testing remains very poor, particularly here in the US.
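A quick check of that average growth rate, using the dates and counts given above:

```python
from datetime import date

# Jan 30 to Mar 22, 2020 — roughly the "about 50 days" in the text:
days = (date(2020, 3, 22) - date(2020, 1, 30)).days

# 30-fold growth over that span implies this average daily factor:
growth_factor = (300_000 / 10_000) ** (1 / days)
print(days, f"{growth_factor - 1:.1%}")  # just under 7% per day
```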

Yet the truth is, we don’t know how bad COVID-19 is going to get. Some estimates suggest it may be nearly as bad as the 1918 flu pandemic; others say it may not be much worse than H1N1. Perhaps all this social distancing and quarantine is an overreaction? Perhaps the damage from closing all the schools and restaurants will actually be worse than the damage from the virus itself?

Yes, it’s possible we are overreacting. But we really shouldn’t be too worried about this possibility.

This is because the costs here are highly asymmetric. Overreaction has a moderate, fairly predictable cost. Underreaction could be utterly catastrophic. If we overreact, we waste a quarter or two of productivity, and then everything returns to normal. If we underreact, millions of people die.
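To make the asymmetry concrete, here is a toy expected-cost comparison; every number in it is hypothetical, chosen only to illustrate the logic:

```python
# Hypothetical costs, in trillions of dollars (illustrative only):
cost_of_overreacting = 1.0    # lost output from an unnecessary lockdown
cost_of_underreacting = 20.0  # an uncontrolled, severe pandemic
p_severe = 0.2                # chance the pandemic is truly severe

expected_cost_act = cost_of_overreacting
expected_cost_wait = p_severe * cost_of_underreacting
print(expected_cost_act, expected_cost_wait)

# Acting is cheaper in expectation whenever p_severe exceeds the cost ratio:
threshold = cost_of_overreacting / cost_of_underreacting  # here, 0.05
```

With these stand-in numbers, acting beats waiting even if the severe scenario is only a little more than 5% likely, which is the sense in which we should not fear overreacting.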

This is what it means to err on the side of caution: If we are not 90% sure that we are overreacting, then we should be doing more. We should be fed up with the quarantine procedures and nearly certain that they are not all necessary. That means we are doing the right thing.

Indeed, the really terrifying thing is that we may already have underreacted. These graphs of what will happen under various scenarios really don’t look good:

[Image: pandemic_graph]
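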

But there may still be a chance to react adequately. The advice for most of us seems almost too simple: Stay home. Wash your hands.

Monopsony is all around us

Mar 15 JDN 2458924

Perhaps because of the board game (the popularity of which honestly baffles me; it’s really not a very good game!), the concept of monopoly is familiar to most people: A market with one seller and many buyers can command high prices and high profits for the seller.

But the opposite situation, a market with many sellers and one buyer, is equally problematic, yet far less well-known. This is called monopsony. Whereas in a monopoly prices are too high, in a monopsony prices are too low.

I have long suspected, but the data now confirms, that the most widespread form of monopsony occurs in labor markets. This is a particularly bad place for monopsony, because it means that instead of consumer prices being lower, wages will be lower. Monopsonistic labor markets are bad in two ways: They lower wages and they increase unemployment.


Monopsonistic labor markets are one of the reasons why raising the minimum wage seems to have very little effect on employment. In the presence of monopsony, forcing employers to increase wages won’t cause them to fire workers; it will just eat into their profits. In some cases it can actually cause them to hire more workers.
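The logic can be sketched with a standard textbook monopsony model; the supply curve and productivity figures below are invented for illustration, not drawn from any data:

```python
# Labor supply: to attract L workers, the employer must pay w(L) = 5 + 0.1*L.
# Each worker produces $20/hour of revenue (marginal revenue product).
MRP = 20.0

def wage_needed(L):
    return 5.0 + 0.1 * L

def profit(L, min_wage=0.0):
    # The employer pays whichever is higher: the supply wage or the legal floor.
    w = max(wage_needed(L), min_wage)
    return (MRP - w) * L

def best_employment(min_wage=0.0):
    # Grid search over employment levels (fine enough for this example)
    return max(range(0, 201), key=lambda L: profit(L, min_wage))

L_monopsony = best_employment()             # 75 workers at a wage of $12.50
L_minwage = best_employment(min_wage=15.0)  # 100 workers at a wage of $15.00
print(L_monopsony, L_minwage)
```

A minimum wage set anywhere between the monopsony wage ($12.50 here) and the marginal revenue product ($20) raises employment rather than lowering it, because the employer no longer suppresses hiring to hold wages down.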

Take a look at this map, from the Roosevelt Institute:

[Image: widespread-labor-monopsony1]

This map is color-coded by commuting zone, based on whether the average labor market (different labor markets weighted by their number of employees) is monopsonistic. Commuting zones with only a few major employers are colored red, while those with many employers are colored green. In between are shaded orange and yellow. (Not a very colorblind-friendly coding scheme, I’m afraid.)

Basically you can see that the only places where labor markets are not monopsonistic are in major metro areas. Suburban areas are typically yellow, and rural areas are almost all orange or red.


It seems then that we have two choices for where we want to live: We can live in rural areas and have monopsonistic labor markets with low wages and competitive real estate markets with low housing prices, or we can live in urban areas and have competitive labor markets with high wages and monopolistic real estate markets with high housing prices. There’s hardly anywhere we can live where both wages and housing prices are fair.

Actually the best seems to be Detroit! Median housing price in the Detroit area is an affordable $179,000, while median household income is a low but not terrible $31,000. This means you can pay off a house spending 30% of your income in about 20 years. That’s the American Dream, right there.

Compare this to the San Francisco area, where median housing price is $1.1 million and median income is an impressive $104,000. This means it would take over 35 years to pay off your house spending 30% of your income. (And that’s not accounting for interest!) You can make six figures in San Francisco and still be considered “low income”, because housing prices there are so absurd.
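A quick check of the payoff arithmetic (ignoring interest, taxes, and other costs, as in the text):

```python
def payoff_years(price, income, share=0.30):
    """Years to pay off a house spending `share` of income (no interest)."""
    return price / (income * share)

detroit = payoff_years(179_000, 31_000)
san_francisco = payoff_years(1_100_000, 104_000)
print(round(detroit), round(san_francisco))  # roughly 19 vs 35 years
```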

Of course, student loans are denominated in nominal terms, so you might actually be able to pay off your student loans faster living in San Francisco than you could in Detroit. Say taxes are 20%, so these become after-tax incomes of $25,000 and $83,000. Even if you spend only a third of your income on housing in Detroit and spend two-thirds in San Francisco, that leaves you with $16,600 in Detroit but $27,600 in San Francisco. Of course other prices are different too, but it seems quite likely that being able to pay $5,000 per year on your student loans is easier living in San Francisco than it is in Detroit.
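And the student-loan comparison, following the same rounded after-tax figures:

```python
detroit_after_tax = 25_000  # roughly $31,000 less 20% tax
sf_after_tax = 83_000       # roughly $104,000 less 20% tax

detroit_left = detroit_after_tax * (1 - 1/3)  # a third spent on housing
sf_left = sf_after_tax * (1 - 2/3)            # two-thirds spent on housing
print(round(detroit_left), round(sf_left))
```

Even after the much larger housing share, the San Francisco worker has roughly $11,000 more per year left over, which is why the $5,000 loan payment is easier there.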

What can be done about monopsony in labor markets? First, we could try to split up employers—the FTC already doesn’t do enough to break up monopolies, but it basically does nothing to break up monopsonies. But that may not always be feasible, particularly in rural areas. And there are genuine economies of scale that can make larger firms more efficient in certain ways; we don’t want to lose those.

Perhaps the best solution is the one we used to use, and most of the First World continues to use: Labor unions. Union membership in the US has declined by half in the last 30 years. Europe is heavily unionized, and the most unionized of all are the Scandinavian countries—probably not a coincidence that these are the most prosperous places in the world.


At first glance, labor unions seem anti-competitive: They act like a monopoly. But when you currently have a monopsony, adding a monopoly can actually be a good thing. Instead of one seller and many buyers, resulting in prices that are too low, you can have one seller and one buyer, resulting in prices that are negotiated and can, at least potentially, be much fairer. This market structure is called a bilateral monopoly, and while it’s not as good as perfect competition, it’s considerably more efficient than either monopsony or monopoly alone.

A Socratic open letter to transphobes everywhere

Feb 23 JDN 2458903

This post is a bit different than usual. This is an open letter to those who doubt that trans people actually exist, or insist on using the wrong pronouns; above all it is an open letter to those who want to discriminate against trans people, denying trans people civil rights or the dignity to use public bathrooms in peace. Most of my readers are probably not such people, but I think you’ll still benefit from reading it—perhaps you can use some of its arguments when you inevitably encounter someone who is.

Content warning: Because of how sex and gender are tied up together in transphobes’ minds, I’m going to need to talk a little bit about sexual anatomy and genital surgery. If such topics make you uncomfortable, feel free to skip this post.

Dear Transphobe:

First of all, I’m going to assume you are a man. Statistically you probably are, in which case that works. If by chance you’re not, well, now you know what it feels like for people to assume your gender and never correct themselves. You’re almost certainly politically right-wing, so that’s an even safer assumption on my part.

You probably think that gender and sex are interchangeable things, that the idea of a woman born with a penis or a man born without one is utter nonsense. I’m here to hopefully make you question this notion.

Let’s start by thinking about your own identity. You are a man. I presume that you have a penis. I am not going to make the standard insult many on the left would and say that it’s probably a small penis. In fact I have no particular reason to believe that, and in any case the real problem is that we as a society have so thoroughly equated penis size with masculinity, and masculinity with value as a human being. Right-wing attitudes of the sort that lead to discriminating against LGBT people are strongly correlated with aggressive behaviors to assert one’s masculinity. Even if I had good reason—which I assuredly do not—to attack your masculinity, doing so would be inherently counterproductive, causing you to double down on the same aggressive, masculinity-signaling behaviors. If it so happens that you are insecure in your masculinity, I certainly don’t want to make that worse, as masculine insecurity was one of the strongest predictors of voting for Donald Trump. You are a man, and I make no challenges to your masculinity whatsoever. I’m even prepared to concede that you are more manly than I am, whatever you may take that to mean.

Let us consider a thought experiment. Suppose that you were to lose your penis in some tragic accident. Don’t try to imagine the details; I’m sure the mere fact of it is terrifying enough. Suppose a terrible day were to arrive where you wake up in a hospital and find you no longer have a penis.

I have a question for you now: Should such a terrible day arrive, would you cease to be a man?

I contend that you would remain a man. I think that you, upon reflection, would also contend the same. There are a few thousand men in the world who have undergone penectomy, typically as a treatment for genital cancer. You wouldn’t even know unless you saw them naked or they told you. As far as anyone else can tell, they look and act as men, just as they did before their surgery. They are still men, just as they were before.

In fact, it’s quite likely that you would experience a phantom limb effect—where the limb that remains in your self-image but is no longer attached to your body is your penis. You would sometimes feel “as if” your penis was still there, because your brain continues to have the neural connections that generate such sensations.

An even larger number of men have undergone castration for various reasons, and while they do often find that their thoughts and behavior change due to the changes in hormone balances, they still consider themselves men, and are generally considered men by others as well. We do not even consider them transgender men; we simply consider them men.

But does this not mean, then, that there is something more to being a man than simply having male anatomy?

Perhaps it has to do with other body parts, or some totality of the male body? Let’s consider another thought experiment then. Suppose that by some bizarre event you were transported into a female body. The mechanism isn’t important: Perhaps it was a mad scientist, or aliens, or magic. But just suppose that somehow or other, while you slept, your brain in its current state was transported into an entirely female body, complete with breasts, vulva, wide hips, narrow shoulders—the whole package. When you awaken, your body is female.

Such a transition would no doubt be distressing and disorienting. People would probably begin to see you as a woman when they looked at you. You would be denied access to men’s spaces you had previously used, and suddenly granted access to women’s spaces you had never before been allowed into. And who knows what sort of effect the hormonal changes would have on your mind?

Particularly if you are sexually attracted to women, you might imagine that you would enjoy this transformation: Now you get to play with female body parts whenever you want! But think about this matter carefully, now: While there might be some upsides, would you really want this change to happen? You have to now wear women’s clothing, use women’s restrooms, cope with a menstrual cycle. Everyone will see you as a woman and treat you as a woman. (How do you treat women, by the way? Is this something you’ve thought carefully about?)

And if you still think that being a woman isn’t so bad, maybe it isn’t—if your mind and body are in agreement. But remember that you’ve still got the mind of a man; you still feel that mental attachment to body parts that are no longer present, and these new body parts you have don’t feel like they are properly your own.

But throughout this harrowing experience, would you still be a man?

Once again I contend that you would. You would now feel a deep conflict between your mind and your body—dare I call it gender dysphoria?—and you would probably long to change your body back to what it was, or at least back to a body that is male.

You would once again experience phantom limb effects—but now all over, everywhere your new body deviated from your original form. In your brain there is a kind of map of where your body parts are supposed to be: Your shoulders are supposed to end here, your legs are supposed to end there, and down here there is supposed to be a penis, not vulva. This map is deeply ingrained into your mind, its billions of strands almost literally woven into the fabric of your brain.

We are presumably born with such a map: By some mindbogglingly complex mix of genetic and environmental factors our brains organize themselves into specific patterns, telling us what kind of body we’re supposed to have. Some of this structuring may go on before birth, some while we are growing up. But surely by the time we are adults the process is complete.

This mental map does allow for some flexibility: When we were young and growing, it allowed us to adjust to our ever-increasing height. Now that we are older, it allows us to adjust to gaining or losing weight. But this flexibility is quite limited: It might take years to adjust to finding that we had suddenly grown a tail, or suddenly changed from male to female; perhaps we could never adjust at all.

Now imagine that this transformation didn’t happen by some sudden event when you were an adult, but by some quirk of ontogeny while you were still in the womb. Suppose that you were born this way: in a body that is female, but with a mind that is male.

In such a state, surely something is wrong, in the same way that being born with sickle-cell anemia or spina bifida is wrong. There are more ambiguous cases: Is polydactyly a disorder? Sometimes? But surely there are some ways to be born that are worth correcting, and “female body, male mind” seems like one of them.

And yet, this is often precisely how trans people describe their experience. Not always—humans are nothing if not diverse, and trans people are no exception—but quite frequently, they will say that they feel like “a man in a woman’s body” or the reverse. By all accounts, they seem to have precisely this hypothetical condition: The gender of their mind does not match the sex of their body. And since this mismatch causes great suffering, we ought to correct it.

But then the question becomes: Correct it how?

Broadly speaking, it seems we’ve only two options: Change the body, or change the mind. If you were in this predicament, which would you want?

In the case of being transferred into a new body as an adult, I’m quite sure you’d prefer to change your body, and keep your mind as it is. You don’t belong in this new body, and you want your old one back.

Yet perhaps you think that if you were born with this mismatch, things might be different: Perhaps in such a case you think it would make more sense to change the mind to match the body. But I ask you this: Which is more fundamental to who you are? If you are still an infant, we can’t ask your opinion; but what do you suppose you’d say if we could?

Or suppose that you notice the mismatch later, as a child, or even as a teenager. Before that, something felt off somehow, but you couldn’t quite put your finger on it. But now you realize where the problem lies: You were born in a body of the wrong sex. Now that you’ve had years to build up your identity, would you still say that the mind is the right thing to change? Once you can speak, now we can ask you—and we do ask such children, and their answers are nigh-unanimous: They want to change their bodies, not their minds. David Reimer was raised as a girl for years, and yet he always still knew he was a boy and tried to act like one.

In fact, we don’t even know how to change the gender of a mind. Despite literally millennia of civilization trying at great expense to enforce particular gender norms on everyone’s minds, we still get a large proportion of the population deviating substantially from them—if you include mild enough deviations, probably a strict majority. If I seem a soft “soy boy” to you (and, I admit, I am both bisexual and vegetarian—though I already knew I was the former before I became the latter), ask yourself this: Why would I continue to deviate from your so ferociously-enforced gender norms, if it were easy to conform?

Whereas, we do have some idea how to change a body. We have hormonal and surgical treatments that allow people to change their bodies substantially—trans women can grow breasts, trans men can grow beards. Often this is enough to make people feel much more comfortable in their own bodies, and also present themselves in a way that leads others to recognize them as their desired gender.

Sex reassignment surgery is not as reliable, especially for trans men: While constructing an artificial vulva works relatively well, building a good artificial penis still largely eludes us. Yet technological progress in this area continues, and we’ve improved our ability to change the sex of bodies substantially in just the last few decades—while, let me repeat, we have not meaningfully improved our ability to change the gender of minds in the last millennium.

If we could reliably change the gender of minds, perhaps that would be an option worth considering. But ought implies can: We cannot be ethically expected to do that which we are simply incapable of doing.

At present, this means that our only real options are two: We can accept the gender of the mind, change the sex of the body, and treat this person as the gender they identify themselves as; or we can demand that they repress and conceal their mental gender in favor of conforming to the standards we have imposed upon them based on their body. The option you may most prefer—accept the body, change the mind—simply is not feasible with any current or foreseeable technology.

We have tried repressing transgender identity for centuries: It has brought endless suffering, depression, suicide.

But now that we are trying to affirm transgender identity the outlook seems much better: Simply having one adult in their life who accepts their gender identity reduces the risk of a transgender child attempting suicide by 40%. Meta-analysis of research on the subject shows that gender transition, while surely no panacea, does overall improve outcomes for transgender people—including reducing risk of depression and suicide. (That site is actually refreshingly nuanced; it does not simply accept either the left-wing or right-wing ideology on the subject, instead delving deeply into the often quite ambiguous evidence.)

Above all, ask yourself: If you ever found yourself in the wrong sort of body, what would you want us to do?

The backfire effect has been greatly exaggerated

Sep 8 JDN 2458736

Do a search for “backfire effect” and you’re likely to get a large number of results, many of them from quite credible sources. The Oatmeal did an excellent comic on it. The basic notion is simple: “[…]some individuals when confronted with evidence that conflicts with their beliefs come to hold their original position even more strongly.”

The implications of this effect are terrifying: There’s no point in arguing with anyone about anything controversial, because once someone strongly holds a belief there is nothing you can do to ever change it. Beliefs are fixed and unchanging, stalwart cliffs against the petty tides of evidence and logic.

Fortunately, the backfire effect is not actually real—or if it is, it’s quite rare. Over many years those seemingly-ineffectual tides can erode those cliffs down and turn them into sandy beaches.

The most recent studies with larger samples and better statistical analysis suggest that the typical response to receiving evidence contradicting our beliefs is—lo and behold—to change our beliefs toward that evidence.

To be clear, very few people completely revise their worldview in response to a single argument. Instead, they try to make a few small changes and fit them in as best they can.

But would we really expect otherwise? Worldviews are holistic, interconnected systems. You’ve built up your worldview over many years of education, experience, and acculturation. Even when someone presents you with extremely compelling evidence that your view is wrong, you have to weigh that against everything else you have experienced prior to that point. It’s entirely reasonable—rational, even—for you to try to fit the new evidence in with a minimal overall change to your worldview. If it’s possible to make sense of the available evidence with only a small change in your beliefs, it makes perfect sense for you to do that.

What if your whole worldview is wrong? You might have based your view of the world on a religion that turns out not to be true. You might have been raised into a culture with a fundamentally incorrect concept of morality. What if you really do need a radical revision—what then?

Well, that can happen too. People change religions. They abandon their old cultures and adopt new ones. This is not a frequent occurrence, to be sure—but it does happen. It happens, I would posit, when someone has been bombarded with contrary evidence not once, not a few times, but hundreds or thousands of times, until they can no longer sustain the crumbling fortress of their beliefs against the overwhelming onslaught of argument.

I think the reason the backfire effect feels true to us is that our life experience is largely that “argument doesn’t work”; we think back to all the times that we have tried to convince someone to change a belief that was important to them, and we can find so few examples of when it actually worked. But this is setting the bar much too high. You shouldn’t expect to change an entire worldview in a single conversation. Even if your worldview is correct and theirs is not, that one conversation can’t have provided sufficient evidence for them to rationally conclude that. One person could always be mistaken. One piece of evidence could always be misleading. Even a direct experience could be a delusion or a foggy memory.

You shouldn’t be trying to turn a Young-Earth Creationist into an evolutionary biologist, or a climate change denier into a Greenpeace member. You should be trying to make that Creationist question whether the Ussher chronology is really so reliable, or if perhaps the Earth might be a bit older than a 17th century theologian interpreted it to be. You should be getting the climate change denier to question whether scientists really have such a greater vested interest in this than oil company lobbyists. You can’t expect to make them tear down the entire wall—just get them to take out one brick today, and then another brick tomorrow, and perhaps another the day after that.

The proverb is of uncertain provenance, variously attributed, rarely verified, but it is still my favorite: No single raindrop feels responsible for the flood.

Do not seek to be a flood. Seek only to be a raindrop—for if we all do, the flood will happen sure enough. (There’s a version more specific to our times: So maybe we’re snowflakes. I believe there is a word for a lot of snowflakes together: Avalanche.)

And remember this also: When you argue in public (which includes social media), you aren’t just arguing for the person you’re directly engaged with; you are also arguing for everyone who is there to listen. Even if you can’t get the person you’re arguing with to concede even a single point, maybe there is someone else reading your post who now thinks a little differently because of something you said. In fact, maybe there are many people who think a little differently—the marginal impact of slacktivism can actually be staggeringly large if the audience is big enough.

This can be frustrating, thankless work, for few people will ever thank you for changing their mind, and many will condemn you even for trying. Finding out you were wrong about a deeply-held belief can be painful and humiliating, and most people will attribute that pain and humiliation to the person who called them out for being wrong—rather than placing the blame where it belongs, on whatever source or method made them wrong in the first place. Being wrong feels just like being right.

But this is important work, among the most important work that anyone can do. Philosophy, mathematics, science, technology—all of these things depend upon it. Changing people’s minds by evidence and rational argument is literally the foundation of civilization itself. Every real, enduring increment of progress humanity has ever made depends upon this basic process. Perhaps occasionally we have gotten lucky and made the right choice for the wrong reasons; but without the guiding light of reason, there is nothing to stop us from switching back and making the wrong choice again soon enough.

So I guess what I’m saying is: Don’t give up. Keep arguing. Keep presenting evidence. Don’t be afraid that your arguments will backfire—because in fact they probably won’t.

Privatized prisons were always an atrocity

Aug 4 JDN 2458700

Let’s be clear: The camps that Trump built on the border absolutely are concentration camps. They aren’t extermination camps—yet?—but they are in fact “a place where large numbers of people (such as prisoners of war, political prisoners, refugees, or the members of an ethnic or religious minority) are detained or confined under armed guard.” Above all, it is indeed the case that “Persons are placed in such camps often on the basis of identification with a particular ethnic or political group rather than as individuals and without benefit either of indictment or fair trial.”

And I hope it goes without saying that this is an unconscionable atrocity that will remain a stain upon America for generations to come. Trump was clear from the beginning that this was his intention, and thus this blood is on the hands of anyone who voted for him. (The good news is that even they are now having second thoughts: Even a majority of Fox News viewers agrees that Trump has gone too far.)

Yet these camps are only a symptom of a much older disease: We should have seen this sort of cruelty and inhumanity coming when first we privatized prisons.

Krugman makes the point using economics: Without market competition or public view, how can the private sector be kept from abuse, corruption, and exploitation? And this is absolutely true—but it is not the strongest reason.

No, the reason privatized prisons are unjust is much more fundamental than that: Prisons are a direct incursion against liberty. The only institution that should ever have that authority is a democratically-elected government restrained by a constitution.

I don’t care if private prisons were cleaner and nicer and safer and more effective at rehabilitation (as you’ll see from those links, exactly the opposite is true across the board). No private institution has the right to imprison people. No one should be making profits from locking people up.

This is the argument we should have been making for the last 40 years. You can’t privatize prisons, because no one has a right to profit from locking people up. You can’t privatize the military, because no one has a right to profit from killing people. These are basic government functions precisely because they are direct incursions against fundamental rights; though such incursions are sometimes necessary, we allow only governments to make them, because democracy is the only means we have found to keep them from being used indiscriminately. (And even then, there are always abuses and we must remain eternally vigilant.)

Yes, obviously we must shut down these concentration camps as soon as possible. But we can’t stop there. This is a symptom of a much deeper disease: Our liberty is being sold for profit.

“Harder-working” countries are not richer

July 28 JDN 2458693

American culture is obsessed with work. We define ourselves by our professions. We are one of only a handful of countries in the world that don’t guarantee vacations for their workers. Over 50 million Americans suffer from chronic sleep deprivation, mostly due to work. Then again, we are also an extremely rich country; perhaps our obsession with work is what made us so rich?

Well… not really. Take a look at this graph, which I compiled from OECD data:

 

[Image: Worker_productivity]

The X-axis shows the average number of hours per worker per year. I think this is the best measure of a country’s “work obsession”, as it reflects the length of the work week, the proportion of full-time work, and the amount of vacation time. At 1,786 hours per worker per year, the US is not actually the highest: That title goes to Mexico, at an astonishing 2,148 hours per worker per year. The lowest is Germany at only 1,363 hours per worker per year. Converted into standard 40-hour work weeks, this means that on average Americans work about 45 weeks per year, Germans about 34, and Mexicans about 54—that is, Mexicans work more than full-time every week of the year.

The Y-axis shows GDP per worker per year. I calculated this by multiplying GDP per work hour (a standard measure of labor productivity) by average number of work hours per worker per year. At first glance, these figures may seem too large; for instance they are $114,000 in the US and $154,000 in Ireland. But keep in mind that this is per worker, not per person; the usual GDP per capita figure divides by everyone in the population, while this is only dividing by the number of people who are actively working. Unemployed people are not included, and neither are children or retired people.
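The conversions above can be reproduced directly from the quoted figures (the implied US hourly productivity is my back-of-envelope derivation, not a number taken from the OECD data):

```python
# Average annual hours per worker, as quoted in the text:
hours = {"Mexico": 2_148, "United States": 1_786, "Germany": 1_363}

# Convert annual hours into standard 40-hour work weeks:
weeks = {country: h / 40 for country, h in hours.items()}
print({c: round(w, 1) for c, w in weeks.items()})

# GDP per worker = (GDP per hour) * (hours per worker per year);
# so the quoted US figure of ~$114,000 implies roughly $64/hour:
us_gdp_per_hour = 114_000 / hours["United States"]
print(round(us_gdp_per_hour))
```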

There is an obvious negative trend line here. While Ireland is an outlier with exceptionally high labor productivity, the general pattern is clear: the countries with the most GDP per worker actually work the fewest hours. Once again #ScandinaviaIsBetter: Norway and Denmark are near the bottom for work hours and near the top for GDP per worker. The countries that work the most hours, like Mexico and Costa Rica, have the lowest GDP per worker.

This is actually quite remarkable. We would expect that productivity per hour decreases as work hours increase; that’s not surprising at all. But productivity per worker decreasing means that these extra hours are actually resulting in less total output. We are so overworked, overstressed, and underslept that we actually produce less than our counterparts in Germany or Denmark who spend less time working.

Where we would expect the graph of output as a function of hours to look like the blue line below, it actually looks more like the orange line:

[Image: Labor_output]

Rather than merely increasing at a decreasing rate, output per worker actually decreases as we put in more hours—and does so over most of the range in which countries actually work. It wouldn’t be so surprising if this sort of effect occurred above, say, 2,000 hours per year, when you start running out of time to do anything else; but in fact it seems to be happening somewhere around 1,400 hours per year, which is less than most countries work.

Only a handful of countries—mostly Scandinavian—actually seem to be working the right amount; everyone else is working too much and producing less as a result.

And note that this is not restricted to white-collar or creative jobs where we would expect sleep deprivation and stress to have a particularly high impact. This includes all jobs. Our obsession with work is actually making us poorer!