The Butlerian Jihad is looking better all the time

Mar 24 JDN 2460395

A review of The Age of Em by Robin Hanson

In the Dune series, the Butlerian Jihad was a holy war against artificial intelligence that resulted in a millennia-long taboo against all forms of intelligent machines. It was effectively a way to tell a story about the distant future without basically everything being about robots or cyborgs.

After reading Robin Hanson’s book, I’m starting to think that maybe we should actually do it.

Thus it is written: “Thou shalt not make a machine in the likeness of a human mind.”

Hanson says he’s trying to reserve judgment and present objective predictions without evaluation, but it becomes very clear throughout that this is the future he wants, as well as—or perhaps even instead of—the world he expects.

In many ways, it feels like he has done his very best to imagine a world of true neoclassical rational agents in perfect competition, a sort of sandbox for the toys he’s always wanted to play with. Throughout he very much takes the approach of a neoclassical economist, making heroic assumptions and then following them to their logical conclusions, without ever seriously asking whether those assumptions actually make any sense.

To his credit, Hanson does not buy into the hype that AGI will be successful any day now. He predicts that within about 100 years we will achieve the ability to fully emulate human brains, and thus create a sort of black-box AGI that behaves very much like a human. Given how the Blue Brain Project has progressed (much slower than its own hype machine told us it would—and let it be noted that I predicted this from the very beginning), I think this is a fairly plausible time estimate. He refers to a mind emulated in this way as an “em”; I have mixed feelings about the term, but I suppose we did need some word for that, and it certainly has conciseness on its side.

Hanson believes that a true understanding of artificial intelligence will only come later, and the sort of AGI that can be taken apart and reprogrammed for specific goals won’t exist for at least a century after that. Both of these sober, reasonable predictions are deeply refreshing in a field that’s been full of people saying “any day now” for the last fifty years.

But Hanson’s reasonableness just about ends there.

In The Age of Em, government is exactly as strong as Hanson needs it to be. Somehow it simultaneously ensures a low crime rate among a population that doubles every few months while also having no means of preventing that population growth. Somehow it ensures that there is no labor collusion and corporations never break the law, but without imposing any regulations that might reduce efficiency in any way.

All of this begins to make more sense when you realize that Hanson’s true goal here is to imagine a world where neoclassical economics is actually true.

He realized it didn’t work on humans, so instead of giving up the theory, he gave up the humans.

Hanson predicts that ems will casually make short-term temporary copies of themselves called “spurs”, designed to perform a particular task and then get erased. I guess maybe he would, but I for one would not so cavalierly create another person and then make their existence dedicated to doing a single job before they die. That I created this person, and that they are very much like me, seem like reasons to care more about their well-being, not less! You’re asking me to enslave and murder my own child. (Honestly, the fact that Robin Hanson thinks ems will do this all the time says more about Robin Hanson than anything else.) Any remotely sane society of ems would ban the deletion of another em under any but the most extreme circumstances, and indeed treat it as tantamount to murder.

Hanson predicts that we will only copy the minds of a few hundred people. This is surely true at some point—the technology will take time to develop, and we’ll have to start somewhere. But I don’t see why we’d stop there, when we could continue to copy millions or billions of people; and his choices of who would be emulated, while not wildly implausible, are utterly terrifying.

He predicts that we’d emulate genius scientists and engineers; okay, fair enough, that seems right. I doubt that the benefits of doing so will be as high as many people imagine, because scientific progress actually depends a lot more on the combined efforts of millions of scientists than on rare sparks of brilliance by lone geniuses; but those people are definitely very smart, and having more of them around could be a good thing. I can also see people wanting to do this, and thus investing in making it happen.

He also predicts that we’d emulate billionaires. Now, as a prediction, I have to admit that this is actually fairly plausible; billionaires are precisely the sort of people who are rich enough to pay to be emulated and narcissistic enough to want to. But where Hanson really goes off the deep end here is that he sees this as a good thing. He seems to honestly believe that billionaires are so rich because they are so brilliant and productive. He thinks that a million copies of Elon Musk would produce a million hectobillionaires—when in reality it would produce a million squabbling narcissists, who at best would have to split the same $200 billion between them, and might very well end up with less because they squander it.

Hanson has a long section on trying to predict the personalities of ems. Frankly this could just have been dropped entirely; it adds almost nothing to the book, and the book is much too long. But the really striking thing to me about that section is what isn’t there. He goes through a long list of studies that found weak correlations between various personality traits like extroversion or openness and wealth—mostly comparing something like the 20th percentile to the 80th percentile—and then draws sweeping conclusions about what ems will be like, under the assumption that ems are all drawn from people in the 99.99999th percentile. (Yes, upper-middle-class people are, on average, more intelligent and more conscientious than lower-middle-class people. But do we even have any particular reason to think that the personalities of people who make $150,000 are relevant to understanding the behavior of people who make $15 billion?) But he completely glosses over the very strong correlations that specifically apply to people in that very top super-rich class: They’re almost all narcissists and/or psychopaths.

Hanson predicts a world where each em is copied many, many times—millions, billions, even trillions of times, and also in which the very richest ems are capable of buying parallel processing time that lets them accelerate their own thought processes to a million times faster than a normal human. (Is that even possible? Does consciousness work like that? Who knows!?) The world that Hanson is predicting is thus one where all the normal people get outnumbered and overpowered by psychopaths.

Basically this is the most abjectly dystopian cyberpunk hellscape imaginable. And he talks about it the whole time as if it were good.

It’s like he played the game Action Potential and thought, “This sounds great! I’d love to live there!” I mean, why wouldn’t you want to owe a life-debt on your own body and have to work 120-hour weeks for a trillion-dollar corporation just to make the payments on it?

Basically, Hanson doesn’t understand how wealth is actually acquired. He was trained as an economist, yet his understanding of capitalism basically amounts to believing in magic. He thinks that competitive markets just somehow perfectly automatically allocate wealth to whoever is most productive, and thus concludes that whoever is wealthy now must just be that productive.

I can see no other way to explain his wildly implausible predictions that the em economy will double every month or two. A huge swath of the book depends upon this assumption, but he waits until halfway through the book to even try to defend it, and then does an astonishingly bad job of doing so. (Honestly, even if you buy his own arguments—which I don’t—they seem to predict that population would grow with Moore’s Law—doubling every couple of years, not every couple of months.)
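As a back-of-the-envelope illustration of just how different those two growth rates are (my own arithmetic, not anything from the book), here is a minimal sketch in Python:

```python
# Compare total growth under two doubling times over a 10-year span.
# Purely illustrative arithmetic; these are not Hanson's own figures.
years = 10

doublings_if_monthly = years * 12        # doubling every month
doublings_if_moore = years * 12 / 24     # doubling every two years (Moore's-Law-ish)

growth_monthly = 2 ** doublings_if_monthly  # about 1.3e36-fold
growth_moore = 2 ** doublings_if_moore      # 32-fold

print(f"{growth_monthly:.2e}x vs {growth_moore:.0f}x over {years} years")
```

Two orders of magnitude faster doubling compounds into dozens of orders of magnitude more total growth, which is why so much of the book hinges on this one assumption.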

Whereas Keynes predicted based on sound economic principles that economic growth would more or less proceed apace and got his answer spot-on, Hanson predicts that for mysterious, unexplained reasons economic growth will suddenly increase by two orders of magnitude—and I’m pretty sure he’s going to be wildly wrong.

Hanson also predicts that ems will be on average poorer than we are, based on some sort of perfect-competition argument that doesn’t actually seem to mesh at all with his predictions of spectacularly rapid economic and technological growth. I think the best way to make sense of this is to assume that it means the trend toward insecure affluence will continue: Ems will have an objectively high standard of living in terms of what they own, what games they play, where they travel, and what they eat and drink (in simulation), but they will constantly be struggling to keep up with the rent on their homes—or even their own bodies. This is a world where (the very finest simulation of) Dom Perignon is $7 a bottle and wages are $980 an hour—but monthly rent is $284,000.

Early in the book Hanson argues that this life of poverty and scarcity will lead to more conservative values, on the grounds that people who are poorer now seem to be more conservative, and this has something to do with farmers versus foragers. Hanson’s explanation of all this is baffling; I will quote it at length, just so it’s clear I’m not misrepresenting it:

The other main (and independent) axis of value variation ranges between poor and rich societies. Poor societies place more value on conformity, security, and traditional values such as marriage, heterosexuality, religion, patriotism, hard work, and trust in authority. In contrast, rich societies place more value on individualism, self-direction, tolerance, pleasure, nature, leisure, and trust. When the values of individuals within a society vary on the same axis, we call this a left/liberal (rich) versus right/conservative (poor) axis.

Foragers tend to have values more like those of rich/liberal people today, while subsistence farmers tend to have values more like those of poor/conservative people today. As industry has made us richer, we have on average moved from conservative/farmer values to liberal/forager values. This value movement can make sense if cultural evolution used the social pressures farmers faced, such as conformity and religion, to induce humans, who evolved to find forager behaviors natural, to instead act like farmers. As we become rich, we don’t as strongly fear the threats behind these social pressures. This connection may result in part from disease; rich people are healthier, and healthier societies fear less.

The alternate theory that we have instead learned that rich forager values are more true predicts that values should have followed a random walk over time, and be mostly common across space. It also predicts the variance of value changes tracking the rate at which relevant information appears. But in fact industrial-era value changes have tracked the wealth of each society in much more steady and consistent fashion. And on this theory, why did foragers ever acquire farmer values?

[…]

In the scenario described in this book, many strange-to-forager behaviors are required, and median per-person (i.e. per-em) incomes return to near-subsistence levels. This suggests that the em era may reverse the recent forager-like trend toward more liberality; ems may have more farmer-like values.

The Age of Em, pp. 26-27

There’s a lot to unpack here, but maybe it’s better to burn the whole suitcase.

First of all, it’s not entirely clear that this is really a single axis of variation, that foragers and farmers differ from each other in the same way as liberals and conservatives. There’s some truth to that at least—both foragers and liberals tend to be more generous, both farmers and conservatives tend to enforce stricter gender norms. But there are also clear ways that liberal values radically deviate from forager values: Forager societies are extremely xenophobic, and typically very hostile to innovation, inequality, or any attempts at self-aggrandizement (a phenomenon called “fierce egalitarianism”). San Francisco epitomizes rich, liberal values, but it would be utterly alien and probably regarded as evil by anyone from the Yanomamo.

Second, there is absolutely no reason to predict any kind of random walk. That’s just nonsense. Would you predict that scientific knowledge is a random walk, with each new era’s knowledge just a random deviation from the last’s? Maybe next century we’ll return to geocentrism, or phrenology will be back in vogue? On the theory that liberal values (or at least some liberal values) are objectively correct, we would expect them to advance as knowledge does, improving over time and improving faster in places that have better institutions for research, education, and free expression. And indeed, this is precisely the pattern we have observed. (Those places are also richer, but that isn’t terribly surprising either!)

Third, while poorer regions are indeed more conservative, poorer people within a region actually tend to be more liberal. Nigeria is poorer and more conservative than Norway, and Mississippi is poorer and more conservative than Massachusetts. But higher-income households in the United States are more likely to vote Republican. I think this is particularly true of people living under insecure affluence: We see the abundance of wealth around us, and don’t understand why we can’t learn to share it better. We’re tired of fighting over scraps while the billionaires claim more and more. Millennials and Zoomers absolutely epitomize insecure affluence, and we also absolutely epitomize liberalism. So, if indeed ems live a life of insecure affluence, we should expect them to be like Zoomers: “Trans liberation now!” and “Eat the rich!” (Or should I say, “Delete the rich!”)

And really, doesn’t that make more sense? Isn’t that the trend our society has been on, for at least the last century? We’ve been moving toward more and more acceptance of women and minorities, more and more deviation from norms, more and more concern for individual rights and autonomy, more and more resistance to authority and inequality.

The funny thing is, that world sounds a lot better than the one Hanson is predicting.

A world of left-wing ems would probably run things a lot better than Hanson imagines: Instead of copying the same hundred psychopaths over and over until we fill the planet, have no room for anything else, and all struggle to make enough money just to stay alive, we could moderate our population to a more sustainable level, preserve diversity and individuality, and work toward living in greater harmony with each other and the natural world. We could take this economic and technological abundance and share it and enjoy it, instead of killing ourselves and each other to make more of it for no apparent reason.

The one good argument Hanson makes here is expressed in a single sentence: “And on this theory, why did foragers ever acquire farmer values?” That actually is a good question; why did we give up on leisure and egalitarianism when we transitioned from foraging to agriculture?

I think scarcity probably is relevant here: As food became scarcer, maybe because of climate change, people were forced into an agricultural lifestyle just to have enough to eat. Early agricultural societies were also typically authoritarian and violent. Under those conditions, people couldn’t be so generous and open-minded; they were surrounded by threats and on the verge of starvation.

I guess if Hanson is right that the em world is also one of poverty and insecurity, we might go back to those sorts of values, borne of desperation. But I don’t see any reason to think we’d give up all of our liberal values. I would predict that ems will still be feminist, for instance; in fact, Hanson himself admits that since VR avatars would let us change gender presentation at will, gender would almost certainly become more fluid in a world of ems. Far from valuing heterosexuality more highly (as conservatives do, a “farmer value” according to Hanson), I suspect that ems will have no further use for that construct, because reproduction will be done by manufacturing, not sex, and it’ll be so easy to swap your body into a different one that hardly anyone will even keep the same gender their whole life. They’ll think it’s quaint that we used to identify so strongly with our own animal sexual dimorphism.

But maybe it is true that the scarcity induced by a hyper-competitive em world would make people more selfish, less generous, less trusting, more obsessed with work. Then let’s not do that! We don’t have to build that world! This isn’t a foregone conclusion!

There are many other paths yet available to us.

Indeed, perhaps the simplest would be to just ban artificial intelligence, at least until we can get a better handle on what we’re doing—and perhaps until we can institute the kind of radical economic changes necessary to wrest control of the world away from the handful of psychopaths currently trying their best to run it into the ground.

I admit, it would kind of suck to not get any of the benefits of AI, like self-driving cars, safer airplanes, faster medical research, more efficient industry, and better video games. It would especially suck if we did go full-on Butlerian Jihad and ban anything more complicated than a pocket calculator. (Our lifestyle might have to go back to what it was in—gasp! The 1950s!)

But I don’t think it would suck nearly as much as the world Robin Hanson thinks is in store for us if we continue on our current path.

So I certainly hope he’s wrong about all this.

Fortunately, I think he probably is.

How I feel is how things are

Mar 17 JDN 2460388

One of the most difficult things in life to learn is how to treat your own feelings and perceptions as feelings and perceptions—rather than simply as the way the world is.

A great many errors people make can be traced to this.

When we disagree with someone (whether it is as trivial as pineapple on pizza or as important as international law), we feel like they must be speaking in bad faith, they must be lying—because, to us, they are denying the way the world is. If the subject is important enough, we may become convinced that they are evil—for only someone truly evil could deny such important truths. (Ultimately, even holy wars may come from this perception.)


When we are overconfident, we not only can’t see that we are; we can scarcely even consider that it could be true. Because we don’t simply feel confident; we are sure we will succeed. And thus if we do fail, as we often do, the result is devastating; it feels as if the world itself has changed in order to make our wishes not come true.

Conversely, when we succumb to Impostor Syndrome, we feel inadequate, and so become convinced that we are inadequate, and thus that anyone who says they believe we are competent must either be lying or else somehow deceived. And then we fear to tell anyone, because we know that our jobs and our status depend upon other people seeing us as competent—and we are sure that if they knew the truth, they’d no longer see us that way.

When people see their beliefs as reality, they don’t even bother to check whether their beliefs are accurate.

Why would you need to check whether the way things are is the way things are?

This is how common misconceptions persist—the information needed to refute them is widely available, but people simply don’t realize they need to go looking for it.

For lots of things, misconceptions aren’t very consequential. But some common misconceptions do have large consequences.

For instance, most Americans think that crime is increasing and worse now than it was 30 or 50 years ago. (I tested this on my mother this morning; she thought so too.) It is in fact much, much better—violent crimes are about half as common in the US today as they were in the 1970s. Republicans are more likely to get this wrong than Democrats—but an awful lot of Democrats still get it wrong.

It’s not hard to see how that kind of misconception could drive voters into supporting “tough on crime” candidates who will enact needlessly harsh punishments and waste money on excessive police and incarceration. Indeed, when you look at our world-leading spending on police and incarceration (highest in absolute terms, third-highest as a portion of GDP), it’s pretty clear this is exactly what’s happening.

And it would be so easy—just look it up, right here, or here, or here—to correct that misconception. But people don’t even think to bother; they just know that their perception must be the truth. It never even occurs to them that they could be wrong, and so they don’t even bother to look.

This is not because people are stupid or lazy. (I mean, compared to what?) It’s because perceptions feel like the truth, and it’s shockingly difficult to see them as anything other than the truth.

It takes a very dedicated effort, and no small amount of training, to learn to see your own perceptions as how you see things rather than simply how things are.

I think part of what makes this so difficult is the existential terror that results when you realize that anything you believe—even anything you perceive—could potentially be wrong. Basically the entire field of epistemology is dedicated to understanding what we can and can’t be certain of—and the “can’t” is a much, much bigger set than the “can”.

In a sense, you can be certain of what you feel and perceive—you can be certain that you feel and perceive them. But you can’t be certain whether those feelings and perceptions correspond to your external reality.

When you are sad, you know that you are sad. You can be certain of that. But you don’t know whether you should be sad—whether you have a reason to be sad. Often, perhaps even usually, you do. But sometimes, the sadness comes from within you, or from misperceiving the world.

Once you learn to recognize your perceptions as perceptions, you can question them, doubt them, challenge them. Training your mind to do this is an important part of mindfulness meditation, and also of cognitive behavioral therapy.

But even after years of training, it’s still shockingly hard to do this, especially in the throes of a strong emotion. Simply seeing that what you’re feeling—about yourself, or your situation, or the world—is not an entirely accurate perception can take an incredible mental effort.

We really seem to be wired to see our perceptions as reality.

This makes a certain amount of sense, in evolutionary terms. In an ancestral environment where death was around every corner, we really didn’t have time to stop and think carefully about whether our perceptions were accurate.

Two ancient hominids hear a sound that might be a tiger. One immediately perceives it as a tiger, and runs away. The other stops to think, and then begins carefully examining his surroundings, looking for more conclusive evidence to determine whether it is in fact a tiger.

The latter is going to have more accurate beliefs—right up until the point where it is a tiger and he gets eaten.

But in our world today, it may be more dangerous to hold onto false beliefs than to analyze and challenge them. We may harm ourselves—and others—more by trusting our perceptions too much than by taking the time to analyze them.

Against Self-Delusion

Mar 10 JDN 2460381

Is there a healthy amount of self-delusion? Would we be better off convincing ourselves that the world is better than it really is, in order to be happy?


A lot of people seem to think so.

I most recently encountered this attitude in Kathryn Schulz’s book Being Wrong (I liked the TED talk much better, in part because it didn’t have this), but there are plenty of other examples.

You’ll even find advocates for this attitude in the scientific literature, particularly when talking about the Lake Wobegon Effect, optimism bias, and depressive realism.

Fortunately, the psychology community seems to be turning away from this, perhaps because of mounting empirical evidence that “depressive realism” isn’t a robust effect. When I searched today, it was easier to find pop psych articles against self-delusion than in favor of it. (I strongly suspect that would not have been true about 10 years ago.)

I have come up with a very simple, powerful argument against self-delusion:

If you’re allowed to delude yourself, why not just believe everything is perfect?

If you can paint your targets after shooting, why not always paint a bullseye?

The notion seems to be that deluding yourself will help you achieve your goals. But if you’re going to delude yourself, why bother achieving goals? You could just pretend to achieve goals. You could just convince yourself that you have achieved goals. Wouldn’t that be so much easier?

The idea seems to be, for instance, to get an aspiring writer to actually finish the novel and submit it to the publisher. But why shouldn’t she simply imagine she has already done so? Why not simply believe she’s already a bestselling author?

If there’s something wrong with deluding yourself into thinking you’re a bestselling author, why isn’t that exact same thing wrong with deluding yourself into thinking you’re a better writer than you are?

Once you have opened this Pandora’s Box of lies, it’s not clear how you can ever close it again. Why shouldn’t you just stop working, stop eating, stop doing anything at all, but convince yourself that your life is wonderful and die in a state of bliss?

Granted, this is not generally what people who favor (so-called) “healthy self-delusion” advocate. But it’s difficult to see any principled reason why they should reject it. Once you give up on tying your beliefs to reality, it’s difficult to see why you shouldn’t just say that anything goes.

Why are some deviations from reality okay, but not others? Is it because they are small? Small changes in belief can still have big consequences: Believe a car is ten meters behind where it really is, and it may just run you over.

The general approach of “healthy self-delusion” seems to be that it’s all right to believe that you are smarter, prettier, healthier, wiser, and more competent than you actually are, because that will make you more confident and therefore more successful.

Well, first of all, it’s worth pointing out that some people obviously go way too far in that direction and become narcissists. But okay, let’s say we find a way to avoid that. (It’s unclear exactly how, since, again, by construction, we aren’t tying ourselves to reality.)

In practice, the people who most often get this sort of advice are people who currently lack self-confidence, who doubt their own abilities—people who suffer from Impostor Syndrome. And for people like that (and I count myself among them), a certain amount of greater self-confidence would surely be a good thing.

The idea seems to be that deluding yourself to increase your confidence will get you to face challenges and take risks you otherwise wouldn’t have, and that this will yield good outcomes.

But there’s a glaring hole in this argument:

If you have to delude yourself in order to take a risk, you shouldn’t take that risk.

Risk-taking is not an unalloyed good. Russian Roulette is certainly risky, but it’s not a good career path.

There are in fact a lot of risks you simply shouldn’t take, because they aren’t worth it.

The right risks to take are the ones for which the expected benefit outweighs the expected cost: the ones with the highest expected utility. (That sounds simple, and in principle it is; but in practice, it can be extraordinarily difficult to determine.)
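Just to make that concrete, here is a minimal sketch in Python with entirely made-up numbers (the probabilities and payoffs are hypothetical, not from any study):

```python
def expected_utility(outcomes):
    """Sum of probability * utility over the possible outcomes of an option."""
    return sum(p * u for p, u in outcomes)

# Hypothetical aspiring novelist deciding whether to submit a manuscript.
# Option A: submit. Say 10% chance of acceptance (big payoff), 90% chance of rejection (small cost).
submit = [(0.10, 100.0), (0.90, -5.0)]

# Option B: don't submit; nothing changes.
dont_submit = [(1.0, 0.0)]

eu_submit = expected_utility(submit)       # 0.10*100 + 0.90*(-5) = 5.5
eu_dont = expected_utility(dont_submit)    # 0.0

# The rational choice is whichever option has the higher expected utility;
# no self-delusion about the odds is required.
print("submit" if eu_submit > eu_dont else "don't submit")
```

With these numbers the submission is worth the risk; flip the probabilities and it isn’t, which is exactly why an accurate assessment matters more than a confident one.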

In other words, the right risks to take are the ones that are rational. The ones that a correct view of the world will instruct you to take.

That aspiring novelist, then, should write the book and submit it to publishers—if she’s actually any good at writing. If she’s actually terrible, then never submitting the book is the correct decision; she should spend more time honing her craft before she tries to finish it—or maybe even give up on it and do something else with her life.

What she needs, therefore, is not a confident assessment of her abilities, but an accurate one. She needs to believe that she is competent if and only if she actually is competent.

But I can also see how self-delusion can seem like good advice—and even work for some people.

If you start from an excessively negative view of yourself or the world, then giving yourself a more positive view will likely cause you to accomplish more things. If you’re constantly telling yourself that you are worthless and hopeless, then convincing yourself that you’re better than you thought is absolutely what you need to do. (Because it’s true.)

I can even see how convincing yourself that you are the best is useful—even though, by construction, most people aren’t. When you live in a hyper-competitive society like ours, where we are constantly told that winning is everything, losers are worthless, and second place is as bad as losing, it may help you get by to tell yourself that you really are the best, that you really can win. (Even weirder: “Winning isn’t everything; it’s the only thing.” Uh, that’s just… obviously false? Like, what is this even intended to mean that “Winning is everything” didn’t already say better?)

But that’s clearly not the right answer. You’re solving one problem by adding another. You shouldn’t believe you are the best; you should recognize that you don’t have to be. Second place is not as bad as losing—and neither is fifth, or tenth, or fiftieth place. The 100th-most successful author in the world still makes millions writing. The 1,000th-best musician does regular concert tours. The 10,000th-best accountant has a steady job. Even the 100,000th-best trucker can make a decent living. (Well, at least until the robots replace him.)

Honestly, it’d be great if our whole society would please get this memo. It’s no problem that “only a minority of schools play sport to a high level”—indeed, that’s literally inevitable. It’s also not clear that “60% of students read below grade level” is a problem, when “grade level” seems to be largely defined by averages. (Literacy is great and all, but what’s your objective standard for “what a sixth grader should be able to read”?)

We can’t all be the best. We can’t all even be above-average.

That’s okay. Below-average does not mean inadequate.

That’s the message we need to be sending:

You don’t have to be the best in order to succeed.

You don’t have to be perfect in order to be good enough.

You don’t even have to be above-average.

This doesn’t require believing anything that isn’t true. It doesn’t require overestimating your abilities or your chances. In fact, it asks you to believe something that is more true than “You have to be the best” or “Winning is everything”.

If what you want to do is actually worth doing, an accurate assessment will tell you that. And if an accurate assessment tells you not to do it, then you shouldn’t do it. So you have no reason at all to strive for anything other than accurate beliefs.

With this in mind, the fact that the empirical evidence for “depressive realism” is shockingly weak is not only unsurprising; it’s almost irrelevant. You can’t have evidence against being rational. If deluded people succeed more, that means something is very, very wrong; and the solution is clearly not to make more people deluded.

Of course, it’s worth pointing out that the evidence is shockingly weak: Depressed people show different biases, not less bias. And in fact they seem to be more overconfident in the following sense: They are more certain that what they predict will happen is what will actually happen.

So while most people think they will succeed when they will probably fail, depressed people are certain they will fail when in fact they could succeed. Both beliefs are inaccurate, but the depressed one is in an important sense more inaccurate: It tells you to give up, which is the wrong thing to do.

“Healthy self-delusion” ultimately amounts to trying to get you to do the right thing for the wrong reasons. But why? Do the right thing for the right reasons! If it’s really the right thing, it should have the right reasons!

Why Leap Years?

Mar 3 JDN 2460374

When this post goes live it will be March 3, not March 4, because February had an extra day this year. But what is this nonsense? Why are we adding a day to February?


There are two parts to this answer.

One part is fundamental astronomical truth.

The other part is historically contingent nonsense.

The fundamental astronomical truth is that Earth’s solar year is not a whole-number multiple of its solar day. That’s kind of what you’d expect, seeing as the two are largely independent. (Actually it’s not as obvious as you might think, because orbital resonances do make many satellites have years that are whole-number multiples of their days, or even equal to them—the latter is called tidal locking.)

So if we’re going to measure time in both years and days, one of two things will happen:

  1. The first day of the year will move around, relative to the solstices—and therefore relative to the seasons.
  2. We need to add or subtract days from some years and not others.

The Egyptians took option 1: 365 days each year, no nonsense, let the solstices fall where they may.

The Romans, on the other hand, had both happen—the Julian calendar did have leap years, but it got them slightly wrong, and as a result the first day of the year gradually moved around. (It’s now about two weeks off, if you were to still use the Julian calendar.)

It wasn’t until the Gregorian calendar that we got a good enough leap year system to stop this from happening—and even it is really only an approximation that would eventually break down and require some further fine-tuning. (It’s just going to be several thousand years, so we’ve got time.)
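For reference, the whole Gregorian rule fits in a few lines; here is a short Python sketch of it (the rule itself is standard, the code is just mine):

```python
def is_gregorian_leap_year(year: int) -> bool:
    """Every 4th year is a leap year, except century years,
    which only count if divisible by 400 (1900 wasn't, 2000 was)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 97 leap years per 400 years gives an average of 365 + 97/400 = 365.2425 days,
# versus a true solar year of about 365.2422 days -- hence the slow residual drift.
assert [is_gregorian_leap_year(y) for y in (1900, 2000, 2023, 2024)] == [False, True, False, True]
```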

So, we need some sort of leap year system. Fine. But why this one?

And that’s where the historically contingent nonsense comes in.

See, if you have 365.2422 days per year, and a moon that orbits you once every 27.32 days, the obvious thing to do would be to find a calendar that divides 365 or 366 into units of 27 or 28.

And it turns out you can actually do that pretty well, by having 13 months, each of 28 days, as well as 1 extra day on normal years and 2 extra days on leap years. (The extra day could be a winter solstice holiday, for instance.)

You could even make each month exactly 4 weeks of 7 days, if for some reason you like 7-day weeks (not really sure why we do).

But no, that’s not what we did. Of course it’s not.

13 is an unlucky number in Christian societies, because of the betrayal of Judas (though it could even go back further than that).

So we wanted to have only 12 months. Okay, fine.

Then each month is 30 days and we have 5 extra days, like the Egyptians did? Oh no, definitely not.

7 months are 30 days and 5 months are 31 days? No, that would be too easy.

7 months are 31 days, 4 are 30, and 1 is 28, unless it’s 29? Uh… what?
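Just to check the day counts on those layouts (the first two are the hypothetical alternatives sketched above, not real calendars):

```python
# Days in a common year for each layout discussed above; add 1 for a leap year.
layouts = {
    "13 months of 28 days, plus a year day":  13 * 28 + 1,           # 365
    "12 months of 30 days, plus 5 extra":     12 * 30 + 5,           # 365
    "the calendar we actually use":           7 * 31 + 4 * 30 + 28,  # 365
}

for name, days in layouts.items():
    print(f"{name}: {days} days ({days + 1} in a leap year)")
```

All three add up to the same thing; ours is just the hardest to remember.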

There are all sorts of reasons why this is so:

There’s the fact that the months of July and August were created to honor Julius and Augustus respectively.

There’s the fact that there used to be an entire intercalary month which was 27 or 28 days long and functioned kind of like February does now (but it wasn’t February, which already existed).

There are still other calendars in use, such as the Coptic Calendar, the Chinese lunisolar calendar, and the Hijri Calendar. Indeed, what calendar you use seems to be quite strictly determined by your society’s predominant religious denominations.

Basically, it’s a mess. (And it makes programming that involves dates and times surprisingly hard.)

But calendars are the coordination mechanism par excellence, and here’s the thing about coordination mechanisms:

Once you have one, it’s really hard to change it.

The calendar everyone wants to use is whatever calendar everyone else is using. In order to get anyone to switch, we need to get most people to switch. It doesn’t really matter which one is the best in theory; the best in practice is whatever is actually in use.

That is much easier to do when a single guy has absolute authority—as in, indeed, the Roman Empire and the Catholic Church, for the Julian and Gregorian calendars respectively.

There are other ways to accomplish it: The SI was intentionally designed to be explicitly rational, and is in fact in wide use around the world. The French revolutionaries intentionally devised a better way to measure things, and actually got it to stick (mostly).

Then again, we never did adopt the French metric system for time. So it may be that time coordination, being the prerequisite for nearly all other forms of coordination, is so vital that it’s exceptionally difficult to change.

Further evidence in favor of this: The Babylonians used base-60 for everything. We literally only use it for time. And we use it for time… probably because we ultimately got it from them.

So while nobody seriously uses “rod”, “furlong”, “firkin”, or “buttload” (yes, that’s a real unit) anymore, we still use the same days, weeks, and months as the Romans and the same hours, minutes, and seconds as the Babylonians. (And while Americans may not use “fortnight” much, I can assure you that Brits absolutely do—and it’s really nice, because it doesn’t have the ambiguity of “biweekly” or “bimonthly”, where it’s never quite clear whether the prefix applies to the rate or the period.)

So, in short, we’re probably stuck with leap years, and furthermore stuck with the weirdness of February.

The only thing I think is likely to seriously cause us to change this system would be widespread space colonization necessitating a universal calendar—but even then I feel like we’ll probably use whatever is in use on Earth anyway.

Even when we colonize space, I think the most likely scenario is that “day” and “year” will still mean Earth-day and Earth-year, and for local days and years you’d use something like “sol” and “rev”. It would just get too confusing to compare people’s ages across worlds otherwise—someone who is 11 on Mars could be 21 on Earth, but 88 on Mercury. (Are they a child, a young adult, or a senior citizen? They’re definitely a young adult—and it’s easiest to see that if you stick to Earth years. Maybe on Mars they can celebrate their 11th rev-sol, but on Earth it’s still their 21st birthday.)
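Here is a rough sketch of that conversion (orbital periods are approximate; the exact figures don’t matter for the point):

```python
# Convert an age in Earth years into local orbits ("revs") on other worlds.
EARTH_DAYS_PER_YEAR = 365.25
ORBITAL_PERIOD_DAYS = {"Mars": 686.98, "Mercury": 87.97}  # approximate

def local_revs(earth_years: float, world: str) -> float:
    """How many of `world`'s orbits fit into the given number of Earth years."""
    return earth_years * EARTH_DAYS_PER_YEAR / ORBITAL_PERIOD_DAYS[world]

age = 21  # Earth years
for world in ORBITAL_PERIOD_DAYS:
    print(f"{age} Earth years is roughly {local_revs(age, world):.0f} revs on {world}")
# Mars comes out around 11 and Mercury in the high 80s -- exactly the confusion described above.
```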

So we’re probably going to be adding these leap years (and, most of us, forgetting which centuries don’t have one) until the end of time.

Serenity and its limits

Feb 25 JDN 2460367

God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.

Of course I don’t care for its religious message (and the full prayer is even more overtly religious), but the serenity prayer does capture an important insight into some of the most difficult parts of human existence.

Some things are as we would like them to be. They don’t require our intervention. (Though we may still stand to benefit from teaching ourselves to savor them and express gratitude for them.)

Other things are not as we would like them to be. The best option, of course, would be to change them.

But such change is often difficult, and sometimes practically impossible.

Sometimes we don’t even know whether change is possible—that’s where the wisdom to know the difference comes in. This is a wisdom we often lack, but it’s at least worth striving for.

If it is impossible to change what we want to change, then we are left with only one choice:

Do we accept it, or not?

The serenity prayer tells us to accept it. There is wisdom in this. Often it is the right answer. Some things about our lives are awful, but simply cannot be changed by any known means.

Death, for instance.

Someday, perhaps, we will finally conquer death, and humanity—or whatever humanity has become—will enter a new era of existence. But today is not that day. When grieving the loss of people we love, ultimately our only option is to accept that they are gone, and do our best to appreciate what they left behind, and the parts of them that are still within us. They would want us to carry on and live full lives, not forever be consumed by grief.

There are many other things we’d like to change, and maybe someday we will, but right now, we simply don’t know how: diseases we can’t treat, problems we can’t solve, questions we can’t answer. It’s often useful for someone to be trying to push those frontiers, but for any given person, the best option is often to find a way to accept things as they are.

But there are also things I cannot change and yet will not accept.

Most of these things fall into one broad category:

Injustice.

I can’t end war, or poverty, or sexism, or racism, or homophobia. Neither can you. Neither can any one person, or any hundred people, or any thousand people, or probably even any million people. (If all it took were a million dreams, we’d be there already. A billion might be enough—though it would depend which billion people shared the dream.)

I can’t. You can’t. But we can.

And here I mean “we” in a very broad sense indeed: Humanity as a collective whole. All of us together can end injustice—and indeed that is the only way it ever could be ended, by our collective action. Collective action is what causes injustice, and collective action is what can end it.

I therefore consider serenity in the face of injustice to be a very dangerous thing.

At times, and to certain degrees, that serenity may be necessary.

Those who are right now in the grips of injustice may need to accept it in order to survive. Reflecting on the horror of a concentration camp won’t get you out of it. Embracing the terror of war won’t save you from being bombed. Weeping about the sorrow of being homeless won’t get you off the streets.

Even for those of us who are less directly affected, it may sometimes be wisest to blunt our rage and sorrow at injustice—for otherwise they could be paralyzing, and if we are paralyzed, we can’t help anyone.

Sometimes we may even need to withdraw from the fight for justice, simply because we are too exhausted to continue. I recently came across a powerful analogy for this:

A choir can sing the same song forever, as long as its singers take turns resting.

If everyone tries to sing their very hardest all the time, the song must eventually end, as no one can sing forever. But if we rotate our efforts, so that at any given moment some are singing while others are resting, then we theoretically could sing for all time—as some of us die, others would be born to replace us in the song.

For a literal choir this seems absurd: Who even wants to sing the same song forever? (Lamb Chop, I guess.)

But the fight for justice probably is one we will need to continue forever, in different forms in different times and places. There may never be a perfectly just society, and even if there is, there will be no guarantee that it remains so without eternal vigilance. Yet the fight is worth it: in so many ways our society is already more just than it once was, and could be made more so in the future.

This fight will only continue if we don’t accept the way things are. Even when any one of us can’t change the world—even if we aren’t sure how many of us it would take to change the world—we still have to keep trying.

But as in the choir, each one of us also needs to rest.

We can’t all be fighting all the time as hard as we can. (I suppose if literally everyone did that, the fight for justice would be immediately and automatically won. But that’s never going to happen. There will always be opposition.)

And when it is time for each of us to rest, perhaps some serenity is what we need after all. Perhaps there is a balance to be found here: We do not accept things as they are, but we do accept that we cannot change them immediately or single-handedly. We accept that our own strength is limited and sometimes we must withdraw from the fight.

So yes, we need some serenity. But not too much.

Enough serenity to accept that we won’t win the fight immediately or by ourselves, and sometimes we’ll need to stop fighting and rest. But not so much serenity that we give up the fight altogether.

For there are many things that I can’t change—but we can.

Love is more than chemicals

Feb 18 JDN 2460360

One of the biggest problems with the rationalist community is an inability to express sincerity and reverence.

I get it: Religion is the world’s greatest source of sincerity and reverence, and religion is the most widespread and culturally important source of irrationality. So we declare ourselves enemies of religion, and also end up being enemies of sincerity and reverence.

But in doing so, we lose something very important. We cut ourselves off from some of the greatest sources of meaning and joy in human life.

In fact, we may even be undermining our own goals: If we don’t offer people secular, rationalist forms of reverence, they may find they need to turn back to religion in order to fill that niche.

One of the most pernicious forms of this anti-sincerity, anti-reverence attitude (I can’t just say ‘insincere’ or ‘irreverent’, as those have different meanings) is surely this one:

Love is just a chemical reaction.

(I thought it seemed particularly apt to focus on this one during the week of Valentine’s Day.)

On the most casual of searches I could find at least half a dozen pop-sci articles and a YouTube video propounding this notion (though I could also find a few articles trying to debunk the notion as well).

People who say this sort of thing seem to think that they are being wise and worldly while the rest of us are just being childish and naive. They think we are seeing something that isn’t there. In fact, they are being jaded and cynical. They are failing to see something that is there.

(Perhaps the most extreme form of this was from Rick & Morty; and while Rick as a character is clearly intended to be jaded and cynical, far too many people also see him as a role model.)

Part of the problem may also be a failure to truly internalize the Basic Fact of Cognitive Science:

You are your brain.

No, your consciousness is not an illusion. It’s not an “epiphenomenon” (whatever that is; I’ve never encountered one in real life). Your mind is not fake or imaginary. Your mind actually exists—and it is a product of your brain. Both brain and mind exist, and are in fact the same.

It’s so hard for people to understand this that some become dualists, denying the unity of the brain and the mind. That, at least, I can sympathize with, even though we have compelling evidence that it is wrong. But there’s another tack people sometimes take, eliminative materialism, where they try to deny that the mind exists at all. And that I truly do not understand. How can you think that nobody can think? Yet intelligent, respected philosophers have claimed to believe such things.

Love is one of the most important parts of our lives.

This may be more true of humans than of literally any other entity in the known universe.

The only serious competition comes from other mammals: They are really the only other beings we know of that are capable of love. And even they don’t seem to be as good at it as we are; they can love only those closest to them, while we can love entire nations and even abstract concepts.

And once you go beyond that, even to reptiles—let alone fish, or amphibians, or insects, or molluscs—it’s not clear that other animals are really capable of love at all. They seem to be capable of some forms of thought and feeling: They get hungry, or angry, or horny. But do they really love?

And even the barest emotional capacities of an insect are still categorically beyond what most of the universe is capable of feeling, which is to say: Nothing. The vast, vast majority of the universe feels neither love nor hate, neither joy nor pain.

Yet humans can love, and do love, and it is a large part of what gives our lives meaning.

I don’t just mean romantic love here, though I do think it’s worth noting that people who dismiss the reality of romantic love somehow seem reluctant to do the same for the love parents have for their children—even though it’s made of pretty much the same brain chemicals. Perhaps there is a limit to their cynicism.

Yes, love is made of chemicals—because everything is made of chemicals. We live in a material, chemical universe. Saying that love is made of chemicals is an almost completely vacuous statement; it’s basically tantamount to saying that love exists.

In other contexts, you already understand this.

“That’s not a bridge, it’s just a bunch of iron atoms!” rightfully strikes you as an absurd statement to make. Yes, the bridge is made of steel, and steel is mostly iron, and everything is made of atoms… but clearly there’s a difference between a random pile of iron and a bridge.

“That’s not a computer, it’s just a bunch of silicon atoms!” similarly registers as nonsense: Yes, it is indeed mostly made of silicon, but beach sand and quartz crystals are not computers.

It is in this same sense that joy is made of dopamine and love is made of chemical reactions. Yes, those are in fact the constituent parts—but things are more than just their parts.

I think that on some level, even most rationalists recognize that love is more than some arbitrary chemical reaction. I think “love is just chemicals” is mainly something people turn to for a couple of reasons: Sometimes, they are so insistent on rejecting everything that even resembles religious belief that they end up rejecting all meaning and value in human life. Other times, they have been so heartbroken that they try to convince themselves love isn’t real—to dull the pain. (But of course if it weren’t, there would be no pain to dull.)

But love is no more (or less) a chemical reaction than any other human experience: The very belief “love is just a chemical reaction” is, itself, made of chemical reactions.

Everything we do is made of chemical reactions, because we are made of chemical reactions.

Part of the problem here—and with the Basic Fact of Cognitive Science in general—is that we really have no idea how this works. For most of what we deal with in daily life, and even an impressive swath of the overall cosmos, we have a fairly good understanding of how things work. We know how cars drive, how wind blows, why rain falls; we even know how cats purr and why birds sing. But when it comes to how the physical matter of the brain generates the subjective experiences of thought, feeling, and belief—of which love is made—we lack even the most basic understanding. The correlation between the two is far too strong to deny; but as far as causal mechanisms go, we know absolutely nothing. (Indeed, worse than that: We can scarcely imagine a causal mechanism that would make any sense. We not only don’t know the answer; we don’t know what an answer would look like.)

So, no, I can’t tell you how we get from oxytocin and dopamine to love. I don’t know how that makes any sense. No one does. But we do know it’s true.

And just like everything else, love is more than the chemicals it’s made of.

Let’s call it “copytheft”

Feb 11 JDN 2460353

I have written previously about how ridiculous it is that we refer to the unauthorized copying of media such as music and video games as “piracy” as though it were somehow equivalent to capturing ships on the high seas.

In that post a few years ago I suggested calling it simply “unauthorized copying”, but that clearly isn’t catching on, perhaps because it’s simply too much of a mouthful. So today I offer a compromise:

Let’s call it “copytheft”.

That takes no longer to say than “piracy” (and only slightly longer to write), and far more clearly states what’s actually going on. No ships have been seized on the high seas; there has been no murder, arson, or slavery.

Yes, it’s debatable whether copytheft really constitutes theft—and I would generally argue that it does not—but just from hearing that word, you would probably infer that the following process took place:

  1. I took a thing.
  2. I made a copy of that thing that I wasn’t supposed to.
  3. I put the original thing back where it was, unharmed.

The paradigmatic example of this theft-copy-replace sequence would be a key, of course: You take someone’s key, copy it, then put the key back where it was, so you now can unlock their locks but they are none the wiser.

With unauthorized copying of media, you’re not exactly doing steps 1 and 3; the copier often has the media completely legitimately before they make the copy, and it may not even have a clear physical location to be put back to (it must be physically stored somewhere, but particularly if it’s streamed from the cloud it hardly matters where).

But you’re definitely doing step 2, and that was the only part that had a permanent effect; so I think that the nomenclature still seems to work well enough.

Copytheft also has a similar sound to copyleft, the use of alternative intellectual property mechanisms by authors to grant broader licensing than copyright ordinarily affords, and also to copyfraud, the crime of claiming exclusive copyright to content that is in fact public domain. Hopefully that common structure will help the term get some purchase.

Of course, I can hardly bring a word into widespread use on my own. Others like you have to not only read it, but like it enough that you’re willing to actually use it—and then we need a certain critical mass of people using it in order to make it actually catch on.

So, I’d like to take a moment to offer you some justification why it’s worth changing to this new word.

First, it is admittedly imperfect; by containing the word “theft”, it already feels like we’re conceding something to the defenders of copyright.

But by including the word “copy” in the term, we can draw attention to the most important aspect that distinguishes copytheft from, well, theft:

The original owner still has the thing.

That’s the part that they want us to forget, the part that the harsh word “piracy” leads you to overlook. A ship that is captured by pirates is a ship that may never again sail for your own navy. A song that is “pirated”—copythefted—is one that not only the original owners, but also everyone who bought it, still have in exactly the same state as before.

Thus it simply cannot be that copytheft takes money out of the hands of artists. At worst, it fails to give money to artists.

That could still be a bad thing: Artists need to pay bills too, and a world where nobody pays for any art is surely a world with a lot fewer artists—and the ones who remain far more miserable. But it’s clearly a different sort of thing than ordinary theft, as nothing has been lost.

Moreover, it’s not clear that in most cases copytheft even does fail to give money that would otherwise have been given. Maybe sometimes it does—a certain proportion of people who copytheft a given song, film, or video game might have been willing to pay the original price if the copythefted version had not been available. But typically I suspect that people who’d be willing to pay full price… do pay full price. Thus, the people who are copythefting the media wouldn’t have bought it at full price anyway.

They might have bought it at some lower price, in which case that is foregone payment; but it’s surely considerably less than the “losses” often reported by the film and music industries, which seem to be based on the assumption that everyone who copythefts would have otherwise paid full price. And in fact many people might have been unwilling to buy at any nonzero price, and were only willing to copytheft the media precisely because it didn’t cost them any money or a great deal of effort to do so.

And in fact if you think about it, what about people who would have been willing to pay more than the original price? Surely there were many of them as well, yet we don’t grant media corporations the right to that money. That is also money that they could have been given but weren’t—and we decided, as a society, that they didn’t deserve to have it. It’s not that it would be impossible to do so: We could give corporations the authority to price-discriminate on all of their media. (They probably couldn’t do it perfectly, but they could surely do it quite well.) But we made the policy choice to live in a world where media is sold by single-price monopolies rather than one where it is sold by price-discriminating monopolies.

The mere fact that someone might have been willing to pay you more money if the market were different does not entitle you to receive that money. It has not been stolen from you. Indeed, typically it’s more that you have not been allowed to exploit them. It’s usually the presence of competition that prevents corporations from receiving the absolute maximum profit they might potentially have received if they had full control over the market. Corporations making less profit than they otherwise would have is generally a sign of good economic policy—a sign that things are reasonably fair.

Why else is “copytheft” a good word to use?

Above all, we do not allow our terms to be defined by our opponents.

We don’t let them insinuate that our technically violating draconian regulations designed to maximize the profits of Disney and Viacom somehow constitutes a terrible crime against other human beings.

“Piracy is not a victimless crime”, they will say.

Well, actual piracy isn’t. But copytheft? Yeah, uh, it kinda is.

Maybe not quite as victimless as, say, using marijuana or psilocybin, which no one even has a rational reason to prefer you not do. But still, you’re not really making anyone else worse off—that sounds pretty victimless.

Of course, it does give us less reason to wear tricorn hats and eyepatches.

But guess what? You can still do that anyway!

Adversarial design

Feb 4 JDN 2460346

Have you noticed how Amazon feels a lot worse lately? Years ago, it was extremely convenient: You’d just search for what you want, it would give you good search results, you could buy what you want and be done. But now you have to slog through “sponsored results” and a bunch of random crap made by no-name companies in China before you can get to what you actually want.

Temu is even worse, and has been from the start: You can’t buy anything on Temu without first being inundated with ads. It’s honestly such an awful experience that I don’t understand why anyone is willing to buy anything from Temu.

#WelcomeToCyberpunk, I guess.

Even some video games have become like this: The free-to-play or “freemium” business model seems to be taking off, where you don’t pay money for the game itself, but then have to deal with ads inside the game trying to sell you additional content, because that’s where the developers actually make their money. And now AAA firms like EA and Ubisoft are talking about going to a subscription-based model where you don’t even own your games anymore. (Fortunately there’s been a lot of backlash against that; I hope it persists.)

Why is this happening? Isn’t capitalism supposed to make life better for consumers? Isn’t competition supposed to make products and services improve over time?

Well, first of all, these markets are clearly not as competitive as they should be. Amazon has a disturbingly large market share, and while the video game market is more competitive, it’s still dominated by a few very large firms (like EA and Ubisoft).

But I think there’s a deeper problem here, one which may be specific to media content.

What I mean by “media content” here is fairly broad: I would include art, music, writing, journalism, film, and video games.

What all of these things have in common is that they are not physical products (they’re not like a car or a phone that is a single physical object), but they are also not really services either (they aren’t something you just do as an action and it’s done, like a haircut, a surgery, or a legal defense).

Another way of thinking about this is that media content can be copied with zero marginal cost.

Because it can be copied with zero marginal cost, media content can’t simply be made and sold the way that conventional products and services are. There are a few different ways it can be monetized.

The most innocuous way is commission or patronage, where someone pays someone else to create a work because they want that work. This is totally unproblematic. You want a piece of art, you pay an artist, they make it for you; great. Maybe you share copies with the world, maybe you don’t; whatever. It’s good either way.

Unfortunately, it’s hard to sustain most artists and innovators on that model alone. (In a sense I’m using a patronage model, because I have a Patreon. But I’m not making anywhere near enough to live on that way.)

The second way is intellectual property, which I have written about before, and surely will again. If you can enforce limits on who is allowed to copy a work, then you can make a work and sell it for profit without fear of being undercut by someone else who simply copies it and sells it for cheaper. A detailed discussion of that is beyond the scope of this post, but you can read those previous posts, and I can give you the TLDR version: Some degree of intellectual property is probably necessary, but in our current society, it has clearly been taken much too far. I think artists and authors deserve to be able to copyright (or maybe copyleft) their work—but probably not for 70 years after their death.

And then there is a third way, the most insidious way: advertising. If you embed advertisements for other products and services within your content, you can then sell those ad slots for profit. This is how newspapers stay afloat, mainly; subscriptions have never been the majority of their revenue. It’s how TV was supported before cable and streaming—and cable usually has ads too, and streaming is starting to.

There is something fundamentally different about advertising as a service. Whereas most products and services you encounter in a capitalist society are made for you, designed for you to use, advertising is made at you, designed to manipulate you.

I’ve heard it put well this way:

If you’re not paying, you aren’t the customer; you’re the product.

Monetizing content by advertising effectively makes your readers (or viewers, players, etc.) into the product instead of the customer.

I call this effect adversarial design.

I chose this term because it not only conveys the right sense of being an adversary; it also contains the word ‘ad’ and shares the Latin root ‘advertere’ with ‘advertising’.

When a company designs a car or a phone, they want it to appeal to customers—they want you to like it. Yes, they want to take your money; but it’s a mutually beneficial exchange. They get money, you get a product; you’re both happier.

When a company designs an ad, they want it to affect customers—they want you to do what it says, whether you like it or not. And they wouldn’t be doing it if they thought you would buy it anyway—so they are basically trying to make you do something you wouldn’t otherwise have done.

In other words, when designing a product, corporations want to be your friend.

When designing an ad, they become your enemy.

You would absolutely prefer not to have ads. You don’t want your attention taken in this way. But the way that these corporations make money—disgustingly huge sums of money—is by forcing those ads in your face anyway.

Yes, to be fair, there might be some kinds of ads that aren’t too bad. Simple, informative, unobtrusive ads that inform you that something is available you might not otherwise have known about. Movie trailers are like this; people often enjoy watching movie trailers, and they want to see what movies are going to come out next. That’s fine. I have no objection to that.

But it should be clear to anyone who has, um, used the Internet in the past decade that we have gone far, far beyond that sort of advertising. Ads have become aggressive, manipulative, aggravating, and—above all—utterly ubiquitous. You can’t escape them. They’re everywhere. Even when you use ad-block software (which I highly recommend, particularly Adblock Plus—which is free), you still can’t completely escape them.

That’s another thing that should make it pretty clear that there’s something wrong with ads: People are willing to put in effort, or even pay money, to make ads go away.

Whenever there is a game I like that’s ad-supported but you can pay to make the ads go away, I always feel like I’m being extorted, even if what I have to pay would have been a totally reasonable price for the game. Come on, just sell me the game. Don’t give me the game for free and then make me pay to make it not unpleasant. Don’t add anti-features.

This is clearly not a problem that market competition alone will solve. Even in highly competitive markets, advertising is still ubiquitous, aggressive and manipulative. In fact, competition may even make it worse—a true monopoly wouldn’t need to advertise very much.

Consider Coke and Pepsi ads; they’re actually relatively pleasant, aren’t they? Because all they’re trying to do is remind you and make you thirsty so you’ll buy more of the product you were already buying. They aren’t really trying to get you to buy something you wouldn’t have otherwise. They know that their duopoly is solid, and only a true Black Swan event would unseat their hegemony.

And have you ever seen an ad for your gas company? I don’t think I have—probably because I didn’t have a choice in who my gas company was; there was only one that covered my area. So why bother advertising to me?

If competition won’t fix this, what will? Is there some regulation we could impose that would make advertising less obtrusive? People have tried, without much success. I think imposing an advertising tax would help, but even that might not do enough.

What I really think we need right now is to recognize the problem and invest in solving it. Right now we have megacorporations which are thoroughly (literally) invested in making advertising more obtrusive and more ubiquitous. We need other institutions—maybe government, maybe civil society more generally—that are similarly invested in counteracting it.

Otherwise, it’s only going to get worse.

Administering medicine to the dead

Jan 28 JDN 2460339

Here are a couple of pithy quotes that go around rationalist circles from time to time:

“To argue with a man who has renounced the use and authority of reason, […] is like administering medicine to the dead[…].”

― Thomas Paine, The American Crisis

“It is useless to attempt to reason a man out of a thing he was never reasoned into.”

― Jonathan Swift

You usually hear that abridged version, but Thomas Paine’s full quotation is actually rather interesting:

“To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.”

― Thomas Paine, The American Crisis

It is indeed quite ineffective to convert an atheist by scripture (though that doesn’t seem to stop them from trying). Yet this quotation seems to claim that the opposite should be equally ineffective: It should be impossible to convert a theist by reason.

Well, then, how else are we supposed to do it!?

Indeed, how did we become atheists in the first place!?

You were born an atheist? No, you were born having absolutely no opinion about God whatsoever. (You were born not realizing that objects don’t fade from existence when you stop seeing them! In a sense, we were all born believing ourselves to be God.)

Maybe you were raised by atheists, and religion never tempted you at all. Lucky you. I guess you didn’t have to be reasoned into atheism.

Well, most of us weren’t. Most of us were raised into religion, and told that it held all the most important truths of morality and the universe, and that believing anything else was horrible and evil and would result in us being punished eternally.

And yet, somehow, somewhere along the way, we realized that wasn’t true. And we were able to realize that because people made rational arguments.

Maybe we heard those arguments in person. Maybe we read them online. Maybe we read them in books that were written by people who died long before we were born. But somehow, somewhere people actually presented the evidence for atheism, and convinced us.

That is, they reasoned us out of something that we were not reasoned into.

I know it can happen. I have seen it happen. It has happened to me.

And it was one of the most important events in my entire life. More than almost anything else, it made me who I am today.

I’m scared that if you keep saying it’s impossible, people will stop trying to do it—and then it will stop happening to people like me.

So please, please stop telling people it’s impossible!

Quotes like these encourage you to simply write off entire swaths of humanity—most of humanity, in fact—judging them as worthless, insane, impossible to reach. When you should be reaching out and trying to convince people of the truth, quotes like these instead tell you to give up and consider anyone who doesn’t already agree with you as your enemy.

Indeed, it seems to me that the only logical conclusion of quotes like these is violence. If it’s impossible to reason with people who oppose us, then what choice do we have, but to fight them?

Violence is a weapon anyone can use.

Reason is the one weapon in the universe that works better when you’re right.

Reason is the sword that only the righteous can wield. Reason is the shield that only protects the truth. Reason is the only way we can ever be sure that the right people win—instead of just whoever happens to be strongest.

Yes, it’s true: reason isn’t always effective, and probably isn’t as effective as it should be. Convincing people to change their minds through rational argument is difficult and frustrating and often painful for both you and them—but it absolutely does happen, and our civilization would have long ago collapsed if it didn’t.

Even people who claim to have renounced all reason really haven’t: they still know 2+2=4 and they still look both ways when they cross the street. Whatever they’ve renounced, it isn’t reason; and maybe, with enough effort, we can help them see that—by reason, of course.

In fact, maybe even literally administering medicine to the dead isn’t such a terrible idea.

There are degrees of death, after all: Someone whose heart has stopped is in a different state than someone whose cerebral activity has ceased, and both of them clearly stand a better chance of being resuscitated than someone who has been vaporized by an explosion.

As our technology improves, more and more states that were previously considered irretrievably dead will instead be considered severe states of illness or injury from which it is possible to recover. We can now restart many stopped hearts; we are working on restarting stopped brains. (Of course we’ll probably never be able to restore someone who got vaporized—unless we figure out how to make backup copies of people?)

Most of the people now lying in the world’s hundreds of thousands of ICU beds would have been considered dead even just 100 years ago. But many of them will recover, because we didn’t give up on them.

So don’t give up on people with crazy beliefs either.

They may seem like they are too far gone, like nothing in the world could ever bring them back to the light of reason. But you don’t actually know that for sure, and the only way to find out is to try.

Of course, you won’t convince everyone of everything immediately. No matter how good your evidence is, that’s just not how this works. But you probably will convince someone of something eventually, and that is still well worthwhile.

You may not even see the effects yourself—people are often loath to admit when they’ve been persuaded. But others will see them. And you will see the effects of other people’s persuasion.

And in the end, reason is really all we have. It’s the only way to know that what we’re trying to make people believe is the truth.

Don’t give up on reason.

And don’t give up on other people, whatever they might believe.

Reflections at the crossroads

Jan 21 JDN 2460332

When this post goes live, I will have just passed my 36th birthday. (That means I’ve lived for about 1.1 billion seconds, so in order to be as rich as Elon Musk, I’d need to have made, on average, since birth, $200 per second—$720,000 per hour.)
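
For anyone who wants to check that back-of-the-envelope arithmetic, here is a quick sketch; the net-worth figure is an assumed round number (roughly $230 billion), not an exact or current one:

```python
# Rough check of the birthday arithmetic.
# The net-worth figure is an assumed round number, not an exact one.

SECONDS_PER_YEAR = 365.25 * 24 * 3600      # about 31.6 million seconds
seconds_lived = 36 * SECONDS_PER_YEAR      # about 1.14 billion seconds

assumed_net_worth = 230e9                  # dollars (assumption)

per_second = assumed_net_worth / seconds_lived   # dollars per second
per_hour = per_second * 3600                     # dollars per hour

print(f"{seconds_lived / 1e9:.2f} billion seconds lived")
print(f"${per_second:,.0f} per second, ${per_hour:,.0f} per hour")
```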

I certainly feel a lot better turning 36 than I did 35. I don’t have any particular additional accomplishments to point to, but my life has already changed quite a bit, in just that one year: Most importantly, I quit my job at the University of Edinburgh, and I am currently in the process of moving out of the UK and back home to Michigan. (We moved the cat over Christmas, and the movers have already come and taken most of our things away; it’s really just us and our luggage now.)

But I still don’t know how to field the question that people have been asking me since I announced my decision to do this months ago:

“What’s next?”

I’m at a crossroads now, trying to determine which path to take. Actually maybe it’s more like a roundabout; it has a whole bunch of different paths, surely not just two or three. The road straight ahead is labeled “stay in academia”; the others at the roundabout are things like “freelance writing”, “software programming”, “consulting”, and “tabletop game publishing”. There’s one well-paved and superficially enticing road that I’m fairly sure I don’t want to take, labeled “corporate finance”.

Right now, I’m just kind of driving around in circles.

Most people don’t seem to quit their jobs without a clear plan for where they will go next. Often they wait until they have another offer in hand that they intend to take. But when I realized just how miserable that job was making me, I made the—perhaps bold, perhaps courageous, perhaps foolish—decision to get out as soon as I possibly could.

It’s still hard for me to fully understand why working at Edinburgh made me so miserable. Many features of an academic career are very appealing to me. I love teaching, I like doing research; I like the relatively flexible hours (and kinda need them, because of my migraines).

I often construct formal decision models to help me make big choices—generally a linear model, where I rate each option on its relative quality along each dimension, then try different weightings of the dimensions. I’ve used this successfully to pick out cars, laptops, even universities. I’m not entrusting my decisions to an algorithm; I often find myself tweaking the parameters to try to get a particular result—but that in itself tells me what I really want, deep down. (Don’t do that in research—people do, and it’s bad—but if the goal is to make yourself happy, your gut feelings are important too.)
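
For anyone curious what that looks like in practice, here is a minimal sketch of such a linear model; the options, dimensions, scores, and weights are all invented for illustration, not my actual numbers:

```python
# A minimal linear decision model: rate each option on a few dimensions,
# weight the dimensions, and compare the weighted totals.
# Every number here is made up for illustration.

options = {
    "university teaching":  {"enjoyment": 9, "income": 5, "flexibility": 7, "low_stress": 4},
    "freelance writing":    {"enjoyment": 9, "income": 3, "flexibility": 9, "low_stress": 6},
    "software programming": {"enjoyment": 6, "income": 8, "flexibility": 6, "low_stress": 6},
    "corporate finance":    {"enjoyment": 3, "income": 9, "flexibility": 4, "low_stress": 3},
}

weights = {"enjoyment": 0.4, "income": 0.25, "flexibility": 0.2, "low_stress": 0.15}

def score(ratings: dict, weights: dict) -> float:
    """Weighted sum of the ratings (higher is better on every dimension)."""
    return sum(weights[dim] * ratings[dim] for dim in weights)

# Rank the options from best to worst under this particular weighting.
for name, ratings in sorted(options.items(), key=lambda kv: -score(kv[1], weights)):
    print(f"{name}: {score(ratings, weights):.2f}")

# Tweaking the weights and re-running is the interesting part: if you keep
# nudging them until a particular option wins, that itself tells you
# something about what you actually want.
```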

My decision models consistently rank university teaching quite high. It generally only gets beaten by freelance writing—which means that maybe I should give freelance writing another try after all.

And yet, my actual experience at Edinburgh was miserable.

What went wrong?

Well, first of all, I should acknowledge that when I split the job “university professor” into teaching and research as separate entries in my decision model, and include all that goes into both jobs—not just the actual teaching, but the grading and administrative tasks; not just doing the research, but also trying to fund and publish it—they both drop lower on the list, and research drops down a lot.

Also, I would rate them both even lower now, having more direct experience of just how awful the exam-grading, grant-writing and journal-submitting can be.

Designing and then grading an exam was tremendously stressful: I knew that many of my students’ futures rested on how they did on exams like this (especially in the UK system, where exams are absurdly overweighted! In most of my classes, the final exam was at least 60% of the grade!). I struggled mightily to make the exam as fair as I could, all the while knowing that it would never really feel fair and I didn’t even have the time to make it the best it could be. You really can’t assess how well someone understands an entire subject in a multiple-choice exam designed to take 90 minutes. It’s impossible.

The worst part of research for me was the rejection.

I mentioned in a previous post how I am hypersensitive to rejection; applying for grants and submitting to journals gave me the worst feelings of rejection I’ve felt in any job. It felt like they were evaluating not only the value of my work, but my worth as a scientist. Failure felt like being told that my entire career was a waste of time.

It was even worse than the feeling of rejection in freelance writing (which is one of the few things that my model tells me is bad about freelancing as a career for me, along with relatively low and uncertain income). I think the difference is that a book publisher is saying “We don’t think we can sell it.”—’we’ and ‘sell’ being vital. They aren’t saying “this is a bad book; it shouldn’t exist; writing it was a waste of time.”; they’re just saying “It’s not a subgenre we generally work with.” or “We don’t think it’s what the market wants right now.” or even “I personally don’t care for it.”. They acknowledge their own subjective perspective and the fact that it’s ultimately dependent on forecasting the whims of an extremely fickle marketplace. They aren’t really judging my book, and they certainly aren’t judging me.

But in research publishing, it was different. Yes, it’s all in very polite language, thoroughly spiced with sophisticated jargon (though some reviewers are more tactful than others). But when your grant application gets rejected by a funding agency or your paper gets rejected by a journal, the underlying message really is “This project is not worth doing”; “This isn’t good science”; “It was/would be a waste of time and money”; “This (theory or experiment you’ve spent years working on) isn’t interesting or important.” Nobody ever came out and said those things, nor did they come out and say “You’re a bad economist and you should feel bad”; but honestly a couple of the reviews did kinda read to me like they wanted to. They thought that the whole idea that human beings care about each other is fundamentally stupid and naive and not worth talking about, much less running experiments on.

It isn’t so much that they convinced me my work was bad science. I did make some mistakes along the way (but nothing fatal; I’ve seen far worse errors by Nobel Laureates). I didn’t have very large samples (because every person I add to the experiment is money I have to pay, and therefore funding I have to come up with). But overall I do believe that my work is sufficiently rigorous to be worth publishing in scientific journals.

It’s more that I came to feel that my work is considered bad, that the kind of work I wanted to do would forever be an uphill battle against an implacable enemy. I already feel exhausted by that battle, and it had only barely begun. I had thought that behavioral economics was a more successful paradigm by now, that it had largely displaced the neoclassical assumptions that came before it; but I was wrong. Except specifically in journals dedicated to experimental and behavioral economics (of which prestigious journals are few—I quickly exhausted them), it really felt like a lot of the feedback I was getting amounted to, “I refuse to believe your paradigm.”.

Part of the problem, also, was that there simply aren’t that many prestigious journals, and they don’t take that many papers. The top 5 journals—which, for whatever reason, command far more respect than any other journals among economists—each accept only about 5-10% of their submissions. Surely more than that are worth publishing; and, to be fair, much of what they reject probably gets published later somewhere else. But it makes a shockingly large difference in your career how many “top 5s” you have; other publications almost don’t matter at all. So once you don’t get into any of those (which of course I didn’t), should you even bother trying to publish somewhere else?

And what else almost doesn’t matter? Your teaching. As long as you show up to class and grade your exams on time (and don’t, like, break the law or something), research universities basically don’t seem to care how good a teacher you are. That was certainly my experience at Edinburgh. (Honestly even their responses to professors sexually abusing their students are pretty unimpressive.)

Some of the other faculty cared, I could tell; there were even some attempts to build a community of colleagues to support each other in improving teaching. But the administration seemed almost actively opposed to it; they didn’t offer any funding to support the program—they wouldn’t even buy us pizza at the meetings, the sort of thing I had as an undergrad for my activist groups—and they wanted to take the time we spent in such pedagogy meetings out of our grading time (probably because if they didn’t, they’d either have to give us less grading, or some of us would be over our allotted hours and they’d owe us compensation).

And honestly, it is teaching that I consider the higher calling.

The difference between 0 people knowing something and 1 knowing it is called research; the difference between 1 person knowing it and 8 billion knowing it is called education.

Yes, of course, research is important. But if all the research suddenly stopped, our civilization would stagnate at its current level of technology, but otherwise continue unimpaired. (Frankly it might spare us the cyberpunk dystopia/AI apocalypse we seem to be hurtling rapidly toward.) Whereas if all education suddenly stopped, our civilization would slowly decline until it ultimately collapsed into the Stone Age. (Actually it might even be worse than that; even Stone Age cultures pass on knowledge to their children, just not through formal teaching. If you include all the ways parents teach their children, it may be literally true that humans cannot survive without education.)

Yet research universities seem to get all of their prestige from their research, not their teaching, and prestige is the thing they absolutely value above all else, so they devote the vast majority of their energy toward valuing and supporting research rather than teaching. In many ways, the administrators seem to see teaching as an obligation, as something they have to do in order to make money that they can spend on what they really care about, which is research.

As such, they are always making classes bigger and bigger, trying to squeeze out more tuition dollars (well, in this case, pounds) from the same number of faculty contact hours. It becomes impossible to get to know all of your students, much less give them all sufficient individual attention. At Edinburgh they even had the gall to refer to their seminars as “tutorials” when they typically had 20+ students. (That is not tutoring!) And then of course there were the lectures, which often had over 200 students.

I suppose it could be worse: It could be athletics they spend all their money on, like most Big Ten universities. (The University of Michigan actually seems to strike a pretty good balance: they are certainly not hurting for athletic funding, but they also devote sizeable chunks of their budget to research, medicine, and yes, even teaching. And unlike virtually all other varsity athletic programs, University of Michigan athletics turns a profit!)

If all the varsity athletics in the world suddenly disappeared… I’m not convinced we’d be any worse off, actually. We’d lose a source of entertainment, but it could probably be easily replaced by, say, Netflix. And universities could re-focus their efforts on academics, instead of acting like a free training and selection system for the pro leagues. The University of California, Irvine certainly seemed no worse off for its lack of varsity football. (Though I admit it felt a bit strange, even to a consummate nerd like me, to have a varsity League of Legends team.)

They keep making the experience of teaching worse and worse, even as they cut faculty salaries and make our jobs more and more precarious.

That might be what really made me most miserable, knowing how expendable I was to the university. If I hadn’t quit when I did, I would have been out after another semester anyway, and going through this same process a bit later. It wasn’t even that I was denied tenure; it was never on the table in the first place. And perhaps because they knew I wouldn’t stay anyway, they didn’t invest anything in mentoring or supporting me. Ostensibly I was supposed to be assigned a faculty mentor immediately; I know the first semester was crazy because of COVID, but after two and a half years I still didn’t have one. (I had a small research budget, which they reduced in the second year; that was about all the support I got. I used it—once.)

So if I do continue on that “academia” road, I’m going to need to do a lot of things differently. I’m not going to put up with a lot of things that I did. I’ll demand a long-term position—if not tenure-track, at least renewable indefinitely, like a lecturer position (as it is in the US, where the tenure-track position is called “assistant professor” and “lecturer” is permanent but not tenured; in the UK, “lecturers” are tenure-track—except at Oxford, and as of 2021, Cambridge—just to confuse you). Above all, I’ll only be applying to schools that actually have some track record for valuing teaching and supporting their faculty.

And if I can’t find any such positions? Then I just won’t apply at all. I’m not going in with the “I’ll take what I can get” mentality I had last time. Our household finances are stable enough that I can afford to wait awhile.

But maybe I won’t even do that. Maybe I’ll take a different path entirely.

For now, I just don’t know.