Why Leap Years?

Mar 3 JDN 2460374

When this post goes live it will be March 3, not March 4, because February had an extra day this year. But what is this nonsense? Why are we adding a day to February?


There are two parts to this answer.

One part is fundamental astronomical truth.

The other part is historically contingent nonsense.

The fundamental astronomical truth is that Earth’s solar year is not a whole-number multiple of its solar day. That’s kind of what you’d expect, seeing as the two are largely independent. (Actually it’s not as obvious as you might think, because orbital resonances do make many satellites have years that are whole-number multiples of their days, or even equal to them; the latter case is called tidal locking.)

So if we’re going to measure time in both years and days, one of two things will happen:

  1. The first day of the year will move around, relative to the solstices—and therefore relative to the seasons.
  2. We need to add or subtract days from some years and not others.

The Egyptians took option 1: 365 days each year, no nonsense, let the solstices fall where they may.

The Romans, on the other hand, had both happen—the Julian calendar did have leap years, but it got them slightly wrong, and as a result the first day of the year gradually moved around. (It’s now about two weeks off, if you were to still use the Julian calendar.)

It wasn’t until the Gregorian calendar that we got a good enough leap year system to stop this from happening—and even it is really only an approximation that would eventually break down and require some further fine-tuning. (It’s just going to be several thousand years, so we’ve got time.)
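
If you want to see just how good the Gregorian approximation is, here is a quick sketch of the rule and its drift in Python (using the usual 365.2422-day figure for the solar year):

    # The Gregorian leap-year rule, and roughly how long until it drifts a full day.
    def is_leap(year):
        """Every 4th year is a leap year, except centuries not divisible by 400."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Any 400-year Gregorian cycle contains 97 leap years:
    leap_days = sum(is_leap(y) for y in range(2000, 2400))   # 97
    mean_year = 365 + leap_days / 400                        # 365.2425 days
    drift_per_year = mean_year - 365.2422                    # ~0.0003 days per year
    print(round(1 / drift_per_year), "years until we're off by a day")  # a few thousand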

So, we need some sort of leap year system. Fine. But why this one?

And that’s where the historically contingent nonsense comes in.

See, if you have 365.2422 days per year, and a moon that orbits you once every 27.32 days, the obvious thing to do would be to find a calendar that divides 365 or 366 days into units of about 27 or 28.

And it turns out you can actually do that pretty well, by having 13 months, each of 28 days (13 × 28 = 364), plus 1 extra day in normal years and 2 extra days in leap years. (They could be a winter solstice holiday, for instance.)

You could even make each month exactly 4 weeks of 7 days, if for some reason you like 7-day weeks (not really sure why we do).
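
(The arithmetic, spelled out as a trivial sketch: 13 months of 28 days covers 364 days, so only one or two more are needed, and every month is exactly four 7-day weeks.)

    months, days_per_month = 13, 28
    base_days = months * days_per_month      # 364
    print(365 - base_days, 366 - base_days)  # 1 extra day normally, 2 in leap years
    print(days_per_month // 7)               # exactly 4 seven-day weeks per month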

But no, that’s not what we did. Of course it’s not.

13 is an unlucky number in Christian societies, because of the betrayal of Judas (though it could even go back further than that).

So we wanted to have only 12 months. Okay, fine.

Then each month is 30 days and we have 5 extra days at the end of the year? Oh no, definitely not.

7 months are 30 days and 5 months are 31 days? No, that would be too easy.

7 months are 31 days, 4 are 30, and 1 is 28, unless it’s 29? Uh… what?

There are all sorts of reasons why it ended up this way:

There’s the fact that the months of July and August were renamed to honor Julius and Augustus respectively.

There’s the fact that there used to be an entire intercalary month which was 27 or 28 days long and functioned kind of like February does now (but it wasn’t February, which already existed).

There are still other calendars in use, such as the Coptic Calendar, the Chinese lunisolar calendar, and the Hijri Calendar. Indeed, what calendar you use seems to be quite strictly determined by your society’s predominant religious denominations.

Basically, it’s a mess. (And it makes programming that involves dates and times surprisingly hard.)
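
If you’ve never had the pleasure: here’s a tiny Python sketch of the kind of trap February sets for date arithmetic (standard library behavior, toy example):

    from datetime import date, timedelta

    leap_day = date(2024, 2, 29)

    # Naively asking for "the same date next year" simply fails:
    try:
        leap_day.replace(year=2025)
    except ValueError as err:
        print("No such date:", err)          # day is out of range for month

    # And "just add 365 days" quietly lands on a different calendar date:
    print(leap_day + timedelta(days=365))    # 2025-02-28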

But calendars are the coordination mechanism par excellence, and here’s the thing about coordination mechanisms:

Once you have one, it’s really hard to change it.

The calendar everyone wants to use is whatever calendar everyone else is using. In order to get anyone to switch, we need to get most people to switch. It doesn’t really matter which one is the best in theory; the best in practice is whatever is actually in use.

That is much easier to do when a single guy has absolute authority—as with, indeed, Julius Caesar and Pope Gregory XIII, for the Julian and Gregorian calendars respectively.

There are other ways to accomplish it: The SI was deliberately designed to be explicitly rational, and is in fact in wide use around the world. The French revolutionaries set out to devise a better way to measure things, and actually got it to stick (mostly).

Then again, we never did adopt the French metric system for time. So it may be that time coordination, being the prerequisite for nearly all other forms of coordination, is so vital that it’s exceptionally difficult to change.

Further evidence in favor of this: The Babylonians used base-60 for everything. We now use it for little besides time (and angles). And we use it for time… probably because we ultimately got it from them.

So while nobody seriously uses “rod”, “furlong”, “firkin”, or “buttload” (yes, that’s a real unit) anymore, we still use the same days, weeks, and months as the Romans and the same hours, minutes, and seconds as the Babylonians. (And while Americans may not use “fortnight” much, I can assure you that Brits absolutely do—and it’s really nice, because it doesn’t have the ambiguity of “biweekly” or “bimonthly”, where it’s never quite clear whether the prefix applies to the rate or the period.)

So, in short, we’re probably stuck with leap years, and furthermore stuck with the weirdness of February.

The only thing I think is likely to seriously cause us to change this system would be widespread space colonization necessitating a universal calendar—but even then I feel like we’ll probably use whatever is in use on Earth anyway.

Even when we colonize space, I think the most likely scenario is that “day” and “year” will still mean Earth-day and Earth-year, and for local days and years you’d use something like “sol” and “rev”. It would just get too confusing to compare people’s ages across worlds otherwise—someone who is 11 on Mars could be 21 on Earth, but 88 on Mercury. (Are they a child, a young adult, or a senior citizen? They’re definitely a young adult—and it’s easiest to see that if you stick to Earth years. Maybe on Mars they can celebrate their 11th rev-sol, but on Earth it’s still their 21st birthday.)
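
(The arithmetic behind that comparison, as a rough sketch with approximate orbital periods; the numbers land within a year of the ones above:)

    EARTH_YEAR_DAYS = 365.25
    LOCAL_YEAR_DAYS = {"Mars": 687.0, "Mercury": 88.0}   # approximate orbital periods

    def local_revs(earth_years, world):
        """Convert an age in Earth-years into local years ("revs") on another world."""
        return earth_years * EARTH_YEAR_DAYS / LOCAL_YEAR_DAYS[world]

    print(round(local_revs(21, "Mars")))      # about 11
    print(round(local_revs(21, "Mercury")))   # about 87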

So we’re probably going to be adding these leap years (and, most of us, forgetting which centuries don’t have one) until the end of time.

Serenity and its limits

Feb 25 JDN 2460367

God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.

Of course I don’t care for its religious message (and the full prayer is even more overtly religious), but the serenity prayer does capture an important insight into some of the most difficult parts of human existence.

Some things are as we would like them to be. They don’t require our intervention. (Though we may still stand to benefit from teaching ourselves to savor them and express gratitude for them.)

Other things are not as we would like them to be. The best option, of course, would be to change them.

But such change is often difficult, and sometimes practically impossible.

Sometimes we don’t even know whether change is possible—that’s where the wisdom to know the difference comes in. This is a wisdom we often lack, but it’s at least worth striving for.

If it is impossible to change what we want to change, then we are left with only one choice:

Do we accept it, or not?

The serenity prayer tells us to accept it. There is wisdom in this. Often it is the right answer. Some things about our lives are awful, but simply cannot be changed by any known means.

Death, for instance.

Someday, perhaps, we will finally conquer death, and humanity—or whatever humanity has become—will enter a new era of existence. But today is not that day. When grieving the loss of people we love, ultimately our only option is to accept that they are gone, and do our best to appreciate what they left behind, and the parts of them that are still within us. They would want us to carry on and live full lives, not forever be consumed by grief.

There are many other things we’d like to change, and maybe someday we will, but right now, we simply don’t know how: diseases we can’t treat, problems we can’t solve, questions we can’t answer. It’s often useful for someone to be trying to push those frontiers, but for any given person, the best option is often to find a way to accept things as they are.

But there are also things I cannot change and yet will not accept.

Most of these things fall into one broad category:

Injustice.

I can’t end war, or poverty, or sexism, or racism, or homophobia. Neither can you. Neither can any one person, or any hundred people, or any thousand people, or probably even any million people. (If all it took were a million dreams, we’d be there already. A billion might be enough—though it would depend which billion people shared the dream.)

I can’t. You can’t. But we can.

And here I mean “we” in a very broad sense indeed: Humanity as a collective whole. All of us together can end injustice—and indeed that is the only way it ever could be ended, by our collective action. Collective action is what causes injustice, and collective action is what can end it.

I therefore consider serenity in the face of injustice to be a very dangerous thing.

At times, and to certain degrees, that serenity may be necessary.

Those who are right now in the grips of injustice may need to accept it in order to survive. Reflecting on the horror of a concentration camp won’t get you out of it. Embracing the terror of war won’t save you from being bombed. Weeping about the sorrow of being homeless won’t get you off the streets.

Even for those of us who are less directly affected, it may sometimes be wisest to blunt our rage and sorrow at injustice—for otherwise they could be paralyzing, and if we are paralyzed, we can’t help anyone.

Sometimes we may even need to withdraw from the fight for justice, simply because we are too exhausted to continue. I read recently of a powerful analogy about this:

A choir can sing the same song forever, as long as its singers take turns resting.

If everyone tries to sing their very hardest all the time, the song must eventually end, as no one can sing forever. But if we rotate our efforts, so that at any given moment some are singing while others are resting, then we theoretically could sing for all time—as some of us die, others would be born to replace us in the song.

For a literal choir this seems absurd: Who even wants to sing the same song forever? (Lamb Chop, I guess.)

But the fight for justice probably is one we will need to continue forever, in different forms in different times and places. There may never be a perfectly just society, and even if there is, there will be no guarantee that it remains so without eternal vigilance. Yet the fight is worth it: in so many ways our society is already more just than it once was, and could be made more so in the future.

This fight will only continue if we don’t accept the way things are. Even when any one of us can’t change the world—even if we aren’t sure how many of us it would take to change the world—we still have to keep trying.

But as in the choir, each one of us also needs to rest.

We can’t all be fighting all the time as hard as we can. (I suppose if literally everyone did that, the fight for justice would be immediately and automatically won. But that’s never going to happen. There will always be opposition.)

And when it is time for each of us to rest, perhaps some serenity is what we need after all. Perhaps there is a balance to be found here: We do not accept things as they are, but we do accept that we cannot change them immediately or single-handedly. We accept that our own strength is limited and sometimes we must withdraw from the fight.

So yes, we need some serenity. But not too much.

Enough serenity to accept that we won’t win the fight immediately or by ourselves, and sometimes we’ll need to stop fighting and rest. But not so much serenity that we give up the fight altogether.

For there are many things that I can’t change—but we can.

Love is more than chemicals

Feb 18 JDN 2460360

One of the biggest problems with the rationalist community is an inability to express sincerity and reverence.

I get it: Religion is the world’s greatest source of sincerity and reverence, and religion is the most widespread and culturally important source of irrationality. So we declare ourselves enemies of religion, and also end up being enemies of sincerity and reverence.

But in doing so, we lose something very important. We cut ourselves off from some of the greatest sources of meaning and joy in human life.

In fact, we may even be undermining our own goals: If we don’t offer people secular, rationalist forms of reverence, they may find they need to turn back to religion in order to fill that niche.

One of the most pernicious forms of this anti-sincerity, anti-reverence attitude (I can’t just say ‘insincere’ or ‘irreverent’, as those have different meanings) is surely this one:

Love is just a chemical reaction.

(I thought it seemed particularly apt to focus on this one during the week of Valentine’s Day.)

On the most casual of searches I could find at least half a dozen pop-sci articles and a YouTube video propounding this notion (though I could also find a few articles trying to debunk the notion as well).

People who say this sort of thing seem to think that they are being wise and worldly while the rest of us are just being childish and naive. They think we are seeing something that isn’t there. In fact, they are being jaded and cynical. They are failing to see something that is there.

(Perhaps the most extreme form of this was from Rick & Morty; and while Rick as a character is clearly intended to be jaded and cynical, far too many people also see him as a role model.)

Part of the problem may also be a failure to truly internalize the Basic Fact of Cognitive Science:

You are your brain.

No, your consciousness is not an illusion. It’s not an “epiphenomenon” (whatever that is; I’ve never encountered one in real life). Your mind is not fake or imaginary. Your mind actually exists—and it is a product of your brain. Both brain and mind exist, and are in fact the same.

It’s so hard for people to understand this that some become dualists, denying the unity of the brain and the mind. That, at least, I can sympathize with, even though we have compelling evidence that it is wrong. But there’s another tack people sometimes take, eliminative materialism, where they try to deny that the mind exists at all. And that I truly do not understand. How can you think that nobody can think? Yet intelligent, respected philosophers have claimed to believe such things.

Love is one of the most important parts of our lives.

This may be more true of humans than of literally any other entity in the known universe.

The only serious competition comes from other mammals: They are really the only other beings we know of that are capable of love. And even they don’t seem to be as good at it as we are; they can love only those closest to them, while we can love entire nations and even abstract concepts.

And once you go beyond that, even to reptiles—let alone fish, or amphibians, or insects, or molluscs—it’s not clear that other animals are really capable of love at all. They seem to be capable of some forms of thought and feeling: They get hungry, or angry, or horny. But do they really love?

And even the barest emotional capacities of an insect are still categorically beyond what most of the universe is capable of feeling, which is to say: Nothing. The vast, vast majority of the universe feels neither love nor hate, neither joy nor pain.

Yet humans can love, and do love, and it is a large part of what gives our lives meaning.

I don’t just mean romantic love here, though I do think it’s worth noting that people who dismiss the reality of romantic love somehow seem reluctant to do the same for the love parents have for their children—even though it’s made of pretty much the same brain chemicals. Perhaps there is a limit to their cynicism.

Yes, love is made of chemicals—because everything is made of chemicals. We live in a material, chemical universe. Saying that love is made of chemicals is an almost completely vacuous statement; it’s basically tantamount to saying that love exists.

In other contexts, you already understand this.

“That’s not a bridge, it’s just a bunch of iron atoms!” rightfully strikes you as an absurd statement to make. Yes, the bridge is made of steel, and steel is mostly iron, and everything is made of atoms… but clearly there’s a difference between a random pile of iron and a bridge.

“That’s not a computer, it’s just a bunch of silicon atoms!” similarly registers as nonsense: Yes, it is indeed mostly made of silicon, but beach sand and quartz crystals are not computers.

It is in this same sense that joy is made of dopamine and love is made of chemical reactions. Yes, those are in fact the constituent parts—but things are more than just their parts.

I think that on some level, even most rationalists recognize that love is more than some arbitrary chemical reaction. I think “love is just chemicals” is mainly something people turn to for a couple of reasons: Sometimes, they are so insistent on rejecting everything that even resembles religious belief that they end up rejecting all meaning and value in human life. Other times, they have been so heartbroken that they try to convince themselves love isn’t real—to dull the pain. (But of course if it weren’t, there would be no pain to dull.)

But love is no more (or less) a chemical reaction than any other human experience: The very belief “love is just a chemical reaction” is, itself, made of chemical reactions.

Everything we do is made of chemical reactions, because we are made of chemical reactions.

Part of the problem here—and with the Basic Fact of Cognitive Science in general—is that we really have no idea how this works. For most of what we deal with in daily life, and even an impressive swath of the overall cosmos, we have a fairly good understanding of how things work. We know how cars drive, how wind blows, why rain falls; we even know how cats purr and why birds sing. But when it comes to understanding how the physical matter of the brain generates the subjective experiences of thought, feeling, and belief—of which love is made—we lack even the most basic understanding. The correlation between the two is far too strong to deny; but as far as causal mechanisms, we know absolutely nothing. (Indeed, worse than that: We can scarcely imagine a causal mechanism that would make any sense. We not only don’t know the answer; we don’t know what an answer would look like.)

So, no, I can’t tell you how we get from oxytocin and dopamine to love. I don’t know how that makes any sense. No one does. But we do know it’s true.

And just like everything else, love is more than the chemicals it’s made of.

Let’s call it “copytheft”

Feb 11 JDN 2460353

I have written previously about how ridiculous it is that we refer to the unauthorized copying of media such as music and video games as “piracy” as though it were somehow equivalent to capturing ships on the high seas.

In that post a few years ago I suggested calling it simply “unauthorized copying”, but that clearly isn’t catching on, perhaps because it’s simply too much of a mouthful. So today I offer a compromise:

Let’s call it “copytheft”.

That takes no longer to say than “piracy” (and only slightly longer to write), and far more clearly states what’s actually going on. No ships have been seized on the high seas; there has been no murder, arson, or slavery.

Yes, it’s debatable whether copytheft really constitutes theft—and I would generally argue that it does not—but just from hearing that word, you would probably infer that the following process took place:

  1. I took a thing.
  2. I made a copy of that thing that I wasn’t supposed to.
  3. I put the original thing back where it was, unharmed.

The paradigmatic example of this theft-copy-replace sequence would be a key, of course: You take someone’s key, copy it, then put the key back where it was, so you now can unlock their locks but they are none the wiser.

With unauthorized copying of media, you’re not exactly doing steps 1 and 3; the copier often has the media completely legitimately before they make the copy, and it may not even have a clear physical location to be put back to (it must be physically stored somewhere, but particularly if it’s streamed from the cloud it hardly matters where).

But you’re definitely doing step 2, and that was the only part that had a permanent effect; so I think that the nomenclature still seems to work well enough.

Copytheft also has a similar sound to copyleft, the use of alternative intellectual property mechanisms by authors to grant broader licensing than is ordinarily afforded by copyright, and also to copyfraud, the crime of claiming exclusive copyright to content that is in fact in the public domain. Hopefully that common structure will help the term get some purchase.

Of course, I can hardly bring a word into widespread use on my own. Others like you have to not only read it, but like it enough that you’re willing to actually use it—and then we need a certain critical mass of people using it in order to make it actually catch on.

So, I’d like to take a moment to offer you some justification why it’s worth changing to this new word.

First, it is admittedly imperfect; by containing the word “theft”, it already feels like we’re conceding something to the defenders of copyright.

But by including the word “copy” in the term, we can draw attention to the most important aspect that distinguishes copytheft from, well, theft:

The original owner still has the thing.

That’s the part that they want us to forget, and that the harsh word “piracy” is designed to make you forget. A ship that is captured by pirates is a ship that may never again sail for your own navy. A song that is “pirated”—copythefted—is one that not only the original owners, but also everyone who bought it, still have in exactly the same state they did before.

Thus it simply cannot be that copytheft takes money out of the hands of artists. At worst, it fails to give money to artists.

That could still be a bad thing: Artists need to pay bills too, and a world where nobody pays for any art is surely a world with a lot fewer artists—and the ones who remain would be far more miserable. But it’s clearly a different sort of thing than ordinary theft, as nothing has been lost.

Moreover, it’s not clear that in most cases copytheft even does fail to give money that would otherwise have been given. Maybe sometimes it does—a certain proportion of people who copytheft a given song, film, or video game might have been willing to pay the original price if the copythefted version had not been available. But typically I suspect that people who’d be willing to pay full price… do pay full price. Thus, the people who are copythefting the media wouldn’t have bought it at full price anyway.

They might have bought it at some lower price, in which case that is foregone payment; but it’s surely considerably less than the “losses” often reported by the film and music industries, which seem to be based on the assumption that everyone who copythefts would have otherwise paid full price. And in fact many people might have been unwilling to buy at any nonzero price, and were only willing to copytheft the media precisely because it didn’t cost them any money or a great deal of effort to do so.
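
A toy calculation makes the gap obvious (every number here is invented purely for illustration):

    # Why (number of copythefts) x (full price) overstates foregone revenue.
    full_price = 60.00
    copythefts = 1000

    # Invented willingness-to-pay among those 1,000 people:
    willingness_to_pay = [(60.00, 50), (20.00, 250), (0.00, 700)]  # (price, people)

    claimed_loss = copythefts * full_price                          # $60,000 by the industry's math
    foregone = sum(price * count for price, count in willingness_to_pay)
    print(claimed_loss, foregone)                                   # 60000.0 vs 8000.0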

And in fact if you think about it, what about people who would have been willing to pay more than the original price? Surely there were many of them as well, yet we don’t grant media corporations the right to that money. That is also money that they could have been given but weren’t—and we decided, as a society, that they didn’t deserve to have it. It’s not that it would be impossible to do so: We could give corporations the authority to price-discriminate on all of their media. (They probably couldn’t do it perfectly, but they could surely do it quite well.) But we made the policy choice to live in a world where media is sold by single-price monopolies rather than one where it is sold by price-discriminating monopolies.

The mere fact that someone might have been willing to pay you more money if the market were different does not entitle you to receive that money. It has not been stolen from you. Indeed, typically it’s more that you have not been allowed to exploit them. It’s usually the presence of competition that prevents corporations from receiving the absolute maximum profit they might potentially have received if they had full control over the market. Corporations making less profit than they otherwise would have is generally a sign of good economic policy—a sign that things are reasonably fair.

Why else is “copytheft” a good word to use?

Above all, we do not allow our terms to be defined by our opponents.

We don’t allow them to insinuate that our technically violating draconian regulations designed to maximize the profits of Disney and Viacom somehow constitutes a terrible crime against other human beings.

“Piracy is not a victimless crime”, they will say.

Well, actual piracy isn’t. But copytheft? Yeah, uh, it kinda is.

Maybe not quite as victimless as, say, marijuana or psilocybin, which no one even has any rational reason to prefer you not do. But still, you’re not really making anyone else worse off—that sounds pretty victimless.

Of course, it does give us less reason to wear tricorn hats and eyepatches.

But guess what? You can still do that anyway!

Adversarial design

Feb 4 JDN 2460346

Have you noticed how Amazon feels a lot worse lately? Years ago, it was extremely convenient: You’d just search for what you want, it would give you good search results, you could buy what you want and be done. But now you have to slog through “sponsored results” and a bunch of random crap made by no-name companies in China before you can get to what you actually want.

Temu is even worse, and has been from the start: You can’t buy anything on Temu without first being inundated with ads. It’s honestly such an awful experience, I don’t understand why anyone is willing to buy anything from Temu.

#WelcomeToCyberpunk, I guess.

Even some video games have become like this: The free-to-play or “freemium” business model seems to be taking off, where you don’t pay money for the game itself, but then have to deal with ads inside the game trying to sell you additional content, because that’s where the developers actually make their money. And now AAA firms like EA and Ubisoft are talking about going to a subscription-based model where you don’t even own your games anymore. (Fortunately there’s been a lot of backlash against that; I hope it persists.)

Why is this happening? Isn’t capitalism supposed to make life better for consumers? Isn’t competition supposed to make products and services improve over time?

Well, first of all, these markets are clearly not as competitive as they should be. Amazon has a disturbingly large market share, and while the video game market is more competitive, it’s still dominated by a few very large firms (like EA and Ubisoft).

But I think there’s a deeper problem here, one which may be specific to media content.

What I mean by “media content” here is fairly broad: I would include art, music, writing, journalism, film, and video games.

What all of these things have in common is that they are not physical products (they’re not like a car or a phone that is a single physical object), but they are also not really services either (they aren’t something you just do as an action and it’s done, like a haircut, a surgery, or a legal defense).

Another way of thinking about this is that media content can be copied with zero marginal cost.

Because it can be copied with zero marginal cost, media content can’t simply be made and sold the way that conventional products and services are. There are a few different ways it can be monetized.


The most innocuous way is commission or patronage, where someone pays someone else to create a work because they want that work. This is totally unproblematic. You want a piece of art, you pay an artist, they make it for you; great. Maybe you share copies with the world, maybe you don’t; whatever. It’s good either way.

Unfortunately, it’s hard to sustain most artists and innovators on that model alone. (In a sense I’m using a patronage model, because I have a Patreon. But I’m not making anywhere near enough to live on that way.)

The second way is intellectual property, which I have written about before, and surely will again. If you can enforce limits on who is allowed to copy a work, then you can make a work and sell it for profit without fear of being undercut by someone else who simply copies it and sells it for cheaper. A detailed discussion of that is beyond the scope of this post, but you can read those previous posts, and I can give you the TLDR version: Some degree of intellectual property is probably necessary, but in our current society, it has clearly been taken much too far. I think artists and authors deserve to be able to copyright (or maybe copyleft) their work—but probably not for 70 years after their death.

And then there is a third way, the most insidious way: advertising. If you embed advertisements for other products and services within your content, you can then sell those ad slots for profit. This is how newspapers stay afloat, mainly; subscriptions have never been the majority of their revenue. It’s how TV was supported before cable and streaming—and cable usually has ads too, and streaming is starting to.

There is something fundamentally different about advertising as a service. Whereas most products and services you encounter in a capitalist society are made for you, designed for you to use, advertising is made at you, designed to manipulate you.

I’ve heard it put well this way:

If you’re not paying, you aren’t the customer; you’re the product.

Monetizing content by advertising effectively makes your readers (or viewers, players, etc.) into the product instead of the customer.

I call this effect adversarial design.

I chose this term because it not only conveys the right sense of being an adversary; it also includes the word ‘ad’ and the same Latin root ‘advertere’ as ‘advertising’.

When a company designs a car or a phone, they want it to appeal to customers—they want you to like it. Yes, they want to take your money; but it’s a mutually beneficial exchange. They get money, you get a product; you’re both happier.

When a company designs an ad, they want it to affect customers—they want you to do what it says, whether you like it or not. And they wouldn’t be doing it if they thought you would buy it anyway—so they are basically trying to make you do something you wouldn’t otherwise have done.

In other words, when designing a product, corporations want to be your friend.

When designing an ad, they become your enemy.

You would absolutely prefer not to have ads. You don’t want your attention taken in this way. But the way that these corporations make money—disgustingly huge sums of money—is by forcing those ads in your face anyway.

Yes, to be fair, there might be some kinds of ads that aren’t too bad. Simple, informative, unobtrusive ads that inform you that something is available you might not otherwise have known about. Movie trailers are like this; people often enjoy watching movie trailers, and they want to see what movies are going to come out next. That’s fine. I have no objection to that.

But it should be clear to anyone who has, um, used the Internet in the past decade that we have gone far, far beyond that sort of advertising. Ads have become aggressive, manipulative, aggravating, and—above all—utterly ubiquitous. You can’t escape them. They’re everywhere. Even when you use ad-block software (which I highly recommend, particularly Adblock Plus—which is free), you still can’t completely escape them.

That’s another thing that should make it pretty clear that there’s something wrong with ads: People are willing to make efforts or even pay money to make ads go away.

Whenever there is a game I like that’s ad-supported but you can pay to make the ads go away, I always feel like I’m being extorted, even if what I have to pay would have been a totally reasonable price for the game. Come on, just sell me the game. Don’t give me the game for free and then make me pay to make it not unpleasant. Don’t add anti-features.

This is clearly not a problem that market competition alone will solve. Even in highly competitive markets, advertising is still ubiquitous, aggressive and manipulative. In fact, competition may even make it worse—a true monopoly wouldn’t need to advertise very much.

Consider Coke and Pepsi ads; they’re actually relatively pleasant, aren’t they? Because all they’re trying to do is remind you and make you thirsty so you’ll buy more of the product you were already buying. They aren’t really trying to get you to buy something you wouldn’t have otherwise. They know that their duopoly is solid, and only a true Black Swan event would unseat their hegemony.

And have you ever seen an ad for your gas company? I don’t think I have—probably because I didn’t have a choice in who my gas company was; there was only one that covered my area. So why bother advertising to me?

If competition won’t fix this, what will? Is there some regulation we could impose that would make advertising less obtrusive? People have tried, without much success. I think imposing an advertising tax would help, but even that might not do enough.

What I really think we need right now is to recognize the problem and invest in solving it. Right now we have megacorporations which are thoroughly (literally) invested in making advertising more obtrusive and more ubiquitous. We need other institutions—maybe government, maybe civil society more generally—that are similarly invested in counteracting it.


Otherwise, it’s only going to get worse.

Administering medicine to the dead

Jan 28 JDN 2460339

Here are a couple of pithy quotes that go around rationalist circles from time to time:

“To argue with a man who has renounced the use and authority of reason, […] is like administering medicine to the dead[…].”

Thomas Paine, The American Crisis

“It is useless to attempt to reason a man out of a thing he was never reasoned into.”

Jonathan Swift

You usually hear that abridged version, but Thomas Paine’s full quotation is actually rather interesting:

“To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.”

― Thomas Paine, The American Crisis

It is indeed quite ineffective to convert an atheist by scripture (though that doesn’t seem to stop them from trying). Yet this quotation seems to claim that the opposite should be equally ineffective: It should be impossible to convert a theist by reason.

Well, then, how else are we supposed to do it!?

Indeed, how did we become atheists in the first place!?

You were born an atheist? No, you were born having absolutely no opinion about God whatsoever. (You were born not realizing that objects don’t fade from existence when you stop seeing them! In a sense, we were all born believing ourselves to be God.)

Maybe you were raised by atheists, and religion never tempted you at all. Lucky you. I guess you didn’t have to be reasoned into atheism.

Well, most of us weren’t. Most of us were raised into religion, and told that it held all the most important truths of morality and the universe, and that believing anything else was horrible and evil and would result in us being punished eternally.

And yet, somehow, somewhere along the way, we realized that wasn’t true. And we were able to realize that because people made rational arguments.

Maybe we heard those arguments in person. Maybe we read them online. Maybe we read them in books that were written by people who died long before we were born. But somehow, somewhere people actually presented the evidence for atheism, and convinced us.

That is, they reasoned us out of something that we were not reasoned into.

I know it can happen. I have seen it happen. It has happened to me.

And it was one of the most important events in my entire life. More than almost anything else, it made me who I am today.

I’m scared that if you keep saying it’s impossible, people will stop trying to do it—and then it will stop happening to people like me.

So please, please stop telling people it’s impossible!

Quotes like these encourage you to simply write off entire swaths of humanity—most of humanity, in fact—judging them as worthless, insane, impossible to reach. When you should be reaching out and trying to convince people of the truth, quotes like these instead tell you to give up and consider anyone who doesn’t already agree with you as your enemy.

Indeed, it seems to me that the only logical conclusion of quotes like these is violence. If it’s impossible to reason with people who oppose us, then what choice do we have, but to fight them?

Violence is a weapon anyone can use.

Reason is the one weapon in the universe that works better when you’re right.

Reason is the sword that only the righteous can wield. Reason is the shield that only protects the truth. Reason is the only way we can ever be sure that the right people win—instead of just whoever happens to be strongest.

Yes, it’s true: reason isn’t always effective, and probably isn’t as effective as it should be. Convincing people to change their minds through rational argument is difficult and frustrating and often painful for both you and them—but it absolutely does happen, and our civilization would have long ago collapsed if it didn’t.

Even people who claim to have renounced all reason really haven’t: they still know 2+2=4 and they still look both ways when they cross the street. Whatever they’ve renounced, it isn’t reason; and maybe, with enough effort, we can help them see that—by reason, of course.

In fact, maybe even literally administering medicine to the dead isn’t such a terrible idea.

There are degrees of death, after all: Someone whose heart has stopped is in a different state than someone whose cerebral activity has ceased, and both of them clearly stand a better chance of being resuscitated than someone who has been vaporized by an explosion.

As our technology improves, more and more states that were previously considered irretrievably dead will instead be considered severe states of illness or injury from which it is possible to recover. We can now restart many stopped hearts; we are working on restarting stopped brains. (Of course we’ll probably never be able to restore someone who got vaporized—unless we figure out how to make backup copies of people?)

Most of the people who now live in the world’s hundreds of thousands of ICU beds would have been considered dead even just 100 years ago. But many of them will recover, because we didn’t give up on them.

So don’t give up on people with crazy beliefs either.

They may seem like they are too far gone, like nothing in the world could ever bring them back to the light of reason. But you don’t actually know that for sure, and the only way to find out is to try.

Of course, you won’t convince everyone of everything immediately. No matter how good your evidence is, that’s just not how this works. But you probably will convince someone of something eventually, and that is still well worthwhile.

You may not even see the effects yourself—people are often loath to admit when they’ve been persuaded. But others will see them. And you will see the effects of other people’s persuasion.

And in the end, reason is really all we have. It’s the only way to know that what we’re trying to make people believe is the truth.

Don’t give up on reason.

And don’t give up on other people, whatever they might believe.

Reflections at the crossroads

Jan 21 JDN 2460332

When this post goes live, I will have just passed my 36th birthday. (That means I’ve lived for about 1.1 billion seconds, so in order to be as rich as Elon Musk, I’d need to have made, on average, since birth, $200 per second—$720,000 per hour.)

I certainly feel a lot better turning 36 than I did 35. I don’t have any particular additional accomplishments to point to, but my life has already changed quite a bit, in just that one year: Most importantly, I quit my job at the University of Edinburgh, and I am currently in the process of moving out of the UK and back home to Michigan. (We moved the cat over Christmas, and the movers have already come and taken most of our things away; it’s really just us and our luggage now.)

But I still don’t know how to field the question that people have been asking me since I announced my decision to do this months ago:

“What’s next?”

I’m at a crossroads now, trying to determine which path to take. Actually maybe it’s more like a roundabout; it has a whole bunch of different paths, surely not just two or three. The road straight ahead is labeled “stay in academia”; the others at the roundabout are things like “freelance writing”, “software programming”, “consulting”, and “tabletop game publishing”. There’s one well-paved and superficially enticing road that I’m fairly sure I don’t want to take, labeled “corporate finance”.

Right now, I’m just kind of driving around in circles.

Most people don’t seem to quit their jobs without a clear plan for where they will go next. Often they wait until they have another offer in hand that they intend to take. But when I realized just how miserable that job was making me, I made the—perhaps bold, perhaps courageous, perhaps foolish—decision to get out as soon as I possibly could.

It’s still hard for me to fully understand why working at Edinburgh made me so miserable. Many features of an academic career are very appealing to me. I love teaching, I like doing research; I like the relatively flexible hours (and kinda need them, because of my migraines).

I often construct formal decision models to help me make big choices—generally it’s a linear model, where I simply rate each option by its relative quality in a particular dimension, then try different weightings of all the different dimensions. I’ve used this successfully to pick out cars, laptops, even universities. I’m not entrusting my decisions to an algorithm; I often find myself tweaking the parameters to try to get a particular result—but that in itself tells me what I really want, deep down. (Don’t do that in research—people do, and it’s bad—but if the goal is to make yourself happy, your gut feelings are important too.)
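
For the curious, the shape of that kind of model is almost embarrassingly simple; here’s a minimal sketch (the options, dimensions, ratings, and weights are all made up for illustration):

    # A linear decision model: weighted sum of ratings along each dimension.
    options = {
        "university teaching": {"enjoyment": 8, "income": 5, "flexibility": 7},
        "freelance writing":   {"enjoyment": 9, "income": 3, "flexibility": 9},
        "corporate finance":   {"enjoyment": 3, "income": 9, "flexibility": 4},
    }
    weights = {"enjoyment": 0.5, "income": 0.3, "flexibility": 0.2}

    def score(ratings):
        return sum(weights[dim] * ratings[dim] for dim in weights)

    for name in sorted(options, key=lambda o: -score(options[o])):
        print(f"{name}: {score(options[name]):.1f}")

    # The revealing part is re-running this with different weights and noticing
    # which ranking you catch yourself hoping for.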

My decision models consistently rank university teaching quite high. It generally only gets beaten by freelance writing—which means that maybe I should give freelance writing another try after all.

And yet, my actual experience at Edinburgh was miserable.

What went wrong?

Well, first of all, I should acknowledge that when I separate out the job “university professor” into teaching and research as separate jobs in my decision model, and include all that goes into both jobs—not just the actual teaching, but the grading and administrative tasks; not just doing the research, but also trying to fund and publish it—they both drop lower on the list, and research drops down a lot.

Also, I would rate them both even lower now, having more direct experience of just how awful the exam-grading, grant-writing and journal-submitting can be.

Designing and then grading an exam was tremendously stressful: I knew that many of my students’ futures rested on how they did on exams like this (especially in the UK system, where exams are absurdly overweighted! In most of my classes, the final exam was at least 60% of the grade!). I struggled mightily to make the exam as fair as I could, all the while knowing that it would never really feel fair and I didn’t even have the time to make it the best it could be. You really can’t assess how well someone understands an entire subject in a multiple-choice exam designed to take 90 minutes. It’s impossible.

The worst part of research for me was the rejection.

I mentioned in a previous post how I am hypersensitive to rejection; applying for grants and submitting to journals brought on the worst feelings of rejection I’ve ever felt in any job. It felt like they were evaluating not only the value of my work, but my worth as a scientist. Failure felt like being told that my entire career was a waste of time.

It was even worse than the feeling of rejection in freelance writing (which is one of the few things that my model tells me is bad about freelancing as a career for me, along with relatively low and uncertain income). I think the difference is that a book publisher is saying “We don’t think we can sell it.”—’we’ and ‘sell’ being vital. They aren’t saying “this is a bad book; it shouldn’t exist; writing it was a waste of time.”; they’re just saying “It’s not a subgenre we generally work with.” or “We don’t think it’s what the market wants right now.” or even “I personally don’t care for it.”. They acknowledge their own subjective perspective and the fact that it’s ultimately dependent on forecasting the whims of an extremely fickle marketplace. They aren’t really judging my book, and they certainly aren’t judging me.

But in research publishing, it was different. Yes, it’s all in very polite language, thoroughly spiced with sophisticated jargon (though some reviewers are more tactful than others). But when your grant application gets rejected by a funding agency or your paper gets rejected by a journal, the sense really basically is “This project is not worth doing.”; “This isn’t good science.”; “It was/would be a waste of time and money.”; “This (theory or experiment you’ve spent years working on) isn’t interesting or important.” Nobody ever came out and said those things, nor did they come out and say “You’re a bad economist and you should feel bad.”; but honestly a couple of the reviews did kinda read to me like they wanted to say that. They thought that the whole idea that human beings care about each other is fundamentally stupid and naive and not worth talking about, much less running experiments on.

It isn’t so much that I believed them, or ever really thought that my work was bad science. I did make some mistakes along the way (but nothing vital; I’ve seen far worse errors by Nobel Laureates). I didn’t have very large samples (because every person I add to the experiment is money I have to pay, and therefore funding I have to come up with). But overall I do believe that my work is sufficiently rigorous to be worth publishing in scientific journals.

It’s more that I came to feel that my work is considered bad, that the kind of work I wanted to do would forever be an uphill battle against an implacable enemy. I already feel exhausted by that battle, and it had only barely begun. I had thought that behavioral economics was a more successful paradigm by now, that it had largely displaced the neoclassical assumptions that came before it; but I was wrong. Except specifically in journals dedicated to experimental and behavioral economics (of which prestigious journals are few—I quickly exhausted them), it really felt like a lot of the feedback I was getting amounted to, “I refuse to believe your paradigm.”.

Part of the problem, also, was that there simply aren’t that many prestigious journals, and they don’t take that many papers. The top 5 journals—which, for whatever reason, command far more respect than any other journals among economists—each accept only about 5-10% of their submissions. Surely more than that are worth publishing; and, to be fair, much of what they reject probably gets published later somewhere else. But it makes a shockingly large difference in your career how many “top 5s” you have; other publications almost don’t matter at all. So once you don’t get into any of those (which of course I didn’t), should you even bother trying to publish somewhere else?

And what else almost doesn’t matter? Your teaching. As long as you show up to class and grade your exams on time (and don’t, like, break the law or something), research universities basically don’t seem to care how good a teacher you are. That was certainly my experience at Edinburgh. (Honestly even their responses to professors sexually abusing their students are pretty unimpressive.)

Some of the other faculty cared, I could tell; there were even some attempts to build a community of colleagues to support each other in improving teaching. But the administration seemed almost actively opposed to it; they didn’t offer any funding to support the program—they wouldn’t even buy us pizza at the meetings, the sort of thing I had as an undergrad for my activist groups—and they wanted to take the time we spent in such pedagogy meetings out of our grading time (probably because if they didn’t, they’d either have to give us less grading, or some of us would be over our allotted hours and they’d owe us compensation).

And honestly, it is teaching that I consider the higher calling.

The difference between 0 people knowing something and 1 knowing it is called research; the difference between 1 person knowing it and 8 billion knowing it is called education.

Yes, of course, research is important. But if all the research suddenly stopped, our civilization would stagnate at its current level of technology, but otherwise continue unimpaired. (Frankly it might spare us the cyberpunk dystopia/AI apocalypse we seem to be hurtling rapidly toward.) Whereas if all education suddenly stopped, our civilization would slowly decline until it ultimately collapsed into the Stone Age. (Actually it might even be worse than that; even Stone Age cultures pass on knowledge to their children, just not through formal teaching. If you include all the ways parents teach their children, it may be literally true that humans cannot survive without education.)

Yet research universities seem to get all of their prestige from their research, not their teaching, and prestige is the thing they absolutely value above all else, so they devote the vast majority of their energy toward valuing and supporting research rather than teaching. In many ways, the administrators seem to see teaching as an obligation, as something they have to do in order to make money that they can spend on what they really care about, which is research.

As such, they are always making classes bigger and bigger, trying to squeeze out more tuition dollars (well, in this case, pounds) from the same number of faculty contact hours. It becomes impossible to get to know all of your students, much less give them all sufficient individual attention. At Edinburgh they even had the gall to refer to their seminars as “tutorials” when they typically had 20+ students. (That is not tutoring!) And then of course there were the lectures, which often had over 200 students.

I suppose it could be worse: It could be athletics they spend all their money on, like most Big Ten universities. (The University of Michigan actually seems to strike a pretty good balance: they are certainly not hurting for athletic funding, but they also devote sizeable chunks of their budget to research, medicine, and yes, even teaching. And unlike virtually all other varsity athletic programs, University of Michigan athletics turns a profit!)

If all the varsity athletics in the world suddenly disappeared… I’m not convinced we’d be any worse off, actually. We’d lose a source of entertainment, but it could probably be easily replaced by, say, Netflix. And universities could re-focus their efforts on academics, instead of acting like a free training and selection system for the pro leagues. The University of California, Irvine certainly seemed no worse off for its lack of varsity football. (Though I admit it felt a bit strange, even to a consummate nerd like me, to have a varsity League of Legends team.)

They keep making the experience of teaching worse and worse, even as they cut faculty salaries and make our jobs more and more precarious.

That might be what really made me most miserable, knowing how expendable I was to the university. If I hadn’t quit when I did, I would have been out after another semester anyway, and going through this same process a bit later. It wasn’t even that I was denied tenure; it was never on the table in the first place. And perhaps because they knew I wouldn’t stay anyway, they didn’t invest anything in mentoring or supporting me. Ostensibly I was supposed to be assigned a faculty mentor immediately; I know the first semester was crazy because of COVID, but after two and a half years I still didn’t have one. (I had a small research budget, which they reduced in the second year; that was about all the support I got. I used it—once.)

So if I do continue on that “academia” road, I’m going to need to do a lot of things differently. I’m not going to put up with a lot of things that I did. I’ll demand a long-term position—if not tenure-track, at least renewable indefinitely, like a lecturer position (as it is in the US, where the tenure-track position is called “assistant professor” and “lecturer” is permanent but not tenured; in the UK, “lecturers” are tenure-track—except at Oxford, and as of 2021, Cambridge—just to confuse you). Above all, I’ll only be applying to schools that actually have some track record for valuing teaching and supporting their faculty.

And if I can’t find any such positions? Then I just won’t apply at all. I’m not going in with the “I’ll take what I can get” mentality I had last time. Our household finances are stable enough that I can afford to wait awhile.

But maybe I won’t even do that. Maybe I’ll take a different path entirely.

For now, I just don’t know.

Empathy is not enough

Jan 14 JDN 2460325

A review of Against Empathy by Paul Bloom

The title Against Empathy is clearly intentionally provocative, to the point of being obnoxious: How can you be against empathy? But the book really does largely hew toward the conclusion that empathy, far from being an unalloyed good as we may imagine it to be, is overall harmful and detrimental to society.

Bloom defines empathy narrowly, but sensibly, as the capacity to feel other people’s emotions automatically—to feel hurt when you see someone hurt, afraid when you see someone afraid. He argues surprisingly well that this capacity isn’t really such a great thing after all, because it often makes us help small numbers of people who are like us rather than large numbers of people who are different from us.

But something about the book rubs me the wrong way all throughout, and I think I finally put my finger on it:

If empathy is bad… compared to what?

Compared to some theoretical ideal of perfect compassion where we love all sentient beings in the universe equally and act only according to maxims that would yield the greatest benefit for all, okay, maybe empathy is bad.

But that is an impossible ideal. No human being has ever approached it. Even our greatest humanitarians are not like that.

Indeed, one thing has clearly characterized the very best human beings, and that is empathy. Every one of them has been highly empathetic.

The case for empathy gets even stronger if you consider the other extreme: What are human beings like when they lack empathy? Why, those people are psychopaths, and they are responsible for the majority of violent crimes and nearly all the most terrible atrocities.

Empirically, if you look at humans as we actually are, it really seems like this function is monotonic: More empathy makes people behave better. Less empathy makes them behave worse.

Yet Bloom does have a point, nevertheless.

There are real-world cases where empathy seems to have done more harm than good.

I think his best examples come from analysis of charitable donations. Most people barely give anything to charity, which we might think of as a lack of empathy. But a lot of people do give a great deal to charity—yet the charities they give to and the gifts they give are often woefully inefficient.

Let’s even set aside cases like the Salvation Army, where the charity is actively detrimental to society due to the distortions of ideology. The Salvation Army is in fact trying to do good—they’re just starting from a fundamentally evil outlook on the universe. (And if that sounds harsh to you? Take a look at what they say about people like me.)

No, let’s consider charities that are well-intentioned, not blinded by fanatical ideology, and really trying to work toward good things. Most of them are just… really bad at it.

The most cost-effective charities, like the ones GiveWell gives top ratings to, can save a life for about $3,000-5,000, or about $150 to $250 per QALY.

But a typical charity is far, far less efficient than that. It’s difficult to get good figures on it, but I think it would be generous to say that a typical charity is as efficient as the standard cost-effectiveness threshold used in US healthcare, which is $50,000 per QALY. That’s already two hundred times less efficient.

And many charities appear to be even less efficient than that: their marginal dollars don’t really seem to have any appreciable benefit in terms of QALYs. Call it $1 million per QALY—spend enough, and they’d buy a QALY eventually.

Other times, people give gifts to good charities, but the gifts they give are useless—the Red Cross is frequently inundated with clothing and toys that it has absolutely no use for. (Please, please, I implore you: Give them money. They can buy what they need. And they know what they need a lot better than you do.)

Why do people give to charities that don’t really seem to accomplish anything? Because they see ads that tug on their heartstrings, or are solicited directly by people on the street or by door-to-door canvassers. In other words, empathy.

Why do people give clothing and toys to the Red Cross after a disaster, instead of just writing a check or sending a credit card payment? Because they can see those crying faces in their minds, and they know that if they were a crying child, they’d want a toy to comfort them, not some boring, useless check. In other words, empathy.

Empathy is what you’re feeling when you see those Sarah McLachlan ads with sad puppies in them, designed to make you want to give money to the ASPCA.

Now, I’m not saying you shouldn’t give to the ASPCA. Actually, animal welfare advocacy is one of those issues where cost-effectiveness is really hard to assess—like political donations, and for much the same reason. If we managed to tilt policy so that factory farming were banned, the direct impact on billions of animals spared that suffering—while indubitably enormous—might actually be less important, morally, than the impact on public health and climate change from people eating less meat. I don’t know what multiplier to apply to a cow’s suffering to convert her QALYs into mine. But I do know that the world currently eats far too much meat, and it’s cooking the planet along with the cows. Meat accounts for about 60% of food-related greenhouse gas emissions, and food production as a whole accounts for about 35% of all greenhouse gas emissions.

But I am saying that if you give to the ASPCA, it should be because you support their advocacy against factory farming—not because you saw pictures of very sad puppies.

And empathy, unfortunately, doesn’t really work that way.

When you get right down to it, what Paul Bloom is really opposing is scope neglect, which is something I’ve written about before.

We just aren’t capable of genuinely feeling the pain of a million people, or a thousand, or probably even a hundred. (Maybe we can do a hundred; that’s under Dunbar’s number, after all.) So when confronted with global problems that affect millions of people, our empathy system just kind of overloads and shuts down.

ERROR: OVERFLOW IN EMPATHY SYSTEM. ABORT, RETRY, IGNORE?

But when confronted with one suffering person—or five, or ten, or twenty—we can actually feel empathy for them. We can look at their crying face and we may share their tears.

Charities know this; that’s why Sarah McLachlan does those ASPCA ads. And if that makes people donate to good causes, that’s a good thing. (If it makes them donate to the Salvation Army, that’s a different story.)

The problem is, it really doesn’t tell us which causes are best to donate to. Almost any cause is going to alleviate some suffering for someone, somewhere; but there’s an enormous difference between $250 per QALY, $50,000 per QALY, and $1 million per QALY. Your $50 donation would add either two and a half months, eight hours, or just over 26 minutes of joy to someone else’s life, respectively. (In the last case, it may literally be better—morally—for you to go out to lunch or buy a video game.)
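For the quantitatively inclined, here is a minimal sketch of that arithmetic in Python. The cost-per-QALY figures are just the rough assumptions quoted above, and the hours_bought helper is purely illustrative, not anyone’s official model:

    # Rough conversion: how much quality-adjusted time a donation buys
    # at each of the cost-per-QALY levels quoted above.
    HOURS_PER_YEAR = 365.25 * 24  # about 8,766 hours in a year

    def hours_bought(donation_usd: float, cost_per_qaly_usd: float) -> float:
        """Hours of quality-adjusted life a donation purchases."""
        return donation_usd / cost_per_qaly_usd * HOURS_PER_YEAR

    for label, cost in [("top-rated charity", 250),
                        ("US healthcare threshold", 50_000),
                        ("marginal charity", 1_000_000)]:
        hours = hours_bought(50, cost)
        print(f"${cost:>9,}/QALY ({label}): {hours:8.2f} hours ({hours * 60:9.1f} minutes)")

Running it gives roughly 1,750 hours (about two and a half months), 8.8 hours, and 26 minutes, respectively, which is exactly the gap described above.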

To really know the best places to give to, you simply can’t rely on your feelings of empathy toward the victims. You need to do research—you need to do math. (Or someone does, anyway; you can also trust GiveWell to do it for you.)

Paul Bloom is right about this. Empathy doesn’t solve this problem. Empathy is not enough.

But where I think he loses me is in suggesting that we don’t need empathy at all—that we could somehow simply dispense with it. He proposes to replace it with an even-handed, universally minded utilitarian compassion, a caring for all beings in the universe that weighs all their interests equally.

That sounds awfully appealing—other than the fact that it’s obviously impossible.

Maybe it’s something we can all aspire to. Maybe it’s something we as a civilization can someday change ourselves to become capable of feeling, in some distant transhuman future. Maybe, sometimes, at our very best moments, we can even approximate it.

But as a realistic guide for how most people should live their lives? It’s a non-starter.

In the real world, people with little or no empathy are terrible. They don’t replace it with compassion; they replace it with selfishness, greed, and impulsivity.

Indeed, in the real world, empathy and compassion seem to go hand-in-hand: The greatest humanitarians do seem like they better approximate that universal caring (though of course they never truly achieve it). But they are also invariably people of extremely high empathy.

And so, Dr. Bloom, I offer you a new title, perhaps not as catchy or striking—perhaps it would even have sold fewer books. But I think it captures the correct part of your thesis much better:

Empathy is not enough.

Depression and the War on Drugs

Jan 7 JDN 2460318

There exists, right now, an extremely powerful antidepressant which is extremely cheap and has minimal side effects.

It’s so safe that it has no known lethal dose, and—unlike SSRIs—it is not known to trigger suicide. It is shockingly effective: it works in a matter of hours—not weeks like a typical SSRI—and even a single moderate dose can have benefits lasting months. It isn’t patented, because it comes from a natural source. That natural source is so easy to grow, you can do it by yourself at home for less than $100.

Why in the world aren’t we all using it?

I’ll tell you why: This wonder drug is called psilocybin. It is a Schedule I controlled substance, which means that simply possessing it is a federal crime in the United States. Carrying it across the border is a felony.

It is also illegal in most other countries, including the UK, Australia, Belgium, Finland, Denmark, Sweden, Norway (#ScandinaviaIsNotAlwaysBetter), France, Germany, Hungary, Ireland, Japan, the list goes on….

Actually, it’s faster to list the places it’s not illegal: Austria, the Bahamas, Brazil, the British Virgin Islands, Jamaica, Nepal, the Netherlands, and Samoa. That’s it for true legalization, though it’s also decriminalized or unenforced in some other countries.

The best known antidepressant lies unused, because we made it illegal.

Similar stories hold for other amazingly beneficial drugs:

LSD likewise has powerful antidepressant effects and minimal side effects, and it is so ludicrously safe that there is no confirmed case of a fatal overdose in a human being. And it, too, is banned as Schedule I.

Ayahuasca is the same story: A great antidepressant, very safe, minimal side effects—and highly illegal.

There is also no evidence that psilocybin, LSD, or ayahuasca is addictive; and far from promoting the sort of violent, anti-social behavior that alcohol does, they actually seem to make people more compassionate.

This is pure speculation, but I think we should try psilocybin as a possible treatment for psychopathy. And if that works, maybe having a psilocybin trip should be a prerequisite for eligibility for any major elected office. (I often find it a bit silly how the biggest fans of psychedelics talk about the drugs radically changing the world, bringing peace and prosperity through a shift in consciousness; but if psilocybin could make all the world’s leaders more compassionate, that might actually have that kind of impact.)

Ketamine and MDMA at least do have some overdose risk and major side effects, and are genuinely addictive—but it’s not really clear that they’re any worse than SSRIs, and they certainly aren’t any worse than alcohol.

Alcohol may actually be the most widely used antidepressant, and yet it is utterly ineffective; in fact, alcoholics consistently show worsening depression over time. Alcohol’s fatal dose is low enough that accidental fatal overdoses are common; it is also implicated in violent behavior, including half of all rapes—and in the majority of those rape cases, all of the alcohol was consumed voluntarily.

Yet alcohol can be bought over-the-counter at any grocery store.

The good news is that this is starting to change.

Recent changes in the law have allowed the use of psychedelic drugs in medical research—which is part of how we now know just how shockingly effective they are at treating depression.

Some jurisdictions in the US—notably, the whole state of Colorado—have decriminalized psilocybin, and Oregon has gone further, legalizing supervised psilocybin use at licensed service centers. Yet even this situation is precarious; just as with cannabis legalization, it’s still difficult to run a business selling psilocybin even in Oregon, because banks don’t want to deal with a business that sells something that is federally illegal.

Fortunately, this, too, is starting to change: A bill that would let banks serve cannabis businesses in states where cannabis is legal recently advanced out of a US Senate committee, and President Biden has pardoned prior federal convictions for simple cannabis possession. Now, why can’t we just make cannabis legal!?

The War on Drugs hasn’t just been a disaster for all the thousands of people needlessly imprisoned.

(Of course they had it the worst, and we should set them all free immediately—preferably with some form of restitution.)

The War on Drugs has also been a disaster for all the people who couldn’t get the treatment they needed, because we made that medicine illegal.

And for what? What are we even trying to accomplish here?

Prohibition was a failure—and a disaster of its own—but I can at least understand why it was done. When a drug kills nearly a hundred thousand people a year and is implicated in half of all rapes, that seems like a pretty damn good reason to want that drug gone. The question there becomes how we can best reduce alcohol use without the awful consequences that Prohibition caused—and so far, really high taxes seem to be the best method, and they absolutely do reduce crime.

But where was the disaster caused by cannabis, psilocybin, or ayahuasca? These drugs are made by plants and fungi; like alcohol, they have been used by humans for thousands of years. Where are the overdoses? Where is the crime? Psychedelics have none of these problems.

Honestly, it’s kind of amazing that these drugs aren’t more associated with organized crime than they are.

When alcohol was banned, it seemed to immediately trigger a huge expansion of the Mafia, as only they were willing and able to provide for the enormous demand of this highly addictive neurotoxin. But psilocybin has been illegal for decades, and yet there’s no sign of organized crime having anything to do with it. In fact, psilocybin use is associated with lower rates of arrest—which actually makes sense to me, because like I said, it makes you more compassionate.

That’s how idiotic and ridiculous our drug laws are:

We made a drug that causes crime legal, and we made a drug that prevents crime illegal.

Note that this also destroys any conspiracy theory suggesting that the government wants to keep us all docile and obedient: psilocybin is way better at making people docile than alcohol. No, this isn’t the product of some evil conspiracy.

Hanlon’s Razor: Never attribute to malice what can be adequately explained by stupidity.

This isn’t malice; it’s just massive, global, utterly catastrophic stupidity.

I might attribute this to the Puritanical American attitude toward pleasure (pleasure is suspect, pleasure is dangerous), but I don’t think of Sweden as particularly Puritanical, and they also ban most psychedelics. I guess the most libertine countries—the Netherlands, Brazil—seem to be the ones that have legalized them; but it doesn’t really seem like one should have to be that libertine to want the world’s cheapest, safest, most effective antidepressants to be widely available. I have very mixed feelings about Amsterdam’s (in)famous red light district, but absolutely no hesitation in supporting the Dutch legalization of psilocybin truffles.

Honestly, I think patriarchy might be part of this. Alcohol is seen as a very masculine drug—maybe because it can make you angry and violent. Psychedelics seem more feminine; they make you sensitive, compassionate and loving.

Even the way that psychedelics make you feel more connected with your body is sort of feminine; we seem to have a common notion that men are their minds, but women are their bodies.

Here, try it. Someone has said, “I feel really insecure about my body.” Quick: What is that person’s gender? Now suppose someone has said, “I’m very proud of my mind.” What is that person’s gender?

(No, it’s not just because the former is insecure and the latter is proud—though we do also gender those emotions, and there’s statistical evidence that men are generally more confident, though that’s never been my experience of manhood. Try it with the emotions swapped and it still works, just not quite as well.)

I’m not suggesting that this makes sense. Both men and women are precisely as physical and mental as each other—we are all both, and that is a deep truth about our nature. But I know that my mind makes an automatic association between mind/body and male/female, and I suspect yours does as well, because we came from similar cultural norms. (This goes at least back to Classical Rome, where the animus, the rational soul, was masculine, while the anima, the emotional one, was feminine.)

That is, it may be that we banned psychedelics because they were girly. The men in charge were worried about us becoming soft and weak. The drug that’s tied to thousands of rapes and car collisions is manly. The drug that brings you peace, joy, and compassion is not.

Think about the things that the mainstream objected to about Hippies: Men with long hair and makeup, women wearing pants, bright colors, flowery patterns, kindness and peacemongering—all threats to the patriarchal order.

Whatever it is, we need to stop. Millions of people are suffering, and we could so easily help them; all we need to do is stop locking people up for taking medicine.

A new direction

Dec 31 JDN 2460311

CW: Spiders [it’ll make sense in context]

My time at the University of Edinburgh is officially over. For me it was a surprisingly gradual transition: Because of the holiday break, I had already turned in my laptop and ID badge over a week ago, and because of my medical leave, I hadn’t really done much actual work for quite some time. But this is still a momentous final deadline; it’s really, truly, finally over.

I now know with some certainty that leaving Edinburgh early was the right choice, and if anything I should have left sooner or never taken the job in the first place. (It seems I am like Randall Munroe after all.) But what I don’t know is where to go next.

We won’t be starving or homeless. My husband still has his freelance work, and my mother has graciously offered to let us stay in her spare room for a while. We have some savings to draw upon. Our income will be low enough that payments on my student loans will be frozen. We’ll be able to get by, even if I can’t find work for a while. But I certainly don’t want to live like that forever.

I’ve been trying to come up with ideas for new career paths, including ones I would never have considered before. Right now I am considering:

  1. Going back into academia (but being much choosier about what sort of school and position);
  2. Going into government or an international aid agency;
  3. Re-training to work in software development;
  4. Doing my own freelance writing (then I must decide: fiction or nonfiction? Commercial publishing, or self-published?);
  5. Publishing our own tabletop games (we have one almost ready for crowdfunding, and another that I could probably finish relatively quickly);
  6. Opening a game shop or escape room;
  7. Or even just being a stay-at-home parent (surely the hardest to achieve financially; and while on the one hand it seems like an awful waste of a PhD, on the other hand it would really prove once and for all that I do understand the sunk cost fallacy, and therefore be a sign of my ultimate devotion to behavioral economics).

The one mainstream option for an econ PhD that I’m not seriously considering is the private sector: If academia was this soul-sucking, I’m not sure I could survive corporate America.

Maybe none of these are yet the right answer. Or maybe some combination is.

What I’m really feeling right now is a deep uncertainty.

Also, fear. Fear of the unknown. Fear of failure. Fear of rejection. Almost any path I could take involves rejection—though of different kinds, and surely some more than others.

I’ve always been deeply and intensely affected by rejection. Some of it comes from formative experiences I had as a child and a teenager; some of it may simply be innate, the rejection-sensitive dysphoria that often comes with ADHD (which I now believe I have, perhaps mildly). (Come to think of it, even those formative experiences may have hit so hard because of my innate predisposition.)

But wherever it comes from, my intense fear of rejection is probably my greatest career obstacle. In today’s economy, just applying for a job—any job—requires bearing dozens of rejections. Openings get hundreds of applicants, so even being fully qualified is no guarantee of anything.

This makes it far more debilitating than most other kinds of irrational fear. I am also hematophobic, but that doesn’t really get in my way all that much; in the normal course of life, one generally tries to avoid bleeding anyway. (Now that MSM can donate blood, my hematophobia does prevent me from doing that; and I do feel a little bad about it, since there have been blood shortages recently.)

But rejection phobia basically feels like this:

Imagine you are severely arachnophobic, just absolutely terrified of spiders. You are afraid to touch them, afraid to look at them, afraid to be near them, afraid to even think about them too much. (Given how common it is, you may not even have to imagine.)

Now, imagine (perhaps not too vividly, if you are genuinely arachnophobic!) that every job, every job, in every industry, regardless of what skills are required or what the work entails, requires you to first walk through a long hallway which is covered from floor to ceiling in live spiders. This is simply a condition of employment in our society: Everyone must be able to walk through the hallway full of spiders. Some jobs have longer hallways than others, some have more or less aggressive spiders, and almost none of the spiders are genuinely dangerous; but every job, everywhere, requires passing through a hallway of spiders.

That’s basically how I feel right now.

Freelance writing is the most obvious example—we could say this is an especially long hallway with especially large and aggressive spiders. To succeed as a freelance writer requires continually submitting work you have put your heart and soul into, and receiving in response curtly-worded form rejection letters over and over and over, every single time. And even once your work is successful, there will always be critics to deal with.

Yet even a more conventional job, say in academia or government, requires submitting dozens of applications and getting rejected dozens of times. Sometimes it’s also a curt form letter; other times, you make it all the way through multiple rounds of in-depth interviews and still get turned down. The latter honestly stings a lot more than the former, even though it’s in some sense a sign of your competence: they wouldn’t have taken you that far if you were unqualified; they just think they found someone better. (Did they actually? Who knows?) But investing all that effort for zero reward feels devastating.

The other extreme might be becoming a stay-at-home parent. There aren’t as many spiders in this hallway. Biological children aren’t really an option for us, but foster agencies really can’t afford to be choosy. Since we don’t have any obvious major red flags, we will probably be able to adopt if we choose to—there will be bureaucratic red tape, no doubt, but not repeated rejections. But there is one very big rejection—one single, genuinely dangerous spider that lurks in a dark corner of the hallway: What if I am rejected by the child? What if they don’t want me as their parent?

Another alternative is starting a business—such as selling our own games, or opening an escape room. Even self-publishing has more of this character than traditional freelance writing. The only direct, explicit sort of rejection we’d have to worry about there is small business loans; and actually with my PhD and our good credit, we could reasonably expect to get accepted sooner or later. But there is a subtler kind of rejection: What if the market doesn’t want us? What if the sort of games or books (or escape experiences, or whatever) we have to offer just aren’t what the world seems to want? Most startup businesses fail quickly; why should ours be any different? (I wonder if I’d be able to get a small business loan on the grounds that I forecasted only a 50% chance of failing in the first year, instead of the baseline 80%. Somehow, I suspect not.)

I keep searching for a career option with no threat of rejection, and it just… doesn’t seem to exist. The best I can come up with is going off the grid and living as hermits in the woods somewhere. (This sounds pretty miserable for totally different reasons—as well as being an awful, frankly unconscionable waste of my talents.) As long as I continue to live within human society and try to contribute to the world, rejection will rear its ugly head.

Ultimately, I think my only real option is to find a way to cope with rejection—or certain forms of rejection. The hallways full of spiders aren’t going away. I have to find a way to walk through them.