A new direction

Dec 31 JDN 2460311

CW: Spiders [it’ll make sense in context]

My time at the University of Edinburgh is officially over. For me it was a surprisingly gradual transition: Because of the holiday break, I had already turned in my laptop and ID badge over a week ago, and because of my medical leave, I hadn’t really done much actual work for quite some time. But this is still a momentous final deadline; it’s really, truly, finally over.

I now know with some certainty that leaving Edinburgh early was the right choice, and if anything I should have left sooner or never taken the job in the first place. (It seems I am like Randall Munroe after all.) But what I don’t know is where to go next.

We won’t be starving or homeless. My husband still has his freelance work, and my mother has graciously offered to let us stay in her spare room for a while. We have some savings to draw upon. Our income will be low enough that payments on my student loans will be frozen. We’ll be able to get by, even if I can’t find work for a while. But I certainly don’t want to live like that forever.

I’ve been trying to come up with ideas for new career paths, including ones I would never have considered before. Right now I am considering: going back into academia (but being much choosier about what sort of school and position); going into government or an international aid agency; re-training to work in software development; doing my own freelance writing (then I must decide: fiction or nonfiction? commercial publishing, or self-publishing?); publishing our own tabletop games (we have one almost ready for crowdfunding, and another that I could probably finish relatively quickly); opening a game shop or escape room; or even just being a stay-at-home parent (surely the hardest to achieve financially; and while on the one hand it seems like an awful waste of a PhD, on the other hand it would really prove once and for all that I do understand the sunk cost fallacy, and therefore be a sign of my ultimate devotion to behavioral economics). The one mainstream option for an econ PhD that I’m not seriously considering is the private sector: If academia was this soul-sucking, I’m not sure I could survive corporate America.

Maybe none of these are yet the right answer. Or maybe some combination is.

What I’m really feeling right now is a deep uncertainty.

Also, fear. Fear of the unknown. Fear of failure. Fear of rejection. Almost any path I could take involves rejection—though of different kinds, and surely some more than others.

I’ve always been deeply and intensely affected by rejection. Some of it comes from formative experiences I had as a child and a teenager; some of it may simply be innate, the rejection-sensitive dysphoria that often comes with ADHD (which I now believe I have, perhaps mildly). (Come to think of it, even those formative experiences may have hit so hard because of my innate predisposition.)

But wherever it comes from, my intense fear of rejection is probably my greatest career obstacle. In today’s economy, just applying for a job—any job—requires bearing dozens of rejections. Openings get hundreds of applicants, so even being fully qualified is no guarantee of anything.

This makes it far more debilitating than most other kinds of irrational fear. I am also hematophobic, but that doesn’t really get in my way all that much; in the normal course of life, one generally tries to avoid bleeding anyway. (Now that MSM can donate blood, it does prevent me from doing that; and I do feel a little bad about that, since there have been blood shortages recently.)

But rejection phobia basically feels like this:

Imagine you are severely arachnophobic, just absolutely terrified of spiders. You are afraid to touch them, afraid to look at them, afraid to be near them, afraid to even think about them too much. (Given how common it is, you may not even have to imagine.)

Now, imagine (perhaps not too vividly, if you are genuinely arachnophobic!) that every job, every job, in every industry, regardless of what skills are required or what the work entails, requires you to first walk through a long hallway which is covered from floor to ceiling in live spiders. This is simply a condition of employment in our society: Everyone must be able to walk through the hallway full of spiders. Some jobs have longer hallways than others, some have more or less aggressive spiders, and almost none of the spiders are genuinely dangerous; but every job, everywhere, requires passing through a hallway of spiders.

That’s basically how I feel right now.

Freelance writing is the most obvious example—we could say this is an especially long hallway with especially large and aggressive spiders. To succeed as a freelance writer requires continually submitting work you have put your heart and soul into, and receiving in response curtly-worded form rejection letters over and over and over, every single time. And even once your work is successful, there will always be critics to deal with.

Yet even a more conventional job, say in academia or government, requires submitting dozens of applications and getting rejected dozens of times. Sometimes it’s also a curt form letter; other times, you make it all the way through multiple rounds of in-depth interviews and still get turned down. The latter honestly stings a lot more than the former, even though it’s in some sense a sign of your competence: they wouldn’t have taken you that far if you were unqualified; they just think they found someone better. (Did they actually? Who knows?) But investing all that effort for zero reward feels devastating.

The other extreme might be becoming a stay-at-home parent. There aren’t as many spiders in this hallway. While biological children aren’t really an option for us, foster agencies really can’t afford to be choosy. Since we don’t have any obvious major red flags, we will probably be able to adopt if we choose to—there will be bureaucratic red tape, no doubt, but not repeated rejections. But there is one very big rejection—one single, genuinely dangerous spider that lurks in a dark corner of the hallway: What if I am rejected by the child? What if they don’t want me as their parent?

Another alternative is starting a business—such as selling our own games, or opening an escape room. Even self-publishing has more of this character than traditional freelance writing. The only direct, explicit sort of rejection we’d have to worry about there is being turned down for small business loans; and actually, with my PhD and our good credit, we could reasonably expect to get accepted sooner or later. But there is a subtler kind of rejection: What if the market doesn’t want us? What if the sort of games or books (or escape experiences, or whatever) we have to offer just aren’t what the world seems to want? Most startup businesses fail quickly; why should ours be any different? (I wonder if I’d be able to get a small business loan on the grounds that I forecasted only a 50% chance of failing in the first year, instead of the baseline 80%. Somehow, I suspect not.)

I keep searching for a career option with no threat of rejection, and it just… doesn’t seem to exist. The best I can come up with is going off the grid and living as hermits in the woods somewhere. (This sounds pretty miserable for totally different reasons—as well as being an awful, frankly unconscionable waste of my talents.) As long as I continue to live within human society and try to contribute to the world, rejection will rear its ugly head.

Ultimately, I think my only real option is to find a way to cope with rejection—or certain forms of rejection. The hallways full of spiders aren’t going away. I have to find a way to walk through them.

Compassion and the cosmos

Dec 24 JDN 2460304

When this post goes live, it will be Christmas Eve, one of the most important holidays around the world.

Ostensibly it celebrates the birth of Jesus, but it doesn’t really.

For one thing, Jesus almost certainly wasn’t born in December. The date of Christmas was largely set by the Council of Tours in AD 567; it was set to coincide with existing celebrations—not only other Christian celebrations such as the Feast of the Epiphany, but also many non-Christian celebrations such as Yuletide, Saturnalia, and others around the Winter Solstice. (People today often say “Yuletide” when they actually mean Christmas, because the syncretization was so absolute.)

For another, an awful lot of the people celebrating Christmas don’t particularly care about Jesus. Countries like Sweden, Belgium, the UK, Australia, Norway, and Denmark are majority atheist but still very serious about Christmas. Maybe we should try to secularize and ecumenize the celebration and call it Solstice or something, but that’s a tall order. For now, it’s Christmas.

Compassion, love, and generosity are central themes of Christmas—and, by all accounts, Jesus did exemplify those traits. Christianity has a very complicated history, much of it quite dark; but this part of it at least seems worth preserving and even cherishing.

It is truly remarkable that we have compassion at all.

Most of this universe has no compassion. Many would like to believe otherwise, and they invent gods and other “higher beings” or attribute some sort of benevolent “universal consciousness” to the cosmos. (Really, most people copy the prior inventions of others.)

This is all wrong.

The universe is mostly empty, and what is here is mostly pitilessly indifferent.

The vast majority of the universe is composed of cold, dark, empty space—or perhaps of “dark energy”, a phenomenon we really don’t understand at all, which many physicists believe is actually a shockingly powerful form of energy contained within empty space.

Most of the rest is made up of “dark matter”, a substance we still don’t really understand either, but which we believe to be basically a dense sea of particles that have mass but not much else, which cluster around other mass by gravity but otherwise rarely interact with other matter or even with each other.

Most of the “ordinary matter”—more properly, baryonic matter (which we think of as ordinary, but which is actually by far the minority)—is contained within stars and nebulae. It is mostly hydrogen and helium. Some of the other lighter elements—like lithium, sodium, carbon, oxygen, nitrogen, and on up to iron—can be made within ordinary stars, but still form a tiny fraction of the mass of the universe. Anything heavier than that—silver, gold, platinum, uranium—can only be made in exotic, catastrophic cosmic events, mainly supernovae, and as a result these elements are rarer still.

Most of the universe is mind-bendingly cold: about 3 Kelvin, just barely above absolute zero.

Most of the baryonic matter is mind-bendingly hot, contained within stars that burn with nuclear fires at thousands or even millions of Kelvin.

From a cosmic perspective, we are bizarre.

We live at a weird intermediate temperature and pressure, where matter can take on such exotic states as liquid and solid, rather than the far more common gas and plasma. We do contain a lot of hydrogen—that, at least, is normal by the standards of baryonic matter. But then we’re also made up of oxygen, carbon, nitrogen, and even little bits of all sorts of other elements that can only be made in supernovae? What kind of nonsense lifeform depends upon something as exotic as iodine to survive?

Most of the universe does not care at all about you.

Most of the universe does not care about anything.

Stars don’t burn because they want to. They burn because that’s what happens when hydrogen slams into other hydrogen hard enough.

Planets don’t orbit because they want to. They orbit because if they didn’t, they’d fly away or crash into their suns—and those that did are long gone now.

Even most living things, which are already nearly as bizarre as we are, don’t actually care much.

Maybe there is a sense in which a C. elegans or an oak tree or even a cyanobacterium wants to live. It certainly seems to try to live; it has behaviors that seem purposeful, which evolved to promote its ability to survive and pass on its genes. Rocks don’t behave. Stars don’t seek. But living things—even tiny, microscopic living things—do.

But we are something very special indeed.

We are animals. Lifeforms with complex, integrated nervous systems—in a word, brains—that allow us to not simply live, but to feel. To hunger. To fear. To think. To choose.

Animals—and to the best of our knowledge, only animals, though I’m having some doubts about AI lately—are capable of making choices and experiencing pleasure and pain, and thereby becoming something more than living beings: moral beings.

Because we alone can choose, we alone have the duty to choose rightly.

Because we alone can be hurt, we alone have the right to demand not to be.

Humans are even very special among animals. We are not just animals but chordates; not just chordates but mammals; not just mammals but primates. And even then, not just primates. We’re special even by those very high standards.

When you count up all the ways that we are strange compared to the rest of the universe, it seems incredibly unlikely that beings like us would come into existence at all.

Yet here we are. And however improbable it may have been for us to emerge as intelligent beings, we had to do so in order to wonder how improbable it was—and so in some sense we shouldn’t be too surprised.

It is a mistake to say that we are “more evolved” than any other lifeform; turtles and cockroaches had just as much time to evolve as we did, and if anything their relative stasis for hundreds of millions of years suggests a more perfected design: “If it ain’t broke, don’t fix it.”

But we are different from other lifeforms in a very profound way. And I dare say, we are better.

All animals feel pleasure, pain and hunger. (Some believe that even some plants and microscopic lifeforms may too.) Pain when something damages you; hunger when you need something; pleasure when you get what you needed.

But somewhere along the way, new emotions were added: Fear. Lust. Anger. Sadness. Disgust. Pride. To the best of our knowledge, these are largely chordate emotions, often believed to have emerged around the same time as reptiles. (Does this mean that cephalopods never get angry? Or did they evolve anger independently? Surely worms don’t get angry, right? Our common ancestor with cephalopods was probably something like a worm, perhaps a nematode. Does C. elegans get angry?)

And then, much later, still newer emotions evolved. These ones seem to be largely limited to mammals. They emerged from the need for mothers to care for their few and helpless young. (Consider how a bear or a cat fiercely protects her babies from harm—versus how a turtle leaves her many, many offspring to fend for themselves.)

One emotion formed the core of this constellation:

Love.

Caring, trust, affection, and compassion—and also rejection, betrayal, hatred, and bigotry—all came from this one fundamental capacity to love. To care about the well-being of others as well as our own. To see our purpose in the world as extending beyond the borders of our own bodies.

This is what makes humans different, most of all. We are the beings most capable of love.

We are of course by no means perfect at it. Some would say that we are not even very good at loving.

Certainly there are some humans, such as psychopaths, who seem virtually incapable of love. But they are rare.

We often wish that we were better at love. We wish that there were more compassion in the world, and fear that humanity will destroy itself because we cannot find enough compassion to compensate for our increasing destructive power.

Yet if we are bad at love, compared to what?

Compared to the unthinking emptiness of space, the hellish nuclear fires of stars, or even the pitiless selfishness of a worm or a turtle, we are absolute paragons of love.

We somehow find a way to love millions of others who we have never even met—maybe just a tiny bit, and maybe even in a way that becomes harmful, as solidarity fades into nationalism fades into bigotry—but we do find a way. Through institutions of culture and government, we find a way to trust and cooperate on a scale that would be utterly unfathomable even to the most wise and open-minded bonobo, let alone a nematode.

There are no other experts on compassion here. It’s just us.

Maybe that’s why so many people long for the existence of gods. They feel as ignorant as children, and crave the knowledge and support of a wise adult. But there aren’t any. We’re the adults. For all the vast expanses of what we do not know, we actually know more than anyone else. And most of the universe doesn’t know a thing.

If we are not as good at loving as we’d like, the answer is for us to learn to get better at it.

And we know that we can get better at it, because we have. Humanity is more peaceful and cooperative now than we have ever been in our history. The process is slow, and sometimes there is backsliding, but overall, life is getting better for most people in most of the world most of the time.

As a species, as a civilization, we are slowly learning how to love ourselves, one another, and the rest of the world around us.

No one else will learn to love for us. We must do it ourselves.

But we can.

And I believe we will.

Lamentations of a temporary kludge

Dec 17 JDN 2460297

Most things in the universe are just that—things. They consist of inanimate matter, blindly following the trajectories the laws of physics have set them on. (Actually, most of the universe may not even be matter—at our current best guess, it is mostly mysterious “dark matter” and even more mysterious “dark energy.”)

Then there are the laws: The fundamental truths of physics and mathematics are omnipresent and eternal. They could even be called omniscient, in the sense that all knowledge which could ever be conveyed must itself be possible to encode in physics and mathematics. (Could, in some metaphysical sense, knowledge exist that cannot be conveyed this way? Perhaps, but if so, we’ll never know nor even be able to express it.)

The reason physics and mathematics cannot simply be called God is twofold: One, they have no minds of their own; they do not think. Two, they do not care. They have no capacity for concern whatsoever, no desires, no goals. Mathematics seeks neither your fealty nor your worship, and physics will as readily destroy you as reward you. If the eternal law is a god, it is a mindless, pitilessly indifferent god—a Blind Idiot God.

But we are something special, something in between. We are matter, yes; but we are also pattern. Indeed, what makes me me and makes you you has far more to do with the arrangement of trillions of parts than it does with any particular material. The atoms in your body are being continually replaced, and you barely notice. But should the pattern ever be erased, you would be no more.

In fact, we are not simply one pattern, but many. We are a kludge: Billions of years of random tinkering have assembled us from components that each emerged millions of years apart. We could move before we could see; we could see before we could think; we could think before we could speak. All this evolution was mind-bogglingly gradual: In most cases it would be impossible to tell the difference from one generation—or even one century—to the next. Yet as raindrops wear away mountains, one by one, we were wrought from mindless fragments of chemicals into beings of thought, feeling, reason—beings with hopes, fears, and dreams.

Much of what makes our lives difficult ultimately comes from these facts.

Our different parts were not designed to work together. Indeed, they were not really designed at all. Each component survived because it worked well enough to stay alive in the environment in which our ancestors lived. We often find ourselves in conflict with our own desires, in part because those desires evolved for very different environments than the ones we now find ourselves—and in part because there is no particular reason for evolution to avoid conflict, so long as survival is achieved.

As patterns, we can experience the law. We can write down equations that express small pieces of the fundamental truths that exist throughout the universe beyond space and time. From “2+2=4” to “Gμν + Λgμν = κTμν”, through mathematics, we glimpse eternity.

But as matter, we are doomed to suffer, degrade, and ultimately die. Our pattern cannot persist forever. Perhaps one day we will find a way to change this—and if that day comes, it will be a glorious day; I will make no excuses for the dragon. For now, at least, it is a truth that we must face: We, all we love, and all we build must one day perish.

That is, we are not simply a kludge; we are a temporary one. Sooner or later, our bodies will fail and our pattern will be erased. What we were made of may persist, but in a form that will no longer be us, and in time, may become indistinguishable from all the rest of the universe.

We are flawed, for the same reason that a crystal is flawed. A theoretical crystal can be flawless and perfect; but a real, physical one must exist in an actual world where it will suffer impurities and disturbances that keep it from ever truly achieving perfect unity and symmetry. We can imagine ourselves as perfect beings, but our reality will always fall short.

We lament that we are not perfect, eternal beings. Yet I am not sure it could have been any other way: Perhaps one must be a temporary kludge in order to be a being at all.

How do we stop overspending on healthcare?

Dec 10 JDN 2460290

I don’t think most Americans realize just how much more the US spends on healthcare than other countries. This is true not simply in absolute terms—of course it is, the US is rich and huge—but in relative terms: As a portion of GDP, our healthcare spending is a major outlier.

Here’s a really nice graph from Healthsystemtracker.org that illustrates the point: Almost all other First World countries share a simple linear relationship between their per-capita GDP and their per-capita healthcare spending. But one of these things is not like the other ones….

The outlier in the other direction is Ireland, but that’s because their GDP is wildly inflated by Leprechaun Economics. (Notice that it looks like Ireland is by far the richest country in the sample! This is clearly not the case in reality.) With a corrected estimate of their true economic output, they are also quite close to the line.

Since US GDP per capita ($70,181) is in between that of Denmark ($64,898) and Norway ($80,496), both of which have very good healthcare systems (#ScandinaviaIsBetter), we would expect that US spending on healthcare would similarly be in between. But while Denmark spends $6,384 per person per year on healthcare and Norway spends $7,065 per person per year, the US spends $12,914.

That is, the US spends nearly twice as much as it should on healthcare.

The absolute difference between what we should spend and what we actually spend is nearly $6,000 per person per year. Multiply that out by the 330 million people in the US, and…

The US overspends on healthcare by nearly $2 trillion per year.
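Here is a quick back-of-the-envelope check of that figure. This is just my own sketch: I interpolate linearly between Denmark and Norway by GDP per capita, rather than using the fitted line from the graph, so the per-person number comes out a bit higher, but the total is in the same ballpark.

```python
# Rough sketch: linear interpolation between Denmark and Norway by GDP per capita.
# Figures are the ones quoted above (USD per person per year).
gdp    = {"Denmark": 64_898, "US": 70_181, "Norway": 80_496}
health = {"Denmark": 6_384,  "US": 12_914, "Norway": 7_065}

# Where does the US fall between Denmark and Norway?
frac = (gdp["US"] - gdp["Denmark"]) / (gdp["Norway"] - gdp["Denmark"])
expected_us = health["Denmark"] + frac * (health["Norway"] - health["Denmark"])

overspend_per_person = health["US"] - expected_us        # ~$6,300 by this method
total_overspend = overspend_per_person * 330_000_000     # ~$2.1 trillion per year

print(f"Expected US spending: ${expected_us:,.0f} per person per year")
print(f"Overspend: ${overspend_per_person:,.0f} per person, ${total_overspend/1e12:.1f} trillion per year")
```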

This might be worth it, if health in the US were dramatically better than health in other countries. (In that case I’d be saying that other countries spend too little.) But plainly it is not.

Probably the simplest and most comparable measure of health across countries is life expectancy. US life expectancy is 76 years, and has increased over time. But if you look at the list of countries by life expectancy, the US is not even in the top 50. Our life expectancy looks more like middle-income countries such as Algeria, Brazil, and China than it does like Norway or Sweden, who should be our economic peers.

There are of course many things that factor into life expectancy aside from healthcare: poverty and homicide are both much worse in the US than in Scandinavia. But then again, poverty is much worse in Algeria, and homicide is much worse in Brazil, and yet they somehow manage to nearly match the US in life expectancy (actually exceeding it in some recent years).

The US somehow manages to spend more on healthcare than everyone else, while getting outcomes that are worse than any country of comparable wealth—and even some that are far poorer.

This is largely why there is a so-called “entitlements crisis” (as many a libertarian think tank is fond of calling it). Since libertarians want to cut Social Security most of all, they like to lump it in with Medicare and Medicaid as an “entitlement” in “crisis”; but in fact we only need a few minor adjustments to the tax code to make sure that Social Security remains solvent for decades to come. It’s healthcare spending that’s out of control.

Here, take a look.

This is the ratio of Social Security spending to GDP from 1966 to the present. Notice how it has been mostly flat since the 1980s, other than a slight increase in the Great Recession.

This is the ratio of Medicare spending to GDP over the same period. Even ignoring the first few years while it was ramping up, it rose from about 0.6% in the 1970s to almost 4% in 2020, and only started to decline in the last few years (and it’s probably too early to say whether that will continue).

Medicaid has a similar pattern: It rose steadily from 0.2% in 1966 to over 3% today—and actually doesn’t even show any signs of leveling off.

If you look at Medicare and Medicaid together, they surged from just over 1% of GDP in 1970 to nearly 7% today:

Put another way: in 1982, Social Security was 4.8% of GDP while Medicare and Medicaid combined were 2.4% of GDP. Today, Social Security is 4.9% of GDP while Medicare and Medicaid are 6.8% of GDP.

Social Security spending barely changed at all; healthcare spending more than doubled. If we reduced our Medicare and Medicaid spending as a portion of GDP back to what it was in 1982, we would save 4.4% of GDP—that is, 4.4% of over $25 trillion per year, so $1.1 trillion per year.
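For what it’s worth, the arithmetic behind that last sentence is just this (same figures as quoted above):

```python
# Same figures as quoted above.
gdp_us = 25e12                      # ~$25 trillion US GDP
share_1982 = 0.024                  # Medicare + Medicaid, 2.4% of GDP in 1982
share_now  = 0.068                  # Medicare + Medicaid, 6.8% of GDP today

savings_share = share_now - share_1982         # 4.4 percentage points
savings_dollars = savings_share * gdp_us       # ~$1.1 trillion per year
print(f"{savings_share:.1%} of GDP ≈ ${savings_dollars/1e12:.2f} trillion per year")
```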

Of course, we can’t simply do that; if we cut benefits that much, millions of people would suddenly lose access to healthcare they need.

The problem is not that we are spending frivolously, wasting the money on treatments no one needs. On the contrary, both Medicare and Medicaid carefully vet what medical services they are willing to cover, and if anything probably deny services more often than they should.

No, the problem runs deeper than this.

Healthcare is too expensive in the United States.

We simply pay more for just about everything, and especially for specialist doctors and hospitals.

In most other countries, doctors are paid like any other white-collar professionals. They are well off—comfortable, certainly—but few of them are truly rich. But in the US, we think of medicine as an upper-class profession, and expect doctors to be rich.

Median doctor salaries are $98,000 in France and $138,000 in the UK—but a whopping $316,000 in the US. Germany and Canada are somewhere in between, at $183,000 and $195,000 respectively.

Nurses, on the other hand, are paid only a little more in the US than in Western Europe. This means that the pay difference between doctors and nurses is much higher in the US than most other countries.

US prices on brand-name medication are frankly absurd. Our generic medications are typically cheaper than other countries, but our brand name pills often cost twice as much. I noticed this immediately on moving to the UK: I had always been getting generics before, because the brand name pills cost ten times as much, but when I moved here, suddenly I started getting all brand-name medications (at no cost to me), because the NHS was willing to buy the actual brand name products, and didn’t have to pay through the nose to do so.

But the really staggering differences are in hospitals.

Let’s compare the prices of a few inpatient procedures between the US and Switzerland. Switzerland, you should note, is a very rich country that spends a lot on healthcare and has nearly the world’s highest life expectancy. So it’s not like they are skimping on care. (Nor is it that prices in general are lower in Switzerland; on the contrary, they are generally higher.)

A coronary bypass in Switzerland costs about $33,000. In the US, it costs $76,000.

A spinal fusion in Switzerland costs about $21,000. In the US? $52,000.

Angioplasty in Switzerland: $9,000. In the US? $32,000.

Hip replacement: Switzerland? $16,000. The US? $28,000.

Knee replacement: Switzerland? $19,000. The US? $27,000.

Cholecystectomy: Switzerland? $8,000. The US? $16,000.

Appendectomy: Switzerland? $7,000. The US? $13,000.

Caesarian section: Switzerland? $8,000. The US? $11,000.

Hospital prices are even lower in Germany and Spain, whose life expectancies are not as high as Switzerland’s—but still higher than the US’s.

These prices are so much lower that, in fact, if you are considering getting surgery for a chronic condition in the US: don’t. Buy plane tickets to Europe and get the procedure done there. Spend an extra few thousand dollars on a nice European vacation and you’d still end up saving money. (Obviously if you need it urgently you have no choice but to use your nearest hospital.) I know that if I ever need a knee replacement (which, frankly, is likely, given my height), I’m gonna go to Spain and thereby save $22,000 relative to what it would cost in the US. That’s the price of a car.

Combine this with the fact that the US is the only First World country without universal healthcare, and maybe you can see why we’re also the only country in the world where people are afraid to call an ambulance because they don’t think they can afford it. We are also the only country in the world with a medical debt crisis.

Where is all this extra money going?

Well, a lot of it goes to those doctors who are paid three times as much as in France. That, at least, seems defensible: If we want the best doctors in the world maybe we need to pay for them. (Then again, do we have the best doctors in the world? If so, why is our life expectancy so mediocre?)

But a significant portion is going to shareholders.

You probably already knew that there are pharmaceutical companies that rake in huge profits on those overpriced brand-name medications. The top five US pharma companies took in net earnings of nearly $82 billion last year. Pharmaceutical companies typically take in much higher profit margins than other companies: a typical corporation makes about 8% of its revenue in profit, while pharmaceutical companies average nearly 14%.

But you may not have realized that a surprisingly large proportion of hospitals are for-profit businesses—even though they make most of their revenue from Medicare and Medicaid.

I was surprised to find that the US is not unusual in that; in fact, for-profit hospitals exist in dozens of countries, and the fraction of US hospital capacity that is for-profit isn’t even particularly high by world standards.

What is especially large is the profit US hospitals make: Seven US healthcare corporations each posted net incomes over $1 billion in 2021.

Even nonprofit US hospitals are tremendously profitable—as oxymoronic as that may sound. In fact, mean operating profit is higher among nonprofit hospitals in the US than among for-profit hospitals. So even the hospitals that aren’t supposed to be run for profit… pretty much still are. They get tax exemptions as if they were charities—but they really don’t act like charities.

They are basically nonprofit in name only.

So fixing this will not be as simple as making all hospitals nonprofit. We must also restructure the institutions so that nonprofit hospitals are genuinely nonprofit, and no longer nonprofit in name only. It’s normal for a nonprofit to have a little bit of profit or loss—nobody can make everything always balance perfectly—but these hospitals have been raking in huge profits and keeping the money in cash instead of using it to reduce prices or improve services. In the study I linked above, those 2,219 “nonprofit” hospitals took in operating profits averaging $43 million each—for a total of $95 billion.

Between pharmaceutical companies and hospitals, that’s a total of over $170 billion per year just in profit. (That’s more than we spend on food stamps, even after the surge due to COVID.) This is pure grift. It must be stopped.

But that still doesn’t explain why we’re spending $2 trillion more than we should! So after all, I must leave you with a question:

What is America doing wrong? Why is our healthcare so expensive?

The problem with “human capital”

Dec 3 JDN 2460282

By now, human capital is a standard part of the economics lexicon. It has even begun to filter down into society at large. Business executives talk frequently about “investing in their employees”. Politicians describe their education policies as “investing in our children”.

The good news: This gives businesses a reason to train their employees, and governments a reason to support education.

The bad news: This is clearly the wrong reason, and it is inherently dehumanizing.

The notion of human capital means treating human beings as if they were a special case of machinery. It says that a business may own and value many forms of productive capital: Land, factories, vehicles, robots, patents, employees.

But wait: Employees?


Businesses don’t own their employees. They didn’t buy them. They can’t sell them. They couldn’t make more of them in another factory. They can’t recycle them when they are no longer profitable to maintain.

And the problem is precisely that they would if they could.

Indeed, they used to. Slavery pre-dates capitalism by millennia, but the two quite successfully coexisted for hundreds of years. From the dawn of civilization up until all too recently, people literally were capital assets—and we now remember it as one of the greatest horrors human beings have ever inflicted upon one another.

Nor is slavery truly defeated; it has merely been weakened and banished to the shadows. The percentage of the world’s population currently enslaved is as low as it has ever been, but there are still millions of people enslaved. In Mauritania, slavery wasn’t even illegal until 1981, and those laws weren’t strictly enforced until 2007. (By then, I had already graduated from high school!) One of the most shocking things about modern slavery is how cheaply human beings are willing to sell other human beings; I have bought sandwiches that cost more than some people have paid for other people.

The notion of “human capital” basically says that slavery is the correct attitude to have toward people. It says that we should value human beings for their usefulness, their productivity, their profitability.

Business executives are quite happy to see the world in that way. It makes the way they have spent their lives seem worthwhile—perhaps even best—while allowing them to turn a blind eye to the suffering they have neglected or even caused along the way.

I’m not saying that most economists believe in slavery; on the contrary, economists led the charge of abolitionism, and the reason we wear the phrase “the dismal science” like a badge is that the accusation was first leveled at us for our skepticism toward slavery.

Rather, I’m saying that jargon is not ethically neutral. The names we use for things have power; they affect how people view the world.

This is why I always endeavor to speak of net wealth rather than net worth—because a billionaire is not worth more than other people. I’m not even sure you should speak of the net worth of Tesla Incorporated; perhaps it would be better to simply speak of its net asset value or market capitalization. But at least Tesla is something you can buy and sell (piece by piece). Elon Musk is not.

Likewise, I think we need a new term for the knowledge, skills, training, and expertise that human beings bring to their work. It is clearly extremely important; in fact in some sense it’s the most important economic asset, as it’s the only one that can substitute for literally all the others—and the one that others can least substitute for.

Human ingenuity can’t substitute for air, you say? Tell that to Buzz Aldrin—or the people who were once babies that breathed liquid for their first months of life. Yes, it’s true, you need something for human ingenuity to work with; but it turns out that with enough ingenuity, you may not need much, or even anything in particular. One day we may manufacture the air, water and food we need to live from pure energy—or we may embody our minds in machines that no longer need those things.

Indeed, it is the expansion of human know-how and technology that has been responsible for the vast majority of economic growth. We may work a little harder than many of our ancestors (depending on which ancestors you have in mind), but we accomplish with that work far more than they ever could have, because we know so many things they did not.

All that capital we have now is the work of that ingenuity: Machines, factories, vehicles—even land, if you consider all the ways that we have intentionally reshaped the landscape.

Perhaps, then, what we really need to do is invert the expression:

Humans are not machines. Machines are embodied ingenuity.

We should not think of human beings as capital. We should think of capital as the creation of human beings.

Marx described capital as “embodied labor”, but that’s really less accurate: What makes a robot a robot is much less about the hours spent building it, than the centuries of scientific advancement needed to understand how to make it in the first place. Indeed, if that robot is made by another robot, no human need ever have done any labor on it at all. And its value comes not from the work put into it, but the work that comes out of it.

Like so much of neoliberal ideology, the notion of human capital seems to treat profit and economic growth as inherent ends in themselves. Human beings only become valued insofar as we advance the will of the almighty dollar. We forget that the whole reason we should care about economic growth in the first place is that it benefits people. Money is the means, not the end; people are the end, not the means.

We should not think in terms of “investing in children”, as if they were an asset that was meant to yield a return. We should think of enriching our children—of building a better world for them to live in.

We should not speak of “investing in employees”, as though they were just another asset. We should instead respect employees and seek to treat them with fairness and justice.

That would still give us plenty of reason to support education and training. But it would also give us a much better outlook on the world and our place in it.

You are worth more than your money or your job.

The economy exists for people, not the reverse.

Don’t ever forget that.

The paradoxical obviousness of reason

Nov 26 JDN 2460275

The basic precepts of reason seem obvious and irrefutable:

Believe what’s most likely to be true.

Do what’s most likely to work.

How are you going to argue with that? In fact, it seems like by the time you try to argue at all, you’ve already agreed to it. These principles may be undeniable—literally impossible to coherently deny.

Even when expressed a little more precisely, the principles of reason still seem pretty obvious:

Beliefs should be consistent with each other and with observations.

The best action is the one with the best expected outcome.

And you really can get surprisingly far with this. A few more steps of mathematical precision, and you basically get the scientific method and utilitarianism:

Beliefs should be assigned consistent Bayesian probabilities according to the observed evidence.

The best action is the one that maximizes expected utility.
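To make those two statements a little more concrete, here is a minimal toy example (my own illustration, with made-up numbers) of what Bayesian updating and expected-utility maximization actually look like:

```python
# A toy illustration (made-up numbers) of the two precepts above.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing the evidence (Bayes' rule)."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1.0 - prior))

# Belief: "it will rain today." Prior: 30%. A dark sky is seen 80% of the time
# when it rains, and 20% of the time when it doesn't. We observe a dark sky.
p_rain = bayes_update(prior=0.30, p_evidence_if_true=0.80, p_evidence_if_false=0.20)

# Decision: pick the action with the highest expected utility under that belief.
utilities = {
    "take umbrella": {"rain": 8, "no rain": 6},    # mildly annoying to carry
    "no umbrella":   {"rain": 0, "no rain": 10},   # great if dry, miserable if wet
}
expected_utility = {
    action: p_rain * u["rain"] + (1.0 - p_rain) * u["no rain"]
    for action, u in utilities.items()
}
best_action = max(expected_utility, key=expected_utility.get)

print(f"P(rain | dark sky) = {p_rain:.2f}")   # ≈ 0.63
print(best_action)                            # "take umbrella"
```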

Why, then, did it take humanity 99.9% of its existence to figure this out? Why did a species that has lived for 300,000 years only really start getting this right in about the past 300?

In fact, even today, while most people would at least assent to the basic notion of rationality, a large number don’t really follow it well, and only a small fraction really understand it at the deepest level.

Reason just seems obvious if you think about it. How do so many people miss it?

Because most people really don’t think about it that much.

In fact, I’m going to make a stronger claim:

Most people don’t think about anything that much.

Remember: To a first approximation, all human behavior is social norms.

Most human beings go through most of their lives behaving according to habits and social norms that they may not even be consciously aware of. They do things how they were always done; they believe what those around them believe. They adopt the religion of their parents, cheer for the sports team of their hometown, vote for the political party that is popular in their community. They may not even register these things as decisions at all—they simply did not consider the alternatives.

It’s not that they are incapable of thinking. When they really need to think hard about something, they can do it. But hard thinking is, well, hard. It’s difficult; it’s uncomfortable; for most people, it’s unfamiliar. So, they avoid it when they can. (There is even a kind of meta-rationality in that: Behavioral economists call it rational inattention.)

Few would willingly assent to the claim “I believe a lot of things that aren’t true.” People generally believe that their beliefs are true.

I doubt even most people in ancient history would agree with a statement like that. People who wholeheartedly believed in witches, werewolves, ghosts, and sympathetic magic still believed that their beliefs were true. People who thought that a giant beetle rolled the sun across the sky still thought they had a good handle on how the world works.

In fact, the few people I know who would agree with a statement like that are very honest, introspective Bayesians who recognize that the joint probability of all their beliefs being true must be quite small. Agreeing that some of your beliefs are false is a sign not that you are irrational, but that you are extremely rational. (In fact, I would agree with a statement like that: If I knew what I’m wrong about, I’d change my belief; but odds are, I’m wrong about something.)
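To put a made-up number on it: if you hold a thousand logically independent beliefs, each at 99% confidence, the probability that every single one of them is true is already minuscule.

```python
# Made-up numbers: 1,000 independent beliefs, each held at 99% confidence.
p_all_true = 0.99 ** 1_000
print(f"{p_all_true:.6f}")   # ≈ 0.000043—about a 0.004% chance that all of them are true
```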

But most people simply don’t even bother to evaluate the truth of many of their beliefs. If something is easy to check and directly affects their lives, they’ll probably try to gather evidence for it. But if it’s at all abstract or difficult to evaluate, they’ll more or less give up and believe whatever seems to be popular. (This explains Carlin’s dictum: “Tell people there’s an invisible man in the sky who created the universe, and the vast majority will believe you. Tell them the paint is wet, and they have to touch it to be sure.”)

This can also help to explain why so many people—mostly, but not exclusively right-wing people—complain that scientists are “elitist” while worshipping at the feet of clergy and business executives (the latter only—so far—figuratively, but the former all too literally).


What could be more elitist than clergy? They are basically claiming a special, unique connection to the ultimate truths of the universe that is only accessible to them. They claim to be ordained by the all-powerful ruler of the universe with the absolute right to adjudicate all truth and morality.

For goodness’ sake, one of the most popular and powerful ones literally claims to be infallible.

Meanwhile, basically all scientists agree that anyone who is reasonably smart and willing to work hard, either making their own observations, running their own experiments, or just reading the work of a lot of other people’s observations and experiments, can become a scientist. Some scientists are arrogant or condescending, but as an institution and culture, science is fundamentally egalitarian.

No, what people are objecting to among scientists is not elitism. Part of it may be the condescension of telling people: “This is obvious. If you thought about it, you would see that it has to be right.”

Yet the reason we keep saying that is… it is basically true. The precepts of rationality are obvious if you think about them, and they do lead quite directly to rejecting a lot of mainstream beliefs, particularly about religion. I’m sure it feels insulting to be told that you just aren’t thinking hard enough about important things… but maybe you aren’t?

We may need to find a gentler way to convey this message. There’s no point in saying it if nobody is going to listen. Yet that doesn’t make it any less true.

It’s not that quantum mechanics is intuitively obvious (to say quite the opposite would still be a terrible understatement), nor even that Darwinian natural selection or comparative advantage are obvious (though surely they’re less counter-intuitive than quantum mechanics). The conclusions of science are not obvious. They took centuries to figure out for good reason.

But the principles of science really are obvious: Want to know if something is true? Look! Find out!

Yet historically this has not in fact been how human beings formed most of their beliefs. Indeed, I am often awed by just how bad most people throughout history have been at thinking empirically.

It’s not just that people throughout history believed in witches without ever having seen one, or knowing anyone who had seen one. (I’ve never seen a platypus or a quasar, and I still believe in them.) It’s that they were willing to execute people for being witches—killing people as punishment for deeds that not only they did not do, but could not possibly have done. Entire civilizations for millennia failed to realize that this was wrong.

Aristotle believed that men’s body temperature was hotter than women’s, and that this temperature difference determined the sex of children. That’s Aristotle, a certifiable genius living in the culture that pioneered rationalist philosophy. (Ironically—and by pure Stopped Clock Principle—he’d almost be right about certain species of reptiles.) It never occurred to him to even try to measure the body temperatures of lots of people and see if this was true. (Admittedly they didn’t have very good thermometers back then.)

Aristotle did get a lot of things right: In particular, his trichotomy of souls is basically accurate, with “vegetative soul” renamed “homeostatic metabolism and reproduction”, “sensitive soul” renamed “limbic system”, and “rational soul” renamed “prefrontal cortex”. The vegetative soul is what makes you alive, the sensitive soul is what makes you sentient, and the rational soul is what makes you a person. He even recognized a deep truth that the majority of human beings today do not: The soul is a function of the body, and dies when the body dies. For his time, he was absolutely off the charts in rationality. But even he didn’t really integrate rationality and empiricism fully into his way of thinking.

Even today there are a shocking number of common misconceptions that could be easily refuted by anyone who thought to check (or look it up!):

Wolves howl at the full moon? Nope, wolves don’t care about the phase of the moon, and if you live near any, you’ll hear them howl all year round. Actually, wolf howling is more like that “Twilight Bark” from 101 Dalmatians; it’s a long-distance communication and coordination signal.

Eggs can only balance on the equinox? Nope, it’s tricky, but you can balance an egg just as well any day of the year.

You don’t lose most of your heat through your head: Try going outside in the cold wearing a t-shirt and shorts with a hat, and then again with snow pants and a heavy coat and no hat; you’ll see which feels colder.

“Beer before liquor, never sicker” is nonsense: It matters how much alcohol you drink (and how much you eat), not what order you do it in, and you’d know that if you just tried it both ways a few times.

Taste on your tongue is localized to particular areas? No, it’s not, and you can tell by putting foods with strong flavors on different parts of your tongue. (Indeed, I did when they did that demonstration in elementary school; I wondered if that meant my tongue was somehow weird.)

I can understand not wanting to take the risk with fan death yourself, but maybe listen to all the other people—including medical experts—who tell you it’s not real? I keep a fan in my bedroom every night and it hasn’t killed me yet.

Even the gambler’s fallacy is something you could easily disabuse yourself of by rolling some dice for a while and taking careful notes. Am I more likely to roll snake eyes if I haven’t in a while? Nope; the odds on any given roll are always exactly the same.
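If you don’t want to spend an afternoon at the kitchen table with real dice, a quick simulation makes the same point (this is just a sketch I’m adding for illustration):

```python
# Does a long drought of snake eyes make snake eyes more likely on the next roll?
import random

def roll_pair() -> bool:
    """Roll two dice; return True on snake eyes (double ones)."""
    return random.randint(1, 6) == 1 and random.randint(1, 6) == 1

N = 1_000_000
drought = 0                 # rolls since the last snake eyes
after_drought = [0, 0]      # [snake eyes, total] on rolls that follow a 50+ roll drought
overall = [0, 0]            # [snake eyes, total] on all rolls

for _ in range(N):
    hit = roll_pair()
    overall[0] += hit
    overall[1] += 1
    if drought >= 50:
        after_drought[0] += hit
        after_drought[1] += 1
    drought = 0 if hit else drought + 1

print(f"P(snake eyes) overall:       {overall[0]/overall[1]:.4f}")              # ~0.0278 (1/36)
print(f"P(snake eyes) after drought: {after_drought[0]/after_drought[1]:.4f}")  # ~0.0278 as well
```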

But most people simply don’t think to check.

Indeed, most people get a lot of their beliefs—particularly those about complex, abstract, or distant things—from authority figures. While empiricism doesn’t come very naturally to humans, hierarchy absolutely does. (I think it’s a primate thing.) Another reason scientists may seem “elitist” is that people think we are trying to usurp that authority. We’re telling you that what your religious leaders taught you is false; that must mean that we are trying to become religious leaders ourselves.

But in fact we’re telling you something far more radical than that: You don’t need religious leaders. You don’t need to take things on faith. If you want to know whether something is true, you can look.

We are not trying to usurp control over your hierarchy. We are trying to utterly dismantle it. We dethrone the king, not so that we can become kings ourselves—but so that the world can have kings no longer.

Granted, most people aren’t going to be able to run particle accelerator experiments in their garages. But if you want to know how particle physics works, and how we know what we know about it, go to your nearest university, find a particle physicist, and ask: I guarantee they’ll be more than happy to tell you whatever you want to know. You can even do this via email from anywhere in the world.

That is, we do need expertise: People who specialize in a particular field of knowledge can learn it much better than others. But we do not need authority: You don’t just have to take their word for it. There’s a difference between expertise and authority.

And sometimes, really all you need to do is stop and think. People should try that more often.

Homeschooling and too much freedom

Nov 19 JDN 2460268

Allowing families to homeschool their children increases freedom, quite directly and obviously. This is a large part of the political argument in favor of homeschooling, and likely a large part of why homeschooling is so popular within the United States in particular.

In the US, about 3% of people are homeschooled. This seems like a small proportion, but it’s enough to have some cultural and political impact, and it’s considerably larger than the proportion who are homeschooled in most other countries.

Moreover, homeschooling rates greatly increased as a result of COVID, and it’s anyone’s guess when, or even whether, they will go back down. I certainly hope they do; here’s why.

A lot of criticism about homeschooling involves academic outcomes: Are the students learning enough English and math? This is largely unfounded; statistically, academic outcomes of homeschooled students don’t seem to be any worse than those of public school students; by some measures, they are actually better. Nor is there clear evidence that homeschooled kids are any less developed socially; most of them get that social development through other networks, such as churches and sports teams.

No, my concern is not that they won’t learn enough English and math. It’s that they won’t learn enough history and science. Specifically, the parts of history and science that contradict the religious beliefs of the parents who are homeschooling them.

One way to study this would be to compare test scores by homeschooled kids on, say, algebra and chemistry (which do not directly threaten Christian evangelical beliefs) to those on, say, biology and neuroscience (which absolutely, fundamentally do). Lying somewhere in between are physics (F=ma is no threat to Christianity, but the Big Bang is) and history (Christian nationalists happily teach that Thomas Jefferson wrote the Declaration of Independence, but often omit that he owned slaves). If homeschooled kids are indeed indoctrinated, we should see particular lacunas in their knowledge where the facts contradict their ideology. In any case, I wasn’t able to find any such studies.

But even if their academic outcomes are worse in certain domains, so what? What about the freedom of parents to educate their children how they choose? What about the freedom of children to not be subjected to the pain of public school?

It will come as no surprise to most of you that I did well in school. In almost everything, really: math, science, philosophy, English, and Latin were my best subjects, and I earned basically flawless grades in them. But I also did very well in creative writing, history, art, and theater, and fairly well in music. My only poor performance was in gym class (as I’ve written about before).

It may come as some surprise when I tell you that I did not particularly enjoy school. In elementary school I had few friends—and one of my closest ended up being abusive to me. Middle school I mostly enjoyed—despite the onset of my migraines. High school started out utterly miserable, though it got a little better—a little—once I transferred to Community High School. Throughout high school, I was lonely, stressed, anxious, and depressed most of the time, and had migraine headaches of one intensity or another nearly every single day. (Sadly, most of that is true now as well; but I at least had a period of college and grad school where it wasn’t, and hopefully I will again once this job is behind me.)

I was good at school. I enjoyed much of the content of school. But I did not particularly enjoy school.

Thus, I can quite well understand why it is tempting to say that kids should be allowed to be schooled at home, if that is what they and their parents want. (Of course, a problem already arises there: What if child and parent disagree? Whose choice actually matters? In practice, it’s usually the parent’s.)

On the whole, public school is a fairly toxic social environment: Cliquish, hyper-competitive, stressful, often full of conflict between genders, races, classes, sexual orientations, and of course the school-specific one, nerds versus jocks (I’d give you two guesses which team I was on, but you’re only gonna need one). Public school sucks.

Then again, many of these problems and conflicts persist into adult life—so perhaps it’s better preparation than we care to admit. Maybe it’s better to be exposed to bias and conflict so that you can learn to cope with them, rather than sheltered from them.

But there is a more important reason why we may need public school, why it may even be worth coercing parents and children into that system against their will.

Public school forces you to interact with people different from you.

At a public school, you cannot avoid being thrown in the same classroom with students of other races, classes, and religions. This is of course more true if your school system is diverse rather than segregated—and all the more reason that the persistent segregation of many of our schools is horrific—but it’s still somewhat true even in a relatively homogeneous school. I was fortunate enough to go to a public school in Ann Arbor, where there was really quite substantial diversity. But even where there is less diversity, there is still usually some diversity—if not race, then class, or religion.

Certainly any public school has more diversity than homeschooling, where parents have the power to specifically choose precisely which other families their children will interact with, and will almost always choose those of the same race, class, and—above all—religious denomination as themselves.

The result is that homeschooled children often grow up indoctrinated into a dogmatic, narrow-minded worldview, convinced that the particular beliefs they were raised in are the objectively, absolutely correct ones and all others are at best mistaken and at worst outright evil. They are trained to reject conflict and dissent, to not even expose themselves to other people’s ideas, because those are seen as dangerous—corrupting.

Moreover, for most homeschooling parents—not all, but most—this is clearly the express intent. They want to raise their children in a particular set of beliefs. They want to inoculate them against the corrupting influences of other ideas. They are not afraid of their kids being bullied in school; they are afraid of them reading books that contradict the Bible.

This article has the headline “Homeschooled children do not grow up to be more religious”, yet its core finding is exactly the opposite of that:

The Cardus Survey found that homeschooled young adults were not noticeably different in their religious lives from their peers who had attended private religious schools, though they were more religious than peers who had attended public or Catholic schools.

No more religious than peers from private religious schools!? That’s still very religious. No, the fair comparison is to public schools, which clearly show lower rates of religiosity among the same demographics. (The interesting case is Catholic schools; they, it turns out, also churn out atheists with remarkable efficiency; I credit the Jesuit norm of top-quality liberal education.) This is clear evidence that religious homeschooling does make children more religious, and so does most private religious education.

Another finding in that same article sounds good, but is misleading:

Indiana University professor Robert Kunzman, in his careful study of six homeschooling families, found that, at least for his sample, homeschooled children tended to become more tolerant and less dogmatic than their parents as they grew up.


This is probably just regression to the mean. The parents who give their kids religious homeschooling are largely the most dogmatic and intolerant, so we would expect by sheer chance that their kids were less dogmatic and intolerant—but probably still pretty dogmatic and intolerant. (Also, do I have to point out that n=6 barely even constitutes a study!?) This is like the fact that the sons of NBA players are usually shorter than their fathers—but still quite tall.
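
To make the regression-to-the-mean point concrete, here is a minimal Python sketch with made-up numbers (the top-10% cutoff and the 0.5 parent-child correlation are purely illustrative assumptions, not estimates from Kunzman’s study):

import random

random.seed(0)

# Purely illustrative: "dogmatism" scores in the population are normal
# with mean 0 and standard deviation 1. Suppose homeschooling parents
# come from the top 10% of that distribution.
population = [random.gauss(0, 1) for _ in range(100_000)]
cutoff = sorted(population)[int(0.9 * len(population))]
parents = [x for x in population if x > cutoff]

# Children inherit the trait imperfectly: a correlation of 0.5 with
# their parents, the rest independent noise.
r = 0.5
children = [r * p + random.gauss(0, (1 - r**2) ** 0.5) for p in parents]

mean = lambda xs: sum(xs) / len(xs)
print(f"parents' average score:   {mean(parents):+.2f}")   # roughly +1.75
print(f"children's average score: {mean(children):+.2f}")  # roughly +0.9
print("population average score:  0.00")
# The children are less extreme than their parents -- regression to the
# mean -- but still far above the population average, which is exactly
# the NBA-sons pattern described above.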

Homeschooling is directly linked to a lot of terrible things: Young-Earth Creationism, Christian nationalism, homophobia, and shockingly widespread child abuse.

While most right-wing families don’t homeschool, most homeschooling families are right-wing: Between 60% and 70% of homeschooling families vote Republican in most elections. The recent COVID-driven surge in homeschooling has brought in more left-wing families, but the right wing still holds a strong majority for now.
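
The asymmetry in that first sentence is just conditional probability, and a minimal sketch makes it concrete. The 65% figure comes from the range quoted above; the 3% homeschooling rate and 40% right-wing share are hypothetical placeholders, not measured values:

# Hypothetical illustration of P(right-wing | homeschools) vs. P(homeschools | right-wing).
p_homeschool = 0.03       # share of families that homeschool (assumed)
p_rightwing = 0.40        # share of families that are right-wing (assumed)
p_rw_given_hs = 0.65      # share of homeschoolers who are right-wing (from above)

# Bayes' rule gives the reverse conditional probability.
p_hs_given_rw = p_rw_given_hs * p_homeschool / p_rightwing
print(f"{p_rw_given_hs:.0%} of homeschooling families are right-wing")
print(f"but only {p_hs_given_rw:.1%} of right-wing families homeschool")

Both statements can be true at once; neither implies the other.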

Of course, there are a growing number of left-wing and non-religious families who use homeschooling. Does this mean that the threat of indoctrination is gone? I don’t think so. I once knew someone who was homeschooled by a left-wing non-religious family and still ended up adopting an extremely narrow-minded extremist worldview—simply a left-wing non-religious one. In some sense a left-wing non-religious narrow-minded extremism is better than a right-wing religious narrow-minded extremism, but it’s still narrow-minded extremism. Whatever such a worldview gets right is mainly by the Stopped Clock Principle. It still misses many important nuances, and is still closed to new ideas and new evidence.

Of course this is not a necessary feature of homeschooling. One absolutely could homeschool children into a worldview that is open-minded and tolerant. Indeed, I’m sure some parents do. But statistics suggest that most do not, and this makes sense: When parents want to indoctrinate their children into narrow-minded worldviews, homeschooling allows them to do that far more effectively than if they had sent their children to public school. Whereas if you want to teach your kids open-mindedness and tolerance, exposing them to a diverse environment makes that easier, not harder.

In other words, the problem is that homeschooling gives parents too much control; in a very real sense, this is too much freedom.

When can freedom be too much? It seems absurd at first. But there are at least two cases where it makes sense to say that someone has too much freedom.

The first is paternalism: Sometimes people really don’t know what’s best for them, and giving them more freedom will just allow them to hurt themselves. This notion is easily abused—it has been abused many times, for example against disabled people and colonized populations. For that reason, we are right to be very skeptical of it when applied to adults of sound mind. But what about children? That’s who we are talking about after all. Surely it’s not absurd to suggest that children don’t always know what’s best for them.

The second is the paradox of tolerance: The freedom to take away other people’s freedom is not a freedom we can afford to protect. And homeschooling that indoctrinates children into narrow-minded worldviews is a threat to other people’s freedom—not only those who will be oppressed by a new generation of extremists, but also the children themselves who are never granted the chance to find their own way.

Both reasons apply in this case: paternalism for the children, the paradox of tolerance for the parents. We have a civic responsibility to ensure that children grow up in a rich and diverse environment, so that they learn open-mindedness and tolerance. This is important enough that we should be willing to impose constraints on freedom in order to achieve it. Democracy cannot survive a citizenry who are molded from birth into narrow-minded extremists. There are parents who want to mold their children that way—and we cannot afford to let them.

From where I’m sitting, that means we need to ban homeschooling, or at least very strictly regulate it.

Israel, Palestine, and the World Bank’s disappointing priorities

Nov 12 JDN 2460261

Israel and Palestine are once again at war. (There are a disturbing number of different years in which one could have written that sentence.) The BBC has a really nice section of their website dedicated to reporting on various facets of the war. The New York Times also has a section on it, but it seems a little tilted in favor of Israel.

This time, it started with a brutal attack by Hamas, and now Israel has—as usual—overreacted and retaliated with a level of force that is sure to feed the ongoing cycle of extremism. All across social media I see people wanting me to take one side or the other, often even making good points: “Hamas slaughters innocents” and “Israel is a de facto apartheid state” are indeed both important points I agree with. But if you really want to know my ultimate opinion, it’s that this whole thing is fundamentally evil and stupid because human beings are suffering and dying over nothing but lies. All religions are false, most of them are evil, and we need to stop killing each other over them.

Anti-Semitism and Islamophobia are both morally wrong insofar as they involve harming, abusing or discriminating against actual human beings. Let people dress however they want, celebrate whatever holidays they want, read whatever books they want. Even if their beliefs are obviously wrong, don’t hurt them if they aren’t hurting anyone else. But both Judaism and Islam—and Christianity, and more besides—are fundamentally false, wrong, evil, stupid, and detrimental to the advancement of humanity.

That’s the thing that so much of the public conversation is too embarrassed to say; we’re supposed to pretend that they aren’t fighting over beliefs that are obviously false. We’re supposed to respect each particular flavor of murderous nonsense, and always find some other cause to explain the conflict. It’s over culture (what culture?); it’s over territory (whose territory?); it’s a retaliation for past conflict (over what?). We’re not supposed to say out loud that all of this violence ultimately hinges upon people believing in nonsense. Even if the conflict wouldn’t disappear overnight if everyone suddenly stopped believing in God—and are we sure it wouldn’t? Let’s try it—it clearly could never have begun, if everyone had started with rational beliefs in the first place.

But I don’t really want to talk about that right now. I’ve said enough. Instead I want to talk about something a little more specific, something less ideological and more symptomatic of systemic structural failures. Something you might have missed amidst the chaos.

The World Bank recently released a report on the situation focused heavily on the looming threat of… higher oil prices. (And of course there has been breathless reporting from various outlets about a headline figure of $150 per barrel, which the report itself explicitly describes as an unlikely “worst-case scenario”.)

There are two very big reasons why I found this dismaying.


The first, of course, is that there are obviously far more important concerns here than commodity prices. Yes, I know that this report is part of an ongoing series of Commodity Markets Outlook reports, but the fact that this is the sort of thing that the World Bank has ongoing reports about is also saying something important about the World Bank’s priorities. They release monthly commodity forecasts and full Commodity Markets Outlook reports that come out twice a year, unlike the World Development Reports that only come out once a year. The World Bank doesn’t release a twice-annual Conflict Report or a twice-annual Food Security Report. (Even the FAO, which publishes an annual State of Food Security and Nutrition in the World report, also publishes a State of Agricultural Markets report just as often.)

The second is that, when reading the report, one can clearly tell that whoever wrote it thinks that rising oil and gas prices are inherently bad. They keep talking about all of these negative consequences that higher oil prices could have, and seem utterly unaware of the really enormous upside here: We may finally get a chance to do something about climate change.

You see, one of the most basic reasons why we haven’t been able to fix climate change is that oil is too damn cheap. Its market price has consistently failed to reflect its actual costs. Part of that is due to oil subsidies around the world, which have held the price lower than it would be even in a free market; but most of it is due to the simple fact that pollution and carbon emissions don’t cost money for the people who produce them, even though they do cost the world.
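
To put the economist’s sense of “too cheap” in concrete terms, here is a minimal sketch of a price that omits the external cost of emissions. Every number in it is an illustrative assumption (the $80 figure is just roughly where prices were at the time; the carbon figures are placeholders, not taken from the World Bank report):

# Illustrative only: none of these figures come from the report discussed here.
market_price = 80.0           # dollars per barrel (rough ballpark)
co2_per_barrel = 0.43         # metric tons of CO2 from burning one barrel (approximate)
carbon_cost_per_ton = 100.0   # dollars per ton of CO2 -- a hypothetical placeholder;
                              # published "social cost of carbon" estimates vary widely

external_cost = co2_per_barrel * carbon_cost_per_ton
full_cost_price = market_price + external_cost
print(f"market price:            ${market_price:.0f}/barrel")
print(f"unpriced carbon cost:    ${external_cost:.0f}/barrel")
print(f"price reflecting both:   ${full_cost_price:.0f}/barrel")
# If the carbon cost never shows up in the price, buyers treat oil as
# cheaper than it really is -- which is the sense in which oil is too cheap.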

Fortunately, wind and solar power are also getting very cheap, and are now at the point where they can outcompete oil and gas for electrical power generation. But that’s not enough. We need to remove oil and gas from everything: heating, manufacturing, agriculture, transportation. And that is far easier to do if oil and gas suddenly become more expensive and so people are forced to stop using them.

Now, granted, many of the downsides in that report are genuine: Because oil and gas are such vital inputs to so many economic processes, it really is true that making them more expensive will make lots of other things more expensive, and in particular could increase food insecurity by making farming more expensive. But if that’s what we’re concerned about, we should be focusing on that: What policies can we use to make sure that food remains available to all? And one of the best things we could be doing toward that goal is finding ways to make agriculture less dependent on oil.

By focusing on oil prices instead, the World Bank is encouraging the world to double down on the very oil subsidies that are holding climate policy back. Even food subsidies—which certainly have their own problems—would be an obviously better solution, and yet they are barely mentioned.

In fact, if you actually read the report, it shows that fears of food insecurity seem unfounded: Food prices are actually declining right now. Grain prices in particular seem to be falling back down remarkably quickly after their initial surge when Russia invaded Ukraine. Of course that could change, but it’s a really weird attitude toward the world to see something good and respond with, “Yes, but it might change!” This is how people with anxiety disorders (and I would know) think—which makes it seem as though much of the economic policy community suffers from some kind of collective equivalent of an anxiety disorder.

There also seems to be a collective sense that higher prices are always bad. This is hardly just a World Bank phenomenon; on the contrary, it seems to pervade all of economic thought, including the most esteemed economists, the most powerful policymakers, and even most of the general population of citizens. (The one major exception seems to be housing, where the sense is that higher prices are always good—even when the world is in a chronic global housing shortage that leaves millions homeless.) But prices can be too low or too high. And oil prices are clearly, definitely too low. Prices should reflect the real cost of production—all the real costs of production. It should cost money to pollute other people’s air.

In fact I think the whole report is largely a nothingburger: Oil prices haven’t even risen all that much so far—we’re still at $80 per barrel last I checked—and the one thing that is true about the so-called Efficient Market Hypothesis is that forecasting future prices is a fool’s errand. But it’s still deeply unsettling to see such intelligent, learned experts so clearly panicking over the mere possibility that there could be a price change which would so obviously be good for the long-term future of humanity.

There is plenty more worth saying about the Israel-Palestine conflict, and in particular what sort of constructive policy solutions we might be able to find that would actually result in any kind of long-term peace. I’m no expert on peace negotiations, and frankly, if I were ever personally involved in such a negotiation, it would probably be a liability that I’d be tempted to tell both sides that they are idiots and fanatics. (The headline the next morning: “Israeli and Palestinian Delegates Agree on One Thing: They Hate the US Ambassador”.)

The World Bank could have plenty to offer here, yet so far they’ve been too focused on commodity prices. Their thinking is a little too much ‘bank’ and not enough ‘world’.

It is a bit ironic, though also vaguely encouraging, that there are those within the World Bank itself who recognize this problem: Just a few weeks ago Ajay Banga gave a speech to the World Bank about “a world free of poverty on a livable planet”.

Yes. Those sound like the right priorities. Now maybe you could figure out how to turn that lip service into actual policy.

Time and How to Use It

Nov 5 JDN 2460254

A review of Four Thousand Weeks by Oliver Burkeman

The central message of Four Thousand Weeks: Time and How to Use It seems so obvious in hindsight that it’s difficult to understand why it feels so new and unfamiliar. It’s a much-needed reaction to the obsessive culture of “efficiency” and “productivity” that dominates the self-help genre. Its core message is remarkably simple:

You don’t have time to do everything you want, so stop trying.

I actually think Burkeman understands the problem incorrectly. He argues repeatedly that it is our mortality which makes our lives precious—that it is because we only get four thousand weeks of life that we must use our time well. But this strikes me as just yet more making excuses for the dragon.

Our lives would not be less precious if we lived a thousand years or a million. Indeed, our time would hardly be any less scarce! You still can’t read every book ever written if you live a million years—for every one of those million years, another 500,000 books will be published. You could visit every one of the 10,000 cities in the world, surely; but if you spend a week in each one, by the time you get back to Paris for a second visit, centuries will have passed—I must imagine you’ll have missed quite a bit of change in that time. (And this assumes that our population remains the same—do we really think it would, if humans could live a million years?)
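
The arithmetic behind that paragraph is easy to check; a minimal sketch using the figures above (10,000 cities at one week each, 500,000 new books per year, plus an assumed, generous 100 books read per year):

cities = 10_000
weeks_per_year = 52
print(f"one week per city takes about {cities / weeks_per_year:.0f} years")  # ~192 years

books_published_per_year = 500_000   # the figure used above
books_read_per_year = 100            # a generous assumption for one reader
gap = books_published_per_year - books_read_per_year
print(f"each year the to-read pile grows by about {gap:,} books")
# Even with a million years, the backlog grows far faster than anyone can read.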

Even a truly immortal being that will live until the end of time needs to decide where to be at 7 PM this Saturday.

Yet Burkeman does grasp—and I fear that too many of us do not—that our time is precious, and when we try to do everything that seems worth doing, we end up failing to prioritize what really matters most.

What do most of us spend most of our lives doing? Whatever our bosses tell us to do. Aside from sleeping, the activity that human beings spend the largest chunk of their lives on is working.

This has made us tremendously, mind-bogglingly productive—our real GDP per capita is four times what it was as recently as 1950, and about eight times what it was in the 1920s. Projecting back further than that is a bit dicier, but assuming even 1% annual growth, it should be about twenty times what it was at the dawn of the Industrial Revolution. We could surely live better than medieval peasants did by working only a few hours per week; yet in fact on average we work more hours than they did—by some estimates, nearly twice as much. Rather than getting the same wealth for 5% of the work, or twice the wealth for 10%, we chose to get 40 times the wealth for twice the work.
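
The projection in that paragraph is just compound growth: after n years of growth at rate r, output is (1 + r)^n times larger. A minimal sketch (the 1% rate is from the sentence above; the spans of years are my own rough assumptions about how far back the Industrial Revolution lies):

rate = 0.01  # 1% annual growth, as assumed above
for years in (200, 250, 300):
    factor = (1 + rate) ** years
    print(f"{years} years at {rate:.0%} per year -> about {factor:.0f}x")
# 200 years -> ~7x, 250 years -> ~12x, 300 years -> ~20x.
# Sustained over roughly three centuries, even 1% per year compounds to
# about the twentyfold increase mentioned above.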

It would be one thing if all this wealth and productivity actually seemed to make us happy. But does it?

Our physical health is excellent: We are tall, we live long lives—we are smarter, even, than people of the not-so-distant past. We have largely conquered disease as the ancients knew it. Even a ‘catastrophic’ global pandemic today kills a smaller share of the population than would die in a typical year from disease in ancient times. Even many of our most common physical ailments, such as obesity, heart disease, and diabetes, are more symptoms of abundance than poverty. Our higher rates of dementia and cancer are largely consequences of living longer lives—most medieval peasants simply didn’t make it long enough to get Alzheimer’s. I wonder sometimes how ancient people dealt with other common ailments such as migraine and sleep apnea; but my guess is that they basically just didn’t—since treatment was impossible, they learned to live with it. Maybe they consoled themselves with whatever placebo treatments the healers of their local culture offered.

Yet our mental health seems to be no better than ever—and depending on how you measure it, may actually be getting worse over time. Some of the measured increase is surely due to more sensitive diagnosis; but some of it may be a genuine increase—especially as a result of the COVID pandemic. I wasn’t able to find any good estimates of rates of depression or anxiety disorders in ancient or medieval times, so I guess I really can’t say whether this is a problem that’s getting worse. But it sure doesn’t seem to be getting better. We clearly have not solved the problem of depression the way we have solved the problem of infectious disease.

Burkeman doesn’t tell us to all quit our jobs and stop working. But he does suggest that if you are particularly unhappy at your current job (as I am), you may want to quit it and begin searching for something else (as I have). He reminds us that we often get stuck in a particular pattern and underestimate the possibilities that may be available to us.

And he has advice for those who want to stay in their current jobs, too: Do less. Don’t take on everything that is asked of you. Don’t work yourself to the bone. The rewards for working harder are far smaller than our society will tell you, and the costs of burning out are far higher. Do the work that is genuinely most important, and let the rest go.

Unlike most self-help books, Four Thousand Weeks offers very little in the way of practical advice. It’s more like a philosophical treatise, exhorting you to adopt a whole new outlook on time and how you use it. But he does offer a little bit of advice, near the end of the book, in “Ten Tools for Embracing Your Finitude” and “Five Questions”.

The ten tools are as follows:


Adopt a ‘fixed volume’ approach to productivity. Limit the number of tasks on your to-do list. Set aside a particular amount of time for productive work, and work only during that time.

I am relatively good at this one; I work only during certain hours on weekdays, and I resist the urge to work other times.

Serialize, serialize, serialize. Do one major project at a time.

I am terrible at this one; I constantly flit between different projects, leaving most of them unfinished indefinitely. But I’m not entirely convinced I’d do better trying to focus on one in particular. I switch projects because I get stalled on the current one, not because I’m anxious about not doing the others. Unless I can find a better way to break those stalls, switching projects still gets more done than staying stuck on the same one.

Decide in advance what to fail at. Prioritize your life and accept that some things will fail.

We all, inevitably, fail to achieve everything we want to. What Burkeman is telling us to do is choose in advance which achievements we will fail at. Ask yourself: How much do you really care about keeping the kitchen clean and the lawn mowed? If you’re doing these things to satisfy other people’s expectations but you don’t truly care about them yourself, maybe you should just accept that people will frown upon you for your messy kitchen and overgrown lawn.

Focus on what you’ve already completed, not just on what’s left to complete. Make a ‘done list’ of tasks you have completed today—even small ones like “brushed teeth” and “made breakfast”—to remind yourself that you do in fact accomplish things.

I may try this one for awhile. It feels a bit hokey to congratulate yourself on making breakfast—but when you are severely depressed, even small tasks like that can in fact feel like an ordeal.

Consolidate your caring. Be generous and kind, but pick your battles.

I’m not very good at this one either. Spending less time on social media has helped; I am no longer bombarded quite so constantly by worthy causes and global crises. Yet I still have a vague sense that I am not doing enough, that I should be giving more of myself to help others. For me this is partly colored by a feeling that I have failed to build a career that would have both allowed me to have direct impact on some issues and also made enough money to afford large donations.

Embrace boring and single-purpose technology. Downgrade your technology to reduce distraction.

I don’t do this one, but I also don’t see it as particularly good advice. Maybe taking Facebook and (the-platform-formerly-known-as-) Twitter off your phone home screen is a good idea. But the reason you go to social media isn’t that they are so easy to access. It’s that you are expected to, and that you try to use them to fill some kind of need in your life—though it’s unclear they ever actually fill it.

Seek out novelty in the mundane. Cultivate awareness and appreciation of the ordinary things around you.

This one is basically a stripped-down meditation technique. It does work, but it’s also a lot harder to do than most people seem to think. It is especially hard to do when you are severely depressed. One technique I’ve learned from therapy that is surprisingly helpful is to replace “I have to” with “I get to” whenever you can: You don’t have to scoop cat litter, you get to because you have an adorable cat. You don’t have to catch the bus to work, you get to because you have a job. You don’t have to make breakfast for your family, you get to because you have a loving family.

Be a ‘researcher’ in relationships. Cultivate curiosity rather than anxiety or judgment.

Human beings are tremendously varied and often unpredictable. If you worry about whether or not people will do what you want, you’ll be constantly worried. And I have certainly been there. It can help to try to take a stance of detachment, where you concern yourself less with getting the right outcome and more with learning about the people you are with. I think this can be taken too far—you can become totally detached from relationships, or you could put yourself in danger by failing to pass judgment on obviously harmful behaviors—but in moderation, it’s surprisingly powerful. The first time I ever enjoyed going to a nightclub, I went (at my therapist’s suggestion) as a social scientist, tasked with observing and cataloguing the behavior around me. I still didn’t feel fully integrated into the environment (and the music was still too damn loud!), but for once, I wasn’t anxious and miserable.

Cultivate instantaneous generosity. If you feel like doing something good for someone, just do it.

I’m honestly not sure whether this one is good advice. I used to follow it much more than I do now. Interacting with the Effective Altruism community taught me to temper these impulses, and instead of giving to every random charity or homeless person that asks for money, instead concentrate my donations into a few highly cost-effective charities. Objectively, concentrating donations in this way produces a larger positive impact on the world. But subjectively, it doesn’t feel as good, it makes people sad, and sometimes it can make you feel like a very callous person. Maybe there’s a balance to be had here: Give a little when the impulse strikes, but save up most of it for the really important donations.

Practice doing nothing.

This one is perhaps the most subversive, the most opposed to all standard self-help advice. Do nothing? Just rest? How can you say such a thing, when you just reminded us that we have only four thousand weeks to live? Yet this is in fact the advice most of us need to hear. We burn ourselves out because we forget how to rest.

I am also terrible at this one. I tend to get most anxious when I have between 15 and 45 minutes of free time before an activity, because 45 minutes doesn’t feel long enough to do anything, and 15 minutes feels too long to do nothing. Logically this doesn’t really make sense: Either you have time to do something, or you don’t. But it can be hard to find good ways to fill that sort of interval, because it requires the emotional overhead of starting and stopping a task.

Then, there are the five questions:

Where in your life or work are you currently pursuing comfort, when what’s called for is a little discomfort?

It seems odd to recommend discomfort as a goal, but I think what Burkeman is getting at is that we tend to get stuck in the comfortable and familiar, even when we would be better off reaching out and exploring into the unknown. I know that for me, finally deciding to quit this job was very uncomfortable; it required taking a big risk and going outside the familiar and expected. But I am now convinced it was the right decision.

Are you holding yourself to, and judging yourself by, standards of productivity or performance that are impossible to meet?

In a word? Yes. I’m sure I am. But this one is also slipperier than it may seem—for how do we really know what’s possible? And possible for whom? If you see someone else who seems to be living the life you think you want, is it just an illusion? Are they really suffering as badly as you? Or do they perhaps have advantages you don’t, which made it possible for them, but not for you? When people say they work 60 hours per week and you can barely manage 20, are they lying? Are you truly not investing enough effort? Or do you suffer from ailments they don’t, which make it impossible for you to commit those same hours?

In what ways have you yet to accept the fact that you are who you are, not the person you think you ought to be?

I think most of us have a lot of ways that we fail to accept ourselves: physically, socially, psychologically. We are never the perfect beings we aspire to be. And constantly aspiring to an impossible ideal will surely drain you. But I also fear that self-acceptance could be a dangerous thing: What if it makes us stop striving to improve? What if we could be better than we are, but we don’t bother? Would you want a murderous psychopath to practice self-acceptance? (Then again, do they already, whether we want them to or not?) How are we to know which flaws in ourselves should be accepted, and which repaired?

In which areas of your life are you still holding back until you feel like you know what you’re doing?

This one cut me very deep. I have several areas of my life where this accusation would be apt, and one in particular where I am plainly guilty as charged: Parenting. In a same-sex marriage, offspring don’t emerge automatically without intervention. If we want to have kids, we must do a great deal of work to secure adoption. And it has been much easier—safer, more comfortable—to simply put off that work, avoid the risk. I told myself we’d adopt once I finished grad school; but then I only got a temporary job, so I put it off again, saying we’d adopt once I found stability in my career. But what if I never find that stability? What if the rest of my career is always this precarious? What if I can always find some excuse to delay? The pain of never fulfilling that lifelong dream of parenthood might continue to gnaw at me forever.

How would you spend your days differently if you didn’t care so much about seeing your actions reach fruition?

This one is frankly useless. I hate it. It’s like when people say “What would you do if you knew you’d die tomorrow?” Obviously, you wouldn’t go to work, you wouldn’t pay your bills, you wouldn’t clean your bathroom. You might devote yourself single-mindedly to a single creative task you hoped to make a legacy, or gather your family and friends to share one last day of love, or throw yourself into meaningless hedonistic pleasure. Those might even be things worth doing, on occasion. But you can’t do them every day. If you knew you were about to die, you absolutely would not live in any kind of sustainable way.

Similarly, if I didn’t care about seeing my actions reach fruition, I would continue to write stories and never worry about publishing them. I would make little stabs at research when I got curious, then once it starts getting difficult or boring, give up and never bother writing the paper. I would continue flitting between a dozen random projects at once and never finish any of them. I might well feel happier—at least until it all came crashing down—but I would get absolutely nothing done.

Above all, I would never apply for any jobs, because applying for jobs is absolutely not about enjoying the journey. If you know for a fact that you won’t get an offer, you’re an idiot to bother applying. That is a task that is only worth doing if I believe that it will yield results—and indeed, a big part of why it’s so hard to bring myself to do it is that I have a hard time maintaining that belief.

If you read the surrounding context, Burkeman actually seems to intend something quite different than the actual question he wrote. He suggests devoting more time to big, long-term projects that require whole communities to complete. He likens this to laying bricks in a cathedral that we will never see finished.

I do think there is wisdom in this. But it isn’t a simple matter of not caring about results. Indeed, if you don’t care at all about whether the cathedral will stand, you won’t bother laying the bricks correctly. In some sense Burkeman is actually asking us to do the opposite: To care more about results, but specifically results that we may never live to see. Maybe he really intends to emphasize the word see—you care about your actions reaching fruition, but not whether or not you’ll ever see it.

Yet this, I am quite certain, is not my problem. When a psychiatrist once asked me, “What do you really want most in life?” I gave a very thoughtful answer: “To be remembered in a thousand years for my contribution to humanity.” (His response was glib: “You can’t control that.”) I still stand by that answer: If I could have whatever I want, no limits at all, three wishes from an all-powerful genie, two of them would be to solve some of the world’s greatest problems, and the third would be for the chance to live my life in a way that I knew would be forever remembered.

But I am slowly coming to realize that maybe I should abandon that answer. That psychiatrist’s answer was far too glib (he was in fact not a very good fit for me; I quickly switched to a different psychiatrist), but maybe it wasn’t fundamentally wrong. It may be impossible to predict, let alone control, whether our lives have that kind of lasting impact—and, almost by construction, most lives can’t.

Perhaps, indeed, I am too worried about whether the cathedral will stand. I only have a few bricks to lay myself, and while I can lay them the best I can, that ultimately will not be what decides the fate of the cathedral. A fire, or an earthquake, or simply some other bricklayer’s incompetence, could bring about its destruction—and there is nothing at all I can do to prevent that.

This post is already getting too long, so I should try to bring it to a close.

As the adage goes, perhaps if I had more time, I’d make it shorter.

On Horror

Oct 29 JDN 2460247

Since this post will go live the weekend before Halloween, the genre of horror seemed a fitting topic.

I must confess, I don’t really get horror as a genre. Generally I prefer not to experience fear and disgust? This can’t be unusual; it’s literally a direct consequence of the evolutionary function of fear and disgust. It’s wanting to be afraid and disgusted that’s weird.

Cracked once came out with a list of “Horror Movies for People Who Hate Horror”, and I found some of my favorite films on it, such as Alien (which is as much sci-fi as horror), The Cabin in the Woods (which is as much satire), and Zombieland (which is a comedy). Other such lists have prominently featured Get Out (which is as much political as it is horrific), Young Frankenstein (which is entirely a comedy), and The Silence of the Lambs (which is horror, at least in large part, but which I didn’t so much enjoy as appreciate as a work of artistry; I watch it the way I look at Guernica). Some such lists include Saw, which I can appreciate on some level—it does have a lot of sociopolitical commentary—but still can’t enjoy (it’s just too gory). I note that none of these lists seem to include Event Horizon, which starts out as a really good sci-fi film, but then becomes so very much horror that I ended up hating it.

In trying to explain the appeal of horror to me, people have likened it to the experience of a roller coaster: Isn’t fear exhilarating?

I do enjoy roller coasters. But the analogy falls flat for me, because, well, my experience of riding a roller coaster isn’t fear—the exhilaration comes directly from the experience of moving so fast, a rush of “This is awesome!” that has nothing to do with being afraid. Indeed, should I encounter a roller coaster that actually made me afraid, I would assiduously avoid it, and wonder if it was up to code. My goal is not to feel like I’m dying; it’s to feel like I’m flying.

And speaking of flying: Likewise, the few times I have had the chance to pilot an aircraft were thrilling in a way it is difficult to convey to anyone who hasn’t experienced it. I think it might be something like what religious experiences feel like. The sense of perspective, looking down on the world below, seeing it as most people never see it. The sense of freedom, of, for once in your life, actually having the power to maneuver freely in all three dimensions. The subtle mix of knowing that you are traveling at tremendous speed while feeling as if you are peacefully drifting along. Astronauts also describe this sort of experience, which no doubt is even more intense for them.

Yet in all that, fear was never my primary emotion, and had it been, it would have undermined the experience rather than enhanced it. The brief moment when our engine stalled flying over Scotland certainly raised my heart rate, but not in a pleasant way. In that moment—objectively brief, subjectively interminable—I spent all of my emotional energy struggling to remain calm. It helped to continually remind myself of what I knew about aerodynamics: Wings want to fly. An airplane without an engine isn’t a rock; it’s a glider. It is entirely possible to safely land a small aircraft on literally zero engine power. Still, I’m glad we got the propeller started again and didn’t have to.

I have also enjoyed classic horror novels such as Dracula and Frankenstein; their artistry is also quite apparent, and reading them as books provides an emotional distance that watching them as films often lacks. I particularly notice this with vampire stories, as I can appreciate the romantic allure of immortality and the erotic tension of forbidden carnal desire—but the sight of copious blood on screen tends to trigger my mild hematophobia.

Yet if fear is the goal, surely having a phobia should only make it stronger and thus better? And yet, this seems to be a pattern: People with genuine phobia of the subject in question don’t actually enjoy horror films on the subject. Arachnophobes don’t often watch films about giant spiders. Cynophobes are rarely werewolf aficionados. And, indeed, rare is the hematophobe who is a connoisseur of vampire movies.

Moreover, we rarely see horror films about genuine dangers in the world. There are movies about rape, murder, war, terrorism, espionage, asteroid impacts, nuclear weapons and climate change, but (with rare exceptions) they aren’t horror films. They don’t wallow in fear the way that films about vampires, ghosts and werewolves do. They are complex thrillers (Argo, Enemy of the State, Tinker Tailor Soldier Spy, Broken Arrow), police procedurals (most films about rape or murder), heroic sagas (just about every war film), or just fun, light-hearted action spectacles (Armageddon, The Day After Tomorrow). Rather than a loosely-knit gang of helpless horny teenagers, they have strong, brave heroes. Even films about alien invasions aren’t usually horror (Alien notwithstanding); they also tend to be heroic war films. Unlike nuclear war or climate change, alien invasion is a quite unlikely event; but it’s surely more likely than zombies or werewolves.

In other words, when something is genuinely scary, the story is always about overcoming it. There is fear involved, but in the end we conquer our fear and defeat our foes. The good guys win in the end.

I think, then, that enjoyment of horror is not about real fear. Feeling genuinely afraid is unpleasant—as by all Darwinian rights it should be.

Horror is about simulating fear. It’s a kind of brinksmanship: You take yourself to the edge of fear and then back again, because what you are seeing would be scary if it were real, but deep down, you know it isn’t. You can sleep at night after watching movies about zombies, werewolves and vampires, because you know that there aren’t really such things as zombies, werewolves and vampires.

What about the exceptions? What about, say, The Silence of the Lambs? Psychopathic murderers absolutely are real. (Not especially common—but real.) But The Silence of the Lambs only works because of truly brilliant writing, directing, and acting; and part of what makes it work is that it isn’t just horror. It has layers of subtlety, and it crosses genres—it also has a good deal of police procedural in it, in fact. And even in The Silence of the Lambs, at least one of the psychopathic murderers is beaten in the end; evil does not entirely prevail.

Slasher films—which I especially dislike (see above: hematophobia)—seem like they might be a counterexample, in that they genuinely are a common subgenre and they mainly involve psychopathic murderers. But in fact almost all slasher films involve some kind of supernatural element: In Friday the 13th, Jason seems to be immortal. In A Nightmare on Elm Street, Freddy Krueger doesn’t just attack you with a knife, he invades your dreams. Slasher films actually seem to go out of their way to make the killer not real. Perhaps this is because showing helpless people murdered by a realistic psychopath would inspire too much genuine fear.

The terrifying truth is that, more or less at any time, a man with a gun could in fact come and shoot you, and while there may be ways to reduce that risk, there’s no way to make it zero. But that isn’t fun for a movie, so let’s make him a ghost or a zombie or something, so that when the movie ends, you can remind yourself it’s not real. Let’s pretend to be afraid, but never really be afraid.

Realizing that makes me at least a little more able to understand why some people enjoy horror.

Then again, I still don’t.