Reflections at the crossroads

Jan 21 JDN 2460332

When this post goes live, I will have just passed my 36th birthday. (That means I’ve lived for about 1.1 billion seconds, so in order to be as rich as Elon Musk, I’d need to have made, on average, since birth, $200 per second—$720,000 per hour.)

I certainly feel a lot better turning 36 than I did 35. I don’t have any particular additional accomplishments to point to, but my life has already changed quite a bit, in just that one year: Most importantly, I quit my job at the University of Edinburgh, and I am currently in the process of moving out of the UK and back home to Michigan. (We moved the cat over Christmas, and the movers have already come and taken most of our things away; it’s really just us and our luggage now.)

But I still don’t know how to field the question that people have been asking me since I announced my decision to do this months ago:

“What’s next?”

I’m at a crossroads now, trying to determine which path to take. Actually maybe it’s more like a roundabout; it has a whole bunch of different paths, surely not just two or three. The road straight ahead is labeled “stay in academia”; the others at the roundabout are things like “freelance writing”, “software programming”, “consulting”, and “tabletop game publishing”. There’s one well-paved and superficially enticing road that I’m fairly sure I don’t want to take, labeled “corporate finance”.

Right now, I’m just kind of driving around in circles.

Most people don’t seem to quit their jobs without a clear plan for where they will go next. Often they wait until they have another offer in hand that they intend to take. But when I realized just how miserable that job was making me, I made the—perhaps bold, perhaps courageous, perhaps foolish—decision to get out as soon as I possibly could.

It’s still hard for me to fully understand why working at Edinburgh made me so miserable. Many features of an academic career are very appealing to me. I love teaching, I like doing research; I like the relatively flexible hours (and kinda need them, because of my migraines).

I often construct formal decision models to help me make big choices—generally it’s a linear model, where I simply rate each option by its relative quality in a particular dimension, then try different weightings of all the different dimensions. I’ve used this successfully to pick out cars, laptops, even universities. I’m not entrusting my decisions to an algorithm; I often find myself tweaking the parameters to try to get a particular result—but that in itself tells me what I really want, deep down. (Don’t do that in research—people do, and it’s bad—but if the goal is to make yourself happy, your gut feelings are important too.)
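To make that concrete, here is a minimal sketch of the kind of linear decision model I mean; all the options, dimensions, ratings, and weights below are made up purely for illustration:

```python
# A minimal sketch of a linear decision model: rate each option on each
# dimension, then combine the ratings with adjustable weights.
# Every option, dimension, rating, and weight here is hypothetical.

options = {
    "university teaching": {"enjoyment": 8, "income": 6, "flexibility": 7, "low rejection": 4},
    "freelance writing":   {"enjoyment": 10, "income": 3, "flexibility": 9, "low rejection": 2},
    "corporate finance":   {"enjoyment": 3, "income": 9, "flexibility": 4, "low rejection": 6},
}

# Tweak these weights and re-run to see how sensitive the ranking is to
# what you (think you) care about.
weights = {"enjoyment": 0.4, "income": 0.2, "flexibility": 0.2, "low rejection": 0.2}

def total_score(ratings: dict) -> float:
    """Weighted sum of one option's ratings."""
    return sum(weights[dim] * rating for dim, rating in ratings.items())

for name in sorted(options, key=lambda n: total_score(options[n]), reverse=True):
    print(f"{name}: {total_score(options[name]):.2f}")
```

The point isn't the particular numbers; it's that re-running with different weights makes it very obvious when you're nudging the model toward an answer you already wanted.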

My decision models consistently rank university teaching quite high. It generally only gets beaten by freelance writing—which means that maybe I should give freelance writing another try after all.

And yet, my actual experience at Edinburgh was miserable.

What went wrong?

Well, first of all, I should acknowledge that when I separate out the job “university professor” into teaching and research as separate jobs in my decision model, and include all that goes into both jobs—not just the actual teaching, but the grading and administrative tasks; not just doing the research, but also trying to fund and publish it—they both drop lower on the list, and research drops down a lot.

Also, I would rate them both even lower now, having more direct experience of just how awful the exam-grading, grant-writing and journal-submitting can be.

Designing and then grading an exam was tremendously stressful: I knew that many of my students’ futures rested on how they did on exams like this (especially in the UK system, where exams are absurdly overweighted! In most of my classes, the final exam was at least 60% of the grade!). I struggled mightily to make the exam as fair as I could, all the while knowing that it would never really feel fair and I didn’t even have the time to make it the best it could be. You really can’t assess how well someone understands an entire subject in a multiple-choice exam designed to take 90 minutes. It’s impossible.

The worst part of research for me was the rejection.

I mentioned in a previous post how I am hypersensitive to rejection; applying for grants and submitting to journals brought the worst feelings of rejection I’ve felt in any job. It felt like they were evaluating not only the value of my work, but my worth as a scientist. Failure felt like being told that my entire career was a waste of time.

It was even worse than the feeling of rejection in freelance writing (which is one of the few things that my model tells me is bad about freelancing as a career for me, along with relatively low and uncertain income). I think the difference is that a book publisher is saying “We don’t think we can sell it”—‘we’ and ‘sell’ being vital. They aren’t saying “This is a bad book; it shouldn’t exist; writing it was a waste of time.” They’re just saying “It’s not a subgenre we generally work with,” or “We don’t think it’s what the market wants right now,” or even “I personally don’t care for it.” They acknowledge their own subjective perspective and the fact that it’s ultimately dependent on forecasting the whims of an extremely fickle marketplace. They aren’t really judging my book, and they certainly aren’t judging me.

But in research publishing, it was different. Yes, it’s all in very polite language, thoroughly spiced with sophisticated jargon (though some reviewers are more tactful than others). But when your grant application gets rejected by a funding agency or your paper gets rejected by a journal, the sense is basically “This project is not worth doing”; “This isn’t good science”; “It was/would be a waste of time and money”; “This (theory or experiment you’ve spent years working on) isn’t interesting or important.” Nobody ever came out and said those things, nor did they come out and say “You’re a bad economist and you should feel bad”; but honestly a couple of the reviews did kinda read to me like they wanted to say that. They thought that the whole idea that human beings care about each other is fundamentally stupid and naive and not worth talking about, much less running experiments on.

It isn’t so much that I believed them, that I actually came to think my work was bad science. I did make some mistakes along the way (but nothing fatal; I’ve seen far worse errors by Nobel Laureates). I didn’t have very large samples (because every person I add to the experiment is money I have to pay, and therefore funding I have to come up with). But overall I do believe that my work is sufficiently rigorous to be worth publishing in scientific journals.

It’s more that I came to feel that my work is considered bad, that the kind of work I wanted to do would forever be an uphill battle against an implacable enemy. I already felt exhausted by that battle, and it had only barely begun. I had thought that behavioral economics was a more successful paradigm by now, that it had largely displaced the neoclassical assumptions that came before it; but I was wrong. Except specifically in journals dedicated to experimental and behavioral economics (of which prestigious journals are few—I quickly exhausted them), it really felt like a lot of the feedback I was getting amounted to “I refuse to believe your paradigm.”

Part of the problem, also, was that there simply aren’t that many prestigious journals, and they don’t take that many papers. The top 5 journals—which, for whatever reason, command far more respect than any other journals among economists—each accept only about 5-10% of their submissions. Surely more than that are worth publishing; and, to be fair, much of what they reject probably gets published later somewhere else. But it makes a shockingly large difference in your career how many “top 5s” you have; other publications almost don’t matter at all. So once you don’t get into any of those (which of course I didn’t), should you even bother trying to publish somewhere else?

And what else almost doesn’t matter? Your teaching. As long as you show up to class and grade your exams on time (and don’t, like, break the law or something), research universities basically don’t seem to care how good a teacher you are. That was certainly my experience at Edinburgh. (Honestly even their responses to professors sexually abusing their students are pretty unimpressive.)

Some of the other faculty cared, I could tell; there were even some attempts to build a community of colleagues to support each other in improving teaching. But the administration seemed almost actively opposed to it; they didn’t offer any funding to support the program—they wouldn’t even buy us pizza at the meetings, the sort of thing I had as an undergrad for my activist groups—and they wanted to take the time we spent in such pedagogy meetings out of our grading time (probably because if they didn’t, they’d either have to give us less grading, or some of us would be over our allotted hours and they’d owe us compensation).

And honestly, it is teaching that I consider the higher calling.

The difference between 0 people knowing something and 1 knowing it is called research; the difference between 1 person knowing it and 8 billion knowing it is called education.

Yes, of course, research is important. But if all the research suddenly stopped, our civilization would stagnate at its current level of technology, but otherwise continue unimpaired. (Frankly it might spare us the cyberpunk dystopia/AI apocalypse we seem to be hurtling rapidly toward.) Whereas if all education suddenly stopped, our civilization would slowly decline until it ultimately collapsed into the Stone Age. (Actually it might even be worse than that; even Stone Age cultures pass on knowledge to their children, just not through formal teaching. If you include all the ways parents teach their children, it may be literally true that humans cannot survive without education.)

Yet research universities seem to get all of their prestige from their research, not their teaching, and prestige is the thing they absolutely value above all else, so they devote the vast majority of their energy toward valuing and supporting research rather than teaching. In many ways, the administrators seem to see teaching as an obligation, as something they have to do in order to make money that they can spend on what they really care about, which is research.

As such, they are always making classes bigger and bigger, trying to squeeze out more tuition dollars (well, in this case, pounds) from the same number of faculty contact hours. It becomes impossible to get to know all of your students, much less give them all sufficient individual attention. At Edinburgh they even had the gall to refer to their seminars as “tutorials” when they typically had 20+ students. (That is not tutoring!) And then of course there were the lectures, which often had over 200 students.

I suppose it could be worse: It could be athletics they spend all their money on, like most Big Ten universities. (The University of Michigan actually seems to strike a pretty good balance: they are certainly not hurting for athletic funding, but they also devote sizeable chunks of their budget to research, medicine, and yes, even teaching. And unlike virtually all other varsity athletic programs, University of Michigan athletics turns a profit!)

If all the varsity athletics in the world suddenly disappeared… I’m not convinced we’d be any worse off, actually. We’d lose a source of entertainment, but it could probably be easily replaced by, say, Netflix. And universities could re-focus their efforts on academics, instead of acting like a free training and selection system for the pro leagues. The University of California, Irvine certainly seemed no worse off for its lack of varsity football. (Though I admit it felt a bit strange, even to a consummate nerd like me, to have a varsity League of Legends team.)

They keep making the experience of teaching worse and worse, even as they cut faculty salaries and make our jobs more and more precarious.

That might be what really made me most miserable, knowing how expendable I was to the university. If I hadn’t quit when I did, I would have been out after another semester anyway, and going through this same process a bit later. It wasn’t even that I was denied tenure; it was never on the table in the first place. And perhaps because they knew I wouldn’t stay anyway, they didn’t invest anything in mentoring or supporting me. Ostensibly I was supposed to be assigned a faculty mentor immediately; I know the first semester was crazy because of COVID, but after two and a half years I still didn’t have one. (I had a small research budget, which they reduced in the second year; that was about all the support I got. I used it—once.)

So if I do continue on that “academia” road, I’m going to need to do a lot of things differently. I’m not going to put up with a lot of things that I did. I’ll demand a long-term position—if not tenure-track, at least renewable indefinitely, like a lecturer position (as it is in the US, where the tenure-track position is called “assistant professor” and “lecturer” is permanent but not tenured; in the UK, “lecturers” are tenure-track—except at Oxford, and as of 2021, Cambridge—just to confuse you). Above all, I’ll only be applying to schools that actually have some track record for valuing teaching and supporting their faculty.

And if I can’t find any such positions? Then I just won’t apply at all. I’m not going in with the “I’ll take what I can get” mentality I had last time. Our household finances are stable enough that I can afford to wait awhile.

But maybe I won’t even do that. Maybe I’ll take a different path entirely.

For now, I just don’t know.

Empathy is not enough

Jan 14 JDN 2460325

A review of Against Empathy by Paul Bloom

The title Against Empathy is clearly intentionally provocative, to the point of being obnoxious: How can you be against empathy? But the book really does largely hew toward the conclusion that empathy, far from being an unalloyed good as we may imagine it to be, is overall harmful and detrimental to society.

Bloom defines empathy narrowly, but sensibly, as the capacity to feel other people’s emotions automatically—to feel hurt when you see someone hurt, afraid when you see someone afraid. He argues surprisingly well that this capacity isn’t really such a great thing after all, because it often makes us help small numbers of people who are like us rather than large numbers of people who are different from us.

But something about the book rubs me the wrong way all throughout, and I think I finally put my finger on it:

If empathy is bad… compared to what?

Compared to some theoretical ideal of perfect compassion where we love all sentient beings in the universe equally and act only according to maxims that would yield the greatest benefit for all, okay, maybe empathy is bad.

But that is an impossible ideal. No human being has ever approached it. Even our greatest humanitarians are not like that.

Indeed, one thing has clearly characterized the very best human beings, and that is empathy. Every one of them has been highly empathetic.

The case for empathy gets even stronger if you consider the other extreme: What are human beings like when they lack empathy? Why, those people are psychopaths, and they are responsible for the majority of violent crimes and nearly all the most terrible atrocities.

Empirically, if you look at humans as we actually are, the relationship really does seem to be monotonic: More empathy makes people behave better. Less empathy makes them behave worse.

Yet Bloom does have a point, nevertheless.

There are real-world cases where empathy seems to have done more harm than good.

I think his best examples come from analysis of charitable donations. Most people barely give anything to charity, which we might think of as a lack of empathy. But a lot of people do give a great deal to charity—yet the charities they give to and the gifts they give are often woefully inefficient.

Let’s even set aside cases like the Salvation Army, where the charity is actively detrimental to society due to the distortions of ideology. The Salvation Army is in fact trying to do good—they’re just starting from a fundamentally evil outlook on the universe. (And if that sounds harsh to you? Take a look at what they say about people like me.)

No, let’s consider charities that are well-intentioned, and not blinded by fanatical ideology, who really are trying to work toward good things. Most of them are just… really bad at it.

The most cost-effective charities, like the ones GiveWell gives top ratings to, can save a life for about $3,000-5,000, or about $150 to $250 per QALY.

But a typical charity is far, far less efficient than that. It’s difficult to get good figures on it, but I think it would be generous to say that a typical charity is as efficient as the standard cost-effectiveness threshold used in US healthcare, which is $50,000 per QALY. That’s already two hundred times less efficient.

And many charities appear to be even below that, where their marginal dollars don’t really seem to have any appreciable benefit in terms of QALY. Maybe $1 million per QALY—spend enough, and they’d get a QALY eventually.

Other times, people give gifts to good charities, but the gifts they give are useless—the Red Cross is frequently inundated with clothing and toys that it has absolutely no use for. (Please, please, I implore you: Give them money. They can buy what they need. And they know what they need a lot better than you do.)

Why do people give to charities that don’t really seem to accomplish anything? Because they see ads that tug on their heartstrings, or are solicited directly by people on the street or by door-to-door canvassers. In other words, empathy.

Why do people give clothing and toys to the Red Cross after a disaster, instead of just writing a check or sending a credit card payment? Because they can see those crying faces in their minds, and they know that if they were a crying child, they’d want a toy to comfort them, not some boring, useless check. In other words, empathy.

Empathy is what you’re feeling when you see those Sarah McLachlan ads with sad puppies in them, designed to make you want to give money to the ASPCA.

Now, I’m not saying you shouldn’t give to the ASPCA. Actually animal welfare advocacy is one of those issues where cost-effectiveness is really hard to assess—like political donations, and for much the same reason. If we actually managed to tilt policy so that factory farming were banned, the direct impact on billions of animals spared that suffering—while indubitably enormous—might actually be less important, morally, than the impact on public health and climate change from people eating less meat. I don’t know what multiplier to apply to a cow’s suffering to convert her QALY into mine. But I do know that the world currently eats far too much meat, and it’s cooking the planet along with the cows. Meat accounts for roughly 60% of food-related greenhouse gases, and food production as a whole accounts for roughly a third of all greenhouse gases.

But I am saying that if you give to the ASPCA, it should be because you support their advocacy against factory farming—not because you saw pictures of very sad puppies.

And empathy, unfortunately, doesn’t really work that way.

When you get right down to it, what Paul Bloom is really opposing is scope neglect, which is something I’ve written about before.

We just aren’t capable of genuinely feeling the pain of a million people, or a thousand, or probably even a hundred. (Maybe we can do a hundred; that’s under our Dunbar number, after all.) So when confronted with global problems that affect millions of people, our empathy system just kind of overloads and shuts down.

ERROR: OVERFLOW IN EMPATHY SYSTEM. ABORT, RETRY, IGNORE?

But when confronted with one suffering person—or five, or ten, or twenty—we can actually feel empathy for them. We can look at their crying face and we may share their tears.

Charities know this; that’s why Sarah McLachlan does those ASPCA ads. And if that makes people donate to good causes, that’s a good thing. (If it makes them donate to the Salvation Army, that’s a different story.)

The problem is, it really doesn’t tell us what causes are best to donate to. Almost any cause is going to alleviate some suffering of someone, somewhere; but there’s an enormous difference between $250 per QALY, $50,000 per QALY, and $1 million per QALY. Your $50 donation would add either two and a half months, eight hours, or just over 26 minutes of joy to someone else’s life, respectively. (In the latter case, it may literally be better—morally—for you to go out to lunch or buy a video game.)
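For anyone who wants to check that arithmetic, here is the conversion spelled out; the donation size and the three cost-per-QALY figures are simply the ones quoted above:

```python
# Convert a donation into the quality-adjusted life time it buys at
# different cost-effectiveness levels (figures quoted in the text above).

donation = 50  # dollars

cost_per_qaly = {
    "top-rated charity": 250,
    "typical charity (US healthcare threshold)": 50_000,
    "very inefficient charity": 1_000_000,
}

HOURS_PER_YEAR = 365.25 * 24  # one QALY is one quality-adjusted year

for label, cost in cost_per_qaly.items():
    qalys = donation / cost
    hours = qalys * HOURS_PER_YEAR
    print(f"{label}: {qalys:.5f} QALY, about {hours:,.1f} hours")
```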

To really know the best places to give to, you simply can’t rely on your feelings of empathy toward the victims. You need to do research—you need to do math. (Or someone does, anyway; you can also trust GiveWell to do it for you.)

Paul Bloom is right about this. Empathy doesn’t solve this problem. Empathy is not enough.

But where I think he loses me is in suggesting that we don’t need empathy at all—that we could somehow simply dispense with it. He proposes to replace it with an even-handed, universal-minded utilitarian compassion, a caring for all beings in the universe that values all their interests evenly.

That sounds awfully appealing—other than the fact that it’s obviously impossible.

Maybe it’s something we can all aspire to. Maybe it’s something we as a civilization can someday change ourselves to become capable of feeling, in some distant transhuman future. Maybe, sometimes, at our very best moments, we can even approximate it.

But as a realistic guide for how most people should live their lives? It’s a non-starter.

In the real world, people with little or no empathy are terrible. They don’t replace it with compassion; they replace it with selfishness, greed, and impulsivity.

Indeed, in the real world, empathy and compassion seem to go hand-in-hand: The greatest humanitarians do seem like they better approximate that universal caring (though of course they never truly achieve it). But they are also invariably people of extremely high empathy.

And so, Dr. Bloom, I offer you a new title, perhaps not as catchy or striking—perhaps it would even have sold fewer books. But I think it captures the correct part of your thesis much better:

Empathy is not enough.

Depression and the War on Drugs

Jan 7 JDN 2460318

There exists, right now, an extremely powerful antidepressant which is extremely cheap and has minimal side effects.

It’s so safe that it has no known lethal dose, and—unlike SSRIs—it is not known to trigger suicide. It is shockingly effective: it works in a matter of hours—not weeks like a typical SSRI—and even a single moderate dose can have benefits lasting months. It isn’t patented, because it comes from a natural source. That natural source is so easy to grow, you can do it by yourself at home for less than $100.

Why in the world aren’t we all using it?

I’ll tell you why: This wonder drug is called psilocybin. It is a Schedule I controlled substance, which means that simply possessing it is a federal crime in the United States. Carrying it across the border is a felony.

It is also illegal in most other countries, including the UK, Australia, Belgium, Finland, Denmark, Sweden, Norway (#ScandinaviaIsNotAlwaysBetter), France, Germany, Hungary, Ireland, Japan, the list goes on….

Actually, it’s faster to list the places it’s not illegal: Austria, the Bahamas, Brazil, the British Virgin Islands, Jamaica, Nepal, the Netherlands, and Samoa. That’s it for true legalization, though it’s also decriminalized or unenforced in some other countries.

The best antidepressant we know of lies unused, because we made it illegal.

Similar stories hold for other amazingly beneficial drugs:

LSD also has powerful antidepressant effects with minimal side effects, and is likewise so ludicrously safe that we are not aware of a single fatal overdose ever happening in any human being. And it’s also Schedule I banned.

Ayahuasca is the same story: A great antidepressant, very safe, minimal side effects—and highly illegal.

There is also no evidence that psilocybin, LSD, or ayahuasca are addictive; and far from promoting the sort of violent, anti-social behavior that alcohol does, they actually seem to make people more compassionate.

This is pure speculation, but I think we should try psilocybin as a possible treatment for psychopathy. And if that works, maybe having a psilocybin trip should be a prerequisite for eligibility for any major elected office. (I often find it a bit silly how the biggest fans of psychedelics talk about the drugs radically changing the world, bringing peace and prosperity through a shift in consciousness; but if psilocybin could make all the world’s leaders more compassionate, that might actually have that kind of impact.)

Ketamine and MDMA at least do have some overdose risk and major side effects, and are genuinely addictive—but it’s not really clear that they’re any worse than SSRIs, and they certainly aren’t any worse than alcohol.

Alcohol may actually be the most widely-used antidepressant, and yet it is clearly utterly ineffective; in fact, alcoholics consistently show depression increasing over time. Alcohol has a fatal dose so low that accidental fatal overdoses are common; it is also implicated in violent behavior, including half of all rapes—and in the majority of those rape cases, all consumption of alcohol was voluntary.

Yet alcohol can be bought over-the-counter at any grocery store.

The good news is that this is starting to change.

Recent changes in the law have allowed the use of psychedelic drugs in medical research—which is part of how we now know just how shockingly effective they are at treating depression.

Some jurisdictions in the US—notably, the whole state of Colorado—have decriminalized psilocybin, and Oregon has made it outright legal. Yet even this situation is precarious; just as has occurred with cannabis legalization, it’s still difficult to run a business selling psilocybin even in Oregon, because banks don’t want to deal with a business that sells something which is federally illegal.

Fortunately, this, too, is starting to change: A bill advanced in the US Senate a few months ago that would legalize banking services for cannabis businesses in states where it is legal, and President Biden recently pardoned all prior federal convictions for simple cannabis possession. Now, why can’t we just make cannabis legal!?

The War on Drugs hasn’t just been a disaster for all the thousands of people needlessly imprisoned.

(Of course they had it the worst, and we should set them all free immediately—preferably with some form of restitution.)

The War on Drugs has also been a disaster for all the people who couldn’t get the treatment they needed, because we made that medicine illegal.

And for what? What are we even trying to accomplish here?

Prohibition was a failure—and a disaster of its own—but I can at least understand why it was done. When a drug kills nearly a hundred thousand people a year and is implicated in half of all rapes, that seems like a pretty damn good reason to want that drug gone. The question there becomes how we can best reduce alcohol use without the awful consequences that Prohibition caused—and so far, really high taxes seem to be the best method, and they absolutely do reduce crime.

But where was the disaster caused by cannabis, psilocybin, or ayahuasca? These drugs are made by plants and fungi; like alcohol, they have been used by humans for thousands of years. Where are the overdoses? Where is the crime? Psychedelics have none of these problems.

Honestly, it’s kind of amazing that these drugs aren’t more associated with organized crime than they are.

When alcohol was banned, it seemed to immediately trigger a huge expansion of the Mafia, as only they were willing and able to provide for the enormous demand of this highly addictive neurotoxin. But psilocybin has been illegal for decades, and yet there’s no sign of organized crime having anything to do with it. In fact, psilocybin use is associated with lower rates of arrest—which actually makes sense to me, because like I said, it makes you more compassionate.

That’s how idiotic and ridiculous our drug laws are:

We made a drug that causes crime legal, and we made a drug that prevents crime illegal.

Note that this also destroys any conspiracy theory suggesting that the government wants to keep us all docile and obedient: psilocybin is way better at making people docile than alcohol. No, this isn’t the product of some evil conspiracy.

Hanlon’s Razor: Never attribute to malice what can be adequately explained by stupidity.

This isn’t malice; it’s just massive, global, utterly catastrophic stupidity.

I might attribute this to the Puritanical American attitude toward pleasure (Pleasure is suspect, pleasure is dangerous), but I don’t think of Sweden as particularly Puritanical, and they also ban most psychedelics. I guess the most libertine countries—the Netherlands, Brazil—seem to be the ones that have legalized them; but it doesn’t really seem like one should have to be that libertine to want the world’s cheapest, safest, most effective antidepressants to be widely available. I have very mixed feelings about Amsterdam’s (in)famous red light district, but absolutely no hesitation in supporting their legalization of psilocybin truffles.

Honestly, I think patriarchy might be part of this. Alcohol is seen as a very masculine drug—maybe because it can make you angry and violent. Psychedelics seem more feminine; they make you sensitive, compassionate and loving.

Even the way that psychedelics make you feel more connected with your body is sort of feminine; we seem to have a common notion that men are their minds, but women are their bodies.

Here, try it. Someone has said, “I feel really insecure about my body.” Quick: What is that person’s gender? Now suppose someone has said, “I’m very proud of my mind.” What is that person’s gender?

(No, it’s not just because the former is insecure and the latter is proud—though we do also gender those emotions, and there’s statistical evidence that men are generally more confident, though that’s never been my experience of manhood. Try it with the emotions swapped and it still works, just not quite as well.)

I’m not suggesting that this makes sense. Both men and women are precisely as physical and mental as each other—we are all both, and that is a deep truth about our nature. But I know that my mind makes an automatic association between mind/body and male/female, and I suspect yours does as well, because we came from similar cultural norms. (This goes at least back to Classical Rome, where the animus, the rational soul, was masculine, while the anima, the emotional one, was feminine.)

That is, it may be that we banned psychedelics because they were girly. The men in charge were worried about us becoming soft and weak. The drug that’s tied to thousands of rapes and car collisions is manly. The drug that brings you peace, joy, and compassion is not.

Think about the things that the mainstream objected to about Hippies: Men with long hair and makeup, women wearing pants, bright colors, flowery patterns, kindness and peacemongering—all threats to the patriarchal order.

Whatever it is, we need to stop. Millions of people are suffering, and we could so easily help them; all we need to do is stop locking people up for taking medicine.

A new direction

Dec 31 JDN 2460311

CW: Spiders [it’ll make sense in context]

My time at the University of Edinburgh is officially over. For me it was a surprisingly gradual transition: Because of the holiday break, I had already turned in my laptop and ID badge over a week ago, and because of my medical leave, I hadn’t really done much actual work for quite some time. But this is still a momentous final deadline; it’s really, truly, finally over.

I now know with some certainty that leaving Edinburgh early was the right choice, and if anything I should have left sooner or never taken the job in the first place. (It seems I am like Randall Munroe after all.) But what I don’t know is where to go next.

We won’t be starving or homeless. My husband still has his freelance work, and my mother has graciously offered to let us stay in her spare room for awhile. We have some savings to draw upon. Our income will be low enough that payments on my student loans will be frozen. We’ll be able to get by, even if I can’t find work for awhile. But I certainly don’t want to live like that forever.

I’ve been trying to come up with ideas for new career paths, including ones I would never have considered before. Right now I am considering:

- Going back into academia (but being much choosier about what sort of school and position);
- Government, or an international aid agency;
- Re-training to work in software development;
- Doing my own freelance writing (then I must decide: fiction or nonfiction? Commercial publishing, or self-published?);
- Publishing our own tabletop games (we have one almost ready for crowdfunding, and another that I could probably finish relatively quickly);
- Opening a game shop or escape room;
- Or even just being a stay-at-home parent (surely the hardest to achieve financially; and while on the one hand it seems like an awful waste of a PhD, on the other hand it would really prove once and for all that I do understand the sunk cost fallacy, and therefore be a sign of my ultimate devotion to behavioral economics).

The one mainstream option for an econ PhD that I’m not seriously considering is the private sector: If academia was this soul-sucking, I’m not sure I could survive corporate America.

Maybe none of these are yet the right answer. Or maybe some combination is.

What I’m really feeling right now is a deep uncertainty.

Also, fear. Fear of the unknown. Fear of failure. Fear of rejection. Almost any path I could take involves rejection—though of different kinds, and surely some more than others.

I’ve always been deeply and intensely affected by rejection. Some of it comes from formative experiences I had as a child and a teenager; some of it may simply be innate, the rejection-sensitive dysphoria that often comes with ADHD (which I now believe I have, perhaps mildly). (Come to think of it, even those formative experiences may have hit so hard because of my innate predisposition.)

But wherever it comes from, my intense fear of rejection is probably my greatest career obstacle. In today’s economy, just applying for a job—any job—requires bearing dozens of rejections. Openings get hundreds of applicants, so even being fully qualified is no guarantee of anything.

This makes it far more debilitating than most other kinds of irrational fear. I am also hematophobic, but that doesn’t really get in my way all that much; in the normal course of life, one generally tries to avoid bleeding anyway. (Now that MSM can donate blood, it does prevent me from doing that; and I do feel a little bad about that, since there have been blood shortages recently.)

But rejection phobia basically feels like this:

Imagine you are severely arachnophobic, just absolutely terrified of spiders. You are afraid to touch them, afraid to look at them, afraid to be near them, afraid to even think about them too much. (Given how common it is, you may not even have to imagine.)

Now, imagine (perhaps not too vividly, if you are genuinely arachnophobic!) that every job, every job, in every industry, regardless of what skills are required or what the work entails, requires you to first walk through a long hallway which is covered from floor to ceiling in live spiders. This is simply a condition of employment in our society: Everyone must be able to walk through the hallway full of spiders. Some jobs have longer hallways than others, some have more or less aggressive spiders, and almost none of the spiders are genuinely dangerous; but every job, everywhere, requires passing through a hallway of spiders.

That’s basically how I feel right now.

Freelance writing is the most obvious example—we could say this is an especially long hallway with especially large and aggressive spiders. To succeed as a freelance writer requires continually submitting work you have put your heart and soul into, and receiving in response curtly-worded form rejection letters over and over and over, every single time. And even once your work is successful, there will always be critics to deal with.

Yet even a more conventional job, say in academia or government, requires submitting dozens of applications and getting rejected dozens of times. Sometimes it’s also a curt form letter; other times, you make it all the way through multiple rounds of in-depth interviews and still get turned down. The latter honestly stings a lot more than the former, even though it’s in some sense a sign of your competence: they wouldn’t have taken you that far if you were unqualified; they just think they found someone better. (Did they actually? Who knows?) But investing all that effort for zero reward feels devastating.

The other extreme might be becoming a stay-at-home parent. There aren’t as many spiders in this hallway. While biological children aren’t really an option for us, foster agencies really can’t afford to be choosy. Since we don’t have any obvious major red flags, we will probably be able to adopt if we choose to—there will be bureaucratic red tape, no doubt, but not repeated rejections. But there is one very big rejection—one single, genuinely dangerous spider that lurks in a dark corner of the hallway: What if I am rejected by the child? What if they don’t want me as their parent?

Another alternative is starting a business—such as selling our own games, or opening an escape room. Even self-publishing has more of this character than traditional freelance writing. The only direct, explicit sort of rejection we’d have to worry about there is small business loans; and actually with my PhD and our good credit, we could reasonably expect to get accepted sooner or later. But there is a subtler kind of rejection: What if the market doesn’t want us? What if the sort of games or books (or escape experiences, or whatever) we have to offer just aren’t what the world seems to want? Most startup businesses fail quickly; why should ours be any different? (I wonder if I’d be able to get a small business loan on the grounds that I forecasted only a 50% chance of failing in the first year, instead of the baseline 80%. Somehow, I suspect not.)

I keep searching for a career option with no threat of rejection, and it just… doesn’t seem to exist. The best I can come up with is going off the grid and living as hermits in the woods somewhere. (This sounds pretty miserable for totally different reasons—as well as being an awful, frankly unconscionable waste of my talents.) As long as I continue to live within human society and try to contribute to the world, rejection will rear its ugly head.

Ultimately, I think my only real option is to find a way to cope with rejection—or certain forms of rejection. The hallways full of spiders aren’t going away. I have to find a way to walk through them.

Compassion and the cosmos

Dec 24 JDN 2460304

When this post goes live, it will be Christmas Eve, one of the most important holidays around the world.

Ostensibly it celebrates the birth of Jesus, but it doesn’t really.

For one thing, Jesus almost certainly wasn’t born in December. The date of Christmas was largely set by the Council of Tours in AD 567; it was set to coincide with existing celebrations—not only other Christian celebrations such as the Feast of the Epiphany, but also many non-Christian celebrations such as Yuletide, Saturnalia, and others around the Winter Solstice. (People today often say “Yuletide” when they actually mean Christmas, because the syncretization was so absolute.)

For another, an awful lot of the people celebrating Christmas don’t particularly care about Jesus. Countries like Sweden, Belgium, the UK, Australia, Norway, and Denmark are majority atheist but still very serious about Christmas. Maybe we should try to secularize and ecumenize the celebration and call it Solstice or something, but that’s a tall order. For now, it’s Christmas.

Compassion, love, and generosity are central themes of Christmas—and, by all accounts, Jesus did exemplify those traits. Christianity has a very complicated history, much of it quite dark; but this part of it at least seems worth preserving and even cherishing.

It is truly remarkable that we have compassion at all.

Most of this universe has no compassion. Many would like to believe otherwise, and they invent gods and other “higher beings” or attribute some sort of benevolent “universal consciousness” to the cosmos. (Really, most people copy the prior inventions of others.)

This is all wrong.

The universe is mostly empty, and what is here is mostly pitilessly indifferent.

The vast majority of the universe is comprised of cold, dark, empty space—or perhaps of “dark energy”, a phenomenon we really don’t understand at all, which many physicists believe is actually a shockingly powerful form of energy contained within empty space.

Most of the rest is made up of “dark matter”, a substance we still don’t really understand either, but believe to be basically a dense sea of particles that have mass but not much else, which cluster around other mass by gravity but otherwise rarely interact with other matter or even with each other.

Most of the “ordinary matter”, or more properly baryonic matter (which we think of as ordinary, but which is actually by far the minority), is contained within stars and nebulae. It is mostly hydrogen and helium. Some of the other lighter elements—like lithium, sodium, carbon, oxygen, nitrogen, and all the way up to iron—can be made within ordinary stars, but still form a tiny fraction of the mass of the universe. Anything heavier than that—silver, gold, uranium—can only be made in exotic, catastrophic cosmic events, mainly supernovae, and as a result these elements are even rarer still.

Most of the universe is mind-bendingly cold: about 3 Kelvin, just barely above absolute zero.

Most of the baryonic matter is mind-bendingly hot, contained within stars that burn with nuclear fires at thousands or even millions of Kelvin.

From a cosmic perspective, we are bizarre.

We live at a weird intermediate temperature and pressure, where matter can take on such exotic states as liquid and solid, rather than the far more common gas and plasma. We do contain a lot of hydrogen—that, at least, is normal by the standards of baryonic matter. But then we’re also made up of oxygen, carbon, nitrogen, and even little bits of all sorts of other elements that can only be made in supernovae? What kind of nonsense lifeform depends upon something as exotic as iodine to survive?

Most of the universe does not care at all about you.

Most of the universe does not care about anything.

Stars don’t burn because they want to. They burn because that’s what happens when hydrogen slams into other hydrogen hard enough.

Planets don’t orbit because they want to. They orbit because if they didn’t, they’d fly away or crash into their suns—and those that did are long gone now.

Even most living things, which are already nearly as bizarre as we are, don’t actually care much.

Maybe there is a sense in which a C. elegans or an oak tree or even a cyanobacterium wants to live. It certainly seems to try to live; it has behaviors that seem purposeful, which evolved to promote its ability to survive and produce offspring. Rocks don’t behave. Stars don’t seek. But living things—even tiny, microscopic living things—do.

But we are something very special indeed.

We are animals. Lifeforms with complex, integrated nervous systems—in a word, brains—that allow us to not simply live, but to feel. To hunger. To fear. To think. To choose.

Animals—and to the best of our knowledge, only animals, though I’m having some doubts about AI lately—are capable of making choices and experiencing pleasure and pain, and thereby becoming something more than living beings: moral beings.

Because we alone can choose, we alone have the duty to choose rightly.

Because we alone can be hurt, we alone have the right to demand not to be.

Humans are even very special among animals. We are not just animals but chordates; not just chordates but mammals; not just mammals but primates. And even then, not just primates. We’re special even by those very high standards.

When you count up all the ways that we are strange compared to the rest of the universe, it seems incredibly unlikely that beings like us would come into existence at all.

Yet here we are. And however improbable it may have been for us to emerge as intelligent beings, we had to do so in order to wonder how improbable it was—and so in some sense we shouldn’t be too surprised.

It is a mistake to say that we are “more evolved” than any other lifeform; turtles and cockroaches had just as much time to evolve as we did, and if anything their relative stasis for hundreds of millions of years suggests a more perfected design: “If it ain’t broke, don’t fix it.”

But we are different from other lifeforms in a very profound way. And I dare say, we are better.

All animals feel pleasure, pain and hunger. (Some believe that even some plants and microscopic lifeforms may too.) Pain when something damages you; hunger when you need something; pleasure when you get what you needed.

But somewhere along the way, new emotions were added: Fear. Lust. Anger. Sadness. Disgust. Pride. To the best of our knowledge, these are largely chordate emotions, often believed to have emerged around the same time as reptiles. (Does this mean that cephalopods never get angry? Or did they evolve anger independently? Surely worms don’t get angry, right? Our common ancestor with cephalopods was probably something like a worm, perhaps a nematode. Does C. elegans get angry?)

And then, much later, still newer emotions evolved. These ones seem to be largely limited to mammals. They emerged from the need for mothers to care for their few and helpless young. (Consider how a bear or a cat fiercely protects her babies from harm—versus how a turtle leaves her many, many offspring to fend for themselves.)

One emotion formed the core of this constellation:

Love.

Caring, trust, affection, and compassion—and also rejection, betrayal, hatred, and bigotry—all came from this one fundamental capacity to love. To care about the well-being of others as well as our own. To see our purpose in the world as extending beyond the borders of our own bodies.

This is what makes humans different, most of all. We are the beings most capable of love.

We are of course by no means perfect at it. Some would say that we are not even very good at loving.

Certainly there are some humans, such as psychopaths, who seem virtually incapable of love. But they are rare.

We often wish that we were better at love. We wish that there were more compassion in the world, and fear that humanity will destroy itself because we cannot find enough compassion to compensate for our increasing destructive power.

Yet if we are bad at love, compared to what?

Compared to the unthinking emptiness of space, the hellish nuclear fires of stars, or even the pitiless selfishness of a worm or a turtle, we are absolute paragons of love.

We somehow find a way to love millions of others who we have never even met—maybe just a tiny bit, and maybe even in a way that becomes harmful, as solidarity fades into nationalism fades into bigotry—but we do find a way. Through institutions of culture and government, we find a way to trust and cooperate on a scale that would be utterly unfathomable even to the most wise and open-minded bonobo, let alone a nematode.

There are no other experts on compassion here. It’s just us.

Maybe that’s why so many people long for the existence of gods. They feel as ignorant as children, and crave the knowledge and support of a wise adult. But there aren’t any. We’re the adults. For all the vast expanses of what we do not know, we actually know more than anyone else. And most of the universe doesn’t know a thing.

If we are not as good at loving as we’d like, the answer is for us to learn to get better at it.

And we know that we can get better at it, because we have. Humanity is more peaceful and cooperative now than we have ever been in our history. The process is slow, and sometimes there is backsliding, but overall, life is getting better for most people in most of the world most of the time.

As a species, as a civilization, we are slowly learning how to love ourselves, one another, and the rest of the world around us.

No one else will learn to love for us. We must do it ourselves.

But we can.

And I believe we will.

Lamentations of a temporary kludge

Dec 17 JDN 2460297

Most things in the universe are just that—things. They consist of inanimate matter, blindly following the trajectories the laws of physics have set them on. (Actually, most of the universe may not even be matter—at our current best guess, most of the universe is mysterious “dark matter” and even more mysterious “dark energy”).

Then there are the laws: The fundamental truths of physics and mathematics are omnipresent and eternal. They could even be called omniscient, in the sense that all knowledge which could ever be conveyed must itself be possible to encode in physics and mathematics. (Could, in some metaphysical sense, knowledge exist that cannot be conveyed this way? Perhaps, but if so, we’ll never know nor even be able to express it.)

The reason physics and mathematics cannot simply be called God is twofold: One, they have no minds of their own; they do not think. Two, they do not care. They have no capacity for concern whatsoever, no desires, no goals. Mathematics seeks neither your fealty nor your worship, and physics will as readily destroy you as reward you. If the eternal law is a god, it is a mindless, pitilessly indifferent god—a Blind Idiot God.

But we are something special, something in between. We are matter, yes; but we are also pattern. Indeed, what makes me me and makes you you has far more to do with the arrangement of trillions of parts than it does with any particular material. The atoms in your body are being continually replaced, and you barely notice. But should the pattern ever be erased, you would be no more.

In fact, we are not simply one pattern, but many. We are a kludge: Billions of years of random tinkering has assembled us from components that each emerged millions of years apart. We could move before we could see; we could see before we could think; we could think before we could speak. All this evolution was mind-bogglingly gradual: In most cases it would be impossible to tell the difference from one generation—or even one century—to the next. Yet as raindrops wear away mountains, one by one, we were wrought from mindless fragments of chemicals into beings of thought, feeling, reason—beings with hopes, fears, and dreams.

Much of what makes our lives difficult ultimately comes from these facts.

Our different parts were not designed to work together. Indeed, they were not really designed at all. Each component survived because it worked well enough to stay alive in the environment in which our ancestors lived. We often find ourselves in conflict with our own desires, in part because those desires evolved for very different environments than the ones we now find ourselves—and in part because there is no particular reason for evolution to avoid conflict, so long as survival is achieved.

As patterns, we can experience the law. We can write down equations that express small pieces of the fundamental truths that exist throughout the universe beyond space and time. From “2+2=4” to “Gμν + Λgμν = κTμν”, through mathematics, we glimpse eternity.

But as matter, we are doomed to suffer, degrade, and ultimately die. Our pattern cannot persist forever. Perhaps one day we will find a way to change this—and if that day comes, it will be a glorious day; I will make no excuses for the dragon. For now, at least, it is a truth that we must face: We, all we love, and all we build must one day perish.

That is, we are not simply a kludge; we are a temporary one. Sooner or later, our bodies will fail and our pattern will be erased. What we were made of may persist, but in a form that will no longer be us, and in time, may become indistinguishable from all the rest of the universe.

We are flawed, for the same reason that a crystal is flawed. A theoretical crystal can be flawless and perfect; but a real, physical one must exist in an actual world where it will suffer impurities and disturbances that keep it from ever truly achieving perfect unity and symmetry. We can imagine ourselves as perfect beings, but our reality will always fall short.

We lament that we are not perfect, eternal beings. Yet I am not sure it could have been any other way: Perhaps one must be a temporary kludge in order to be a being at all.

How do we stop overspending on healthcare?

Dec 10 JDN 2460290

I don’t think most Americans realize just how much more the US spends on healthcare than other countries. This is true not simply in absolute terms—of course it is, the US is rich and huge—but in relative terms: As a portion of GDP, our healthcare spending is a major outlier.

Here’s a really nice graph from Healthsystemtracker.org that illustrates this: Almost all other First World countries share a simple linear relationship between their per-capita GDP and their per-capita healthcare spending. But one of these things is not like the other ones….

The outlier in the other direction is Ireland, but that’s because their GDP is wildly inflated by Leprechaun Economics. (Notice that it looks like Ireland is by far the richest country in the sample! This is clearly not the case in reality.) With a corrected estimate of their true economic output, they are also quite close to the line.

Since US GDP per capita ($70,181) is in between that of Denmark ($64,898) and Norway ($80,496), both of which have very good healthcare systems (#ScandinaviaIsBetter), we would expect US spending on healthcare to similarly be in between. But while Denmark spends $6,384 per person per year on healthcare and Norway spends $7,065 per person per year, the US spends $12,914.

That is, the US spends nearly twice as much as it should on healthcare.

The absolute difference between what we should spend and what we actually spend is nearly $6,000 per person per year. Multiply that out by the 330 million people in the US, and…

The US overspends on healthcare by nearly $2 trillion per year.
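Here is the back-of-the-envelope version of that estimate, using the per-capita figures quoted above; interpolating linearly between Denmark and Norway is my own simplifying assumption, and it lands in the same ballpark as the numbers in the text:

```python
# Rough estimate of US healthcare overspending, using the per-capita
# figures quoted above. Interpolating linearly between Denmark and Norway
# (by GDP per capita) is a simplifying assumption for illustration.

gdp_per_capita = {"Denmark": 64_898, "US": 70_181, "Norway": 80_496}
health_per_capita = {"Denmark": 6_384, "Norway": 7_065, "US": 12_914}

# Where the US sits between Denmark and Norway, by GDP per capita
t = (gdp_per_capita["US"] - gdp_per_capita["Denmark"]) / (
    gdp_per_capita["Norway"] - gdp_per_capita["Denmark"]
)

expected_us = health_per_capita["Denmark"] + t * (
    health_per_capita["Norway"] - health_per_capita["Denmark"]
)

gap = health_per_capita["US"] - expected_us
population = 330_000_000

print(f"Expected US spending: about ${expected_us:,.0f} per person per year")
print(f"Gap: about ${gap:,.0f} per person, "
      f"roughly ${gap * population / 1e12:.1f} trillion per year")
```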

This might be worth it, if health in the US were dramatically better than health in other countries. (In that case I’d be saying that other countries spend too little.) But plainly it is not.

Probably the simplest and most comparable measure of health across countries is life expectancy. US life expectancy is 76 years, and has increased over time. But if you look at the list of countries by life expectancy, the US is not even in the top 50. Our life expectancy looks more like middle-income countries such as Algeria, Brazil, and China than it does like Norway or Sweden, who should be our economic peers.

There are of course many things that factor into life expectancy aside from healthcare: poverty and homicide are both much worse in the US than in Scandinavia. But then again, poverty is much worse in Algeria, and homicide is much worse in Brazil, and yet they somehow manage to nearly match the US in life expectancy (actually exceeding it in some recent years).

The US somehow manages to spend more on healthcare than everyone else, while getting outcomes that are worse than any country of comparable wealth—and even some that are far poorer.

This is largely why there is a so-called “entitlements crisis” (as many a libertarian think tank is fond of calling it). Since libertarians want to cut Social Security most of all, they like to lump it in with Medicare and Medicaid as an “entitlement” in “crisis”; but in fact we only need a few minor adjustments to the tax code to make sure that Social Security remains solvent for decades to come. It’s healthcare spending that’s out of control.

Here, take a look.

This is the ratio of Social Security spending to GDP from 1966 to the present. Notice how it has been mostly flat since the 1980s, other than a slight increase in the Great Recession.

This is the ratio of Medicare spending to GDP over the same period. Even ignoring the first few years while it was ramping up, it rose from about 0.6% in the 1970s to almost 4% in 2020, and only started to decline in the last few years (and it’s probably too early to say whether that will continue).

Medicaid has a similar pattern: It rose steadily from 0.2% in 1966 to over 3% today—and actually doesn’t even show any signs of leveling off.

If you look at Medicare and Medicaid together, they surged from just over 1% of GDP in 1970 to nearly 7% today.

Put another way: in 1982, Social Security was 4.8% of GDP while Medicare and Medicaid combined were 2.4% of GDP. Today, Social Security is 4.9% of GDP while Medicare and Medicaid are 6.8% of GDP.

Social Security spending barely changed at all; healthcare spending more than doubled. If we reduced our Medicare and Medicaid spending as a portion of GDP back to what it was in 1982, we would save 4.4% of GDP—that is, 4.4% of over $25 trillion per year, so $1.1 trillion per year.
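If you want to check that last figure, the arithmetic is a one-liner (using the GDP shares and the rough $25 trillion GDP figure quoted above):

```python
# Savings if Medicare + Medicaid returned to their 1982 share of GDP (figures as quoted above).
share_1982, share_now, us_gdp = 0.024, 0.068, 25e12
print(f"${(share_now - share_1982) * us_gdp / 1e12:.1f} trillion per year")  # $1.1 trillion
```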

Of course, we can’t simply do that; if we cut benefits that much, millions of people would suddenly lose access to healthcare they need.

The problem is not that we are spending frivolously, wasting the money on treatments no one needs. On the contrary, both Medicare and Medicaid carefully vet what medical services they are willing to cover, and if anything probably deny services more often than they should.

No, the problem runs deeper than this.

Healthcare is too expensive in the United States.

We simply pay more for just about everything, and especially for specialist doctors and hospitals.

In most other countries, doctors are paid like any other white-collar profession. They are certainly well off and comfortable, but few of them are truly rich. But in the US, we think of doctors as an upper-class profession, and expect them to be rich.

Median doctor salaries are $98,000 in France and $138,000 in the UK—but a whopping $316,000 in the US. Germany and Canada are somewhere in between, at $183,000 and $195,000 respectively.

Nurses, on the other hand, are paid only a little more in the US than in Western Europe. This means that the pay difference between doctors and nurses is much higher in the US than most other countries.

US prices on brand-name medication are frankly absurd. Our generic medications are typically cheaper than other countries, but our brand name pills often cost twice as much. I noticed this immediately on moving to the UK: I had always been getting generics before, because the brand name pills cost ten times as much, but when I moved here, suddenly I started getting all brand-name medications (at no cost to me), because the NHS was willing to buy the actual brand name products, and didn’t have to pay through the nose to do so.

But the really staggering differences are in hospitals.

Let’s compare the prices of a few inpatient procedures between the US and Switzerland. Switzerland, you should note, is a very rich country that spends a lot on healthcare and has nearly the world’s highest life expectancy. So it’s not like they are skimping on care. (Nor is it that prices in general are lower in Switzerland; on the contrary, they are generally higher.)

A coronary bypass in Switzerland costs about $33,000. In the US, it costs $76,000.

A spinal fusion in Switzerland costs about $21,000. In the US? $52,000.

Angioplasty in Switzerland: $9,000. In the US? $32,000.

Hip replacement: Switzerland? $16,000. The US? $28,000.

Knee replacement: Switzerland? $19,000. The US? $27,000.

Cholecystectomy: Switzerland? $8,000. The US? $16,000.

Appendectomy: Switzerland? $7,000. The US? $13,000.

Caesarian section: Switzerland? $8,000. The US? $11,000.

Hospital prices are even lower in Germany and Spain, whose life expectancies are not as high as Switzerland’s, but still higher than the US’s.

These prices are so much lower that if you’re considering surgery for a chronic condition in the US, my honest advice is: don’t get it here. Buy plane tickets to Europe and get the procedure done there. Spend an extra few thousand dollars on a nice European vacation and you’d still end up saving money. (Obviously if you need it urgently you have no choice but to use your nearest hospital.) I know that if I ever need a knee replacement (which, frankly, is likely, given my height), I’m gonna go to Spain and thereby save $22,000 relative to what it would cost in the US. That difference is the price of a car.

Combine this with the fact that the US is the only First World country without universal healthcare, and maybe you can see why we’re also the only country in the world where people are afraid to call an ambulance because they don’t think they can afford it. We are also the only country in the world with a medical debt crisis.

Where is all this extra money going?

Well, a lot of it goes to those doctors who are paid three times as much as in France. That, at least, seems defensible: If we want the best doctors in the world maybe we need to pay for them. (Then again, do we have the best doctors in the world? If so, why is our life expectancy so mediocre?)

But a significant portion is going to shareholders.

You probably already knew that there are pharmaceutical companies that rake in huge profits on those overpriced brand-name medications. The top five US pharma companies took in net earnings of nearly $82 billion last year. Pharmaceutical companies typically take in much higher profit margins than other companies: a typical corporation makes about 8% of its revenue in profit, while pharmaceutical companies average nearly 14%.

But you may not have realized that a surprisingly large proportion of hospitals are for-profit businesses, even though they make most of their revenue from Medicare and Medicaid.

I was surprised to find that the US is not unusual in that; in fact, for-profit hospitals exist in dozens of countries, and the fraction of US hospital capacity that is for-profit isn’t even particularly high by world standards.

What is especially large are the profits of US hospitals: Seven healthcare corporations in the US each posted net incomes over $1 billion in 2021.

Even nonprofit US hospitals are tremendously profitable—as oxymoronic as that may sound. In fact, mean operating profit is higher among nonprofit hospitals in the US than for-profit hospitals. So even the hospitals that aren’t supposed to be run for profit… pretty much still are. They get tax deductions as if they were charities—but they really don’t act like charities.

They are basically nonprofit in name only.

So fixing this will not be as simple as making all hospitals nonprofit. We must also restructure the institutions so that nonprofit hospitals are genuinely nonprofit, and no longer nonprofit in name only. It’s normal for a nonprofit to have a little bit of profit or loss—nobody can make everything always balance perfectly—but these hospitals have been raking in huge profits and keeping it all in cash instead of using it to reduce prices or improve services. In the study I linked above, those 2,219 “nonprofit” hospitals took in operating profits averaging $43 million each—for a total of $95 billion.

Between pharmaceutical companies and hospitals, that’s a total of over $170 billion per year just in profit. (That’s more than we spend on food stamps, even after the COVID-driven surge.) This is pure grift. It must be stopped.

But even that doesn’t account for most of the $2 trillion per year we’re overspending! So in the end, I must leave you with a question:

What is America doing wrong? Why is our healthcare so expensive?

The problem with “human capital”

Dec 3 JDN 2460282

By now, human capital is a standard part of the economic jargon lexicon. It has even begun to filter down into society at large. Business executives talk frequently about “investing in their employees”. Politicians describe their education policies as “investing in our children”.

The good news: This gives businesses a reason to train their employees, and governments a reason to support education.

The bad news: This is clearly the wrong reason, and it is inherently dehumanizing.

The notion of human capital means treating human beings as if they were a special case of machinery. It says that a business may own and value many forms of productive capital: Land, factories, vehicles, robots, patents, employees.

But wait: Employees?


Businesses don’t own their employees. They didn’t buy them. They can’t sell them. They couldn’t make more of them in another factory. They can’t recycle them when they are no longer profitable to maintain.

And the problem is precisely that they would if they could.

Indeed, they used to. Slavery pre-dates capitalism by millennia, but the two quite successfully coexisted for hundreds of years. From the dawn of civilization up until all too recently, people literally were capital assets—and we now remember it as one of the greatest horrors human beings have ever inflicted upon one another.

Nor is slavery truly defeated; it has merely been weakened and banished to the shadows. The percentage of the world’s population currently enslaved is as low as it has ever been, but there are still millions of people enslaved. In Mauritania, slavery wasn’t even illegal until 1981, and those laws weren’t strictly enforced until 2007. (By then, I had already graduated from high school!) One of the most shocking things about modern slavery is how cheaply human beings are willing to sell other human beings; I have bought sandwiches that cost more than some people have paid for other people.

The notion of “human capital” basically says that slavery is the correct attitude to have toward people. It says that we should value human beings for their usefulness, their productivity, their profitability.

Business executives are quite happy to see the world in that way. It makes the way they have spent their lives seem worthwhile—perhaps even best—while allowing them to turn a blind eye to the suffering they have neglected or even caused along the way.

I’m not saying that most economists believe in slavery; on the contrary, economists led the charge of abolitionism, and the reason we wear the phrase “the dismal science” like a badge is that the accusation was first leveled at us for our skepticism toward slavery.

Rather, I’m saying that jargon is not ethically neutral. The names we use for things have power; they affect how people view the world.

This is why I endeavor to always speak of net wealth rather than net worth: a billionaire is not worth more than other people. I’m not even sure you should speak of the net worth of Tesla Incorporated; perhaps it would be better to simply speak of its net asset value or market capitalization. But at least Tesla is something you can buy and sell (piece by piece). Elon Musk is not.

Likewise, I think we need a new term for the knowledge, skills, training, and expertise that human beings bring to their work. It is clearly extremely important; in fact in some sense it’s the most important economic asset, as it’s the only one that can substitute for literally all the others—and the one that others can least substitute for.

Human ingenuity can’t substitute for air, you say? Tell that to Buzz Aldrin—or the people who were once babies that breathed liquid for their first months of life. Yes, it’s true, you need something for human ingenuity to work with; but it turns out that with enough ingenuity, you may not need much, or even anything in particular. One day we may manufacture the air, water and food we need to live from pure energy—or we may embody our minds in machines that no longer need those things.

Indeed, it is the expansion of human know-how and technology that has been responsible for the vast majority of economic growth. We may work a little harder than many of our ancestors (depending on which ancestors you have in mind), but we accomplish with that work far more than they ever could have, because we know so many things they did not.

All that capital we have now is the work of that ingenuity: Machines, factories, vehicles—even land, if you consider all the ways that we have intentionally reshaped the landscape.

Perhaps, then, what we really need to do is invert the expression:

Humans are not machines. Machines are embodied ingenuity.

We should not think of human beings as capital. We should think of capital as the creation of human beings.

Marx described capital as “embodied labor”, but that’s really less accurate: What makes a robot a robot is much less about the hours spent building it, than the centuries of scientific advancement needed to understand how to make it in the first place. Indeed, if that robot is made by another robot, no human need ever have done any labor on it at all. And its value comes not from the work put into it, but the work that comes out of it.

Like so much of neoliberal ideology, the notion of human capital seems to treat profit and economic growth as inherent ends in themselves. Human beings only become valued insofar as we advance the will of the almighty dollar. We forget that the whole reason we should care about economic growth in the first place is that it benefits people. Money is the means, not the end; people are the end, not the means.

We should not think in terms of “investing in children”, as if they were an asset that was meant to yield a return. We should think of enriching our children—of building a better world for them to live in.

We should not speak of “investing in employees”, as though they were just another asset. We should instead respect employees and seek to treat them with fairness and justice.

That would still give us plenty of reason to support education and training. But it would also give us a much better outlook on the world and our place in it.

You are worth more than your money or your job.

The economy exists for people, not the reverse.

Don’t ever forget that.

The paradoxical obviousness of reason

Nov 26 JDN 2460275

The basic precepts of reason seem obvious and irrefutable:

Believe what’s most likely to be true.

Do what’s most likely to work.

How are you going to argue with that? In fact, it seems like by the time you try to argue at all, you’ve already agreed to it. These principles may be undeniable—literally impossible to coherently deny.

Even when expressed a little more precisely, the principles of reason still seem pretty obvious:

Beliefs should be consistent with each other and with observations.

The best action is the one with the best expected outcome.

And you really can get surprisingly far with this. A few more steps of mathematical precision, and you basically get the scientific method and utilitarianism:

Beliefs should be assigned consistent Bayesian probabilities according to the observed evidence.

The best action is the one that maximizes expected utility.
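For those who like to see it written out, here is one standard way to formalize those two statements (the notation is mine, not from the post): Bayes’ rule for updating beliefs on evidence, and expected-utility maximization for choosing actions.

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad a^{*} = \arg\max_{a} \sum_{s} P(s \mid a)\, U(s)$$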

Why, then, did it take humanity 99.9% of its existence to figure this out? Why did a species that has lived for 300,000 years only really start getting this right in about the past 300?

In fact, even today, while most people would at least assent to the basic notion of rationality, a large number don’t really follow it well, and only a small fraction really understand it at the deepest level.

Reason just seems obvious if you think about it. How do so many people miss it?

Because most people really don’t think about it that much.

In fact, I’m going to make a stronger claim:

Most people don’t think about anything that much.

Remember: To a first approximation, all human behavior is social norms.

Most human beings go through most of their lives behaving according to habits and social norms that they may not even be consciously aware of. They do things how they were always done; they believe what those around them believe. They adopt the religion of their parents, cheer for the sports team of their hometown, vote for the political party that is popular in their community. They may not even register these things as decisions at all—they simply did not consider the alternatives.

It’s not that they are incapable of thinking. When they really need to think hard about something, they can do it. But hard thinking is, well, hard. It’s difficult; it’s uncomfortable; for most people, it’s unfamiliar. So, they avoid it when they can. (There is even a kind of meta-rationality in that: Behavioral economists call it rational inattention.)

Few would willingly assent to the claim “I believe a lot of things that aren’t true.” People generally believe that their beliefs are true.

I doubt even most people in ancient history would agree with a statement like that. People who wholeheartedly believed in witches, werewolves, ghosts, and sympathetic magic still believed that their beliefs were true. People who thought that a giant beetle rolled the sun across the sky still thought they had a good handle on how the world works.

In fact, the few people I know who would agree with a statement like that are very honest, introspective Bayesians who recognize that the joint probability of all their beliefs being true must be quite small. Agreeing that some of your beliefs are false is a sign not that you are irrational, but that you are extremely rational. (In fact, I would agree with a statement like that: If I knew what I’m wrong about, I’d change my belief; but odds are, I’m wrong about something.)

But most people simply don’t even bother to evaluate the truth of many of their beliefs. If something is easy to check and directly affects their lives, they’ll probably try to gather evidence for it. But if it’s at all abstract or difficult to evaluate, they’ll more or less give up and believe whatever seems to be popular. (This explains Carlin’s dictum: “Tell people there’s an invisible man in the sky who created the universe, and the vast majority will believe you. Tell them the paint is wet, and they have to touch it to be sure.”)

This can also help to explain why so many people—mostly, but not exclusively right-wing people—complain that scientists are “elitist” while worshipping at the feet of clergy and business executives (the latter only—so far—figuratively, but the former all too literally).


What could be more elitist than clergy? They are basically claiming a special, unique connection to the ultimate truths of the universe that is only accessible to them. They claim to be ordained by the all-powerful ruler of the universe with the absolute right to adjudicate all truth and morality.

For goodness’ sake, one of the most popular and powerful ones literally claims to be infallible.

Meanwhile, basically all scientists agree that anyone who is reasonably smart and willing to work hard, either making their own observations, running their own experiments, or just reading the work of a lot of other people’s observations and experiments, can become a scientist. Some scientists are arrogant or condescending, but as an institution and culture, science is fundamentally egalitarian.

No, what people are objecting to among scientists is not elitism. Part of it may be the condescension of telling people: “This is obvious. If you thought about it, you would see that it has to be right.”

Yet the reason we keep saying that is… it is basically true. The precepts of rationality are obvious if you think about them, and they do lead quite directly to rejecting a lot of mainstream beliefs, particularly about religion. I’m sure it feels insulting to be told that you just aren’t thinking hard enough about important things… but maybe you aren’t?

We may need to find a gentler way to convey this message. There’s no point in saying it if nobody is going to listen. Yet that doesn’t make it any less true.

It’s not that quantum mechanics is intuitively obvious (to say “quite the opposite” would still be a terrible understatement), nor even that Darwinian natural selection or comparative advantage are obvious (though surely they’re less counter-intuitive than quantum mechanics). The conclusions of science are not obvious. They took centuries to figure out for good reason.

But the principles of science really are: Want to know if something is true? Look! Find out!

Yet historically this has not in fact been how human beings formed most of their beliefs. Indeed, I am often awed by just how bad most people throughout history have been at thinking empirically.

It’s not just that people throughout history believed in witches without ever having seen one, or knowing anyone who had seen one. (I’ve never seen a platypus or a quasar, and I still believe in them.) It’s that they were willing to execute people for being witches—killing people as punishment for deeds that not only they did not do, but could not possibly have done. Entire civilizations for millennia failed to realize that this was wrong.

Aristotle believed that men’s body temperature was hotter than women’s, and that this temperature difference determined the sex of children. That’s Aristotle, a certifiable genius living in the culture that pioneered rationalist philosophy. (Ironically—and by pure Stopped Clock Principle—he’d almost be right about certain species of reptiles.) It never occurred to him to even try to measure the body temperatures of lots of people and see if this was true. (Admittedly they didn’t have very good thermometers back then.)

Aristotle did get a lot of things right: In particular, his trichotomy of souls is basically accurate, with “vegetative soul” renamed “homeostatic metabolism and reproduction”, “sensitive soul” renamed “limbic system”, and “rational soul” renamed “prefrontal cortex”. The vegetative soul is what makes you alive, the sensitive soul is what makes you sentient, and the rational soul is what makes you a person. He even recognized a deep truth that the majority of human beings today do not: The soul is a function of the body, and dies when the body dies. For his time, he was absolutely off the charts in rationality. But even he didn’t really integrate rationality and empiricism fully into his way of thinking.

Even today there are a shocking number of common misconceptions that could be easily refuted by anyone who thought to check (or look it up!):

Wolves howl at the full moon? Nope, wolves don’t care about the phase of the moon, and if you live near any, you’ll hear them howl all year round. Actually, wolf howling is more like that “Twilight Bark” from 101 Dalmatians; it’s a long-distance communication and coordination signal.

Eggs can only balance on the equinox? Nope, it’s tricky, but you can balance an egg just as well any day of the year.

You don’t lose most of your heat through your head: Try going outside in the cold wearing a t-shirt and shorts with a hat, and then again with snow pants and a heavy coat and no hat; you’ll see which feels colder.

“Beer before liquor, never sicker” is nonsense: It matters how much alcohol you drink (and how much you eat), not what order you do it in, and you’d know that if you just tried it both ways a few times.

Taste on your tongue is localized to particular areas? No, it’s not, and you can tell by putting foods with strong flavors on different parts of your tongue. (Indeed, I did when they did that demonstration in elementary school; I wondered if that meant my tongue was somehow weird.)

I can understand not wanting to take the risk with fan death yourself, but maybe listen to all the other people—including medical experts—who tell you it’s not real? I keep a fan in my bedroom every night and it hasn’t killed me yet.

Even the gambler’s fallacy is something you could easily disabuse yourself of by rolling some dice for a while and taking careful notes. Am I more likely to roll snake eyes if I haven’t in a while? Nope; the odds on any given roll are always exactly the same.
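If you don’t feel like rolling physical dice, a quick simulation makes the same point. This is just an illustrative sketch in Python (the 50-roll “drought” threshold is an arbitrary choice of mine):

```python
# Simulate the dice-rolling check described above: is snake eyes "due" after a long drought?
import random

rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(1_000_000)]

overall = sum(r == 2 for r in rolls) / len(rolls)   # snake eyes = both dice show 1, total of 2

# Frequency of snake eyes on rolls immediately following 50+ rolls without one.
drought, after_drought = 0, []
for r in rolls:
    if drought >= 50:
        after_drought.append(r == 2)
    drought = 0 if r == 2 else drought + 1

print(f"Overall frequency: {overall:.4f}")                                        # about 0.028 (1/36)
print(f"After a 50-roll drought: {sum(after_drought) / len(after_drought):.4f}")  # also about 0.028
```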

But most people simply don’t think to check.

Indeed, most people get a lot of their beliefs—particularly those about complex, abstract, or distant things—from authority figures. While empiricism doesn’t come very naturally to humans, hierarchy absolutely does. (I think it’s a primate thing.) Another reason scientists may seem “elitist” is that people think we are trying to usurp that authority. We’re telling you that what your religious leaders taught you is false; that must mean that we are trying to become religious leaders ourselves.

But in fact we’re telling you something far more radical than that: You don’t need religious leaders. You don’t need to take things on faith. If you want to know whether something is true, you can look.

We are not trying to usurp control over your hierarchy. We are trying to utterly dismantle it. We dethrone the king, not so that we can become kings ourselves—but so that the world can have kings no longer.

Granted, most people aren’t going to be able to run particle accelerator experiments in their garages. But if you want to know how particle physics works, and how we know what we know about it, go to your nearest university, find a particle physicist, and ask: I guarantee they’ll be more than happy to tell you whatever you want to know. You can even do this via email from anywhere in the world.

That is, we do need expertise: People who specialize in a particular field of knowledge can learn it much better than others. But we do not need authority: You don’t just have to take their word for it. There’s a difference between expertise and authority.

And sometimes, really all you need to do is stop and think. People should try that more often.

Homeschooling and too much freedom

Nov 19 JDN 2460268

Allowing families to homeschool their children increases freedom, quite directly and obviously. This is a large part of the political argument in favor of homeschooling, and likely a large part of why homeschooling is so popular within the United States in particular.

In the US, about 3% of people are homeschooled. This seems like a small proportion, but it’s enough to have some cultural and political impact, and it’s considerably larger than the proportion who are homeschooled in most other countries.

Moreover, homeschooling rates greatly increased as a result of COVID, and it’s anyone’s guess when, or even whether, they will go back down. I certainly hope they do; here’s why.

A lot of criticism about homeschooling involves academic outcomes: Are the students learning enough English and math? This is largely unfounded; statistically, academic outcomes of homeschooled students don’t seem to be any worse than those of public school students; by some measures, they are actually better. Nor is there clear evidence that homeschooled kids are any less developed socially; most of them get that social development through other networks, such as churches and sports teams.

No, my concern is not that they won’t learn enough English and math. It’s that they won’t learn enough history and science. Specifically, the parts of history and science that contradict the religious beliefs of the parents who are homeschooling them.

One way to study this would be to compare test scores by homeschooled kids on, say, algebra and chemistry (which do not directly threaten Christian evangelical beliefs) to those on, say, biology and neuroscience (which absolutely, fundamentally do). Lying somewhere in between are physics (F=ma is no threat to Christianity, but the Big Bang is) and history (Christian nationalists happily teach that Thomas Jefferson wrote the Declaration of Independence, but often omit that he owned slaves). If homeschooled kids are indeed indoctrinated, we should see particular lacunas in their knowledge where the facts contradict their ideology. In any case, I wasn’t able to find any such studies.

But even if their academic outcomes are worse in certain domains, so what? What about the freedom of parents to educate their children how they choose? What about the freedom of children to not be subjected to the pain of public school?

It will come as no surprise to most of you that I did well in school. In almost everything, really: math, science, philosophy, English, and Latin were my best subjects, and I earned basically flawless grades in them. But I also did very well in creative writing, history, art, and theater, and fairly well in music. My only poor performance was in gym class (as I’ve written about before).

It may come as some surprise when I tell you that I did not particularly enjoy school. In elementary school I had few friends—and one of my closest ended up being abusive to me. Middle school I mostly enjoyed—despite the onset of my migraines. High school started out utterly miserable, though it got a little better—a little—once I transferred to Community High School. Throughout high school, I was lonely, stressed, anxious, and depressed most of the time, and had migraine headaches of one intensity or another nearly every single day. (Sadly, most of that is true now as well; but I at least had a period of college and grad school where it wasn’t, and hopefully I will again once this job is behind me.)

I was good at school. I enjoyed much of the content of school. But I did not particularly enjoy school.

Thus, I can quite well understand why it is tempting to say that kids should be allowed to be schooled at home, if that is what they and their parents want. (Of course, a problem already arises there: What if child and parent disagree? Whose choice actually matters? In practice, it’s usually the parent’s.)

On the whole, public school is a fairly toxic social environment: Cliquish, hyper-competitive, stressful, often full of conflict between genders, races, classes, sexual orientations, and of course the school-specific one, nerds versus jocks (I’d give you two guesses which team I was on, but you’re only gonna need one). Public school sucks.

Then again, many of these problems and conflicts persist into adult life—so perhaps it’s better preparation than we care to admit. Maybe it’s better to be exposed to bias and conflict so that you can learn to cope with them, rather than sheltered from them.

But there is a more important reason why we may need public school, why it may even be worth coercing parents and children into that system against their will.

Public school forces you to interact with people different from you.

At a public school, you cannot avoid being thrown in the same classroom with students of other races, classes, and religions. This is of course more true if your school system is diverse rather than segregated—and all the more reason that the persistent segregation of many of our schools is horrific—but it’s still somewhat true even in a relatively homogeneous school. I was fortunate enough to go to a public school in Ann Arbor, where there was really quite substantial diversity. But even where there is less diversity, there is still usually some diversity—if not race, then class, or religion.

Certainly any public school has more diversity than homeschooling, where parents have the power to specifically choose precisely which other families their children will interact with, and will almost always choose those of the same race, class, and—above all—religious denomination as themselves.

The result is that homeschooled children often grow up indoctrinated into a dogmatic, narrow-minded worldview, convinced that the particular beliefs they were raised in are the objectively, absolutely correct ones and all others are at best mistaken and at worst outright evil. They are trained to reject conflict and dissent, to not even expose themselves to other people’s ideas, because those are seen as dangerous—corrupting.

Moreover, for most homeschooling parents—not all, but most—this is clearly the express intent. They want to raise their children in a particular set of beliefs. They want to inoculate them against the corrupting influences of other ideas. They are not afraid of their kids being bullied in school; they are afraid of them reading books that contradict the Bible.

This article has the headline “Homeschooled children do not grow up to be more religious”, yet its core finding is exactly the opposite of that:

The Cardus Survey found that homeschooled young adults were not noticeably different in their religious lives from their peers who had attended private religious schools, though they were more religious than peers who had attended public or Catholic schools.

No more religious than private religious schools!? That’s still very religious. No, the fair comparison is to public schools, which clearly show lower rates of religiosity among the same demographics. (The interesting case is Catholic schools; they, it turns out, also churn out atheists with remarkable efficiency; I credit the Jesuit norm of top-quality liberal education.) This is clear evidence that religious homeschooling does make children more religious, and so does most private religious education.

Another finding in that same article sounds good, but is misleading:

Indiana University professor Robert Kunzman, in his careful study of six homeschooling families, found that, at least for his sample, homeschooled children tended to become more tolerant and less dogmatic than their parents as they grew up.


This is probably just regression to the mean. The parents who give their kids religious homeschooling are largely the most dogmatic and intolerant, so we would expect by sheer chance that their kids would be less dogmatic and intolerant—but probably still pretty dogmatic and intolerant. (Also, do I have to point out that n=6 barely even constitutes a study!?) This is like the fact that the sons of NBA players are usually shorter than their fathers—but still quite tall.

Homeschooling is directly linked to a lot of terrible things: Young-Earth Creationism, Christian nationalism, homophobia, and shockingly widespread child abuse.

While most right-wing families don’t homeschool, most homeschooling families are right-wing: Between 60% and 70% of homeschooling families vote Republican in most elections. More left-wing voters are homeschooling now with the recent COVID-driven surge in homeschooling, but the right-wing still retains a strong majority for now.

Of course, there are a growing number of left-wing and non-religious families who use homeschooling. Does this mean that the threat of indoctrination is gone? I don’t think so. I once knew someone who was homeschooled by a left-wing non-religious family and still ended up adopting an extremely narrow-minded extremist worldview—simply a left-wing non-religious one. In some sense a left-wing non-religious narrow-minded extremism is better than a right-wing religious narrow-minded extremism, but it’s still narrow-minded extremism. Whatever such a worldview gets right is mainly by the Stopped Clock Principle. It still misses many important nuances, and is still closed to new ideas and new evidence.

Of course this is not a necessary feature of homeschooling. One absolutely could homeschool children into a worldview that is open-minded and tolerant. Indeed, I’m sure some parents do. But statistics suggest that most do not, and this makes sense: When parents want to indoctrinate their children into narrow-minded worldviews, homeschooling allows them to do that far more effectively than if they had sent their children to public school. Whereas if you want to teach your kids open-mindedness and tolerance, exposing them to a diverse environment makes that easier, not harder.

In other words, the problem is that homeschooling gives parents too much control; in a very real sense, this is too much freedom.

When can freedom be too much? It seems absurd at first. But there are at least two cases where it makes sense to say that someone has too much freedom.

The first is paternalism: Sometimes people really don’t know what’s best for them, and giving them more freedom will just allow them to hurt themselves. This notion is easily abused—it has been abused many times, for example against disabled people and colonized populations. For that reason, we are right to be very skeptical of it when applied to adults of sound mind. But what about children? That’s who we are talking about after all. Surely it’s not absurd to suggest that children don’t always know what’s best for them.

The second is the paradox of tolerance: The freedom to take away other people’s freedom is not a freedom we can afford to protect. And homeschooling that indoctrinates children into narrow-minded worldviews is a threat to other people’s freedom—not only those who will be oppressed by a new generation of extremists, but also the children themselves who are never granted the chance to find their own way.

Both reasons apply in this case: paternalism for the children, the paradox of tolerance for the parents. We have a civic responsibility to ensure that children grow up in a rich and diverse environment, so that they learn open-mindedness and tolerance. This is important enough that we should be willing to impose constraints on freedom in order to achieve it. Democracy cannot survive a citizenry who are molded from birth into narrow-minded extremists. There are parents who want to mold their children that way—and we cannot afford to let them.

From where I’m sitting, that means we need to ban homeschooling, or at least very strictly regulate it.