Why do we have holidays about death and fear?

Oct 26 JDN 2460975

I confess, I don’t think I ever really got Halloween. As a kid I enjoyed dressing up in costumes and getting candy, but the part about being scared—or pretending to be scared, or approximating being scared, or decorating with things like bats and spiders that some people find scary but I don’t especially—never really made a whole lot of sense to me. The one Halloween decoration that genuinely causes me fear is excessive amounts of blood (I have a mild hematophobia acquired from a childhood injury), and that experience is aversive—I want to avoid it, not experience more of it. (I’ve written about my feelings toward horror as a genre previously.)

Día de los Muertos makes a bit more sense to me: A time to reflect on our own mortality, a religious festival about communing with the souls of your ancestors. But that doesn’t really fully explain all the decorated skulls. (It’s apparently hotly debated among historians whether these are really different holidays: Scholars disagree as to whether Día de los Muertos has Native roots or is really just a rebranded Allhallowtide.)

It just generally seems weird to me to have a holiday about death and fear. Aren’t those things… bad? But maybe the point of the holiday is actually to dull them a little, to make them less threatening by the act of trying to celebrate them. Skeletons are scary, but plastic skeletons aren’t so bad; skulls are scary, but decorated skulls are less so. Maybe by playing around with it, we can take some of the bite out of the fear and grief.

My general indifference toward Halloween as an adult is apparently pretty unusual among LGBT people, many of whom seem to treat Halloween season as a kind of second Pride Month. I think the main draw is the opportunity to don a costume and thereby adopt a new identity. And that can be fun, sometimes; but somehow each year I find it feels like such a chore to actually go find a Halloween costume I want to wear.

Maybe part of it is that most people aren’t doing that sort of thing all the time, the way I am by playing games (especially role-playing games). Costumes do add to the immersion of the experience, but do they really add enough to justify the cost of buying one and the effort of wearing it? Maybe I’d just rather boot up Skyrim for the 27th playthrough. But I suppose most people don’t play such games, or not nearly as often as I do; so for them, a chance to be someone else once a year is an opportunity they can’t afford to pass up.

What is the real impact of AI on the environment?

Oct 19 JDN 2460968

The conventional wisdom is that AI is consuming a huge amount of electricity and water for very little benefit, but when I delved a bit deeper into the data, the results came out a lot more ambiguous. I still agree with the “very little benefit” part, but the energy costs of AI may not actually be as high as many people believe.

So how much energy does AI really use?

This article in MIT Technology Review estimates that by 2028, AI will account for 50% of data center usage and 6% of all US energy. But two things strike me about that:

  1. This is a forecast. It’s not what’s currently happening.
  2. 6% of all US energy doesn’t really sound that high, actually.

Note that transportation accounts for 37% of US energy consumed. Clearly we need to bring that down; but it seems odd to panic about a forecast of something that uses one-sixth of that.

Currently, AI is only 14% of data center energy usage. That forecast has it rising to 50%. Could that happen? Sure. But it hasn’t happened yet. Data centers are being rapidly expanded, but that’s not just for AI; it’s for everything the Internet does, as more and more people get access to the Internet and use it for more and more demanding tasks (like cloud computing and video streaming).

Indeed, a lot of the worry really seems to be related to forecasts. Here’s an even more extreme forecast suggesting that AI will account for 21% of global energy usage by 2030. What’s that based on? I have no idea; they don’t say. The article just basically says it “could happen”; okay, sure, a lot of things could happen. And I feel like this sort of forecast comes from the same wide-eyed people who say that the Singularity is imminent and AI will soon bring us to a glorious utopia. (And hey, if it did, that would obviously be worth 21% of global energy usage!)

Even more striking to me is the fact that a lot of other uses of data centers are clearly much more demanding. YouTube uses about 50 times as much energy as ChatGPT; yet nobody seems to be panicking that YouTube is an environmental disaster.

What is a genuine problem is that data centers have strong economies of scale, and so it’s advantageous to build a few very large ones instead of a lot of small ones; and when you build a large data center in a small town it puts a lot of strain on the local energy grid. But that’s not the same thing as saying that data centers in general are wastes of energy; on the contrary, they’re the backbone of the Internet and we all use them almost constantly every day. We should be working on ways to make sure that small towns aren’t harmed by building data centers near them; but we shouldn’t stop building data centers.

What about water usage?

Well, here’s an article estimating that training GPT-3 evaporated hundreds of thousands of liters of fresh water. Once again I have a few notes about that:

  1. Evaporating water is just about the best thing you could do to it aside from leaving it there. It’s much better than polluting it (which is what most water usage does); it’s not even close. That water will simply rain back down later.
  2. Total water usage in the US is estimated at over 300 billion gallons (1.1 trillion liters) per day. Most of that is due to power generation and irrigation. (The best way to save water as a consumer? Become vegetarian—then you’re getting a lot more calories per irrigated acre.)
  3. A typical US household uses about 100 gallons (380 liters) of water per person per day.

So this means that training GPT-3 cost a fraction of a second of total US water consumption, or about one day’s water use for a single small town. Once again, that doesn’t seem like something worth panicking over.
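Here’s the back-of-the-envelope arithmetic behind that comparison, as a minimal sketch. The 700,000-liter training figure is my own assumed stand-in for “hundreds of thousands of liters”; the other numbers are the ones cited above:

```python
# Back-of-the-envelope water arithmetic. The 700,000-liter training estimate
# is an assumption, chosen to be consistent with "hundreds of thousands of
# liters"; the other figures are from the text above.
liters_evaporated = 700_000     # assumed GPT-3 training total
us_daily_liters = 1.1e12        # ~300 billion gallons/day of total US usage
per_person_daily = 380          # liters per person per day, household use

seconds_of_us_use = liters_evaporated / (us_daily_liters / 86_400)
people_for_one_day = liters_evaporated / per_person_daily
print(f"{seconds_of_us_use:.2f} seconds of total US water use")      # ~0.05 s
print(f"one day's water for a town of ~{people_for_one_day:,.0f} people")
```

Under those assumptions it comes out to roughly a twentieth of a second of national water use, or a day’s water for a town of under 2,000 people.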

A lot of this seems to be that people hear big-sounding numbers and don’t really have the necessary perspective on those numbers. Of course any service that is used by millions of people is going to consume what sounds like a lot of electricity. But in terms of usage per person, or compared to other services with similar reach, AI really doesn’t seem to be uniquely demanding.

This is not to let AI off the hook.

I still agree that the benefits of AI have so far been small, and the risks—both in the relatively short term, of disrupting our economy and causing unemployment, and in the long term, even endangering human civilization itself—are large. I would in fact support an international ban on all for-profit and military research and development of AI; a technology this powerful should be under the control of academic institutions and civilian governments, not corporations.

But I don’t think we need to worry too much about the environmental impact of AI just yet. If we clean up our energy grid (which has just gotten much easier thanks to cheap renewables) and transportation systems, the additional power draw from data centers really won’t be such a big problem.

Why are so many famous people so awful?

Oct 12 JDN 2460961

J.K. Rowling is a transphobic bigot. H.P. Lovecraft was an overt racist. Orson Scott Card is homophobic, and so was Frank Herbert. Robert Heinlein was a misogynist. Isaac Asimov was a serial groper and sexual harasser. Neil Gaiman has been credibly accused of multiple sexual assaults.

That’s just among sci-fi and fantasy authors whose work I admire. I could easily go on with lots of other famous people and lots of other serious allegations. (I suppose Bill Cosby and Roman Polanski seem like particularly apt examples.)

Some of these are worse than others; since they don’t seem to be guilty of any actual crimes, we might even cut some slack to Lovecraft, Herbert and Heinlein for being products of their times. (It seems very hard to make that defense for Asimov and Gaiman, with Rowling and Card somewhere in between because they aren’t criminals, but ‘their time’ is now.)

There are of course exceptions: Among sci-fi authors, for instance, Ursula K. Le Guin, Becky Chambers, Alastair Reynolds and Andy Weir all seem to be ethically unimpeachable. (As far as I know? To be honest, I still feel blindsided by Neil Gaiman.)

But there really does seem to be a pattern here:

Famous people are often bad people.

I guess I’m not quite sure what the baseline rate of being racist, sexist, or homophobic is (and frankly maybe it’s pretty high); but the baseline rate of committing multiple sexual assaults is definitely lower than the rate at which famous men get credibly accused of such.

Lord Acton famously remarked similarly:

Power tends to corrupt and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority; still more when you superadd the tendency of the certainty of corruption by authority.

I think this account is wrong, however. Abraham Lincoln, Mahatma Gandhi, and Nelson Mandela were certainly powerful—and certainly flawed—but they do not seem corrupt to me. I don’t think that Gandhi beat his wife because he led the Indian National Congress, and Mandela supported terrorists precisely during the period when he had the least power and the fewest options. (It’s almost tautologically true that Lincoln couldn’t have suspended habeas corpus if he weren’t extremely powerful—but that doesn’t mean that it was the power that shaped his character.)

I don’t think the problem is that power corrupts. I think the problem is that the corrupt seek power, and are very good at obtaining it.

In fact, I think the reason that so many famous people are such awful people is that our society rewards being awful. People will flock to you if you are overconfident and good at self-promoting, and as long as they like your work, they don’t seem to mind who you hurt along the way; this makes a perfect recipe for rewarding narcissists and psychopaths with fame, fortune, and power.

If you doubt that this is the case:

How else do you explain Donald Trump?

The man has absolutely no redeeming qualities. He is incompetent, willfully ignorant, deeply incurious, arrogant, manipulative, and a pathological liar. He’s also a racist, misogynist, and admitted sexual assaulter. He has been doing everything in his power to prevent the release of the Epstein Files, which strongly suggests he has in fact sexually assaulted teenagers. He’s also a fascist, and now that he has consolidated power, he is rapidly pushing the United States toward becoming a fascist state—complete with masked men with guns who break into your home and carry you away without warrants or trials.

Yet tens of millions of Americans voted for him to become President of the United States—twice.

Basically, it seems to be that Trump said he was great, and they believed him. Simply projecting confidence—however utterly unearned that confidence might be—was good enough.

When it comes to the authors I started this post with, one might ask whether their writing talents were what brought them fame, independently of or in spite of their moral flaws. To some extent that is probably true. But we also don’t really know how good they are, compared to all the other writers whose work never got published or never got read. Especially during times—all too recently—when writers who were women, queer, or people of color simply couldn’t get their work published, who knows what genius we might have missed out on? The first Dune book is a masterpiece, but by the time we get to Heretics of Dune the series has definitely lost its luster; maybe there were other authors with better books that could have been published, but never were, because Herbert had the clout and the privilege and those authors didn’t.

I do think genuine merit has some correlation with success. But I think the correlation is much weaker than is commonly supposed. A lot of very obviously terrible and/or incompetent people are extremely successful in life. Many of them were born with advantages—certainly true of Elon Musk and Donald Trump—but not all of them.

Indeed, there are so many awful successful people that I am led to conclude that moral behavior has almost nothing to do with success. I don’t think people actively go out of their way to support authors, musicians, actors, business owners or politicians who are morally terrible; but it’s difficult for me to reject the hypothesis that they literally don’t care. Indeed, when evidence emerges that someone powerful is terrible, usually their supporters will desperately search for reasons why the allegations can’t be true, rather than seriously considering no longer supporting them.

I don’t know what to do about this.

I don’t know how to get people to believe allegations more, or care about them more; and that honestly seems easier than changing the fundamental structure of our society in a way that narcissists and psychopaths are no longer rewarded with power. The basic ways that we decide who gets jobs, who gets published, and who gets elected seem to be deeply, fundamentally broken; they are selecting all the wrong people, and our whole civilization is suffering the consequences.


We are so far from a just world that I honestly can’t see how to get there from here, or even how to move substantially closer.

But I think we still have to try.

Taylor Swift and the means of production

Oct 5 JDN 2460954

This post is one I’ve been meaning to write for a while, but current events keep taking precedence.

In 2023, Taylor Swift did something very interesting from an economic perspective, which turns out to have profound implications for our economic future.

She re-recorded an entire album and released it through a different record company.

The album was called 1989 (Taylor’s Version), and she created it because for the last four years she had been fighting with Big Machine Records over the rights to her previous work, including the original album 1989.

A Marxist might well say she seized the means of production! (How rich does she have to get before she becomes part of the bourgeoisie, I wonder? Is she already there, even though she’s one of a handful of billionaires who can truly say they were self-made?)

But really she did something even more interesting than that. It was more like she said:

“Seize the means of production? I am the means of production.”

Singing and songwriting are what is known as a human-capital-intensive industry. That is, the most important factor of production is not land, or natural resources, or physical capital (yes, you need musical instruments, amplifiers, recording equipment and the like—but these are a small fraction of what it costs to get Taylor Swift for a concert), or even labor in the ordinary sense. It’s an industry where so-called (and honestly poorly named) “human capital” is the most important factor of production.

A labor-intensive industry is one where you just need a lot of work to be done, but you can get essentially anyone to do it: Cleaning floors is labor-intensive. A lot of construction work is labor-intensive (though excavators and the like also make it capital-intensive).

No, for a human-capital-intensive industry, what you need is expertise or talent. You don’t need a lot of people doing back-breaking work; you need a few people who are very good at doing the specific thing you need to get done.

Taylor Swift was able to re-record and re-release her songs because the one factor of production that couldn’t be easily substituted was herself. Big Machine Records overplayed their hand; they thought they could control her because they owned the rights to her recordings. But she didn’t need her recordings; she could just sing the songs again.

But now I’m sure you’re wondering: So what?

Well, Taylor Swift’s story is, in large part, the story of us all.

For most of the 18th, 19th, and 20th centuries, human beings in developed countries saw a rapid increase in their standard of living.

Yes, a lot of countries got left behind until quite recently.

Yes, this process seems to have stalled in the 21st century, with “real GDP” continuing to rise but inequality and cost of living rising fast enough that most people don’t feel any richer (and I’ll get to why that may be the case in a moment).

But for millions of people, the gains were real, and substantial. What was it that brought about this change?

The story we are usually told is that it was capital; that as industries transitioned from labor-intensive to capital-intensive, worker productivity greatly increased, and this allowed us to increase our standard of living.

That’s part of the story. But it can’t be the whole thing.

Why not, you ask?

Because very few people actually own the capital.

When capital ownership is so heavily concentrated, any increases in productivity due to capital-intensive production can simply be captured by the rich people who own the capital. Competition was supposed to fix this, compelling them to raise wages to match productivity, but we often haven’t actually had competitive markets; we’ve had oligopolies that consolidate market power in a handful of corporations. We had Standard Oil before, and we have Microsoft now. (Did you know that Microsoft not only owns more than half the consumer operating system industry, but after acquiring Activision Blizzard, is now the largest video game company in the world?) In the presence of an oligopoly, the owners of the capital will reap the gains from capital-intensive productivity.

But standards of living did rise. So what happened?

The answer is that production didn’t just become capital-intensive. It became human-capital-intensive.

More and more jobs required skills that an average person didn’t have. This created incentives for expanding public education, making workers not just more productive, but also more aware of how things work and in a stronger bargaining position.

Today, it’s very clear that the jobs which are most human-capital-intensive—like doctors, lawyers, researchers, and software developers—are the ones with the highest pay and the greatest social esteem. (I’m still not 100% sure why stock traders are so well-paid; it really isn’t that hard to be a stock trader. I could write you an algorithm in 50 lines of Python that would beat the average trader (mostly by buying ETFs). But they pretend to be human-capital-intensive by hiring Harvard grads, and they certainly pay as if they are.)
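To make that claim concrete, here is a toy sketch of the kind of thing I mean: a simulated market in which a buy-and-hold ETF investor faces an active trader who churns in and out at random and pays a fee on every trade. The drift, volatility, fee, and churn rate are all made-up parameters, so this is an illustration of the logic, not a real backtest (and certainly not investment advice):

```python
# Toy comparison: buy-and-hold vs. an active trader who churns randomly.
# All parameters (drift, volatility, fee, churn rate) are hypothetical.
import random
import statistics

def simulate(days=2520, mu=0.0003, sigma=0.012, fee=0.001, seed=None):
    """Return (buy_and_hold, active) final wealth on one simulated price path."""
    rng = random.Random(seed)
    price = 100.0
    hold_shares = 1.0                       # buy-and-hold: buy once, never trade
    active_cash, active_shares = 0.0, 1.0   # active trader starts fully invested
    for _ in range(days):
        price *= 1 + rng.gauss(mu, sigma)   # random-walk daily return
        if rng.random() < 0.1:              # trader flips position ~10% of days
            if active_shares > 0:           # sell everything, pay the fee
                active_cash = active_shares * price * (1 - fee)
                active_shares = 0.0
            else:                           # buy back in, pay the fee again
                active_shares = active_cash * (1 - fee) / price
                active_cash = 0.0
    return hold_shares * price, active_cash + active_shares * price

runs = [simulate(seed=s) for s in range(500)]
print("buy-and-hold mean:", statistics.mean(h for h, _ in runs))
print("active-churn mean:", statistics.mean(a for _, a in runs))
print("buy-and-hold wins:", statistics.mean(h > a for h, a in runs))
```

With these parameters the buy-and-hold investor wins the large majority of runs, essentially because the churner keeps paying fees and spends half the time out of a market that drifts upward.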

The most capital-intensive industries—like factory work—are reasonably well-paid, but not that well-paid, and actually seem to be rapidly disappearing as the capital simply replaces the workers. Factory worker productivity is now staggeringly high thanks to all this automation, but the workers themselves have gained only a small fraction of this increase in higher wages; by far the bigger effect has been increased profits for the capital owners and reduced employment in manufacturing.

And of course the real money is all in capital ownership. Elon Musk doesn’t have $400 billion because he’s a great engineer who works very hard. He has $400 billion because he owns a corporation that is extremely highly valued (indeed, clearly overvalued) in the stock market. Maybe being a great engineer or working very hard helped him get there, but it was neither necessary nor sufficient (and I’m sure that his dad’s emerald mine also helped).

Indeed, this is why I’m so worried about artificial intelligence.

Most forms of automation replace labor, in the conventional labor-intensive sense: Because you have factory robots, you need fewer factory workers; because you have mountaintop removal, you need fewer coal miners. It takes fewer people to do the same amount of work. But you still need people to plan and direct the process, and in fact those people need to be skilled experts in order to be effective—so there’s a complementarity between automation and human capital.

But AI doesn’t work like that. AI substitutes for human capital. It doesn’t just replace labor; it replaces expertise.

So far, AI is too unreliable to replace any but entry-level workers in human-capital-intensive industries (though there is some evidence it’s already doing that). But it will most likely get more reliable over time, if not via the current LLM paradigm, then through the next one that comes after. At some point, AI will come to replace experienced software developers, and then veteran doctors—and I don’t think we’ll be ready.

The long-term pattern here seems to be transitioning away from human-capital-intensive production to purely capital-intensive production. And if we don’t change the fact that capital ownership is heavily concentrated and so many of our markets are oligopolies—which we absolutely do not seem poised to do anything about; Democrats do next to nothing and Republicans actively and purposefully make it worse—then this transition will be a recipe for even more staggering inequality than before, where the rich will get even more spectacularly mind-bogglingly rich while the rest of us stagnate or even see our real standard of living fall.

The tech bros promise us that AI will bring about a utopian future, but that would only work if capital ownership were equally shared. If they continue to own all the AIs, they may get a utopia—but we sure won’t.

We can’t all be Taylor Swift. (And if AI music catches on, even Taylor Swift may not be able to be Taylor Swift much longer.)

Reflections on the Charlie Kirk assassination

Sep 28 JDN 2460947

No doubt you are well aware that Charlie Kirk was shot and killed on September 10. His memorial service was held on September 21, and filled a stadium in Arizona.

There have been a lot of wildly different takes on this event. It’s enough to make you start questioning your own sanity. So while what I have to say may not be that different from what Krugman (or for that matter Jacobin) had to say, I still thought I would try to contribute to the small part of the conversation that’s setting the record straight.

First of all, let me say that this is clearly a political assassination, and as a matter of principle, that kind of thing should not be condoned in a democracy.

The whole point of a democratic system is that we don’t win by killing or silencing our opponents; we win by persuading or out-voting them. As long as someone is not engaging in speech acts that directly command or incite violence (like, say, inciting people to attack the Capitol), they should be allowed to speak in peace; even abhorrent views should not be met with violence.

Free speech isn’t just about government censorship (though that is also a major problem right now); it’s a moral principle that underlies the foundation of liberal democracy. We don’t resolve conflicts with violence unless absolutely necessary.

So I want to be absolutely clear about this: Killing Charlie Kirk was not acceptable, and the assassin should be tried in a court of law and, if duly convicted, imprisoned for a very long time.

Second of all, we still don’t know the assassin’s motive, so stop speculating until we do.

At first it looked like the killer was left-wing. Then it looked like maybe he was right-wing. Now it looks like maybe he’s left-wing again. Maybe his views aren’t easily categorized that way; maybe he’s an anarcho-capitalist, or an anarcho-communist, or a Scientologist. I won’t say it doesn’t matter; it clearly does matter. But we simply do not know yet.

There is an incredibly common and incredibly harmful thing that people do after any major crime: They start spreading rumors and speculating about things that we actually know next to nothing about. Stop it. Don’t contribute to that.


The whole reason we have a court system is to actually figure out the real truth, which takes a lot of time and effort. The courts are one American institution that’s actually still functioning pretty well in this horrific cyberpunk/Trumpistan era; let them do their job.

It could be months or years before we really fully understand what happened here. Accept that. You don’t need to know the answer right now, and it’s far more dangerous to think you know the answer when you actually don’t.

But finally, I need to point out that Charlie Kirk was an absolutely abhorrent, despicable husk of a human being and no one should be honoring him.

First of all, he himself advocated for political violence against his opponents. I won’t say anyone deserves what happened to him—but if anyone did, it would be him, because he specifically rallied his followers to do exactly this sort of thing to other people.

He was also bigoted in almost every conceivable way: Racist, sexist, ableist, homophobic, and of course transphobic. He maintained a McCarthy-esque list of college professors that he encouraged people to harass for being too left-wing. He was a covert White supremacist, and only a little bit covert. He was not covert at all about his blatant sexism and misogyny, which seem like they came from the 1950s instead of the 2020s.

He encouraged his—predominantly White, male, straight, cisgender, middle-class—audience to hate every marginalized group you can think of: women, people of color, LGBT people, poor people, homeless people, people with disabilities. Not content to merely be an abhorrent psychopath himself, he actively campaigned against the concept of empathy.

Charlie Kirk deserves no honors. The world is better off without him. He made his entire career out of ruining the lives of innocent people and actively making the world a worse place.

It was wrong to kill Charlie Kirk. But if you’re sad he’s gone, what is wrong with you!?

For my mother, on her 79th birthday

Sep 21 JDN 2460940

When this post goes live, it will be my mother’s 79th birthday. I think birthdays are not a very happy time for her anymore.

I suppose nobody really likes getting older; children are excited to grow up, but once you hit about 25 or 26 (the age at which you can rent a car at the normal rate and the age at which you have to get your own health insurance, respectively) and it becomes “getting older” instead of “growing up”, the excitement rapidly wears off. Even by 30, I don’t think most people are very enthusiastic about their birthdays. Indeed, for some people, I think it might be downhill past 21—you wanted to become an adult, but you had no interest in aging beyond that point.

But I think it gets worse as you get older. As you get into your seventies and eighties, you begin to wonder which birthday will finally be your last; actually I think my mother has been wondering about this even earlier than that, because her brothers died in their fifties, her sister died in her sixties, and my father died at 63. At this point she has outlived a lot of people she loved. I think there is a survivor’s guilt that sets in: “Why do I get to keep going, when they didn’t?”

These are also very hard times in general; Trump and the people who enable him have done tremendous damage to our government, our society, and the world at large in a shockingly short amount of time. It feels like all the safeguards we were supposed to have suddenly collapsed and we gave free rein to a madman.

But while there are many loved ones we have lost, there are many we still have; nor need our set of loved ones be fixed, doomed only to dwindle with each new funeral. We can meet new people, and they can become part of our lives. New children can be born into our family, and they can make our family grow. It is my sincere hope that my mother still has grandchildren yet to meet; in my case they would probably need to be adopted, as the usual biological route is pretty much out of the question, and surrogacy seems beyond our budget for the foreseeable future. But we would still love them, and she could still love them, and it is worth sticking around in this world in order to be a part of their lives.

I also believe that this is not the end for American liberal democracy. This is a terrible time, no doubt. Much that we thought would never happen already has, and more still will. It must be so unsettling, so uncanny, for someone who grew up in the triumphant years after America helped defeat fascism in Europe, to grow older and then see homegrown American fascism rise ascendant here. Even those of us who knew history all too well still seem doomed to repeat it.

At this point it is clear that victory over corruption, racism, and authoritarianism will not be easy, will not be swift, may never be permanent—and is not even guaranteed. But it is still possible. There is still enough hope left that we can and must keep fighting for an America worth saving. I do not know when we will win; I do not even know for certain that we will, in fact, win. But I believe we will.

I believe that while it seems powerful—and does everything it can to both promote that image and abuse what power it does have—fascism is a fundamentally weak system, a fundamentally fragile system, which simply cannot sustain itself once a handful of critical leaders are dead, deposed, or discredited. Liberal democracy is kinder, gentler—and also slower, at times even clumsier—than authoritarianism, and so it may seem weak to those whose view of strength is that of the savanna ape or the playground bully; but this is an illusion. Liberal democracy is fundamentally strong, fundamentally resilient. There is power in kindness, inclusion, and cooperation that the greedy and cruel cannot see. Fascism in Germany arrived and disappeared within a generation; democracy in America has stood for nearly 250 years.

We don’t know how much more time we have, Mom; none of us do. I have heard it said that you should live your life as though you will live both a short life and a long one; but honestly, you should probably live your life as though you will live a randomly-decided amount of time that is statistically predicted by actuarial tables—because you will. Yes, the older you get, the less time you have left (almost tautologically); but especially in this age of rapid technological change, none of us really know whether we’ll die tomorrow or live another hundred years.

I think right now, you feel like there isn’t much left to look forward to. But I promise you there is. Maybe it’s hard to see right now; indeed, maybe you—or I, or anyone—won’t even ever get to see it. But a brighter future is possible, and it’s worth it to keep going, especially if there’s any way that we might be able to make that brighter future happen sooner.

Passion projects and burnout

Sep 14 JDN 2460933

I have seen a shockingly precipitous decline in my depression and anxiety scores over the last couple of weeks, from average Burns scores of about 15 (depression) and 29 (anxiety) down to about 7 and 20. This represents a decline from “mild depression” and “moderate anxiety” to “normal but unhappy” and “mild anxiety”; but under the circumstances (Trump is still President, I’m still unemployed), I think it may literally mean a complete loss of pathological symptoms.

I’m not on any new medications. I did recently change therapists, but I don’t think this one is substantially better than the last one. My life situation hasn’t changed. The political situation in the United States is if anything getting worse. So what happened?

I found a passion project.

A month and a half ago, I started Xbox Game Camp, and was assigned to a team of game developers to make a game over the next three months (so we’re about halfway there now). I was anxious at first, because I have limited experience in video game development (a few game jams, some Twine games, and playing around with Ren’Py and Unity) and absolutely no formal training in it; but once we got organized, I found myself Lead Producer on the project and also the second-best programmer. I also got through a major learning-curve barrier in Unreal Engine, which is what the team decided to use.

But that wasn’t my real passion project; instead, it enabled me to create one. With that boost in confidence and increased comfort with Unreal, I soon realized that, with the help of some free or cheap 3D assets from Fab and Sketchfab, I now had the tools I needed to make my own 3D video game all by myself—something that I would never have thought possible.

And having this chance to create more or less whatever I want (constrained by the availability of assets and my own programming skills—though both are far less constraining than I had previously believed) has had an extremely powerful effect on my mood. I not only feel less depression and anxiety, I also feel more excitement, more joie de vivre. I finally feel like I’m recovering from the years of burnout I got from academia.

That got me wondering: How unusual is this?

The empirical literature on burnout doesn’t generally talk about this; it’s mostly about conventional psychiatric interventions like medication and cognitive behavioral therapy. There are also some studies on mindfulness.

But there are more than a few sources of anecdotal reports and expert advice suggesting that passion projects can make a big difference. A lot of burnout seems to be disillusionment with your work, a loss of passion for it. Finding other work that you can be passionate about can go a long way toward fixing that problem.

Of course, like anything else, I’m sure this is no miracle cure. (Indeed, I’m feeling much worse today in particular, but I think that’s because I went through a grueling six-hour dental surgery yesterday—awake the whole time—and now I’m in pain and it was hard to sleep.) But it has made a big difference for me the last few weeks, so if you are going through anything similar, it might be worth a try to find a passion project of your own.

The AI bubble is going to crash hard

Sep 7 JDN 2460926

Based on the fact that it only sort of works and yet corps immediately put it in everything, I had long suspected that the current wave of AI was a bubble. But after reading Ed Zitron’s epic takedowns of the entire industry, I am not only convinced it’s a bubble; I’m convinced it is probably the worst bubble we’ve had in a very long time. This isn’t the dot-com crash; it’s worse.

The similarity to the dot-com crash is clear, however: This is a huge amount of hype over a new technology that genuinely could be a game-changer (the Internet certainly was!), but won’t be on the time horizon that the most optimistic investors have assumed. The gap between “it sort of works” and “it radically changes our economy” is… pretty large, actually. It’s not something you close in a few years.


The headline figure here is that based on current projections, US corporations will have spent $560 billion on capital expenditure, for anticipated revenue of only $35 billion.

They won’t pay it off for 16 years!? That kind of payoff rate would make sense for large-scale physical infrastructure, like a hydroelectric dam. It absolutely does not make sense in an industry that is dependent upon cutting-edge technology that wears out fast and becomes obsolete even faster. They must think that revenue is going to increase to something much higher, very soon.

The corps seem to be banking on the most optimistic view of AI: That it will soon—very soon—bring about a radical increase in productivity that brings GDP surging to new heights, or even a true Singularity where AI fundamentally changes the nature of human existence.

Given the kind of errors I’ve seen LLMs make when I tried to use them to find research papers or help me with tedious coding, this is definitely not what’s going to happen. Claude gives an impressive interview, and (with significant guidance and error-correction) it also managed pretty well at making some simple text-based games; but it often recommended papers to me that didn’t exist, and through further experimentation, I discovered that it could not write me a functional C++ GUI if its existence depended on it. Somewhere on the Internet I heard someone describe LLMs as answering not the question you asked directly, but the question, “What would a good answer to this question look like?” and that seems very accurate. It always gives an answer that looks valid—but not necessarily one that is valid.

AI will find some usefulness in certain industries, I’m sure; and maybe the next paradigm (or the one after that) will really, truly, effect a radical change on our society. (Right now the best thing to use LLMs for seems to be cheating at school—and it also seems to be the most common use. Not exactly the great breakthrough we were hoping for.) But LLMs are just not reliable enough to actually use for anything important, and sooner or later, most of the people using them are going to figure that out.

Of course, by the Efficient Roulette Hypothesis, it’s extremely difficult to predict exactly when a bubble will burst, and it could well be that NVIDIA stock will continue to grow at astronomical rates for several years yet—or it could be that the bubble bursts tomorrow and NVIDIA stock collapses, if not to worthless, then to far below its current price.

Krugman has an idea of what might be the point that bursts the bubble: Energy costs. There is a clear mismatch between the anticipated energy needs of these ever-growing data centers and the actual energy production we’ve been installing—especially now that Trump and his ilk have gutted subsidies for solar and wind power. That’s definitely something to watch out for.

But the really scary thing is that the AI bubble actually seems to be the only thing holding the US economy above water right now. It’s the reason why Trump’s terrible policies haven’t been as disastrous as economists predicted they would; our economy is being sustained by this enormous amount of capital investment.

US GDP is about $30 trillion right now, and about $500 billion of that is AI investment. That’s over 1.6% of GDP; and since most of that investment is new spending that wasn’t happening a year or two ago, it shows up almost entirely as growth. Last quarter our annualized GDP growth rate was 3.3%—so roughly half of our GDP growth was just due to building more data centers that probably won’t even be profitable.
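Here’s that arithmetic as a quick sketch. The fraction of AI capex that counts as new spending relative to a year ago is my own assumption, included just to make the accounting explicit:

```python
# Rough check of the "half of GDP growth" claim. The dollar figures are the
# rough ones from the text; the "new spending" fraction is an assumption,
# not measured data.
gdp = 30e12            # US GDP, ~$30 trillion
ai_capex = 500e9       # AI investment, ~$500 billion/year
growth_rate = 0.033    # last quarter's annualized GDP growth

share = ai_capex / gdp
print(f"AI capex as a share of GDP: {share:.1%}")        # ~1.7%

new_fraction = 0.8     # assumed: most of this spending is new vs. a year ago
contribution = share * new_fraction
print(f"Implied contribution to growth: {contribution:.1%}")
print(f"Share of the 3.3% growth rate: {contribution / growth_rate:.0%}")
```

With that assumed 80% new-spending fraction, AI capex works out to about 1.3 points of the 3.3%, or roughly 40%, which rounds to the “roughly half” above.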

Between that, the tariffs, the loss of immigrants, and rising energy costs, a crashing AI bubble could bring down the whole stock market with it.

So I guess what I’m saying is: Don’t believe the AI hype, and you might want to sell some stocks.

Grief, a rationalist perspective

Aug 31 JDN 2460919

This post goes live on the 8th anniversary of my father’s death. Thus it seems an appropriate time to write about grief—indeed, it’s somewhat difficult for me to think about much else.

Far too often, the only perspectives on grief we hear are religious ones. Often, these take the form of consolation: “He’s in a better place now.” “You’ll see him again someday.”

Rationalism doesn’t offer such consolations. Technically one can be an atheist and still believe in an afterlife; but rationalism is stronger than mere atheism. It requires that we believe in scientific facts, and the permanent end of consciousness at death is a scientific fact. We know from direct experiments and observations in neuroscience that a destroyed brain cannot think, feel, see, hear, or remember—when your brain shuts down, whatever you are now will be gone.

It is the Basic Fact of Cognitive Science: There is no soul but the brain.

Moreover, I think, deep down, we all know that death is the end. Even religious people grieve. Their words may say that their loved one is in a better place, but their tears tell a different story.

Maybe it’s an evolutionary instinct, programmed deep into our minds like an ancestral memory, a voice that screams in our minds, insistent on being heard:

“Death is bad!”

If there is one crucial instinct a lifeform needs in order to survive, surely it is something like that one: The preference for life over death. In order to live in a hostile world, you have to want to live.

There are some people who don’t want to live, people who become suicidal. Sometimes even the person we are grieving was someone who chose to take their own life. Generally this is because they believe that their life from then on would be defined only by suffering. Usually, I would say they are wrong about that; but in some cases, maybe they are right, and choosing death is rational. Most of the time, life is worth living, even when we can’t see that.

But aside from such extreme circumstances, most of us feel most of the time that death is one of the worst things that could happen to us or our loved ones. And it makes sense that we feel that way. It is right to feel that way. It is rational to feel that way.

This is why grief hurts so much.

This is why you are not okay.

If the afterlife were real—or even plausible—then grief would not hurt so much. A loved one dying would be like a loved one traveling away to somewhere nice; bittersweet perhaps, maybe even sad—but not devastating the way that grief is. You don’t hold a funeral for someone who just booked a one-way trip to Hawaii, even if you know they aren’t ever coming back.

Religion tries to be consoling, but it typically fails. Because that voice in our heads is still there, repeating endlessly: “Death is bad!” “Death is bad!” “Death is bad!”

But what if religion does give people some comfort in such a difficult time? What if supposing something as nonsensical as Heaven numbs the pain for a little while?

In my view, you’d be better off using drugs. Drugs have side effects and can be addictive, but at least they don’t require you to fundamentally abandon your ontology. Mainstream religion isn’t simply false; it’s absurd. It’s one of the falsest things anyone has ever believed about anything. It’s obviously false. It’s ridiculous. It has never deserved any of the respect and reverence it so often receives.

And in a great many cases, religion is evil. Religion teaches people to be obedient to authoritarians, and to oppress those who are different. Some of the greatest atrocities in history were committed in the name of religion, and some of the worst oppression going on today is done in the name of religion.

Rationalists should give religion no quarter. It is better for someone to find solace in alcohol or cannabis than for them to find solace in religion.

And maybe, in the end, it’s better if they don’t find solace at all.

Grief is good. Grief is healthy. Grief is what we should feel when something as terrible as death happens. That voice screaming “Death is bad!” is right, and we should listen to it.

No, what we need is to not be paralyzed by grief, destroyed by grief. We need to withstand our grief, get through it. We must learn to be strong enough to bear what seems unbearable, not console ourselves with lies.

If you are a responsible adult, then when something terrible happens to you, you don’t pretend it isn’t real. You don’t conjure up a fantasy world in which everything is fine. You face your terrors. You learn to survive them. You make yourself strong enough to carry on. The death of a loved one is a terrible thing; you shouldn’t pretend otherwise. But it doesn’t have to destroy you. You can grow, and heal, and move on.

Moreover, grief has a noble purpose. From our grief we must find motivation to challenge death, to fight death wherever we find it. Those we have already lost are gone; it’s too late for them. But it’s not too late for the rest of us. We can keep fighting.

And through economic development and medical science, we do keep fighting.

In fact, little by little, we are winning the war on death.

Death has already lost its hold upon our children. For most of human history, nearly a third of children died before the age of 5. Now less than 1% do, in rich countries, and even in the poorest countries, it’s typically under 10%. With a little more development—development that is already happening in many places—we can soon bring everyone in the world to the high standard of the First World. We have basically won the war on infant and child mortality.

And death is losing its hold on the rest of us, too. Life expectancy at adulthood is also increasing, and more and more people are living into their nineties and even past one hundred.

It’s true, there still aren’t many people living to be 120 (and some researchers believe it will be a long time before this changes). But living to be 85 instead of 65 is already an extra 20 years of life—and these can be happy, healthy years too, not years of pain and suffering. They say that 60 is the new 50; physiologically, we are so much healthier than our ancestors that it’s as if we were ten years younger.

My sincere hope is that our grief for those we have lost and fear of losing those we still have will drive us forward to even greater progress in combating death. I believe that one day we will finally be able to slow, halt, perhaps even reverse aging itself, rendering us effectively immortal.

Religion promises us immortality, but it isn’t real.

Science offers us the possibility of immortality that’s real.

It won’t be easy to get there. It won’t happen any time soon. In all likelihood, we won’t live to see it ourselves. But one day, our descendants may achieve the grandest goal of all: Finally conquering death.

And even long before that glorious day, our lives are already being made longer and healthier by science. We are pushing death back, step by step, day by day. We are fighting, and we are winning.

Moreover, we as individuals are not powerless in this fight: you can fight death a little harder yourself, by becoming an organ donor, or by donating to organizations that fight global poverty or advance medical science. Let your grief drive you to help others, so that they don’t have to grieve as you do.

And if you need consolation from your grief, let it come from this truth: Death is rarer today than it was yesterday, and will be rarer still tomorrow. We can’t bring back those we have lost, but we can keep ourselves from losing more so soon.

Solving the student debt problem

Aug 24 JDN 2460912

A lot of people speak about student debt as a “crisis”, which makes it sound like the problem is urgent and will have severe consequences if we don’t soon intervene. I don’t think that’s right. While it’s miserable to be unable to pay your student loans, student loans don’t seem to be driving people to bankruptcy or homelessness the way that medical bills do.

Instead I think what we have here is a long-term problem, something that’s been building for a long time and will slowly but surely continue getting worse if we don’t change course. (I guess you can still call it a “crisis” if you want; climate change is also like this, and arguably a crisis.)

But there is a problem here: Student loan balances are rising much faster than other kinds of debt, and the burden falls the worst on Black women and students who went to for-profit schools. A big part of the problem seems to be predatory schools that charge high prices and make big promises but offer poor results.

Making all this worse is the fact that some of the most important income-based repayment plans were overturned by a federal court, forcing everyone who was on them into forbearance. Income-based repayment was a big reason why student loans actually weren’t as bad a burden as their high loan balances might suggest; unlike a personal loan or a mortgage, if you didn’t have enough income to repay your student loans at the full amount, you could get on a plan that would let you make smaller payments, and if you paid on that plan for long enough—even if it didn’t add up to the full balance—your loans would be forgiven.

Now the forbearance is ending for a lot of borrowers, and so they are going into default; and most of that loan forgiveness has been ruled illegal. (Supposedly this is because Congress didn’t approve it. I’ll believe that was the reason when the courts overrule Trump’s tariffs, which clearly have just as thin a legal justification and will cause far more harm to us and the rest of the world.)

In theory, student loans don’t really seem like a bad idea.

College is expensive, because it requires highly-trained professors, who demand high salaries. (The tuition money also goes other places, of course….)

College is valuable, because it provides you with knowledge and skills that can improve your life and also increase your long-term earnings. It’s a big difference: Median salary for someone with a college degree is about $60k, while median salary for someone with only a high school diploma is about $34k.

Most people don’t have enough liquidity to pay for college.

So, we provide loans, so that people can pay for college, and then when they make more money after graduating, they can pay the loans back.

That’s the theory, anyway.

The problem is that average or even median salaries obscure a lot of variation. Some college graduates become doctors, lawyers, or stockbrokers and make huge salaries. Others can’t find jobs at all. In the absence of income-based repayment plans, all students have to pay back their loans in full, regardless of their actual income after graduation.

There is inherent risk in trying to build a career. Our loan system—especially with the recent changes—puts most of this risk on the student. We treat it as their fault they can’t get a good job, and then punish them with loans they can’t afford to repay.

In fact, right now the job market is pretty bad for recent graduates—while usually unemployment for recent college grads is lower than that of the general population, since about 2018 it has actually been higher. (It’s no longer sky-high like it was during COVID; 4.8% is not bad in the scheme of things.)

Actually the job market may be even worse than it looks, because the hiring rate is the lowest it’s been since 2020. Our relatively low unemployment currently seems to reflect a lack of layoffs, not a healthy churn of people entering and leaving jobs. People seem to be locked into their jobs, and if they do leave them, finding another is quite difficult.

What I think we need is a system that makes the government take on more of the risk, instead of the students.

There are lots of ways to do this. Actually, the income-based repayment systems we used to have weren’t too bad.

But there is actually a way to do it without student loans at all. College could be free, paid for by taxes.


Now, I know what you’re thinking: Isn’t this unfair to people who didn’t go to college? Why should they have to pay?

Who said they were paying?

There could simply be a portion of the income tax that you only pay if you have a bachelor’s degree. Then you would only pay this tax if you both graduated from college and make a lot of money.
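As a toy illustration of those mechanics (the 3% rate and the $50,000 exemption are invented parameters for the sketch, not a worked-out proposal):

```python
# A hypothetical graduate surtax: only degree-holders pay, and only on
# income above an exemption. The 3% rate and $50,000 exemption are
# made-up parameters, chosen just to show the mechanics.
def graduate_surtax(income, has_degree, rate=0.03, exemption=50_000):
    """Return the surtax owed under the sketch described above."""
    if not has_degree:
        return 0.0
    return rate * max(0.0, income - exemption)

# Using the median salaries mentioned earlier ($34k without a degree,
# $60k with one), plus a couple of hypothetical cases:
for income, degree in [(34_000, False), (45_000, True),
                       (60_000, True), (150_000, True)]:
    owed = graduate_surtax(income, degree)
    print(f"income ${income:>7,}, degree={degree!s:>5}: surtax ${owed:,.0f}")
```

Someone without a degree pays nothing regardless of income; a graduate earning $45k is under the exemption and also pays nothing; a graduate earning $150k pays $3,000 a year under these made-up parameters.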

I don’t think this would create a strong incentive not to get a bachelor’s degree; the benefits of doing so remain quite large, even if your taxes were a bit higher as a result.

It might create incentives to major in subjects that aren’t as closely linked to higher earnings—liberal arts instead of engineering, medicine, law, or business. But this I see as fundamentally a public good: The world needs people with liberal arts education. If the market fails to provide for them, the government should step in.

This plan is not as progressive as Elizabeth Warren’s proposal to use wealth taxes to fund free college; but it might be more politically feasible. The argument that people who didn’t go to college shouldn’t have to pay for people who did actually seems reasonable to me; but this system would ensure that in fact they don’t.

The transfer of wealth here would be from people who went to college and make a lot of money to people who went to college and don’t make a lot of money. It would be the government bearing some of the financial risk of taking on a career in an uncertain world.