Why do we have holidays about death and fear?

Oct 26 JDN 2460975

I confess, I don’t think I ever really got Halloween. As a kid I enjoyed dressing up in costumes and getting candy, but the part about being scared—or pretending to be scared, or approximating being scared, or decorating with things like bats and spiders that some people find scary but I don’t especially—never really made a whole lot of sense to me. The one Halloween decoration that genuinely causes me fear is excessive amounts of blood (I have a mild hematophobia acquired from a childhood injury), and that experience is aversive—I want to avoid it, not experience more of it. (I’ve written about my feelings toward horror as a genre previously.)

Dia de los Muertos makes a bit more sense to me: A time to reflect on our own mortality, a religious festival about communing with the souls of your ancestors. But that doesn’t fully explain all the decorated skulls. (It’s apparently hotly debated among historians whether these are even different holidays: whether Dia de los Muertos has Native roots or is really just a rebranded Allhallowtide.)

It just generally seems weird to me to have a holiday about death and fear. Aren’t those things… bad? But maybe the point of the holiday is actually to dull them a little, to make them less threatening by the act of trying to celebrate them. Skeletons are scary, but plastic skeletons aren’t so bad; skulls are scary, but decorated skulls are less so. Maybe by playing around with it, we can take some of the bite out of the fear and grief.

My general indifference toward Halloween as an adult is apparently pretty unusual among LGBT people, many of whom seem to treat Halloween season as a kind of second Pride Month. I think the main draw is the opportunity to don a costume and thereby adopt a new identity. And that can be fun, sometimes; but somehow each year I find it feels like such a chore to actually go find a Halloween costume I want to wear.

Maybe part of it is that most people aren’t doing that sort of thing all the time, the way I am by playing games (especially role-playing games). Costumes do add to the immersion of the experience, but do they really add enough to justify the cost of buying one and the effort of wearing it? Maybe I’d just rather boot up Skyrim for the 27th playthrough. But I suppose most people don’t play such games, or not nearly as often as I do; so for them, a chance to be someone else once a year is an opportunity they can’t afford to pass up.

What is the real impact of AI on the environment?

Oct 19 JDN 2460968

The conventional wisdom is that AI is consuming a huge amount of electricity and water for very little benefit, but when I delved a bit deeper into the data, the results came out a lot more ambiguous. I still agree with the “very little benefit” part, but the energy costs of AI may not actually be as high as many people believe.

So how much energy does AI really use?

This article in MIT Technology Review estimates that by 2028, AI will account for 50% of data center energy usage and 6% of all US energy. But two things strike me about that:

  1. This is a forecast. It’s not what’s currently happening.
  2. 6% of all US energy doesn’t really sound that high, actually.

Note that transportation accounts for 37% of US energy consumed. Clearly we need to bring that down; but it seems odd to panic about a forecast of something that uses one-sixth of that.

Currently, AI is only 14% of data center energy usage. That forecast has it rising to 50%. Could that happen? Sure. But it hasn’t happened yet. Data centers are being rapidly expanded, but that’s not just for AI; it’s for everything the Internet does, as more and more people get access to the Internet and use it for more and more demanding tasks (like cloud computing and video streaming).

Indeed, a lot of the worry really seems to be related to forecasts. Here’s an even more extreme forecast suggesting that AI will account for 21% of global energy usage by 2030. What’s that based on? I have no idea; they don’t say. The article just basically says it “could happen”; okay, sure, a lot of things could happen. And I feel like this sort of forecast comes from the same wide-eyed people who say that the Singularity is imminent and AI will soon bring us to a glorious utopia. (And hey, if it did, that would obviously be worth 21% of global energy usage!)

Even more striking to me is the fact that a lot of other uses of data centers are clearly much more demanding. YouTube uses about 50 times as much energy as ChatGPT; yet nobody seems to be panicking that YouTube is an environmental disaster.

What is a genuine problem is that data centers have strong economies of scale, and so it’s advantageous to build a few very large ones instead of a lot of small ones; and when you build a large data center in a small town it puts a lot of strain on the local energy grid. But that’s not the same thing as saying that data centers in general are wastes of energy; on the contrary, they’re the backbone of the Internet and we all use them almost constantly every day. We should be working on ways to make sure that small towns aren’t harmed by building data centers near them; but we shouldn’t stop building data centers.

What about water usage?

Well, here’s an article estimating that training GPT-3 evaporated hundreds of thousands of liters of fresh water. Once again I have a few notes about that:

  1. Evaporating water is just about the best thing you could do to it aside from leaving it there. It’s much better than polluting it (which is what most water usage does); it’s not even close. That water will simply rain back down later.
  2. Total water usage in the US is estimated at over 300 billion gallons (1.1 trillion liters) per day. Most of that is due to power generation and irrigation. (The best way to save water as a consumer? Become vegetarian—then you’re getting a lot more calories per irrigated acre.)
  3. A typical US household uses about 100 gallons (380 liters) of water per person per day.

So this means that training GPT-3 cost well under a second’s worth of total US water consumption, or about what a single small town uses in a day. Once again, that doesn’t seem like something worth panicking over.
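As a sanity check, we can run the arithmetic ourselves. This is a back-of-the-envelope sketch: the 700,000-liter figure is my assumed round stand-in for “hundreds of thousands of liters,” and the other numbers come from the points above.

```python
# Back-of-the-envelope check, using the figures from the text.
# 700,000 liters is an assumed round number in the "hundreds of
# thousands" range; 1.1 trillion liters/day is total US water use.
TRAINING_LITERS = 7e5        # assumed water cost of training GPT-3
US_LITERS_PER_DAY = 1.1e12   # total US water usage per day
PER_PERSON_LITERS = 380      # household water use per person per day

seconds_equivalent = TRAINING_LITERS / US_LITERS_PER_DAY * 86_400
person_days = TRAINING_LITERS / PER_PERSON_LITERS

print(f"~{seconds_equivalent:.2f} seconds of nationwide water use")
print(f"~{person_days:,.0f} person-days of household water use")
```

On these assumptions, the training run works out to a small fraction of one second of nationwide water use, or roughly the daily household usage of a town of under two thousand people.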

A lot of this seems to be that people hear big-sounding numbers and don’t really have the necessary perspective on those numbers. Of course any service that is used by millions of people is going to consume what sounds like a lot of electricity. But in terms of usage per person, or compared to other services with similar reach, AI really doesn’t seem to be uniquely demanding.

This is not to let AI off the hook.

I still agree that the benefits of AI have so far been small, and the risks—both in the relatively short term, of disrupting our economy and causing unemployment, and in the long term, even endangering human civilization itself—are large. I would in fact support an international ban on all for-profit and military research and development of AI; a technology this powerful should be under the control of academic institutions and civilian governments, not corporations.

But I don’t think we need to worry too much about the environmental impact of AI just yet. If we clean up our energy grid (which has just gotten much easier thanks to cheap renewables) and transportation systems, the additional power draw from data centers really won’t be such a big problem.

Why are so many famous people so awful?

Oct 12 JDN 2460961

J.K. Rowling is a transphobic bigot. H.P. Lovecraft was an overt racist. Orson Scott Card is homophobic, and so was Frank Herbert. Robert Heinlein was a misogynist. Isaac Asimov was a serial groper and sexual harasser. Neil Gaiman has been credibly accused of multiple sexual assaults.

That’s just among sci-fi and fantasy authors whose work I admire. I could easily go on with lots of other famous people and lots of other serious allegations. (I suppose Bill Cosby and Roman Polanski seem like particularly apt examples.)

Some of these are worse than others; since they don’t seem to be guilty of any actual crimes, we might even cut some slack to Lovecraft, Herbert, and Heinlein for being products of their times. (That defense seems very hard to make for Asimov and Gaiman; Rowling and Card fall somewhere in between, because they aren’t criminals, but ‘their time’ is now.)

There are of course exceptions: Among sci-fi authors, for instance, Ursula Le Guin, Becky Chambers, Alastair Reynolds and Andy Weir all seem to be ethically unimpeachable. (As far as I know? To be honest, I still feel blind-sided by Neil Gaiman.)

But there really does seem to be a pattern here:

Famous people are often bad people.

I guess I’m not quite sure what the baseline rate of being racist, sexist, or homophobic is (and frankly maybe it’s pretty high); but the baseline rate of committing multiple sexual assaults is definitely lower than the rate at which famous men get credibly accused of such.

Lord Acton famously remarked similarly:

Power tends to corrupt and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority; still more when you superadd the tendency or the certainty of corruption by authority.

I think this account is wrong, however. Abraham Lincoln, Mahatma Gandhi, and Nelson Mandela were certainly powerful—and certainly flawed—but they do not seem corrupt to me. I don’t think that Gandhi beat his wife because he led the Indian National Congress, and Mandela supported terrorists precisely during the period when he had the least power and the fewest options. (It’s almost tautologically true that Lincoln couldn’t have suspended habeas corpus if he weren’t extremely powerful—but that doesn’t mean that it was the power that shaped his character.)

I don’t think the problem is that power corrupts. I think the problem is that the corrupt seek power, and are very good at obtaining it.

In fact, I think the reason that so many famous people are such awful people is that our society rewards being awful. People will flock to you if you are overconfident and good at self-promoting, and as long as they like your work, they don’t seem to mind who you hurt along the way; this makes a perfect recipe for rewarding narcissists and psychopaths with fame, fortune, and power.

If you doubt that this is the case:

How else do you explain Donald Trump?

The man has absolutely no redeeming qualities. He is incompetent, willfully ignorant, deeply incurious, arrogant, manipulative, and a pathological liar. He’s also a racist, misogynist, and admitted sexual assaulter. He has been doing everything in his power to prevent the release of the Epstein Files, which strongly suggests he has in fact sexually assaulted teenagers. He’s also a fascist, and now that he has consolidated power, he is rapidly pushing the United States toward becoming a fascist state—complete with masked men with guns who break into your home and carry you away without warrants or trials.

Yet tens of millions of Americans voted for him to become President of the United States—twice.

Basically, it seems to be that Trump said he was great, and they believed him. Simply projecting confidence—however utterly unearned that confidence might be—was good enough.

When it comes to the authors I started this post with, one might ask whether their writing talents were what brought them fame, independently of or in spite of their moral flaws. To some extent that is probably true. But we also don’t really know how good they are, compared to all the other writers whose work never got published or never got read. Especially during times—all too recently—when writers who were women, queer, or people of color simply couldn’t get their work published, who knows what genius we might have missed out on? The first Dune book is a masterpiece, but by the time we get to Heretics of Dune the series has definitely lost its luster; maybe there were other authors with better books that could have been published, but never were, because Herbert had the clout and the privilege and those authors didn’t.

I do think genuine merit has some correlation with success. But I think the correlation is much weaker than is commonly supposed. A lot of very obviously terrible and/or incompetent people are extremely successful in life. Many of them were born with advantages—certainly true of Elon Musk and Donald Trump—but not all of them.

Indeed, there are so many awful successful people that I am led to conclude that moral behavior has almost nothing to do with success. I don’t think people actively go out of their way to support authors, musicians, actors, business owners or politicians who are morally terrible; but it’s difficult for me to reject the hypothesis that they literally don’t care. Indeed, when evidence emerges that someone powerful is terrible, usually their supporters will desperately search for reasons why the allegations can’t be true, rather than seriously considering no longer supporting them.

I don’t know what to do about this.

I don’t know how to get people to believe allegations more, or care about them more; and that honestly seems easier than changing the fundamental structure of our society so that narcissists and psychopaths are no longer rewarded with power. The basic ways that we decide who gets jobs, who gets published, and who gets elected seem to be deeply, fundamentally broken; they are selecting all the wrong people, and our whole civilization is suffering the consequences.


We are so far from a just world that I honestly can’t see how to get there from here, or even how to move substantially closer.

But I think we still have to try.

Taylor Swift and the means of production

Oct 5 JDN 2460954

This post is one I’ve been meaning to write for a while, but current events keep taking precedence.

In 2023, Taylor Swift did something very interesting from an economic perspective, which turns out to have profound implications for our economic future.

She re-recorded an entire album and released it through a different record company.

The album was called 1989 (Taylor’s Version), and she created it because for the last four years she had been fighting with Big Machine Records over the rights to her previous work, including the original album 1989.

A Marxist might well say she seized the means of production! (How rich does she have to get before she joins the bourgeoisie, I wonder? Is she already there, even though she’s one of a handful of billionaires who can truly say they were self-made?)

But really she did something even more interesting than that. It was more like she said:

“Seize the means of production? I am the means of production.”

Singing and songwriting are what is known as a human-capital-intensive industry. That is, the most important factor of production is not land, or natural resources, or physical capital (yes, you need musical instruments, amplifiers, recording equipment and the like—but these are a small fraction of what it costs to get Taylor Swift for a concert), or even labor in the ordinary sense. The most important factor of production is so-called (and honestly poorly named) “human capital.”

A labor-intensive industry is one where you just need a lot of work to be done, but you can get essentially anyone to do it: Cleaning floors is labor-intensive. A lot of construction work is labor-intensive (though excavators and the like also make it capital-intensive).

No, for a human-capital-intensive industry, what you need is expertise or talent. You don’t need a lot of people doing back-breaking work; you need a few people who are very good at doing the specific thing you need to get done.

Taylor Swift was able to re-record and re-release her songs because the one factor of production that couldn’t be easily substituted was herself. Big Machine Records overplayed their hand; they thought they could control her because they owned the rights to her recordings. But she didn’t need her recordings; she could just sing the songs again.

But now I’m sure you’re wondering: So what?

Well, Taylor Swift’s story is, in large part, the story of us all.

For most of the 18th, 19th, and 20th centuries, human beings in developed countries saw a rapid increase in their standard of living.

Yes, a lot of countries got left behind until quite recently.

Yes, this process seems to have stalled in the 21st century, with “real GDP” continuing to rise but inequality and cost of living rising fast enough that most people don’t feel any richer (and I’ll get to why that may be the case in a moment).

But for millions of people, the gains were real, and substantial. What was it that brought about this change?

The story we are usually told is that it was capital; that as industries transitioned from labor-intensive to capital-intensive, worker productivity greatly increased, and this allowed us to increase our standard of living.

That’s part of the story. But it can’t be the whole thing.

Why not, you ask?

Because very few people actually own the capital.

When capital ownership is so heavily concentrated, any increases in productivity due to capital-intensive production can simply be captured by the rich people who own the capital. Competition was supposed to fix this, compelling them to raise wages to match productivity, but we often haven’t actually had competitive markets; we’ve had oligopolies that consolidate market power in a handful of corporations. We had Standard Oil before, and we have Microsoft now. (Did you know that Microsoft not only owns more than half the consumer operating system industry, but after acquiring Activision Blizzard, is now the largest video game company in the world?) In the presence of an oligopoly, the owners of the capital will reap the gains from capital-intensive productivity.

But standards of living did rise. So what happened?

The answer is that production didn’t just become capital-intensive. It became human-capital-intensive.

More and more jobs required skills that an average person didn’t have. This created incentives for expanding public education, making workers not just more productive, but also more aware of how things work and in a stronger bargaining position.

Today, it’s very clear that the jobs which are most human-capital-intensive—like doctors, lawyers, researchers, and software developers—are the ones with the highest pay and the greatest social esteem. (I’m still not 100% sure why stock traders are so well-paid; it really isn’t that hard to be a stock trader. I could write you an algorithm in 50 lines of Python that would beat the average trader (mostly by buying ETFs). But they pretend to be human-capital-intensive by hiring Harvard grads, and they certainly pay as if they are.)
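To make that parenthetical concrete, here is a toy sketch of the “mostly by buying ETFs” idea. Everything in it is an illustrative assumption (simulated returns, a guessed per-trade cost), not market data; the point is just that, facing identical returns, a trader who churns the portfolio loses the compounded cost of trading, while the buy-and-hold investor does not:

```python
import random

random.seed(0)
DAYS = 2520  # roughly ten years of trading days
# Simulated daily market returns (illustrative, not real data).
returns = [random.gauss(0.0003, 0.012) for _ in range(DAYS)]

def grow(returns, churn=0.0, trade_cost=0.001):
    """Grow $1 through a stream of daily returns.

    `churn` is the fraction of the portfolio turned over each day;
    every traded dollar pays `trade_cost` in fees and spread.
    """
    wealth = 1.0
    for r in returns:
        wealth *= 1 + r                   # market return
        wealth *= 1 - churn * trade_cost  # drag from trading costs
    return wealth

passive = grow(returns)             # buy an index ETF and hold
active = grow(returns, churn=0.20)  # same returns, constant churn

print(f"passive: ${passive:.2f}  active: ${active:.2f}")
```

With no edge in picking returns, the churning trader necessarily finishes behind by exactly the compounded trading-cost drag (about 40% over this horizon); that, in miniature, is why index funds so often beat the average active trader.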

The most capital-intensive industries—like factory work—are reasonably well-paid, but not that well-paid, and actually seem to be rapidly disappearing as the capital simply replaces the workers. Factory worker productivity is now staggeringly high thanks to all this automation, but the workers themselves have captured only a small fraction of that increase in the form of higher wages; by far the bigger effects have been increased profits for the capital owners and reduced employment in manufacturing.

And of course the real money is all in capital ownership. Elon Musk doesn’t have $400 billion because he’s a great engineer who works very hard. He has $400 billion because he owns a corporation that is extremely highly valued (indeed, clearly overvalued) in the stock market. Maybe being a great engineer or working very hard helped him get there, but it was neither necessary nor sufficient (and I’m sure that his dad’s emerald mine also helped).

Indeed, this is why I’m so worried about artificial intelligence.

Most forms of automation replace labor, in the conventional labor-intensive sense: Because you have factory robots, you need fewer factory workers; because you have mountaintop removal, you need fewer coal miners. It takes fewer people to do the same amount of work. But you still need people to plan and direct the process, and in fact those people need to be skilled experts in order to be effective—so there’s a complementarity between automation and human capital.

But AI doesn’t work like that. AI substitutes for human capital. It doesn’t just replace labor; it replaces expertise.

So far, AI is too unreliable to replace any but entry-level workers in human-capital-intensive industries (though there is some evidence it’s already doing that). But it will most likely get more reliable over time, if not via the current LLM paradigm, then through the next one that comes after. At some point, AI will come to replace experienced software developers, and then veteran doctors—and I don’t think we’ll be ready.

The long-term pattern here seems to be transitioning away from human-capital-intensive production to purely capital-intensive production. And if we don’t change the fact that capital ownership is heavily concentrated and so many of our markets are oligopolies—which we absolutely do not seem poised to do anything about; Democrats do next to nothing and Republicans actively and purposefully make it worse—then this transition will be a recipe for even more staggering inequality than before, where the rich will get even more spectacularly mind-bogglingly rich while the rest of us stagnate or even see our real standard of living fall.

The tech bros promise us that AI will bring about a utopian future, but that would only work if capital ownership were equally shared. If they continue to own all the AIs, they may get a utopia—but we sure won’t.

We can’t all be Taylor Swift. (And if AI music catches on, even she may not be irreplaceable much longer.)