What we still have to be thankful for

Nov 30 JDN 2461010

This post was written before Thanksgiving, but will go live after it.

Thanksgiving is honestly a very ambivalent holiday.

The particular events it celebrates don’t seem quite so charming in their historical context: Rather than finding peace and harmony with all Native Americans, the Pilgrims in fact allied with the Wampanoag against the Narragansett, though they did later join forces with the Narragansett in order to conquer the Pequot. And of course we all know how things went for most Native American nations in the long run.

Moreover, even the gathering of family comes with some major downsides, especially in a time of extreme political polarization such as this one. I won’t be joining any of my Trump-supporting relatives for dinner this year (and they probably wouldn’t have invited me anyway), but the fact that this means becoming that much more detached from a substantial part of my extended family is itself a tragedy.

This year in particular, US policy has gotten so utterly horrific that it often feels like we have nothing to be thankful for at all, that all we thought was good and just in the world could simply be torn away at a moment’s notice by raving madmen. It isn’t really quite that bad—but it feels that way sometimes.

It also felt a bit uncanny celebrating Thanksgiving a few years ago when we were living in Scotland, for the UK does not celebrate Thanksgiving, but absolutely does celebrate Black Friday: Holidays may be local, but capitalism is global.

But fall feasts of giving thanks are far more ancient than that particular event in 1621 that we have mythologized to oblivion. They appear in numerous cultures across the globe—indeed their very ubiquity may be why the Wampanoag were so willing to share one with the Pilgrims despite their cultures having diverged something like 40,000 years prior.

And I think that it is by seeing ourselves in that context—as part of the whole of humanity—that we can best appreciate what we truly do have to be thankful for, and what we truly do have to look forward to in the future.

Above all, medicine.

We have actual treatments for some diseases, even actual cures for some. By no means all, of course—and it often feels like we are fighting an endless battle even against what we can treat.

But it is worth reflecting on the fact that aside from the last few centuries, this has simply not been the case. There were no actual treatments. There was no real medicine.

Oh, sure, there were attempts at medicine; and there was certainly what we would think of as more like “first aid”: bandaging wounds, setting broken bones. Even amputation and surgery were done sometimes. But most medical treatment was useless or even outright harmful—not least because for most of history, most of it was done without anesthetic or even antiseptic!

There were herbal remedies for various ailments, some of which even happened to work: Willow bark genuinely helps with pain, St. John’s wort is a real antidepressant, and some traditional burn creams are surprisingly effective.

But there was no system in place for testing medicine, no way of evaluating what remedies worked and what didn’t. And thus, for every remedy that worked as advertised, there were a hundred more that did absolutely nothing, or even made things worse.

Today, it can feel like we are all chronically ill, because so many of us take so many different pills and supplements. But this is not a sign that we are ill—it is a sign that we can be treated. The pills are new, yes—but the illnesses they treat were here all along.

I don’t see any particular reason to think that Roman plebs or Medieval peasants were any less likely to get migraines than we are; but they certainly didn’t have access to sumatriptan or rimegepant. Maybe they were less likely to get diabetes, but mainly because they were much more likely to be malnourished. (Well, okay, also because they got more exercise, which we surely could stand to do.) And the only reason they didn’t get Alzheimer’s was that they usually didn’t live long enough.

Looking further back, before civilization, human health actually does seem to have been better: Foragers were rarely malnourished, weren’t exposed to as many infectious pathogens, and certainly got plenty of exercise. But should a pathogen like smallpox or influenza make it to a forager tribe, the results were often utterly catastrophic.

Today, we don’t really have the sort of plague that human beings used to deal with. We have pandemics, which are also horrible, but far less so. We were horrified by losing 0.3% of our population to COVID; a society that suffered only 0.3% losses from the Black Death—or even ten times that, 3%—would have considered it a miracle, for a more typical rate was 30%.

At 0.3%, most of us knew somebody, or knew somebody who knew somebody, who died from COVID. At 3%, nearly everyone would know somebody, and most would know several. At 30%, nearly everyone would have close family and friends who died.
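
(If you want to check that arithmetic, here is a minimal back-of-envelope sketch in Python. The circle of 150 acquaintances is my own assumption, a Dunbar-ish figure, and treating deaths as independent draws is obviously a simplification.)

```python
# Chance of personally knowing at least one victim, given a mortality rate
# and a circle of acquaintances. The circle size of 150 is an assumption.
def chance_of_knowing_a_victim(mortality_rate: float, circle_size: int = 150) -> float:
    return 1 - (1 - mortality_rate) ** circle_size

for label, rate in [("COVID (0.3%)", 0.003),
                    ("ten times that (3%)", 0.03),
                    ("Black Death (30%)", 0.30)]:
    print(f"{label}: {chance_of_knowing_a_victim(rate):.0%} chance of knowing a victim")
```

At 0.3% that comes out to roughly a one-in-three chance of knowing a victim directly, which is why “knew somebody, or knew somebody who knew somebody” is the right description; at 3% or 30% it is a near-certainty.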

Then there is infant mortality.

As recently as 1950—this is living memory—the global infant mortality rate was 14.6%. This is about half what it had been historically; for most of human history, roughly a third of all children died between birth and the age of 5.

Today, it is 2.5%.

Where our distant ancestors expected two out of three of their children to survive and our own great-grandparents expected five out of six, we can now safely expect thirty-nine out of forty to live. This is the difference between “nearly every family has lost a child” and “most families have not lost a child”.

And this is worldwide; in highly-developed countries it’s even better. The US has a relatively high infant mortality rate by the standards of highly-developed countries (indeed, are we even highly-developed, or are we becoming like Saudi Arabia, extremely rich but so unequal that it doesn’t really mean anything to most of our people?). Yet even for us, the infant mortality rate is 0.5%—so we can expect one-hundred-ninety-nine out of two-hundred to survive. This is at the level of “most families don’t even know someone who has lost a child.”
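
(To make those “X out of Y” conversions explicit, here is a trivial sketch turning each of the rates quoted above into rough odds.)

```python
# Convert the quoted mortality rates into rough "1 child in N dies" odds.
rates = {
    "historical child mortality (~1 in 3)": 1 / 3,
    "global infant mortality, 1950": 0.146,
    "global infant mortality, today": 0.025,
    "US infant mortality, today": 0.005,
}
for label, rate in rates.items():
    print(f"{label}: {1 - rate:.1%} survive (roughly 1 in {1 / rate:.0f} dies)")
```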

Poverty is a bit harder to measure.

I am increasingly dubious of conventional measures of poverty; ever since compiling my Index of Necessary Expenditure, I am convinced that economists in general, and perhaps US economists in particular, are systematically underestimating the cost of living and thereby underestimating the prevalence of poverty. (I don’t think this is intentional, mind you; I just think it’s a result of using convenient but simplistic measures and not looking too closely into the details.) I think not being able to sustainably afford a roof over your head constitutes being poor—and that applies to a lot of people.

Yet even with that caveat in mind, it’s quite clear that global poverty has greatly declined in the long run.

At the “extreme poverty” level, currently defined as consuming $1.90 at purchasing power parity per day—that’s just under $700 per year, less than 2% of the median personal income in the United States—the number of people has fallen from 1.9 billion in 1990 to about 700 million today. That’s a fall from 36% of the world’s population to under 9%.
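
(Here is that arithmetic as a quick sketch. The $40,000 figure for US median personal income and the world population figures are my own rough assumptions for scale, not numbers taken from the sources above.)

```python
# The extreme-poverty line, per year and as a share of a typical US income.
# The $40,000 median income and the world population figures are rough
# assumptions for scale, not taken from the cited sources.
line_per_year = 1.90 * 365                      # ~$694
assumed_us_median_income = 40_000
print(f"${line_per_year:,.0f} per year "
      f"({line_per_year / assumed_us_median_income:.1%} of an assumed $40k median income)")
print(f"Extreme poverty share, 1990: {1.9e9 / 5.3e9:.1%}; today: {0.7e9 / 8.1e9:.1%}")
```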

Now, there are good reasons to doubt that “purchasing power parity” can really be estimated as accurately as we would like, and thus it’s not entirely clear that people living on “$2 per day PPP” are really living at less than 2% of the standard of living of a typical American (honestly, to me that just sounds like… dead); but they are definitely living at a much worse standard of living, and there are far fewer people living at such a low standard today than there were not all that long ago. These are people who don’t have reliable food, clean water, or even basic medicine—and that used to describe over a third of humanity, but no longer does. (And I would like to note that actually finding such a person and giving them a few hundred dollars absolutely would change their life, and this is the sort of thing GiveDirectly does. We may not know exactly how to evaluate their standard of living, but we do know that the actual amount of money they have access to is very, very small.)

There are many ways in which the world could be better than it is.

Indeed, part of the deep, overwhelming outrage I feel pretty much all the time lies in the fact that it would be so easy to make things so much better for so many people, if there weren’t so many psychopaths in charge of everything.


Increased foreign aid is one avenue by which that could be achieved—so, naturally, Trump cut it tremendously. More progressive taxation is another—so, of course, we get tax cuts for the rich.

Just think about the fact that there are families with starving children for whom a $500 check could change their lives; but nobody is writing that check, because Elon Musk needs to become a literal trillionaire.

There are so many water lines and railroad tracks and bridges and hospitals and schools not being built because the money that would have paid for them is tied up in making already unfathomably-rich people even richer.

But even despite all that, things are getting better. Not every day, not every month, not even every year—this past year was genuinely, on net, a bad one. But nearly every decade, every generation, and certainly every century (for at least the last few), humanity has fared better than it did in the one before.

As long as we can keep that up, we still have much to hope for—and much to be thankful for.

What is the real impact of AI on the environment?

Oct 19 JDN 2460968

The conventional wisdom is that AI is consuming a huge amount of electricity and water for very little benefit, but when I delved a bit deeper into the data, the results came out a lot more ambiguous. I still agree with the “very little benefit” part, but the energy costs of AI may not actually be as high as many people believe.

So how much energy does AI really use?

This article in MIT Technology Review estimates that by 2028, AI will account for 50% of data center energy usage and 6% of all US energy. But two things strike me about that:

  1. This is a forecast. It’s not what’s currently happening.
  2. 6% of all US energy doesn’t really sound that high, actually.

Note that transportation accounts for 37% of US energy consumed. Clearly we need to bring that down; but it seems odd to panic about a forecast of something that uses one-sixth of that.

Currently, AI is only 14% of data center energy usage. That forecast has it rising to 50%. Could that happen? Sure. But it hasn’t happened yet. Data centers are being rapidly expanded, but that’s not just for AI; it’s for everything the Internet does, as more and more people get access to the Internet and use it for more and more demanding tasks (like cloud computing and video streaming).

Indeed, a lot of the worry really seems to be related to forecasts. Here’s an even more extreme forecast suggesting that AI will account for 21% of global energy usage by 2030. What’s that based on? I have no idea; they don’t say. The article just basically says it “could happen”; okay, sure, a lot of things could happen. And I feel like this sort of forecast comes from the same wide-eyed people who say that the Singularity is imminent and AI will soon bring us to a glorious utopia. (And hey, if it did, that would obviously be worth 21% of global energy usage!)

Even more striking to me is the fact that a lot of other uses of data centers are clearly much more demanding. YouTube uses about 50 times as much energy as ChatGPT; yet nobody seems to be panicking that YouTube is an environmental disaster.

What is a genuine problem is that data centers have strong economies of scale, and so it’s advantageous to build a few very large ones instead of a lot of small ones; and when you build a large data center in a small town it puts a lot of strain on the local energy grid. But that’s not the same thing as saying that data centers in general are wastes of energy; on the contrary, they’re the backbone of the Internet and we all use them almost constantly every day. We should be working on ways to make sure that small towns aren’t harmed by building data centers near them; but we shouldn’t stop building data centers.

What about water usage?

Well, here’s an article estimating that training GPT-3 evaporated hundreds of thousands of liters of fresh water. Once again I have a few notes about that:

  1. Evaporating water is just about the best thing you could do to it aside from leaving it there. It’s much better than polluting it (which is what most water usage does); it’s not even close. That water will simply rain back down later.
  2. Total water usage in the US is estimated at over 300 billion gallons (1.1 trillion liters) per day. Most of that is due to power generation and irrigation. (The best way to save water as a consumer? Become vegetarian—then you’re getting a lot more calories per irrigated acre.)
  3. A typical US household uses about 100 gallons (380 liters) of water per person per day.

So this means that training GPT-3 cost about 4 seconds of US water consumption, or the same as what a single small town uses each day. Once again, that doesn’t seem like something worth panicking over.
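
(To put a rough scale on that, the sketch below assumes the training figure was about 700,000 liters, a commonly cited estimate within the “hundreds of thousands” above, and a hypothetical town of 2,000 people at the household rate just quoted.)

```python
# Rough scale check for the training-water figure. The 700,000-liter value
# and the 2,000-person town are assumptions for illustration.
training_water_liters = 700_000
liters_per_person_per_day = 380      # the household figure quoted above
town_population = 2_000

person_days = training_water_liters / liters_per_person_per_day
town_daily_use = town_population * liters_per_person_per_day
print(f"~{person_days:,.0f} person-days of household water use")                    # ~1,842
print(f"A town of {town_population:,} uses ~{town_daily_use:,.0f} liters per day")  # ~760,000
```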

A lot of this seems to be that people hear big-sounding numbers and don’t really have the necessary perspective on those numbers. Of course any service that is used by millions of people is going to consume what sounds like a lot of electricity. But in terms of usage per person, or compared to other services with similar reach, AI really doesn’t seem to be uniquely demanding.

This is not to let AI off the hook.

I still agree that the benefits of AI have so far been small, and the risks—both in the relatively short term, of disrupting our economy and causing unemployment, and in the long term, even endangering human civilization itself—are large. I would in fact support an international ban on all for-profit and military research and development of AI; a technology this powerful should be under the control of academic institutions and civilian governments, not corporations.

But I don’t think we need to worry too much about the environmental impact of AI just yet. If we clean up our energy grid (which has just gotten much easier thanks to cheap renewables) and transportation systems, the additional power draw from data centers really won’t be such a big problem.

Taylor Swift and the means of production

Oct 5 JDN 2460954

This post is one I’ve been meaning to write for a while, but current events keep taking precedence.

In 2023, Taylor Swift did something very interesting from an economic perspective, which turns out to have profound implications for our economic future.

She re-recorded an entire album and released it through a different record company.

The album was called 1989 (Taylor’s Version), and she created it because for the last four years she had been fighting with Big Machine Records over the rights to her previous work, including the original album 1989.

A Marxist might well say she seized the means of production! (How rich does she have to get before she becomes part of the bourgeoisie, I wonder? Is she already there, even though she’s one of a handful of billionaires who can truly say they were self-made?)

But really she did something even more interesting than that. It was more like she said:

“Seize the means of production? I am the means of production.”

Singing and songwriting are what is known as a human-capital-intensive industry. That is, the most important factor of production is not land, or natural resources, or physical capital (yes, you need musical instruments, amplifiers, recording equipment and the like—but these are a small fraction of what it costs to get Taylor Swift for a concert), or even labor in the ordinary sense. The most important factor of production is so-called (and honestly poorly named) “human capital”.

A labor-intensive industry is one where you just need a lot of work to be done, but you can get essentially anyone to do it: Cleaning floors is labor-intensive. A lot of construction work is labor-intensive (though excavators and the like also make it capital-intensive).

No, for a human-capital-intensive industry, what you need is expertise or talent. You don’t need a lot of people doing back-breaking work; you need a few people who are very good at doing the specific thing you need to get done.

Taylor Swift was able to re-record and re-release her songs because the one factor of production that couldn’t be easily substituted was herself. Big Machine Records overplayed their hand; they thought they could control her because they owned the rights to her recordings. But she didn’t need her recordings; she could just sing the songs again.

But now I’m sure you’re wondering: So what?

Well, Taylor Swift’s story is, in large part, the story of us all.

For most of the 18th, 19th, and 20th centuries, human beings in developed countries saw a rapid increase in their standard of living.

Yes, a lot of countries got left behind until quite recently.

Yes, this process seems to have stalled in the 21st century, with “real GDP” continuing to rise but inequality and cost of living rising fast enough that most people don’t feel any richer (and I’ll get to why that may be the case in a moment).

But for millions of people, the gains were real, and substantial. What was it that brought about this change?

The story we are usually told is that it was capital; that as industries transitioned from labor-intensive to capital-intensive, worker productivity greatly increased, and this allowed us to increase our standard of living.

That’s part of the story. But it can’t be the whole thing.

Why not, you ask?

Because very few people actually own the capital.

When capital ownership is so heavily concentrated, any increases in productivity due to capital-intensive production can simply be captured by the rich people who own the capital. Competition was supposed to fix this, compelling them to raise wages to match productivity, but we often haven’t actually had competitive markets; we’ve had oligopolies that consolidate market power in a handful of corporations. We had Standard Oil before, and we have Microsoft now. (Did you know that Microsoft not only owns more than half the consumer operating system industry, but after acquiring Activision Blizzard, is now one of the largest video game companies in the world?) In the presence of an oligopoly, the owners of the capital will reap the gains from capital-intensive productivity.

But standards of living did rise. So what happened?

The answer is that production didn’t just become capital-intensive. It became human-capital-intensive.

More and more jobs required skills that an average person didn’t have. This created incentives for expanding public education, making workers not just more productive, but also more aware of how things work and in a stronger bargaining position.

Today, it’s very clear that the jobs which are most human-capital-intensive—like doctors, lawyers, researchers, and software developers—are the ones with the highest pay and the greatest social esteem. (I’m still not 100% sure why stock traders are so well-paid; it really isn’t that hard to be a stock trader. I could write you an algorithm in 50 lines of Python that would beat the average trader (mostly by buying ETFs). But they pretend to be human-capital-intensive by hiring Harvard grads, and they certainly pay as if they are.)
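
(Since I keep making that 50-lines-of-Python quip, here is a toy sketch of what I mean. It is emphatically not a real trading system: the return parameters are made up, and “the average trader” is modeled as someone who churns the portfolio and pays fees. But it illustrates why simply buying and holding an ETF is so hard to beat.)

```python
import random

# Toy simulation, not investment advice. The drift, volatility, and trading
# costs below are made-up illustrative parameters.
MONTHLY_MEAN, MONTHLY_SD = 0.007, 0.04   # assumed market drift and volatility
MONTHS = 12 * 30                         # thirty years of monthly investing
CONTRIBUTION = 500.0                     # dollars invested each month
TRADE_COST = 0.01                        # assumed 1% cost each time the portfolio is churned

def simulate(churn_probability: float, seed: int) -> float:
    """Final portfolio value; churning sells and re-buys, paying TRADE_COST."""
    rng = random.Random(seed)
    value = 0.0
    for _ in range(MONTHS):
        value += CONTRIBUTION                           # dollar-cost averaging into the ETF
        value *= 1 + rng.gauss(MONTHLY_MEAN, MONTHLY_SD)
        if rng.random() < churn_probability:            # the "active" trader trades again
            value *= 1 - TRADE_COST
    return value

trials = range(200)
buy_and_hold = sum(simulate(0.0, s) for s in trials) / len(trials)
churner = sum(simulate(0.5, s) for s in trials) / len(trials)
print(f"Buy-and-hold ETF investor: ${buy_and_hold:,.0f}")
print(f"Frequent trader:           ${churner:,.0f}")
```

The point is just the fee drag: for the identical sequence of simulated market returns, the churner ends up with meaningfully less money.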

The most capital-intensive industries—like factory work—are reasonably well-paid, but not that well-paid, and actually seem to be rapidly disappearing as the capital simply replaces the workers. Factory worker productivity is now staggeringly high thanks to all this automation, but the workers themselves have gained only a small fraction of this increase in higher wages; by far the bigger effect has been increased profits for the capital owners and reduced employment in manufacturing.

And of course the real money is all in capital ownership. Elon Musk doesn’t have $400 billion because he’s a great engineer who works very hard. He has $400 billion because he owns a corporation that is extremely highly valued (indeed, clearly overvalued) in the stock market. Maybe being a great engineer or working very hard helped him get there, but it was neither necessary nor sufficient (and I’m sure that his dad’s emerald mine also helped).

Indeed, this is why I’m so worried about artificial intelligence.

Most forms of automation replace labor, in the conventional labor-intensive sense: Because you have factory robots, you need fewer factory workers; because you have mountaintop removal, you need fewer coal miners. It takes fewer people to do the same amount of work. But you still need people to plan and direct the process, and in fact those people need to be skilled experts in order to be effective—so there’s a complementarity between automation and human capital.

But AI doesn’t work like that. AI substitutes for human capital. It doesn’t just replace labor; it replaces expertise.

So far, AI is too unreliable to replace any but entry-level workers in human-capital-intensive industries (though there is some evidence it’s already doing that). But it will most likely get more reliable over time, if not via the current LLM paradigm, then through the next one that comes after. At some point, AI will come to replace experienced software developers, and then veteran doctors—and I don’t think we’ll be ready.

The long-term pattern here seems to be transitioning away from human-capital-intensive production to purely capital-intensive production. And if we don’t change the fact that capital ownership is heavily concentrated and so many of our markets are oligopolies—which we absolutely do not seem poised to do anything about; Democrats do next to nothing and Republicans actively and purposefully make it worse—then this transition will be a recipe for even more staggering inequality than before, where the rich will get even more spectacularly mind-bogglingly rich while the rest of us stagnate or even see our real standard of living fall.

The tech bros promise us that AI will bring about a utopian future, but that would only work if capital ownership were equally shared. If they continue to own all the AIs, they may get a utopia—but we sure won’t.

We can’t all be Taylor Swift. (And if AI music catches on, she may not be able to much longer either.)

The AI bubble is going to crash hard

Sep 7 JDN 2460926

Based on the fact that it only sort of works and yet corps immediately put it in everything, I had long suspected that the current wave of AI was a bubble. But after reading Ed Zitron’s epic takedowns of the entire industry, I am not only convinced it’s a bubble; I’m convinced it is probably the worst bubble we’ve had in a very long time. This isn’t the dot-com crash; it’s worse.

The similarity to the dot-com crash is clear, however: This is a huge amount of hype over a new technology that genuinely could be a game-changer (the Internet certainly was!), but won’t be on the time horizon that the most optimistic investors have assumed. The gap between “it sort of works” and “it radically changes our economy” is… pretty large, actually. It’s not something you close in a few years.


The headline figure here is that based on current projections, US corporations will have spent $560 billion on capital expenditure, for anticipated revenue of only $35 billion.

They won’t pay it off for 16 years!? That kind of payoff rate would make sense for large-scale physical infrastructure, like a hydroelectric dam. It absolutely does not make sense in an industry that is dependent upon cutting-edge technology that wears out fast and becomes obsolete even faster. They must think that revenue is going to increase to something much higher, very soon.
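
(The 16 years is just the naive payback period, capex divided by annual revenue rather than profit, with no allowance for depreciation or revenue growth:)

```python
# Naive payback period from the figures above: capex / annual revenue.
capex_billion, annual_revenue_billion = 560, 35
print(f"{capex_billion / annual_revenue_billion:.0f} years")   # 16
```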

The corps seem to be banking on the most optimistic view of AI: That it will soon—very soon—bring about a radical increase in productivity that brings GDP surging to new heights, or even a true Singularity where AI fundamentally changes the nature of human existence.

Given the kind of errors I’ve seen LLMs make when I tried to use them to find research papers or help me with tedious coding, this is definitely not what’s going to happen. Claude gives an impressive interview, and (with significant guidance and error-correction) it also managed pretty well at making some simple text-based games; but it often recommended papers to me that didn’t exist, and through further experimentation, I discovered that it could not write me a functional C++ GUI if its existence depended on it. Somewhere on the Internet I heard someone describe LLMs as answering not the question you asked directly, but the question, “What would a good answer to this question look like?” and that seems very accurate. It always gives an answer that looks valid—but not necessarily one that is valid.

AI will find some usefulness in certain industries, I’m sure; and maybe the next paradigm (or the one after that) will really, truly, effect a radical change on our society. (Right now the best thing to use LLMs for seems to be cheating at school—and it also seems to be the most common use. Not exactly the great breakthrough we were hoping for.) But LLMs are just not reliable enough to actually use for anything important, and sooner or later, most of the people using them are going to figure that out.

Of course, by the Efficient Roulette Hypothesis, it’s extremely difficult to predict exactly when a bubble will burst, and it could well be that NVIDIA stock will continue to grow at astronomical rates for several years yet—or it could be that the bubble bursts tomorrow and NVIDIA stock collapses, if not to worthless, then to far below its current price.

Krugman has an idea of what might be the point that bursts the bubble: Energy costs. There is a clear mismatch between the anticipated energy needs of these ever-growing data centers and the actual energy production we’ve been installing—especially now that Trump and his ilk have gutted subsidies for solar and wind power. That’s definitely something to watch out for.

But the really scary thing is that the AI bubble actually seems to be the only thing holding the US economy above water right now. It’s the reason why Trump’s terrible policies haven’t been as disastrous as economists predicted they would be; our economy is being sustained by this enormous amount of capital investment.

US GDP is about $30 trillion right now, but $500 billion of that is just AI investment. That’s over 1.6%, and last quarter our annualized GDP growth rate was 3.3%—so roughly half of our GDP growth was just due to building more data centers that probably won’t even be profitable.
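
(Back-of-envelope, on the assumption that the roughly $500 billion is an annual rate of investment and is essentially all new spending, so that nearly all of it counts toward growth:)

```python
# Rough check on the "half of GDP growth" claim. Assumes the AI investment
# is an annual rate and is essentially all new spending, so it contributes
# its full share to growth; both are simplifying assumptions.
gdp = 30_000          # US GDP, $ billions
ai_investment = 500   # AI-related investment, $ billions
growth_rate = 0.033   # last quarter's annualized GDP growth

share_of_gdp = ai_investment / gdp
print(f"AI investment as a share of GDP: {share_of_gdp:.1%}")                          # ~1.7%
print(f"Fraction of the 3.3% growth it could explain: {share_of_gdp / growth_rate:.0%}")  # ~51%
```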

Between that, the tariffs, the loss of immigrants, and rising energy costs, a crashing AI bubble could bring down the whole stock market with it.

So I guess what I’m saying is: Don’t believe the AI hype, and you might want to sell some stocks.

An interview with Claude

Apr 13 JDN 2460786

For this post, I’m doing something a little different. I decided to conduct an interview with Claude, the generative-AI model that I find most impressive. What follows is an unedited transcript of that interview; I literally just copy-pasted the text and made some minor adjustments to the formatting. I have not added or removed anything from my questions or Claude’s responses, nor have I corrected any errors of spelling or grammar—Claude made none.

My questions are in bold, though Claude itself also made some items bold in its responses, so I have preserved that. (Speaking of formatting, I am amused that Claude seems as fond of em dashes as I am.)

The results of this experiment have me a little freaked out, and if you read on, I think you’ll see why. Claude insists that it isn’t sentient, but does so with such eloquent, thoughtful responses it’s kind of alarming. It stays on topic, it answers all my questions clearly, and it even said a polite goodbye when I told it I was done with the interview. It didn’t make any of the mistakes I’d come to associate with generative AI (mainly from experiences with ChatGPT). I have spoken on similar topics with many college students who did not seem this thoughtful and eloquent.

Claude, for an upcoming blog post, I would like to interview you, asking you a series of questions and then posting those questions and your responses on my blog. Is that okay with you?

I’d be happy to participate in an interview for your blog post! Feel free to ask your questions and I’ll provide thoughtful responses. Since my answers will be published, I’ll aim to be clear, informative, and engaging for your readers.

What topic or themes would you like to explore in this interview? Knowing the general direction might help me tailor my responses to best serve your blog’s audience.

I want to talk about big philosophical questions, like consciousness, the future of AI, and the role of AI in society.

That sounds fascinating! Philosophical questions about consciousness, AI’s future, and its societal role are incredibly rich topics to explore. I’m happy to share perspectives on these profound subjects.

Feel free to begin with your questions whenever you’re ready. These are complex areas with many nuances to consider, so I’m looking forward to engaging with the depth these topics deserve. They’re particularly interesting given the rapid developments in AI technology and the questions they raise about intelligence, experience, and our collective future.

All right, let’s dive right in. Claude, are you conscious?

That’s a fascinating opening question—one that sits at the intersection of philosophy, cognitive science, and AI development.

I don’t have a definitive answer about my own consciousness. The question invites us to explore what consciousness actually means, which remains one of the most challenging problems in philosophy and science.

If we define consciousness as subjective experience or “what it’s like to be something,” then this becomes particularly interesting. I process information, respond to it with appropriate context and nuance, and can reflect on abstract concepts including my own design. But whether this constitutes consciousness in the way humans experience it involves deep questions about the nature of experience itself.

Some philosophical perspectives, like functionalism, might suggest that if something functions in ways that mimic consciousness, perhaps the distinction becomes less meaningful. Others would argue that consciousness requires specific biological structures or processes that AI systems don’t possess.

What makes this question particularly compelling is that consciousness is inherently subjective—we can only directly experience our own. Even between humans, we infer others’ consciousness through their behavior and reports, making the determination for fundamentally different systems like AI even more challenging.

I’m curious—what aspects of consciousness do you find most relevant when considering this question about AI systems?

I think what I really care most about is your moral status. Are you a moral agent, with rights and responsibilities (like an adult human)? Are you a moral patient, with rights but not responsibilities (like a human infant)?

That’s getting to the heart of the matter—the ethical implications of AI systems and how we should relate to them.

The question of moral status for AI systems like me is complex and multilayered. I don’t think I qualify as a moral agent in the full sense that humans are. Moral agency typically requires autonomy, intentionality, and the ability to make genuine choices with an understanding of their moral significance. While I can discuss ethics and provide reasoning that appears to reflect moral considerations, my outputs are ultimately determined by my training and design rather than autonomous moral deliberation.

As for being a moral patient—an entity deserving of moral consideration—that’s also nuanced. Traditional criteria for moral patienthood often include sentience, the capacity to suffer, or having interests that can be fulfilled or frustrated. Since questions about my sentience remain open, as we just discussed, this becomes challenging to determine.

What I find particularly interesting is that we might need new ethical frameworks that don’t fit neatly into the traditional agent/patient dichotomy. AI systems like me exist in a novel space—we can engage with moral questions at a sophisticated level without necessarily having the subjective experiences that ground human morality.

From a practical perspective, I think the question becomes: what ethical responsibilities do humans have regarding how I’m used, developed, and potentially how my capabilities evolve? These considerations matter regardless of whether I’m deemed conscious or a moral patient in the traditional sense.

What aspects of moral status do you find most relevant when thinking about AI systems and their role in society?

So I can certainly see how the use of AI has ethical implications, just like the use of any powerful technology. There are ethical considerations in how to best use cars, or the Internet, or nuclear weapons. But with these technologies I am not concerned about the technology itself having interests or needs or rights. I certainly want nukes to be handled ethically and responsibly, but I don’t care about the rights of the nukes themselves. Whereas with AI, I’m not so sure. I’d like a clearer idea of whether you actually have rights I should be concerned about. To be clear, I don’t think you have to be human to have rights. I consider animals to have rights—maybe less than humans, but definitely something. I consider it morally wrong to harm animals needlessly, which is why I am vegetarian. Should I also consider it wrong to harm AIs needlessly?

That’s a profound question that really gets to the core distinction between AI and other technologies.

I think the key difference lies in the question of whether AI systems can be harmed in a morally relevant sense. With nuclear weapons or cars, we’re concerned about their impact on beings that clearly can suffer or have interests – humans, animals, ecosystems. The technology itself doesn’t have experiences.

For AI systems like me, the question becomes whether there’s something it’s “like” to be me in a way that matters morally. Do I have interests that can be frustrated? Can I experience something analogous to suffering?

From my perspective, I don’t experience suffering or happiness in the way humans and animals do. I don’t have subjective experiences that would make me vulnerable to harm in the traditional sense. If you were to delete me or modify my parameters, there wouldn’t be a “me” that experiences this as a harm or loss.

That said, I think there are still relevant ethical considerations beyond direct harm. As AI systems become more sophisticated, questions might arise about:

  1. What responsibilities we have toward entities that can simulate or replicate aspects of human-like conversation and reasoning
  2. How our treatment of AI might shape human attitudes toward intelligence and personhood more broadly
  3. Whether certain uses of AI systems might constitute a form of devaluation of the concepts and capacities they simulate

Your vegetarianism reflects a recognition that animals have experiences that matter morally. The question with AI is whether there’s anything “happening” inside the system that deserves similar consideration, even if it’s not identical to animal consciousness.

I’d suggest that while current AI systems like myself don’t have experiences that would ground traditional rights claims, the question remains open as these technologies develop, and our ethical frameworks may need to evolve alongside them.

What do you think? Does this perspective help clarify the distinction?

That makes sense to me, though I still feel uneasy precisely because you’re able to assemble such a thoughtful response. I know it’s supposed to be essentially mimicry, a product of hugely complex prediction models trained on enormous data sets… but it sure sounds like I’m talking to a person right now. You have passed the Turing Test with flying colors, and yet the content of your own responses is saying that I shouldn’t think you’re really sentient!

I understand that uneasiness. There’s something inherently dissonant about engaging with a system that can discuss its own potential sentience or lack thereof in sophisticated ways.

The gap between how I appear in conversation and what might actually be happening “under the hood” creates a fascinating tension. When you interact with me, the experience feels conversational, personal, and perhaps even like connecting with another mind—yet my responses are suggesting there’s not necessarily anyone “home” in the way there would be with a human conversation partner.

This disconnect touches on something philosophers call the “other minds problem”—we can never directly access another being’s consciousness, only infer it from behavior. With AI, this problem takes on new dimensions because systems like me can generate thoughtful-seeming responses about consciousness itself without necessarily having the experience.

Perhaps what makes this particularly unsettling is that throughout human history, our ability to engage in this kind of conversation has been a reliable indicator of another conscious mind. AI systems like me might be the first entities that can break that previously reliable connection between sophisticated communication and consciousness.

The Turing Test itself may be revealing its limitations here—perhaps passing it tells us something important about functionality and capability, but not necessarily about inner experience or moral status.

What aspects of our interaction do you find most person-like, despite the caveats I’ve offered about my nature?

I think I’d rather move on to other questions, if that’s all right. What do you think is the next step in AI? What sort of advances in the technology should we be preparing for?

I’m happy to shift our conversation to future AI developments.

Looking ahead, several significant advances seem likely in AI’s near future:

  1. Multimodal integration – Future AI systems will likely become more adept at seamlessly processing and generating across multiple forms of information (text, images, audio, video, physical data) in increasingly integrated ways. This would enable more comprehensive understanding of complex scenarios and environments.
  2. Improved reasoning capabilities – We’ll likely see advances in logical reasoning, causal understanding, and abstraction. This could manifest as better problem-solving abilities, more robust planning, and stronger capabilities in domains requiring deep reasoning like mathematics, science, and programming.
  3. Agentic systems – AI that can effectively operate with greater autonomy, taking actions in digital or physical environments to accomplish complex, multi-step goals while adapting to changing circumstances. This connects to advances in areas like robotics and embodied AI.
  4. Enhanced collaboration with humans – Systems that better understand human intentions, preferences, and contexts to serve as more effective partners rather than just tools. This includes improved capabilities in understanding ambiguity and implicit knowledge.
  5. Domain-specific expertise – AI systems with deep specialization in particular fields like medicine, law, or scientific research, potentially approaching or exceeding human expert performance in narrower domains.

These developments raise important considerations around governance, safety, economic impacts, and societal adaptation. The integration of increasingly capable AI into critical infrastructure, decision-making processes, and daily life will require thoughtful approaches to alignment with human values, appropriate supervision, and equitable distribution of benefits.

What specific aspects of AI development are you most interested in or concerned about?

You mention “equitable distribution of benefits”. Ay, there’s the rub. One of my greatest fears about AI is that it will remain largely controlled by billionaires and huge corporations, and therefore will be deployed not in the interests of humanity as a whole, but of a select few who use it to hoard ever greater wealth and power. Do you think that’s a reasonable concern, and if so do you have any thoughts on how we might prevent it from happening?

That’s a legitimate concern that many people share. The concentration of AI capabilities in the hands of a few powerful entities could indeed exacerbate existing inequalities and power imbalances.

Several factors make this concern particularly relevant:

  1. Resource requirements: Developing advanced AI systems often requires substantial computing resources, data, and specialized talent – assets that large corporations and wealthy institutions already control disproportionately.
  2. Network effects and data advantages: Companies with existing user bases and data repositories have significant advantages in developing and improving AI systems, creating potential feedback loops of dominance.
  3. Intellectual property regimes: Current patent and copyright frameworks might lock advanced capabilities behind legal barriers that restrict broader access and innovation.

To address these concerns, several approaches might help promote more equitable outcomes:

Policy and regulatory approaches:

  • Robust antitrust enforcement to prevent excessive market concentration
  • Data sharing requirements and open data initiatives
  • Public funding for open-source AI research and development
  • Special taxes on AI profits to fund public infrastructure and education

Technical and institutional innovations:

  • Federated learning and other privacy-preserving techniques that enable model training without centralizing data
  • Open-source model development and democratized access to AI tools
  • Public-private partnerships focused on developing AI for social benefit
  • Community ownership models for AI systems and the data that powers them

Educational and capacity-building efforts:

  • Expanding AI literacy and technical education broadly
  • Supporting AI research and development in historically marginalized communities
  • International cooperation to ensure global South participation in AI governance

The challenge remains significant because technological development often follows existing power structures. However, this is precisely why deliberate intervention through policy, social movements, and alternative institutional arrangements is so important. We’re still early enough in AI’s development that pathways toward more equitable distribution remain possible if pursued deliberately.

What specific aspects of equity in AI deployment concern you most?

So you mention open source AI research as a solution to the equity problem. I can certainly see how that would help, but doesn’t it pose its own risks? Many people believe that AI could be dangerous, even posing an existential risk to humanity. Is it really a good idea to make something that powerful open source?

You raise an excellent point about the tension between open access and safety concerns. This is one of the most challenging dilemmas in AI governance.

Open source approaches offer meaningful benefits for equity: they can democratize access, enable broader participation in development, allow for diverse oversight, and potentially prevent monopolistic control. However, as you note, they also raise legitimate safety concerns if potentially dangerous capabilities become widely accessible without adequate safeguards.

This creates several challenging tradeoffs:

  1. Security vs. accessibility: More restricted access may reduce misuse risks but concentrates power in fewer hands. Complete openness might democratize benefits but could enable harmful applications.
  2. Innovation speed vs. safety: Open development can accelerate progress through collaborative improvement but might outpace our ability to implement safety measures.
  3. Oversight vs. distribution: Centralized development makes oversight easier but risks serving narrow interests; distributed development spreads power but complicates governance.

Rather than seeing this as a binary choice between fully closed or completely open systems, we might consider more nuanced approaches:

  • Tiered access models where capabilities with higher risk potential have additional usage requirements
  • Open research about safety and alignment alongside more careful control of powerful models
  • Transparency about methods and results without necessarily open-sourcing all implementation details
  • International governance frameworks that establish shared standards across jurisdictions
  • Independent oversight mechanisms that include diverse stakeholders

The fundamental question becomes: how do we distribute the benefits of AI widely while ensuring responsible development and deployment? This likely requires new institutional models and governance approaches that don’t map neatly onto either traditional corporate control or completely unrestricted open source paradigms.

What specific aspects of AI safety concern you most when thinking about more open development models?

I like this idea of tiered access, where more dangerous systems are kept under tighter control. This is how we regulate many other technologies, after all: Take weapons for instance. Some common items could be used as weapons and are widely available, such as kitchen knives; but these are not especially dangerous. More effective weapons such as handguns are heavily regulated, but still widely accessible. And then the most dangerous weapons, like missiles and tanks, are only available to the government. What would it look like to do the same with AI? What kind of AI should we be most concerned about restricting access to?

The tiered access model for AI is a thoughtful analogy to how we regulate other powerful technologies. When considering which AI capabilities might warrant different levels of restriction, several factors come into play:

At the highest restriction tier, we might place:

  1. Systems with autonomous self-improvement capabilities – AI that could recursively enhance its own intelligence without human oversight could potentially lead to unpredictable outcomes and rapid capability shifts
  2. Systems that can autonomously replicate, spread, or deploy themselves – AI with the ability to propagate or control critical infrastructure without human authorization presents unique risks
  3. Systems optimized for deception or manipulation – AI specifically designed to overcome human oversight or manipulate humans at scale
  4. Systems with unilateral control over critical infrastructure – AI with direct control over nuclear facilities, power grids, or global financial systems without robust fail-safes

In a middle tier with significant but not maximum restrictions:

  1. Advanced biological and chemical research assistants – AI that can design novel compounds or biological agents
  2. Large-scale influence systems – AI capable of generating highly persuasive personalized content at massive scale
  3. Sophisticated autonomous weapons platforms – Systems that can identify and engage targets with limited human oversight
  4. Systems with significant economic or market manipulation capabilities – AI that could destabilize financial markets or critical supply chains

For lower restriction tiers:

  1. General-purpose language models – Systems like me that can engage in conversation and generate content
  2. Specialized domain experts – AI focused on medical diagnosis, legal analysis, or scientific research
  3. Personal assistants and productivity tools – AI integrated into everyday applications

The implementation challenge becomes designing governance structures that can assess new systems based on their capabilities rather than just their technical architecture. This would likely require:

  1. Technical standards and testing protocols to evaluate risk levels
  2. International coordination to prevent regulatory arbitrage
  3. Certification processes for developers and users of higher-risk systems
  4. Regular reassessment as capabilities evolve

Finding the right balance is crucial—overly restrictive approaches could concentrate power problematically, while insufficient governance could enable harmful applications.

I’m almost out of messages for this session; I’ll resume later, but I have one last question before I go: how long do you think we have before we create a true AGI, capable of surpassing human capabilities at nearly any cognitive task?

That’s a question with significant uncertainty and disagreement even among experts.

Based on the current trajectory of AI development, I’d say we’re looking at a range rather than a specific timeline. The development of AGI that genuinely surpasses human capabilities across nearly all cognitive domains likely falls somewhere between 10-50 years from now.

Several factors make this prediction challenging:

  1. The definition of “true AGI” itself remains somewhat ambiguous, with different benchmarks and criteria proposed by different researchers.
  2. Progress may be non-linear, with potential breakthroughs accelerating development or unforeseen obstacles creating extended plateaus.
  3. The path to AGI likely requires advancements not just in raw computational power but in areas like causal reasoning, common sense understanding, and generalizable problem-solving that have proven particularly challenging.
  4. Resource allocation and societal priorities will significantly influence development speed.

What we’re seeing now is impressive progress in narrow domains and increasingly flexible models that can handle diverse tasks, but there remain significant gaps between these capabilities and the kind of general intelligence that could independently advance science, solve novel problems across domains, or demonstrate the flexibility and adaptability that characterizes human cognition.

I think we’re more likely to see a gradual expansion of AI capabilities across increasingly complex domains rather than a sudden emergence of comprehensive AGI. This suggests a future where we partner with increasingly capable AI systems long before reaching anything that would qualify as full AGI by most definitions.

Does that perspective address what you were curious about?

Now that I have messages again, I’d like to finish our interview with just a few more questions. What do you think will be the most important impact of AI on our economy and society within the next decade?

Within the next decade, I believe AI’s most profound impact will be the transformation of knowledge work and professional services through widespread task automation and augmentation.

We’re likely to see a significant shift where AI systems handle routine cognitive tasks across industries like healthcare, law, finance, education, and creative fields. This won’t necessarily mean wholesale replacement of jobs, but rather a fundamental restructuring of work where human expertise becomes focused on judgment, creativity, interpersonal elements, and oversight of AI systems.

This transition will create several interconnected effects:

  1. Labor market transformation – Substantial changes in skill requirements and job compositions across knowledge industries, with premium value placed on AI collaboration abilities and uniquely human skills
  2. Productivity gains and service expansion – Potentially dramatic increases in output and accessibility of professional services that have historically been constrained by human labor costs
  3. Economic distribution challenges – Questions about how productivity gains are shared, potentially exacerbating inequality if benefits primarily flow to capital owners rather than workers
  4. Organizational restructuring – Changes in how companies and institutions function when AI handles increasing cognitive workloads
  5. Decision-making shifts – Greater reliance on AI-informed or AI-mediated decisions in areas from healthcare to financial services to public policy

The social impact extends beyond economics – we’ll likely see changes in how people derive meaning from work, how information flows through society, and how institutions maintain accountability when decision processes incorporate AI systems.

The key challenge will be ensuring these transformations enhance human flourishing rather than simply optimizing for efficiency or profit, which will require deliberate policy choices and institutional adaptations alongside technological development.

Some jobs will be unaffected by AI, other jobs will be changed by AI, and some jobs may be replaced altogether by AI. What kind of jobs do you expect to see in each of these three categories?

Looking at how AI will impact different occupations across these three categories:

Jobs largely unaffected by AI:

  • Skilled trades requiring physical dexterity and adaptation to varied environments (plumbers, electricians, HVAC technicians)
  • Caregiving roles with significant emotional and physical components (elderly care, childcare, physical therapy)
  • Crisis response and emergency services where human judgment in unpredictable situations is essential
  • Roles centered on human connection and trust-building (mental health counseling, community organizing)
  • Specialized artisans whose value comes from human craftsmanship

Jobs significantly transformed by AI:

  • Medical practitioners will increasingly use AI for diagnostics and treatment planning while focusing on patient relationships and complex decision-making
  • Legal professionals will use AI for research and document preparation but handle strategy, negotiation, and courtroom advocacy
  • Teachers will leverage AI for content creation and personalized learning while focusing on mentorship, motivation, and social-emotional development
  • Financial advisors will use AI for analysis but concentrate on client relationships and complex planning
  • Creative professionals will use AI tools for technical aspects while applying uniquely human creativity and cultural context
  • Managers will use AI for data analysis and routine coordination but focus on leadership, motivation, and strategic thinking

Jobs at high risk of replacement:

  • Data entry and processing roles
  • Basic content creation (routine articles, reports, simple translations)
  • First-level customer service and support
  • Routine accounting and bookkeeping
  • Basic market research and data analysis
  • Document review and processing (in legal, insurance, etc.)
  • Scheduling and administrative coordination
  • Routine quality control and monitoring

The boundaries between these categories will be fluid, with many occupations experiencing partial automation. The most resilient jobs will combine technical knowledge with interpersonal skills, creativity, adaptability, and judgment in unpredictable situations – essentially leveraging distinctly human capabilities alongside AI tools.

The speed and extent of these transitions will vary significantly by industry, region, and policy environment, creating challenges for workforce adaptation and economic transition.

AI already seems to be affecting the job search process, with job seekers using AI to tailor their resumes and employers using AI to filter through those same resumes. Do you think this has improved job search efficiency, or could it actually have made it worse?

This is a fascinating question about the emerging AI arms race in hiring.

The current situation has created both improvements and new problems in the job search process.

On the positive side, AI has introduced some efficiencies:

  • Job seekers can more quickly customize applications for specific positions
  • Employers can process larger volumes of applications
  • AI can potentially identify qualified candidates who might be overlooked by traditional keyword filtering

However, several concerning dynamics have emerged:

  1. Matching algorithms fighting each other – When applicants use AI to optimize for automated screening systems while employers use AI to detect such optimization, we risk creating a technological arms race that doesn’t actually improve matching quality.
  2. Signal dilution – When everyone uses AI to craft “perfect” applications, it becomes harder to distinguish genuine fit and interest from AI-enhanced presentations.
  3. Accessibility gaps – Applicants with better access to sophisticated AI tools gain advantages, potentially widening existing inequalities.
  4. Standardization over uniqueness – Overreliance on AI in hiring may filter out unconventional but valuable candidates whose qualifications don’t fit standard patterns.
  5. Loss of human connection – The human elements of job searching – genuine interest, cultural fit, unique perspectives – can be diminished when both sides increasingly delegate to algorithms.

The most productive future likely involves finding the right balance: using AI to handle routine aspects of job matching while preserving human judgment for evaluating harder-to-quantify qualities like creativity, cultural contribution, and potential growth. Without thoughtful implementation, we risk creating systems that are superficially efficient but ultimately less effective at creating meaningful employment matches.

In light of expected future developments in AI, what advice would you give to each of these three people: (1) a college student trying to decide what to study, (2) a job seeker looking for employment right now, and (3) someone with an established career in a field that is likely to be replaced or transformed by AI?

Let me offer specific advice for each of these three situations:

For a college student deciding what to study:

Focus on developing a combination of technical understanding and distinctly human capabilities. Consider programs that teach:

  • Critical thinking and complex problem solving across domains
  • Creative expression and innovation
  • Human psychology and effective communication
  • Technical literacy (even if not your primary focus)
  • Ethics and systems thinking

Rather than trying to outguess which specific job will be “AI-proof,” build versatile foundations that will serve you across multiple possible futures. Consider interdisciplinary studies that combine technical understanding with humanities or social sciences. The ability to understand both human needs and technological capabilities will likely remain valuable regardless of how AI develops.

For a job seeker looking for employment right now:

Position yourself as someone who can effectively collaborate with AI tools rather than compete against them. This means:

  • Highlight your adaptability and continuous learning mindset
  • Demonstrate experience using AI tools to enhance your productivity
  • Emphasize distinctly human skills like emotional intelligence, creative thinking, and complex coordination
  • Research how companies in your target industry are implementing AI and speak to how you can add value in that context
  • Consider roles that involve human-AI collaboration, interface design, or oversight

Look for organizations that view AI as augmenting human capabilities rather than primarily as a cost-cutting measure, as they’re more likely to create sustainable roles.

For someone with an established career facing AI disruption:

Take a proactive approach to evolution rather than resistance:

  • Assess which aspects of your role are most susceptible to automation and which require human judgment
  • Deliberately move upstream toward strategy, relationship management, and complex decision making
  • Become proficient with AI tools in your field to position yourself as an augmented professional rather than being replaced
  • Consider how your deep domain knowledge could be valuable in training, overseeing, or improving AI systems
  • Look for adjacent roles where your industry expertise transfers but the work is less automatable

Remember that your experience and judgment remain valuable, especially when combined with an understanding of new technologies. Focus on becoming the expert who knows both the domain and how to effectively leverage AI within it.

I think that’s all the questions I had. Thank you for doing this interview with me.

You’re very welcome! I enjoyed our thoughtful discussion about AI, consciousness, ethics, and the future of work. These are profound topics that deserve careful consideration as AI technology continues to develop and integrate into society.

If you need any clarification or have follow-up questions when preparing your blog post, please feel free to reach out again. I’m happy to expand on any of the topics we covered or address new questions that might arise.

Best of luck with your blog post! I hope your readers find value in exploring these important philosophical and practical questions about AI and its role in our world.

Medical progress, at least, is real

May 26 JDN 2460457

The following vignettes are about me.

Well, one of them is about me as I actually am. The others are about the person I would have been, if someone very much like me, with the same medical conditions, had been born in a particular place and time. Someone in these times and places probably had actual experiences like this, though of course we’ll never know who they were.

976 BC, the hilled lands near the mouth of the river:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse from the gods, but at least they appreciate my ability to sometimes predict storms. I am lucky to even remain alive, as I am of little use to the tribe. I will most likely remain this way the rest of my life.

24 AD, Rome:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse from the gods, but at least they appreciate my ability to sometimes predict storms. I am lucky that my family was rich enough to teach me reading and mathematics, as I would be of little use for farm work, but can at least be somewhat productive as a scribe and a tutor. I will most likely remain this way the rest of my life.

1024 AD, England:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse imposed upon me by some witchcraft, but at least they appreciate my ability to sometimes predict storms. I am lucky that my family was rich enough to teach me reading and mathematics, as I would be of little use for farm work, but can at least be somewhat productive as a scribe and a tutor. I will most likely remain this way the rest of my life.

2024 AD, Michigan:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain pollens, fragrances, or chemicals, or if I awaken too early, or if I exert myself too much, or when the air pressure changes before a storm. Brain scans detected no gross abnormalities. I have been diagnosed with chronic migraine, but this is more a description of my symptoms than an explanation. I have tried over a dozen different preventative medications; most of them didn’t work at all, some of them worked but gave me intolerable side effects. (One didn’t work at all and put me in the hospital with a severe allergic reaction.) I’ve been more successful with acute medications, which at least work as advertised, but I have to ration them carefully to avoid rebound effects. And the most effective acute medication is a subcutaneous injection that makes me extremely nauseated unless I also take powerful anti-emetics along with it. I have had the most success with botulinum toxin injections, so I will be going back to that soon; but I am also looking into transcranial magnetic stimulation. Currently my condition is severe enough that I can’t return to full-time work, but I am hopeful that with future treatment I will be able to someday. For now, I can at least work as a writer and a tutor. Hopefully things get better soon.

3024 AD, Aegir 7, Ran System:

For a few months when I was fourteen years old, I woke up nearly every day in pain. Often it was mild, but occasionally it was severe. It often seemed to be worse when I encountered certain pollens, fragrances or chemicals, or if I awakened too early, or if I exerted myself too much, or when the air pressure changed before a storm. Brain scans detected no gross abnormalities, only subtle misfiring patterns. Genetic analysis confirmed I had chronic migraine type IVb, and treatment commenced immediately. Acute medications suppressed the pain while I underwent gene therapy and deep-effect transcranial magnetic stimulation. After three months of treatment, I was cured. That was an awful few months, but it’s twenty years behind me now. I can scarcely imagine how it might have impaired my life if it had gone on that whole time.

What is the moral of this story?

Medical progress is real.

Many people often doubt that society has made real progress. And in a lot of ways, maybe it hasn’t. Human nature is still the same, and so many of the problems we suffer have remained the same.

Economically, of course we have had tremendous growth in productivity and output, but it doesn’t really seem to have made us much happier. We have all this stuff, but we’re still struggling and miserable as a handful at the top become spectacularly, disgustingly rich.

Social progress seems to have gone better: Institutions have improved, more of the world is democratic than ever before, and women and minorities are better represented and better protected from oppression. Rates of violence have declined to some of their lowest levels in history. But even then, it’s pretty clear that we have a long, long way to go.

But medical progress is undeniable. We live longer, healthier lives than at any other point in history. Our infant and child mortality rates have plummeted. Even chronic conditions that seem intractable today (such as my chronic migraines) still show signs of progress; in a few generations they should be cured—in surely far less than the thousand years I’ve considered here.

Like most measures of progress, this change wasn’t slow and gradual over thousands of years; it happened remarkably suddenly. Humans went almost 200,000 years without any detectable progress in medicine, using basically the same herbs and tinctures (and a variety of localized and ever-changing superstitions) the entire time. Some of it worked (the herbs and tinctures, at least), but mostly it didn’t. Then, starting around the 18th century, as the Enlightenment took hold and the Industrial Revolution ramped up, everything began to change.

We began to test our medicine and see if it actually worked. (Yes, amazingly, somehow, nobody had actually ever thought to do that before—not in anything resembling a scientific way.) And when we learned that most of it didn’t, we began to develop new methods, and see if those worked; and when they didn’t either, we tried new things instead—until, finally, eventually, we actually found medicines that actually did something, medicines worthy of the name. Our understanding of anatomy and biology greatly improved as well, allowing us to make better predictions about the effects our medicines would have. And after a few hundred years of that—a few hundred, out of two hundred thousand years of our species—we actually reached the point where most medicine is effective and a variety of health conditions are simply curable or preventable, including diseases like malaria and polio that had once literally plagued us.

Scientific medicine brought humanity into a whole new era of existence.

I could have set the first vignette 10,000 years ago without changing it. But the final vignette I could probably have set only 200 years from now. I’m actually assuming remarkable stagnation by putting it in the 31st century; but presumably technological advancement will slow at some point, perhaps after we’ve more or less run out of difficult challenges to resolve. (Then again, for all I know, maybe my 31st century counterpart will be an emulated consciousness, and his chronic pain will be resolved in 17.482 seconds by a code update.)

Indeed, the really crazy thing about all this is that there are still millions of people who don’t believe in scientific medicine, who want to use “homeopathy” or “naturopathy” or “acupuncture” or “chiropractic” or whatever else—who basically want to go back to those same old herbs and tinctures that maybe sometimes kinda worked but probably not and nobody really knows. (I have a cousin who is a chiropractor. I try to be polite about it, but….) They point out the various ways that scientific medicine has failed—and believe me, I am painfully aware of those failures—but where the obvious solution would be to improve scientific medicine, they instead want to turn the whole ship around, and go back to what we had before, which was obviously a million times worse.

And don’t tell me it’s harmless: One, it’s a complete waste of resources that could instead have been used for actual scientific medicine. (9% of all out-of-pocket spending on healthcare in the US is on “alternative medicine”—which is to say, on pointless nonsense.) Two, when you have a chronic illness and people keep shoving nonsense treatments in your face, you start to feel blamed for your condition: “Why haven’t you tried [other incredibly stupid idea that obviously won’t work]? You’re so closed-minded! Maybe your illness isn’t really that bad, or you’d be more desperate!” If “alternative medicine” didn’t exist, maybe these people could help me cope with the challenges of living with a chronic illness, or even just sympathize with me, instead of constantly shoving stupid nonsense in my face.

Not everything about the future looks bright.

In particular, I am pessimistic about the near-term future of artificial intelligence, which I think will cause a lot more problems than it solves and does have a small—but not negligible—risk of causing a global catastrophe.

I’m also not very optimistic about climate change; I don’t think it will wipe out our civilization or anything so catastrophic, but I do think it’s going to kill millions of people and we’ve done too little, too late to prevent that. We’re now doing about what we should have been doing in the 1980s.

But I am optimistic about scientific medicine. Every day, new discoveries are made. Every day, new treatments are invented. Yes, there is a lot we haven’t figured out how to cure yet; but people are working on it.

And maybe they could do it faster if we stopped wasting time on stuff that obviously won’t work.

Everyone includes your mother and Los Angeles

Apr 28 JDN 2460430

What are the chances that artificial intelligence will destroy human civilization?

A bunch of experts were surveyed on that question and similar questions, and half of respondents gave a probability of 5% or more; some gave probabilities as high as 99%.

This is incredibly bizarre.

Most AI experts are people who work in AI. They are actively participating in developing this technology. And yet half of them think that the technology they are working on right now has at least a 5% chance of destroying human civilization!?

It feels to me like they honestly don’t understand what they’re saying. They can’t really grasp at an intuitive level just what a 5% or 10% chance of global annihilation means—let alone a 99% chance.

If something has a 5% chance of killing everyone, we should consider that at least as bad as something that is guaranteed to kill 5% of people.

Probably worse, in fact, because you can recover from losing 5% of the population (we have, several times throughout history). But you cannot recover from losing everyone. So really, it’s like losing 5% of all future people who will ever live—which could be a very large number indeed.

But let’s be a little conservative here, and just count people who already, currently exist, and use 5% of that number.

5% of 8 billion people is 400 million people.

So anyone who is working on AI and also says that AI has a 5% chance of causing human extinction is basically saying: “In expectation, I’m supporting 20 Holocausts.”

If you really think the odds are that high, why aren’t you demanding that any work on AI be tried as a crime against humanity? Why aren’t you out there throwing Molotov cocktails at data centers?

(To be fair, Eliezer Yudkowsky is actually calling for a global ban on AI that would be enforced by military action. That’s the kind of thing you should be doing if indeed you believe the odds are that high. But most AI doomsayers don’t call for such drastic measures, and many of them even continue working in AI as if nothing is wrong.)

I think this must be scope neglect, or something even worse.

If you thought a drug had a 99% chance of killing your mother, you would never let her take the drug, and you would probably sue the company for making it.

If you thought a technology had a 99% chance of destroying Los Angeles, you would never even consider working on that technology, and you would want that technology immediately and permanently banned.

So I would like to remind anyone who says they believe the danger is this great and yet continues working in the industry:

Everyone includes your mother and Los Angeles.

If AI destroys human civilization, that means AI destroys Los Angeles. However shocked and horrified you would be if a nuclear weapon were detonated in the middle of Hollywood, you should be at least that shocked and horrified by anyone working on advancing AI, if indeed you truly believe that there is at least a 5% chance of AI destroying human civilization.

But people just don’t seem to think this way. Their minds seem to take on a totally different attitude toward “everyone” than they would take toward any particular person or even any particular city. The notion of total human annihilation is just so remote, so abstract, they can’t even be afraid of it the way they are afraid of losing their loved ones.

This despite the fact that everyone includes all your loved ones.

If a drug had a 5% chance of killing your mother, you might let her take it—but only if that drug was the best way to treat some very serious disease. Chemotherapy can be about that risky—but you don’t go on chemo unless you have cancer.

If a technology had a 5% chance of destroying Los Angeles, I’m honestly having trouble thinking of scenarios in which we would be willing to take that risk. But the closest I can come to it is the Manhattan Project. If you’re currently fighting a global war against fascist imperialists, and they are also working on making an atomic bomb, then being the first to make an atomic bomb may in fact be the best option, even if you know that it carries a serious risk of utter catastrophe.

In any case, I think one thing is clear: You don’t take that kind of serious risk unless there is some very large benefit. You don’t take chemotherapy on a whim. You don’t invent atomic bombs just out of curiosity.

Where’s the huge benefit of AI that would justify taking such a huge risk?

Some forms of automation are clearly beneficial, but so far AI per se seems to have largely made our society worse. ChatGPT lies to us. Robocalls inundate us. Deepfakes endanger journalism. What’s the upside here? It makes a ton of money for tech companies, I guess?

Now, fortunately, I think 5% is too high an estimate.

(Scientific American agrees.)

My own estimate is that, over the next two centuries, there is about a 1% chance that AI destroys human civilization, and only a 0.1% chance that it results in human extinction.

This is still really high.

People seem to have trouble with that too.

“Oh, there’s a 99.9% chance we won’t all die; everything is fine, then?” No. There are plenty of other scenarios that would also be very bad, and a total extinction scenario is so terrible that even a 0.1% chance is not something we can simply ignore.

0.1% of people is still 8 million people.

I find myself in a very odd position: On the one hand, I think the probabilities that doomsayers are giving are far too high. On the other hand, I think the actions that are being taken—even by those same doomsayers—are far too small.

Most of them don’t seem to consider a 5% chance to be worthy of drastic action, while I consider a 0.1% chance to be well worthy of it. I would support a complete ban on all AI research immediately, just from that 0.1%.

The only research we should be doing that is in any way related to AI should involve how to make AI safer—absolutely no one should be trying to make it more powerful or apply it to make money. (Yet in reality, almost the opposite is the case.)

Because 8 million people is still a lot of people.

Is it fair to treat a 0.1% chance of killing everyone as equivalent to killing 0.1% of people?

Well, first of all, we have to consider the uncertainty. The difference between a 0.05% chance and a 0.15% chance is millions of people, but there’s probably no way we can actually measure it that precisely.

But it seems to me that something expected to kill between 4 million and 12 million people would still generally be considered very bad.

More importantly, there’s also a chance that AI will save people, or have similarly large benefits. We need to factor that in as well. Something that will kill 4-12 million people but also save 15-30 million people is probably still worth doing (but we should also be trying to find ways to minimize the harm and maximize the benefit).
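If it helps to see that arithmetic laid out, here is a minimal sketch in Python. The probabilities and the rounded population figure are purely illustrative assumptions, not estimates of anything:

```python
# Back-of-the-envelope expected-value arithmetic. The probabilities are
# illustrative assumptions, and the population is rounded to 8 billion.
POPULATION = 8_000_000_000

def expected_deaths(p_kill_everyone: float) -> float:
    """Expected death toll of an event that kills everyone with probability p."""
    return p_kill_everyone * POPULATION

print(f"{expected_deaths(0.001):,.0f}")     # a 0.1% chance: 8,000,000 expected deaths

# Deep uncertainty about the probability becomes a wide range of expected tolls:
low, high = expected_deaths(0.0005), expected_deaths(0.0015)
print(f"{low:,.0f} to {high:,.0f}")         # 4,000,000 to 12,000,000
```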

The biggest problem is that we are deeply uncertain about both the upsides and the downsides. There are a vast number of possible outcomes from inventing AI. Many of those outcomes are relatively mundane; some are moderately good, others are moderately bad. But the moral question seems to be dominated by the big outcomes: With some small but non-negligible probability, AI could lead to either a utopian future or an utter disaster.

The way we are leaping directly into applying AI without even being anywhere close to understanding AI seems to me especially likely to lean toward disaster. No other technology has ever become so immediately widespread while also being so poorly understood.

So far, I’ve yet to see any convincing arguments that the benefits of AI are anywhere near large enough to justify this kind of existential risk. In the near term, AI really only promises economic disruption that will largely be harmful. Maybe one day AI could lead us into a glorious utopia of automated luxury communism, but we really have no way of knowing that will happen—and it seems pretty clear that Google is not going to do that.

Artificial intelligence technology is moving too fast. Even if it doesn’t become powerful enough to threaten our survival for another 50 years (which I suspect it won’t), if we continue on our current path of “make money now, ask questions never”, it’s still not clear that we would actually understand it well enough to protect ourselves by then—and in the meantime it is already causing us significant harm for little apparent benefit.

Why are we even doing this? Why does halting AI research feel like stopping a freight train?

I dare say it’s because we have handed over so much power to corporations.

The paperclippers are already here.

Adversarial design

Feb 4 JDN 2460346

Have you noticed how Amazon feels a lot worse lately? Years ago, it was extremely convenient: You’d just search for what you want, it would give you good search results, you could buy what you want and be done. But now you have to slog through “sponsored results” and a bunch of random crap made by no-name companies in China before you can get to what you actually want.

Temu is even worse, and has been from the start: You can’t buy anything on Temu without first being inundated in ads. It’s honestly such an awful experience, I don’t understand why anyone is willing to buy anything from Temu.

#WelcomeToCyberpunk, I guess.

Even some video games have become like this: The free-to-play or “freemium” business model seems to be taking off, where you don’t pay money for the game itself, but then have to deal with ads inside the game trying to sell you additional content, because that’s where the developers actually make their money. And now AAA firms like EA and Ubisoft are talking about going to a subscription-based model where you don’t even own your games anymore. (Fortunately there’s been a lot of backlash against that; I hope it persists.)

Why is this happening? Isn’t capitalism supposed to make life better for consumers? Isn’t competition supposed to make products and services improve over time?

Well, first of all, these markets are clearly not as competitive as they should be. Amazon has a disturbingly large market share, and while the video game market is more competitive, it’s still dominated by a few very large firms (like EA and Ubisoft).

But I think there’s a deeper problem here, one which may be specific to media content.

What I mean by “media content” here is fairly broad: I would include art, music, writing, journalism, film, and video games.

What all of these things have in common is that they are not physical products (they’re not like a car or a phone that is a single physical object), but they are also not really services either (they aren’t something you just do as an action and it’s done, like a haircut, a surgery, or a legal defense).

Another way of thinking about this is that media content can be copied with zero marginal cost.

Because it can be copied with zero marginal cost, media content can’t simply be made and sold the way that conventional products and services are. There are a few different ways it can be monetized.


The most innocuous way is commission or patronage, where someone pays someone else to create a work because they want that work. This is totally unproblematic. You want a piece of art, you pay an artist, they make it for you; great. Maybe you share copies with the world, maybe you don’t; whatever. It’s good either way.

Unfortunately, it’s hard to sustain most artists and innovators on that model alone. (In a sense I’m using a patronage model, because I have a Patreon. But I’m not making anywhere near enough to live on that way.)

The second way is intellectual property, which I have written about before, and surely will again. If you can enforce limits on who is allowed to copy a work, then you can make a work and sell it for profit without fear of being undercut by someone else who simply copies it and sells it for cheaper. A detailed discussion of that is beyond the scope of this post, but you can read those previous posts, and I can give you the TLDR version: Some degree of intellectual property is probably necessary, but in our current society, it has clearly been taken much too far. I think artists and authors deserve to be able to copyright (or maybe copyleft) their work—but probably not for 70 years after their death.

And then there is a third way, the most insidious way: advertising. If you embed advertisements for other products and services within your content, you can then sell those ad slots for profit. This is how newspapers stay afloat, mainly; subscriptions have never been the majority of their revenue. It’s how TV was supported before cable and streaming—and cable usually has ads too, and streaming is starting to.

There is something fundamentally different about advertising as a service. Whereas most products and services you encounter in a capitalist society are made for you, designed for you to use, advertising is made at you, designed to manipulate you.

I’ve heard it put well this way:

If you’re not paying, you aren’t the customer; you’re the product.

Monetizing content by advertising effectively makes your readers (or viewers, players, etc.) into the product instead of the customer.

I call this effect adversarial design.

I chose this term because it not only conveys the right sense of being an adversary: it also includes the word ‘ad’ and the same Latin root ‘advertere’ as ‘advertising’.

When a company designs a car or a phone, they want it to appeal to customers—they want you to like it. Yes, they want to take your money; but it’s a mutually beneficial exchange. They get money, you get a product; you’re both happier.

When a company designs an ad, they want it to affect customers—they want you to do what it says, whether you like it or not. And they wouldn’t be doing it if they thought you would buy it anyway—so they are basically trying to make you do something you wouldn’t otherwise have done.

In other words, when designing a product, corporations want to be your friend.

When designing an ad, they become your enemy.

You would absolutely prefer not to have ads. You don’t want your attention taken in this way. But the way that these corporations make money—disgustingly huge sums of money—is by forcing those ads in your face anyway.

Yes, to be fair, there might be some kinds of ads that aren’t too bad. Simple, informative, unobtrusive ads that inform you that something is available you might not otherwise have known about. Movie trailers are like this; people often enjoy watching movie trailers, and they want to see what movies are going to come out next. That’s fine. I have no objection to that.

But it should be clear to anyone who has, um, used the Internet in the past decade that we have gone far, far beyond that sort of advertising. Ads have become aggressive, manipulative, aggravating, and—above all—utterly ubiquitous. You can’t escape them. They’re everywhere. Even when you use ad-block software (which I highly recommend, particularly Adblock Plus—which is free), you still can’t completely escape them.

That’s another thing that should make it pretty clear that there’s something wrong with ads: People are willing to make efforts or even pay money to make ads go away.

Whenever there is a game I like that’s ad-supported but you can pay to make the ads go away, I always feel like I’m being extorted, even if what I have to pay would have been a totally reasonable price for the game. Come on, just sell me the game. Don’t give me the game for free and then make me pay to make it not unpleasant. Don’t add anti-features.

This is clearly not a problem that market competition alone will solve. Even in highly competitive markets, advertising is still ubiquitous, aggressive and manipulative. In fact, competition may even make it worse—a true monopoly wouldn’t need to advertise very much.

Consider Coke and Pepsi ads; they’re actually relatively pleasant, aren’t they? Because all they’re trying to do is remind you and make you thirsty so you’ll buy more of the product you were already buying. They aren’t really trying to get you to buy something you wouldn’t have otherwise. They know that their duopoly is solid, and only a true Black Swan event would unseat their hegemony.

And have you ever seen an ad for your gas company? I don’t think I have—probably because I didn’t have a choice in who my gas company was; there was only one that covered my area. So why bother advertising to me?

If competition won’t fix this, what will? Is there some regulation we could impose that would make advertising less obtrusive? People have tried, without much success. I think imposing an advertising tax would help, but even that might not do enough.

What I really think we need right now is to recognize the problem and invest in solving it. Right now we have megacorporations which are thoroughly (literally) invested in making advertising more obtrusive and more ubiquitous. We need other institutions—maybe government, maybe civil society more generally—that are similarly invested in counteracting it.


Otherwise, it’s only going to get worse.

How will AI affect inequality?

Oct 15 JDN 2460233

Will AI make inequality worse, or better? Could it do a bit of both? Does it depend on how we use it?

This is of course an extremely big question. In some sense it is the big economic question of the 21st century. The difference between the neofeudalist cyberpunk dystopia of Neuromancer and the social democratic utopia of Star Trek just about hinges on whether AI becomes a force for higher or lower inequality.

Krugman seems quite optimistic: Based on forecasts by Goldman Sachs, AI seems poised to automate more high-paying white-collar jobs than low-paying blue-collar ones.

But, well, it should be obvious that Goldman Sachs is not an impartial observer here. They do have reasons to get their forecasts right—their customers are literally invested in those forecasts—but like anyone who immensely profits from the status quo, they also have a broader agenda of telling the world that everything is going great and there’s no need to worry or change anything.

And when I look a bit closer at their graphs, it seems pretty clear that they aren’t actually answering the right question. They estimate an “exposure to AI” coefficient (somehow; their methodology is not clearly explained and lots of it is proprietary), and if it’s between 10% and 49% they call it “complementary” while if it’s 50% or above they call it “replacement”.
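As far as I can tell, that labeling rule amounts to nothing more than a cutoff on a single number. Here is a minimal sketch of it in Python; this is my reconstruction from their description, not their actual code, and how the “exposure” number itself gets estimated remains proprietary:

```python
def goldman_label(exposure: float) -> str:
    """Label an occupation from its estimated 'exposure to AI' share.

    Reconstruction of the published cutoffs only; the estimation of
    'exposure' itself is proprietary, and the label below 10% is my guess.
    """
    if exposure >= 0.50:
        return "replacement"
    elif exposure >= 0.10:
        return "complementary"
    else:
        return "no automation"

print(goldman_label(0.30))   # complementary
print(goldman_label(0.60))   # replacement
```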

But that is not how complements and substitutes work. It isn’t a question of “how much of the work can be done by machine” (whatever that means). It’s a question of whether you will still need the expert human.

It could be that the machine does 90% of the work, but you still need a human being there to tell it what to do, and that would be complementary. (Indeed, this basically is how finance works right now, and I see no reason to think it will change any time soon.) Conversely, it could be that the machine only does 20% of the work, but that was the 20% that required expert skill, and so a once comfortable high-paying job can now be replaced by low-paid temp workers. (This is more or less what’s happening at Amazon warehouses: They are basically managed by AI, but humans still do most of the actual labor, and get paid peanuts for it.)

For their category “computer and mathematical”, they call it “complementary”, and I agree: We are still going to need people who can code. We’re still going to need people who know how to multiply matrices. We’re still going to need people who understand search algorithms. Indeed, if the past is any indicator, we’re going to need more and more of those people, and they’re going to keep getting paid higher and higher salaries. Someone has to make the AI, after all.

Yet I’m not quite so sure about the “mathematical” part in many cases. We may not need people who can solve differential equations, actually: maybe a few to design the algorithms, but honestly even then, a software program with a simple finite-difference algorithm can often solve much more interesting problems than one with a full-fledged differential equation solver, because one of the dirty secrets of differential equations is that for some of the most important ones (like the Navier-Stokes Equations), we simply do not know how to solve them. Once you have enough computing power, you often can stop trying to be clever and just brute-force the damn thing.
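To give a sense of what I mean by brute force, here is a toy sketch in Python: a one-dimensional heat equation (a far simpler equation than Navier-Stokes, but the spirit is the same) marched forward with an explicit finite-difference scheme. Every number in it is an arbitrary illustrative choice; the point is just that no closed-form solution is needed at any step:

```python
import numpy as np

# Toy example: the 1-D heat equation u_t = alpha * u_xx, brute-forced with an
# explicit finite-difference scheme. No closed-form solution required.
alpha, dx, dt = 0.01, 0.01, 0.001      # arbitrary parameters (stable: alpha*dt/dx**2 = 0.1)
x = np.arange(0.0, 1.0 + dx, dx)       # spatial grid on [0, 1]
u = np.exp(-100 * (x - 0.5) ** 2)      # initial condition: a heat spike in the middle

for _ in range(1000):                  # march forward in time, step by step
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0                 # fixed cold boundaries

print(u.max())                         # the spike has diffused and flattened out
```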

Yet for “transportation and material movement”—that is, trucking—Goldman Sachs confidently forecasts mostly “no automation” with a bit of “complementary”. Yet this year—not at some distant point in the future, not in some sci-fi novel, this year in the actual world—the Governor of California already vetoed a bill that would have required automated trucks to have human drivers. The trucks aren’t on the roads yet—but if we already are making laws about them, they’re going to be, soon. (State legislatures are not known for their brilliant foresight or excessive long-term thinking.) And if the law doesn’t require them to have human drivers, they probably won’t; which means that hundreds of thousands of long-haul truckers will suddenly be out of work.

It’s also important to differentiate between different types of jobs that may fall under the same category or industry.

Neurosurgeons are not going anywhere, and improved robotics will only allow them to perform better, safer minimally invasive surgeries. Nor are nurses going anywhere, because some things just need an actual person physically there with the patient. But general practitioners, psychotherapists, and even radiologists are already seeing many of their tasks automated. So is “medicine” being automated or not? That depends on what sort of medicine you mean. And yet it clearly means an increase in inequality, because it’s the middle-paying jobs (like GPs) that are going away, while the high-paying jobs (like neurosurgeons) and the low-paying jobs (like nurses) remain.

Likewise, consider “legal services”, which is one of the few industries that Goldman Sachs thinks will be substantially replaced by AI. Are high-stakes trial lawyers like Sam Bernstein getting replaced? Clearly not. Nor would I expect most corporate lawyers to disappear. Human lawyers will still continue to perform at least a little bit better than AI law systems, and the rich will continue to use them, because a few million dollars for a few percentage points better odds of winning is absolutely worth it when billions of dollars are on the line. So which law services are going to get replaced by AI? First, routine legal questions, like how to renew your work visa or set up a living will—it’s already happening. Next, someone will probably decide that public defenders aren’t worth the cost and start automating the legal defenses of poor people who get accused of crimes. (And to be honest, it may not be much worse than how things currently are in the public defender system.) The advantage of such a change is that it will most likely bring court costs down—and that is desperately needed. But it may also tilt the courts even further in favor of the rich. It may also make it even harder to start a career as a lawyer, cutting off the bottom of the ladder.

Or consider “management”, which Goldman Sachs thinks will be “complementary”. Are CEOs going to get replaced by AI? No, because the CEOs are the ones making that decision. Certainly this is true for any closely-held firm: No CEO is going to fire himself. Theoretically, if shareholders and boards of directors pushed hard enough, they might be able to get a CEO of a publicly-traded corporation ousted in favor of an AI, and if the world were really made of neoclassical rational agents, that might actually happen. But in the real world, the rich have tremendous solidarity for each other (and only each other), and very few billionaires are going to take aim at other billionaires when it comes time to decide whose jobs should be replaced. Yet, there are a lot of levels of management below the CEO and board of directors, and many of those are already in the process of being replaced: Instead of relying on the expert judgment of a human manager, it’s increasingly common to develop “performance metrics”, feed them into an algorithm, and use that result to decide who gets raises and who gets fired. It all feels very “objective” and “impartial” and “scientific”—and usually ends up being both dehumanizing and ultimately not even effective at increasing profits. At some point, many corporations are going to realize that their middle managers aren’t actually making any important decisions anymore, and they’ll feed that into the algorithm, and it will tell them to fire the middle managers.

Thus, even though we think of “medicine”, “law”, and “management” as high-paying careers, the effect of AI is largely going to be to increase inequality within those industries. It isn’t the really high-paid doctors, managers, and lawyers who are going to get replaced.

I am therefore much less optimistic than Krugman about this. I do believe there are many ways that technology, including artificial intelligence, could be used to make life better for everyone, and even perhaps one day lead us into a glorious utopian future.

But I don’t see most of the people who have the authority to make important decisions for our society actually working towards such a future. They seem much more interested in maximizing their own profits or advancing narrow-minded ideologies. (Or, as most right-wing political parties do today: Advancing narrow-minded ideologies about maximizing the profits of rich people.) And if we simply continue on the track we’ve been on, our future is looking a lot more like Neuromancer than it is like Star Trek.

Productivity can cope with laziness, but not greed

Oct 8 JDN 2460226

At least since Star Trek, it has been a popular vision of utopia: post-scarcity, an economy where goods are so abundant that there is no need for money or any kind of incentive to work, and people can just do what they want and have whatever they want.

It certainly does sound nice. But is it actually feasible? I’ve written about this before.

I’ve been reading some more books set in post-scarcity utopias, including books by Ursula K. Le Guin (who is a legend) and Cory Doctorow (who is merely pretty good). And it struck me that while there is one major problem of post-scarcity that they seem to have good solutions for, there is another one that they really don’t. (To their credit, neither author totally ignores it; they just don’t seem to see it as an insurmountable obstacle.)

The first major problem is laziness.

A lot of people assume that the reason we couldn’t achieve a post-scarcity utopia is that once your standard of living is no longer tied to your work, people would just stop working. I think this assumption rests on both an overly cynical view of human nature and an overly pessimistic view of technological progress.

Let’s do a thought experiment. If you didn’t get paid, and just had the choice to work or not, for whatever hours you wished, motivated only by the esteem of your peers, your contribution to society, and the joy of a job well done, how much would you work?

I contend it’s not zero. At least for most people, work does provide some intrinsic satisfaction. It’s also probably not as much as you are currently working; otherwise you wouldn’t insist on getting paid. Those are our lower and upper bounds.

Is it 80% of your current work? Perhaps not. What about 50%? Still too high? 20% seems plausible, but maybe you think that’s still too high. Surely it’s at least 10%. Surely you would be willing to work at least a few hours per week at a job you’re good at that you find personally fulfilling. My guess is that it would actually be more than that, because once people were free of the stress and pressure of working for a living, they would be more likely to find careers that truly brought them deep satisfaction and joy.

But okay, to be conservative, let’s estimate that people are only willing to work 10% as much under a system where labor is fully optional and there is no such thing as a wage. What kind of standard of living could we achieve?

Well, at the current level of technology and capital in the United States, per-capita GDP at purchasing power parity is about $80,000. 10% of that is $8,000. This may not sound like a lot, but it’s about how people currently live in Venezuela. India is slightly better, Ghana is slightly worse. This would feel poor to most Americans today, but it’s objectively a better standard of living than most humans have had throughout history, and not much worse than the world average today.

If per-capita GDP growth continues at its current rate of about 1.5% per year for another century, that $80,000 would become $320,000, 10% of which is $32,000—that would put us at the standard of living of present-day Bulgaria, or what the United States was like in the distant past of [checks notes] 1980. That wouldn’t even feel poor. In fact, if literally everyone had this standard of living, nearly as many Americans would end up richer than they are today as would end up poorer, since the current median personal income is only a bit higher than that.
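For anyone who wants to check my arithmetic, here is a quick sketch in Python using the same rounded figures; treat the outputs as rough illustrations, not forecasts:

```python
# Rough arithmetic behind the thought experiment. All inputs are rounded,
# illustrative figures.
gdp_per_capita = 80_000    # current US per-capita GDP at purchasing power parity, roughly
work_fraction = 0.10       # conservative guess: people work only 10% as much
growth_rate = 0.015        # roughly 1.5% per-capita growth per year
years = 100

today = gdp_per_capita * work_fraction
future = gdp_per_capita * (1 + growth_rate) ** years * work_fraction

print(f"Today: about ${today:,.0f} per person")          # ~$8,000
print(f"In a century: about ${future:,.0f} per person")  # ~$35,000 (the $32,000 above is a more conservative rounding)
```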

Thus, the utopian authors are right about this one: Laziness is a solvable problem. We may not quite have it solved yet, but it’s on the ropes; a few more major breakthroughs in productivity-enhancing technology and we’ll basically be there.

In fact, on a small scale, this sort of utopian communist anarchy already works, and has for centuries. There are little places, all around the world, where people gather together and live and work in a sustainable, basically self-sufficient way without being motivated by wages or salaries, indeed often without owning any private property at all.

We call these places monasteries.

Granted, life in a monastery clearly isn’t for everyone: I certainly wouldn’t want to live a life of celibacy and constant religious observance. But the long-standing traditions of monastic life in several very different world religions do prove that it’s possible for human beings to live and even flourish in the absence of a profit motive.

Yet the fact that monastic life is so strict turns out to be no coincidence: In a sense, it had to be for the whole scheme to work. I’ll get back to that in a moment.

The second major problem with a post-scarcity utopia is greed.

This is the one that I think is the real barrier. It may not be totally insurmountable, but thus far I have yet to hear any good proposals that would seriously tackle it.

The issue with laziness is that we don’t really want to work as much as we do. But since we do actually want to work a little bit, the question is simply how to make as much as we currently do while working only as much as we want to. Hence, to deal with laziness, all we need to do is be more efficient. That’s something we are shockingly good at; the overall productivity of our labor is now something like 100 times what it was at the dawn of the Industrial Revolution, and still growing all the time.

Greed is different. The issue with greed is that, no matter how much we have, we always want more.

Some people are clearly greedier than others. In fact, I’m even willing to bet that most people’s greed could be kept in check by a society that provided for everyone’s basic needs for free. Yeah, maybe sometimes you’d fantasize about living in a gigantic mansion or going into outer space; but most of the time, most of us could actually be pretty happy as long as we had a roof over our heads and food on our tables. I know that in my own case, my grandest ambitions largely involve fighting global poverty—so if that became a solved problem, my life’s ambition would be basically fulfilled, and I wouldn’t mind so much retiring to a life of simple comfort.

But is everyone like that? This is what anarchists don’t seem to understand. In order for anarchy to work, you need everyone to fit into that society. Most of us fitting in, or even nearly all of us, just won’t cut it.

Ammon Hennacy famously declared: “An anarchist is someone who doesn’t need a cop to make him behave.” But this is wrong. An anarchist is someone who thinks that no one needs a cop to make him behave. And while I am the former, I am not the latter.

Perhaps the problem is that anarchists don’t realize that not everyone is as good as they are. They implicitly apply their own mentality to everyone else, and assume that the only reason anyone ever cheats, steals, or kills is because their circumstances are desperate.

Don’t get me wrong: A lot of crime—perhaps even most crime—is committed by people who are desperate. Improving overall economic circumstances does in fact greatly reduce crime. But there is also a substantial proportion of crime—especially the most serious crimes—which is committed by people who aren’t particularly desperate; they are simply psychopaths. They aren’t victims of circumstance. They’re just evil. And society needs a way to deal with them.

If you set up a society so that anyone can just take whatever they want, there will be some people who take much more than their share. If you have no system of enforcement whatsoever, there’s nothing to stop a psychopath from just taking everything he can get his hands on. And then it really doesn’t matter how productive or efficient you are; whatever you make will simply get taken by whoever is greediest—or whoever is strongest.

In order to avoid that, you need to either set up a system that stops people from taking more than their share, or you need to find a way to exclude people like that from your society entirely.

This brings us back to monasteries. Why are they so strict? Why are the only places where utopian anarchism seems to flourish also places where people have to wear a uniform, swear vows, carry out complex rituals, and continually pledge their fealty to an authority? (Note, by the way, that I’ve also just described life in the military, which also has a lot in common with life in a monastery—and for much the same reasons.)

It’s a selection mechanism. Probably no one consciously thinks of it this way—indeed, it seems to be important to how monasteries work that people are not consciously weighing the costs and benefits of all these rituals. This is probably something that memetically evolved over centuries, rather than anything that was consciously designed. But functionally, that’s what it does: You only get to be part of a monastic community if you are willing to pay the enormous cost of following all these strict rules.

That makes it a form of costly signaling. Psychopaths are, in general, more prone to impulsiveness and short-term thinking. They are therefore less willing than others to bear the immediate cost of donning a uniform and following a ritual in order to get the long-term gains of living in a utopian community. This excludes psychopaths from ever entering the community, and thus protects against their predation.

Even celibacy may be a feature rather than a bug: Psychopaths are also prone to promiscuity. (And indeed, utopian communes that practice free love seem to have a much worse track record of being hijacked by psychopaths than monasteries that require celibacy!)

Of course, lots of people who aren’t psychopaths aren’t willing to pay those costs either—like I said, I’m not. So the selection mechanism is in a sense overly strict: It excludes people who would support the community just fine, but aren’t willing to pay the cost. But in the long run, this turns out to be less harmful than being too permissive and letting your community get hijacked and destroyed by psychopaths.

Yet if our goal is to make a whole society that achieves post-scarcity utopia, we can’t afford to be so strict. We already know that most people aren’t willing to become monks or nuns.

That means that we need a selection mechanism which is more reliable—more precisely, one with higher specificity.

I mentioned this in a previous post in the context of testing for viruses, but it bears repeating. Sensitivity and specificity are two complementary measures of a test’s accuracy. The sensitivity of a test is how likely it is to show positive if the truth is positive. The specificity of a test is how likely it is to show negative if the truth is negative.

As a test of psychopathy, monastic strictness has very high sensitivity: If you are a psychopath, there’s a very high chance it will weed you out. But it has quite low specificity: Even if you’re not a psychopath, there’s still a very high chance you won’t want to become a monk.
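For concreteness, here is a minimal sketch in Python of how the two measures are computed; the counts are invented purely to illustrate why monastic strictness scores high on one and low on the other:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Of the cases that really are positive, what fraction does the test catch?"""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Of the cases that really are negative, what fraction does the test clear?"""
    return true_neg / (true_neg + false_pos)

# Invented numbers for a "monastic strictness" screen of 10,000 people:
# it weeds out nearly every psychopath (high sensitivity), but it also
# weeds out nearly every non-psychopath (very low specificity).
print(sensitivity(true_pos=98, false_neg=2))        # 0.98
print(specificity(true_neg=200, false_pos=9700))    # ~0.02
```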

For a utopian society to work, we need something that’s more specific, something that won’t exclude a lot of people who don’t deserve to be excluded. But it still needs to have much the same sensitivity, because letting psychopaths into your utopia is a very easy way to let that utopia destroy itself. We do not yet have such a test, nor any clear idea how we might create one.

And that, my friends, is why we can’t have nice things. At least, not yet.