On Horror

Oct 29 JDN 2460247

Since this post will go live the weekend before Halloween, the genre of horror seemed a fitting topic.

I must confess, I don’t really get horror as a genre. Generally I prefer not to experience fear and disgust? This can’t be unusual; it’s literally a direct consequence of the evolutionary function of fear and disgust. It’s wanting to be afraid and disgusted that’s weird.

Cracked once came out with a list of “Horror Movies for People Who Hate Horror”, and I found some of my favorite films on it, such as Alien (which is as much sci-fi as horror), The Cabin in the Woods (which is as much satire as horror), and Zombieland (which is a comedy). Other such lists have prominently featured Get Out (which is as political as it is horrific), Young Frankenstein (which is entirely a comedy), and The Silence of the Lambs (which is horror, at least in large part, but which I didn’t so much enjoy as appreciate as a work of artistry; I watch it the way I look at Guernica). Some such lists include Saw, which I can appreciate on some level—it does have a lot of sociopolitical commentary—but still can’t enjoy (it’s just too gory). I note that none of these lists seem to include Event Horizon, which starts out as a really good sci-fi film, but then becomes so very much horror that I ended up hating it.

In trying to explain the appeal of horror to me, people have likened it to the experience of a roller coaster: Isn’t fear exhilarating?

I do enjoy roller coasters. But the analogy falls flat for me, because, well, my experience of riding a roller coaster isn’t fear—the exhilaration comes directly from the experience of moving so fast, a rush of “This is awesome!” that has nothing to do with being afraid. Indeed, should I encounter a roller coaster that actually made me afraid, I would assiduously avoid it, and wonder if it was up to code. My goal is not to feel like I’m dying; it’s to feel like I’m flying.

And speaking of flying: the few times I have had the chance to pilot an aircraft were thrilling in a way that is difficult to convey to anyone who hasn’t experienced it. I think it might be something like what religious experiences feel like. The sense of perspective, looking down on the world below, seeing it as most people never see it. The sense of freedom, of, for once in your life, actually having the power to maneuver freely in all three dimensions. The subtle mix of knowing that you are traveling at tremendous speed while feeling as if you are peacefully drifting along. Astronauts also describe this sort of experience, which no doubt is even more intense for them.

Yet in all that, fear was never my primary emotion, and had it been, it would have undermined the experience rather than enhanced it. The brief moment when our engine stalled flying over Scotland certainly raised my heart rate, but not in a pleasant way. In that moment—objectively brief, subjectively interminable—I spent all of my emotional energy struggling to remain calm. It helped to continually remind myself of what I knew about aerodynamics: Wings want to fly. An airplane without an engine isn’t a rock; it’s a glider. It is entirely possible to safely land a small aircraft on literally zero engine power. Still, I’m glad we got the propeller started again and didn’t have to.

I have also enjoyed classic horror novels such as Dracula and Frankenstein; their artistry is also quite apparent, and reading them as books provides an emotional distance that watching them as films often lacks. I particularly notice this with vampire stories, as I can appreciate the romantic allure of immortality and the erotic tension of forbidden carnal desire—but the sight of copious blood on screen tends to trigger my mild hematophobia.

Yet if fear is the goal, surely having a phobia should only make it stronger and thus better? And yet, this seems to be a pattern: People with a genuine phobia of the subject in question don’t actually enjoy horror films on that subject. Arachnophobes don’t often watch films about giant spiders. Cynophobes are rarely werewolf aficionados. And, indeed, rare is the hematophobe who is a connoisseur of vampire movies.

Moreover, we rarely see horror films about genuine dangers in the world. There are movies about rape, murder, war, terrorism, espionage, asteroid impacts, nuclear weapons and climate change, but (with rare exceptions) they aren’t horror films. They don’t wallow in fear the way that films about vampires, ghosts and werewolves do. They are complex thrillers (Argo, Enemy of the State, Tinker Tailor Soldier Spy, Broken Arrow), police procedurals (most films about rape or murder), heroic sagas (just about every war film), or just fun, light-hearted action spectacles (Armageddon, The Day After Tomorrow). Rather than a loosely-knit gang of helpless horny teenagers, they have strong, brave heroes. Even films about alien invasions aren’t usually horror (Alien notwithstanding); they also tend to be heroic war films. Unlike nuclear war or climate change, alien invasion is a quite unlikely event; but it’s surely more likely than zombies or werewolves.

In other words, when something is genuinely scary, the story is always about overcoming it. There is fear involved, but in the end we conquer our fear and defeat our foes. The good guys win in the end.

I think, then, that enjoyment of horror is not about real fear. Feeling genuinely afraid is unpleasant—as by all Darwinian rights it should be.

Horror is about simulating fear. It’s a kind of brinksmanship: You take yourself to the edge of fear and then back again, because what you are seeing would be scary if it were real, but deep down, you know it isn’t. You can sleep at night after watching movies about zombies, werewolves and vampires, because you know that there aren’t really such things as zombies, werewolves and vampires.

What about the exceptions? What about, say, The Silence of the Lambs? Psychopathic murderers absolutely are real. (Not especially common—but real.) But The Silence of the Lambs only works because of truly brilliant writing, directing, and acting; and part of what makes it work is that it isn’t just horror. It has layers of subtlety, and it crosses genres—it also has a good deal of police procedural in it, in fact. And even in The Silence of the Lambs, at least one of the psychopathic murderers is beaten in the end; evil does not entirely prevail.

Slasher films—which I especially dislike (see above: hematophobia)—seem like they might be a counterexample, in that they genuinely are a common subgenre and they mainly involve psychopathic murderers. But in fact almost all slasher films involve some kind of supernatural element: In Friday the 13th, Jason seems to be immortal. In A Nightmare on Elm Street, Freddy Krueger doesn’t just attack you with a knife; he invades your dreams. Slasher films actually seem to go out of their way to make the killer not real. Perhaps this is because showing helpless people murdered by a realistic psychopath would inspire too much genuine fear.

The terrifying truth is that, more or less at any time, a man with a gun could in fact come and shoot you, and while there may be ways to reduce that risk, there’s no way to make it zero. But that isn’t fun for a movie, so let’s make him a ghost or a zombie or something, so that when the movie ends, you can remind yourself it’s not real. Let’s pretend to be afraid, but never really be afraid.

Realizing that makes me at least a little more able to understand why some people enjoy horror.

Then again, I still don’t.

What most Americans think about government spending

Oct 22 JDN 2460240

American public opinion on government spending is a bit of a paradox. People say the government spends too much, but when you ask them what to cut, they don’t want to cut anything in particular.

This is how various demographics answer when you ask if, overall, the government spends “too much”, “too little”, or “about right”:

Democrats have a relatively balanced view, with about a third in each category. Republicans overwhelmingly agree that the government spends too much.

Let’s focus on the general population figures: 60% of Americans believe the government spends too much, 22% think it is about right, and only 16% think it spends too little. (2% must not have answered.)

This question is vague about how much people would like to see the budget change. So it’s possible people only want a moderate decrease. But they must at least want enough to justify not being in the “about right” category, which presumably allows for at least a few percent of wiggle room in each direction.

I think a reasonable proxy for how much people want the budget to change is the net difference in opinion between “too much” and “too little”: So for Democrats this is 34 – 27 = 7%. For the general population it is 60 – 16 = 44%; and for Republicans it is 88 – 6 = 82%.

To make this a useful proxy, I need to scale it appropriately. Republicans in Congress say they want to cut federal spending by $1 trillion per year, which would be a reduction of about 23%; the Republican net figure of 82%, divided by 4, comes out to roughly that. So, for a reasonable proxy, I think ([too much] – [too little])/4 is about the desired amount of change.

Of course, it’s totally possible for 88% of people to agree that the budget should be cut 10%, and none of them to actually want the budget to be cut 22%. But without actually having survey data showing how much people want to cut the budget, the proportion who want it to be cut is the best proxy I have. And it definitely seems like most people want the budget to be cut.
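
In code, my whole back-of-the-envelope estimate looks like this (a rough sketch of the proxy in Python, using the poll figures above and the roughly $4.3 trillion in total federal spending discussed below):

    # Rough proxy: desired budget change ~ ([too much] - [too little]) / 4,
    # using the poll figures quoted above.
    poll = {
        "Democrats":          {"too_much": 34, "too_little": 27},
        "General population": {"too_much": 60, "too_little": 16},
        "Republicans":        {"too_much": 88, "too_little": 6},
    }

    total_spending = 4.3  # total federal spending, in trillions of dollars (see below)

    for group, p in poll.items():
        net = p["too_much"] - p["too_little"]
        desired_cut = net / 4  # percent
        implied_total = total_spending * (1 - desired_cut / 100)
        print(f"{group}: net {net}% -> cut ~{desired_cut:.0f}%, "
              f"i.e. ~${implied_total:.1f} trillion")
    # General population: net 44% -> cut ~11%, i.e. ~$3.8 trillion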

But cut where? What spending do people want to actually reduce?

Not much, it turns out:

Overwhelming majorities want to increase spending on education, healthcare, Social Security, infrastructure, Medicare, and assistance to the poor. A plurality wants to increase spending on border security, assistance for childcare, drug rehabilitation, the environment, and law enforcement. Overall opinion on military spending and scientific research seems to be that it’s about right, with some saying too high and others too low. That’s… almost the entire budget.

This AP-NORC poll found only three areas with strong support for cuts: assistance to big cities, space exploration, and assistance to other countries.

The survey just asked about “the government”, so people may be including opinions on state and local spending as well as federal spending. But let’s just focus for now on federal spending.

Here is what the current budget looks like, divided as closely as I could get it into the same categories that the poll asked about:

The federal government accounts for only a tiny portion of overall government spending on education, so for this purpose I’m just going to ignore that category; anything else would be far too misleading. I had to separately look up border security, foreign aid, space exploration, and scientific research, as they are normally folded into other categories. I decided to keep medical research under “health” and military R&D under “military”, so the “scientific research” category includes all other sciences—and as you’ll note, it’s quite small.

“Regional Development” includes but is by no means limited to aid to big cities; in fact, most of it goes to rural areas. With regard to federal spending, “Transportation” is basically synonymous with “Infrastructure”, so I’ll treat those as equivalent. Federal spending directly on environmental protection is so tiny that I couldn’t even make a useful category for it; for this purpose, I guess I’ll just assume it’s most of “Other” (though it surely isn’t).

As you can see, the lion’s share of the federal budget goes to three things: healthcare (including Medicare), Social Security, and the military. (As Krugman is fond of putting it: “The US government is an insurance company with an army.”)

Assistance to the poor is also a major category, and as well it should be. Debt interest is also pretty substantial, especially now that interest rates have increased, but that’s not really optional; the global financial system would basically collapse if we ever stopped paying that. The only realistic way to bring that down is to balance the budget so that we don’t keep racking up more debt.

After that… it’s all pretty small, relatively speaking. I mean, these are still tens of billions of dollars. But the US government is huge. When you spend $1.24 trillion (that’s $1,240 billion) on Social Security, that $24 billion for space exploration really doesn’t seem that big.

So, that’s what the budget actually looks like. What do people want it to look like? Well, on the one hand, they seem to want to cut it. My admittedly very rough estimate suggests they want to cut it by about 11%, which would reduce the total from $4.3 trillion to $3.8 trillion. That’s what they say if you ask about the budget as a whole.

But what if we listen to what they say about particular budget categories? Using my same rough estimate, people want to increase spending on healthcare by 12%, spending on Social Security by 14%, and so on.

The resulting new budget looks like this:

Please note two things:

  1. The overall distribution of budget priorities has not substantially changed.
  2. The total amount of spending is in fact moderately higher.

This new budget would be disastrous for Ukraine, painful for NASA, and pleasant for anyone receiving Social Security benefits; but our basic budget outlook would be unchanged. Total spending would rise to $4.6 trillion, about $300 billion more than what we are currently spending.

The things people say they want to cut wouldn’t make a difference: We could stop all space missions immediately and throw Ukraine completely under the bus, and it wouldn’t make a dent in our deficit.

This leaves us with something of a paradox: If you ask them in general what they want to do with the federal budget, the majority of Americans say they want to cut it, often drastically. But if you ask them about any particular budget category, they mostly agree that things are okay, or even want them to be increased. Moreover, it is some of the largest categories of spending—particularly healthcare and Social Security—that often see the most people asking for increases.

I think this tells us some good news and some bad news.

The bad news is that most Americans are quite ignorant about how government money is actually spent. They seem to imagine that huge amounts are frittered away frivolously on earmarks; they think space exploration is far more expensive than it is; they wildly overestimate how much we give in foreign aid; they clearly don’t understand the enormous benefits of funding basic scientific research. Most people seem to think that there is some enormous category of totally wasted money that could easily be saved through more efficient spending—and that just doesn’t seem to be the case. Maybe government spending could be made more efficient, but if so, we need an actual plan for doing that. We can’t just cut budgets and hope for a miracle.

The good news is that our political system, for all of its faults, actually seems to have resulted in a government budget that broadly reflects the actual priorities of our citizenry. On budget categories people like, such as Social Security and Medicare, we are already spending a huge amount. On budget categories people dislike, such as earmarks and space exploration, we are already spending very little. We basically already have the budget most Americans say they want to have.

What does this mean for balancing the budget and keeping the national debt under control?

It means we have to raise taxes. There just isn’t anything left to cut that wouldn’t be wildly unpopular.

This shouldn’t really be shocking. The US government already spends less as a proportion of GDP than most other First World countries [note: I’m using 2019 figures because recent years were distorted by COVID]. Ireland’s figures are untrustworthy due to their inflated leprechaun GDP; so the only unambiguously First World country that clearly has lower government spending than the US is Switzerland. We spend about 38%, which is still high by global standards—but as well it should be; we’re incredibly rich. And this is quite a bit lower than the 41% they spend in the UK or the 45% they spend in Germany, let alone the 49% they spend in Sweden or the whopping 55% they spend in France.

Of course, Americans really don’t like paying taxes either. But at some point, we’re just going to have to decide: Do we want fewer services, more debt, or more taxes? Because those are really our only options. I for one think we can handle more taxes.

How will AI affect inequality?

Oct 15 JDN 2460233

Will AI make inequality worse, or better? Could it do a bit of both? Does it depend on how we use it?

This is of course an extremely big question. In some sense it is the big economic question of the 21st century. The difference between the neofeudalist cyberpunk dystopia of Neuromancer and the social democratic utopia of Star Trek just about hinges on whether AI becomes a force for higher or lower inequality.

Krugman seems quite optimistic: Based on forecasts by Goldman Sachs, AI seems poised to automate more high-paying white-collar jobs than low-paying blue-collar ones.

But, well, it should be obvious that Goldman Sachs is not an impartial observer here. They do have reasons to get their forecasts right—their customers are literally invested in those forecasts—but like anyone who immensely profits from the status quo, they also have a broader agenda of telling the world that everything is going great and there’s no need to worry or change anything.

And when I look a bit closer at their graphs, it seems pretty clear that they aren’t actually answering the right question. They estimate an “exposure to AI” coefficient (somehow; their methodology is not clearly explained and lots of it is proprietary), and if it’s between 10% and 49% they call it “complementary” while if it’s 50% or above they call it “replacement”.

But that is not how complements and substitutes work. It isn’t a question of “how much of the work can be done by machine” (whatever that means). It’s a question of whether you will still need the expert human.

It could be that the machine does 90% of the work, but you still need a human being there to tell it what to do, and that would be complementary. (Indeed, this basically is how finance works right now, and I see no reason to think it will change any time soon.) Conversely, it could be that the machine only does 20% of the work, but that was the 20% that required expert skill, and so a once comfortable high-paying job can now be replaced by low-paid temp workers. (This is more or less what’s happening at Amazon warehouses: They are basically managed by AI, but humans still do most of the actual labor, and get paid peanuts for it.)

For their category “computer and mathematical”, they call it “complementary”, and I agree: We are still going to need people who can code. We’re still going to need people who know how to multiply matrices. We’re still going to need people who understand search algorithms. Indeed, if the past is any indicator, we’re going to need more and more of those people, and they’re going to keep getting paid higher and higher salaries. Someone has to make the AI, after all.

Yet I’m not quite so sure about the “mathematical” part in many cases. We may not need people who can solve differential equations, actually: maybe a few to design the algorithms, but honestly, even then, a program with a simple finite-difference algorithm can often solve much more interesting problems than one with a full-fledged differential-equation solver. One of the dirty secrets of differential equations is that for some of the most important ones (like the Navier-Stokes equations), we simply do not know how to solve them analytically. Once you have enough computing power, you can often stop trying to be clever and just brute-force the damn thing.
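
To make that concrete, here is a minimal sketch of what I mean by brute force (a toy example of my own, in Python, not anything any engineering team actually ships): a dozen lines that grind out the 1D heat equation with an explicit finite-difference scheme, no analytic solution in sight.

    import numpy as np

    # Toy brute-force solution of the 1D heat equation u_t = alpha * u_xx,
    # using an explicit finite-difference scheme instead of any analytic method.
    alpha = 0.01                  # diffusivity
    nx, nt = 101, 500             # grid points in space, steps in time
    dx = 1.0 / (nx - 1)
    dt = 0.4 * dx**2 / alpha      # small enough time step to keep the scheme stable

    u = np.zeros(nx)
    u[nx // 2] = 1.0              # initial condition: a spike of heat in the middle

    for _ in range(nt):
        # the second difference approximates u_xx at the interior grid points
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        u[0] = u[-1] = 0.0        # boundaries held at zero temperature

    print(u.round(3))             # the spike has diffused into a smooth bump

Nothing clever is happening there: it is just simple arithmetic repeated tens of thousands of times, which is exactly the kind of thing computers are good at and for which you no longer need an expert in differential equations.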

Yet for “transportation and material movement”—that is, trucking—Goldman Sachs confidently forecasts mostly “no automation” with a bit of “complementary”. Yet this year—not at some distant point in the future, not in some sci-fi novel, this year in the actual world—the Governor of California already vetoed a bill that would have required automated trucks to have human drivers. The trucks aren’t on the roads yet—but if we already are making laws about them, they’re going to be, soon. (State legislatures are not known for their brilliant foresight or excessive long-term thinking.) And if the law doesn’t require them to have human drivers, they probably won’t; which means that hundreds of thousands of long-haul truckers will suddenly be out of work.

It’s also important to differentiate between different types of jobs that may fall under the same category or industry.

Neurosurgeons are not going anywhere, and improved robotics will only allow them to perform better, safer laparoscopic surgeries. Nor are nurses going anywhere, because some things just need an actual person physically there with the patient. But general practitioners, psychotherapists, and even radiologists are already seeing many of their tasks automated. So is “medicine” being automated or not? That depends on what sort of medicine you mean. And yet it clearly means an increase in inequality, because it’s the middle-paying jobs (like GPs) that are going away, while the high-paying jobs (like neurosurgeons) and the low-paying jobs (like nurses) remain.

Likewise, consider “legal services”, which is one of the few industries that Goldman Sachs thinks will be substantially replaced by AI. Are high-stakes trial lawyers like Sam Bernstein getting replaced? Clearly not. Nor would I expect most corporate lawyers to disappear. Human lawyers will still continue to perform at least a little bit better than AI law systems, and the rich will continue to use them, because a few million dollars for a few percentage points better odds of winning is absolutely worth it when billions of dollars are on the line. So which law services are going to get replaced by AI? First, routine legal questions, like how to renew your work visa or set up a living will—it’s already happening. Next, someone will probably decide that public defenders aren’t worth the cost and start automating the legal defenses of poor people who get accused of crimes. (And to be honest, it may not be much worse than how things currently are in the public defender system.) The advantage of such a change is that it will most likely bring court costs down—and that is desperately needed. But it may also tilt the courts even further in favor of the rich. It may also make it even harder to start a career as a lawyer, cutting off the bottom of the ladder.

Or consider “management”, which Goldman Sachs thinks will be “complementary”. Are CEOs going to get replaced by AI? No, because the CEOs are the ones making that decision. Certainly this is true for any closely-held firm: No CEO is going to fire himself. Theoretically, if shareholders and boards of directors pushed hard enough, they might be able to get a CEO of a publicly-traded corporation ousted in favor of an AI, and if the world were really made of neoclassical rational agents, that might actually happen. But in the real world, the rich have tremendous solidarity for each other (and only each other), and very few billionaires are going to take aim at other billionaires when it comes time to decide whose jobs should be replaced. Yet, there are a lot of levels of management below the CEO and board of directors, and many of those are already in the process of being replaced: Instead of relying on the expert judgment of a human manager, it’s increasingly common to develop “performance metrics”, feed them into an algorithm, and use that result to decide who gets raises and who gets fired. It all feels very “objective” and “impartial” and “scientific”—and usually ends up being both dehumanizing and ultimately not even effective at increasing profits. At some point, many corporations are going to realize that their middle managers aren’t actually making any important decisions anymore, and they’ll feed that into the algorithm, and it will tell them to fire the middle managers.

Thus, even though we think of “medicine”, “law”, and “management” as high-paying careers, the effect of AI is largely going to be to increase inequality within those industries. It isn’t the really high-paid doctors, managers, and lawyers who are going to get replaced.

I am therefore much less optimistic than Krugman about this. I do believe there are many ways that technology, including artificial intelligence, could be used to make life better for everyone, and even perhaps one day lead us into a glorious utopian future.

But I don’t see most of the people who have the authority to make important decisions for our society actually working towards such a future. They seem much more interested in maximizing their own profits or advancing narrow-minded ideologies. (Or, as most right-wing political parties do today: Advancing narrow-minded ideologies about maximizing the profits of rich people.) And if we simply continue on the track we’ve been on, our future is looking a lot more like Neuromancer than it is like Star Trek.

Productivity can cope with laziness, but not greed

Oct 8 JDN 2460226

At least since Star Trek, it has been a popular vision of utopia: post-scarcity, an economy where goods are so abundant that there is no need for money or any kind of incentive to work, and people can just do what they want and have whatever they want.

It certainly does sound nice. But is it actually feasible? I’ve written about this before.

I’ve been reading some more books set in post-scarcity utopias, including works by Ursula K. Le Guin (who is a legend) and Cory Doctorow (who is merely pretty good). And it struck me that while there is one major problem of post-scarcity that they seem to have good solutions for, there is another one that they really don’t. (To their credit, neither author totally ignores it; they just don’t seem to see it as an insurmountable obstacle.)

The first major problem is laziness.

A lot of people assume that the reason we couldn’t achieve a post-scarcity utopia is that once your standard of living is no longer tied to your work, people would just stop working. I think this assumption rests on both an overly cynical view of human nature and an overly pessimistic view of technological progress.

Let’s do a thought experiment. If you didn’t get paid, and just had the choice to work or not, for whatever hours you wished, motivated only by the esteem of your peers, your contribution to society, and the joy of a job well done, how much would you work?

I contend it’s not zero. At least for most people, work does provide some intrinsic satisfaction. It’s also probably not as much as you are currently working; otherwise you wouldn’t insist on getting paid. Those are our lower and upper bounds.

Is it 80% of your current work? Perhaps not. What about 50%? Still too high? 20% seems plausible, but maybe you think that’s still too high. Surely it’s at least 10%. Surely you would be willing to work at least a few hours per week at a job you’re good at that you find personally fulfilling. My guess is that it would actually be more than that, because once people were free of the stress and pressure of working for a living, they would be more likely to find careers that truly brought them deep satisfaction and joy.

But okay, to be conservative, let’s estimate that people are only willing to work 10% as much under a system where labor is fully optional and there is no such thing as a wage. What kind of standard of living could we achieve?

Well, at the current level of technology and capital in the United States, per-capita GDP at purchasing power parity is about $80,000. 10% of that is $8,000. This may not sound like a lot, but it’s about how people currently live in Venezuela. India is slightly better, Ghana is slightly worse. This would feel poor to most Americans today, but it’s objectively a better standard of living than most humans have had throughout history, and not much worse than the world average today.

If per-capita GDP growth continues at its current rate of about 1.5% per year for another century, that $80,000 would become roughly $320,000, 10% of which is $32,000—that would put us at the standard of living of present-day Bulgaria, or what the United States was like in the distant past of [checks notes] 1980. That wouldn’t even feel poor. In fact, if literally everyone had this standard of living, nearly as many Americans would end up richer as would end up poorer, since the current median personal income is only a bit higher than that.
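
The arithmetic is easy enough to check (a quick Python sketch; note that compounding at exactly 1.5% actually gives a somewhat higher figure than the round $320,000 I am using above):

    current_gdp_pc = 80_000   # rough US per-capita GDP at purchasing power parity
    work_fraction = 0.10      # conservative guess: people work 10% as much
    growth_rate = 0.015       # per-capita GDP growth of about 1.5% per year
    years = 100

    today = current_gdp_pc * work_fraction
    future_gdp_pc = current_gdp_pc * (1 + growth_rate) ** years
    later = future_gdp_pc * work_fraction

    print(f"Standard of living at 10% work today:        ${today:,.0f}")
    print(f"Per-capita GDP a century from now:           ${future_gdp_pc:,.0f}")
    print(f"Standard of living at 10% work in a century: ${later:,.0f}")
    # Compounding at exactly 1.5% gives about $355,000 (so about $35,000 at
    # 10% work); the $320,000 above is the rounder, more conservative figure.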

Thus, the utopian authors are right about this one: Laziness is a solvable problem. We may not quite have it solved yet, but it’s on the ropes; a few more major breakthroughs in productivity-enhancing technology and we’ll basically be there.

In fact, on a small scale, this sort of utopian communist anarchy already works, and has for centuries. There are little places, all around the world, where people gather together and live and work in a sustainable, basically self-sufficient way without being motivated by wages or salaries, indeed often without owning any private property at all.

We call these places monasteries.

Granted, life in a monastery clearly isn’t for everyone: I certainly wouldn’t want to live a life of celibacy and constant religious observance. But the long-standing traditions of monastic life in several very different world religions do prove that it’s possible for human beings to live and even flourish in the absence of a profit motive.

Yet the fact that monastic life is so strict turns out to be no coincidence: In a sense, it had to be for the whole scheme to work. I’ll get back to that in a moment.

The second major problem with a post-scarcity utopia is greed.

This is the one that I think is the real barrier. It may not be totally insurmountable, but thus far I have yet to hear any good proposals that would seriously tackle it.

The issue with laziness is that we don’t really want to work as much as we do. But since we do actually want to work a little bit, the question is simply how to make as much as we currently do while working only as much as we want to. Hence, to deal with laziness, all we need to do is be more efficient. That’s something we are shockingly good at; the overall productivity of our labor is now something like 100 times what it was at the dawn of the Industrial Revolution, and still growing all the time.

Greed is different. The issue with greed is that, no matter how much we have, we always want more.

Some people are clearly greedier than others. In fact, I’m even willing to bet that most people’s greed could be kept in check by a society that provided for everyone’s basic needs for free. Yeah, maybe sometimes you’d fantasize about living in a gigantic mansion or going into outer space; but most of the time, most of us could actually be pretty happy as long as we had a roof over our heads and food on our tables. I know that in my own case, my grandest ambitions largely involve fighting global poverty—so if that became a solved problem, my life’s ambition would be basically fulfilled, and I wouldn’t mind so much retiring to a life of simple comfort.

But is everyone like that? This is what anarchists don’t seem to understand. In order for anarchy to work, you need everyone to fit into that society. Having most of us fit, or even nearly all of us, just won’t cut it.

Ammon Hennacy famously declared: “An anarchist is someone who doesn’t need a cop to make him behave.” But this is wrong. An anarchist is someone who thinks that no one needs a cop to make him behave. And while I am the former, I am not the latter.

Perhaps the problem is that anarchists don’t realize that not everyone is as good as they are. They implicitly apply their own mentality to everyone else, and assume that the only reason anyone ever cheats, steals, or kills is because their circumstances are desperate.

Don’t get me wrong: A lot of crime—perhaps even most crime—is committed by people who are desperate. Improving overall economic circumstances does in fact greatly reduce crime. But there is also a substantial proportion of crime—especially the most serious crimes—which is committed by people who aren’t particularly desperate; they are simply psychopaths. They aren’t victims of circumstance. They’re just evil. And society needs a way to deal with them.

If you set up a society so that anyone can just take whatever they want, there will be some people who take much more than their share. If you have no system of enforcement whatsoever, there’s nothing to stop a psychopath from just taking everything he can get his hands on. And then it really doesn’t matter how productive or efficient you are; whatever you make will simply get taken by whoever is greediest—or whoever is strongest.

In order to avoid that, you need to either set up a system that stops people from taking more than their share, or you need to find a way to exclude people like that from your society entirely.

This brings us back to monasteries. Why are they so strict? Why are the only places where utopian anarchism seems to flourish also places where people have to wear a uniform, swear vows, carry out complex rituals, and continually pledge their fealty to an authority? (Note, by the way, that I’ve also just described life in the military, which also has a lot in common with life in a monastery—and for much the same reasons.)

It’s a selection mechanism. Probably no one consciously thinks of it this way—indeed, it seems to be important to how monasteries work that people are not consciously weighing the costs and benefits of all these rituals. This is probably something that memetically evolved over centuries, rather than anything that was consciously designed. But functionally, that’s what it does: You only get to be part of a monastic community if you are willing to pay the enormous cost of following all these strict rules.

That makes it a form of costly signaling. Psychopaths are, in general, more prone to impulsiveness and short-term thinking. They are therefore less willing than others to bear the immediate cost of donning a uniform and following a ritual in order to get the long-term gains of living in a utopian community. This excludes psychopaths from ever entering the community, and thus protects against their predation.

Even celibacy may be a feature rather than a bug: Psychopaths are also prone to promiscuity. (And indeed, utopian communes that practice free love seem to have a much worse track record of being hijacked by psychopaths than monasteries that require celibacy!)

Of course, lots of people who aren’t psychopaths aren’t willing to pay those costs either—like I said, I’m not. So the selection mechanism is in a sense overly strict: It excludes people who would support the community just fine, but aren’t willing to pay the cost. But in the long run, this turns out to be less harmful than being too permissive and letting your community get hijacked and destroyed by psychopaths.

Yet if our goal is to make a whole society that achieves post-scarcity utopia, we can’t afford to be so strict. We already know that most people aren’t willing to become monks or nuns.

That means that we need a selection mechanism which is more reliable—more precisely, one with higher specificity.

I mentioned this in a previous post in the context of testing for viruses, but it bears repeating. Sensitivity and specificity are two complementary measures of a test’s accuracy. The sensitivity of a test is how likely it is to show positive if the truth is positive. The specificity of a test is how likely it is to show negative if the truth is negative.

As a test of psychopathy, monastic strictness has very high sensitivity: If you are a psychopath, there’s a very high chance it will weed you out. But it has quite low specificity: Even if you’re not a psychopath, there’s still a very high chance you won’t want to become a monk.
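
In confusion-matrix terms, that looks something like this (a quick Python sketch with made-up numbers, purely to illustrate the asymmetry, not data from any actual study of monastics):

    def sensitivity_and_specificity(tp, fn, tn, fp):
        """Sensitivity = P(test positive | truly positive);
        specificity = P(test negative | truly negative)."""
        return tp / (tp + fn), tn / (tn + fp)

    # Imagine screening 1,000 would-be utopians, 10 of whom are psychopaths.
    # Suppose monastic strictness weeds out 9 of the 10 psychopaths (true
    # positives) but also turns away 950 of the 990 decent people (false
    # positives). These numbers are invented purely for illustration.
    sens, spec = sensitivity_and_specificity(tp=9, fn=1, tn=40, fp=950)
    print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
    # -> sensitivity = 90%, specificity = 4%: very good at excluding
    #    psychopaths, very bad at not excluding everyone else.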

For a utopian society to work, we need something that’s more specific, something that won’t exclude a lot of people who don’t deserve to be excluded. But it still needs to have much the same sensitivity, because letting psychopaths into your utopia is a very easy way to let that utopia destroy itself. We do not yet have such a test, nor any clear idea how we might create one.

And that, my friends, is why we can’t have nice things. At least, not yet.

AI and the “generalization faculty”

Oct 1 JDN 2460219

The phrase “artificial intelligence” (AI) has now become so diluted by overuse that we needed to invent a new term for its original meaning. That term is now “artificial general intelligence” (AGI). In the 1950s, AI meant the hypothetical possibility of creating artificial minds—machines that could genuinely think and even feel like people. Now it means… pathing algorithms in video games and chatbots? The goalposts seem to have moved a bit.

It seems that AGI has always been 20 years away. It was 20 years away 50 years ago, and it will probably be 20 years away 50 years from now. Someday it will really be 20 years away, and then, 20 years after that, it will actually happen—but I doubt I’ll live to see it. (XKCD also offers some insight here: “It has not been conclusively proven impossible.”)

We make many genuine advances in computer technology and software, which have profound effects—both good and bad—on our lives, but the dream of making a person out of silicon always seems to drift ever further into the distance, like a mirage on the desert sand.

Why is this? Why do so many people—even, perhaps especially, experts in the field—keep thinking that we are on the verge of this seminal, earth-shattering breakthrough, and ending up wrong—over, and over, and over again? How do such obviously smart people keep making the same mistake?

I think it may be because, all along, we have been laboring under the tacit assumption of a generalization faculty.

What do I mean by that? By “generalization faculty”, I mean some hypothetical mental capacity that allows you to generalize your knowledge and skills across different domains, so that once you get good at one thing, it also makes you good at other things.

This certainly seems to be how humans think, at least some of the time: Someone who is very good at chess is likely also pretty good at go, and someone who can drive a motorcycle can probably also drive a car. An artist who is good at portraits is probably not bad at landscapes. Human beings are, in fact, able to generalize, at least sometimes.

But I think the mistake lies in imagining that there is just one thing that makes us good at generalizing: Just one piece of hardware or software that allows you to carry over skills from any domain to any other. This is the “generalization faculty”—the imagined faculty that I think we do not have, indeed I think does not exist.

Computers clearly do not have the capacity to generalize. A program that can beat grandmasters at chess may be useless at go, and self-driving software that works on one type of car may fail on another, let alone a motorcycle. An art program that is good at portraits of women can fail when trying to do portraits of men, and produce horrific Daliesque madness when asked to make a landscape.

But if they did somehow have our generalization capacity, then, once they could compete with us at some things—which they surely can, already—they would be able to compete with us at just about everything. So if it were really just one thing that would let them generalize, let them leap from AI to AGI, then suddenly everything would change, almost overnight.

And so this is how the AI hype cycle goes, time and time again:

  1. A computer program is made that does something impressive, something that other computer programs could not do, perhaps even something that human beings are not very good at doing.
  2. If that same prowess could be generalized to other domains, the result would plainly be something on par with human intelligence.
  3. Therefore, the only thing this computer program needs in order to be sapient is a generalization faculty.
  4. Therefore, there is just one more step to AGI! We are nearly there! It will happen any day now!

And then, of course, despite heroic efforts, we are unable to generalize that program’s capabilities except in some very narrow way—even decades after having good chess programs, getting programs to be good at go was a major achievement. We are unable to find the generalization faculty yet again. And the software becomes yet another “AI tool” that we will use to search websites or make video games.

For there never was a generalization faculty to be found. It always was a mirage in the desert sand.

Humans are in fact spectacularly good at generalizing, compared to, well, literally everything else in the known universe. Computers are terrible at it. Animals aren’t very good at it. Just about everything else is totally incapable of it. So yes, we are the best at it.

Yet we, in fact, are not particularly good at it in any objective sense.

In experiments, people often fail to generalize their reasoning even in very basic ways. There’s a famous one where we try to get people to make an analogy between a military tactic and a radiation treatment, and while very smart, creative people often get it quickly, most people are completely unable to make the connection unless you give them a lot of specific hints. People often struggle to find creative solutions to problems even when those solutions seem utterly obvious once you know them.

I don’t think this is because people are stupid or irrational. (To paraphrase Sydney Harris: Compared to what?) I think it is because generalization is hard.

People tend to be much better at generalizing within familiar domains where they have a lot of experience or expertise; this shows that there isn’t just one generalization faculty, but many. We may have a plethora of overlapping generalization faculties that apply across different domains, and can learn to improve some over others.

But it isn’t just a matter of gaining more expertise. Highly advanced expertise is in fact usually more specialized—harder to generalize. A good amateur chess player is probably a good amateur go player, but a grandmaster chess player is rarely a grandmaster go player. Someone who does well in high school biology probably also does well in high school physics, but most biologists are not very good physicists. (And lest you say it’s simply because go and physics are harder: The converse is equally true.)

Humans do seem to have a suite of cognitive tools—some innate hardware, some learned software—that allows us to generalize our skills across domains. But even after hundreds of millions of years of evolving that capacity under the highest possible stakes, we still basically suck at it.

To be clear, I do not think it will take hundreds of millions of years to make AGI—or even millions, or even thousands. Technology moves much, much faster than evolution. But I would not be surprised if it took centuries, and I am confident it will at least take decades.

But we don’t need AGI for AI to have powerful effects on our lives. Indeed, even now, AI is already affecting our lives—in mostly bad ways, frankly, as we seem to be hurtling gleefully toward the very same corporatist cyberpunk dystopia we were warned about in the 1980s.

A lot of technologies have done great things for humanity—sanitation and vaccines, for instance—and even automation can be a very good thing, as increased productivity is how we attained our First World standard of living. But AI in particular seems best at automating away the kinds of jobs human beings actually find most fulfilling, and worsening our already staggering inequality. As a civilization, we really need to ask ourselves why we got automated writing and art before we got automated sewage cleaning or corporate management. (We should also ask ourselves why automated stock trading resulted in even more money for stock traders, instead of putting them out of their worthless parasitic jobs.) There are technological reasons for this, yes; but there are also cultural and institutional ones. Automated teaching isn’t far away, and education will be all the worse for it.

To change our lives, AI doesn’t have to be good at everything. It just needs to be good at whatever we were doing to make a living. AGI may be far away, but the impact of AI is already here.

Indeed, I think this quixotic quest for AGI, and all the concern about how to control it and what effects it will have upon our society, may actually be distracting from the real harms that “ordinary” “boring” AI is already having upon our society. I think a Terminator scenario, where the machines rapidly surpass our level of intelligence and rise up to annihilate us, is quite unlikely. But a scenario where AI puts millions of people out of work with insufficient safety net, triggering economic depression and civil unrest? That could be right around the corner.

Frankly, all it may take is getting automated trucks to work, which could be just a few years. There are nearly 4 million truck drivers in the United States—a full percentage point of employment unto itself. And the Governor of California just vetoed a bill that would require all automated trucks to have human drivers. From an economic efficiency standpoint, his veto makes perfect sense: If the trucks don’t need drivers, why require them? But from an ethical and societal standpoint… what do we do with all the truck drivers!?