What most Americans think about government spending

Oct 22 JDN 2460240

American public opinion on government spending is a bit of a paradox. People say the government spends too much, but when you ask them what to cut, they don’t want to cut anything in particular.

This is how various demographics answer when you ask if, overall, the government spends “too much”, “too little”, or “about right”:

Democrats have a relatively balanced view, with about a third in each category. Republicans overwhelmingly agree that the government spends too much.

Let’s focus on the general population figures: 60% of Americans believe the government spends too much, 22% think it is about right, and only 16% think it spends too little. (2% must not have answered.)

This question is vague about how much people would like to see the budget change. So it’s possible people only want a moderate decrease. But they must at least want enough to justify not being in the “about right” category, which presumably allows for at least a few percent of wiggle room in each direction.

I think a reasonable proxy for how much people want the budget to change is the net difference in opinion between “too much” and “too little”. For Democrats this is 34 – 27 = 7%; for the general population it is 60 – 16 = 44%; and for Republicans it is 88 – 6 = 82%.

To make this a useful proxy, I need to scale it appropriately. Republicans in Congress say they want to cut federal spending by $1 trillion per year, which would be a reduction of about 23%. So, as a rough calibration, I’ll take ([too much] – [too little])/4 as the desired percentage change.

Of course, it’s entirely possible for 88% of people to agree that the budget should be cut 10%, and for none of them to actually want it cut by 20%. But without survey data showing how much people want to cut the budget, the proportion who want it cut is the best proxy I have. And it definitely seems like most people want the budget to be cut.
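Here is a minimal sketch of that proxy calculation, using the survey shares quoted above; the divisor of 4 is just my rough calibration against the Republican $1 trillion proposal, not anything from the poll itself:

```python
# Rough proxy for how much each group wants the budget to change:
# (share saying "too much" - share saying "too little") / 4,
# calibrated so the Republican net (~82%) lands near the ~23% cut
# implied by the $1 trillion congressional proposal.

survey = {
    "Democrats":          {"too_much": 34, "too_little": 27},
    "General population": {"too_much": 60, "too_little": 16},
    "Republicans":        {"too_much": 88, "too_little": 6},
}

for group, shares in survey.items():
    net = shares["too_much"] - shares["too_little"]
    desired_cut = net / 4
    print(f"{group}: net {net}%, implied desired cut ~{desired_cut:.0f}%")
```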

But cut where? What spending do people want to actually reduce?

Not much, it turns out:

Overwhelming majorities want to increase spending on education, healthcare, Social Security, infrastructure, Medicare, and assistance to the poor. Pluralities want to increase spending on border security, assistance for childcare, drug rehabilitation, the environment, and law enforcement. Overall opinion on military spending and scientific research seems to be that each is about right, with some saying too high and others too low. That’s… almost the entire budget.

This AP-NORC poll found only three areas with strong support for cuts: assistance to big cities, space exploration, and assistance to other countries.

The survey just asked about “the government”, so people may be including opinions on state and local spending as well as federal spending. But let’s just focus for now on federal spending.

Here is what the current budget looks like, divided as closely as I could get it into the same categories that the poll asked about:

The federal government accounts for only a tiny portion of overall government spending on education, so for this purpose I’m just going to ignore that category; anything else would be far too misleading. I had to separately look up border security, foreign aid, space exploration, and scientific research, as they are normally folded into other categories. I decided to keep the medical research under “health” and military R&D under “military”, so the “scientific research” includes all other sciences—and as you’ll note, it’s quite small.

“Regional Development” includes but is by no means limited to aid to big cities; in fact, most of it goes to rural areas. With regard to federal spending, “Transportation” is basically synonymous with “Infrastructure”, so I’ll treat those as equivalent. Federal spending directly on environmental protection is so tiny that I couldn’t even make a useful category for it; for this purpose, I guess I’ll just assume it’s most of “Other” (though it surely isn’t).

As you can see, the lion’s share of the federal budget goes to three things: healthcare (including Medicare), Social Security, and the military. (As Krugman is fond of putting it: “The US government is an insurance company with an army.”)

Assistance to the poor is also a major category, and as well it should be. Debt interest is also pretty substantial, especially now that interest rates have increased, but that’s not really optional; the global financial system would basically collapse if we ever stopped paying that. The only realistic way to bring that down is to balance the budget so that we don’t keep racking up more debt.

After that… it’s all pretty small, relatively speaking. I mean, these are still tens of billions of dollars. But the US government is huge. When you spend $1.24 trillion (that’s $1,240 billion) on Social Security, that $24 billion for space exploration really doesn’t seem that big.

So, that’s what the budget actually looks like. What do people want it to look like? Well on the one hand, they seem to want to cut it. My admittedly very rough estimate suggests they want to cut it about 11%, which would reduce the total from $4.3 trillion to $3.8 trillion. That’s what they say if you ask about the budget as a whole.

But what if we listen to what they say about particular budget categories? Using my same rough estimate, people want to increase spending on healthcare by 12%, spending on Social Security by 14%, and so on.
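As a sketch of how that recomputation works: the spending levels and most of the net-opinion numbers below are illustrative placeholders rather than the exact chart and poll figures; the healthcare and Social Security entries are chosen so the implied changes match the 12% and 14% above.

```python
# Hypothetical illustration: apply the same (too little - too much)/4 proxy
# to each budget category. Spending levels (in $ billions) and net-opinion
# shares are illustrative placeholders, not the actual chart/poll figures.

categories = {
    #                          spending ($bn), net % wanting an increase
    "Health (incl. Medicare)": (1700,  48),
    "Social Security":         (1240,  56),
    "Military":                 (800,   0),
    "Assistance to the poor":   (450,  40),
    "Foreign aid":               (60, -50),
    "Space exploration":         (24, -30),
}

old_total = sum(spend for spend, _ in categories.values())
new_total = 0.0
for name, (spend, net_increase) in categories.items():
    change = net_increase / 4 / 100          # proxy: net opinion / 4
    new_spend = spend * (1 + change)
    new_total += new_spend
    print(f"{name}: ${spend}bn -> ${new_spend:,.0f}bn ({change:+.0%})")

print(f"Total for these categories: ${old_total:,}bn -> ${new_total:,.0f}bn")
```

The point of the exercise is visible in the last line: because the biggest categories get increases, the total goes up even though people say they want the budget cut.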

The resulting new budget looks like this:

Please note two things:

  1. The overall distribution of budget priorities has not substantially changed.
  2. The total amount of spending is in fact moderately higher.

This new budget would be disastrous for Ukraine, painful for NASA, and pleasant for anyone receiving Social Security benefits; but our basic budget outlook would be unchanged. Total spending would rise to $4.6 trillion, about $300 billion more than what we are currently spending.

The things people say they want to cut wouldn’t make a difference: We could stop all space missions immediately and throw Ukraine completely under the bus, and it wouldn’t make a dent in our deficit.

This leaves us with something of a paradox: If you ask them in general what they want to do with the federal budget, the majority of Americans say they want to cut it, often drastically. But if you ask them about any particular budget category, they mostly agree that things are okay, or even want them to be increased. Moreover, it is some of the largest categories of spending—particularly healthcare and Social Security—that often see the most people asking for increases.

I think this tells us some good news and some bad news.

The bad news is that most Americans are quite ignorant about how government money is actually spent. They seem to imagine that huge amounts are frittered away frivolously on earmarks; they think space exploration is far more expensive than it is; they wildly overestimate how much we give in foreign aid; they clearly don’t understand the enormous benefits of funding basic scientific research. Most people seem to think that there is some enormous category of totally wasted money that could easily be saved through more efficient spending—and that just doesn’t seem to be the case. Maybe government spending could be made more efficient, but if so, we need an actual plan for doing that. We can’t just cut budgets and hope for a miracle.

The good news is that our political system, for all of its faults, actually seems to have resulted in a government budget that broadly reflects the actual priorities of our citizenry. On budget categories people like, such as Social Security and Medicare, we are already spending a huge amount. On budget categories people dislike, such as earmarks and space exploration, we are already spending very little. We basically already have the budget most Americans say they want to have.

What does this mean for balancing the budget and keeping the national debt under control?

It means we have to raise taxes. There just isn’t anything left to cut that wouldn’t be wildly unpopular.

This shouldn’t really be shocking. The US government already spends less as a proportion of GDP than most other First World countries [note: I’m using 2019 figures because recent years were distorted by COVID]. Ireland’s figures are untrustworthy due to their inflated leprechaun GDP; so the only unambiguously First World country that clearly has lower government spending than the US is Switzerland. We spend about 38%, which is still high by global standards—but as well it should be, we’re incredibly rich. And this is quite a bit lower than the 41% they spend in the UK or the 45% they spend in Germany, let alone the 49% they spend in Sweden or the whopping 55% they spend in France.

Of course, Americans really don’t like paying taxes either. But at some point, we’re just going to have to decide: Do we want fewer services, more debt, or more taxes? Because those are really our only options. I for one think we can handle more taxes.

How will AI affect inequality?

Oct 15 JDN 2460233

Will AI make inequality worse, or better? Could it do a bit of both? Does it depend on how we use it?

This is of course an extremely big question. In some sense it is the big economic question of the 21st century. The difference between the neofeudalist cyberpunk dystopia of Neuromancer and the social democratic utopia of Star Trek just about hinges on whether AI becomes a force for higher or lower inequality.

Krugman seems quite optimistic: Based on forecasts by Goldman Sachs, AI seems poised to automate more high-paying white-collar jobs than low-paying blue-collar ones.

But, well, it should be obvious that Goldman Sachs is not an impartial observer here. They do have reasons to get their forecasts right—their customers are literally invested in those forecasts—but like anyone who immensely profits from the status quo, they also have a broader agenda of telling the world that everything is going great and there’s no need to worry or change anything.

And when I look a bit closer at their graphs, it seems pretty clear that they aren’t actually answering the right question. They estimate an “exposure to AI” coefficient (somehow; their methodology is not clearly explained and lots of it is proprietary), and if it’s between 10% and 49% they call it “complementary” while if it’s 50% or above they call it “replacement”.

But that is not how complements and substitutes work. It isn’t a question of “how much of the work can be done by machine” (whatever that means). It’s a question of whether you will still need the expert human.

It could be that the machine does 90% of the work, but you still need a human being there to tell it what to do, and that would be complementary. (Indeed, this basically is how finance works right now, and I see no reason to think it will change any time soon.) Conversely, it could be that the machine only does 20% of the work, but that was the 20% that required expert skill, and so a once comfortable high-paying job can now be replaced by low-paid temp workers. (This is more or less what’s happening at Amazon warehouses: They are basically managed by AI, but humans still do most of the actual labor, and get paid peanuts for it.)

For their category “computer and mathematical”, they call it “complementary”, and I agree: We are still going to need people who can code. We’re still going to need people who know how to multiply matrices. We’re still going to need people who understand search algorithms. Indeed, if the past is any indicator, we’re going to need more and more of those people, and they’re going to keep getting paid higher and higher salaries. Someone has to make the AI, after all.

Yet I’m not quite so sure about the “mathematical” part in many cases. We may not need people who can solve differential equations, actually: maybe a few to design the algorithms, but honestly even then, a software program with a simple finite-difference algorithm can often solve much more interesting problems than one with a full-fledged differential equation solver, because one of the dirty secrets of differential equations is that for some of the most important ones (like the Navier-Stokes Equations), we simply do not know how to solve them. Once you have enough computing power, you often can stop trying to be clever and just brute-force the damn thing.
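To illustrate the brute-force point, here is a minimal sketch (nothing like a production fluid solver, and the 1D heat equation is far tamer than Navier-Stokes) of an explicit finite-difference scheme marching a solution forward in time instead of solving anything in closed form:

```python
import numpy as np

# Brute-force the 1D heat equation u_t = alpha * u_xx with an explicit
# finite-difference scheme, rather than seeking an analytical solution.
alpha = 1.0          # diffusivity
nx, nt = 101, 2000   # grid points in space, number of time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha   # small enough to keep the explicit scheme stable

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-100 * (x - 0.5) ** 2)   # initial condition: a bump in the middle

for _ in range(nt):
    # second spatial difference at the interior points
    u[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0               # fixed (Dirichlet) boundaries

print(f"peak temperature after diffusion: {u.max():.4f}")
```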

Yet for “transportation and material movement”—that is, trucking—Goldman Sachs confidently forecasts mostly “no automation” with a bit of “complementary”. But this year—not at some distant point in the future, not in some sci-fi novel, this year in the actual world—the Governor of California already vetoed a bill that would have required automated trucks to have human drivers. The trucks aren’t on the roads yet—but if we are already making laws about them, they’re going to be, soon. (State legislatures are not known for their brilliant foresight or excessive long-term thinking.) And if the law doesn’t require them to have human drivers, they probably won’t have them, which means that hundreds of thousands of long-haul truckers will suddenly be out of work.

It’s also important to differentiate between different types of jobs that may fall under the same category or industry.

Neurosurgeons are not going anywhere, and improved robotics will only allow them to perform better, safer laparoscopic surgeries. Nor are nurses going anywhere, because some things just need an actual person physically there with the patient. But general practitioners, psychotherapists, and even radiologists are already seeing many of their tasks automated. So is “medicine” being automated or not? That depends what sort of medicine you mean. And yet it clearly means an increase in inequality, because it’s the middle-paying jobs (like GPs) that are going away, while the high-paying jobs (like neurosurgeons) and the low-paying jobs (like nurses) remain.

Likewise, consider “legal services”, which is one of the few industries that Goldman Sachs thinks will be substantially replaced by AI. Are high-stakes trial lawyers like Sam Bernstein getting replaced? Clearly not. Nor would I expect most corporate lawyers to disappear. Human lawyers will still continue to perform at least a little bit better than AI law systems, and the rich will continue to use them, because a few million dollars for a few percentage points better odds of winning is absolutely worth it when billions of dollars are on the line. So which law services are going to get replaced by AI? First, routine legal questions, like how to renew your work visa or set up a living will—it’s already happening. Next, someone will probably decide that public defenders aren’t worth the cost and start automating the legal defenses of poor people who get accused of crimes. (And to be honest, it may not be much worse than how things currently are in the public defender system.) The advantage of such a change is that it will most likely bring court costs down—and that is desperately needed. But it may also tilt the courts even further in favor of the rich. It may also make it even harder to start a career as a lawyer, cutting off the bottom of the ladder.

Or consider “management”, which Goldman Sachs thinks will be “complementary”. Are CEOs going to get replaced by AI? No, because the CEOs are the ones making that decision. Certainly this is true for any closely-held firm: No CEO is going to fire himself. Theoretically, if shareholders and boards of directors pushed hard enough, they might be able to get a CEO of a publicly-traded corporation ousted in favor of an AI, and if the world were really made of neoclassical rational agents, that might actually happen. But in the real world, the rich have tremendous solidarity for each other (and only each other), and very few billionaires are going to take aim at other billionaires when it comes time to decide whose jobs should be replaced. Yet, there are a lot of levels of management below the CEO and board of directors, and many of those are already in the process of being replaced: Instead of relying on the expert judgment of a human manager, it’s increasingly common to develop “performance metrics”, feed them into an algorithm, and use that result to decide who gets raises and who gets fired. It all feels very “objective” and “impartial” and “scientific”—and usually ends up being both dehumanizing and ultimately not even effective at increasing profits. At some point, many corporations are going to realize that their middle managers aren’t actually making any important decisions anymore, and they’ll feed that into the algorithm, and it will tell them to fire the middle managers.

Thus, even though we think of “medicine”, “law”, and “management” as high-paying careers, the effect of AI is largely going to be to increase inequality within those industries. It isn’t the really high-paid doctors, managers, and lawyers who are going to get replaced.

I am therefore much less optimistic than Krugman about this. I do believe there are many ways that technology, including artificial intelligence, could be used to make life better for everyone, and even perhaps one day lead us into a glorious utopian future.

But I don’t see most of the people who have the authority to make important decisions for our society actually working towards such a future. They seem much more interested in maximizing their own profits or advancing narrow-minded ideologies. (Or, as most right-wing political parties do today: Advancing narrow-minded ideologies about maximizing the profits of rich people.) And if we simply continue on the track we’ve been on, our future is looking a lot more like Neuromancer than it is like Star Trek.

Productivity can cope with laziness, but not greed

Oct 8 JDN 2460226

At least since Star Trek, it has been a popular vision of utopia: post-scarcity, an economy where goods are so abundant that there is no need for money or any kind of incentive to work, and people can just do what they want and have whatever they want.

It certainly does sound nice. But is it actually feasible? I’ve written about this before.

I’ve been reading some more books set in post-scarcity utopias, including Ursula K. Le Guin (who is a legend) and Cory Doctorow (who is merely pretty good). And it struck me that while there is one major problem of post-scarcity that they seem to have good solutions for, there is another one that they really don’t. (To their credit, neither author totally ignores it; they just don’t seem to see it as an insurmountable obstacle.)

The first major problem is laziness.

A lot of people assume that the reason we couldn’t achieve a post-scarcity utopia is that once your standard of living is no longer tied to your work, people would just stop working. I think this assumption rests on both an overly cynical view of human nature and an overly pessimistic view of technological progress.

Let’s do a thought experiment. If you didn’t get paid, and just had the choice to work or not, for whatever hours you wished, motivated only by the esteem of your peers, your contribution to society, and the joy of a job well done, how much would you work?

I contend it’s not zero. At least for most people, work does provide some intrinsic satisfaction. It’s also probably not as much as you are currently working; otherwise you wouldn’t insist on getting paid. Those are our lower and upper bounds.

Is it 80% of your current work? Perhaps not. What about 50%? Still too high? 20% seems plausible, but maybe you think that’s still too high. Surely it’s at least 10%. Surely you would be willing to work at least a few hours per week at a job you’re good at that you find personally fulfilling. My guess is that it would actually be more than that, because once people were free of the stress and pressure of working for a living, they would be more likely to find careers that truly brought them deep satisfaction and joy.

But okay, to be conservative, let’s estimate that people are only willing to work 10% as much under a system where labor is fully optional and there is no such thing as a wage. What kind of standard of living could we achieve?

Well, at the current level of technology and capital in the United States, per-capita GDP at purchasing power parity is about $80,000. 10% of that is $8,000. This may not sound like a lot, but it’s about how people currently live in Venezuela. India is slightly better, Ghana is slightly worse. This would feel poor to most Americans today, but it’s objectively a better standard of living than most humans have had throughout history, and not much worse than the world average today.

If per-capita GDP growth continues at its current rate of about 1.5% per year for another century, that $80,000 would become $320,000, 10% of which is $32,000—that would put us at the standard of living of present-day Bulgaria, or what the United States was like in the distant past of [checks notes] 1980. That wouldn’t even feel poor. In fact, if literally everyone had this standard of living, nearly as many Americans would end up richer than they are now as would end up poorer, since the current median personal income is only a bit higher than that.
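A quick sketch of the compounding arithmetic (note that compounding 1.5% for a full century actually multiplies by about 4.4, a bit more than the rounded 4x behind the $320,000 figure):

```python
# Compound per-capita GDP growth, then take the 10% "optional work" share.
gdp_per_capita = 80_000   # rough current US per-capita GDP at PPP
growth_rate = 0.015       # ~1.5% per year
years = 100
work_share = 0.10         # assume people work only 10% as much

future_gdp = gdp_per_capita * (1 + growth_rate) ** years
print(f"per-capita GDP in {years} years: ~${future_gdp:,.0f}")
print(f"at {work_share:.0%} of that effort: ~${future_gdp * work_share:,.0f}")
```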

Thus, the utopian authors are right about this one: Laziness is a solvable problem. We may not quite have it solved yet, but it’s on the ropes; a few more major breakthroughs in productivity-enhancing technology and we’ll basically be there.

In fact, on a small scale, this sort of utopian communist anarchy already works, and has for centuries. There are little places, all around the world, where people gather together and live and work in a sustainable, basically self-sufficient way without being motivated by wages or salaries, indeed often without owning any private property at all.

We call these places monasteries.

Granted, life in a monastery clearly isn’t for everyone: I certainly wouldn’t want to live a life of celibacy and constant religious observance. But the long-standing traditions of monastic life in several very different world religions do prove that it’s possible for human beings to live and even flourish in the absence of a profit motive.

Yet the fact that monastic life is so strict turns out to be no coincidence: In a sense, it had to be for the whole scheme to work. I’ll get back to that in a moment.

The second major problem with a post-scarcity utopia is greed.

This is the one that I think is the real barrier. It may not be totally insurmountable, but thus far I have yet to hear any good proposals that would seriously tackle it.

The issue with laziness is that we don’t really want to work as much as we do. But since we do actually want to work a little bit, the question is simply how to make as much as we currently do while working only as much as we want to. Hence, to deal with laziness, all we need to do is be more efficient. That’s something we are shockingly good at; the overall productivity of our labor is now something like 100 times what it was at the dawn of the Industrial Revolution, and still growing all the time.

Greed is different. The issue with greed is that, no matter how much we have, we always want more.

Some people are clearly greedier than others. In fact, I’m even willing to bet that most people’s greed could be kept in check by a society that provided for everyone’s basic needs for free. Yeah, maybe sometimes you’d fantasize about living in a gigantic mansion or going into outer space; but most of the time, most of us could actually be pretty happy as long as we had a roof over our heads and food on our tables. I know that in my own case, my grandest ambitions largely involve fighting global poverty—so if that became a solved problem, my life’s ambition would be basically fulfilled, and I wouldn’t mind so much retiring to a life of simple comfort.

But is everyone like that? This is what anarchists don’t seem to understand. In order for anarchy to work, you need everyone to fit into that society; most of us fitting in, or even nearly all of us, just won’t cut it.

Ammon Hennacy famously declared: “An anarchist is someone who doesn’t need a cop to make him behave.” But this is wrong. An anarchist is someone who thinks that no one needs a cop to make him behave. And while I am the former, I am not the latter.

Perhaps the problem is that anarchists don’t realize that not everyone is as good as they are. They implicitly apply their own mentality to everyone else, and assume that the only reason anyone ever cheats, steals, or kills is because their circumstances are desperate.

Don’t get me wrong: A lot of crime—perhaps even most crime—is committed by people who are desperate. Improving overall economic circumstances does in fact greatly reduce crime. But there is also a substantial proportion of crime—especially the most serious crimes—which is committed by people who aren’t particularly desperate, they are simply psychopaths. They aren’t victims of circumstance. They’re just evil. And society needs a way to deal with them.

If you set up a society so that anyone can just take whatever they want, there will be some people who take much more than their share. If you have no system of enforcement whatsoever, there’s nothing to stop a psychopath from just taking everything he can get his hands on. And then it really doesn’t matter how productive or efficient you are; whatever you make will simply get taken by whoever is greediest—or whoever is strongest.

In order to avoid that, you need to either set up a system that stops people from taking more than their share, or you need to find a way to exclude people like that from your society entirely.

This brings us back to monasteries. Why are they so strict? Why are the only places where utopian anarchism seems to flourish also places where people have to wear a uniform, swear vows, carry out complex rituals, and continually pledge their fealty to an authority? (Note, by the way, that I’ve also just described life in the military, which also has a lot in common with life in a monastery—and for much the same reasons.)

It’s a selection mechanism. Probably no one consciously thinks of it this way—indeed, it seems to be important to how monasteries work that people are not consciously weighing the costs and benefits of all these rituals. This is probably something that memetically evolved over centuries, rather than anything that was consciously designed. But functionally, that’s what it does: You only get to be part of a monastic community if you are willing to pay the enormous cost of following all these strict rules.

That makes it a form of costly signaling. Psychopaths are, in general, more prone to impulsiveness and short-term thinking. They are therefore less willing than others to bear the immediate cost of donning a uniform and following a ritual in order to get the long-term gains of living in a utopian community. This excludes psychopaths from ever entering the community, and thus protects against their predation.

Even celibacy may be a feature rather than a bug: Psychopaths are also prone to promiscuity. (And indeed, utopian communes that practice free love seem to have a much worse track record of being hijacked by psychopaths than monasteries that require celibacy!)

Of course, lots of people who aren’t psychopaths aren’t willing to pay those costs either—like I said, I’m not. So the selection mechanism is in a sense overly strict: It excludes people who would support the community just fine, but aren’t willing to pay the cost. But in the long run, this turns out to be less harmful than being too permissive and letting your community get hijacked and destroyed by psychopaths.

Yet if our goal is to make a whole society that achieves post-scarcity utopia, we can’t afford to be so strict. We already know that most people aren’t willing to become monks or nuns.

That means that we need a selection mechanism which is more reliable—more precisely, one with higher specificity.

I mentioned this in a previous post in the context of testing for viruses, but it bears repeating. Sensitivity and specificity are two complementary measures of a test’s accuracy. The sensitivity of a test is how likely it is to show positive if the truth is positive. The specificity of a test is how likely it is to show negative if the truth is negative.

As a test of psychopathy, monastic strictness has very high sensitivity: If you are a psychopath, there’s a very high chance it will weed you out. But it has quite low specificity: Even if you’re not a psychopath, there’s still a very high chance you won’t want to become a monk.
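In code, the two measures are just conditional frequencies. Here is a minimal sketch with made-up counts chosen to mimic the monastic case: nearly every psychopath screened out, but most non-psychopaths screened out too.

```python
def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity: P(test positive | truly positive).
       Specificity: P(test negative | truly negative)."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Toy numbers for "monastic strictness" as a test for psychopathy:
# 99 of 100 psychopaths are screened out (high sensitivity), but
# 950 of 1,000 non-psychopaths are screened out too (low specificity).
sens, spec = sensitivity_specificity(true_pos=99, false_neg=1,
                                     true_neg=50, false_pos=950)
print(f"sensitivity: {sens:.2f}, specificity: {spec:.2f}")
```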

For a utopian society to work, we need something that’s more specific, something that won’t exclude a lot of people who don’t deserve to be excluded. But it still needs to have much the same sensitivity, because letting psychopaths into your utopia is a very easy way to let that utopia destroy itself. We do not yet have such a test, nor any clear idea how we might create one.

And that, my friends, is why we can’t have nice things. At least, not yet.

AI and the “generalization faculty”

Oct 1 JDN 2460219

The phrase “artificial intelligence” (AI) has now become so diluted by overuse that we needed to invent a new term for its original meaning. That term is now “artificial general intelligence” (AGI). In the 1950s, AI meant the hypothetical possibility of creating artificial minds—machines that could genuinely think and even feel like people. Now it means… pathing algorithms in video games and chatbots? The goalposts seem to have moved a bit.

It seems that AGI has always been 20 years away. It was 20 years away 50 years ago, and it will probably be 20 years away 50 years from now. Someday it will really be 20 years away, and then, 20 years after that, it will actually happen—but I doubt I’ll live to see it. (XKCD also offers some insight here: “It has not been conclusively proven impossible.”)

We make many genuine advances in computer technology and software, which have profound effects—both good and bad—on our lives, but the dream of making a person out of silicon always seems to drift ever further into the distance, like a mirage on the desert sand.

Why is this? Why do so many people—even, perhaps especially, experts in the field—keep thinking that we are on the verge of this seminal, earth-shattering breakthrough, and ending up wrong—over, and over, and over again? How do such obviously smart people keep making the same mistake?

I think it may be because, all along, we have been laboring under the tacit assumption of a generalization faculty.

What do I mean by that? By “generalization faculty”, I mean some hypothetical mental capacity that allows you to generalize your knowledge and skills across different domains, so that once you get good at one thing, it also makes you good at other things.

This certainly seems to be how humans think, at least some of the time: Someone who is very good at chess is likely also pretty good at go, and someone who can drive a motorcycle can probably also drive a car. An artist who is good at portraits is probably not bad at landscapes. Human beings are, in fact, able to generalize, at least sometimes.

But I think the mistake lies in imagining that there is just one thing that makes us good at generalizing: Just one piece of hardware or software that allows you to carry over skills from any domain to any other. This is the “generalization faculty”—the imagined faculty that I think we do not have, indeed I think does not exist.

Computers clearly do not have the capacity to generalize. A program that can beat grandmasters at chess may be useless at go, and self-driving software that works on one type of car may fail on another, let alone a motorcycle. An art program that is good at portraits of women can fail when trying to do portraits of men, and produce horrific Daliesque madness when asked to make a landscape.

But if they did somehow have our generalization capacity, then, once they could compete with us at some things—which they surely can, already—they would be able to compete with us at just about everything. So if it were really just one thing that would let them generalize, let them leap from AI to AGI, then suddenly everything would change, almost overnight.

And so this is how the AI hype cycle goes, time and time again:

  1. A computer program is made that does something impressive, something that other computer programs could not do, perhaps even something that human beings are not very good at doing.
  2. If that same prowess could be generalized to other domains, the result would plainly be something on par with human intelligence.
  3. Therefore, the only thing this computer program needs in order to be sapient is a generalization faculty.
  4. Therefore, there is just one more step to AGI! We are nearly there! It will happen any day now!

And then, of course, despite heroic efforts, we are unable to generalize that program’s capabilities except in some very narrow way—even decades after having good chess programs, getting programs to be good at go was a major achievement. We are unable to find the generalization faculty yet again. And the software becomes yet another “AI tool” that we will use to search websites or make video games.

For there never was a generalization faculty to be found. It always was a mirage in the desert sand.

Humans are in fact spectacularly good at generalizing, compared to, well, literally everything else in the known universe. Computers are terrible at it. Animals aren’t very good at it. Just about everything else is totally incapable of it. So yes, we are the best at it.

Yet we, in fact, are not particularly good at it in any objective sense.

In experiments, people often fail to generalize their reasoning even in very basic ways. There’s a famous one where we try to get people to make an analogy between a military tactic and a radiation treatment, and while very smart, creative people often get it quickly, most people are completely unable to make the connection unless you give them a lot of specific hints. People often struggle to find creative solutions to problems even when those solutions seem utterly obvious once you know them.

I don’t think this is because people are stupid or irrational. (To paraphrase Sydney Harris: Compared to what?) I think it is because generalization is hard.

People tend to be much better at generalizing within familiar domains where they have a lot of experience or expertise; this shows that there isn’t just one generalization faculty, but many. We may have a plethora of overlapping generalization faculties that apply across different domains, and can learn to improve some over others.

But it isn’t just a matter of gaining more expertise. Highly advanced expertise is in fact usually more specialized—harder to generalize. A good amateur chess player is probably a good amateur go player, but a grandmaster chess player is rarely a grandmaster go player. Someone who does well in high school biology probably also does well in high school physics, but most biologists are not very good physicists. (And lest you say it’s simply because go and physics are harder: The converse is equally true.)

Humans do seem to have a suite of cognitive tools—some innate hardware, some learned software—that allows us to generalize our skills across domains. But even after hundreds of millions of years of evolving that capacity under the highest possible stakes, we still basically suck at it.

To be clear, I do not think it will take hundreds of millions of years to make AGI—or even millions, or even thousands. Technology moves much, much faster than evolution. But I would not be surprised if it took centuries, and I am confident it will at least take decades.

But we don’t need AGI for AI to have powerful effects on our lives. Indeed, even now, AI is already affecting our lives—in mostly bad ways, frankly, as we seem to be hurtling gleefully toward the very same corporatist cyberpunk dystopia we were warned about in the 1980s.

A lot of technologies have done great things for humanity—sanitation and vaccines, for instance—and even automation can be a very good thing, as increased productivity is how we attained our First World standard of living. But AI in particular seems best at automating away the kinds of jobs human beings actually find most fulfilling, and worsening our already staggering inequality. As a civilization, we really need to ask ourselves why we got automated writing and art before we got automated sewage cleaning or corporate management. (We should also ask ourselves why automated stock trading resulted in even more money for stock traders, instead of putting them out of their worthless parasitic jobs.) There are technological reasons for this, yes; but there are also cultural and institutional ones. Automated teaching isn’t far away, and education will be all the worse for it.

To change our lives, AI doesn’t have to be good at everything. It just needs to be good at whatever we were doing to make a living. AGI may be far away, but the impact of AI is already here.

Indeed, I think this quixotic quest for AGI, and all the concern about how to control it and what effects it will have upon our society, may actually be distracting from the real harms that “ordinary” “boring” AI is already having upon our society. I think a Terminator scenario, where the machines rapidly surpass our level of intelligence and rise up to annihilate us, is quite unlikely. But a scenario where AI puts millions of people out of work with insufficient safety net, triggering economic depression and civil unrest? That could be right around the corner.

Frankly, all it may take is getting automated trucks to work, which could be just a few years. There are nearly 4 million truck drivers in the United States—a full percentage point of employment unto itself. And the Governor of California just vetoed a bill that would require all automated trucks to have human drivers. From an economic efficiency standpoint, his veto makes perfect sense: If the trucks don’t need drivers, why require them? But from an ethical and societal standpoint… what do we do with all the truck drivers!?

The inequality of factor mobility

Sep 24 JDN 2460212

I’ve written before about how free trade has brought great benefits, but also great costs. It occurred to me this week that there is a fairly simple reason why free trade has never been as good for the world as the models would suggest: Some factors of production are harder to move than others.

To some extent this is due to policy, especially immigration policy. But it isn’t just that. There are certain inherent limitations that render some kinds of inputs more mobile than others.

Broadly speaking, there are five kinds of inputs to production: Land, labor, capital, goods, and—oft forgotten—ideas.

You can of course parse them differently: Some would subdivide different types of labor or capital, and some things are hard to categorize this way. The same product, such as an oven or a car, can be a good or capital depending on how it’s used. (Or, consider livestock: is that labor, or capital? Or perhaps it’s a good? Oddly, it’s often discussed as land, which just seems absurd.) Maybe ideas can be considered a form of capital. There is a whole literature on human capital, which I increasingly find distasteful, because it seems to imply that economists couldn’t figure out how to value human beings except by treating them as a machine or a financial asset.

But this five-way categorization is particularly useful for what I want to talk about today, because the rate at which those things move is very different.

Ideas move instantly. It takes literally milliseconds to transmit an idea anywhere in the world. This wasn’t always true; in ancient times ideas didn’t move much faster than people, and it wasn’t until the invention of the telegraph that their transit really became instantaneous. But it is certainly true now; once this post is published, it can be read in a hundred different countries in seconds.

Goods move in hours. Air shipping can take a product just about anywhere in less than a day. Sea shipping is a bit slower, but not radically so. It’s never been easier to move goods all around the world, and this has been the great success of free trade.

Capital moves in weeks. Here it might be useful to subdivide different types of capital: It’s surely faster to move an oven or even a car (the more good-ish sort of capital) than it is to move an entire factory (capital par excellence). But all in all, we can move stuff pretty fast these days. If you want to move your factory to China or Indonesia, you can probably get it done in a matter of weeks or at most months.

Labor moves in months. This one is a bit ironic, since it is surely easier to carry a single human person—or even a hundred human people—than all the equipment necessary to run an entire factory. But moving labor isn’t just a matter of physically carrying people from one place to another. It’s not like tourism, where you just pack and go. Moving labor requires uprooting people from where they used to live and letting them settle in a new place. It takes a surprisingly long time to establish yourself in a new environment—frankly even after two years in Edinburgh I’m not sure I quite managed it. And all the additional restrictions we’ve added involving border crossings and immigration laws and visas only make it that much slower.

Land moves never. This one seems perfectly obvious, but is also often neglected. You can’t pick up a mountain, a lake, a forest, or even a corn field and carry it across the border. (Yes, eventually plate tectonics will move our land around—but that’ll be millions of years.) Basically, land stays put—and so do all the natural environments and ecosystems on that land. Land isn’t as important for production as it once was; before industrialization, we were dependent on the land for almost everything. But we absolutely still are dependent on the land! If all the topsoil in the world suddenly disappeared, the economy wouldn’t simply collapse: the human race would face extinction. Moreover, a lot of fixed infrastructure, while technically capital, is no more mobile than land. We couldn’t much more easily move the Interstate Highway System to China than we could move Denali.

So far I have said nothing particularly novel. Yeah, clearly it’s much easier to move a mathematical theorem (if such a thing can even be said to “move”) than it is to move a factory, and much easier to move a factory than to move a forest. So what?

But now let’s consider the impact this has on free trade.

Ideas can move instantly, so free trade in ideas would allow all the world to instantaneously share all ideas. This isn’t quite what happens—but in the Internet age, we’re remarkably close to it. If anything, the world’s governments seem to be doing their best to stop this from happening: One of our most strictly-enforced trade agreements, the TRIPS Accord, is about stopping ideas from spreading too easily. And as far as I can tell, region-coding on media goes against everything free trade stands for, yet here we are. (Why, it’s almost as if these policies are more about corporate profits than they ever were about freedom!)

Goods and capital can move quickly. This is where we have really felt the biggest effects of free trade: Everything in the US says “made in China” because the capital is moved to China and then the goods are moved back to the US.

But it would honestly have made more sense to move all those workers instead. For all their obvious flaws, US institutions and US infrastructure are clearly superior to those in China. (Indeed, consider this: We may be so aware of the flaws because the US is especially transparent.) So, the most absolutely efficient way to produce all those goods would be to leave the factories in the US, and move the workers from China instead. If free trade were to achieve its greatest promises, this is the sort of thing we would be doing.


Of course that is not what we did. There are various reasons for this: A lot of the people in China would rather not have to leave. The Chinese government would not want them to leave. A lot of people in the US would not want them to come. The US government might not want them to come.

Most of these reasons are ultimately political: People don’t want to live around people who are from a different nation and culture. They don’t consider those people to be deserving of the same rights and status as those of their own country.

It may sound harsh to say it that way, but it’s clearly the truth. If the average American person valued a random Chinese person exactly the same as they valued a random other American person, our immigration policy would look radically different. US immigration is relatively permissive by world standards, and that is a great part of American success. Yet even here there is a very stark divide between the citizen and the immigrant.

There are morally and economically legitimate reasons to regulate immigration. There may even be morally and economically legitimate reasons to value those in your own nation above those in other nations (though I suspect they would not justify the degree that most people do). But the fact remains that in terms of pure efficiency, the best thing to do would obviously be to move all the people to the place where productivity is highest and do everything there.

But wouldn’t moving people there reduce the productivity? Yes. Somewhat. If you actually tried to concentrate the entire world’s population into the US, productivity in the US would surely go down. So, okay, fine; stop moving people to a more productive place when it has ceased to be more productive. What this should do is average out all the world’s labor productivity to the same level—but a much higher level than the current world average, and frankly probably quite close to its current maximum.

Once you consider that moving people and things does have real costs, maybe fully equalizing productivity wouldn’t make sense. But it would be close. The differences in productivity across countries would be small.

They are not small.

Labor productivity worldwide varies tremendously. I don’t count Ireland, because that’s Leprechaun Economics (this is really US GDP with accounting tricks, not Irish GDP). So the prize for highest productivity goes to Norway, at $100 per worker hour (#ScandinaviaIsBetter). The US is doing the best among large countries, at an impressive $73 per hour. And at the very bottom of the list, we have places like Bangladesh at $4.79 per hour and Cambodia at $3.43 per hour. So, roughly speaking, there is a 20-to-1 or even 30-to-1 ratio between the most productive and least productive countries.

I could believe that it’s not worth it to move US production at $73 per hour to Norway to get it up to $100 per hour. (For one thing, where would we fit it all?) But I find it far more dubious that it wouldn’t make sense to move most of Cambodia’s labor to the US. (Even all 16 million people is less than what the US added between 2010 and 2020.) Even given the fact that these Cambodian workers are less healthy and less educated than American workers, they would almost certainly be more productive on the other side of the Pacific, quite likely ten times as productive as they are now. Yet we haven’t moved them, and have no plans to.

That leaves the question of whether we will move our capital to them. We have been doing so in China, and it worked (to a point). Before that, we did it in Korea and Japan, and it worked. Cambodia will probably come along sooner or later. For now, that seems to be the best we can do.

But I still can’t shake the thought that the world is leaving trillions of dollars on the table by refusing to move people. The inequality of factor mobility seems to be a big part of the world’s inequality, period.

What is anxiety for?

Sep 17 JDN 2460205

As someone who experiences a great deal of anxiety, I have often struggled to understand what it could possibly be useful for. We have this whole complex system of evolved emotions, and yet more often than not it seems to harm us rather than help us. What’s going on here? Why do we even have anxiety? What even is anxiety, really? And what is it for?

There’s actually an extensive body of research on this, though very few firm conclusions. (One of the best accounts I’ve read, sadly, is paywalled.)

For one thing, there seem to be a lot of positive feedback loops involved in anxiety: Panic attacks make you more anxious, triggering more panic attacks; being anxious disrupts your sleep, which makes you more anxious. Positive feedback loops can very easily spiral out of control, resulting in responses that are wildly disproportionate to the stimulus that triggered them.

A certain amount of stress response is useful, even when the stakes are not life-or-death. But beyond a certain point, more stress becomes harmful rather than helpful. This is the Yerkes-Dodson effect, for which I developed my stochastic overload model (which I still don’t know if I’ll ever publish, ironically enough, because of my own excessive anxiety). Realizing that anxiety can have benefits can also take some of the bite out of having chronic anxiety, and, ironically, reduce that anxiety a little. The trick is finding ways to break those positive feedback loops.

I think one of the most useful insights to come out of this research is the smoke-detector principle, which is a fundamentally economic concept. It sounds quite simple: When dealing with an uncertain danger, sound the alarm if the expected benefit of doing so exceeds the expected cost.

This has profound implications when risk is highly asymmetric—as it usually is. Running away from a shadow or a noise that probably isn’t a lion carries some cost; you wouldn’t want to do it all the time. But it is surely nowhere near as bad as failing to run away when there is an actual lion. Indeed, it might be fair to say that failing to run away from an actual lion counts as one of the worst possible things that could ever happen to you, and could easily be 100 times as bad as running away when there is nothing to fear.

With this in mind, if you have a system for detecting whether or not there is a lion, how sensitive should you make it? Extremely sensitive. You should in fact try to calibrate it so that 99% of the time you experience the fear and want to run away, there is not a lion. Because the 1% of the time when there is one, it’ll all be worth it.
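Here is a minimal sketch of that calibration, with made-up costs in which failing to flee an actual lion is 100 times worse than a wasted sprint:

```python
# Smoke-detector principle: sound the alarm whenever the expected cost of
# ignoring the cue at least matches the (certain) cost of a false alarm.
cost_false_alarm = 1.0     # running away from a harmless shadow
cost_missed_lion = 100.0   # failing to run from an actual lion

def should_run(p_lion):
    """Run if the expected cost of staying matches or exceeds a false alarm."""
    return p_lion * cost_missed_lion >= cost_false_alarm

# Break-even probability: cost_false_alarm / cost_missed_lion = 1%,
# so an alarm that is wrong 99% of the time is still worth heeding.
for p in (0.002, 0.01, 0.1):
    print(f"P(lion) = {p:.1%}: run = {should_run(p)}")
```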

Yet this is far from a complete explanation of anxiety as we experience it. For one thing, there has never been, in my entire life, even a 1% chance that I’m going to be attacked by a lion. Even standing in front of a lion enclosure at the zoo, my chances of being attacked are considerably less than that—for a zoo that allowed 1% of its customers to be attacked would not stay in business very long.

But for another thing, it isn’t really lions I’m afraid of. The things that make me anxious are generally not things that would be expected to do me bodily harm. Sure, I generally try to avoid walking down dark alleys at night, and I look both ways before crossing the street, and those are activities directly designed to protect me from bodily harm. But I actually don’t feel especially anxious about those things! Maybe I would if I actually had to walk through dark alleys a lot, but I don’t, and on the rare occasion I do, I think I’d feel afraid at the time but fine afterward, rather than experiencing persistent, pervasive, overwhelming anxiety. (Whereas, if I’m anxious about reading emails, and I do manage to read emails, I’m usually still anxious afterward.) When it comes to crossing the street, I feel very little fear at all, even though perhaps I should—indeed, it has been remarked that when it comes to the perils of motor vehicles, human beings suffer from a very dangerous lack of fear. We should be much more afraid than we are—and our failure to be afraid kills thousands of people.

No, the things that make me anxious are invariably social: Meetings, interviews, emails, applications, rejection letters. Also parties, networking events, and back when I needed them, dates. They involve interacting with other people—and in particular being evaluated by other people. I never felt particularly anxious about exams, except maybe a little before my PhD qualifying exam and my thesis defenses; but I can understand those who do, because it’s the same thing: People are evaluating you.

This suggests that anxiety, at least of the kind that most of us experience, isn’t really about danger; it’s about status. We aren’t worried that we will be murdered or tortured or even run over by a car. We’re worried that we will lose our friends, or get fired; we are worried that we won’t get a job, won’t get published, or won’t graduate.

And yet it is striking to me that it often feels just as bad as if we were afraid that we were going to die. In fact, in the most severe instances where anxiety feeds into depression, it can literally make people want to die. How can that be evolutionarily adaptive?

Here it may be helpful to remember that in our ancestral environment, status and survival were oft one and the same. Humans are the most social organisms on Earth; I even sometimes describe us as hypersocial, a whole new category of social that no other organism seems to have achieved. We cooperate with others of our species on a mind-bogglingly grand scale, and are utterly dependent upon vast interconnected social systems far too large and complex for us to truly understand, let alone control.

At this historical epoch, these social systems are especially vast and incomprehensible; but at least for most of us in First World countries, they are also forgiving in a way that is fundamentally alien to our ancestors’ experience. It was not so long ago that a failed hunt or a bad harvest would let your family starve unless you could beseech your community for aid successfully—which meant that your very survival could depend upon being in the good graces of that community. But now we have food stamps, so even if everyone in your town hates you, you still get to eat. Of course some societies are more forgiving (Sweden) than others (the United States); and virtually all societies could be even more forgiving than they are. But even the relatively cutthroat competition of the US today has far less genuine risk of truly catastrophic failure than what most human beings lived through for most of our existence as a species.

I have found this realization helpful—hardly a cure, but helpful, at least: What are you really afraid of? When you feel anxious, your body often tells you that the stakes are overwhelming, life-or-death; but if you stop and think about it, in the world we live in today, that’s almost never true. Failing at one important task at work probably won’t get you fired—and even getting fired won’t really make you starve.

In fact, we might be less anxious if it were! For our bodies’ fear system seems to be optimized for the following scenario: An immediate threat with high chance of success and life-or-death stakes. Spear that wild animal, or jump over that chasm. It will either work or it won’t, you’ll know immediately; it probably will work; and if it doesn’t, well, that may be it for you. So you’d better not fail. (I think it’s interesting how much of our fiction and media involves these kinds of events: The hero would surely and promptly die if he fails, but he won’t fail, for he’s the hero! We often seem more comfortable in that sort of world than we do in the one we actually live in.)

Whereas the life we live in now is one of delayed consequences with low chance of success and minimal stakes. Send out a dozen job applications. Hear back in a week from three that want to interview you. Do those interviews and maybe one will make you an offer—but honestly, probably not. Next week do another dozen. Keep going like this, week after week, until finally one says yes. Each failure actually costs you very little—but you will fail, over and over and over and over.

In other words, we have transitioned from an environment of immediate return to one of delayed return.

The result is that a system which was optimized to tell us never fail or you will die is being put through situations where failure is constantly repeated. I think deep down there is a part of us that wonders, “How are you still alive after failing this many times?” If you had fallen in as many ravines as I have received rejection letters, you would assuredly be dead many times over.

Yet perhaps our brains are not quite as miscalibrated as they seem. Again I come back to the fact that anxiety always seems to be about people and evaluation; it’s different from immediate life-or-death fear. I actually experience very little life-or-death fear, which makes sense; I live in a very safe environment. But I experience anxiety almost constantly—which also makes a certain amount of sense, seeing as I live in an environment where I am being almost constantly evaluated by other people.

One theory posits that anxiety and depression are a dual mechanism for dealing with social hierarchy: You are anxious when your position in the hierarchy is threatened, and depressed when you have lost it. Primates like us do seem to care an awful lot about hierarchies—and I’ve written before about how this explains some otherwise baffling things about our economy.

But I for one have never felt especially invested in hierarchy. At least, I have very little desire to be on top of the hierarchy. I don’t want to be on the bottom (for I know how such people are treated); and I strongly dislike most of the people who are actually on top (for they’re most responsible for treating the ones on the bottom that way). I also have ‘a problem with authority’; I don’t like other people having power over me. But if I were to somehow find myself ruling the world, one of the first things I’d do is try to figure out a way to transition to a more democratic system. So it’s less that I want power, and more that I want power to not exist. Which means that my anxiety can’t really be about fearing to lose my status in the hierarchy—in some sense, I want that, because I want the whole hierarchy to collapse.

If anxiety involved the fear of losing high status, we’d expect it to be common among those with high status. Quite the opposite is the case. Anxiety is more common among people who are more vulnerable: Women, racial minorities, poor people, people with chronic illness. LGBT people have especially high rates of anxiety. This suggests that it isn’t high status we’re afraid of losing—though it could still be that we’re a few rungs above the bottom and afraid of falling all the way down.

It also suggests that anxiety isn’t entirely pathological. Our brains are genuinely responding to circumstances. Maybe they are over-responding, or responding in a way that is not ultimately useful. But the anxiety is at least in part a product of real vulnerabilities. Some of what we’re worried about may actually be real. If you cannot carry yourself with the confidence of a mediocre White man, it may be simply because his status is fundamentally secure in a way yours is not, and he has been afforded a great many advantages you never will be. He never had a Supreme Court ruling decide his rights.

I cannot offer you a cure for anxiety. I cannot even really offer you a complete explanation of where it comes from. But perhaps I can offer you this: It is not your fault. Your brain evolved for a very different world than this one, and it is doing its best to protect you from the very different risks this new world engenders. Hopefully one day we’ll figure out a way to get it calibrated better.

Knowing When to Quit

Sep 10 JDN 2460198

At the time of writing this post, I have officially submitted my letter of resignation at the University of Edinburgh. I’m giving them an entire semester of notice, so I won’t actually be leaving until December. But I have committed to my decision now, and that feels momentous.

Since my position here was temporary to begin with, I’m actually only leaving a semester early. Part of me wanted to try to stick it out, continue for that one last semester and leave on better terms. Until I sent that letter, I had that option. Now I don’t, and I feel a strange mix of emotions: Relief that I have finally made the decision, regret that it came to this, doubt about what comes next, and—above all—profound ambivalence.

Maybe it’s the very act of quitting—giving up, being a quitter—that feels bad. Even knowing that I need to get out of here, it hurts to have to be the one to quit.

Our society prizes grit and perseverance. Since I was a child I have been taught that these are virtues. And to some extent, they are; there certainly is such a thing as giving up too quickly.

But there is also such a thing as not knowing when to quit. Sometimes things really aren’t going according to plan, and you need to quit before you waste even more time and effort. And I think I am like Randall Munroe in this regard; I am more inclined to stay when I shouldn’t than to quit when I shouldn’t:

Sometimes quitting isn’t even as permanent as it is made out to be. In many cases, you can go back later and try again when you are better prepared.

In my case, I am unlikely to ever work at the University of Edinburgh again, but I haven’t yet given up on ever having a career in academia. Then again, I am by no means as certain as I once was that academia is the right path for me. I will definitely be searching for other options.

There is a reason we are so enthusiastically sold on the virtue of perseverance. Part of how our society sells the false narrative of meritocracy is by claiming that people who succeed did so because they tried harder or kept on trying.

This is not entirely false; all other things equal, you are more likely to succeed if you keep on trying. But in some ways that just makes it more seductive and insidious.

For the real reason most people hit home runs in life is that they were born on third base. The vast majority of success in life is determined by circumstances entirely outside individual control.


Even having the resources to keep trying is not guaranteed for everyone. I remember a great post on social media pointing out that entrepreneurship is like one of those carnival games:

Entrepreneurship is like one of those carnival games where you throw darts or something.

Middle class kids can afford one throw. Most miss. A few hit the target and get a small prize. A very few hit the center bullseye and get a bigger prize. Rags to riches! The American Dream lives on.

Rich kids can afford many throws. If they want to, they can try over and over and over again until they hit something and feel good about themselves. Some keep going until they hit the center bullseye, then they give speeches or write blog posts about ‘meritocracy’ and the salutary effects of hard work.

Poor kids aren’t visiting the carnival. They’re the ones working it.

The odds of succeeding on any given attempt are slim—but you can always pay for more tries. A middle-class person can afford to try once; mostly those attempts will fail, but a few will succeed and then go on to talk about how their brilliant talent and hard work made the difference. A rich person can try as many times as they like, and when they finally succeed, they can credit their success to perseverance and a willingness to take risks. But the truth is, they didn’t have any exceptional reserves of grit or courage; they just had exceptional reserves of money.

In my case, I was not depleting money (if anything, I’m probably losing out financially by leaving early, though that very much depends on how the job market goes for me): It was something far more valuable. I was whittling away at my own mental health, depleting my energy, draining my motivation. The resource I was exhausting was my very soul.

I still have trouble articulating why it has been so painful for me to work here. It’s so hard to point to anything in particular.

The most obvious downsides were things I knew at the start: The position is temporary, the pay is mediocre, and I had to move across the Atlantic and live thousands of miles from home. And I had already heard plenty about the publish-or-perish system of academic research.

Other things seem like minor annoyances: They never did give me a good office (I have to share it with too many people, and there isn’t enough space, so in fact I rarely use it at all). They were supposed to assign me a faculty mentor and never did. They kept rearranging my class schedule and not telling me things until immediately beforehand.

I think what it really comes down to is I didn’t realize how much it would hurt. I knew that I was moving across the Atlantic—but I didn’t know how isolated and misunderstood I would feel when I did. I knew that publish-or-perish was a problem—but I didn’t know how agonizing it would be for me in particular. I knew I probably wouldn’t get very good mentorship from the other faculty—but I didn’t realize just how bad it would be, or how desperately I would need that support I didn’t get.

I either underestimated the severity of these problems, or overestimated my own resilience. I thought I knew what I was going into, and I thought I could take it. But I was wrong. I couldn’t take it. It was tearing me apart. My only answer was to leave.

So, leave I shall. I have now committed to doing so.

I don’t know what comes next. I don’t even know if I’ve made the right choice. Perhaps I’ll never truly know. But I made the choice, and now I have to live with it.

The rise and plateau of China’s economy

Sep 3 JDN 2460191

It looks like China’s era of extremely rapid economic growth may be coming to an end. Consumer confidence in China cratered this year (and, in typical authoritarian fashion, the agency responsible just quietly stopped publishing the data after that). Current forecasts have China’s economy growing only about 4-5% this year, which would be very impressive for a First World country—but far below the 6%, 7%, even 8% annual growth rates China had in recent years.

Some slowdown was quite frankly inevitable. A surprising number of people—particularly those in or from China—seem to think that China’s ultra-rapid growth was something special about China that could be expected to continue indefinitely.

China’s growth does look really impressive, in isolation:

But in fact this is a pattern we’ve seen several times now (admittedly mostly in Asia): A desperately poor Third World country finally figures out how to get its act together, and suddenly has extremely rapid growth for a while until it manages to catch up and become a First World country.

It happened in South Korea:

It happened in Japan:

It happened in Taiwan:

It even seems to be happening in Botswana:

And this is a good thing! These are the great success stories of economic development. If we could somehow figure out how to do this all over the world, it might literally be the best thing that ever happened. (It would solve so many problems!)

Here’s a more direct comparison across all these countries (as well as the US), on a log scale:

From this you can pretty clearly see two things.

First, as countries get richer, their growth tends to slow down gradually. By the time Japan, Korea, and Taiwan reached the level that the US had been at back in 1950, their growth slowed to a crawl. But that was okay, because they had already become quite rich.

And second, China is nothing special: Yes, their growth rate is faster than the US, because the US is already so rich. But they are following the same pattern as several other countries. In fact they’ve actually fallen behind Botswana—they used to be much richer than Botswana, and are now slightly poorer.

There are many news articles discussing why China’s economy is slowing down, and some of them may even have some merit (they really do seem to have screwed up their COVID response, for instance, and their terrible housing price bubble just burst); but the ultimate reason is really that 7% annual economic growth is just not sustainable. It will slow down. When and how remains in question—but it will happen.
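
To see why, here is a rough back-of-the-envelope sketch in Python (the starting incomes and growth rates are purely illustrative assumptions, not actual GDP data): at 7% a year an economy roughly doubles every decade, so a poor country can close a large income gap within a few decades; but sustaining that pace for another fifty years would imply a roughly thirty-fold increase in output, which is not plausible once the easy gains from catching up to the technological frontier are exhausted.

```python
# Back-of-the-envelope sketch (all figures purely illustrative, not real GDP
# data): how long can "catch-up" growth at 7% per year continue before a
# poor country reaches the income level of a rich one growing at 2%?

def years_to_converge(poor_income, rich_income, poor_growth=0.07, rich_growth=0.02):
    """Count the years until the fast-growing country catches up."""
    years = 0
    while poor_income < rich_income:
        poor_income *= 1 + poor_growth
        rich_income *= 1 + rich_growth
        years += 1
    return years

# Hypothetical starting point: the poor country at one fifth the rich one's income.
print(years_to_converge(poor_income=10_000, rich_income=50_000))  # ~34 years
print(round(1.07 ** 50))  # ~29: what fifty MORE years at 7% would multiply output by
```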

Thus, I am not particularly worried about the fact that China’s growth has slowed down. Or at least, I wouldn’t be, if China were governed well and had prepared for this obvious eventuality the way that Korea and Japan did. But what does worry me is that they seem unprepared for this. Their authoritarian government seems to have depended upon sky-high economic growth to sustain support for their regime. The cracks are now forming in that dam, and something terrible could happen when it bursts.

Things may even be worse than they look, because we know that the Chinese government often distorts or omits statistics when they become inconvenient. That can only work for so long: Eventually the reality on the ground will override whatever lies the government is telling.

There are basically two ways this could go: They could reform their government to something closer to a liberal democracy, accept that growth will slow down and work toward more shared prosperity, and then take their place as a First World country like Japan did. Or they could try to cling to their existing regime, gripping ever tighter until it all slips out of their fingers in a potentially catastrophic collapse. Unfortunately, they seem to be opting for the latter.

I hope I’m wrong. I hope that China will find its way toward a future of freedom and prosperity.

But at this point, it doesn’t look terribly likely.

Why are political speeches so vacuous?

Aug 27 JDN 2460184

In last week’s post I talked about how posters for shows at the Fringe seem to be attention-grabbing but almost utterly devoid of useful information.

This brings to mind another sort of content that also fits that description: political speeches.

While there are some exceptions—including in fact some of the greatest political speeches ever made, such as Martin Luther King’s “I have a dream” or Dwight Eisenhower’s “Cross of Iron”—on the whole, most political speeches seem to be incredibly vacuous.

Each country probably has its own unique flavor of vacuousness, but in the US they talk about motherhood, and apple pie, and American exceptionalism. “I love my great country, we are an amazing country, I’m so proud to live here” is basically the extent of the information conveyed within what could well be a full hour-long oration.

This raises a question: Why? Why don’t political speeches typically contain useful information?

It’s not that there’s no useful information to be conveyed: There are all sorts of things that people would like to know about a political candidate, including how honest they are, how competent they are, and the whole range of policies they intend to support or oppose on a variety of issues.

But most of what you’d like to know about a candidate actually comes in one of two varieties: Cheap talk, or controversy.

Cheap talk is the part related to being honest and competent. Basically every voter wants candidates who are honest and competent, and we know all too well that not all candidates qualify. The problem is, how do they show that they are honest and competent? They could simply assert it, but that’s basically meaningless—anybody could assert it. In fact, Donald Trump is the candidate who leaps to mind as the most eager to frequently assert his own honesty and competence, and also the most successful candidate in at least my lifetime who seems to utterly and totally lack anything resembling these qualities.

So unless you are clever enough to find ways to demonstrate your honesty and competence, you’re really not accomplishing anything by asserting it. Most people simply won’t believe you, and they’re right not to. So it doesn’t make much sense to spend a lot of effort trying to make such assertions.

Alternatively, you could try to talk about policy: say what you would like to do regarding climate change, the budget, the military, the healthcare system, or any of dozens of other political questions. That would absolutely be useful information for voters, and it isn’t just cheap talk, because different candidates do intend different things and voters would like to know which ones are which.

The problem, then, is that it’s controversial. Not everyone is going to agree with your particular take on any given political issue—even within your own party there is bound to be substantial disagreement.

If enough voters were sufficiently rational about this, and could coolly evaluate a candidate’s policies, accepting the pros and cons, then it would still make sense to deliver this information. I for one would rather vote for someone I know agrees with me 90% of the time than someone who won’t even tell me what they intend to do while in office.

But in fact most voters are not sufficiently rational about this. Voters react much more strongly to negative information than positive information: A candidate you agree with 9 times out of 10 can still make you utterly outraged by their stance on issue number 10. This is a specific form of the more general phenomenon of negativity bias: Psychologically, people just react a lot more strongly to bad things than to good things. Negativity bias has strong effects on how people vote, especially young people.

Rather than a cool-headed, rational assessment of pros and cons, most voters base their decision on deal-breakers: “I could never vote for a Republican” or “I could never vote for someone who wants to cut the military”. Only after they’ve excluded a large portion of candidates based on these heuristics do they even try to look closer at the detailed differences between candidates.

This means that, if you are a candidate, your best option is to avoid offering any deal-breakers. You want to say things that almost nobody will strongly disagree with—because any strong disagreement could be someone’s deal-breaker and thereby hurt your poll numbers.

And what’s the best way to not say anything that will offend or annoy anyone? Not say anything at all. Campaign managers basically need to Mirandize their candidates: You have the right to remain silent. Anything you say can and will be used against you in the court of public opinion.

But in fact you can’t literally remain silent—when running for office, you are expected to make a lot of speeches. So you do the next best thing: You say a lot of words, but convey very little meaning. You say things like “America is great” and “I love apple pie” and “Moms are heroes” that, while utterly vapid, are very unlikely to make anyone particularly angry at you or be any voter’s deal-breaker.

And then we get into a Nash equilibrium where everyone is talking like this, nobody is saying anything, and political speeches become entirely devoid of useful content.
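
A toy illustration of that equilibrium, sketched in Python (the payoff numbers are invented, chosen only to capture the idea that hitting a deal-breaker costs more votes than substance gains): when voters punish controversy more heavily than they reward detail, being vague is the best response no matter what your opponent does.

```python
# Toy game (payoff numbers invented, not from the post): two candidates each
# choose to give "vague" or "substantive" speeches. Taking clear positions
# attracts some voters, but also risks hitting someone's deal-breaker; with
# negativity bias, the expected loss outweighs the expected gain.

from itertools import product

GAIN_FROM_SUBSTANCE = 2    # net voters won over by clear positions
LOSS_FROM_DEALBREAKER = 5  # net voters lost to deal-breakers (weighted more heavily)

def payoff(me, opponent):
    """Expected change in my vote share, relative to the other candidate."""
    net = GAIN_FROM_SUBSTANCE - LOSS_FROM_DEALBREAKER  # = -3: substance is a net loss
    score = 0
    if me == "substantive":
        score += net
    if opponent == "substantive":
        score -= net
    return score

for a, b in product(["vague", "substantive"], repeat=2):
    print(f"A: {a:11s}  B: {b:11s}  payoff to A: {payoff(a, b):+d}")

# Whatever the opponent does, "vague" beats "substantive" by 3, so
# (vague, vague) is the unique Nash equilibrium -- even though voters would
# learn far more if both candidates spoke substantively.
```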

What can we as voters do about this? Individually, perhaps nothing. Collectively, literally everything.

If we could somehow shift the equilibrium so that candidates who are brave enough to make substantive, controversial claims get rewarded for it—even when we don’t entirely agree with them—while those who continue to recite insipid nonsense are punished, then candidates will absolutely change how they speak.

But this would require a lot of people to change, more or less all at once. A sufficiently large critical mass of voters would need to be willing to support candidates specifically because they made detailed policy proposals, even if we didn’t particularly like those policy proposals.

Obviously, if their policy proposals were terrible, we’d have good reason to reject them; but for this to work, we need to be willing to support a lot of things that are just… kind of okay. Because it’s vanishingly unlikely that the first candidates who are brave enough to say what they intend will also be ones whose intentions we entirely agree with. We need to set some kind of threshold of minimum agreement, and reward anyone who exceeds it. We need to ask ourselves if our deal-breakers really need to be deal-breakers.

The Fringe: An overwhelming embarrassment of riches

Aug 20 JDN 2460177

As I write this, Edinburgh is currently in the middle of The Fringe: It’s often described as an “arts and culture festival”, but mainly it consists of a huge number of theatre and comedy performances that go on across the city in hundreds of venues all month long. It’s an “open access festival”, which basically means that it’s half a dozen different festivals that all run independently and are loosely coordinated with one another.

There is truly an embarrassment of riches in the sheer number and variety of performances going on. There’s no way I could ever go to all of them, or even half of them, even though most of them are running every single day, all month long.

It would be tremendously helpful to get good information about which performances are likely to suit my tastes, so I’d know which ones to attend. For once, advertising actually has a genuinely useful function to serve!

And yet, the ads for performances plastered across the city are almost completely useless. They tell you virtually nothing about the content or even style of the various shows. You are bombarded with hundreds of posters for hundreds of performances as you walk through the city, and almost none of them tell you anything useful that would help you decide which shows you want to attend.

Here’s what they look like; imagine this plastered on every bus shelter and spare bit of wall in the city, as well as plenty of specially-built structures explicitly for the purpose:

What I want to ask today is: Why are these posters so uninformative?

I think there are two forces at work here which may explain this phenomenon.

The first is about comedy: Most of these shows are comedy shows, and it’s very hard to explain to someone what is funny about a joke. In fact, most jokes aren’t even funny once they have been explained. Comedy seems to be closely tied to surprise: If you know exactly what they are going to say, it isn’t funny anymore. So it is inherently difficult to explain what’s good about a comedy show without making it actually less funny for those attending.

Yet this is not a complete explanation. For there are some things you could explain about comedy shows without ruining them. You could give it a general genre: political satire, slapstick, alternative, dark comedy, blue comedy, burlesque, cringe, insult, sitcom, parody, surreal, and so on. That would at least tell you something—I tend to like satire and parody, dark and blue are hit-or-miss, surreal leaves me cold, and I can’t stand cringe. And some of the posters do this—yet a remarkable number do not. I often find myself staring at a particular poster, poring over its details, trying to get some inkling of what kind of comedy I could expect from this performer.

To fully explain this, we need something more: And that, I believe, is provided by economic theory.

Consider for a moment that comedy is varied and largely subjective: What one person finds hilarious, another finds boring, and yet another finds outrageously offensive. And whether or not you find a particular routine funny can be hard to predict—even for you.

But consider that money is quite the opposite: Everyone wants it, everyone always wants more of it, and people pretty much want it for the same reasons.

So when you offer to pay money for comedy, you are offering something fundamentally fungible and objective in exchange for something almost totally individual and subjective. You are giving what everyone wants in exchange for something that only some people want and you yourself may or may not want—and may have no way of knowing whether you want until you have it.

I believe it is in the interests of the performers to keep you in the dark in this way. They don’t want to resolve your ignorance too thoroughly. Their goal is not to find the market niche of people who would most enjoy their comedy. Their goal is to get as many people as possible to show up to their shows. Even if someone absolutely hates their show, if they bought tickets, that’s a win. And even the negative reviews or bad word-of-mouth that a disappointed attendee might spread afterward are probably still a win—comedians are one profession for which there really may be no such thing as bad publicity.

In other words, even these relatively helpful advertisements aren’t actually designed to inform you. They are, as all advertisements are, designed to get you to buy something. And the way to get you to do that is twofold:

First, get your attention. That’s vital. And it’s quite difficult in such a saturated environment. As a result, all of the posters are quite eye-catching and often bizarre. They use loud colors and striking images, and the whole city is filled with them. It actually becomes exhausting to look at them all; but this is the Nash equilibrium, because there is an arms race between different performers to look more interesting and exciting than all the rest.

Second, convince you to go. But let’s be clear about this: It is not necessary to make you absolutely certain that this show is one you’ll enjoy. It is merely necessary to tip the balance of probability, to make you reasonably confident that it is likely to be one you’ll enjoy. Given the subjectivity and unpredictability of comedy, any attendee knows that they are likely to end up with a few duds. That risk effectively gets priced in: You accept that one £10 ticket may be wasted, in exchange for buying another £10 ticket that you’d have gladly paid £20 for.
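
Here is what that pricing-in looks like as a quick expected-value sketch in Python (the probabilities and valuations are invented for illustration; only the £10 ticket price comes from the example above):

```python
# Quick expected-surplus sketch (probabilities and valuations invented purely
# for illustration): buying Fringe tickets when you know some shows will be duds.

TICKET_PRICE = 10  # GBP, the ticket price from the example above

# (probability, value of the show to you, in GBP)
outcomes = [
    (0.25, 0),   # a dud: the ticket is wasted
    (0.50, 12),  # roughly worth what you paid
    (0.25, 20),  # a gem you'd gladly have paid 20 for
]

expected_value = sum(p * v for p, v in outcomes)
print(f"Expected value per ticket:   £{expected_value:.2f}")
print(f"Expected surplus per ticket: £{expected_value - TICKET_PRICE:.2f}")

# Even with one ticket in four wasted, the occasional show you'd have paid
# double for keeps the expected surplus positive (+£1 here). A poster only
# has to nudge your estimate enough to keep that expectation above zero; it
# doesn't have to convince you the show is a sure hit.
```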

If the posters tried to give more details about what the shows were about, there would be two costs: One, it might make the posters less eye-catching and interesting in the first place. And two, it might (perhaps correctly!) convince some customers that this flavor of comedy really wasn’t for them, making them decide not to buy a ticket. The task when designing such a poster, then, is to make one that conveys enough that people are willing to take the chance on it—but not so much that you scare potential audience members away.

I think that this has implications which go beyond comedy. In fact, I think that something quite similar is going on with political speeches. But I’ll save that one for another post.