Surviving in an ad-supported world

Apr 21 JDN 2460423

Advertising is as old as money—perhaps even older. Scams have likewise been a part of human society since time immemorial.

But I think it’s fair to say that recently, since the dawn of the Internet at least, both advertising and scams have been proliferating, far beyond what they used to be.

We live in an ad-supported world.

News sites are full of ads. Search engines are full of ads. Even shopping sites are full of ads now; we literally came here planning to buy something, but that wasn’t good enough for you; you want us to also buy something else. Most of the ads are for legitimate products; but some are for scams. (And then there’s multi-level marketing, which is somewhere in between: technically not a scam.)

We’re so accustomed to getting spam emails, phone calls, and texts full of ads and scams that we just accept them as a part of our lives. But these are not things people had to live with even 50 years ago. This is a new, fresh Hell we have wrought for ourselves as a civilization.

AI promises to make this problem even worse. AI still isn’t very good at doing anything particularly useful; you can’t actually trust it to drive a truck or diagnose an X-ray. (There are people working on this sort of thing, but they haven’t yet succeeded.) But it’s already pretty good at making spam texts and phone calls. It’s already pretty good at catfishing people. AI isn’t smart enough to really help us, but it is smart enough to hurt us, especially those of us who are most vulnerable.

I think that this causes a great deal more damage to our society than is commonly understood.

It’s not just that ads are annoying (though they are), or that they undermine our attention span (though they do), or that they exploit the vulnerable (though they do).

I believe that an ad-supported world is a world where trust goes to die.

When the vast majority of your interactions with other people involve those people trying to get your money, some of them by outright fraud—but none of them really honestly—you have no choice but to ratchet down your sense of trust. It begins to feel as if financial transactions are the only form of interaction there is in the world.

But in fact most people can be trusted, and should be trusted—you are missing out on a great deal of what makes life worth living if you do not know how to trust.

The question is whom you trust. You should trust people you know, people you interact with personally and directly. Even strangers are more trustworthy than any corporation will ever be. And never are corporations more dishonest than when they are sending out ads.


The more the world fills with ads, the less room it has for trust.

Is there any way to stem this tide? Or are we simply doomed to live in the cyberpunk dystopia our forebears warned about, where everything is for sale and all available real estate is used for advertising?

Ads and scams only exist because they are profitable; so our goal should be to make them no longer profitable.

Here is one very simple piece of financial advice that will help protect you. Indeed, I believe it protects so well that, if everyone followed it consistently, we would stem the tide.

Only give money to people you have sought out yourself.

Only buy things you already knew you wanted.

Yes, of course you must buy things. We live in a capitalist society. You can’t survive without buying things. But this is how buying things should work:

You check your fridge and see you are out of milk. So you put “milk” on your grocery list, you go to the grocery store, you find some milk that looks good, and you buy it.

Or, your car is getting old and expensive to maintain, and you decide you need a new one. You run the numbers on your income and expenses, and come up with a budget for a new car. You go to the dealership, they help you pick out a car that fits your needs and your budget, and you buy it.

Your tennis shoes are getting frayed, and it’s time to replace them. You go online and search for “tennis shoes”, looking up sizes and styles until you find a pair that suits you. You order that pair.

You should be the one to decide that you need a thing, and then you should go out looking for it.

It’s okay to get help searching, or even listen to some sales pitches, as long as the whole thing was your idea from the start.

But if someone calls you, texts you, or emails you, asking for your money for something?

Don’t give them a cent.

Just don’t. Don’t do it. Even if it sounds like a good product. Even if it is a good product. If the product they are selling sounds so great that you decide you actually want to buy it, go look for it on your own. Shop around. If you can, go out of your way to buy it from a competing company.

Your attention is valuable. Don’t reward them for stealing it.

This applies to donations, too. Donation asks aren’t as awful as ads, let alone scams, but they are pretty obnoxious, and they only send those things out because people respond to them. If we all stopped responding, they’d stop sending.

Yes, you absolutely should give money to charity. But you should seek out the charities to donate to. You should use trusted sources (like GiveWell and Charity Navigator) to vet them for their reliability, transparency, and cost-effectiveness.

If you just receive junk mail asking you for donations, feel free to take out any little gifts they gave you (it’s often return address labels, for some reason), and then recycle the rest.

Don’t give to the ones who ask for it. Give to the ones who will use it the best.

Reward the charities that do good, not the charities that advertise well.

This is the rule to follow:

If someone contacts you—if they initiate the contact—refuse to give them any money. Ever.

Does this rule seem too strict? It is quite strict, in fact. It requires you to pass up many seemingly-appealing opportunities, and the more ads there are, the more opportunities you’ll need to pass up.

There may even be a few exceptions; no great harm befalls us if we buy Girl Scout cookies or donate to the ASPCA because the former knocked on our doors and the latter showed us TV ads. (Then again, you could just donate to feminist and animal rights charities without any ads or sales pitches.)

But in general, we live in a society that is absolutely inundated with people accosting us and trying to take our money, and they’re only ever going to stop trying to get our money if we stop giving it to them. They will not stop it out of the goodness of their hearts—no, not even the charities, who at least do have some goodness in their hearts. (And certainly not the scammers, who have none.)

They will only stop if it stops working.

So we need to make it stop working. We need to draw this line.

Trust the people around you, who have earned it. Do not trust anyone who seeks you out asking for money.

Telemarketing calls? Hang up. Spam emails? Delete. Junk mail? Recycle. TV ads? Mute and ignore.

And then, perhaps, future generations won’t have to live in an ad-supported world.

How is the economy doing this well?

Apr 14 JDN 2460416

We are living in a very weird time, economically. The COVID pandemic created huge disruptions throughout our economy, from retail shops closing to shortages in shipping containers. The result was a severe recession with the worst unemployment since the Great Depression.

Now, a few years later, we have fully recovered.

Here’s a graph from FRED showing our unemployment and inflation rates since 1990 [technical note: I’m using the urban CPI; there are a few other inflation measures you could use instead, but they look much the same]:

Inflation fluctuates pretty quickly, while unemployment moves much slower.
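
(If you want to reproduce a graph like this yourself, here is a minimal sketch in Python, not the code behind the FRED graph itself. It assumes the pandas_datareader and matplotlib packages; UNRATE and CPIAUCSL are FRED’s standard series codes for the unemployment rate and the urban CPI, and inflation is computed as the year-over-year change in the CPI.)

    # Minimal sketch: fetch the two FRED series and plot unemployment
    # alongside year-over-year CPI inflation.
    import matplotlib.pyplot as plt
    from pandas_datareader import data as pdr

    start = "1989-01-01"   # one extra year so year-over-year inflation exists in 1990

    unrate = pdr.DataReader("UNRATE", "fred", start)["UNRATE"]      # unemployment rate, %
    cpi = pdr.DataReader("CPIAUCSL", "fred", start)["CPIAUCSL"]     # urban CPI, index level
    inflation = cpi.pct_change(12) * 100                            # year-over-year inflation, %

    ax = unrate["1990":].plot(label="Unemployment rate (%)")
    inflation["1990":].plot(ax=ax, label="CPI inflation, year-over-year (%)")
    ax.axhline(0, color="gray", linewidth=0.5)    # anything below this line is deflation
    ax.legend()
    plt.show()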

There are a lot of things we can learn from this graph:

  1. Before COVID, we had pretty low inflation; from 1990 to 2019, inflation averaged about 2.4%, just over the Fed’s 2% target.
  2. Before COVID, we had moderate to high unemployment; it rarely went below 5%, and for several years after the 2008 crash it was over 7%—which is why we called it the Great Recession.
  3. The only times we actually had negative inflation—deflation—were during recessions, and coincided with high unemployment; so, no, we really don’t want prices to come down.
  4. During COVID, we had a massive spike in unemployment up to almost 15%, but then it came back down much more rapidly than it had in the Great Recession.
  5. After COVID, there was a surge in inflation, peaking at almost 10%.
  6. That inflation surge was short-lived; by the end of 2022 inflation was back down to 4%.
  7. Unemployment now stands at 3.8% while inflation is at 2.7%.

What I really want to emphasize right now is point 7, so let me repeat it:

Unemployment now stands at 3.8% while inflation is at 2.7%.

Yes, technically, 2.7% is above our inflation target. But honestly, I’m not sure it should be. I don’t see any particular reason to think that 2% is optimal, and based on what we’ve learned from the Great Recession, I actually think 3% or even 4% would be perfectly reasonable inflation targets. No, we don’t want to be going into double-digits (and we certainly don’t want true hyperinflation); but 4% inflation really isn’t a disaster, and we should stop treating it like it is.

2.7% inflation is actually pretty close to the 2.4% inflation we’d been averaging from 1990 to 2019. So I think it’s fair to say that inflation is back to normal.

But the really wild thing is that unemployment isn’t back to normal: It’s much better than that.

To get some more perspective on this, let’s extend our graph backward all the way to 1950:

Inflation has been much higher than it is now. In the late 1970s, it was consistently as high as it got during the post-COVID surge. But it has never been substantially lower than it is now; a little above the 2% target really seems to be what stable, normal inflation looks like in the United States.

On the other hand, unemployment is almost never this low. It was for a few years in the early 1950s and the late 1960s; but otherwise, it has always been higher—and sometimes much higher. It did not dip below 5% for the entire period from 1971 to 1994.

Intro macroeconomics courses hammer into us the Phillips Curve, which supposedly says that unemployment is inversely related to inflation, so that it’s impossible to have both low inflation and low unemployment.

But we’re looking at it, right now. It’s here, right in front of us. What wasn’t supposed to be possible has now been achieved. E pur si muove.

There was supposed to be this terrible trade-off between inflation and unemployment, leaving our government with the stark dilemma of either letting prices surge or letting millions remain out of work. I had always been on the “inflation” side: I thought that rising prices were far less of a problem than people out of work.

But we just learned that the entire premise was wrong.

You can have both. You don’t have to choose.

Right here, right now, we have both. All we need to do is keep doing whatever we’re doing.

One response might be: what if we can’t? What if this is unsustainable? (Then again, conservatives never seemed terribly concerned about sustainability before….)

It’s worth considering. One thing that doesn’t look so great now is the federal deficit. It got extremely high during COVID, and it’s still pretty high now. But as a proportion of GDP, it isn’t anywhere near as high as it was during WW2, and we certainly made it through that all right:

So, yeah, we should probably see if we can bring the budget back to balanced—probably by raising taxes. But this isn’t an urgent problem. We have time to sort it out. 15% unemployment was an urgent problem—and we fixed it.

In fact in some ways the economy is even doing better now than it looks. Unemployment for Black people has never been this low, since we’ve been keeping track of it:

Black people had basically learned to live with 8% or 9% unemployment as if it were normal; but now, for the first time ever—ever—their unemployment rate is down to only 5%.

This isn’t because people are dropping out of the labor force. Broad unemployment, which includes people marginally attached to the labor force, people employed part-time not by choice, and people who gave up looking for work, is also at historic lows, despite surging to almost 23% during COVID:

In fact, overall employment among people 25-54 years old (considered “prime age”—old enough to not be students, young enough to not be retired) is nearly the highest it has ever been, and radically higher than it was before the 1980s (because women entered the workforce):

So this is not an illusion: More Americans really are working now. And employment has become more inclusive of women and minorities.

I really don’t understand why President Biden isn’t more popular. Biden inherited the worst unemployment since the Great Depression, and turned it around into an economic situation so good that most economists thought it was impossible. A 39% approval rating does not seem consistent with that kind of staggering economic improvement.

And yes, there are a lot of other factors involved aside from the President; but for once I think he really does deserve a lot of the credit here. Programs he enacted to respond to COVID brought us back to work quicker than many thought possible. Then, the Inflation Reduction Act made historic progress at fighting climate change—and also, lo and behold, reduced inflation.

He’s not a particularly charismatic figure. He is getting pretty old for this job (or any job, really). But Biden’s economic policy has been amazing, and he deserves more credit for it.

The Butlerian Jihad is looking better all the time

Mar 24 JDN 2460395

A review of The Age of Em by Robin Hanson

In the Dune series, the Butlerian Jihad was a holy war against artificial intelligence that resulted in a millennia-long taboo against all forms of intelligent machines. It was effectively a way to tell a story about the distant future without basically everything being about robots or cyborgs.

After reading Robin Hanson’s book, I’m starting to think that maybe we should actually do it.

Thus it is written: “Thou shalt not make a machine in the likeness of a human mind.”

Hanson says he’s trying to reserve judgment and present objective predictions without evaluation, but it becomes very clear throughout that this is the future he wants, as well as—or perhaps even instead of—the world he expects.

In many ways, it feels like he has done his very best to imagine a world of true neoclassical rational agents in perfect competition, a sort of sandbox for the toys he’s always wanted to play with. Throughout he very much takes the approach of a neoclassical economist, making heroic assumptions and then following them to their logical conclusions, without ever seriously asking whether those assumptions actually make any sense.

To his credit, Hanson does not buy into the hype that AGI will be successful any day now. He predicts that, within about 100 years, we will achieve the ability to fully emulate human brains and thus create a sort of black-box AGI that behaves very much like a human. Given how the Blue Brain Project has progressed (much slower than its own hype machine told us it would—and let it be noted that I predicted this from the very beginning), I think this is a fairly plausible time estimate. He refers to a mind emulated in this way as an “em”; I have mixed feelings about the term, but I suppose we did need some word for that, and it certainly has conciseness on its side.

Hanson believes that a true understanding of artificial intelligence will only come later, and the sort of AGI that can be taken apart and reprogrammed for specific goals won’t exist for at least a century after that. Both of these sober, reasonable predictions are deeply refreshing in a field that’s been full of people saying “any day now” for the last fifty years.

But Hanson’s reasonableness just about ends there.

In The Age of Em, government is exactly as strong as Hanson needs it to be. Somehow it simultaneously ensures a low crime rate among a population that doubles every few months while also having no means of preventing that population growth. Somehow it ensures that there is no labor collusion and corporations never break the law, but without imposing any regulations that might reduce efficiency in any way.

All of this begins to make more sense when you realize that Hanson’s true goal here is to imagine a world where neoclassical economics is actually true.

He realized it didn’t work on humans, so instead of giving up the theory, he gave up the humans.

Hanson predicts that ems will casually make short-term temporary copies of themselves called “spurs”, designed to perform a particular task and then get erased. I guess maybe he would, but I for one would not so cavalierly create another person and then make their existence dedicated to doing a single job before they die. The facts that I created this person, and that they are very much like me, seem like reasons to care more about their well-being, not less! You’re asking me to enslave and murder my own child. (Honestly, the fact that Robin Hanson thinks ems will do this all the time says more about Robin Hanson than anything else.) Any remotely sane society of ems would ban the deletion of another em under any but the most extreme circumstances, and indeed treat it as tantamount to murder.

Hanson predicts that we will only copy the minds of a few hundred people. This is surely true at some point—the technology will take time to develop, and we’ll have to start somewhere. But I don’t see why we’d stop there, when we could continue to copy millions or billions of people; and his choices of who would be emulated, while not wildly implausible, are utterly terrifying.

He predicts that we’d emulate genius scientists and engineers; okay, fair enough, that seems right. I doubt that the benefits of doing so will be as high as many people imagine, because scientific progress actually depends a lot more on the combined efforts of millions of scientists than on rare sparks of brilliance by lone geniuses; but those people are definitely very smart, and having more of them around could be a good thing. I can also see people wanting to do this, and thus investing in making it happen.

He also predicts that we’d emulate billionaires. Now, as a prediction, I have to admit that this is actually fairly plausible; billionaires are precisely the sort of people who are rich enough to pay to be emulated and narcissistic enough to want to. But where Hanson really goes off the deep end here is that he sees this as a good thing. He seems to honestly believe that billionaires are so rich because they are so brilliant and productive. He thinks that a million copies of Elon Musk would produce a million hectobillionaires—when in reality it would produce a million squabbling narcissists, who at best would have to split the same $200 billion between them, and might very well end up with less because they squander it.

Hanson has a long section on trying to predict the personalities of ems. Frankly this could just have been dropped entirely; it adds almost nothing to the book, and the book is much too long. But the really striking thing to me about that section is what isn’t there. He goes through a long list of studies that found weak correlations between various personality traits like extroversion or openness and wealth—mostly comparing something like the 20th percentile to the 80th percentile—and then draws sweeping conclusions about what ems will be like, under the assumption that ems are all drawn from people in the 99.99999th percentile. (Yes, upper-middle-class people are, on average, more intelligent and more conscientious than lower-middle-class people. But do we even have any particular reason to think that the personalities of people who make $150,000 are relevant to understanding the behavior of people who make $15 billion?) But he completely glosses over the very strong correlations that specifically apply to people in that very top super-rich class: They’re almost all narcissists and/or psychopaths.

Hanson predicts a world where each em is copied many, many times—millions, billions, even trillions of times, and also in which the very richest ems are capable of buying parallel processing time that lets them accelerate their own thought processes to a million times faster than a normal human. (Is that even possible? Does consciousness work like that? Who knows!?) The world that Hanson is predicting is thus one where all the normal people get outnumbered and overpowered by psychopaths.

Basically this is the most abjectly dystopian cyberpunk hellscape imaginable. And he talks about it the whole time as if it were good.

It’s like he played the game Action Potential and thought, “This sounds great! I’d love to live there!” I mean, why wouldn’t you want to owe a life-debt on your own body and have to work 120-hour weeks for a trillion-dollar corporation just to make the payments on it?

Basically, Hanson doesn’t understand how wealth is actually acquired. He is educated as an economist, yet his understanding of capitalism basically amounts to believing in magic. He thinks that competitive markets just somehow perfectly automatically allocate wealth to whoever is most productive, and thus concludes that whoever is wealthy now must just be that productive.

I can see no other way to explain his wildly implausible predictions that the em economy will double every month or two. A huge swath of the book depends upon this assumption, but he waits until halfway through the book to even try to defend it, and then does an astonishingly bad job of doing so. (Honestly, even if you buy his own arguments—which I don’t—they seem to predict that population would grow with Moore’s Law—doubling every couple of years, not every couple of months.)

Whereas Keynes predicted based on sound economic principles that economic growth would more or less proceed apace and got his answer spot-on, Hanson predicts that for mysterious, unexplained reasons economic growth will suddenly increase by two orders of magnitude—and I’m pretty sure he’s going to be wildly wrong.
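
(To put numbers on how extreme that assumption is, here is a quick back-of-the-envelope conversion of doubling times into annual growth rates; the calculation is mine, not Hanson’s.)

    # What annual growth rate does a given doubling time imply?
    def annual_growth_pct(doubling_months: float) -> float:
        """Compound growth per year, in percent, for a given doubling time in months."""
        return (2 ** (12 / doubling_months) - 1) * 100

    print(f"{annual_growth_pct(1):,.0f}% per year")     # doubling every month:    ~409,500%
    print(f"{annual_growth_pct(2):,.0f}% per year")     # doubling every 2 months:   ~6,300%
    print(f"{annual_growth_pct(24):,.0f}% per year")    # doubling every ~2 years:      ~41%
    # Recent US real GDP growth is roughly 2-3% per year. Measured as a growth
    # rate, doubling every month or two is roughly a hundred-fold jump, while a
    # Moore's-Law pace of doubling every couple of years is closer to ten-fold.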

Hanson also predicts that ems will be on average poorer than we are, based on some sort of perfect-competition argument that doesn’t actually seem to mesh at all with his predictions of spectacularly rapid economic and technological growth. I think the best way to make sense of this is to assume that it means the trend toward insecure affluence will continue: Ems will have an objectively high standard of living in terms of what they own, what games they play, where they travel, and what they eat and drink (in simulation), but they will constantly be struggling to keep up with the rent on their homes—or even their own bodies. This is a world where (the very finest simulation of) Dom Perignon is $7 a bottle and wages are $980 an hour—but monthly rent is $284,000.

Early in the book Hanson argues that this life of poverty and scarcity will lead to more conservative values, on the grounds that people who are poorer now seem to be more conservative, and this has something to do with farmers versus foragers. Hanson’s explanation of all this is baffling; I will quote it at length, just so it’s clear I’m not misrepresenting it:

The other main (and independent) axis of value variation ranges between poor and rich societies. Poor societies place more value on conformity, security, and traditional values such as marriage, heterosexuality, religion, patriotism, hard work, and trust in authority. In contrast, rich societies place more value on individualism, self-direction, tolerance, pleasure, nature, leisure, and trust. When the values of individuals within a society vary on the same axis, we call this a left/liberal (rich) versus right/conservative (poor) axis.

Foragers tend to have values more like those of rich/liberal people today, while subsistence farmers tend to have values more like those of poor/conservative people today. As industry has made us richer, we have on average moved from conservative/farmer values to liberal/forager values. This value movement can make sense if cultural evolution used the social pressures farmers faced, such as conformity and religion, to induce humans, who evolved to find forager behaviors natural, to instead act like farmers. As we become rich, we don’t as strongly fear the threats behind these social pressures. This connection may result in part from disease; rich people are healthier, and healthier societies fear less.

The alternate theory that we have instead learned that rich forager values are more true predicts that values should have followed a random walk over time, and be mostly common across space. It also predicts the variance of value changes tracking the rate at which relevant information appears. But in fact industrial-era value changes have tracked the wealth of each society in much more steady and consistent fashion. And on this theory, why did foragers ever acquire farmer values?

[…]

In the scenario described in this book, many strange-to-forager behaviors are required, and median per-person (i.e. per-em) incomes return to near-subsistence levels. This suggests that the em era may reverse the recent forager-like trend toward more liberality; ems may have more farmer-like values.

The Age of Em, p. 26-27

There’s a lot to unpack here, but maybe it’s better to burn the whole suitcase.

First of all, it’s not entirely clear that this is really a single axis of variation, that foragers and farmers differ from each other in the same way as liberals and conservatives. There’s some truth to that at least—both foragers and liberals tend to be more generous, both farmers and conservatives tend to enforce stricter gender norms. But there are also clear ways that liberal values radically deviate from forager values: Forager societies are extremely xenophobic, and typically very hostile to innovation, inequality, or any attempts at self-aggrandizement (a phenomenon called “fierce egalitarianism”). San Francisco epitomizes rich, liberal values, but it would be utterly alien and probably regarded as evil by anyone from the Yanomamo.

Second, there is absolutely no reason to predict any kind of random walk. That’s just nonsense. Would you predict that scientific knowledge is a random walk, with each new era’s knowledge just a random deviation from the last’s? Maybe next century we’ll return to geocentrism, or phrenology will be back in vogue? On the theory that liberal values (or at least some liberal values) are objectively correct, we would expect them to advance as knowledge does—improving over time, and improving faster in places that have better institutions for research, education, and free expression. And indeed, this is precisely the pattern we have observed. (Those places are also richer, but that isn’t terribly surprising either!)

Third, while poorer regions are indeed more conservative, poorer people within a region actually tend to be more liberal. Nigeria is poorer and more conservative than Norway, and Mississippi is poorer and more conservative than Massachusetts. But higher-income households in the United States are more likely to vote Republican. I think this is particularly true of people living under insecure affluence: We see the abundance of wealth around us, and don’t understand why we can’t learn to share it better. We’re tired of fighting over scraps while the billionaires claim more and more. Millennials and Zoomers absolutely epitomize insecure affluence, and we also absolutely epitomize liberalism. So, if indeed ems live a life of insecure affluence, we should expect them to be like Zoomers: “Trans liberation now!” and “Eat the rich!” (Or should I say, “Delete the rich!”)

And really, doesn’t that make more sense? Isn’t that the trend our society has been on, for at least the last century? We’ve been moving toward more and more acceptance of women and minorities, more and more deviation from norms, more and more concern for individual rights and autonomy, more and more resistance to authority and inequality.

The funny thing is, that world sounds a lot better than the one Hanson is predicting.

A world of left-wing ems would probably run things a lot better than Hanson imagines: Instead of copying the same hundred psychopaths over and over until we fill the planet, have no room for anything else, and all struggle to make enough money just to stay alive, we could moderate our population to a more sustainable level, preserve diversity and individuality, and work toward living in greater harmony with each other and the natural world. We could take this economic and technological abundance and share it and enjoy it, instead of killing ourselves and each other to make more of it for no apparent reason.

The one good argument Hanson makes here is expressed in a single sentence: “And on this theory, why did foragers ever acquire farmer values?” That actually is a good question; why did we give up on leisure and egalitarianism when we transitioned from foraging to agriculture?

I think scarcity probably is relevant here: As food became scarcer, maybe because of climate change, people were forced into an agricultural lifestyle just to have enough to eat. Early agricultural societies were also typically authoritarian and violent. Under those conditions, people couldn’t be so generous and open-minded; they were surrounded by threats and on the verge of starvation.

I guess if Hanson is right that the em world is also one of poverty and insecurity, we might go back to those sort of values, borne of desperation. But I don’t see any reason to think we’d give up all of our liberal values. I would predict that ems will still be feminist, for instance; in fact, Hanson himself admits that since VR avatars would let us change gender presentation at will, gender would almost certainly become more fluid in a world of ems. Far from valuing heterosexuality more highly (as conservatives do, a “farmer value” according to Hanson), I suspect that ems will have no further use for that construct, because reproduction will be done by manufacturing, not sex, and it’ll be so easy to swap your body into a different one that hardly anyone will even keep the same gender their whole life. They’ll think it’s quaint that we used to identify so strongly with our own animal sexual dimorphism.

But maybe it is true that the scarcity induced by a hyper-competitive em world would make people more selfish, less generous, less trusting, more obsessed with work. Then let’s not do that! We don’t have to build that world! This isn’t a foregone conclusion!

There are many other paths yet available to us.

Indeed, perhaps the simplest would be to just ban artificial intelligence, at least until we can get a better handle on what we’re doing—and perhaps until we can institute the kind of radical economic changes necessary to wrest control of the world away from the handful of psychopaths currently trying their best to run it into the ground.

I admit, it would kind of suck to not get any of the benefits of AI, like self-driving cars, safer airplanes, faster medical research, more efficient industry, and better video games. It would especially suck if we did go full-on Butlerian Jihad and ban anything more complicated than a pocket calculator. (Our lifestyle might have to go back to what it was in—gasp! The 1950s!)

But I don’t think it would suck nearly as much as the world Robin Hanson thinks is in store for us if we continue on our current path.

So I certainly hope he’s wrong about all this.

Fortunately, I think he probably is.

Let’s call it “copytheft”

Feb 11 JDN 2460353

I have written previously about how ridiculous it is that we refer to the unauthorized copying of media such as music and video games as “piracy” as though it were somehow equivalent to capturing ships on the high seas.

In that post a few years ago I suggested calling it simply “unauthorized copying”, but that clearly isn’t catching on, perhaps because it’s simply too much of a mouthful. So today I offer a compromise:

Let’s call it “copytheft”.

That takes no longer to say than “piracy” (and only slightly longer to write), and far more clearly states what’s actually going on. No ships have been seized on the high seas; there has been no murder, arson, or slavery.

Yes, it’s debatable whether copytheft really constitutes theft—and I would generally argue that it does not—but just from hearing that word, you would probably infer that the following process took place:

  1. I took a thing.
  2. I made a copy of that thing that I wasn’t supposed to.
  3. I put the original thing back where it was, unharmed.

The paradigmatic example of this theft-copy-replace sequence would be a key, of course: You take someone’s key, copy it, then put the key back where it was, so you now can unlock their locks but they are none the wiser.

With unauthorized copying of media, you’re not exactly doing steps 1 and 3; the copier often has the media completely legitimately before they make the copy, and it may not even have a clear physical location to be put back to (it must be physically stored somewhere, but particularly if it’s streamed from the cloud it hardly matters where).

But you’re definitely doing step 2, and that was the only part that had a permanent effect; so I think that the nomenclature still seems to work well enough.

Copytheft also has a similar sound to copyleft, the use of alternative intellectual property mechanisms by authors to grant broader licensing than is ordinarily afforded by copyright, and also to copyfraud, the crime of claiming exclusive copyright to content that is in fact public domain. Hopefully that common structure will help the term get some purchase.

Of course, I can hardly bring a word into widespread use on my own. Others like you have to not only read it, but like it enough that you’re willing to actually use it—and then we need a certain critical mass of people using it in order to make it actually catch on.

So, I’d like to take a moment to offer you some justification for why it’s worth switching to this new word.

First, it is admittedly imperfect; by containing the word “theft”, it already feels like we’re conceding something to the defenders of copyright.

But by including the word “copy” in the term, we can draw attention to the most important aspect that distinguishes copytheft from, well, theft:

The original owner still has the thing.

That’s the part that they want us to forget, the part that the harsh word “piracy” encourages you to overlook. A ship that is captured by pirates is a ship that may never again sail for your own navy. A song that is “pirated”—copythefted—is one that not only the original owners, but also everyone who bought it, still have in exactly the same state they did before.

Thus it simply cannot be that copytheft takes money out of the hands of artists. At worst, it fails to give money to artists.

That could still be a bad thing: Artists need to pay bills too, and a world where nobody pays for any art is surely a world with a lot fewer artists—and the ones who remain far more miserable. But it’s clearly a different sort of thing than ordinary theft, as nothing has been lost.

Moreover, it’s not clear that in most cases copytheft even does fail to give money that would otherwise have been given. Maybe sometimes it does—a certain proportion of people who copytheft a given song, film, or video game might have been willing to pay the original price if the copythefted version had not been available. But typically I suspect that people who’d be willing to pay full price… do pay full price. Thus, the people who are copythefting the media wouldn’t have bought it at full price anyway.

They might have bought it at some lower price, in which case that is foregone payment; but it’s surely considerably less than the “losses” often reported by the film and music industries, which seem to be based on the assumption that everyone who copythefts would have otherwise paid full price. And in fact many people might have been unwilling to buy at any nonzero price, and were only willing to copytheft the media precisely because it didn’t cost them any money or a great deal of effort to do so.

And in fact if you think about it, what about people who would have been willing to pay more than the original price? Surely there were many of them as well, yet we don’t grant media corporations the right to that money. That is also money that they could have been given but weren’t—and we decided, as a society, that they didn’t deserve to have it. It’s not that it would be impossible to do so: We could give corporations the authority to price-discriminate on all of their media. (They probably couldn’t do it perfectly, but they could surely do it quite well.) But we made the policy choice to live in a world where media is sold by single-price monopolies rather than one where it is sold by price-discriminating monopolies.

The mere fact that someone might have been willing to pay you more money if the market were different does not entitle you to receive that money. It has not been stolen from you. Indeed, typically it’s more that you have not been allowed to exploit them. It’s usually the presence of competition that prevents corporations from receiving the absolute maximum profit they might potentially have received if they had full control over the market. Corporations making less profit than they otherwise would have is generally a sign of good economic policy—a sign that things are reasonably fair.

Why else is “copytheft” a good word to use?

Above all, we do not allow our terms to be defined by our opponents.

We don’t allow them to insinuate that our technically violating draconian regulations designed to maximize the profits of Disney and Viacom somehow constitutes a terrible crime against other human beings.

“Piracy is not a victimless crime”, they will say.

Well, actual piracy isn’t. But copytheft? Yeah, uh, it kinda is.

Maybe not quite as victimless as, say, marijuana or psilocybin, which no one even has any rational reason to prefer you not do. But still, you’re not really making anyone else worse off—that sounds pretty victimless.

Of course, it does give us less reason to wear tricorn hats and eyepatches.

But guess what? You can still do that anyway!

Adversarial design

Feb 4 JDN 2460346

Have you noticed how Amazon feels a lot worse lately? Years ago, it was extremely convenient: You’d just search for what you want, it would give you good search results, you could buy what you want and be done. But now you have to slog through “sponsored results” and a bunch of random crap made by no-name companies in China before you can get to what you actually want.

Temu is even worse, and has been from the start: You can’t buy anything on Temu without first being inundated with ads. It’s honestly such an awful experience, I don’t understand why anyone is willing to buy anything from Temu.

#WelcomeToCyberpunk, I guess.

Even some video games have become like this: The free-to-play or “freemium” business model seems to be taking off, where you don’t pay money for the game itself, but then have to deal with ads inside the game trying to sell you additional content, because that’s where the developers actually make their money. And now AAA firms like EA and Ubisoft are talking about going to a subscription-based model where you don’t even own your games anymore. (Fortunately there’s been a lot of backlash against that; I hope it persists.)

Why is this happening? Isn’t capitalism supposed to make life better for consumers? Isn’t competition supposed to make products and services improve over time?

Well, first of all, these markets are clearly not as competitive as they should be. Amazon has a disturbingly large market share, and while the video game market is more competitive, it’s still dominated by a few very large firms (like EA and Ubisoft).

But I think there’s a deeper problem here, one which may be specific to media content.

What I mean by “media content” here is fairly broad: I would include art, music, writing, journalism, film, and video games.

What all of these things have in common is that they are not physical products (they’re not like a car or a phone that is a single physical object), but they are also not really services either (they aren’t something you just do as an action and it’s done, like a haircut, a surgery, or a legal defense).

Another way of thinking about this is that media content can be copied with zero marginal cost.

Because it can be copied with zero marginal cost, media content can’t simply be made and sold the way that conventional products and services are. There are a few different ways it can be monetized.


The most innocuous way is commission or patronage, where someone pays someone else to create a work because they want that work. This is totally unproblematic. You want a piece of art, you pay an artist, they make it for you; great. Maybe you share copies with the world, maybe you don’t; whatever. It’s good either way.

Unfortunately, it’s hard to sustain most artists and innovators on that model alone. (In a sense I’m using a patronage model, because I have a Patreon. But I’m not making anywhere near enough to live on that way.)

The second way is intellectual property, which I have written about before, and surely will again. If you can enforce limits on who is allowed to copy a work, then you can make a work and sell it for profit without fear of being undercut by someone else who simply copies it and sells it for cheaper. A detailed discussion of that is beyond the scope of this post, but you can read those previous posts, and I can give you the TLDR version: Some degree of intellectual property is probably necessary, but in our current society, it has clearly been taken much too far. I think artists and authors deserve to be able to copyright (or maybe copyleft) their work—but probably not for 70 years after their death.

And then there is a third way, the most insidious way: advertising. If you embed advertisements for other products and services within your content, you can then sell those ad slots for profit. This is how newspapers stay afloat, mainly; subscriptions have never been the majority of their revenue. It’s how TV was supported before cable and streaming—and cable usually has ads too, and streaming is starting to.

There is something fundamentally different about advertising as a service. Whereas most products and services you encounter in a capitalist society are made for you, designed for you to use, advertising is made at you, designed to manipulate you.

I’ve heard it put well this way:

If you’re not paying, you aren’t the customer; you’re the product.

Monetizing content by advertising effectively makes your readers (or viewers, players, etc.) into the product instead of the customer.

I call this effect adversarial design.

I chose this term because it not only conveys the right sense of being an adversary: it also includes the word ‘ad’ and the same Latin root ‘advertere’ as ‘advertising’.

When a company designs a car or a phone, they want it to appeal to customers—they want you to like it. Yes, they want to take your money; but it’s a mutually beneficial exchange. They get money, you get a product; you’re both happier.

When a company designs an ad, they want it to affect customers—they want you to do what it says, whether you like it or not. And they wouldn’t be doing it if they thought you would buy it anyway—so they are basically trying to make you do something you wouldn’t otherwise have done.

In other words, when designing a product, corporations want to be your friend.

When designing an ad, they become your enemy.

You would absolutely prefer not to have ads. You don’t want your attention taken in this way. But the way that these corporations make money—disgustingly huge sums of money—is by forcing those ads in your face anyway.

Yes, to be fair, there might be some kinds of ads that aren’t too bad. Simple, informative, unobtrusive ads that inform you that something is available you might not otherwise have known about. Movie trailers are like this; people often enjoy watching movie trailers, and they want to see what movies are going to come out next. That’s fine. I have no objection to that.

But it should be clear to anyone who has, um, used the Internet in the past decade that we have gone far, far beyond that sort of advertising. Ads have become aggressive, manipulative, aggravating, and—above all—utterly ubiquitous. You can’t escape them. They’re everywhere. Even when you use ad-block software (which I highly recommend, particularly Adblock Plus—which is free), you still can’t completely escape them.

That’s another thing that should make it pretty clear that there’s something wrong with ads: People are willing to make efforts or even pay money to make ads go away.

Whenever there is a game I like that’s ad-supported but you can pay to make the ads go away, I always feel like I’m being extorted, even if what I have to pay would have been a totally reasonable price for the game. Come on, just sell me the game. Don’t give me the game for free and then make me pay to make it not unpleasant. Don’t add anti-features.

This is clearly not a problem that market competition alone will solve. Even in highly competitive markets, advertising is still ubiquitous, aggressive and manipulative. In fact, competition may even make it worse—a true monopoly wouldn’t need to advertise very much.

Consider Coke and Pepsi ads; they’re actually relatively pleasant, aren’t they? Because all they’re trying to do is remind you and make you thirsty so you’ll buy more of the product you were already buying. They aren’t really trying to get you to buy something you wouldn’t have otherwise. They know that their duopoly is solid, and only a true Black Swan event would unseat their hegemony.

And have you ever seen an ad for your gas company? I don’t think I have—probably because I didn’t have a choice in who my gas company was; there was only one that covered my area. So why bother advertising to me?

If competition won’t fix this, what will? Is there some regulation we could impose that would make advertising less obtrusive? People have tried, without much success. I think imposing an advertising tax would help, but even that might not do enough.

What I really think we need right now is to recognize the problem and invest in solving it. Right now we have megacorporations which are thoroughly (literally) invested in making advertising more obtrusive and more ubiquitous. We need other institutions—maybe government, maybe civil society more generally—that are similarly invested in counteracting it.


Otherwise, it’s only going to get worse.

Reflections at the crossroads

Jan 21 JDN 2460332

When this post goes live, I will have just passed my 36th birthday. (That means I’ve lived for about 1.1 billion seconds, so in order to be as rich as Elon Musk, I’d need to have made, on average, since birth, $200 per second—$720,000 per hour.)
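
(The back-of-the-envelope version of that arithmetic, with rounded figures of my own:)

    # Rough check of the birthday arithmetic above (all figures approximate).
    seconds_per_year = 365.25 * 24 * 3600       # about 31.6 million seconds
    seconds_lived = 36 * seconds_per_year       # about 1.14 billion seconds

    dollars_per_second = 200
    print(dollars_per_second * 3600)            # $720,000 per hour
    print(dollars_per_second * seconds_lived)   # roughly $2.3e11, a Musk-scale net worth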

I certainly feel a lot better turning 36 than I did 35. I don’t have any particular additional accomplishments to point to, but my life has already changed quite a bit, in just that one year: Most importantly, I quit my job at the University of Edinburgh, and I am currently in the process of moving out of the UK and back home to Michigan. (We moved the cat over Christmas, and the movers have already come and taken most of our things away; it’s really just us and our luggage now.)

But I still don’t know how to field the question that people have been asking me since I announced my decision to do this months ago:

“What’s next?”

I’m at a crossroads now, trying to determine which path to take. Actually maybe it’s more like a roundabout; it has a whole bunch of different paths, surely not just two or three. The road straight ahead is labeled “stay in academia”; the others at the roundabout are things like “freelance writing”, “software programming”, “consulting”, and “tabletop game publishing”. There’s one well-paved and superficially enticing road that I’m fairly sure I don’t want to take, labeled “corporate finance”.

Right now, I’m just kind of driving around in circles.

Most people don’t seem to quit their jobs without a clear plan for where they will go next. Often they wait until they have another offer in hand that they intend to take. But when I realized just how miserable that job was making me, I made the—perhaps bold, perhaps courageous, perhaps foolish—decision to get out as soon as I possibly could.

It’s still hard for me to fully understand why working at Edinburgh made me so miserable. Many features of an academic career are very appealing to me. I love teaching, I like doing research; I like the relatively flexible hours (and kinda need them, because of my migraines).

I often construct formal decision models to help me make big choices—generally it’s a linear model, where I simply rate each option by its relative quality in a particular dimension, then try different weightings of all the different dimensions. I’ve used this successfully to pick out cars, laptops, even universities. I’m not entrusting my decisions to an algorithm; I often find myself tweaking the parameters to try to get a particular result—but that in itself tells me what I really want, deep down. (Don’t do that in research—people do, and it’s bad—but if the goal is to make yourself happy, your gut feelings are important too.)
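
(For concreteness, here is a minimal sketch of what such a model looks like in code. The options, dimensions, ratings, and weights below are made up for illustration; they are not my actual numbers.)

    # A weighted linear decision model: rate each option on each dimension,
    # weight the dimensions, and score each option as the weighted sum.
    # All of the numbers below are illustrative placeholders.
    ratings = {
        "university teaching": {"enjoyment": 8, "income": 5, "flexibility": 7, "low stress": 3},
        "freelance writing":   {"enjoyment": 9, "income": 3, "flexibility": 9, "low stress": 6},
        "corporate finance":   {"enjoyment": 3, "income": 9, "flexibility": 4, "low stress": 2},
    }
    weights = {"enjoyment": 0.4, "income": 0.2, "flexibility": 0.2, "low stress": 0.2}

    def score(option_ratings: dict, weights: dict) -> float:
        return sum(weights[d] * option_ratings[d] for d in weights)

    # Try different weightings and see which option comes out on top; noticing
    # which weights you keep nudging is the "tweaking" described above.
    for option in sorted(ratings, key=lambda o: -score(ratings[o], weights)):
        print(f"{option}: {score(ratings[option], weights):.2f}")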

My decision models consistently rank university teaching quite high. It generally only gets beaten by freelance writing—which means that maybe I should give freelance writing another try after all.

And yet, my actual experience at Edinburgh was miserable.

What went wrong?

Well, first of all, I should acknowledge that when I separate out the job “university professor” into teaching and research as separate jobs in my decision model, and include all that goes into both jobs—not just the actual teaching, but the grading and administrative tasks; not just doing the research, but also trying to fund and publish it—they both drop lower on the list, and research drops down a lot.

Also, I would rate them both even lower now, having more direct experience of just how awful the exam-grading, grant-writing and journal-submitting can be.

Designing and then grading an exam was tremendously stressful: I knew that many of my students’ futures rested on how they did on exams like this (especially in the UK system, where exams are absurdly overweighted! In most of my classes, the final exam was at least 60% of the grade!). I struggled mightily to make the exam as fair as I could, all the while knowing that it would never really feel fair and I didn’t even have the time to make it the best it could be. You really can’t assess how well someone understands an entire subject in a multiple-choice exam designed to take 90 minutes. It’s impossible.

The worst part of research for me was the rejection.

I mentioned in a previous post how I am hypersensitive to rejection; applying for grants and submitting to journals clearly produced the worst feelings of rejection I’ve felt in any job. It felt like they were evaluating not only the value of my work, but my worth as a scientist. Failure felt like being told that my entire career was a waste of time.

It was even worse than the feeling of rejection in freelance writing (which is one of the few things that my model tells me is bad about freelancing as a career for me, along with relatively low and uncertain income). I think the difference is that a book publisher is saying “We don’t think we can sell it.”—’we’ and ‘sell’ being vital. They aren’t saying “this is a bad book; it shouldn’t exist; writing it was a waste of time.”; they’re just saying “It’s not a subgenre we generally work with.” or “We don’t think it’s what the market wants right now.” or even “I personally don’t care for it.”. They acknowledge their own subjective perspective and the fact that it’s ultimately dependent on forecasting the whims of an extremely fickle marketplace. They aren’t really judging my book, and they certainly aren’t judging me.

But in research publishing, it was different. Yes, it’s all in very polite language, thoroughly spiced with sophisticated jargon (though some reviewers are more tactful than others). But when your grant application gets rejected by a funding agency or your paper gets rejected by a journal, the sense really basically is “This project is not worth doing.”; “This isn’t good science.”; “It was/would be a waste of time and money.”; “This (theory or experiment you’ve spent years working on) isn’t interesting or important.” Nobody ever came out and said those things, nor did they come out and say “You’re a bad economist and you should feel bad.”; but honestly a couple of the reviews did kinda read to me like they wanted to say that. They thought that the whole idea that human beings care about each other is fundamentally stupid and naive and not worth talking about, much less running experiments on.

It isn’t so much that I believed them, that I thought my work really was bad science. I did make some mistakes along the way (but nothing vital; I’ve seen far worse errors by Nobel Laureates). I didn’t have very large samples (because every person I add to the experiment is money I have to pay, and therefore funding I have to come up with). But overall I do believe that my work is sufficiently rigorous to be worth publishing in scientific journals.

It’s more that I came to feel that my work is considered bad, that the kind of work I wanted to do would forever be an uphill battle against an implacable enemy. I already feel exhausted by that battle, and it had only barely begun. I had thought that behavioral economics was a more successful paradigm by now, that it had largely displaced the neoclassical assumptions that came before it; but I was wrong. Except specifically in journals dedicated to experimental and behavioral economics (of which prestigious journals are few—I quickly exhausted them), it really felt like a lot of the feedback I was getting amounted to, “I refuse to believe your paradigm.”.

Part of the problem, also, was that there simply aren’t that many prestigious journals, and they don’t take that many papers. The top 5 journals—which, for whatever reason, command far more respect than any other journals among economists—each accept only about 5-10% of their submissions. Surely more than that are worth publishing; and, to be fair, much of what they reject probably gets published later somewhere else. But it makes a shockingly large difference in your career how many “top 5s” you have; other publications almost don’t matter at all. So once you don’t get into any of those (which of course I didn’t), should you even bother trying to publish somewhere else?

And what else almost doesn’t matter? Your teaching. As long as you show up to class and grade your exams on time (and don’t, like, break the law or something), research universities basically don’t seem to care how good a teacher you are. That was certainly my experience at Edinburgh. (Honestly even their responses to professors sexually abusing their students are pretty unimpressive.)

Some of the other faculty cared, I could tell; there were even some attempts to build a community of colleagues to support each other in improving teaching. But the administration seemed almost actively opposed to it; they didn’t offer any funding to support the program—they wouldn’t even buy us pizza at the meetings, the sort of thing I had as an undergrad for my activist groups—and they wanted to take the time we spent in such pedagogy meetings out of our grading time (probably because if they didn’t, they’d either have to give us less grading, or some of us would be over our allotted hours and they’d owe us compensation).

And honestly, it is teaching that I consider the higher calling.

The difference between 0 people knowing something and 1 knowing it is called research; the difference between 1 person knowing it and 8 billion knowing it is called education.

Yes, of course, research is important. But if all the research suddenly stopped, our civilization would stagnate at its current level of technology, but otherwise continue unimpaired. (Frankly it might spare us the cyberpunk dystopia/AI apocalypse we seem to be hurtling rapidly toward.) Whereas if all education suddenly stopped, our civilization would slowly decline until it ultimately collapsed into the Stone Age. (Actually it might even be worse than that; even Stone Age cultures pass on knowledge to their children, just not through formal teaching. If you include all the ways parents teach their children, it may be literally true that humans cannot survive without education.)

Yet research universities seem to get all of their prestige from their research, not their teaching; and since prestige is the thing they value above all else, they devote the vast majority of their energy to valuing and supporting research rather than teaching. In many ways, the administrators seem to see teaching as an obligation, as something they have to do in order to make money that they can spend on what they really care about, which is research.

As such, they are always making classes bigger and bigger, trying to squeeze out more tuition dollars (well, in this case, pounds) from the same number of faculty contact hours. It becomes impossible to get to know all of your students, much less give them all sufficient individual attention. At Edinburgh they even had the gall to refer to their seminars as “tutorials” when they typically had 20+ students. (That is not tutoring!) And then of course there were the lectures, which often had over 200 students.

I suppose it could be worse: It could be athletics they spend all their money on, like most Big Ten universities. (The University of Michigan actually seems to strike a pretty good balance: they are certainly not hurting for athletic funding, but they also devote sizeable chunks of their budget to research, medicine, and yes, even teaching. And unlike virtually all other varsity athletic programs, University of Michigan athletics turns a profit!)

If all the varsity athletics in the world suddenly disappeared… I’m not convinced we’d be any worse off, actually. We’d lose a source of entertainment, but it could probably be easily replaced by, say, Netflix. And universities could re-focus their efforts on academics, instead of acting like a free training and selection system for the pro leagues. The University of California, Irvine certainly seemed no worse off for its lack of varsity football. (Though I admit it felt a bit strange, even to a consummate nerd like me, to have a varsity League of Legends team.)

They keep making the experience of teaching worse and worse, even as they cut faculty salaries and make our jobs more and more precarious.

That might be what really made me most miserable, knowing how expendable I was to the university. If I hadn’t quit when I did, I would have been out after another semester anyway, and going through this same process a bit later. It wasn’t even that I was denied tenure; it was never on the table in the first place. And perhaps because they knew I wouldn’t stay anyway, they didn’t invest anything in mentoring or supporting me. Ostensibly I was supposed to be assigned a faculty mentor immediately; I know the first semester was crazy because of COVID, but after two and a half years I still didn’t have one. (I had a small research budget, which they reduced in the second year; that was about all the support I got. I used it—once.)

So if I do continue on that “academia” road, I’m going to need to do a lot of things differently. I’m not going to put up with a lot of things that I did last time. I’ll demand a long-term position—if not tenure-track, at least renewable indefinitely, like a lecturer position in the US sense (in the US, the tenure-track position is called “assistant professor” and a “lecturer” is permanent but not tenured; in the UK, “lecturers” are the tenure-track ones—except at Oxford, and as of 2021, Cambridge—just to confuse you). Above all, I’ll only be applying to schools that actually have some track record of valuing teaching and supporting their faculty.

And if I can’t find any such positions? Then I just won’t apply at all. I’m not going in with the “I’ll take what I can get” mentality I had last time. Our household finances are stable enough that I can afford to wait awhile.

But maybe I won’t even do that. Maybe I’ll take a different path entirely.

For now, I just don’t know.

The problem with “human capital”

Dec 3 JDN 2460282

By now, “human capital” is standard economic jargon. It has even begun to filter down into society at large. Business executives talk frequently about “investing in their employees”. Politicians describe their education policies as “investing in our children”.

The good news: This gives businesses a reason to train their employees, and governments a reason to support education.

The bad news: This is clearly the wrong reason, and it is inherently dehumanizing.

The notion of human capital means treating human beings as if they were a special case of machinery. It says that a business may own and value many forms of productive capital: Land, factories, vehicles, robots, patents, employees.

But wait: Employees?


Businesses don’t own their employees. They didn’t buy them. They can’t sell them. They couldn’t make more of them in another factory. They can’t recycle them when they are no longer profitable to maintain.

And the problem is precisely that they would if they could.

Indeed, they used to. Slavery pre-dates capitalism by millennia, but the two quite successfully coexisted for hundreds of years. From the dawn of civilization up until all too recently, people literally were capital assets—and we now remember it as one of the greatest horrors human beings have ever inflicted upon one another.

Nor is slavery truly defeated; it has merely been weakened and banished to the shadows. The percentage of the world’s population currently enslaved is as low as it has ever been, but there are still millions of people enslaved. In Mauritania, slavery wasn’t even illegal until 1981, and those laws weren’t strictly enforced until 2007. (I had already graduated from high school by then!) One of the most shocking things about modern slavery is how cheaply human beings are willing to sell other human beings; I have bought sandwiches that cost more than some people have paid for other people.

The notion of “human capital” basically says that slavery is the correct attitude to have toward people. It says that we should value human beings for their usefulness, their productivity, their profitability.

Business executives are quite happy to see the world in that way. It makes the way they have spent their lives seem worthwhile—perhaps even best—while allowing them to turn a blind eye to the suffering they have neglected or even caused along the way.

I’m not saying that most economists believe in slavery; on the contrary, economists led the charge of abolitionism, and the reason we wear the phrase “the dismal science” like a badge is that the accusation was first leveled at us for our skepticism toward slavery.

Rather, I’m saying that jargon is not ethically neutral. The names we use for things have power; they affect how people view the world.

This is why I endeavor to always speak of net wealth rather than net worth—because a billionaire is not worth more than other people. I’m not even sure you should speak of the net worth of Tesla Incorporated; perhaps it would be better to simply speak of its net asset value or market capitalization. But at least Tesla is something you can buy and sell (piece by piece). Elon Musk is not.

Likewise, I think we need a new term for the knowledge, skills, training, and expertise that human beings bring to their work. It is clearly extremely important; in fact in some sense it’s the most important economic asset, as it’s the only one that can substitute for literally all the others—and the one that others can least substitute for.

Human ingenuity can’t substitute for air, you say? Tell that to Buzz Aldrin—or the people who were once babies that breathed liquid for their first months of life. Yes, it’s true, you need something for human ingenuity to work with; but it turns out that with enough ingenuity, you may not need much, or even anything in particular. One day we may manufacture the air, water and food we need to live from pure energy—or we may embody our minds in machines that no longer need those things.

Indeed, it is the expansion of human know-how and technology that has been responsible for the vast majority of economic growth. We may work a little harder than many of our ancestors (depending on which ancestors you have in mind), but we accomplish with that work far more than they ever could have, because we know so many things they did not.

All that capital we have now is the work of that ingenuity: Machines, factories, vehicles—even land, if you consider all the ways that we have intentionally reshaped the landscape.

Perhaps, then, what we really need to do is invert the expression:

Humans are not machines. Machines are embodied ingenuity.

We should not think of human beings as capital. We should think of capital as the creation of human beings.

Marx described capital as “embodied labor”, but that description is less accurate: What makes a robot a robot is much less about the hours spent building it than about the centuries of scientific advancement needed to understand how to make it in the first place. Indeed, if that robot is made by another robot, no human need ever have done any labor on it at all. And its value comes not from the work put into it, but from the work that comes out of it.

Like so much of neoliberal ideology, the notion of human capital seems to treat profit and economic growth as inherent ends in themselves. Human beings only become valued insofar as we advance the will of the almighty dollar. We forget that the whole reason we should care about economic growth in the first place is that it benefits people. Money is the means, not the end; people are the end, not the means.

We should not think in terms of “investing in children”, as if they were an asset that was meant to yield a return. We should think of enriching our children—of building a better world for them to live in.

We should not speak of “investing in employees”, as though they were just another asset. We should instead respect employees and seek to treat them with fairness and justice.

That would still give us plenty of reason to support education and training. But it would also give us a much better outlook on the world and our place in it.

You are worth more than your money or your job.

The economy exists for people, not the reverse.

Don’t ever forget that.

Productivity can cope with laziness, but not greed

Oct 8 JDN 2460226

At least since Star Trek, it has been a popular vision of utopia: post-scarcity, an economy where goods are so abundant that there is no need for money or any kind of incentive to work, and people can just do what they want and have whatever they want.

It certainly does sound nice. But is it actually feasible? I’ve written about this before.

I’ve been reading some more books set in post-scarcity utopias, including some by Ursula K. Le Guin (who is a legend) and Cory Doctorow (who is merely pretty good). And it struck me that while there is one major problem of post-scarcity that they seem to have good solutions for, there is another one that they really don’t. (To their credit, neither author totally ignores it; they just don’t seem to see it as an insurmountable obstacle.)

The first major problem is laziness.

A lot of people assume that the reason we couldn’t achieve a post-scarcity utopia is that once your standard of living is no longer tied to your work, people would just stop working. I think this assumption rests on both an overly cynical view of human nature and an overly pessimistic view of technological progress.

Let’s do a thought experiment. If you didn’t get paid, and just had the choice to work or not, for whatever hours you wished, motivated only by the esteem of your peers, your contribution to society, and the joy of a job well done, how much would you work?

I contend it’s not zero. At least for most people, work does provide some intrinsic satisfaction. It’s also probably not as much as you are currently working; otherwise you wouldn’t insist on getting paid. Those are our lower and upper bounds.

Is it 80% of your current work? Perhaps not. What about 50%? Still too high? 20% seems plausible, but maybe you think that’s still too high. Surely it’s at least 10%. Surely you would be willing to work at least a few hours per week at a job you’re good at that you find personally fulfilling. My guess is that it would actually be more than that, because once people were free of the stress and pressure of working for a living, they would be more likely to find careers that truly brought them deep satisfaction and joy.

But okay, to be conservative, let’s estimate that people are only willing to work 10% as much under a system where labor is fully optional and there is no such thing as a wage. What kind of standard of living could we achieve?

Well, at the current level of technology and capital in the United States, per-capita GDP at purchasing power parity is about $80,000. 10% of that is $8,000. This may not sound like a lot, but it’s roughly the per-capita income people currently live on in Venezuela; India is slightly better off, Ghana slightly worse. This would feel poor to most Americans today, but it’s objectively a better standard of living than most humans have had throughout history, and not much worse than the world average today.

If per-capita GDP growth continues at its current rate of about 1.5% per year for another century, that $80,000 would become roughly $320,000, 10% of which is $32,000—that would put us at the standard of living of present-day Bulgaria, or what the United States was like in the distant past of [checks notes] 1980. That wouldn’t even feel poor. In fact, if literally everyone had this standard of living, nearly as many Americans today would end up richer as would end up poorer, since the current median personal income is only a bit higher than that.
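If you want to check that compounding arithmetic yourself, here is a minimal sketch. The $80,000 starting point and the roughly 1.5% growth rate are the figures quoted above; everything else is just arithmetic, and the $320,000 in the text is a round quadrupling of the same calculation.

```python
# Back-of-the-envelope compounding check (figures as quoted above; nothing official).
import math

def project(gdp_per_capita, annual_growth, years):
    """Compound per-capita GDP forward at a constant annual growth rate."""
    return gdp_per_capita * (1 + annual_growth) ** years

current = 80_000   # rough US per-capita GDP (PPP), as stated above
rate = 0.015       # roughly 1.5% per year

future = project(current, rate, 100)
print(f"In 100 years: ${future:,.0f}")          # ~$355,000; the text rounds to a 4x, $320,000
print(f"10% of that:  ${0.10 * future:,.0f}")   # ~$35,000, the same ballpark as $32,000

# At 1.5% per year, quadrupling takes about ln(4)/ln(1.015) years:
print(f"Years to quadruple: {math.log(4) / math.log(1 + rate):.0f}")  # ~93, i.e. roughly a century
```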

Thus, the utopian authors are right about this one: Laziness is a solvable problem. We may not quite have it solved yet, but it’s on the ropes; a few more major breakthroughs in productivity-enhancing technology and we’ll basically be there.

In fact, on a small scale, this sort of utopian communist anarchy already works, and has for centuries. There are little places, all around the world, where people gather together and live and work in a sustainable, basically self-sufficient way without being motivated by wages or salaries, indeed often without owning any private property at all.

We call these places monasteries.

Granted, life in a monastery clearly isn’t for everyone: I certainly wouldn’t want to live a life of celibacy and constant religious observance. But the long-standing traditions of monastic life in several very different world religions do prove that it’s possible for human beings to live and even flourish in the absence of a profit motive.

Yet the fact that monastic life is so strict turns out to be no coincidence: In a sense, it had to be for the whole scheme to work. I’ll get back to that in a moment.

The second major problem with a post-scarcity utopia is greed.

This is the one that I think is the real barrier. It may not be totally insurmountable, but thus far I have yet to hear any good proposals that would seriously tackle it.

The issue with laziness is that we don’t really want to work as much as we do. But since we do actually want to work a little bit, the question is simply how to make as much as we currently do while working only as much as we want to. Hence, to deal with laziness, all we need to do is be more efficient. That’s something we are shockingly good at; the overall productivity of our labor is now something like 100 times what it was at the dawn of the Industrial Revolution, and still growing all the time.

Greed is different. The issue with greed is that, no matter how much we have, we always want more.

Some people are clearly greedier than others. In fact, I’m even willing to bet that most people’s greed could be kept in check by a society that provided for everyone’s basic needs for free. Yeah, maybe sometimes you’d fantasize about living in a gigantic mansion or going into outer space; but most of the time, most of us could actually be pretty happy as long as we had a roof over our heads and food on our tables. I know that in my own case, my grandest ambitions largely involve fighting global poverty—so if that became a solved problem, my life’s ambition would be basically fulfilled, and I wouldn’t mind so much retiring to a life of simple comfort.

But is everyone like that? This is what anarchists don’t seem to understand. In order for anarchy to work, you need everyone to fit into that society. It isn’t enough for most of us, or even nearly all of us, to fit in; that just won’t cut it.

Ammon Hennacy famously declared: “An anarchist is someone who doesn’t need a cop to make him behave.” But this is wrong. An anarchist is someone who thinks that no one needs a cop to make him behave. And while I am the former, I am not the latter.

Perhaps the problem is that anarchists don’t realize that not everyone is as good as they are. They implicitly apply their own mentality to everyone else, and assume that the only reason anyone ever cheats, steals, or kills is because their circumstances are desperate.

Don’t get me wrong: A lot of crime—perhaps even most crime—is committed by people who are desperate. Improving overall economic circumstances does in fact greatly reduce crime. But there is also a substantial proportion of crime—especially the most serious crimes—which is committed by people who aren’t particularly desperate; they are simply psychopaths. They aren’t victims of circumstance. They’re just evil. And society needs a way to deal with them.

If you set up a society so that anyone can just take whatever they want, there will be some people who take much more than their share. If you have no system of enforcement whatsoever, there’s nothing to stop a psychopath from just taking everything he can get his hands on. And then it really doesn’t matter how productive or efficient you are; whatever you make will simply get taken by whoever is greediest—or whoever is strongest.

In order to avoid that, you need to either set up a system that stops people from taking more than their share, or you need to find a way to exclude people like that from your society entirely.

This brings us back to monasteries. Why are they so strict? Why are the only places where utopian anarchism seems to flourish also places where people have to wear a uniform, swear vows, carry out complex rituals, and continually pledge their fealty to an authority? (Note, by the way, that I’ve also just described life in the military, which also has a lot in common with life in a monastery—and for much the same reasons.)

It’s a selection mechanism. Probably no one consciously thinks of it this way—indeed, it seems to be important to how monasteries work that people are not consciously weighing the costs and benefits of all these rituals. This is probably something that memetically evolved over centuries, rather than anything that was consciously designed. But functionally, that’s what it does: You only get to be part of a monastic community if you are willing to pay the enormous cost of following all these strict rules.

That makes it a form of costly signaling. Psychopaths are, in general, more prone to impulsiveness and short-term thinking. They are therefore less willing than others to bear the immediate cost of donning a uniform and following a ritual in order to get the long-term gains of living in a utopian community. This excludes psychopaths from ever entering the community, and thus protects against their predation.
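To make that screening logic concrete, here is a deliberately toy sketch. The cost, benefit, and discount rates are illustrative numbers I made up, not estimates of anything real; the only point is that an agent who discounts the future steeply will refuse an upfront cost that a patient agent gladly pays.

```python
# Toy model of costly signaling as a screening device (all numbers are illustrative).
# Joining the community costs C up front (vows, rituals, celibacy) and then pays a
# modest benefit B every year. Whether joining is worth it depends on how steeply
# you discount the future.

def net_value_of_joining(upfront_cost, yearly_benefit, discount_rate, years=30):
    pv_benefits = sum(yearly_benefit / (1 + discount_rate) ** t for t in range(1, years + 1))
    return pv_benefits - upfront_cost

C, B = 50.0, 10.0  # hypothetical cost and benefit, in arbitrary units

for label, r in [("patient agent", 0.05), ("impulsive agent", 0.40)]:
    value = net_value_of_joining(C, B, r)
    print(f"{label}: net present value {value:+.1f} -> {'joins' if value > 0 else 'stays out'}")
```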

Even celibacy may be a feature rather than a bug: Psychopaths are also prone to promiscuity. (And indeed, utopian communes that practice free love seem to have a much worse track record of being hijacked by psychopaths than monasteries that require celibacy!)

Of course, lots of people who aren’t psychopaths aren’t willing to pay those costs either—like I said, I’m not. So the selection mechanism is in a sense overly strict: It excludes people who would support the community just fine, but aren’t willing to pay the cost. But in the long run, this turns out to be less harmful than being too permissive and letting your community get hijacked and destroyed by psychopaths.

Yet if our goal is to make a whole society that achieves post-scarcity utopia, we can’t afford to be so strict. We already know that most people aren’t willing to become monks or nuns.

That means that we need a selection mechanism which is more reliable—more precisely, one with higher specificity.

I mentioned this in a previous post in the context of testing for viruses, but it bears repeating. Sensitivity and specificity are two complementary measures of a test’s accuracy. The sensitivity of a test is how likely it is to show positive if the truth is positive. The specificity of a test is how likely it is to show negative if the truth is negative.
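In code, with made-up counts purely to pin the two definitions down:

```python
# Sensitivity and specificity from a 2x2 confusion matrix (hypothetical counts).
# Sensitivity = P(test positive | truly positive); specificity = P(test negative | truly negative).

true_positive  = 90   # truly positive, test says positive
false_negative = 10   # truly positive, test says negative
true_negative  = 70   # truly negative, test says negative
false_positive = 30   # truly negative, test says positive

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)

print(f"Sensitivity: {sensitivity:.0%}")  # 90%: catches most true positives
print(f"Specificity: {specificity:.0%}")  # 70%: still flags many true negatives
```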

As a test of psychopathy, monastic strictness has very high sensitivity: If you are a psychopath, there’s a very high chance it will weed you out. But it has quite low specificity: Even if you’re not a psychopath, there’s still a very high chance you won’t want to become a monk.

For a utopian society to work, we need something that’s more specific, something that won’t exclude a lot of people who don’t deserve to be excluded. But it still needs to have much the same sensitivity, because letting psychopaths into your utopia is a very easy way to let that utopia destroy itself. We do not yet have such a test, nor any clear idea how we might create one.

And that, my friends, is why we can’t have nice things. At least, not yet.

AI and the “generalization faculty”

Oct 1 JDN 2460219

The phrase “artificial intelligence” (AI) has now become so diluted by overuse that we needed to invent a new term for its original meaning. That term is now “artificial general intelligence” (AGI). In the 1950s, AI meant the hypothetical possibility of creating artificial minds—machines that could genuinely think and even feel like people. Now it means… pathing algorithms in video games and chatbots? The goalposts seem to have moved a bit.

It seems that AGI has always been 20 years away. It was 20 years away 50 years ago, and it will probably be 20 years away 50 years from now. Someday it will really be 20 years away, and then, 20 years after that, it will actually happen—but I doubt I’ll live to see it. (XKCD also offers some insight here: “It has not been conclusively proven impossible.”)

We make many genuine advances in computer technology and software, which have profound effects—both good and bad—on our lives, but the dream of making a person out of silicon always seems to drift ever further into the distance, like a mirage on the desert sand.

Why is this? Why do so many people—even, perhaps especially, experts in the field—keep thinking that we are on the verge of this seminal, earth-shattering breakthrough, and ending up wrong—over, and over, and over again? How do such obviously smart people keep making the same mistake?

I think it may be because, all along, we have been laboring under the tacit assumption of a generalization faculty.

What do I mean by that? By “generalization faculty”, I mean some hypothetical mental capacity that allows you to generalize your knowledge and skills across different domains, so that once you get good at one thing, it also makes you good at other things.

This certainly seems to be how humans think, at least some of the time: Someone who is very good at chess is likely also pretty good at go, and someone who can drive a motorcycle can probably also drive a car. An artist who is good at portraits is probably not bad at landscapes. Human beings are, in fact, able to generalize, at least sometimes.

But I think the mistake lies in imagining that there is just one thing that makes us good at generalizing: Just one piece of hardware or software that allows you to carry over skills from any domain to any other. This is the “generalization faculty”—the imagined faculty that I think we do not have, indeed I think does not exist.

Computers clearly do not have the capacity to generalize. A program that can beat grandmasters at chess may be useless at go, and self-driving software that works on one type of car may fail on another, let alone a motorcycle. An art program that is good at portraits of women can fail when trying to do portraits of men, and produce horrific Daliesque madness when asked to make a landscape.

But if they did somehow have our generalization capacity, then, once they could compete with us at some things—which they surely can, already—they would be able to compete with us at just about everything. So if it were really just one thing that would let them generalize, let them leap from AI to AGI, then suddenly everything would change, almost overnight.

And so this is how the AI hype cycle goes, time and time again:

  1. A computer program is made that does something impressive, something that other computer programs could not do, perhaps even something that human beings are not very good at doing.
  2. If that same prowess could be generalized to other domains, the result would plainly be something on par with human intelligence.
  3. Therefore, the only thing this computer program needs in order to be sapient is a generalization faculty.
  4. Therefore, there is just one more step to AGI! We are nearly there! It will happen any day now!

And then, of course, despite heroic efforts, we are unable to generalize that program’s capabilities except in some very narrow way—even decades after having good chess programs, getting programs to be good at go was a major achievement. We are unable to find the generalization faculty yet again. And the software becomes yet another “AI tool” that we will use to search websites or make video games.

For there never was a generalization faculty to be found. It always was a mirage in the desert sand.

Humans are in fact spectacularly good at generalizing, compared to, well, literally everything else in the known universe. Computers are terrible at it. Animals aren’t very good at it. Just about everything else is totally incapable of it. So yes, we are the best at it.

Yet we, in fact, are not particularly good at it in any objective sense.

In experiments, people often fail to generalize their reasoning even in very basic ways. There’s a famous one where we try to get people to make an analogy between a military tactic and a radiation treatment, and while very smart, creative people often get it quickly, most people are completely unable to make the connection unless you give them a lot of specific hints. People often struggle to find creative solutions to problems even when those solutions seem utterly obvious once you know them.

I don’t think this is because people are stupid or irrational. (To paraphrase Sydney Harris: Compared to what?) I think it is because generalization is hard.

People tend to be much better at generalizing within familiar domains where they have a lot of experience or expertise; this shows that there isn’t just one generalization faculty, but many. We may have a plethora of overlapping generalization faculties that apply across different domains, and can learn to improve some over others.

But it isn’t just a matter of gaining more expertise. Highly advanced expertise is in fact usually more specialized—harder to generalize. A good amateur chess player is probably a good amateur go player, but a grandmaster chess player is rarely a grandmaster go player. Someone who does well in high school biology probably also does well in high school physics, but most biologists are not very good physicists. (And lest you say it’s simply because go and physics are harder: The converse is equally true.)

Humans do seem to have a suite of cognitive tools—some innate hardware, some learned software—that allows us to generalize our skills across domains. But even after hundreds of millions of years of evolving that capacity under the highest possible stakes, we still basically suck at it.

To be clear, I do not think it will take hundreds of millions of years to make AGI—or even millions, or even thousands. Technology moves much, much faster than evolution. But I would not be surprised if it took centuries, and I am confident it will at least take decades.

But we don’t need AGI for AI to have powerful effects on our lives. Indeed, even now, AI is already affecting our lives—in mostly bad ways, frankly, as we seem to be hurtling gleefully toward the very same corporatist cyberpunk dystopia we were warned about in the 1980s.

A lot of technologies have done great things for humanity—sanitation and vaccines, for instance—and even automation can be a very good thing, as increased productivity is how we attained our First World standard of living. But AI in particular seems best at automating away the kinds of jobs human beings actually find most fulfilling, and worsening our already staggering inequality. As a civilization, we really need to ask ourselves why we got automated writing and art before we got automated sewage cleaning or corporate management. (We should also ask ourselves why automated stock trading resulted in even more money for stock traders, instead of putting them out of their worthless parasitic jobs.) There are technological reasons for this, yes; but there are also cultural and institutional ones. Automated teaching isn’t far away, and education will be all the worse for it.

To change our lives, AI doesn’t have to be good at everything. It just needs to be good at whatever we were doing to make a living. AGI may be far away, but the impact of AI is already here.

Indeed, I think this quixotic quest for AGI, and all the concern about how to control it and what effects it will have upon our society, may actually be distracting from the real harms that “ordinary” “boring” AI is already having upon our society. I think a Terminator scenario, where the machines rapidly surpass our level of intelligence and rise up to annihilate us, is quite unlikely. But a scenario where AI puts millions of people out of work with insufficient safety net, triggering economic depression and civil unrest? That could be right around the corner.

Frankly, all it may take is getting automated trucks to work, which could be just a few years. There are nearly 4 million truck drivers in the United States—a full percentage point of employment unto itself. And the Governor of California just vetoed a bill that would require all automated trucks to have human drivers. From an economic efficiency standpoint, his veto makes perfect sense: If the trucks don’t need drivers, why require them? But from an ethical and societal standpoint… what do we do with all the truck drivers!?

The inequality of factor mobility

Sep 24 JDN 2460212

I’ve written before about how free trade has brought great benefits, but also great costs. It occurred to me this week that there is a fairly simple reason why free trade has never been as good for the world as the models would suggest: Some factors of production are harder to move than others.

To some extent this is due to policy, especially immigration policy. But it isn’t just that. There are certain inherent limitations that render some kinds of inputs more mobile than others.

Broadly speaking, there are five kinds of inputs to production: Land, labor, capital, goods, and—oft forgotten—ideas.

You can of course parse them differently: Some would subdivide different types of labor or capital, and some things are hard to categorize this way. The same product, such as an oven or a car, can be a good or capital depending on how it’s used. (Or, consider livestock: is that labor, or capital? Or perhaps it’s a good? Oddly, it’s often discussed as land, which just seems absurd.) Maybe ideas can be considered a form of capital. There is a whole literature on human capital, which I increasingly find distasteful, because it seems to imply that economists couldn’t figure out how to value human beings except by treating them as a machine or a financial asset.

But this five-way categorization is particularly useful for what I want to talk about today, because these five kinds of inputs move at very different rates.

Ideas move instantly. It takes literally milliseconds to transmit an idea anywhere in the world. This wasn’t always true; in ancient times ideas didn’t move much faster than people, and it wasn’t until the invention of the telegraph that their transit really became instantaneous. But it is certainly true now; once this post is published, it can be read in a hundred different countries in seconds.

Goods move in hours. Air shipping can take a product just about anywhere in less than a day. Sea shipping is a bit slower, but not radically so. It’s never been easier to move goods all around the world, and this has been the great success of free trade.

Capital moves in weeks. Here it might be useful to subdivide different types of capital: It’s surely faster to move an oven or even a car (the more good-ish sort of capital) than it is to move an entire factory (capital par excellence). But all in all, we can move stuff pretty fast these days. If you want to move your factory to China or Indonesia, you can probably get it done in a matter of weeks or at most months.

Labor moves in months. This one is a bit ironic, since it is surely easier to carry a single human person—or even a hundred human people—than all the equipment necessary to run an entire factory. But moving labor isn’t just a matter of physically carrying people from one place to another. It’s not like tourism, where you just pack and go. Moving labor requires uprooting people from where they used to live and letting them settle in a new place. It takes a surprisingly long time to establish yourself in a new environment—frankly even after two years in Edinburgh I’m not sure I quite managed it. And all the additional restrictions we’ve added involving border crossings and immigration laws and visas only make it that much slower.

Land moves never. This one seems perfectly obvious, but is also often neglected. You can’t pick up a mountain, a lake, a forest, or even a corn field and carry it across the border. (Yes, eventually plate tectonics will move our land around—but that’ll be millions of years.) Basically, land stays put—and so do all the natural environments and ecosystems on that land. Land isn’t as important for production as it once was; before industrialization, we were dependent on the land for almost everything. But we absolutely still are dependent on the land! If all the topsoil in the world suddenly disappeared, the economy wouldn’t simply collapse: the human race would face extinction. Moreover, a lot of fixed infrastructure, while technically capital, is no more mobile than land. We couldn’t much more easily move the Interstate Highway System to China than we could move Denali.

So far I have said nothing particularly novel. Yeah, clearly it’s much easier to move a mathematical theorem (if such a thing can even be said to “move”) than it is to move a factory, and much easier to move a factory than to move a forest. So what?

But now let’s consider the impact this has on free trade.

Ideas can move instantly, so free trade in ideas would allow all the world to instantaneously share all ideas. This isn’t quite what happens—but in the Internet age, we’re remarkably close to it. If anything, the world’s governments seem to be doing their best to stop this from happening: One of our most strictly enforced trade agreements, the TRIPS Agreement, is about stopping ideas from spreading too easily. And as far as I can tell, region-coding on media goes against everything free trade stands for, yet here we are. (Why, it’s almost as if these policies are more about corporate profits than they ever were about freedom!)

Goods and capital can move quickly. This is where we have really felt the biggest effects of free trade: Everything in the US says “made in China” because the capital is moved to China and then the goods are moved back to the US.

But it would honestly have made more sense to move all those workers instead. For all their obvious flaws, US institutions and US infrastructure are clearly superior to those in China. (Indeed, consider this: We may be so aware of the flaws because the US is especially transparent.) So the most efficient way to produce all those goods would be to leave the factories in the US and move the workers from China instead. If free trade were to achieve its greatest promises, this is the sort of thing we would be doing.


Of course that is not what we did. There are various reasons for this: A lot of the people in China would rather not have to leave. The Chinese government would not want them to leave. A lot of people in the US would not want them to come. The US government might not want them to come.

Most of these reasons are ultimately political: People don’t want to live around people who are from a different nation and culture. They don’t consider those people to be deserving of the same rights and status as those of their own country.

It may sound harsh to say it that way, but it’s clearly the truth. If the average American person valued a random Chinese person exactly the same as they valued a random other American person, our immigration policy would look radically different. US immigration is relatively permissive by world standards, and that is a great part of American success. Yet even here there is a very stark divide between the citizen and the immigrant.

There are morally and economically legitimate reasons to regulate immigration. There may even be morally and economically legitimate reasons to value those in your own nation above those in other nations (though I suspect they would not justify the degree that most people do). But the fact remains that in terms of pure efficiency, the best thing to do would obviously be to move all the people to the place where productivity is highest and do everything there.

But wouldn’t moving people there reduce the productivity? Yes. Somewhat. If you actually tried to concentrate the entire world’s population into the US, productivity in the US would surely go down. So, okay, fine; stop moving people to a more productive place when it has ceased to be more productive. What this should do is average out all the world’s labor productivity to the same level—but a much higher level than the current world average, and frankly probably quite close to its current maximum.

Once you consider that moving people and things does have real costs, maybe fully equalizing productivity wouldn’t make sense. But it would be close. The differences in productivity across countries would be small.

They are not small.

Labor productivity worldwide varies tremendously. I don’t count Ireland, because that’s Leprechaun Economics (this is really US GDP with accounting tricks, not Irish GDP). So the prize for highest productivity goes to Norway, at $100 per worker hour (#ScandinaviaIsBetter). The US is doing the best among large countries, at an impressive $73 per hour. And at the very bottom of the list, we have places like Bangladesh at $4.79 per hour and Cambodia at $3.43 per hour. So, roughly speaking, there is about a 20-to-1 ratio between the most productive and least productive countries.
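Taking the per-hour figures just quoted at face value, the ratios work out like this (simple division, nothing more):

```python
# Ratios between the labor-productivity figures quoted above ($ per worker-hour).
productivity = {
    "Norway": 100.00,
    "United States": 73.00,
    "Bangladesh": 4.79,
    "Cambodia": 3.43,
}

print(f"Norway / Bangladesh: {productivity['Norway'] / productivity['Bangladesh']:.0f} to 1")       # ~21 to 1
print(f"US / Cambodia:       {productivity['United States'] / productivity['Cambodia']:.0f} to 1")  # ~21 to 1
print(f"Norway / Cambodia:   {productivity['Norway'] / productivity['Cambodia']:.0f} to 1")         # ~29 to 1
```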

I could believe that it’s not worth it to move US production at $73 per hour to Norway to get it up to $100 per hour. (For one thing, where would we fit it all?) But I find it far more dubious that it wouldn’t make sense to move most of Cambodia’s labor to the US. (Even all 16 million people is less than what the US added between 2010 and 2020.) Even given the fact that these Cambodian workers are less healthy and less educated than American workers, they would almost certainly be more productive on the other side of the Pacific, quite likely ten times as productive as they are now. Yet we haven’t moved them, and have no plans to.

That leaves the question of whether we will move our capital to them. We have been doing so in China, and it worked (to a point). Before that, we did it in Korea and Japan, and it worked. Cambodia will probably come along sooner or later. For now, that seems to be the best we can do.

But I still can’t shake the thought that the world is leaving trillions of dollars on the table by refusing to move people. The inequality of factor mobility seems to be a big part of the world’s inequality, period.