A knockdown proof of social preferences

Apr 27 JDN 2460793

In economics jargon, “social preferences” basically just means that people care about what happens to people other than themselves.

If you are not an economist, it should be utterly obvious that social preferences exist:

People generally care the most about their friends and family, less but still a lot about their neighbors and acquaintances, less but still moderately about other groups they belong to such as those delineated by race, gender, religion, and nationality (or for that matter alma mater), and less still but not zero about any randomly-selected human being. Most of us even care about the welfare of other animals, though we can be curiously selective about this: Abuse that would horrify most people if done to cats or dogs passes more or less ignored when it is committed against cows, pigs, and chickens.

For some people, there are also groups toward which there seem to be negative social preferences, sometimes called “spiteful preferences”, but that doesn’t really seem to capture it: I think we need a stronger word, like hatred, for whatever emotion human beings feel when they are willing and eager to participate in genocide. Yet even that is still a social preference: If you want someone to suffer or die, you do care about what happens to them.

But if you are an economist, you’ll know that the very idea of social preferences remains controversial, even after it has been clearly and explicitly demonstrated by numerous randomized controlled experiments. (I will never forget the professor who put “altruism” in scare quotes in an email reply he sent me.)

Indeed, I have realized that the experimental evidence is so clear, so obvious, that it surprises me that I haven’t seen anyone present the really overwhelming knockdown evidence that ought to convince any reasonable skeptic. So that is what I have decided to do today.

Consider the following four economics experiments:

Dictator 1: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Whatever allocation Participant 1 chooses, Participant 2 must accept. Both participants get their allocated amounts.
Dictator 2: Participant 1 chooses an allocation of $20, choosing how much they get. Participant 1 gets their allocated amount. The rest of the money is burned.
Ultimatum 1: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Participant 2 may choose to accept or reject this allocation; if they accept, both participants get their allocated amounts. If they reject, both participants get nothing.
Ultimatum 2: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Participant 2 may choose to accept or reject this allocation; if they accept, both participants get their allocated amounts. If they reject, Participant 2 gets nothing, but Participant 1 still gets their allocated amount.

Dictator 1 and Ultimatum 1 are the standard forms of the Dictator Game and Ultimatum Game, which are experiments that have been conducted dozens if not hundreds of times and are the subject of a huge number of papers in experimental economics.

These experiments clearly demonstrate the existence of social preferences. But I think even most behavioral economists don’t quite seem to grasp just how compelling that evidence is.

This is because they have generally failed to compare against my other two experiments, Dictator 2 and Ultimatum 2.

If social preferences did not exist, Participant 1 would be completely indifferent about what happened to the money that they themself did not receive.

In that case, Dictator 1 and Dictator 2 should show the same result: Participant 1 chooses to get $20.

Likewise, Ultimatum 1 and Ultimatum 2 should show the same result: Participant 1 chooses to get $19, offering only $1 to Participant 2, and Participant 2 accepts. This is the outcome that is “rational” in the hyper-selfish neoclassical sense.
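
To make the no-social-preferences prediction concrete, here is a minimal sketch in Python. It assumes whole-dollar splits and a purely selfish Participant 2 who accepts any offer that leaves them strictly better off than rejecting; under those assumptions it yields identical predictions for both Dictator variants and both Ultimatum variants.

```python
# Selfish-rational predictions for the four $20 games (a sketch;
# assumes whole-dollar splits and a purely selfish Participant 2).

TOTAL = 20  # dollars to allocate

def selfish_dictator_keep():
    # Both Dictator variants: Participant 1 just maximizes their own share.
    return max(range(TOTAL + 1))

def selfish_ultimatum_keep():
    # Both Ultimatum variants: a selfish Participant 2 accepts any offer
    # of at least $1 (accepting $1 beats rejecting for $0 either way),
    # so Participant 1 keeps $19 and offers $1.
    acceptable = [keep for keep in range(TOTAL + 1) if TOTAL - keep >= 1]
    return max(acceptable)

print(selfish_dictator_keep())   # 20
print(selfish_ultimatum_keep())  # 19
```

The point of the sketch is that nothing in the selfish model distinguishes the “1” games from the “2” games: the other player’s payoff never enters the objective.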

Much ink has already been spilled over the fact that these are not the typical outcomes of Dictator 1 and Ultimatum 1. Far more likely is that Participant 1 offers something close to $10, or even $10 exactly, in both games; and in Ultimatum 1, in the unlikely event that Participant 1 should offer only $1 or $2, Participant 2 will typically reject.

But what I’d like to point out today is that the “rational” neoclassical outcome is what would happen in Dictator 2 and Ultimatum 2, and that this is so obvious we probably don’t even need to run the experiments (but we might as well, just to be sure).

In Dictator 1, the money that Participant 1 doesn’t keep goes to Participant 2, and so they are deciding how to weigh their own interests against those of another. But in Dictator 2, Participant 1 is literally just deciding how much free money they will receive. The other money doesn’t go to anyone—not even back to the university conducting the experiment. It’s just burned. It provides benefit to no one. So the rational choice is in fact obvious: Take all of the free money. (Technically, burning money and thereby reducing the money supply would have a minuscule effect of reducing future inflation across the entire economy. But even the full $20 would be several orders of magnitude too small for anyone to notice—and even a much larger amount like $10 billion would probably end up being compensated by the actions of the Federal Reserve.)

Likewise, in both Ultimatum 1 and Ultimatum 2, the money that Participant 1 doesn’t keep will go to Participant 2. Their offer will thus probably be close to $10. But what I really want to focus in on is Participant 2’s choice: If they are offered only $1 or $2, will they accept? Neoclassical theory says that the “rational” choice is to accept it. But in Ultimatum 1, most people will reject it. Are they being irrational?

If they were simply being irrational—failing to maximize their own payoff—then they should reject just as often in Ultimatum 2. But I contend that they would in fact accept far more offers in Ultimatum 2 than they did in Ultimatum 1. Why? Because rejection doesn’t stop Participant 1 from getting what they demanded. There is no way to punish Participant 1 for an unfair offer in Ultimatum 2: It is literally just a question of whether you get $1 or $0.

Like I said, I haven’t actually run these experiments. I’m not sure anyone has. But these results seem very obvious, and I would be deeply shocked if they did not turn out the way I expect. (Perhaps as shocked as so many neoclassical economists were when they first saw the results of experiments on Dictator 1 and Ultimatum 1!)

Thus, Dictator 2 and Ultimatum 2 should have outcomes much more like what neoclassical economics predicts than Dictator 1 and Ultimatum 1.

Yet the only difference—the only difference—between Dictator 1 and Dictator 2, and between Ultimatum 1 and Ultimatum 2, is what happens to someone else’s payoff when you make your decision. Your own payoff is exactly identical.

Thus, behavior changes when we change only the effects on the payoffs of other people; therefore people care about the payoffs of others; therefore social preferences exist.

QED.

Of course this still leaves the question of what sort of social preferences people have, and why:

  • Why are some people more generous than others? Why are people sometimes spiteful—or even hateful?
  • Is it genetic? Is it evolutionary? Is it learned? Is it cultural? Likely all of the above.
  • Are people implicitly thinking of themselves as playing in a broader indefinitely iterated game called “life” and using that to influence their decisions? Quite possibly.
  • Is maintaining a reputation of being a good person important to people? In general, I’m sure it is, but I don’t think it can explain the results of these economic experiments by itself—especially in versions where everything is completely anonymous.

But given the stark differences between Dictator 1 versus Dictator 2 and Ultimatum 1 versus Ultimatum 2 (and really, feel free to run the experiments!), I don’t think anyone can reasonably doubt that social preferences do, in fact, exist.

If you ever find someone who does doubt social preferences, point them to this post.

Extrapolating the INE

Apr 6 JDN 2460772

I was only able to find sufficient data to calculate the Index of Necessary Expenditure back to 1990. But I found a fairly consistent pattern that the INE grew at a rate about 20% faster than the CPI over that period, so I decided to take a look at what longer-term income growth looks like if we extrapolate that pattern back further in time.

The result is this graph:

Using the CPI, real per-capita GDP in the US (in 2024 dollars) has grown from $25,760 in 1950 to $85,779 today—increasing by a factor of 3.33. Even accounting for increased inequality and the fact that more families have two income earners, that’s still a substantial increase.

But using the extrapolated INE, real per-capita GDP has only grown from $43,622 in 1950 to $85,779 today—increasing by only a factor of 1.97. This is a much smaller increase, especially once we adjust for increased inequality and increased employment for women.
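
The two growth factors follow directly from the figures quoted above:

```python
# Growth in real per-capita GDP (2024 dollars), 1950 to today,
# using the figures in the text.
cpi_factor = 85779 / 25760   # CPI-deflated 1950 figure
ine_factor = 85779 / 43622   # extrapolated-INE-deflated 1950 figure

print(round(cpi_factor, 2))  # 3.33
print(round(ine_factor, 2))  # 1.97
```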

Even without the extrapolation, it’s still clear that real INE-adjusted incomes were basically stagnant in the 2000s, increased rather slowly in the 2010s, and then actually dropped in 2022 after a bunch of government assistance ended. What looked, under the CPI, like steadily increasing real income was actually more like treading water.

Should we trust this extrapolation? It’s a pretty simplistic approach, I admit. But I think it is plausible when we consider this graph of the ratio between the median housing price and the median income:

This ratio was around 6 in the 1950s, then began to fall until in the 1970s it stabilized around 4. It began to slowly creep back up, then absolutely skyrocketed in the 2000s before the 2008 crash. It has been rising again, and is now above 7, the highest it has been since the Second World War. (Does this mean we’re due for another crash? I wouldn’t bet against it.)

What does this mean? It means that a typical family used to be able to afford a typical house with only four years of their total income—and now would require seven. In that sense, homes are 75% more expensive today than they were in the 1970s.

Similar arguments can be made for the rising costs of education and healthcare; while many prices have not grown much (gasoline) or even fallen (jewelry and technology), these necessities have continued to grow more and more expensive, not simply in nominal terms, but even compared to the median income.

This is further evidence that our standard measures of “inflation” and “real income” are fundamentally inadequate. They simply aren’t accurately reflecting the real cost of living for most American families. Even in many times when it seemed “inflation” was low and “real income” was growing, in fact it was growing harder and harder to afford vital necessities such as housing, education, and healthcare.

This economic malaise may well be what drove the widespread low opinion of Biden’s economy. While the official figures looked good, people’s lives weren’t actually getting better.

Yet this is still no excuse for those who voted for Trump; even the policies he proudly announced he would do—like tariffs and deportations—have clearly made these problems worse, and this was not only foreseeable but actually foreseen by the vast majority of the world’s economists. Then there are all the things he didn’t even say he would do but is now doing, like cozying up to Putin, alienating our closest allies, and discussing “methods” for achieving an unconstitutional third term.

Indeed, it honestly feels quite futile to reflect upon what was wrong with our economy when things seemed to be running smoothly, because now things are rapidly getting worse, and showing no sign of getting better any time soon.

A new theoretical model of co-ops

Mar 30 JDN 2460765

A lot of economists seem puzzled by the fact that co-ops are just as efficient as corporate firms, since they have this idea that profit-sharing inevitably results in lower efficiency due to perverse incentives.

I think they’ve been modeling co-ops wrong. Here I present a new model, a very simple one, with linear supply and demand curves. Of course one could make a more sophisticated model, but this should be enough to make the point (and this is just a blog post, not a research paper, after all).

Demand curve is p = a – b q

Marginal cost is f q

There are n workers, who would hold equal shares of the co-op.

Competitive market

First, let’s start with the traditional corporate firm in a competitive market.

Since the market is competitive, price will equal marginal cost, which will equal the wage:

a – b q = f q

q = a/(b+f)

w = f (a/(b+f)) = (a f)/(b+f)

Total profit will be

(p – w)q = 0.
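
The competitive case is easy to verify with exact rational arithmetic; a quick sketch, where the parameter values are arbitrary test points rather than calibrated numbers:

```python
from fractions import Fraction as F

def competitive(a, b, f):
    """Competitive firm: price = marginal cost = wage, zero profit."""
    q = a / (b + f)        # from a - b q = f q
    p = a - b * q          # demand curve
    w = f * q              # wage equals marginal cost
    profit = (p - w) * q
    return q, p, w, profit

# Arbitrary exact test point:
q, p, w, profit = competitive(F(10), F(1), F(2))
assert p == w and profit == 0
```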

Monopoly firm

In a monopoly, marginal revenue would equal marginal cost:
d[pq]/dq = a – 2 b q

If they are also a monopsonist in the labor market, this marginal cost would be marginal cost of labor, not wage:

d[f q²]/dq = 2 f q

a – 2 b q = 2 f q

q = a/(2b + 2f)

p = a – b q = a (1 – b/(2b + 2f)) = (a (b + 2f))/(2b + 2f)

w = f q = (a f)/(2b + 2f)

Total profit will be

(p – w) q = ((a (b + 2f))/(2b + 2f) – (a f)/(2b + 2f)) · a/(2b + 2f) = (a/2) · a/(2b + 2f) = a²/(4b + 4f)
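
The same exact-arithmetic check works here (again at an arbitrary test point); the last line verifies that the profit expression simplifies to a²/(4(b+f)):

```python
from fractions import Fraction as F

def monopoly_monopsony(a, b, f):
    """Monopolist/monopsonist: marginal revenue = marginal labor cost."""
    q = a / (2*b + 2*f)    # from a - 2 b q = 2 f q
    p = a - b * q          # demand curve
    w = f * q              # wage
    profit = (p - w) * q
    return q, p, w, profit

a, b, f = F(10), F(1), F(2)    # arbitrary exact test point
q, p, w, profit = monopoly_monopsony(a, b, f)
assert p == a * (b + 2*f) / (2*b + 2*f)
assert w == a * f / (2*b + 2*f)
assert profit == a**2 / (4 * (b + f))  # profit simplifies to a²/(4(b+f))
```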

Now consider the co-op.

First, suppose that instead of working for a wage, I work for profit sharing.

If our product market is competitive, we’ll be price-takers, and we will produce until price equals marginal cost:

p = f q

a – b q = f q

q = a/(b+f)

But will we, really? I only get 1/n share of the profits. So let’s see here. My marginal cost of production is still f q, but the marginal benefit I get from more sales may only be p/n.

In that case I would work until:

p/n = f q

(a – b q)/n = f q

a – b q = n f q

q = a/(b + n f)

Thus I would under-produce. This is the usual argument against co-ops and similar shared ownership.
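
A quick exact-arithmetic check of the underproduction result (arbitrary test values; n = 1 recovers the competitive quantity):

```python
from fractions import Fraction as F

def profit_sharing_output(a, b, f, n):
    """Each of n worker-owners equates p/n with marginal cost f q."""
    return a / (b + n * f)   # from (a - b q)/n = f q

a, b, f = F(10), F(1), F(2)  # arbitrary exact test point
outputs = [profit_sharing_output(a, b, f, n) for n in (1, 2, 5, 10)]

# With n = 1 this is just the competitive quantity a/(b+f);
# output falls steadily as the number of worker-owners grows.
assert outputs[0] == a / (b + f)
assert all(x > y for x, y in zip(outputs, outputs[1:]))
```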

Co-ops with wages

But that’s not actually how co-ops work. They pay wages. Why do they do that? Well, consider what happens if I am offered a wage as a worker-owner of the co-op.

Is there any reason for the co-op to vote on a wage that is less than the competitive market? No, because owners are workers, so any additional profit from a lower wage would simply be taken from their own wages.

Is there any reason for the co-op to vote on a wage that is more than the competitive market? No, because workers are owners, and any surplus lost by paying higher wages would simply be taken from their own profits.

So if the product market is competitive, the co-op will produce the same amount and charge the same price as a firm in perfect competition, even if they have market power over their own wages.

Monopoly co-ops

The argument above didn’t assume that the co-op has no market power in the labor market. Thus even if they are a monopoly in the product market and a monopsony in the labor market, they still pay a competitive wage.

Thus they would set marginal revenue equal to marginal cost:

a – 2 b q = f q

q = a/(2b + f)

The co-op will produce more than the monopoly firm.

This is the new price:

p = a – b q = a(1 – b/(2b+f)) = a(b+f)/(2b + f)

It’s not obvious that this is lower than the price charged by the monopoly firm, but it is.

(a (b + 2f))/(2b + 2f) – a(b+f)/(2b + f) = (a (2b + f)(b + 2f) – 2 a (b+f)²)/(2(b+f)(2b+f))

This is proportional to:

(2b + f)(b + 2f) – 2(b+f)²

2b² + 5bf + 2f² – (2b² + 4bf + 2f²) = bf
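
The same check in code: the gap between the two prices simplifies to a b f / (2(b+f)(2b+f)), which is positive whenever b and f are.

```python
from fractions import Fraction as F

def price_gap(a, b, f):
    """Monopoly firm price minus monopoly co-op price."""
    p_firm = a * (b + 2*f) / (2*b + 2*f)   # monopolist/monopsonist price
    p_coop = a * (b + f) / (2*b + f)       # monopoly co-op price
    return p_firm - p_coop

# Arbitrary exact test points:
for a, b, f in [(F(10), F(1), F(2)), (F(7), F(3), F(5))]:
    gap = price_gap(a, b, f)
    assert gap == a*b*f / (2 * (b + f) * (2*b + f))
    assert gap > 0   # the co-op always charges less
```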

So it’s not a large difference, but it’s there. In the presence of market power in the labor market, the co-op is better for consumers, because they get more goods and pay a lower price.

Thus, there is actually no lost efficiency from being a co-op. There is simply much lower inequality, and potentially higher efficiency.

But that’s just in theory.

What do we see in practice?

Exactly that.

Co-ops have the same productivity and efficiency as corporate firms, but they pay higher wages, provide better benefits, and offer collateral benefits to their communities. In fact, they are sometimes more efficient than corporate firms.

Since they’re just as efficient—if not more so—and produce much lower inequality, switching more firms over to co-ops would clearly be a good thing.

Why, then, aren’t co-ops more common?

Because the people who have the money don’t like them.

The biggest barrier facing co-ops is their inability to get financing, because they don’t pay shareholders (so no IPOs) and banks don’t like to lend to them. They tend to make less profit than corporate firms, which offers investors a lower return—instead that money goes to the worker-owners. This lower return isn’t due to inefficiency; it’s just a different distribution of income, more to labor and less to capital.

We will need new financial institutions to support co-ops, such as the Cooperative Fund of New England. And general redistribution of wealth would also help, because if middle class people had more wealth they could afford to finance co-ops. (It would also be good for many other reasons, of course.)

The Index of Necessary Expenditure

Mar 16 JDN 2460751

I’m still reeling from the fact that Donald Trump was re-elected President. He seemed obviously horrible at the time, and he still seems horrible now, for many of the same reasons as before (we all knew the tariffs were coming, and I think deep down we knew he would sell out Ukraine because he loves Putin), as well as some brand new ones (I did not predict DOGE would gain access to all the government payment systems, nor that Trump would want to start a “crypto fund”). Kamala Harris was not an ideal candidate, but she was a good candidate, and the comparison between the two could not have been starker.

Now that the dust has cleared and we have good data on voting patterns, I am less convinced than I was that racism and sexism were decisive against Harris. I think they probably hurt her some, but given that she actually lost the most ground among men of color, racism seems like it really couldn’t have been a big factor. Sexism seems more likely to be a significant factor, but the fact that Harris greatly underperformed Hillary Clinton among Latina women at least complicates that view.

A lot of voters insisted that they voted on “inflation” or “the economy”. Setting aside for a moment how absurd it was—even at the time—to think that Trump (he of the tariffs and mass deportations!) was going to do anything beneficial for the economy, I would like to better understand how people could be so insistent that the economy was bad even though standard statistical measures said it was doing fine.

Krugman believes it was a “vibecession”, where people thought the economy was bad even though it wasn’t. I think there may be some truth to this.


But today I’d like to evaluate another possibility, that what people were really reacting against was not inflation per se but necessitization.

I first wrote about necessitization in 2020; as far as I know, the term is my own coinage. The basic notion is that while prices overall may not have risen all that much, prices of necessities have risen much faster, and the result is that people feel squeezed by the economy even as CPI growth remains low.

In this post I’d like to more directly evaluate that notion, by constructing an index of necessary expenditure (INE).

The core idea here is this:

What would you continue to buy, in roughly the same amounts, even if it doubled in price, because you simply can’t do without it?

For example, this is clearly true of housing: You can rent or you can own, but you can’t not have a house. Nor are most families going to buy multiple houses, and they can’t buy partial houses.

It’s also true of healthcare: You need whatever healthcare you need. Yes, depending on your conditions, you maybe could go without, but not without suffering, potentially greatly. Nor are you going to go out and buy a bunch of extra healthcare just because it’s cheap. You need what you need.

I think it’s largely true of education as well: You want your kids to go to college. If college gets more expensive, you might—of necessity—send them to a worse school or not allow them to complete their degree, but this would feel like a great hardship for your family. And in today’s economy you can’t not send your kids to college.

But this is not true of technology: While there is a case to be made that in today’s society you need a laptop in the house, the fact is that people didn’t have them not that long ago, and if they suddenly got a lot cheaper you very well might buy another one.

Well, it just so happens that housing, healthcare, and education have all gotten radically more expensive over time, while technology has gotten radically cheaper. So prima facie, this is looking pretty plausible.

But I wanted to get more precise about it. So here is the index I have constructed. I consider a family of four (two adults, two kids) making the median household income.

To get the median income, I’ll use this FRED series for median household income, then use this table of median federal tax burden to get an after-tax wage. (State taxes vary too much for me to usefully include them.) Since the tax table ends in 2020, which was anomalous, I’m going to assume 2021–2024 were about the same as 2019.

I assume the kids go to public school, but the parents are saving up for college; to make the math simple, I’ll assume the family is saving enough for each kid to graduate with a four-year degree from a public university, and that saving is spread over 16 years of the child’s life. 2*4/16 = 0.5; this means that each year the family needs to come up with 0.5 years of cost of attendance. (I had to get the last few years from here, but the numbers are comparable.)

I assume the family owns two cars—both working full time, they kinda have to—which I amortize over 10-year lifetimes; 2*1/10 = 0.2, so each year the family pays 0.2 times the value of an average midsize car. (The current average new car price is $33,226; I then use the CPI for cars to figure out what it was in previous years.)

I assume they pay a 30-year mortgage on the median home; they would pay interest on this mortgage, so I need to factor that in. I’ll assume they pay the average mortgage rate in that year, but I don’t want to have to do a full mortgage calculation (including PMI, points, down payment etc.) for each year, so I’ll say that the amount they pay is (1/30 + 0.5 (interest rate))*(home value) per year, which seems to be a reasonable approximation over the relevant range.
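
As a sanity check on that shortcut, here it is against the standard fixed-rate annuity formula (a sketch; the rates are arbitrary test points, and like the text it ignores PMI, points, and down payment):

```python
def annuity_factor(rate, years=30):
    """Exact annual payment per dollar borrowed on a fixed-rate loan."""
    return rate / (1 - (1 + rate) ** -years)

def approx_factor(rate, years=30):
    """The shortcut used above: 1/term plus half the interest rate."""
    return 1 / years + 0.5 * rate

for rate in (0.03, 0.05, 0.07):   # arbitrary test rates
    rel_err = abs(approx_factor(rate) - annuity_factor(rate)) / annuity_factor(rate)
    # The shortcut runs a bit low, but stays within roughly 15% here.
    assert rel_err < 0.16
```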

I assume that both adults have a 15-mile commute (this seems roughly commensurate with the current mean commute time of 26 minutes), both adults work 5 days per week, 50 weeks per year, and their cars get the median level of gas mileage. This means that they consume 2*15*2*5*50/(median MPG) = 15000/(median MPG) gallons of gasoline per year. I’ll use this BTS data for gas mileage. I’m intentionally not using median gasoline consumption, because when gas is cheap, people might take more road trips, which is consumption that could be avoided without great hardship when gas gets expensive. I will also assume that the kids take the bus to school, so that doesn’t contribute to the gasoline cost.
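
The commuting arithmetic, spelled out:

```python
# Annual commuting miles: 2 adults, 15-mile each-way commute,
# 5 days/week, 50 weeks/year (the assumptions above).
annual_miles = 2 * 15 * 2 * 5 * 50
assert annual_miles == 15_000

def annual_gallons(median_mpg):
    """Gallons of gasoline the family needs per year for commuting."""
    return annual_miles / median_mpg
```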

I will then multiply that by the average price of gasoline in June of each year, which I have from the EIA since 1993. (I’ll extrapolate 1990–1992 as the same as 1993, which is conservative.)

I will assume that the family owns 2 cell phones, 1 computer, and 1 television. This is tricky, because the quality of these tech items has dramatically increased over time.

If you try to measure with equivalent buying power (e.g. a 1 MHz computer, a 20-inch CRT TV), then you’ll find that these items have gotten radically cheaper; $1000 in 1950 would only buy as much TV as $7 today, and a $50 Raspberry Pi’s 2.4 GHz processor is 150 times faster than the 16 MHz offered by an Apple Powerbook in 1991—despite the latter selling for $2500 nominally. So in dollars per gigahertz, the price of computers has fallen by an astonishing 7,500 times just since 1990.

But I think that’s an unrealistic comparison. The standards for what was considered necessary have also increased over time. I actually think it’s quite fair to assume that people have spent a roughly constant nominal amount on these items: about $500 for a TV, $1000 for a computer, and $500 for a cell phone. I’ll also assume that the TV and phones are good for 5 years while the computer is good for 2 years, which makes the total annual expenditure for 2 phones, a TV, and a computer equal to 2/5*500 + 1/5*500 + 1/2*1000 = 800. This is about what a family must spend every year to feel like they have an adequate amount of digital technology.
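
The amortization arithmetic, spelled out:

```python
# Annual tech spending under the constant-nominal-price assumption:
# 2 phones at $500 over 5 years, 1 TV at $500 over 5 years,
# 1 computer at $1000 over 2 years.
phones = 2 * 500 / 5
tv = 1 * 500 / 5
computer = 1 * 1000 / 2
annual_tech = phones + tv + computer
assert annual_tech == 800
```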

I will also assume that the family buys clothes with this equivalent purchasing power, with an index that goes from 166 in 1990 to 177 in 2024—also nearly constant in nominal terms. I’ll multiply that index by $10 because the average annual household spending on clothes is about $1700 today.

I will assume that the family buys the equivalent of five months of infant care per year; they surely spend more than this (in either time or money) when they have actual infants, but less as the kids grow. This amounts to about $5000 today, but was only $1600 in 1990—a 214% increase, or 3.42% per year.

For food expenditure, I’m going to use the USDA’s thrifty plan for June of that year. I’ll use the figures assuming that one child is 6 and the other is 9. I don’t have data before 1994, so I’ll extrapolate that with the average growth rate of 3.2%.

Food expenditures have been at a fairly consistent 11% of disposable income since 1990; so I’m going to include them as 2*11%*40*50*(after-tax median wage) = 440*(after-tax median wage).

The figures I had the hardest time getting were for utilities. It’s also difficult to know what to include: Is Internet access a necessity? Probably, nowadays—but not in 1990. Should I separate electric and natural gas, even though they are partial substitutes? But using these figures I estimate that utility costs rise at about 0.8% per year in CPI-adjusted terms, so what I’ll do is benchmark to $3800 in 2016 and assume that utility costs have risen by (0.8% + inflation rate) per year each year.

Healthcare is also a tough one; pardon the heteronormativity, but for simplicity I’m going to use the mean personal healthcare expenditures for one man and one woman (aged 19–44) and one boy and one girl (aged 0–18). Unfortunately I was only able to find that for two-year intervals in the range from 2002 to 2020, so I interpolated and extrapolated both directions assuming the same average growth rate of 3.5%.

So let’s summarize what all is included here:

  • Estimated payment on a mortgage
  • 0.5 years of college tuition
  • amortized cost of 2 cars
  • 15,000/(median MPG) gallons of gasoline
  • amortized cost of 2 phones, 1 computer, and 1 television
  • average spending on clothes
  • 11% of income on food
  • Estimated utilities spending
  • Estimated childcare equivalent to five months of infant care
  • Healthcare for one man, one woman, one boy, one girl
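
Assembled into one function, the index looks like this. This is a sketch only: every argument is a per-year data input to be supplied from the sources described above (none of the values are estimates of mine), and `tech` defaults to the $800 figure derived earlier.

```python
def ine(home_value, mortgage_rate, college_cost, car_price,
        median_mpg, gas_price, clothes, food_income_share,
        after_tax_income, utilities, childcare, healthcare,
        tech=800):
    """Annual Index of Necessary Expenditure for a family of four,
    assembled from the components described above. Every argument
    is a data input for the year in question."""
    mortgage = (1/30 + 0.5 * mortgage_rate) * home_value
    college = 0.5 * college_cost      # 2 kids * 4 years / 16 years of saving
    cars = 0.2 * car_price            # 2 cars amortized over 10 years
    gasoline = (15_000 / median_mpg) * gas_price   # commuting miles per year
    food = food_income_share * after_tax_income
    return (mortgage + college + cars + gasoline + tech
            + clothes + food + utilities + childcare + healthcare)
```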

There are obviously many criticisms you could make of these choices. If I were writing a proper paper, I would search harder for better data and run robustness checks over the various estimation and extrapolation assumptions. But for these purposes I really just want a ballpark figure, something that will give me a sense of what rising cost of living feels like to most people.

What I found absolutely floored me. Over the range from 1990 to 2024:

  1. The Index of Necessary Expenditure rose by an average of 3.45% per year, almost a full percentage point higher than the average CPI inflation of 2.62% per year.
  2. Over the same period, after-tax income rose at a rate of 3.31%, faster than CPI inflation, but slightly slower than the growth rate of INE.
  3. The Index of Necessary Expenditure was over 100% of median after-tax household income every year except 2020.
  4. Since 2021, the Index of Necessary Expenditure has risen at an average rate of 5.74%, compared to CPI inflation of only 2.66%. In that same time, after-tax income has only grown at a rate of 4.94%.

Point 3 is the one that really stunned me. The only time in the last 34 years that a family of four has been able to actually pay for all necessities—just necessities—on a typical household income was during the COVID pandemic, and that in turn was only because the federal tax burden had been radically reduced in response to the crisis. This means that every single year, a typical American family has been either going further and further into debt, or scrimping on something really important—like healthcare or education.

No wonder people feel like the economy is failing them! It is!

In fact, I can even make sense now of how Trump could convince people with “Are you better off than you were four years ago?” in 2024 looking back at 2020—while the pandemic was horrific and the disruption to the economy was massive, thanks to the US government finally actually being generous to its citizens for once, people could just about actually make ends meet. That one year. In my entire life.

This is why people felt betrayed by Biden’s economy. For the first time most of us could remember, we actually had this brief moment when we could pay for everything we needed and still have money left over. And then, when things went back to “normal”, it was taken away from us. We were back to no longer making ends meet.

When I went into this, I expected to see that the INE had risen faster than both inflation and income, which was indeed the case. But I expected to find that INE was a large but manageable proportion of household income—maybe 70% or 80%—and slowly growing. Instead, I found that INE was greater than 100% of income in every year but one.

And the truth is, I’m not sure I’ve adequately covered all necessary spending! My figures for childcare and utilities are the most uncertain; those could easily go up or down by quite a bit. But even if I exclude them completely, the reduced INE is still greater than income in most years.

Suddenly the way people feel about the economy makes a lot more sense to me.

Why does everyone work full-time?

Jun 30 JDN 2460492

Over 70% of US workers work “full-time”, that is, at least 40 hours a week. The average number of hours worked per week is 33.8, and the average number of overtime hours is only 3.6. So basically, about 2/3 of workers work almost exactly 40 hours per week.

We’re accustomed to this situation, so it may not seem strange to you. But stop and think for a moment: What are the odds that across every industry, exactly 40 hours per week is the most efficient arrangement?

Indeed, there is mounting evidence that in many industries, 40 hours is too much, and something like 35 or even 30 would actually be more efficient. Yet we continue to work 40-hour weeks.

This looks like a corner solution: Rather than choosing an optimal amount, we’re all up against some kind of constraint.


What’s the constraint? Well, the government requires (for most workers) that anything above 40 hours per week must be paid as overtime, that is, at a higher wage rate. So it looks like we would all be working more than 40 hours per week, but we hit the upper limit due to these regulations.

Does this mean we would be better off without the regulations? Clearly not. As I just pointed out, the evidence is mounting that 40 hours is too much, not too little. But why, then, would we all be trying to work so many hours?

I believe this is yet another example of hyper-competition, where competition drives us to an inefficient outcome.

Employers value employees who work a lot of hours. Indeed, I contend that they do so far more than makes any rational sense; they seem to care more about how many hours you work than about the actual quality or quantity of your output. Maybe this is because hours worked is easier to measure, or because it seems like a fairer estimate of your effort; but for whatever reason, employers really seem to reward employees who work a lot of hours, regardless of almost everything else.

In the absence of a limit on hours worked, then, employers are going to heap rewards on whoever works the most hours, and so people will be pressured to work more and more hours. Then we would all work ourselves to death, and it’s not even clear that this would be good for GDP.

Indeed, this seems to be what happened, before the 40-hour work week became the standard. In the 1800s, the average American worked over 60 hours per week. It wasn’t until the 1940s that 40-hour weeks became the norm.

But speaking of norms, that also seems to be a big factor here. The truth is, overtime isn’t really that expensive, and employers could be smarter about rewarding good work rather than more hours. But once a norm establishes itself in a society, it can be very hard to change. And right now, the norm is that 40 hours is a “normal” “standard” “full” work week—any more is above and beyond, and any less is inferior.

This is a problem, because a lot of people can’t work 40-hour weeks. Our standard for what makes someone “disabled” isn’t that you can’t work at all; it’s that you can’t work as much as society expects. I wonder how many people are currently living on disability who could have been working part-time, but there just weren’t enough part-time jobs available. The employment rate among people with a disability is only 41%, compared to 77% of the general population.

And it’s not that we need to work this much. Our productivity is now staggeringly high: We produce more than five times as much wealth per hour of work as we did as recently as the 1940s. So in theory, we should be able to live just as well while working one-fifth as much… but that’s clearly not what happened.
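As a back-of-the-envelope check on that claim, here is the arithmetic as a sketch (the fivefold multiplier is the figure quoted above; the hours are illustrative):

```python
# If output per hour is ~5x its 1940s level, how few weekly hours would
# reproduce the 1940s weekly output? (The 5x multiplier is from the text;
# everything else here is illustrative arithmetic.)

productivity_multiplier = 5.0   # output per hour today vs. the 1940s
hours_per_week_1940s = 40.0     # the standard full-time week, then and now

hours_needed_today = hours_per_week_1940s / productivity_multiplier
print(hours_needed_today)  # 8.0 hours per week for the same weekly output
```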

Keynes accurately predicted our high level of productivity; but he wrongly predicted that we would work less, when instead we just kept right on working almost as hard as before.

Indeed, it doesn’t even seem like we live five times as well while working just as much. Many things are better now—healthcare, entertainment, and of course electronics—but somehow, we really don’t feel like we are living better lives than our ancestors.

The Economic Policy Institute offers an explanation for this phenomenon: Our pay hasn’t kept up with our productivity.


Up until about 1980, productivity and pay rose in lockstep. But then they started to diverge, and they never again converged. Productivity continued to soar, while real wages only barely increased. The result is that since then, productivity has grown by 64%, and hourly pay has only grown 15%.
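A quick sketch of what that divergence adds up to, using only the two cumulative growth figures quoted above (this is my arithmetic, not EPI’s own calculation):

```python
# How far has pay fallen behind productivity since ~1980?
# Growth figures are the ones quoted in the text (EPI): 64% and 15%.

productivity_growth = 0.64  # cumulative productivity growth since ~1980
pay_growth = 0.15           # cumulative real hourly pay growth since ~1980

# If pay had tracked productivity, it would be this much higher today:
catch_up = (1 + productivity_growth) / (1 + pay_growth) - 1
print(f"{catch_up:.1%}")  # 42.6% -- the raise needed for pay to catch up
```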

This is definitely part of the problem, but I think there’s more to it as well. Housing and healthcare have become so utterly unaffordable in this country that it really doesn’t matter that our cars are nice and our phones are dirt cheap. We are theoretically wealthier now, but most of that extra wealth goes into simply staying healthy and having a home. Our consumption has been necessitized.

If we can solve these problems, maybe people won’t feel a need to work so many hours. Or, maybe competition will continue to pressure them to work those hours… but at least we’ll actually feel richer when we do it.

Wrongful beneficence

Jun 9 JDN 2460471

One of the best papers I’ve ever read—one that in fact was formative in making me want to be an economist—is Wrongful Beneficence by Chris Meyers.

This paper opened my eyes to a whole new class of unethical behavior: Acts that unambiguously make everyone better off, but nevertheless are morally wrong. Hence, wrongful beneficence.

A lot of economists don’t even seem to believe in such things. They seem convinced that as long as no one is made worse off by a transaction, that transaction must be ethically defensible.

Chris Meyers convinced me that they are wrong.

The key insight here is that it’s still possible to exploit someone even if you make them better off. This happens when they are in a desperate situation and you take advantage of that to get an unfair payoff.


Here is one of the cases Meyers offers to demonstrate this:

Suppose Carole is driving across the desert on a desolate road when her car breaks down. After two days and two nights without seeing a single car pass by, she runs out of water and feels rather certain that she will perish if not rescued soon. Now suppose that Jason happens to drive down this road and finds Carole. He sees that her situation is rather desperate and that she needs (or strongly desires) to get to the nearest town as soon as possible. So Jason offers her a ride but only on the condition that […] [she gives him] her entire net worth, the title to her house and car, all of her money in the bank, and half of her earnings for the next ten years.

Carole obviously is better off than she would be if Jason hadn’t shown up—she might even have died. She freely consented to this transaction—again, because if she didn’t, she might die. Yet it seems absurd to say that Jason has done nothing wrong by making such an exorbitant demand. If he had asked her to pay for gas, or even to compensate him for his time at a reasonable rate, we’d have no objection. But to ask for her life savings, all her assets, and half her earnings for ten years? Obviously unfair—and obviously unethical. Jason is making Carole (a little) better off while making himself (a lot) better off, so everyone is benefited; but what he’s doing is obviously wrong.

Once you recognize that such behavior can exist, you start to see it all over the place, particularly in markets, where corporations are quite content to gouge their customers with high prices and exploit their workers with low wages—but still, technically, we’re better off than we would be with no products and no jobs at all.

Indeed, the central message of Wrongful Beneficence is actually about sweatshop labor: It’s not that the workers are worse off than they would have been (in general, they aren’t); it’s that they are so desperate that corporations can get away with exploiting them with obviously unfair wages and working conditions.

Maybe it would be easier just to move manufacturing back to First World countries?

Right-wingers are fond of making outlandish claims that making products at First World wages would be utterly infeasible; here’s one claiming that an iPhone would need to cost $30,000 if it were made in the US. In fact, the truth is that it would only need to cost about $40 more—because hardly any of its cost is actually going to labor. Most of its price is pure monopoly profit for Apple; most of the rest is components and raw materials. (Of course, if those also had to come from the US, the price would go up more; but even so, we’re talking something like double its original price, not thirty times. Workers in the US are indeed paid a lot more than workers in China; they are also more productive.)

It’s true that actually moving manufacturing from other countries back to the US would be a substantial undertaking, requiring retooling factories, retraining engineers, and so on; but it’s not like we’ve never done that sort of thing before. I’m sure it could not be done overnight; but of course it could be done. We do this sort of thing all the time.

Ironically, this sort of right-wing nonsense actually seems to feed the far left as well, supporting their conviction that all this prosperity around us is nothing more than an illusion, that all our wealth only exists because we steal it from others. But this could scarcely be further from the truth; our wealth comes from technology, not theft. If we offered a fairer bargain to poorer countries, we’d be a bit less rich, but they would be much less poor—the overall wealth in the world would in fact probably increase.

A better argument for not moving manufacturing back to the First World is that many Third World economies would collapse if they stopped manufacturing things for other countries, and that would be disastrous for millions of people.

And free trade really does increase efficiency and prosperity for all.

So, yes; let’s keep on manufacturing goods wherever it is cheapest to do so. But when we decide what’s cheapest, let’s evaluate that based on genuinely fair wages and working conditions, not the absolute cheapest that corporations think they can get away with.

Sometimes they may even decide that it’s not really cheaper to manufacture in poorer countries, because they need advanced technology and highly-skilled workers that are easier to come by in First World countries. In that case, bringing production back here is the right thing to do.

Of course, this raises the question:

What would be fair wages and working conditions?

That’s not so easy to answer. Since workers in Third World countries are less educated than workers in First World countries, and have access to less capital and worse technology, we should in fact expect them to be less productive and therefore get paid less. That may be unfair in some cosmic sense, but it’s not anyone’s fault, and it’s not any particular corporation’s responsibility to fix it.

But when there are products for which less than 1% of the sales price of the product goes to the workers who actually made the product, something is wrong. When the profit margin is often wildly larger than the total amount spent on labor, something is wrong.

It may be that we will never have precise thresholds we can set to decide what definitely is or is not exploitative; but that doesn’t mean we can’t ever recognize it when we see it. There are various institutional mechanisms we could use to enforce better wages and working conditions without ever making such a sharp threshold.

One of the simplest, in fact, is Fair Trade.

Fair Trade is by no means a flawless system; in fact there’s a lot of research debating how effective it is at achieving its goals. But it does seem to be accomplishing something. And it’s a system that we already have in place, operating successfully in many countries; it simply needs to be scaled up (and hopefully improved along the way).

One of the clearest pieces of evidence that it’s helping, in fact, is that farmers are willing to participate in it. That shows that it is beneficent.

Of course, that doesn’t mean that it’s genuinely fair! This could just be another kind of wrongful beneficence. Perhaps Fair Trade is really just less exploitative than all the available alternatives.

If so, then we need something even better still, some new system that will reliably pass on the increased cost for customers all the way down to increased wages for workers.

Fair Trade shows us something else, too: A lot of customers clearly are willing to pay a bit more in order to see workers treated better. Even if they weren’t, maybe they should be forced to. But the fact is, they are! Even those who are most adamantly opposed to Fair Trade can’t deny that people really are willing to pay more to help other people. (Yet another example of obvious altruism that neoclassical economists somehow manage to ignore.) They simply deny that it’s actually helping, which is an empirical matter.

But if this isn’t helping enough, fine; let’s find something else that does.

Surviving in an ad-supported world

Apr 21 JDN 2460423

Advertising is as old as money—perhaps even older. Scams have likewise been a part of human society since time immemorial.

But I think it’s fair to say that recently, since the dawn of the Internet at least, both advertising and scams have been proliferating, far beyond what they used to be.

We live in an ad-supported world.

News sites are full of ads. Search engines are full of ads. Even shopping sites are full of ads now; we literally came here planning to buy something, but that wasn’t good enough for you; you want us to also buy something else. Most of the ads are for legitimate products; but some are for scams. (And then there’s multi-level marketing, which is somewhere in between: technically not a scam.)

We’re so accustomed to getting spam emails, phone calls, and texts full of ads and scams that we just accept it as a part of our lives. But these are not something people had to live with even 50 years ago. This is a new, fresh Hell we have wrought for ourselves as a civilization.

AI promises to make this problem even worse. AI still isn’t very good at doing anything particularly useful; you can’t actually trust it to drive a truck or diagnose an X-ray. (There are people working on this sort of thing, but they haven’t yet succeeded.) But it’s already pretty good at making spam texts and phone calls. It’s already pretty good at catfishing people. AI isn’t smart enough to really help us, but it is smart enough to hurt us, especially those of us who are most vulnerable.

I think that this causes a great deal more damage to our society than is commonly understood.

It’s not just that ads are annoying (though they are), or that they undermine our attention span (though they do), or that they exploit the vulnerable (though they do).

I believe that an ad-supported world is a world where trust goes to die.

When the vast majority of your interactions with other people involve those people trying to get your money, some of them by outright fraud—but none of them really honestly—you have no choice but to ratchet down your sense of trust. It begins to feel as if financial transactions are the only form of interaction there is in the world.

But in fact most people can be trusted, and should be trusted—you are missing out on a great deal of what makes life worth living if you do not know how to trust.

The question is whom you trust. You should trust people you know, people you interact with personally and directly. Even strangers are more trustworthy than any corporation will ever be. And never are corporations more dishonest than when they are sending out ads.


The more the world fills with ads, the less room it has for trust.

Is there any way to stem this tide? Or are we simply doomed to live in the cyberpunk dystopia our forebears warned about, where everything is for sale and all available real estate is used for advertising?

Ads and scams only exist because they are profitable; so our goal should be to make them no longer profitable.

Here is one very simple piece of financial advice that will help protect you. Indeed, I believe it can protect so well, that if everyone followed it consistently, we would stem the tide.

Only give money to people you have sought out yourself.

Only buy things you already knew you wanted.

Yes, of course you must buy things. We live in a capitalist society. You can’t survive without buying things. But this is how buying things should work:

You check your fridge and see you are out of milk. So you put “milk” on your grocery list, you go to the grocery store, you find some milk that looks good, and you buy it.

Or, your car is getting old and expensive to maintain, and you decide you need a new one. You run the numbers on your income and expenses, and come up with a budget for a new car. You go to the dealership, they help you pick out a car that fits your needs and your budget, and you buy it.

Your tennis shoes are getting frayed, and it’s time to replace them. You go online and search for “tennis shoes”, looking up sizes and styles until you find a pair that suits you. You order that pair.

You should be the one to decide that you need a thing, and then you should go out looking for it.

It’s okay to get help searching, or even listen to some sales pitches, as long as the whole thing was your idea from the start.

But if someone calls you, texts you, or emails you, asking for your money for something?

Don’t give them a cent.

Just don’t. Don’t do it. Even if it sounds like a good product. Even if it is a good product. If the product they are selling sounds so great that you decide you actually want to buy it, go look for it on your own. Shop around. If you can, go out of your way to buy it from a competing company.

Your attention is valuable. Don’t reward them for stealing it.

This applies to donations, too. Donation asks aren’t as awful as ads, let alone scams, but they are pretty obnoxious, and they only send those things out because people respond to them. If we all stopped responding, they’d stop sending.

Yes, you absolutely should give money to charity. But you should seek out the charities to donate to. You should use trusted sources (like GiveWell and Charity Navigator) to vet them for their reliability, transparency, and cost-effectiveness.

If you just receive junk mail asking you for donations, feel free to take out any little gifts they gave you (it’s often return address labels, for some reason), and then recycle the rest.

Don’t give to the ones who ask for it. Give to the ones who will use it the best.

Reward the charities that do good, not the charities that advertise well.

This is the rule to follow:

If someone contacts you—if they initiate the contact—refuse to give them any money. Ever.

Does this rule seem too strict? It is quite strict, in fact. It requires you to pass up many seemingly-appealing opportunities, and the more ads there are, the more opportunities you’ll need to pass up.

There may even be a few exceptions; no great harm befalls us if we buy Girl Scout cookies or donate to the ASPCA because the former knocked on our doors and the latter showed us TV ads. (Then again, you could just donate to feminist and animal rights charities without any ads or sales pitches.)

But in general, we live in a society that is absolutely inundated with people accosting us and trying to take our money, and they’re only ever going to stop trying to get our money if we stop giving it to them. They will not stop it out of the goodness of their hearts—no, not even the charities, who at least do have some goodness in their hearts. (And certainly not the scammers, who have none.)

They will only stop if it stops working.

So we need to make it stop working. We need to draw this line.

Trust the people around you, who have earned it. Do not trust anyone who seeks you out asking for money.

Telemarketing calls? Hang up. Spam emails? Delete. Junk mail? Recycle. TV ads? Mute and ignore.

And then, perhaps, future generations won’t have to live in an ad-supported world.

How is the economy doing this well?

Apr 14 JDN 2460416

We are living in a very weird time, economically. The COVID pandemic created huge disruptions throughout our economy, from retail shops closing to shortages in shipping containers. The result was a severe recession with the worst unemployment since the Great Depression.

Now, a few years later, we have fully recovered.

Here’s a graph from FRED showing our unemployment and inflation rates since 1990 [technical note: I’m using the urban CPI; there are a few other inflation measures you could use instead, but they look much the same]:

Inflation fluctuates pretty quickly, while unemployment moves much more slowly.
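For concreteness, the inflation series in a graph like this is just the year-over-year percent change in the CPI. A minimal sketch of that computation, with a made-up index (not actual BLS data):

```python
# Year-over-year inflation from a monthly price index: the inflation rate
# in month t is the percent change from the same month one year earlier.
# (The index values below are made up, not actual BLS CPI data.)

def yoy_inflation(index, t):
    """Percent change of index[t] versus twelve months earlier."""
    return (index[t] / index[t - 12] - 1) * 100

cpi = [100.0 + 0.25 * m for m in range(24)]  # a fake, slowly rising index
print(round(yoy_inflation(cpi, 23), 2))  # 2.92 (percent, annualized)
```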

There are a lot of things we can learn from this graph:

  1. Before COVID, we had pretty low inflation; from 1990 to 2019, inflation averaged about 2.4%, just over the Fed’s 2% target.
  2. Before COVID, we had moderate to high unemployment; it rarely went below 5%, and for several years after the 2008 crash it was over 7%—which is why we called it the Great Recession.
  3. The only times we actually had negative inflation—deflation—were during recessions, and coincided with high unemployment; so, no, we really don’t want prices to come down.
  4. During COVID, we had a massive spike in unemployment up to almost 15%, but then it came back down much more rapidly than it had in the Great Recession.
  5. After COVID, there was a surge in inflation, peaking at almost 10%.
  6. That inflation surge was short-lived; by the end of 2022 inflation was back down to 4%.
  7. Unemployment now stands at 3.8% while inflation is at 2.7%.

What I really want to emphasize right now is point 7, so let me repeat it:

Unemployment now stands at 3.8% while inflation is at 2.7%.

Yes, technically, 2.7% is above our inflation target. But honestly, I’m not sure it should be. I don’t see any particular reason to think that 2% is optimal, and based on what we’ve learned from the Great Recession, I actually think 3% or even 4% would be perfectly reasonable inflation targets. No, we don’t want to be going into double-digits (and we certainly don’t want true hyperinflation); but 4% inflation really isn’t a disaster, and we should stop treating it like it is.

2.7% inflation is actually pretty close to the 2.4% inflation we’d been averaging from 1990 to 2019. So I think it’s fair to say that inflation is back to normal.

But the really wild thing is that unemployment isn’t back to normal: It’s much better than that.

To get some more perspective on this, let’s extend our graph backward all the way to 1950:

Inflation has been much higher than it is now. In the late 1970s, it was consistently as high as it got during the post-COVID surge. But it has never been substantially lower than it is now; a little above the 2% target really seems to be what stable, normal inflation looks like in the United States.

On the other hand, unemployment is almost never this low. It was for a few years in the early 1950s and the late 1960s; but otherwise, it has always been higher—and sometimes much higher. It did not dip below 5% for the entire period from 1971 to 1994.

Our intro macroeconomics courses hammer into us the Phillips Curve, which supposedly says that unemployment is inversely related to inflation, so that it’s impossible to have both low inflation and low unemployment.

But we’re looking at it, right now. It’s here, right in front of us. What wasn’t supposed to be possible has now been achieved. E pur si muove.

There was supposed to be this terrible trade-off between inflation and unemployment, leaving our government with the stark dilemma of either letting prices surge or letting millions remain out of work. I had always been on the “inflation” side: I thought that rising prices were far less of a problem than people out of work.

But we just learned that the entire premise was wrong.

You can have both. You don’t have to choose.

Right here, right now, we have both. All we need to do is keep doing whatever we’re doing.

One response might be: what if we can’t? What if this is unsustainable? (Then again, conservatives never seemed terribly concerned about sustainability before….)

It’s worth considering. One thing that doesn’t look so great now is the federal deficit. It got extremely high during COVID, and it’s still pretty high now. But as a proportion of GDP, it isn’t anywhere near as high as it was during WW2, and we certainly made it through that all right:

So, yeah, we should probably see if we can bring the budget back to balanced—probably by raising taxes. But this isn’t an urgent problem. We have time to sort it out. 15% unemployment was an urgent problem—and we fixed it.

In fact in some ways the economy is even doing better now than it looks. Unemployment for Black people has never been this low, since we’ve been keeping track of it:

Black people had basically learned to live with 8% or 9% unemployment as if it were normal; but now, for the first time ever—ever—their unemployment rate is down to only 5%.

This isn’t because people are dropping out of the labor force. Broad unemployment, which includes people marginally attached to the labor force, people employed part-time not by choice, and people who gave up looking for work, is also at historic lows, despite surging to almost 23% during COVID:

In fact, overall employment among people 25-54 years old (considered “prime age”—old enough to not be students, young enough to not be retired) is nearly the highest it has ever been, and radically higher than it was before the 1980s (because women entered the workforce):

So this is not an illusion: More Americans really are working now. And employment has become more inclusive of women and minorities.

I really don’t understand why President Biden isn’t more popular. Biden inherited the worst unemployment since the Great Depression, and turned it around into an economic situation so good that most economists thought it was impossible. A 39% approval rating does not seem consistent with that kind of staggering economic improvement.

And yes, there are a lot of other factors involved aside from the President; but for once I think he really does deserve a lot of the credit here. Programs he enacted to respond to COVID brought us back to work quicker than many thought possible. Then, the Inflation Reduction Act made historic progress at fighting climate change—and also, lo and behold, reduced inflation.

He’s not a particularly charismatic figure. He is getting pretty old for this job (or any job, really). But Biden’s economic policy has been amazing, and he deserves more credit for it.

The Butlerian Jihad is looking better all the time

Mar 24 JDN 2460395

A review of The Age of Em by Robin Hanson

In the Dune series, the Butlerian Jihad was a holy war against artificial intelligence that resulted in a millennia-long taboo against all forms of intelligent machines. It was effectively a way to tell a story about the distant future without basically everything being about robots or cyborgs.

After reading Robin Hanson’s book, I’m starting to think that maybe we should actually do it.

Thus it is written: “Thou shalt not make a machine in the likeness of a human mind.”

Hanson says he’s trying to reserve judgment and present objective predictions without evaluation, but it becomes very clear throughout that this is the future he wants, as well as—or perhaps even instead of—the world he expects.

In many ways, it feels like he has done his very best to imagine a world of true neoclassical rational agents in perfect competition, a sort of sandbox for the toys he’s always wanted to play with. Throughout he very much takes the approach of a neoclassical economist, making heroic assumptions and then following them to their logical conclusions, without ever seriously asking whether those assumptions actually make any sense.

To his credit, Hanson does not buy into the hype that AGI will be successful any day now. He predicts that we will achieve the ability to fully emulate human brains and thus create a sort of black-box AGI that behaves very much like a human within about 100 years. Given how the Blue Brain Project has progressed (much slower than its own hype machine told us it would—and let it be noted that I predicted this from the very beginning), I think this is a fairly plausible time estimate. He refers to a mind emulated in this way as an “em”; I have mixed feelings about the term, but I suppose we did need some word for that, and it certainly has conciseness on its side.

Hanson believes that a true understanding of artificial intelligence will only come later, and the sort of AGI that can be taken apart and reprogrammed for specific goals won’t exist for at least a century after that. Both of these sober, reasonable predictions are deeply refreshing in a field that’s been full of people saying “any day now” for the last fifty years.

But Hanson’s reasonableness just about ends there.

In The Age of Em, government is exactly as strong as Hanson needs it to be. Somehow it simultaneously ensures a low crime rate among a population that doubles every few months while also having no means of preventing that population growth. Somehow it ensures that there is no labor collusion and corporations never break the law, but without imposing any regulations that might reduce efficiency in any way.

All of this begins to make more sense when you realize that Hanson’s true goal here is to imagine a world where neoclassical economics is actually true.

He realized it didn’t work on humans, so instead of giving up the theory, he gave up the humans.

Hanson predicts that ems will casually make short-term temporary copies of themselves called “spurs”, designed to perform a particular task and then get erased. I guess maybe he would, but I for one would not so cavalierly create another person and then make their existence dedicated to doing a single job before they die. The fact that I created this person, and they are very much like me, seem like reasons to care more about their well-being, not less! You’re asking me to enslave and murder my own child. (Honestly, the fact that Robin Hanson thinks ems will do this all the time says more about Robin Hanson than anything else.) Any remotely sane society of ems would ban the deletion of another em under any but the most extreme circumstances, and indeed treat it as tantamount to murder.

Hanson predicts that we will only copy the minds of a few hundred people. This is surely true at some point—the technology will take time to develop, and we’ll have to start somewhere. But I don’t see why we’d stop there, when we could continue to copy millions or billions of people; and his choices of who would be emulated, while not wildly implausible, are utterly terrifying.

He predicts that we’d emulate genius scientists and engineers; okay, fair enough, that seems right. I doubt that the benefits of doing so will be as high as many people imagine, because scientific progress actually depends a lot more on the combined efforts of millions of scientists than on rare sparks of brilliance by lone geniuses; but those people are definitely very smart, and having more of them around could be a good thing. I can also see people wanting to do this, and thus investing in making it happen.

He also predicts that we’d emulate billionaires. Now, as a prediction, I have to admit that this is actually fairly plausible; billionaires are precisely the sort of people who are rich enough to pay to be emulated and narcissistic enough to want to. But where Hanson really goes off the deep end here is that he sees this as a good thing. He seems to honestly believe that billionaires are so rich because they are so brilliant and productive. He thinks that a million copies of Elon Musk would produce a million hectobillionaires—when in reality it would produce a million squabbling narcissists, who at best had to split the same $200 billion wealth between them, and might very well end up with less because they squander it.

Hanson has a long section on trying to predict the personalities of ems. Frankly this could just have been dropped entirely; it adds almost nothing to the book, and the book is much too long. But the really striking thing to me about that section is what isn’t there. He goes through a long list of studies that found weak correlations between various personality traits like extroversion or openness and wealth—mostly comparing something like the 20th percentile to the 80th percentile—and then draws sweeping conclusions about what ems will be like, under the assumption that ems are all drawn from people in the 99.99999th percentile. (Yes, upper-middle-class people are, on average, more intelligent and more conscientious than lower-middle-class people. But do we even have any particular reason to think that the personalities of people who make $150,000 are relevant to understanding the behavior of people who make $15 billion?) But he completely glosses over the very strong correlations that specifically apply to people in that very top super-rich class: They’re almost all narcissists and/or psychopaths.

Hanson predicts a world where each em is copied many, many times—millions, billions, even trillions of times, and also in which the very richest ems are capable of buying parallel processing time that lets them accelerate their own thought processes to a million times faster than a normal human. (Is that even possible? Does consciousness work like that? Who knows!?) The world that Hanson is predicting is thus one where all the normal people get outnumbered and overpowered by psychopaths.

Basically this is the most abjectly dystopian cyberpunk hellscape imaginable. And he talks about it the whole time as if it were good.

It’s like he played the game Action Potential and thought, “This sounds great! I’d love to live there!” I mean, why wouldn’t you want to owe a life-debt on your own body and have to work 120-hour weeks for a trillion-dollar corporation just to make the payments on it?

Basically, Hanson doesn’t understand how wealth is actually acquired. He is educated as an economist, yet his understanding of capitalism basically amounts to believing in magic. He thinks that competitive markets just somehow perfectly automatically allocate wealth to whoever is most productive, and thus concludes that whoever is wealthy now must just be that productive.

I can see no other way to explain his wildly implausible predictions that the em economy will double every month or two. A huge swath of the book depends upon this assumption, but he waits until halfway through the book to even try to defend it, and then does an astonishingly bad job of doing so. (Honestly, even if you buy his own arguments—which I don’t—they seem to predict that population would grow with Moore’s Law—doubling every couple of years, not every couple of months.)
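To see just how extraordinary that assumption is, here is a quick back-of-the-envelope comparison of the two doubling rates over a single decade (the doubling periods come from the passage above; the ten-year horizon is just for illustration):

```python
# Compound growth over one decade under two doubling periods:
# Hanson's "doubling every month" vs. Moore's-Law-style doubling
# every two years.

years = 10

monthly = 2 ** (years * 12)   # doubling every month: 120 doublings
biennial = 2 ** (years / 2)   # doubling every two years: 5 doublings

print(f"monthly doubling:  ~{monthly:.1e}x growth in {years} years")
print(f"biennial doubling: {biennial:.0f}x growth in {years} years")
```

The gap is not a matter of degree: monthly doubling multiplies the economy by roughly 10^36 in ten years, versus a factor of 32 for biennial doubling.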

Whereas Keynes predicted based on sound economic principles that economic growth would more or less proceed apace and got his answer spot-on, Hanson predicts that for mysterious, unexplained reasons economic growth will suddenly increase by two orders of magnitude—and I’m pretty sure he’s going to be wildly wrong.

Hanson also predicts that ems will be on average poorer than we are, based on some sort of perfect-competition argument that doesn’t actually seem to mesh at all with his predictions of spectacularly rapid economic and technological growth. I think the best way to make sense of this is to assume that it means the trend toward insecure affluence will continue: Ems will have an objectively high standard of living in terms of what they own, what games they play, where they travel, and what they eat and drink (in simulation), but they will constantly be struggling to keep up with the rent on their homes—or even their own bodies. This is a world where (the very finest simulation of) Dom Perignon is $7 a bottle and wages are $980 an hour—but monthly rent is $284,000.
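Those figures are the post’s own illustrative numbers, not data from the book, but they make the squeeze concrete. A quick calculation of how much work that rent demands:

```python
# Back-of-the-envelope check of the "insecure affluence" figures above.
# (The $980/hour wage and $284,000/month rent are the post's own
# hypothetical numbers, used purely for illustration.)

hourly_wage = 980          # dollars per hour
monthly_rent = 284_000     # dollars per month

hours_for_rent = monthly_rent / hourly_wage        # ~290 hours/month
weeks_per_month = 52 / 12                          # ~4.33
hours_per_week = hours_for_rent / weeks_per_month  # ~67 hours/week

print(f"~{hours_for_rent:.0f} hours/month "
      f"(~{hours_per_week:.0f} hours/week) just to cover rent")
```

At those numbers, an em works roughly 67 hours a week before earning a single dollar for anything other than rent—objectively rich, yet perpetually insecure.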

Early in the book Hanson argues that this life of poverty and scarcity will lead to more conservative values, on the grounds that people who are poorer now seem to be more conservative, and this has something to do with farmers versus foragers. Hanson’s explanation of all this is baffling; I will quote it at length, just so it’s clear I’m not misrepresenting it:

The other main (and independent) axis of value variation ranges between poor and rich societies. Poor societies place more value on conformity, security, and traditional values such as marriage, heterosexuality, religion, patriotism, hard work, and trust in authority. In contrast, rich societies place more value on individualism, self-direction, tolerance, pleasure, nature, leisure, and trust. When the values of individuals within a society vary on the same axis, we call this a left/liberal (rich) versus right/conservative (poor) axis.

Foragers tend to have values more like those of rich/liberal people today, while subsistence farmers tend to have values more like those of poor/conservative people today. As industry has made us richer, we have on average moved from conservative/farmer values to liberal/forager values. This value movement can make sense if cultural evolution used the social pressures farmers faced, such as conformity and religion, to induce humans, who evolved to find forager behaviors natural, to instead act like farmers. As we become rich, we don’t as strongly fear the threats behind these social pressures. This connection may result in part from disease; rich people are healthier, and healthier societies fear less.

The alternate theory that we have instead learned that rich forager values are more true predicts that values should have followed a random walk over time, and be mostly common across space. It also predicts the variance of value changes tracking the rate at which relevant information appears. But in fact industrial-era value changes have tracked the wealth of each society in much more steady and consistent fashion. And on this theory, why did foragers ever acquire farmer values?

[…]

In the scenario described in this book, many strange-to-forager behaviors are required, and median per-person (i.e. per-em) incomes return to near-subsistence levels. This suggests that the em era may reverse the recent forager-like trend toward more liberality; ems may have more farmer-like values.

The Age of Em, pp. 26–27

There’s a lot to unpack here, but maybe it’s better to burn the whole suitcase.

First of all, it’s not entirely clear that this is really a single axis of variation, that foragers and farmers differ from each other in the same way as liberals and conservatives. There’s some truth to that at least—both foragers and liberals tend to be more generous, both farmers and conservatives tend to enforce stricter gender norms. But there are also clear ways that liberal values radically deviate from forager values: Forager societies are extremely xenophobic, and typically very hostile to innovation, inequality, or any attempts at self-aggrandizement (a phenomenon called “fierce egalitarianism”). San Francisco epitomizes rich, liberal values, but it would be utterly alien and probably regarded as evil by anyone from the Yanomamo.

Second, there is absolutely no reason to predict any kind of random walk. That’s just nonsense. Would you predict that scientific knowledge is a random walk, with each new era’s knowledge just a random deviation from the last’s? Maybe next century we’ll return to geocentrism, or phrenology will be back in vogue? On the theory that liberal values (or at least some liberal values) are objectively correct, we would expect them to advance as knowledge does: improving over time, and improving faster in places that have better institutions for research, education, and free expression. And indeed, this is precisely the pattern we have observed. (Those places are also richer, but that isn’t terribly surprising either!)

Third, while poorer regions are indeed more conservative, poorer people within a region actually tend to be more liberal. Nigeria is poorer and more conservative than Norway, and Mississippi is poorer and more conservative than Massachusetts. But higher-income households in the United States are more likely to vote Republican. I think this is particularly true of people living under insecure affluence: We see the abundance of wealth around us, and don’t understand why we can’t learn to share it better. We’re tired of fighting over scraps while the billionaires claim more and more. Millennials and Zoomers absolutely epitomize insecure affluence, and we also absolutely epitomize liberalism. So, if indeed ems live a life of insecure affluence, we should expect them to be like Zoomers: “Trans liberation now!” and “Eat the rich!” (Or should I say, “Delete the rich!”)

And really, doesn’t that make more sense? Isn’t that the trend our society has been on, for at least the last century? We’ve been moving toward more and more acceptance of women and minorities, more and more deviation from norms, more and more concern for individual rights and autonomy, more and more resistance to authority and inequality.

The funny thing is, that world sounds a lot better than the one Hanson is predicting.

A world of left-wing ems would probably run things a lot better than Hanson imagines: Instead of copying the same hundred psychopaths over and over until we fill the planet, have no room for anything else, and all struggle to make enough money just to stay alive, we could moderate our population to a more sustainable level, preserve diversity and individuality, and work toward living in greater harmony with each other and the natural world. We could take this economic and technological abundance and share it and enjoy it, instead of killing ourselves and each other to make more of it for no apparent reason.

The one good argument Hanson makes here is expressed in a single sentence: “And on this theory, why did foragers ever acquire farmer values?” That actually is a good question; why did we give up on leisure and egalitarianism when we transitioned from foraging to agriculture?

I think scarcity probably is relevant here: As food became scarcer, maybe because of climate change, people were forced into an agricultural lifestyle just to have enough to eat. Early agricultural societies were also typically authoritarian and violent. Under those conditions, people couldn’t be so generous and open-minded; they were surrounded by threats and on the verge of starvation.

I guess if Hanson is right that the em world is also one of poverty and insecurity, we might go back to those sorts of values, born of desperation. But I don’t see any reason to think we’d give up all of our liberal values. I would predict that ems will still be feminist, for instance; in fact, Hanson himself admits that since VR avatars would let us change gender presentation at will, gender would almost certainly become more fluid in a world of ems. Far from valuing heterosexuality more highly (as conservatives do, a “farmer value” according to Hanson), I suspect that ems will have no further use for that construct, because reproduction will be done by manufacturing, not sex, and it’ll be so easy to swap your body into a different one that hardly anyone will even keep the same gender their whole life. They’ll think it’s quaint that we used to identify so strongly with our own animal sexual dimorphism.

But maybe it is true that the scarcity induced by a hyper-competitive em world would make people more selfish, less generous, less trusting, more obsessed with work. Then let’s not do that! We don’t have to build that world! This isn’t a foregone conclusion!

There are many other paths yet available to us.

Indeed, perhaps the simplest would be to just ban artificial intelligence, at least until we can get a better handle on what we’re doing—and perhaps until we can institute the kind of radical economic changes necessary to wrest control of the world away from the handful of psychopaths currently trying their best to run it into the ground.

I admit, it would kind of suck to not get any of the benefits of AI, like self-driving cars, safer airplanes, faster medical research, more efficient industry, and better video games. It would especially suck if we did go full-on Butlerian Jihad and ban anything more complicated than a pocket calculator. (Our lifestyle might have to go back to what it was in—gasp! The 1950s!)

But I don’t think it would suck nearly as much as the world Robin Hanson thinks is in store for us if we continue on our current path.

So I certainly hope he’s wrong about all this.

Fortunately, I think he probably is.

Let’s call it “copytheft”

Feb 11 JDN 2460353

I have written previously about how ridiculous it is that we refer to the unauthorized copying of media such as music and video games as “piracy” as though it were somehow equivalent to capturing ships on the high seas.

In that post a few years ago I suggested calling it simply “unauthorized copying”, but that clearly isn’t catching on, perhaps because it’s simply too much of a mouthful. So today I offer a compromise:

Let’s call it “copytheft”.

That takes no longer to say than “piracy” (and only slightly longer to write), and far more clearly states what’s actually going on. No ships have been seized on the high seas; there has been no murder, arson, or slavery.

Yes, it’s debatable whether copytheft really constitutes theft—and I would generally argue that it does not—but just from hearing that word, you would probably infer that the following process took place:

  1. I took a thing.
  2. I made a copy of that thing that I wasn’t supposed to.
  3. I put the original thing back where it was, unharmed.

The paradigmatic example of this theft-copy-replace sequence would be a key, of course: You take someone’s key, copy it, then put the key back where it was, so you now can unlock their locks but they are none the wiser.

With unauthorized copying of media, you’re not exactly doing steps 1 and 3; the copier often has the media completely legitimately before they make the copy, and it may not even have a clear physical location to be put back to (it must be physically stored somewhere, but particularly if it’s streamed from the cloud it hardly matters where).

But you’re definitely doing step 2, and that was the only part that had a permanent effect; so the nomenclature still seems to work well enough.

Copytheft also has a similar sound to copyleft, the use of alternative intellectual property mechanisms by authors to grant broader licensing than copyright ordinarily affords, and also to copyfraud, the crime of claiming exclusive copyright to content that is in fact public domain. Hopefully that common structure will help the term get some purchase.

Of course, I can hardly bring a word into widespread use on my own. Others like you have to not only read it, but like it enough that you’re willing to actually use it—and then we need a certain critical mass of people using it in order to make it actually catch on.

So, I’d like to take a moment to offer you some justification why it’s worth changing to this new word.

First, it is admittedly imperfect; by containing the word “theft”, it already feels like we’re conceding something to the defenders of copyright.

But by including the word “copy” in the term, we can draw attention to the most important aspect that distinguishes copytheft from, well, theft:

The original owner still has the thing.

That’s the part that they want us to forget, the part that the harsh word “piracy” encourages you to overlook. A ship that is captured by pirates is a ship that may never again sail for your own navy. A song that is “pirated”—copythefted—is one that not only the original owners, but also everyone who bought it, still have in exactly the same state they did before.

Thus it simply cannot be that copytheft takes money out of the hands of artists. At worst, it fails to give money to artists.

That could still be a bad thing: Artists need to pay bills too, and a world where nobody pays for any art is surely a world with a lot fewer artists—and the ones who remain would be far more miserable. But it’s clearly a different sort of thing than ordinary theft, as nothing has been lost.

Moreover, it’s not clear that in most cases copytheft even does fail to give money that would otherwise have been given. Maybe sometimes it does—a certain proportion of people who copytheft a given song, film, or video game might have been willing to pay the original price if the copythefted version had not been available. But typically I suspect that people who’d be willing to pay full price… do pay full price. Thus, the people who are copythefting the media wouldn’t have bought it at full price anyway.

They might have bought it at some lower price, in which case that is foregone payment; but it’s surely considerably less than the “losses” often reported by the film and music industries, which seem to be based on the assumption that everyone who copythefts would have otherwise paid full price. And in fact many people might have been unwilling to buy at any nonzero price, and were only willing to copytheft the media precisely because it cost them neither money nor much effort to do so.

And in fact if you think about it, what about people who would have been willing to pay more than the original price? Surely there were many of them as well, yet we don’t grant media corporations the right to that money. That is also money that they could have been given but weren’t—and we decided, as a society, that they didn’t deserve to have it. It’s not that it would be impossible to do so: We could give corporations the authority to price-discriminate on all of their media. (They probably couldn’t do it perfectly, but they could surely do it quite well.) But we made the policy choice to live in a world where media is sold by single-price monopolies rather than one where it is sold by price-discriminating monopolies.

The mere fact that someone might have been willing to pay you more money if the market were different does not entitle you to receive that money. It has not been stolen from you. Indeed, typically it’s more that you have not been allowed to exploit them. It’s usually the presence of competition that prevents corporations from receiving the absolute maximum profit they might potentially have received if they had full control over the market. Corporations making less profit than they otherwise would have is generally a sign of good economic policy—a sign that things are reasonably fair.

Why else is “copytheft” a good word to use?

Above all, we do not allow our terms to be defined by our opponents.

We don’t allow them to insinuate that our technically violating draconian regulations designed to maximize the profits of Disney and Viacom somehow constitutes a terrible crime against other human beings.

“Piracy is not a victimless crime”, they will say.

Well, actual piracy isn’t. But copytheft? Yeah, uh, it kinda is.

Maybe not quite as victimless as, say, using marijuana or psilocybin, which no one even has any rational reason to prefer you not do. But still, you’re not really making anyone else worse off—that sounds pretty victimless.

Of course, it does give us less reason to wear tricorn hats and eyepatches.

But guess what? You can still do that anyway!