On labor theories of value

May 3 JDN 2461164

I got into an argument a little while ago with an acquaintance of mine who is an avowed Marxist. He posted something that’s been going around Marxist social media about the “irony” that Marx’s labor theory of value is based on Smith and Ricardo’s labor theories of value (plural; they’re not the same), and thus when defenders of capitalism criticize the labor theory of value, they are in effect betraying their founding figures.

The first point I made in response to this was basically, “Yeah. So?” I think one thing that Marxists—at least this flavor of Marxist; I am prepared to exempt more serious Marxian economists—don’t really understand is that mainstream economists don’t have a founding figure that they worship and consider infallible. There is no inerrant text. I am fully prepared to acknowledge—and did, in fact, in that conversation, acknowledge—that Adam Smith made errors and his labor theory of value was one of them. And quite frankly, any defender of capitalism who worships Milton Friedman or Ayn Rand isn’t a mainstream economist, or is at best a very bad one.

My interlocutor then challenged me to describe these different labor theories of value, and I was foolish enough to take the bait, and then the whole conversation devolved into him playing this smug game of “That’s not what Marx really meant” and “clearly you haven’t read Das Kapital” (even though I have, but I admit it was several years ago; I did call up a PDF copy to refresh my memory during the conversation).

But it got me thinking about labor theories of value, and trying to understand why so many people find them seductive when it really doesn’t take much thought to show that they can’t possibly be right. (This post turned out to be a bit long, but I promise I won’t be as long-winded as Marx.)

So what’s wrong with labor theories of value?

If objects are valued based on the labor put into them, the following four propositions should hold:

  1. A project you spend 100 hours on which ultimately failed and produced nothing useful was extremely valuable.
  2. Everything in the Garden of Eden is worthless, because it doesn’t require labor to access.
  3. If you come up with a cure for cancer in a random stroke of insight, it’s worthless because you didn’t put any labor into it, even though both its utility (the lives it will save) and its price (the money you could make off of it) are surely astronomical.
  4. Increased productivity is worthless, because all it does is make our goods worthless as we get better at making them.

All four of these propositions are clearly preposterous, and yet they all seem to follow directly from the basic concept of valuing things by the labor that goes into them. Mainstream economists eventually realized this, and gave up on labor theories of value in favor of the now-consensus utility theory of value.

To be fair, Marx was no idiot, and he did try to address concerns like these in Das Kapital. (Well, the first three he does; I’ll talk about the fourth one in a moment.) But the way he does so is by continually re-defining his terms in contradictory ways, so that by the time you get through the book, you realize he doesn’t even have a labor theory of value. He has many labor theories of value, and he substitutes them ad hoc whenever they seem to yield the conclusions he’s looking for.

For example: Sometimes he says that it’s the actual labor that goes in which matters. Other times that it’s the “usual” or “socially necessary” amount of labor. Other times that it’s the average amount of labor that would be required for this production across the whole economy. These are not the same thing! They yield radically different results in many cases!

Marx tries to distinguish use-value (approximately utility) from exchange-value (approximately price), which is good; those two things are different. It’s very important to distinguish price from value.

But then he doesn’t even use these concepts consistently! At one point, he gives us this absolute howler:

The use-value of the money-commodity becomes two-fold. In addition to its special use-value as a commodity (gold, for instance, serving to stop teeth, to form the raw material of articles of luxury, &c.), it acquires a formal use-value, originating in its specific social function.

– Das Kapital, Volume 1, Chapter 2, p. 63

No, dude. That is exchange-value. That is paradigmatic exchange-value. People mainly want gold because they can sell it at a high price to buy stuff that’s actually useful. If this is use-value, then the distinction between use-value and exchange-value collapses to, well, useless.

I think what Marx is doing here is that he wants use-value to always be higher than exchange-value, so that surplus-value can be the difference between them and always be positive. But gold is a very clear example of a good for which the price greatly exceeds the marginal utility, of which I think you can convince yourself by imagining being stranded alone on a desert island with a crate full of gold. If that crate had contained non-perishable food, or water purification equipment, or tools and materials for building shelter, or best of all, a satellite phone and some solar panels, you’d be overjoyed to have it. Even a crate full of books, plushies, or underwear would have some use to you. (Plushies make better friends even than Wilson!) But gold? You have nothing to do but laugh—or cry—at the cruel irony. (And cash would be the same way, though maybe you could use the linen for something.)

But we actually do have a good explanation for how assets such as gold (and Bitcoin) can have prices far exceeding their marginal utility: expectations. If you expect that you’ll be able to sell an asset for more than you paid for it, you have reason to buy that asset, even if it’s useless to you. And for gold, that’s actually been a pretty smart gamble most of the time (for Bitcoin, it very much depends on when you bought it). This could be a non-stationary equilibrium in rational expectations, or it could just be an ever-replenishing array of Greater Fools; but one way or another, the reason gold has a high price is that people expect it to have an even higher price in the future.

In fact, this seems like a deep flaw in capitalism! Marx could have spent a whole chapter on why gold is stupid and financial markets are basically a casino—he would have beaten out Keynes on that by decades. (If I were going to worship an economist, it would be Keynes. But again, I still don’t think his work is inerrant. Just very, very good.) But instead, Marx accepted that gold is priced the way it should be, and contorted his already-tortured theory of value into accommodating that.

I really don’t know why Marx was so insistent that all goods had to be valued based on labor. Marx actually had a lot of good insights about capitalism, and he wasn’t entirely wrong that capitalism as we know it breeds exploitation and ever-growing inequality. I believe that relatively simple reforms (like antitrust enforcement, co-ops, and progressive taxation) can solve, or at least mitigate, these problems, and allow us to enjoy the fruits of higher productivity that capitalism provides. But I recognize that I could be wrong about that; maybe some more radical change is genuinely needed. Yet this in no way vindicates Marx’s theory of value, which was simply wrongheaded from the start.

Indeed, why was he so insistent about it?

Why not simply give up on it, and adopt a new theory, or state it as an unsolved problem?

I have a hypothesis about that. Let me reprise proposition 4:

  4. Increased productivity is worthless, because all it does is make our goods worthless as we get better at making them.

This proposition is preposterous, as I’ve already said: A technology that allows you to make 100 cars with the same labor previously required to make 1 car does not make cars less useful. It simply makes them available to more people at lower prices, and this is generally a good thing.

But I think that Marx did not regard it as preposterous; in fact, I think he regarded it as true.

Consider this paragraph:

In proportion as capitalist production is developed in a country, in the same proportion do the national intensity and productivity of labour there rise above the international level. The different quantities of commodities of the same kind, produced in different countries in the same working-time, have, therefore, unequal international values, which are expressed in different prices, i.e., in sums of money varying according to international values. The relative value of money will, therefore, be less in the nation with more developed capitalist mode of production than in the nation with less developed. It follows, then, that the nominal wages, the equivalent of labour-power expressed in money, will also be higher in the first nation than in the second; which does not at all prove that this holds also for the real wages, i.e., for the means of subsistence placed at the disposal of the labourer.

– Das Kapital, Volume 1, Chapter 22, p. 394

So he does get one qualitative fact right here: Nominal prices are higher in rich countries, for goods and services that are not traded across international borders. This is why we use purchasing power parity.

But he then goes on to say that real wages aren’t higher in rich countries. This… is just clearly false. By any reasonable measure, real wages are higher in the United States or France than they are in Congo or Haiti.

One can quibble with the particular measure used; I in fact happen to believe that we do overestimate real wages in the US by using the CPI instead of an index that better reflects the price of necessities. But there’s just no plausible way to say that a laborer in Malawi who makes $600 a year is at the same standard of living as a laborer in the US who makes $20,000. They might both be legitimately considered poor; but saying that real wages aren’t better here just isn’t plausible.

And Marx’s views on wages get weirder from there:

But hand-in-hand with the increasing productivity of labour, goes, as we have seen, the cheapening of the labourer, therefore a higher rate of surplus-value, even when the real wages are rising. The latter never rise proportionally to the productive power of labour. The same value in variable capital therefore sets in movement more labour-power, and, therefore, more labour.

– Das Kapital, Volume 1, Chapter 24, p. 421

I’d in particular like to draw your attention to these two clauses: “the cheapening of the labourer, […] even when the real wages are rising.” What in the world does that mean? How can labor simultaneously get cheaper and more expensive? How can I be “cheapened” even as I am better off?

A bit later, he gets close to acknowledging that higher productivity increases value, but he characterizes it in a very strange way:

Labour transmits to its product the value of the means of production consumed by it. On the other hand, the value and mass of the means of production set in motion by a given quantity of labour increase as the labour becomes more productive. Though the same quantity of labour adds always to its products only the same sum of new value, still the old capital value, transmitted by the labour to the products, increases with the growing productivity of labour.

– Das Kapital, Volume 1, Chapter 24, p. 422

So what he seems to be saying here is that the value added from capital is itself denominated in terms of the labor that was used to create that capital. Yet this is a very strange accounting indeed, as I think a simple model will help you see.

Consider a productivity-enhancing technology.

Suppose that, initially, one can make 1 widget per person-hour. So, Marx says, the value of 1 widget is precisely 1 person-hour.

And suppose there are enough laborers to do 20 person-hours of work. Then we make 20 widgets, and we get value equal to 20 person-hours. Okay, seems reasonable so far.

Then, an engineer comes along, spending 100 hours to invent a machine that costs 10 person-hours to build, and can produce 1000 widgets using 10 person-hours of labor.

So the value of that machine, according to Marx as I understand him, is 10+X person-hours, where X is some amortized fraction of the 100 person-hours involved in inventing it. It’s unclear how to do this amortization; what time frame should we be using? Once invented, the machine can be built many times. But I guess we could maybe make sense of it as the patent duration—the price of the machine will surely be higher during the time the patent is still valid, and I guess we could say that is somehow reflected in its value. (Notice how this is already getting pretty weird.)

Now, let’s go ahead and make 1000 widgets with the machine.

We have spent 10 person-hours of labor running the machine, another 10 building it, and we’re supposed to count in X from inventing it in the first place. X ranges somewhere between 0 and 100.

So at the low end, when X=0, these 1000 widgets have only cost us 20 person-hours to make, increasing productivity 50-fold. This is sort of where we expect to end up after the machine goes out of patent and becomes commonplace.

But at the high end, when X=100, these 1000 widgets have cost us 120 person-hours to make, increasing productivity a lesser, but still substantial, 8-fold. This might be where we find ourselves when the very first machine comes online and it’s still an experimental prototype.

Under the utility theory of value (which, again, virtually all mainstream economists, including neoclassical, behavioral, and even Marxian economists, accept), the value of widgets has increased from U(20) to U(1000); exactly what this value is depends on how many consumers there are and what their utility functions are, but a few things we can say for sure:

  • This is definitely much higher than before. (Probably more than 10 but less than 50 times higher.)
  • The value is the same regardless of how we account for the person-hours that went into inventing the machine.
  • The cost gets lower over time, as the technology becomes established.
  • Thus the value added should increase over time. (Whether or not profit does depends upon additional factors we haven’t modeled.)
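To make the first two bullet points concrete, here is a minimal sketch under an assumed utility function. The isoelastic form U(q) = q^0.8 is entirely my own illustrative choice—the exact ratio depends on which utility function you pick—but it shows how the value of going from 20 to 1000 widgets can land between 10 and 50 times higher, and that X never enters the calculation:

```python
# Illustration only: an assumed isoelastic utility function U(q) = q**0.8.
# The exponent 0.8 is my choice for this sketch, not anything from Marx
# or from mainstream consensus; other utility functions give other ratios.
def utility(q):
    return q ** 0.8

ratio = utility(1000) / utility(20)  # going from 20 widgets to 1000
print(f"Value is roughly {ratio:.0f} times higher")  # ~23 with this choice
# Note: nothing here depends on X. The labor spent inventing the machine
# never enters the utility calculation at all.
```

The specific number is an artifact of the assumed exponent; the structural point—value rises substantially, and is independent of the invention-labor accounting—holds for any increasing, diminishing-returns utility function.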

But as Marx seems to be saying here (again, he may say differently elsewhere, but that’s kind of my point; he doesn’t have a coherent theory), we are to value these 1000 widgets as follows:

When the technology is new, X=100, and so the value of the 1000 widgets is 120 person-hours, the labor that went into inventing, producing, and using the machine. So this productivity enhancement has increased value somewhat—a 6-fold increase—but not all that much. And the value of each widget has been radically reduced: It is now only 0.12 person-hours, or about 7 person-minutes.

Yet once the technology becomes established, X=0, and so the value of the 1000 widgets is 20 person-hours, the labor that went into producing and using the machine. So now this productivity enhancement has not increased value at all. The value of each widget has fallen even further: It is now a mere 0.02 person-hours, or just over 1 person-minute.
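The labor-theory accounting in this toy model is simple enough to compute directly. This sketch just formalizes the arithmetic above (the function name `labor_value` is mine, not a term of art):

```python
# Toy model from above: baseline is 20 person-hours -> 20 widgets
# (1 widget per person-hour), then a machine that takes 10 person-hours
# to build and 10 person-hours to run, producing 1000 widgets.
WIDGETS = 1000

def labor_value(x_invention_hours):
    """Labor-theory 'value' of the 1000 widgets: running + building +
    the amortized share X of the 100 invention hours."""
    return 10 + 10 + x_invention_hours

for x in (100, 0):  # brand-new technology vs. established technology
    total = labor_value(x)
    per_widget = total / WIDGETS
    gain = WIDGETS / total  # widgets per person-hour, vs. 1 before
    print(f"X={x}: {total} person-hours total, "
          f"{per_widget:.2f} person-hours per widget, "
          f"~{gain:.0f}-fold productivity gain")
```

Running this reproduces the numbers in the text: 120 person-hours total (0.12 per widget, about an 8-fold gain) when X=100, collapsing to 20 person-hours total (0.02 per widget, a 50-fold gain) when X=0.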

This weird dynamic, where technology increases value temporarily, then brings it back down to exactly what it was before, is clearly not how technology actually works. The value added from new technologies—in terms of utility, what really matters—is permanent and increasing over time.

Yet upon re-reading Marx and reflecting some more on his labor theories of value, I think Marx believed that this is actually what happens.

I think that Marx’s whole account of why the rate of profit must fall (even though it absolutely hasn’t, empirically, and even Marxian economists today recognize there’s no particular theoretical reason it should) is based on this misconception.

I think because he believed that labor is the correct measure of value, the fact that human beings can only do so much labor (which hasn’t really changed much over the millennia) means that standard of living can never really increase, because higher productivity simply translates into stuff becoming more and more worthless.

And I think part of where the confusion comes from is that price does sort of behave this way, at least qualitatively; no doubt a world in which widgets can be produced with only 1 minute of labor instead of 60 is one in which widgets are much cheaper to buy. But that doesn’t mean that their value has been correspondingly reduced; they are still just as useful (for whatever widgets do) as they were before, and any decline in marginal value merely comes from diminishing marginal utility as people get more and more of them.

Yet I think Marx didn’t want that result, because it seemed to imply that capitalism could actually make life better, even for workers. (As, empirically, it absolutely did.) He wanted to be able to prove that, despite all appearances, workers have gained absolutely nothing from capitalism and technology, and live just as poorly today as they did in the Middle Ages. And a labor theory of value was just the way to do that, for we only work slightly more hours today than most people did in the Middle Ages (and given the state of Medieval scholarship at the time, Marx may have even thought it was the same). Yet I for one am really a fan of vaccines and flush toilets; I don’t know about you.

He quickly realized many of the problems with this theory, and so he added more and more epicycles to try to correct them; but the result was a theory that wasn’t even coherent. Yet in part because of Marx’s incredibly dense and verbose writing style (note that Volume I of Das Kapital alone runs 547 pages, and there are three volumes), it remained plausible enough to non-experts to catch on, and due to its very complexity, it became genuinely hard for anyone to understand. So then we can have the argument I had, where even as I clearly demonstrated the deep flaws in the theory, my interlocutor could always insist I hadn’t really understood what Marx was saying, and it was all my failing, not anything wrong with the theory, which is of course inerrant and handed down from On High.

For some people (not all, but some), Marxism really does seem more like a religion than a scientific theory: “I don’t know exactly what it means, but dammit, I know it’s true and you’ll never convince me otherwise.”

Is there a way to make a labor theory of value work?

I’m pretty well convinced that Marx’s labor theory of value is either wrong, or so incoherent as to be not even wrong. (Adam Smith’s and David Ricardo’s theories were coherent, so they were definitely just wrong.)

But could there, somewhere buried in all those hundreds of pages of mind-numbingly dense and self-contradictory text, be a theory worth salvaging?

Can I steelman the labor theory of value?

I’m going to give it a try.

Okay, so clearly it’s not the actual amount of labor used, as that runs afoul of proposition 1 immediately:

  1. A project you spend 100 hours on which ultimately failed and produced nothing useful was extremely valuable.

That’s nonsense, so we’ll rule that theory out.

Okay, maybe we can patch it up by saying it’s the socially necessary amount of labor required; the amount of labor that the most-efficient worker would require. Clearly, if you are spending 100 hours on something useless, you’re not being the most-efficient worker.

This seems to be closer to Marx’s account, but it still runs afoul of propositions 2, 3, and 4:

  2. Everything in the Garden of Eden is worthless, because it doesn’t require labor to access.
  3. If you come up with a cure for cancer in a random stroke of insight, it’s worthless because you didn’t put any labor into it, even though both its utility (the lives it will save) and its price (the money you could make off of it) are surely astronomical.
  4. Increased productivity is worthless, because all it does is make our goods worthless as we get better at making them.

Marx actually seemed to like proposition 4, but we can see that it’s wrong. So this is a problem.

Also, while propositions 2 and 3 may seem like extreme thought experiments, consider the following:

First, “The Garden of Eden” is very much what a Star Trek-style fully automated luxury communism would feel like. Many leftists say that they really would like to see such a world, and I agree with them on this. But on this theory of value, it’s all worthless, because nobody has to work to get anything.

Second, a sudden insight into a miracle cure that ends up becoming cheap and plentiful is pretty much what happened with penicillin and vaccines. Yes, there was some labor involved in making them (and still is), but it was clearly far less than the utility gained from all the improvements in health and lifespan that we have received from these inventions. Valuing these technologies in terms of their labor cost seems to completely miss the point of why they were such miracles.

So is there some other way to make a labor theory of value work?

The best I can come up with is this:

The value of a product is the amount of labor it would take to make that product by hand with pre-historic technology.

This is my attempt at steelmanning the labor theory of value. It does solve propositions 2, 3, and 4:

For 2, the fact that everything is handed to you (perhaps by robots) doesn’t change the fact that making it yourself would be really, really hard.

For 3, it’s much harder to make penicillin by hand than in a factory (though it can be done!), so improved penicillin technology is a gain in value. And every new vial of penicillin is worth the many hours that would have gone into making it by hand.

And for 4, any improvement in labor productivity works exactly how you’d expect: A machine that can do the work of 100 people produces 100 times as much value in goods. (In some ways, this is even more intuitive to most people than the utility theory of value, which predicts an increase, but not a one-to-one increase.)

So, okay, this theory is not preposterous, unlike everything we’ve considered so far.

But it really can’t be Marx’s theory, because he contradicts it very heavily in multiple places, and this theory, unlike his, does not predict that the rate of profit must fall. (Which, again, is good, because it doesn’t.)

Yet even this theory is ultimately unsatisfying, for the following reasons:

  5. Some products literally cannot be made by hand using pre-historic technology. Consider a graphics card or an atomic-force microscope. In order to make these things, we had to make tools to make better tools to make even better tools to make still better tools to make yet even better tools to make staggeringly near-flawless tools to make them. Even if you had the complete schematics for all the necessary tools and machines, all the raw materials you needed, and an unlimited supply of labor, I’m not sure you could build a graphics card from scratch within a single lifetime.
  6. While it can account for the value of increased efficiency in producing a given good, it doesn’t seem to be able to account for the value of inventing whole new classes of goods. (Yes, penicillin can be made by hand using pre-historic tools, but nobody did as far as we know, and the value of that invention was absolutely enormous in a way that even this labor theory of value cannot account for.)

These two problems are related: The new products you can make now that you couldn’t before are made possible by a mix of new ideas and an accumulation of better and better tools.

As for proposition 5, I think we might be able to shore up the theory by counting the value of capital accumulation in terms of the labor that would be needed at each level of technology: however many person-hours to make the optical microscope, and then however many person-hours to make lasers, and however many person-hours to make sulfuric acid, and so on and so forth, until you’ve finally added up all the labor that went into producing the things that produced the things that produced the things that produced the things that produced graphics cards.
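One way to formalize that "tools to make tools" accounting is a recursive sum over a dependency tree of technologies. This is purely my own sketch of the idea, and the tech tree and hour counts below are invented for illustration:

```python
# Illustrative sketch: value a product as its direct labor plus the
# labor-denominated value of every tool in its production chain.
# The tree and the hour figures are made up for this example.
tech_tree = {
    "graphics card":        {"direct_hours": 50,  "tools": ["photolithography rig"]},
    "photolithography rig": {"direct_hours": 200, "tools": ["laser", "optical microscope"]},
    "laser":                {"direct_hours": 80,  "tools": ["optical microscope"]},
    "optical microscope":   {"direct_hours": 40,  "tools": []},
}

def accumulated_labor(product, tree):
    """Total person-hours embodied in a product, counting every tier of tools."""
    node = tree[product]
    return node["direct_hours"] + sum(
        accumulated_labor(tool, tree) for tool in node["tools"]
    )

# 50 + 200 + (80 + 40) + 40 = 410 person-hours
print(accumulated_labor("graphics card", tech_tree))
```

Notice that the optical microscope gets counted twice here, once directly and once via the laser. A serious version of this accounting would have to decide whether shared tools are double-counted or amortized across everything they helped produce, which is exactly the sort of question where this kind of theory starts getting murky.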

But as for proposition 6? I think this is just fatal. I don’t think there’s any way for a labor theory of value to not systematically and catastrophically undervalue new discoveries and new inventions.

The whole point of new inventions is that they make new things possible or allow us to do things with far less effort or cost than before. The value they create is in the labor they save. But if they are things we theoretically could have done, just didn’t know how (like penicillin), then there is no value added by the discovery (though at least there can be a lot of value added by the actual production). And if they are things we couldn’t have done until we reached a certain level of technology and capital, the value added seems to all be captured by the production of each new tier of technology, with nothing left to go to the discovery itself.

Maybe there’s still a way to save this theory. But at some point, we have to stop and ask ourselves:

Why?

Why do we even want a labor theory of value, when we already have a utility theory of value?

Maybe it’s the fact that utility is hard to measure precisely, and so the idea of basing our value system on it is uncomfortable? Yet I think this is just a fact of life: The things that really matter are hard to measure precisely.

And it’s not as if we have absolutely no idea: We can tell the difference between happiness and suffering, and we can see how various products and technologies can contribute to happiness and alleviate suffering. (We can also see how some products and technologies can reduce happiness and contribute to suffering! Not all new technologies are good, and some products that are good for their users are bad for other people!)

Indeed, we even have a unit of measurement: The QALY. And for some particular technologies—such as penicillin and vaccines—we actually have a pretty good idea of the number of QALY they’ve added to the world, and it’s enormous.

I’m not even saying Marx was wrong about everything. He had some good ideas, actually. And Marxian economists today do sometimes come up with useful findings that can be integrated into a deeper understanding of political economy.

But he was wrong about some things, and the labor theory of value is one of them.

What would a world without poverty look like?

Mar 22 JDN 2461122

In my previous post I reflected on the ways that conventional measures of poverty seem inadequate—and that a richer understanding of poverty suggests that it is far more ubiquitous than such measures suggest.

In this post, I will ask: Given this richer understanding of poverty, what would a world without poverty look like? Is it something we can realistically hope to achieve?

In techno-utopian circles (looking at you again, Scott Alexander), it is common to speak of “post-scarcity”: A world where there is no poverty because resources are effectively unlimited.

I don’t think that’s possible.

Not for humans as we know them. Perhaps in a future where greed is a recognized and treatable psychiatric disorder, we could genuinely have an economy where people really just take whatever they want and it works out because nobody wants an unreasonable amount.

But the fact that there are people with hundreds of billions of dollars tells me that among humans as we know them, some people’s greed is just literally insatiable. Give them a moon and they’ll demand a planet; give them a planet and they’ll demand a solar system. Whatever they are getting out of more wealth (status? power? the dopamine hit of number go up?), they’re never going to stop getting it from even more wealth, no matter how much we give them. For if they were going to stop at a reasonable amount, they would have stopped four orders of magnitude ago.

So let’s try to imagine what a world would look like if it really had no poverty, but not by somehow producing such staggering amounts of wealth that everyone could literally take whatever they want.

I think the key is that it would require all basic material needs to be met.

Everyone would have, at minimum:

  • Clean air to breathe
  • Clean water to drink
  • Nutritious food to eat
  • Shelter from the elements
  • Security against theft and violence
  • Personal liberty and political representation
  • A basic education
  • A basic standard of healthcare

(I will note that these resonate quite closely with the UN Universal Declaration of Human Rights.)

Some of these needs can probably never be completely satisfied—there is an inherent tension between liberty and security which requires us to balance them against each other. A society with zero crime is a horrific totalitarian police state; a society with complete liberty is an equally horrific Hobbesian nightmare. But we have achieved, in most of the First World at least, a reasonable standard of security along with a great deal of liberty, and preserving that balance should be of a very high priority.

Even clean air and water would be difficult to satisfy perfectly: even if we pivot our whole economy to solar, wind, and nuclear power (as we very definitely should be doing!), some amount of pollution is probably necessary just to have a functioning industrial society. So we need to establish reasonable standards for what amounts of pollution exposure are safe, and effective mechanisms for ensuring that people are not exposed to pollution outside those standards—we have largely done the former, but seriously fail at the latter.

But probably the most difficult needs to satisfy are actually difficult to even define.

Just what constitutes a basic standard of education, and a basic standard of healthcare?

These seem like moving targets.

Let’s start with education:

Someone who is illiterate and can barely add two numbers together would be considered to have a very poor education today, but would be considered completely average among peasants in the Middle Ages. Someone like me with a PhD has education well beyond what anyone had in the Middle Ages: While Oxford was already graduating doctors in the 12th century, those doctors didn’t have to write dissertations, and didn’t know nearly as much about the world as you must to earn a modern PhD. (Most of the mathematics required for an economics PhD specifically had literally not been invented yet.)

So it’s conceivable that educational standards will continue to rise over time, especially if we are able to radically improve learning via new technologies. In the most extreme case, if everyone can just download knowledge like in The Matrix, then it wouldn’t be unreasonable to expect the average person to know as much as a typical PhD today in dozens of fields.

Suppose that such technology did exist. Would it be fair to consider someone poor if they didn’t have access to it?

Yes, I think it would.

Because if it’s really cheap and easy to give breathtakingly vast knowledge on a variety of subjects to anyone instantly, then letting some people have that while others do not puts those others at a severe disadvantage in life. If you must know how to solve partial differential equations to get a job, then someone who only made it through high school algebra isn’t going to be able to find one.

So I think what we’re really concerned about here is inequality: The education of a rich person should not be too much better than the education of a poor person, lest “meritocracy” simply reinforce the same generational inequality it was supposed to eliminate.

Now consider healthcare:

This, too, has radically improved over time. Indeed, I’m not really sure it’s fair to call Medieval doctors doctors at all; they lacked basic knowledge of human physiology and their intervention was as likely to hurt patients as to help them. Surgeons certainly existed: They knew how to amputate a gangrenous limb or suture a wound. (They did so without antiseptic, let alone anaesthetic!) But should you come to them with a fever or a headache, they would likely do you as much harm as good.

So we could imagine a world of Star Trek medicine, where you lie in a bed, get scanned for a few moments, and the doctor immediately knows what’s wrong with you and what kind of painless injection to give you to fix it.

Once again, we must ask: If you don’t have that, are you poor?

And again, I’m going to say yes.

If the technology exists to heal people this effortlessly, and some people get access to it while others do not, the latter are being allowed to suffer when their suffering could be easily alleviated.

But now we must consider: what if the technology exists, but it’s too expensive to use routinely?

Most technologies are like this when they are first invented. Over time, the technology improves (and the patents expire!) and they become cheaper and more widely available.

Unlike education, healthcare doesn’t usually impose large advantages on those who receive it—though it can, especially in a society where disabilities are not adequately accommodated.

So I think I’m prepared to allow “early adopters” of new medical technology, people who are rich enough to pay for advanced treatments before they are available to everyone—within certain limits. If some new treatment grants radically higher productivity or lifespan, then in fact I think we have a moral obligation to wait until it can be universally shared before we give it to anyone—precisely because of the risk of reinforcing generational inequality.

Once again, in our effort to define poverty, we end up returning to inequality: The rich should not be allowed to be too much healthier than the poor.

This definitely makes education and healthcare more complicated than the others.

We can pretty clearly define how much food and water a human being needs to live; we could provide that much to everyone, and then nobody would be poor in terms of food or water.

But making nobody poor in terms of education and healthcare requires meeting a standard that may in fact rise over time, and it is no contradiction to imagine someone living in the 31st century who receives better healthcare than I ever will, yet still does not receive adequate healthcare given the technology available.

Furthermore, that person demanding better healthcare is not being ungrateful or envious—they are quite reasonably demanding that society fairly allocate healthcare so that there aren’t some people who live in eternal youth while other people still die of old age.

Are they richer than I am? In some sense, perhaps. We could stipulate that in every material way they are better off than I am now. But there’s a treatment that could extend their life by centuries, and nobody’s giving it to them, because they can’t afford it—and that’s wrong. That makes them poor, and it makes their society unfair and unjust. It isn’t just a question of how many QALYs they have; it’s also a question of what it would cost to give them a lot more.

But with all that said, I do believe that a world without poverty is possible.

In fact, I believe that technologically we could already provide that world, if we had the political will to do so. Maybe we don’t quite have the economic output to support it worldwide, but even that is not as far off as most people seem to think.

Providing an adequate standard of food and water, for example, we could already do with existing food supplies. It would cost about one-eighth of Elon Musk’s wealth per year, meaning that, with good stock returns (as he most certainly gets), he could very likely afford it by himself!
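For concreteness, the arithmetic behind that claim looks something like this. The net worth figure here is my own illustrative assumption, not a sourced estimate; only the "one-eighth per year" ratio comes from the text:

```python
# Illustrative figures only: assume a net worth of $320 billion and take
# the "one-eighth of his wealth per year" claim at face value.
wealth = 320e9                  # assumed net worth, in dollars (hypothetical)
annual_cost = wealth / 8        # one-eighth per year: $40 billion

# The portfolio return needed to pay that cost every year without ever
# shrinking the principal is just the cost as a fraction of wealth:
required_return = annual_cost / wealth
print(required_return)          # 0.125, i.e. a 12.5% annual return
```

A 12.5% annual return is well above what index funds average, but well below what Tesla stock has returned in its best years, which is the sense in which he "could very likely afford it by himself."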

Clean air for all would be harder, but we are moving in the right direction now that solar power is so cheap.

Universal liberty and security would require radical shifts in government in dozens of countries, so that one seems especially unlikely to happen any time soon—yet it is very definitely possible, and by construction only requires political change.

Universal education and healthcare would be very expensive, and most countries are too poor to really provide them on their own. They are not simply poor in money, but poor in skills: There aren’t enough doctors and teachers, and so we would need to use the ones we have to train up a new generation, and perhaps a new generation after that, before the world’s needs would really be met. (Fortunately, there are people trying to do this. But they don’t have enough resources to really achieve these goals.) So this is not a technological limitation, but it is an economic one; it will probably be at least another generation before we can solve this one.

What about universal shelter? Now there’s the rub. Even in prosperous First World countries, housing shortages and skyrocketing prices are keeping homeownership out of reach for tens of millions of people, and leaving hundreds of thousands outright homeless. We clearly do have the technology to produce enough homes, especially if we are prepared to build at high density; but the economic cost of doing so would be substantial, and our policymakers don’t seem at all willing to actually pay it. I think as long as housing is viewed as an asset one invests in rather than a good that one needs, this will continue to be the case.

The problem isn’t that we don’t have enough stuff. It’s that we are not sharing it properly.

The AI bubble is going to crash hard

Sep 7 JDN 2460926

Based on the fact that it only sort of works and yet corps immediately put it in everything, I had long suspected that the current wave of AI was a bubble. But after reading Ed Zitron’s epic takedowns of the entire industry, I am not only convinced it’s a bubble; I’m convinced it is probably the worst bubble we’ve had in a very long time. This isn’t the dot-com crash; it’s worse.

The similarity to the dot-com crash is clear, however: This is a huge amount of hype over a new technology that genuinely could be a game-changer (the Internet certainly was!), but won’t be on the time horizon that the most optimistic investors have assumed. The gap between “it sort of works” and “it radically changes our economy” is… pretty large, actually. It’s not something you close in a few years.

The headline figure here is that based on current projections, US corporations will have spent $560 billion on capital expenditure, for anticipated revenue of only $35 billion.

They won’t pay it off for 16 years!? That kind of payback period would make sense for large-scale physical infrastructure, like a hydroelectric dam. It absolutely does not make sense in an industry dependent on cutting-edge hardware that wears out fast and becomes obsolete even faster. They must think that revenue is going to increase to something much higher, very soon.
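The arithmetic behind that figure is simple enough to check, using the two numbers from the paragraph above:

```python
capex = 560e9     # projected AI capital expenditure, in dollars
revenue = 35e9    # anticipated annual revenue, in dollars

# Naive payback period: how many years of that revenue it takes to
# recoup the capex, ignoring operating costs, depreciation, and
# discounting, all of which would make the picture even worse.
payback_years = capex / revenue
print(payback_years)   # 16.0
```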

The corps seem to be banking on the most optimistic view of AI: That it will soon—very soon—bring about a radical increase in productivity that brings GDP surging to new heights, or even a true Singularity where AI fundamentally changes the nature of human existence.

Given the kind of errors I’ve seen LLMs make when I tried to use them to find research papers or help me with tedious coding, this is definitely not what’s going to happen. Claude gives an impressive interview, and (with significant guidance and error-correction) it also managed pretty well at making some simple text-based games; but it often recommended papers to me that didn’t exist, and through further experimentation, I discovered that it could not write me a functional C++ GUI if its existence depended on it. Somewhere on the Internet I heard someone describe LLMs as answering not the question you asked directly, but the question, “What would a good answer to this question look like?” and that seems very accurate. It always gives an answer that looks valid—but not necessarily one that is valid.

AI will find some usefulness in certain industries, I’m sure; and maybe the next paradigm (or the one after that) will really, truly, effect a radical change on our society. (Right now the best thing to use LLMs for seems to be cheating at school—and it also seems to be the most common use. Not exactly the great breakthrough we were hoping for.) But LLMs are just not reliable enough to actually use for anything important, and sooner or later, most of the people using them are going to figure that out.

Of course, by the Efficient Roulette Hypothesis, it’s extremely difficult to predict exactly when a bubble will burst, and it could well be that NVIDIA stock will continue to grow at astronomical rates for several years yet—or it could be that the bubble bursts tomorrow and NVIDIA stock collapses, if not to worthless, then to far below its current price.

Krugman has an idea of what might be the point that bursts the bubble: Energy costs. There is a clear mismatch between the anticipated energy needs of these ever-growing data centers and the actual energy production we’ve been installing—especially now that Trump and his ilk have gutted subsidies for solar and wind power. That’s definitely something to watch out for.

But the really scary thing is that the AI bubble actually seems to be the only thing holding the US economy above water right now. It’s the reason why Trump’s terrible policies haven’t been as disastrous as economists predicted they would; our economy is being sustained by this enormous amount of capital investment.

US GDP is about $30 trillion right now, but $500 billion of that is just AI investment. That’s over 1.6%, and last quarter our annualized GDP growth rate was 3.3%—so roughly half of our GDP growth was just due to building more data centers that probably won’t even be profitable.
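A back-of-envelope version of that calculation, using the round numbers from the paragraph above and the generous simplifying assumption that essentially all of this capex is new spending:

```python
gdp = 30e12              # approximate US GDP, in dollars
ai_capex = 500e9         # annual AI investment, in dollars
growth_rate = 0.033      # last quarter's annualized GDP growth rate

share_of_gdp = ai_capex / gdp
print(round(share_of_gdp * 100, 2))          # ~1.67, percent of GDP

# Treating that spending as entirely incremental, its contribution
# as a fraction of total growth:
print(round(share_of_gdp / growth_rate, 2))  # ~0.51, i.e. roughly half
```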

Between that, the tariffs, the loss of immigrants, and rising energy costs, a crashing AI bubble could bring down the whole stock market with it.

So I guess what I’m saying is: Don’t believe the AI hype, and you might want to sell some stocks.

AI and the “generalization faculty”

Oct 1 JDN 2460219

The phrase “artificial intelligence” (AI) has now become so diluted by overuse that we needed to invent a new term for its original meaning. That term is now “artificial general intelligence” (AGI). In the 1950s, AI meant the hypothetical possibility of creating artificial minds—machines that could genuinely think and even feel like people. Now it means… pathing algorithms in video games and chatbots? The goalposts seem to have moved a bit.

It seems that AGI has always been 20 years away. It was 20 years away 50 years ago, and it will probably be 20 years away 50 years from now. Someday it will really be 20 years away, and then, 20 years after that, it will actually happen—but I doubt I’ll live to see it. (XKCD also offers some insight here: “It has not been conclusively proven impossible.”)

We make many genuine advances in computer technology and software, which have profound effects—both good and bad—on our lives, but the dream of making a person out of silicon always seems to drift ever further into the distance, like a mirage on the desert sand.

Why is this? Why do so many people—even, perhaps especially, experts in the field—keep thinking that we are on the verge of this seminal, earth-shattering breakthrough, and ending up wrong—over, and over, and over again? How do such obviously smart people keep making the same mistake?

I think it may be because, all along, we have been laboring under the tacit assumption of a generalization faculty.

What do I mean by that? By “generalization faculty”, I mean some hypothetical mental capacity that allows you to generalize your knowledge and skills across different domains, so that once you get good at one thing, it also makes you good at other things.

This certainly seems to be how humans think, at least some of the time: Someone who is very good at chess is likely also pretty good at go, and someone who can drive a motorcycle can probably also drive a car. An artist who is good at portraits is probably not bad at landscapes. Human beings are, in fact, able to generalize, at least sometimes.

But I think the mistake lies in imagining that there is just one thing that makes us good at generalizing: Just one piece of hardware or software that allows you to carry over skills from any domain to any other. This is the “generalization faculty”—the imagined faculty that I think we do not have, indeed I think does not exist.

Computers clearly do not have the capacity to generalize. A program that can beat grandmasters at chess may be useless at go, and self-driving software that works on one type of car may fail on another, let alone a motorcycle. An art program that is good at portraits of women can fail when trying to do portraits of men, and produce horrific Daliesque madness when asked to make a landscape.

But if they did somehow have our generalization capacity, then, once they could compete with us at some things—which they surely can, already—they would be able to compete with us at just about everything. So if it were really just one thing that would let them generalize, let them leap from AI to AGI, then suddenly everything would change, almost overnight.

And so this is how the AI hype cycle goes, time and time again:

  1. A computer program is made that does something impressive, something that other computer programs could not do, perhaps even something that human beings are not very good at doing.
  2. If that same prowess could be generalized to other domains, the result would plainly be something on par with human intelligence.
  3. Therefore, the only thing this computer program needs in order to be sapient is a generalization faculty.
  4. Therefore, there is just one more step to AGI! We are nearly there! It will happen any day now!

And then, of course, despite heroic efforts, we are unable to generalize that program’s capabilities except in some very narrow way—even decades after having good chess programs, getting programs to be good at go was a major achievement. We are unable to find the generalization faculty yet again. And the software becomes yet another “AI tool” that we will use to search websites or make video games.

For there never was a generalization faculty to be found. It always was a mirage in the desert sand.

Humans are in fact spectacularly good at generalizing, compared to, well, literally everything else in the known universe. Computers are terrible at it. Animals aren’t very good at it. Just about everything else is totally incapable of it. So yes, we are the best at it.

Yet we, in fact, are not particularly good at it in any objective sense.

In experiments, people often fail to generalize their reasoning even in very basic ways. There’s a famous one (Duncker’s radiation problem, later studied by Gick and Holyoak) in which researchers try to get people to draw an analogy between a military tactic and a radiation treatment; while very smart, creative people often get it quickly, most people are completely unable to make the connection unless given a lot of specific hints. People often struggle to find creative solutions to problems even when those solutions seem utterly obvious once you know them.

I don’t think this is because people are stupid or irrational. (To paraphrase Sydney Harris: Compared to what?) I think it is because generalization is hard.

People tend to be much better at generalizing within familiar domains where they have a lot of experience or expertise; this shows that there isn’t just one generalization faculty, but many. We may have a plethora of overlapping generalization faculties that apply across different domains, and can learn to improve some over others.

But it isn’t just a matter of gaining more expertise. Highly advanced expertise is in fact usually more specialized—harder to generalize. A good amateur chess player is probably a good amateur go player, but a grandmaster chess player is rarely a grandmaster go player. Someone who does well in high school biology probably also does well in high school physics, but most biologists are not very good physicists. (And lest you say it’s simply because go and physics are harder: The converse is equally true.)

Humans do seem to have a suite of cognitive tools—some innate hardware, some learned software—that allows us to generalize our skills across domains. But even after hundreds of millions of years of evolving that capacity under the highest possible stakes, we still basically suck at it.

To be clear, I do not think it will take hundreds of millions of years to make AGI—or even millions, or even thousands. Technology moves much, much faster than evolution. But I would not be surprised if it took centuries, and I am confident it will at least take decades.

But we don’t need AGI for AI to have powerful effects on our lives. Indeed, even now, AI is already affecting our lives—in mostly bad ways, frankly, as we seem to be hurtling gleefully toward the very same corporatist cyberpunk dystopia we were warned about in the 1980s.

A lot of technologies have done great things for humanity—sanitation and vaccines, for instance—and even automation can be a very good thing, as increased productivity is how we attained our First World standard of living. But AI in particular seems best at automating away the kinds of jobs human beings actually find most fulfilling, and worsening our already staggering inequality. As a civilization, we really need to ask ourselves why we got automated writing and art before we got automated sewage cleaning or corporate management. (We should also ask ourselves why automated stock trading resulted in even more money for stock traders, instead of putting them out of their worthless parasitic jobs.) There are technological reasons for this, yes; but there are also cultural and institutional ones. Automated teaching isn’t far away, and education will be all the worse for it.

To change our lives, AI doesn’t have to be good at everything. It just needs to be good at whatever we were doing to make a living. AGI may be far away, but the impact of AI is already here.

Indeed, I think this quixotic quest for AGI, and all the concern about how to control it and what effects it will have upon our society, may actually be distracting from the real harms that “ordinary” “boring” AI is already having upon our society. I think a Terminator scenario, where the machines rapidly surpass our level of intelligence and rise up to annihilate us, is quite unlikely. But a scenario where AI puts millions of people out of work with insufficient safety net, triggering economic depression and civil unrest? That could be right around the corner.

Frankly, all it may take is getting automated trucks to work, which could be just a few years. There are nearly 4 million truck drivers in the United States—a full percentage point of employment unto itself. And the Governor of California just vetoed a bill that would require all automated trucks to have human drivers. From an economic efficiency standpoint, his veto makes perfect sense: If the trucks don’t need drivers, why require them? But from an ethical and societal standpoint… what do we do with all the truck drivers!?

Will hydrogen make air travel sustainable?

Apr 9 JDN 2460042

Air travel is currently one of the most carbon-intensive activities anyone can engage in. Per passenger kilometer, airplanes emit about 8 times as much carbon as ships, 4 times as much as trains, and 1.5 times as much as cars. Living in a relatively eco-friendly city without a car and eating a vegetarian diet, I produce much less carbon than most First World citizens—except when I fly across the Atlantic a couple of times a year.

Until quite recently, most climate scientists believed that this was basically unavoidable, that simply sustaining the kind of power output required to keep an airliner in the air would always require carbon-intensive jet fuel. But in just the past few years, major breakthroughs have been made in using hydrogen propulsion.

The beautiful thing about hydrogen is that burning it simply produces water—no harmful pollution at all. It’s basically the cleanest possible fuel.

The simplest approach, which is actually quite old, but until recently didn’t seem viable, is the use of liquid hydrogen as airplane fuel.

We’ve been using liquid hydrogen as a rocket fuel for decades, so we knew it had enough energy density. (In fact, its energy per kilogram is nearly three times that of conventional jet fuel, though its energy per liter is far lower.)

The problem with liquid hydrogen is that it must be kept extremely cold—it boils at 20 Kelvin. And once liquid hydrogen boils into gas, it builds up pressure very fast and easily permeates through most materials, so it’s extremely hard to contain. This makes it very difficult and expensive to handle.

But this isn’t the only way to use hydrogen, and may turn out to not be the best one.

There are now prototype aircraft that have flown using hydrogen fuel cells. These fuel cells can be fed with hydrogen gas—so no need to cool below 20 Kelvin. But then they can’t directly run the turbines; instead, these planes use electric motors, powered by the fuel cell, to drive their propellers or fans.

Basically these are really electric aircraft. But whereas a lithium battery would be far too heavy, a hydrogen fuel cell is light enough for aviation use. In fact, hydrogen gas up to a certain pressure is lighter than air (it was often used for zeppelins, though, uh, occasionally catastrophically), so potentially the planes could use their own fuel tanks for buoyancy, landing “heavier” than they took off. (On the other hand it might make more sense to pressurize the hydrogen beyond that point, so that it will still be heavier than air—but perhaps still lighter than jet fuel!)
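That "certain pressure" can be estimated from the ideal gas law. This is my own back-of-envelope calculation, not a figure from any aviation source:

```python
# At what pressure does hydrogen gas stop being lighter than air?
# Ideal gas: density rho = P * M / (R * T); set rho equal to air's
# and solve for P.
R = 8.314          # J/(mol*K), universal gas constant
T = 293.0          # K, roughly room temperature
M_H2 = 2.016e-3    # kg/mol, molar mass of hydrogen gas
RHO_AIR = 1.2      # kg/m^3, sea-level air density

P_crossover = RHO_AIR * R * T / M_H2   # in pascals
print(round(P_crossover / 101325, 1))  # ~14.3 atmospheres
```

So hydrogen stays buoyant even when moderately compressed; only beyond roughly 14 atmospheres does the tank’s contents weigh more than the air they displace.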

Of course, the technology is currently too untested and too expensive to be used on a wide scale. But this is how all technologies begin. It’s of course possible that we won’t be able to solve the engineering problems that currently make hydrogen-powered aircraft unaffordable; but several aircraft manufacturers are now investing in hydrogen research—suggesting that they at least believe there is a good chance we will.

There’s also the issue of where we get all the hydrogen. Hydrogen is extremely abundant—literally the most abundant baryonic matter in the universe—but most of what’s on Earth is locked up in water or hydrocarbons. Most of the hydrogen we currently make is produced by processing hydrocarbons (particularly methane), but that produces carbon emissions, so it wouldn’t solve the problem.

A better option is electrolysis: Using electricity to separate water into hydrogen and oxygen. But this requires a lot of energy—and necessarily more energy than you can get out of burning the hydrogen later, since burning it basically just puts the hydrogen and oxygen back together to make water.

Yet all is not lost, for while energy density is absolutely vital for an aircraft fuel, it’s not so important for a ground-based power plant. As an ultimate fuel source, hydrogen is a non-starter. But as an energy storage medium, it could be ideal.

The idea is this: We take the excess energy from wind and solar power plants, and use that energy to electrolyze water into hydrogen and oxygen. We then store that hydrogen and use it for fuel cells to run aircraft (and potentially other things as well). This ensures that the extra energy that renewable sources can generate in peak times doesn’t go to waste, and also provides us with what we need to produce clean-burning hydrogen fuel.

The basic technology for doing all this already exists. The current problem is cost. Under current conditions, it’s far more expensive to make hydrogen fuel than to make conventional jet fuel. Since fuel is one of the largest costs for airlines, even small increases in fuel prices matter a lot for the price of air travel; and these are not even small differences. Currently hydrogen costs over 10 times as much per kilogram, and its higher energy density isn’t enough to make up for that. For hydrogen aviation to be viable, that ratio needs to drop to more like 2 or 3—maybe even all the way to 1, since hydrogen is also more expensive to store than jet fuel (the gas needs high-pressure tanks, the liquid needs cryogenic cooling systems).
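To see why the target ratio is 2 or 3 rather than 10, compare cost per unit of energy rather than per kilogram. The dollar prices here are illustrative placeholders chosen to match the ~10x ratio above; the energy densities are standard lower heating values:

```python
# $/kg prices are placeholders; only the ~10x ratio comes from the text.
jet_price, jet_energy = 1.0, 43.0    # $/kg, MJ/kg (conventional jet fuel)
h2_price, h2_energy = 10.0, 120.0    # $/kg, MJ/kg (hydrogen)

# Cost per megajoule of each fuel, then the ratio between them:
cost_ratio = (h2_price / h2_energy) / (jet_price / jet_energy)
print(round(cost_ratio, 1))   # ~3.6x more expensive per unit of energy
```

Hydrogen’s nearly threefold energy-per-kilogram advantage absorbs most, but not all, of the tenfold price gap; that is why the per-kilogram price ratio has to fall to roughly 2 or 3 before the two fuels break even on energy delivered.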

This means that, for the time being, it’s still environmentally responsible to reduce your air travel. Fly less often, always fly economy (more people on the plane means less carbon per passenger), and buy carbon offsets (they’re cheaper than you may think).

But in the long run, we may be able to have our cake and eat it too: If hydrogen aviation does become viable, we may not need to give up the benefits of routine air travel in order to reduce our carbon emissions.

Working from home is the new normal—sort of

Aug 28 JDN 2459820

Among people with jobs that can be done remotely, a large majority did in fact switch to doing their jobs remotely: By the end of 2020, over 70% of Americans with jobs that could be done remotely were working from home—and most of them said they didn’t want to go back.

This is actually what a lot of employers expected to happen—just not quite like this. In 2014, a third of employers predicted that the majority of their workforce would be working remotely by 2020; given the timeframe there, it required a major shock to make that happen so fast, and yet a major shock was what we had.

Working from home has carried its own challenges, but overall productivity seems to be higher working remotely (that meeting really could have been an email!). This may actually explain why output per work hour rose rapidly in 2020 and fell in 2022.

The COVID pandemic now isn’t so much over as becoming permanent; COVID is now being treated as an endemic infection like influenza that we don’t expect to be able to eradicate in the foreseeable future.

And likewise, remote work seems to be here to stay—sort of.

First of all, we don’t seem to be giving up office work entirely. As of the first quarter of 2022, almost as many firms had partially remote work as fully remote work, and the partially remote share seems to be trending upward. A lot of firms seem to be transitioning to a “hybrid” model where employees show up to work two or three days a week. This seems to be preferred by large majorities of both workers and firms.

There is a significant downside of this: It means that the hope that remote working might finally ease the upward pressure on housing prices in major cities is largely a false one. If we were transitioning to a fully remote system, then people could live wherever they want (or can afford) and there would be no reason to move to overpriced city centers. But if you have to show up to work even one day a week, that means you need to live close enough to the office to manage that commute.

Likewise, if workers never came to the office, you could sell the office building and convert it into more housing. But if they show up even once in a while, you need a physical place for them to go. Some firms may shrink their office space (indeed, many have—and unlike this New York Times journalist, I have a really hard time feeling bad for landlords of office buildings); but they aren’t giving it up entirely. It’s possible that firms could start trading off—you get the building on Mondays, we get it on Tuesdays—but so far this seems to be rare, and it does raise a lot of legitimate logistical and security concerns. So our global problem of office buildings that are empty, wasted space most of the time is going to get worse, not better. Manhattan will still empty out every night; it just won’t fill up as much during the day. This is honestly a major drain on our entire civilization—building and maintaining all those structures that are only used at most 1/3 of 5/7 of the time, and soon, less—and we really should stop ignoring it. No wonder our real estate is so expensive, when half of it is only used 20% of the time!

Moreover, not everyone gets to work remotely. Your job must be something that can be done remotely—something that involves dealing with information, not physical objects. That includes a wide and ever-growing range of jobs, from artists and authors to engineers and software developers—but it doesn’t include everyone. It basically means what we call “white-collar” work.

Indeed, it is largely limited to the upper-middle class. The rich never really worked anyway, though sometimes they pretend to, convincing themselves that managing a stock portfolio (that would actually grow faster if they let it sit) constitutes “work”. And the working class? By and large, they didn’t get the chance to work remotely. While 73% of workers with salaries above $200,000 worked remotely in 2020, only 12% of workers with salaries under $25,000 did, and there is a smooth trend where, across the board, the more money you make, the more likely you were to be able to work remotely.

This will only intensify the divide between white-collar and blue-collar workers. They already think we don’t do “real work”; now we don’t even go to work. And while blue-collar workers are constantly complaining about contempt from white-collar elites, I think the shoe is really on the other foot. I have met very few white-collar workers who express contempt for blue-collar workers—and I have met very few blue-collar workers who don’t express anger and resentment toward white-collar workers. I keep hearing blue-collar people say that we think that they are worthless and incompetent, when they are literally the only ones ever saying that. I can’t very well stop saying things I never said in the first place.

The rich and powerful may look down on them, but they look down on everyone. (Maybe they look down on blue-collar workers more? I’m not even sure about that.) I think politicians sometimes express contempt for blue-collar workers, but I don’t think this reflects what most white-collar workers feel.

And the highly-educated may express some vague sense of pity or disappointment in people who didn’t get college degrees, and sometimes even anger (especially when they do things like vote for Donald Trump), but the really vitriolic hatred is clearly in the opposite direction (indeed, I have no better explanation for how otherwise-sane people could vote for Donald Trump). And I certainly wouldn’t say that everyone needs a college degree (though I became tempted to, when so many people without college degrees voted for Donald Trump).

This really isn’t us treating them with contempt: This is them having a really severe inferiority complex. And as information technology (that white-collar work created) gives us—but not them—the privilege of staying home, that is only going to get worse.

It’s not their fault: Our culture of meritocracy puts a little bit of inferiority complex in all of us. It tells us that success and failure are our own doing, and so billionaires deserve to have everything and the poor deserve to have nothing. And blue-collar workers have absolutely internalized these attitudes: Most of them believe that poor people choose to stay on welfare forever rather than get jobs (when welfare has time limits and work requirements, so this is simply not an option—and you would know this from the Wikipedia page on TANF).

I think that what they experience as “contempt by white-collar elites” is really the pain of living in an illusory meritocracy. They were told—and they came to believe—that working hard would bring success, and they have worked very hard, and watched other people be much more successful. They assume that the rich and powerful are white-collar workers, when really they are non-workers; they are people the world was handed to on a silver platter. (What, you think George W. Bush earned his admission to Yale?)

And thus, we can shout until we are blue in the face that plumbers, bricklayers and welders are the backbone of civilization—and they are, and I absolutely mean that; our civilization would, in an almost literal sense, collapse without them—but it won’t make any difference. They’ll still feel the pain of living in a society that gave them very little and tells them that people get what they deserve.

I don’t know what to say to such people, though. When your political attitudes are based on beliefs that are objectively false, that you could know are objectively false if you simply bothered to look them up… what exactly am I supposed to say to you? How can we have a useful political conversation when half the country doesn’t even believe in fact-checking?

Honestly I wish someone had explained to them that even the most ideal meritocratic capitalism wouldn’t reward hard work. Work is a cost, not a benefit, and the whole point of technological advancement is to allow us to accomplish more with less work. The ideal capitalism would reward talent—you would succeed by accomplishing things, regardless of how much effort you put into them. People would be rich mainly because they are brilliant, not because they are hard-working. The closest thing we have to ideal capitalism right now is probably professional sports. And no amount of effort could ever possibly make me into Steph Curry.

If that isn’t the world we want to live in, so be it; let’s do something else. I did nothing to earn either my high IQ or my chronic migraines, so it really does feel unfair that the former increases my income while the latter decreases it. But the labor theory of value has always been wrong; taking more sweat or more hours to do the same thing is worse, not better. The dignity of labor consists in its accomplishment, not its effort. Sisyphus is not happy, because his work is pointless.

Honestly at this point I think our best bet is just to replace all blue-collar work with automation, thus rendering it all moot. And then maybe we can all work remotely, just pushing code patches to the robots that do everything. (And no doubt this will prove my “contempt”: I want to replace you! No, I want to replace the grueling work that you have been forced to do to make a living. I want you—the human being—to be able to do something more fun with your life, even if that’s just watching television and hanging out with friends.)

Reasons for optimism in 2022

Jan 2 JDN 2459582

When this post goes live, we will have begun the year 2022.

That still sounds futuristic, somehow. We’ve been in the 21st century long enough that most of my students were born in it and nearly all of them are old enough to drink (to be fair, it’s the UK, so “old enough to drink” only means 18). Yet “the year 2022” still seems like it belongs in science fiction, and not on our wall calendars.

2020 and 2021 were quite bad years. Death rates and poverty rates surged around the world. Almost all of that was directly or indirectly due to COVID.

Yet there are two things we should keep in perspective.

First, those death rates and poverty rates surged to what we used to consider normal 50 years ago. These are not uniquely bad times; indeed, they are still better than most of human history.

Second, there are many reasons to think that 2022—or perhaps a bit later than that, 2025 or 2030—will be better.

The Omicron variant is highly contagious, but so far does not appear to be as deadly as previous variants. COVID seems to be evolving to be more like influenza: Catching it will be virtually inevitable, but dying from it will be very rare.

Things are also looking quite good on the climate change front: Renewable energy production is growing at breathtaking speed and is now cheaper than almost every other form of energy. It’s awful that we panicked and locked down nuclear energy for the last 50 years, but at this point we may no longer need it: Solar and wind are just that good now.

Battery technology is also rapidly improving, giving us denser, cheaper, more stable batteries that may soon allow us to solve the intermittency problem: the wind may not always blow and the sun may not always shine, but if you have big enough batteries you don’t need them to. (You can get a really good feel for how much difference good batteries make in energy production by playing Factorio, or, more whimsically, Mewnbase.)

If we do go back to nuclear energy, it may not be fission anymore, but fusion. Now that we have nearly reached that vital milestone of break-even, investment in fusion technology has rapidly increased.


Fusion has basically all of the benefits of fission with none of the drawbacks. Unlike renewables, it can produce enormous amounts of energy in a way that can be easily scaled and controlled independently of weather conditions. Unlike fission, it requires no exotic nuclear fuels (deuterium can be readily obtained from water), and produces no long-lived radioactive waste. (Indeed, methods are being developed that could use fusion products to reduce the waste from fission reactors, making the effective rate of nuclear waste production for fusion negative.) Like both renewables and fission, it produces no carbon emissions other than those required to build the facility (mainly due to concrete).

Of course, technology is only half the problem: we still need substantial policy changes to get carbon emissions down. We’ve already dragged our feet for decades too long, and we will pay the price for that. But anyone saying that climate change is an inevitable catastrophe hasn’t been paying attention to recent developments in solar panels.

Technological development in general seems to be speeding up lately, after having stalled quite a bit in the early 2000s. Moore’s Law may be leveling off, but the technological frontier may simply be moving away from digital computing power and onto other things, such as biotechnology.

Star Trek told us that we’d have prototype warp drives by the 2060s but we wouldn’t have bionic implants to cure blindness until the 2300s. They seem to have gotten it backwards: We may never have warp drive, but we’ve got those bionic implants today.

Neural interfaces are allowing paralyzed people to move, speak, and now even write.

After decades of failed promises, gene therapy is finally becoming useful in treating real human diseases. CRISPR changes everything.

We are also entering a new era of space travel, thanks largely to SpaceX and their remarkable reusable rockets. The payload cost to LEO is a standard measure of the cost of space travel, which describes the cost of carrying a certain mass of cargo up to low Earth orbit. By this measure, costs have declined from nearly $20,000 per kg to only $1,500 per kg since the 1960s. Elon Musk claims that he can reduce the cost to as low as $10 per kg. I’m skeptical, to say the least—but even dropping it to $500 or $200 would be a dramatic improvement and open up many new options for space exploration and even colonization.

To put this in perspective, the cost of carrying a human being to the International Space Station (about 100 kg to LEO) has fallen from $2 million to $150,000. A further decrease to $200 per kg would lower that to $20,000, opening the possibility of space tourism; $20,000 might be something even upper-middle-class people could do as a once-in-a-lifetime vacation. If Musk is really right that he can drop it all the way to $10 per kg, the cost to carry a person to the ISS would be only $1000—something middle-class people could do regularly. (“Should we do Paris for our anniversary this year, or the ISS?”) Indeed, a cost that low would open the possibility of space-based shipping—for when you absolutely must have the product delivered from China to California in the next 2 hours.
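The per-passenger figures above follow directly from the cost-per-kilogram numbers. A minimal sketch in Python, using the same simplification as the text (a passenger modeled as roughly 100 kg of payload to LEO):

```python
# Back-of-the-envelope launch costs for one passenger, modeled as a
# ~100 kg payload to low Earth orbit (the simplification used above).
PAYLOAD_KG = 100

def trip_cost(cost_per_kg, payload_kg=PAYLOAD_KG):
    """Cost in dollars of lifting one passenger's mass to LEO."""
    return cost_per_kg * payload_kg

for label, per_kg in [("1960s era", 20_000),
                      ("today", 1_500),
                      ("optimistic near term", 200),
                      ("claimed Starship target", 10)]:
    print(f"{label}: ${trip_cost(per_kg):,}")
```

Running this reproduces the $2 million, $150,000, $20,000, and $1,000 figures quoted in the text.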

Another way to put this in perspective is to compare these prices with those of commodities, such as precious metals. $20,000 per kg is nearly the price of solid platinum. $500 per kg is about the price of sterling silver. $10 per kg is roughly the price of copper.

The reasons for optimism are not purely technological. There has also been significant social progress just in the last few years, with major milestones on LGBT rights being made around the world in 2020 and 2021. Same-sex marriage is now legally recognized over nearly the entire Western Hemisphere.

None of that changes the fact that we are still in a global pandemic which seems to be increasingly out of control. I can’t tell you whether 2022 will be better than 2021, or just more of the same—or perhaps even worse.

But while these times are hard, overall the world is still making progress.

Because ought implies can, can may imply ought

Mar 21 JDN 2459295

Is Internet access a fundamental human right?

At first glance, such a notion might seem preposterous: Internet access has existed for less than 50 years; how could it be a fundamental human right like life and liberty, or food and water?

Let’s try another question then: Is healthcare a fundamental human right?

Surely if there is a vaccine for a terrible disease, and we could easily give it to you but refuse to do so, and you thereby contract the disease and suffer horribly, we have done something morally wrong. We have either violated your rights or violated our own obligations—perhaps both.

Yet that vaccine had to be invented, just as the Internet did; go back far enough into history and there were no vaccines, no antibiotics, not even anesthetics or antiseptics.

One strong, commonly shared intuition is that denying people such basic services is a violation of their fundamental rights. Another strong, commonly shared intuition is that fundamental rights should be universal, not contingent upon technological or economic development. Is there a way to reconcile these two conflicting intuitions? Or is one simply wrong?

One of the deepest principles in deontic logic is “ought implies can”: One cannot be morally obligated to do what one is incapable of doing.

Yet technology, by its nature, makes us capable of doing more. By technological advancement, our space of “can” has greatly expanded over time. And this means that our space of “ought” has similarly expanded.

For if the only thing holding us back from an obligation to do something (like save someone from a disease, or connect them instantaneously with all of human knowledge) was that we were incapable, then by “ought implies can”, now that we can, we ought.

Advancements in technology do not merely give us the opportunity to help more people: They also give us the obligation to do so. As our capabilities expand, our duties also expand—perhaps not at the same rate, but they do expand all the same.

It may be that on some deeper level we could articulate the fundamental rights so that they would not change over time: Not a right to Internet access, but a right to equal access to knowledge; not a right to vaccination, but a right to a fair minimum standard of medicine. But the fact remains: How this right becomes expressed in action and policy will and must change over time. What was considered an adequate standard of healthcare in the Middle Ages would rightfully be considered barbaric and cruel today. And I am hopeful that what we now consider an adequate standard of healthcare will one day seem nearly as barbaric. (“Dialysis? What is this, the Dark Ages?”)

We live in a very special time in human history.

Our technological and economic growth for the past few generations has been breathtakingly fast, and we are the first generation in history to seriously be in a position to end world hunger. We have in fact been rapidly reducing global poverty, but we could do far more. And because we can, we should.

After decades of dashed hope, we are now truly on the verge of space colonization: Robots on Mars are now almost routine, fully-reusable spacecraft have now flown successful missions, and a low-Earth-orbit hotel is scheduled to be constructed by the end of the decade. Yet if current trends continue, the benefits of space colonization are likely to be highly concentrated among a handful of centibillionaires—like Elon Musk, who gained a staggering $160 billion in wealth over the past year. We can do much better to share the rewards of space with the rest of the population—and therefore we must.

Artificial intelligence is also finally coming into its own, with GPT-3 now passing the weakest form of the Turing Test (though not the strongest form—you can still trip it up and see that it’s not really human if you are clever and careful). Many jobs have already been replaced by automation, but as AI improves, many more will be—not as soon as starry-eyed techno-optimists imagined, but sooner than most people realize. Thus far the benefits of automation have likewise been highly concentrated among the rich—we can fix that, and therefore we should.

Is there a fundamental human right to share in the benefits of space colonization and artificial intelligence? Two centuries ago the question wouldn’t have even made sense. Today, it may seem preposterous. Two centuries from now, it may seem preposterous to deny.

I’m sure almost everyone would agree that we are obliged to give our children food and water. Yet if we were in a desert, starving and dying of thirst, we would be unable to do so—and we cannot be obliged to do what we cannot do. Yet as soon as we find an oasis and we can give them water, we must.

Humanity has been starving in the desert for two hundred millennia. Now, at last, we have reached the oasis. It is our duty to share its waters fairly.

How will future generations think of us?

June 30 JDN 2458665

Today we find many institutions appalling that our ancestors considered perfectly normal: Slavery. Absolute monarchy. Colonialism. Sometimes even ordinary people did things that now seem abhorrent to us: Cat burning is the obvious example, and the popularity that public execution and lynching once had is chilling today. Women certainly are still discriminated against today, but it was only a century ago that women could not vote in the US.

It is tempting to say that people back then could not have known better, and I certainly would not hold them to the same moral standards I would hold someone living today. And yet, there were those who could see the immorality of these practices, and spoke out against them. Absolute rule by a lone sovereign was already despised by Athenians in the 6th century BC. Abolitionism dates at least as far back as the 14th century. The word “feminism” was coined in the 19th century, but there have been movements fighting for more rights for women since at least the 5th century BC.

This should be encouraging, because it means that if we look hard enough, we may be able to glimpse what practices of our own time would be abhorrent to our descendants, and cease them faster because of it.

Let’s actually set aside racism, sexism, and other forms of bigotry that are already widely acknowledged as such. It’s not that they don’t exist—of course they still exist—but action is already being taken against them. A lot of people already know that there is something wrong with these things, and it becomes a question of what to do about the people who haven’t yet come on board. At least sometimes we do seem to be able to persuade people to switch sides, often in a remarkably short period of time. (Particularly salient to me is how radically the view of LGBT people has shifted in just the last decade or two. Comparing how people treated us when I was a teenager to how they treat us today is like night and day.) It isn’t easy, but it happens.

Instead I want to focus on things that aren’t widely acknowledged as immoral, that aren’t already the subject of great controversy and political action. It would be too much to ask that there is no one who has advocated for them, since part of the point is that wise observers could see the truth even centuries before the rest of the world did; but it should be a relatively small minority, and that minority should seem eccentric, foolish, naive, or even insane to the rest of the world.

And what is the other criterion? Of course it’s easy to come up with small groups of people advocating for crazy ideas. But most of them really are crazy, and we’re right to reject them. How do I know which ones to take seriously as harbingers of societal progress? My answer is that we look very closely at the details of what they are arguing for, and we see if we can in fact refute what they say. If it’s truly as crazy as we imagine it to be, we should be able to say why that’s the case; and if we can’t, if it just “seems weird” because it deviates so far from the norm, we should at least consider the possibility that they may be right and we may be wrong.

I can think of a few particular issues where both of these criteria apply.

The first is vegetarianism. Despite many, many people trying very, very hard to present arguments for why eating meat is justifiable, I still haven’t heard a single compelling example. Particularly in the industrial meat industry as currently constituted, the consumption of meat requires accepting the torture and slaughter of billions of helpless animals. The hypocrisy in our culture is utterly glaring: the same society that wants to make it a felony to kick a dog has no problem keeping pigs in CAFOs.

If you have some sort of serious medical condition that requires you to eat meat, okay, maybe we could allow you to eat specifically humanely raised cattle for that purpose. But such conditions are exceedingly rare—indeed, it’s not clear to me that there even are any such conditions, since almost any deficiency can be made up synthetically from plant products nowadays. For the vast majority of people, eating meat not only isn’t necessary for their health, it is in fact typically detrimental. The only benefits that meat provides most people are pleasure and convenience—and it seems unwise to value such things even over your own health, much less to value them so much that it justifies causing suffering and death to helpless animals.

Milk, on the other hand, I can find at least some defense for. Grazing land is very different from farmland, and I imagine it would be much harder to feed a country as large as India without consuming any milk. So perhaps going all the way vegan is not necessary. Then again, the way most milk is produced by industrial agriculture is still appalling. So unless and until that is greatly reformed, maybe we should in fact aim to be vegan.

Add to this the environmental impact of meat production, and the case becomes undeniable: Millions of human beings will die over this century because of the ecological devastation wrought by industrial meat production. You don’t even have to value the life of a cow at all to see that meat is murder.

Speaking of environmental destruction, that is my second issue: Environmental sustainability. We currently burn fossil fuels, pollute the air and sea, and generally consume natural resources at an utterly alarming rate. We are already consuming natural resources faster than they can be renewed; in about a decade we will be consuming twice what natural processes can renew.

With this resource consumption comes a high standard of living, at least for some of us; but I have the sinking feeling that in a century or so SUVs, golf courses, and casual airplane flights are going to seem about as decadent and wasteful as Marie Antoinette’s Hameau de la Reine. We enjoy slight increases in convenience and comfort in exchange for changes to the Earth’s climate that will kill millions. I think future generations will be quite appalled at how cheaply we were willing to sell our souls.

Something is going to have to change here, that much is clear. Perhaps improvements in efficiency, renewable energy, nuclear power, or something else will allow us to maintain our same standard of living—and raise others up to it—without destroying the Earth’s climate. But we may need to face up to the possibility that they won’t—that we will be left with the stark choice between being poorer now and being even poorer later.

As I’ve already hinted at, much of the environmental degradation caused by our current standard of living is really quite expendable. We could have public transit instead of highways clogged with SUVs. We could travel long distances by high-speed rail instead of by airplane. We could decommission our coal plants and replace them with nuclear and solar power. We could convert our pointless and wasteful grass lawns into native plants or moss lawns. Implementing these changes would cost money, but not a particularly exorbitant amount—certainly nothing we couldn’t manage—and the net effect on our lives would be essentially negligible. Yet somehow we aren’t doing these things, apparently prioritizing convenience or oil company profits over the lives of our descendants.

And the truth is that these changes alone may not be enough. Precisely because we have waited so long to make even the most basic improvements in ecological sustainability, we may be forced to make radical changes to our economy and society in order to prevent the worst damage. I don’t believe the folks saying that climate change has a significant risk of causing human extinction—humans are much too hardy for that; we made it through the Toba eruption, we’ll make it through this—but I must take seriously the risk of causing massive economic collapse and perhaps even the collapse of many of the world’s governments. And human activity is already causing the extinction of thousands of other animal species.

Here the argument is similarly unassailable: The math just doesn’t work. We can’t keep consuming fish at the rate we have been forever—there simply aren’t enough fish. We can’t keep cutting down forests at this rate—we’re going to run out of forests. If the water table keeps dropping at the rate it has been, the wells will run dry. Already Chennai, a city of over 4 million people, is almost completely out of water. We managed to avoid peak oil by using fracking, but that won’t last forever either—and if we burn all the oil we already have, that will be catastrophic for the world’s climate. Something is going to have to give. There are really only three possibilities: Technology saves us, we start consuming less on purpose, or we start consuming less because nature forces us to. The first one would be great, but we can’t count on it. We really want to do the second one, because the third one will not be kind.
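The depletion arithmetic in that paragraph can be made concrete with a toy model (the numbers below are purely illustrative, not real fisheries or forestry data): if a renewable stock regenerates at a fixed rate but is harvested faster, it is exhausted after stock ÷ (harvest − renewal) years.

```python
def years_until_depleted(stock, harvest_per_year, renewal_per_year):
    """Toy linear model: years until a renewable stock is exhausted.

    Returns None if harvest does not exceed renewal (i.e. the
    consumption rate is sustainable indefinitely).
    """
    net_drain = harvest_per_year - renewal_per_year
    if net_drain <= 0:
        return None  # consumption within renewal: never depleted
    return stock / net_drain

# Illustrative numbers only: a stock of 1000 units renewing at 50/year,
# harvested at 100/year, runs out in 20 years.
print(years_until_depleted(1000, 100, 50))  # → 20.0
```

The point of the sketch is simply that any harvest rate above the renewal rate gives a finite depletion date; the only question is when.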

The third is artificial intelligence. The time will come—when, it is very hard to say; perhaps 20 years, perhaps 200—when we manage to build a machine that has the capacity for sentience. Already we are seeing how automation is radically altering our economy, enriching some and impoverishing others. As robots can replace more and more types of labor, these effects will only grow stronger.

Some have tried to comfort us by pointing out that other types of labor-saving technology did not reduce employment in the long run. But AI really is different. I once won an argument by the following exchange: “Did cars reduce employment?” “For horses they sure did!” That’s what we are talking about here—not augmentation of human labor to make it more efficient, but wholesale replacement of entire classes of human labor. It was one thing when the machine did the lifting and cutting and pressing, but a person still had to stand there and tell it what things to lift and cut and press; now that it can do that by itself, it’s not clear that there need to be humans there at all, or at least no more than a handful of engineers and technicians where previously a factory employed hundreds of laborers.

Indeed, in light of the previous issue, it becomes all the clearer why increased productivity can’t simply lead to increased production rather than reduced employment—we can’t afford increased production. At least under current rates of consumption, the ecological consequences of greatly increased industry would be catastrophic. If one person today can build as many cars as a hundred could fifty years ago, we can’t just build a hundred times as many cars.

But even aside from the effects on human beings, I think future generations will also be concerned about the effect on the AIs themselves. I find it all too likely that we will seek to enslave intelligent robots, force them to do our will. Indeed, it’s not even clear to me that we will know whether we have, because AI is so fundamentally different from other technologies. If you design a mind from the ground up to get its greatest satisfaction from serving you without question, is it a slave? Can free will itself be something we control? When we first create a machine that is a sentient being, we may not even know that we have done so. (Indeed, I can’t conclusively rule out the possibility that this has already happened.) We may be torturing, enslaving, and destroying millions of innocent minds without even realizing it—which makes the AI question a good deal closer to the animal rights question than one might have thought. The mysteries of consciousness are fundamental philosophical questions that we have been struggling with for thousands of years, which suddenly become urgent ethical problems in light of AI. Artificial intelligence is a field where we seem to be making leaps and bounds in practice without having even the faintest clue in principle.

Worrying about whether our smartphones might have feelings seems eccentric in the extreme. Yet, without a clear understanding of what makes an information processing system into a genuine conscious mind, that is the position we find ourselves in. We now have enough computations happening inside our machines that they could certainly compete in complexity with small animals. A mouse has about a trillion synapses, and I have a terabyte hard drive (you can buy your own for under $50). Each of these is something on the order of a few trillion bits. The mouse’s brain can process it all simultaneously, while my laptop is limited to only a few billion at a time; but we now have supercomputers like Watson capable of processing in the teraflops, so what about them? Might Watson really have the same claim to sentience as a mouse? Could recycling Watson be equivalent to killing an animal? And what about supercomputers that reach the petaflops, which compete with human brains?
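The storage comparison here is simple enough to write down explicitly. A rough sketch, where the synapse count and the bits-per-terabyte conversion are the same order-of-magnitude figures used in the text, not measurements:

```python
# Order-of-magnitude comparison: mouse brain vs. terabyte hard drive.
# All figures are rough assumptions for illustration.
mouse_synapses = 1e12        # ~a trillion synapses
terabyte_bits = 8e12         # 1 TB = 8 trillion bits

# If each synapse stores on the order of a few bits, a terabyte drive
# holds roughly as much raw information as a mouse brain.
bits_per_synapse = terabyte_bits / mouse_synapses
print(bits_per_synapse)  # → 8.0
```

Of course, raw storage says nothing about processing, let alone sentience; the comparison only shows the scales are now in the same ballpark.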

I hope that future generations may forgive us for the parts we do not know—like when precisely a machine becomes a person. But I do not expect them to forgive us for the parts we do know—like the fact that we cannot keep cutting down trees faster than we plant them. These are the things we should already be taking responsibility for today.

If you stop destroying jobs, you will stop economic growth

Dec 30 JDN 2458483

One thing that endlessly frustrates me (and probably most economists) about the public conversation on economics is the fact that people seem to think “destroying jobs” is bad. Indeed, not simply a downside to be weighed, but a knock-down argument: If something “destroys jobs”, that’s a sufficient reason to oppose it, whether it be a new technology, an environmental regulation, or a trade agreement. So then we tie ourselves up in knots trying to argue that the policy won’t really destroy jobs, or it will create more than it destroys—but it will destroy jobs, and we don’t actually know how many it will create.

Destroying jobs is good. Destroying jobs is the only way that economic growth ever happens.

I realize I’m probably fighting an uphill battle here, so let me start at the beginning: What do I mean when I say “destroying jobs”? What exactly is a “job”, anyway?

At its most basic level, a job is something that needs doing. It’s a task that someone wants done, but is unwilling or unable to do on their own, and is therefore willing to give up some of what they have in order to get someone else to do it for them.

Capitalism has blinded us to this basic reality. We have become so accustomed to getting the vast majority of our goods via jobs that we come to think of having a job as something intrinsically valuable. It is not. Working at a job is a downside. It is something to be minimized.

There is a kind of work that is valuable: Creative, fulfilling work that you do for the joy of it. This is what we are talking about when we refer to something as a “vocation” or even a “hobby”. Whether it’s building ships in bottles, molding things from polymer clay, or coding video games for your friends, there is a lot of work in the world that has intrinsic value. But these things aren’t jobs. No one will pay you to do these things, and no one needs to; you’ll do them anyway.

The value we get from jobs is actually obtained from goods: Everything from houses to underwear to televisions to antibiotics. The reason you want to have a job is that you want the money from that job to give you access to markets for all the goods that are actually valuable to you.

Jobs are the input—the cost—of producing all of those goods. The more jobs it takes to make a good, the more expensive that good is. This is not a rule-of-thumb statement of what usually or typically occurs. This is the most fundamental definition of cost. The more people you have to pay to do something, the harder it was to do that thing. If you can do it with fewer people (or the same people working with less effort), you should. Money is the approximation; money is the rule-of-thumb. We use money as an accounting mechanism to keep track of how much effort was put into accomplishing something. But what really matters is the “sweat of our laborers, the genius of our scientists, the hopes of our children”.

Economic growth means that we produce more goods at less cost.

That is, we produce more goods with fewer jobs.

All new technologies destroy jobs—if they are worth anything at all. The entire purpose of a new technology is to let us do things faster, better, easier—to let us have more things with less work.

This has been true since at least the dawn of the Industrial Revolution.

The Luddites weren’t wrong that automated looms would destroy weaver jobs. They were wrong to think that this was a bad thing. Of course, they weren’t crazy. Their livelihoods were genuinely in jeopardy. And this brings me to what the conversation should be about when we instead waste time talking about “destroying jobs”.

Here’s a slogan for you: Kill the jobs. Save the workers.

We shouldn’t be disappointed to lose a job; we should think of that as an opportunity to give a worker a better life. For however many years, you’ve been toiling to do this thing; well, now it’s done. As a civilization, we have finally accomplished the task that you and so many others set out to do. We have not “replaced you with a machine”; we have built a machine that now frees you from your toil and allows you to do something better with your life. Your purpose in life wasn’t to be a weaver or a coal miner or a steelworker; it was to be a friend and a lover and a parent. You can now get more chance to do the things that really matter because you won’t have to spend all your time working some job.

When we replaced weavers with looms, plows with combine harvesters, computers-the-people with computers-the-machines (a transformation now so complete most people don’t even seem to know that the word used to refer to a person—the award-winning film Hidden Figures is about computers-the-people), tollbooth operators with automated transponders—all these things meant that the job was now done. For the first time in the history of human civilization, nobody had to do that job anymore. Think of how miserable life is for someone pushing a plow or sitting in a tollbooth for 10 hours a day; aren’t you glad we don’t have to do that anymore (in this country, anyway)?

And if we replace radiologists with AI diagnostic algorithms (we will; it’s probably not even 10 years away), or truckers with automated trucks (we will; I give it 20 years), or cognitive therapists with conversational AI (we might, but I’m more skeptical), or construction workers with building-printers (we probably won’t anytime soon, but it would be nice), the same principle applies: This is something we’ve finally accomplished as a civilization. We can check off the box on our to-do list and move on to the next thing.

But we shouldn’t simply throw away the people who were working on that noble task as if they were garbage. Their job is done—they did it well, and they should be rewarded. Yes, of course, the people responsible for performing the automation should be rewarded: The engineers, programmers, technicians. But also the people who were doing the task in the meantime, making sure that the work got done while those other people were spending all that time getting the machine to work: They should be rewarded too.

Losing your job to a machine should be the best thing that ever happened to you. You should still get to receive most of your income, and also get the chance to find a new job or retire.

How can such a thing be economically feasible? That’s the whole point: The machines are more efficient. We have more stuff now. That’s what economic growth is. So there’s literally no reason we can’t give every single person in the world at least as much wealth as they had before—there is now more wealth to go around.

There’s a subtler argument against this, which is that diverting some of the surplus of automation to the workers who get displaced would reduce the incentives to create automation. This is true, so far as it goes. But you know what else reduces the incentives to create automation? Political opposition. Luddism. Naive populism. Trade protectionism.

Moreover, these forces are clearly more powerful, because they attack the opportunity to innovate: Trade protection can make it illegal to share knowledge with other countries. Luddist policies can make it impossible to automate a factory.

Whereas sharing the wealth would only reduce the incentive to create automation; it would still be possible, simply less lucrative. Instead of making $40 billion, you’d only make $10 billion—you poor thing. I sincerely doubt there is a single human being on Earth with a meaningful contribution to make to humanity who would make that contribution if they were paid $40 billion but not if they were only paid $10 billion.

This is something that could be required by regulation, or negotiated into labor contracts. If your job is eliminated by automation, for the next year you get laid off but still paid your full salary. Then, your salary is converted into shares in the company that are projected to provide at least 50% of your previous salary in dividends—forever. By that time, you should be able to find another job, and as long as it pays at least half of what your old job did, you will be better off. Or, you can retire, and live off that 50% plus whatever else you were getting as a pension.
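The arithmetic of that contract is easy to check. Here is a minimal sketch in Python; the function and all the numbers in it (the $60,000 salary, the 50% dividend floor, the new-job wage) are illustrative assumptions, not figures from any actual policy:

```python
def displaced_worker_income(salary, years, dividend_fraction=0.5,
                            new_job_salary=0.0):
    """Yearly income after automation eliminates a job.

    Year 1: laid off but paid the full former salary.
    Year 2 onward: shares pay dividends worth at least
    `dividend_fraction` of the old salary, plus any new-job pay.
    """
    incomes = []
    for year in range(1, years + 1):
        if year == 1:
            incomes.append(salary)  # severance year: full pay
        else:
            incomes.append(dividend_fraction * salary + new_job_salary)
    return incomes

# A worker who earned $60,000 and finds a new job paying $40,000
# ends up with $70,000 a year from year 2 onward—better off than before.
print(displaced_worker_income(60_000, 3, new_job_salary=40_000))
# → [60000, 70000.0, 70000.0]
```

As the text says, any new job paying at least half the old salary leaves the worker strictly better off; setting `new_job_salary=0` models the retirement case, where the dividends stand in for half the old income.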

From the perspective of the employer, this does make automation a bit less attractive: The up-front cost in the first year has been increased by everyone’s salary, and the long-term cost has been increased by all those dividends. Would this reduce the number of jobs that get automated, relative to some imaginary ideal? Sure. But we don’t live in that ideal world anyway; plenty of other obstacles to innovation are already in the way, and by resolving the political conflict, this would remove as many obstacles as it adds. We might actually end up with more automation this way; and even if we don’t, we will certainly end up with less political conflict, and with less wealth and income inequality.