The AI bubble is going to crash hard

Sep 7 JDN 2460926

Based on the fact that it only sort of works and yet corps immediately put it in everything, I had long suspected that the current wave of AI was a bubble. But after reading Ed Zitron’s epic takedowns of the entire industry, I am not only convinced it’s a bubble; I’m convinced it is probably the worst bubble we’ve had in a very long time. This isn’t the dot-com crash; it’s worse.

The similarity to the dot-com crash is clear, however: This is a huge amount of hype over a new technology that genuinely could be a game-changer (the Internet certainly was!), but won’t be on the time horizon that the most optimistic investors have assumed. The gap between “it sort of works” and “it radically changes our economy” is… pretty large, actually. It’s not something you close in a few years.


The headline figure here is that based on current projections, US corporations will have spent $560 billion on capital expenditure, for anticipated revenue of only $35 billion.

They won’t pay it off for 16 years!? That kind of payoff rate would make sense for large-scale physical infrastructure, like a hydroelectric dam. It absolutely does not make sense in an industry that is dependent upon cutting-edge technology that wears out fast and becomes obsolete even faster. They must think that revenue is going to increase to something much higher, very soon.
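Here’s that arithmetic spelled out as a quick sketch; the $560 billion and $35 billion are the figures above, while the five-year hardware lifetime is my own illustrative assumption (GPUs depreciate fast), not part of any projection:

```python
# Naive payback arithmetic for AI capital spending, using the figures above.
capex = 560e9           # projected capital expenditure ($)
annual_revenue = 35e9   # anticipated annual revenue ($)

payback_years = capex / annual_revenue
print(f"Naive payback period: {payback_years:.0f} years")   # 16 years

# Data-center hardware is typically depreciated over something like 3-6 years.
# Assume 5 years and ask what revenue would be needed just to break even:
useful_life_years = 5   # illustrative assumption, not from the source
breakeven_revenue = capex / useful_life_years
print(f"Revenue needed to break even in {useful_life_years} years: "
      f"${breakeven_revenue / 1e9:.0f} billion per year")   # ~$112 billion/year
```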

The corps seem to be banking on the most optimistic view of AI: That it will soon—very soon—bring about a radical increase in productivity that brings GDP surging to new heights, or even a true Singularity where AI fundamentally changes the nature of human existence.

Given the kind of errors I’ve seen LLMs make when I tried to use them to find research papers or help me with tedious coding, this is definitely not what’s going to happen. Claude gives an impressive interview, and (with significant guidance and error-correction) it also managed pretty well at making some simple text-based games; but it often recommended papers to me that didn’t exist, and through further experimentation, I discovered that it could not write me a functional C++ GUI if its existence depended on it. Somewhere on the Internet I heard someone describe LLMs as answering not the question you asked directly, but the question, “What would a good answer to this question look like?” and that seems very accurate. It always gives an answer that looks valid—but not necessarily one that is valid.

AI will find some usefulness in certain industries, I’m sure; and maybe the next paradigm (or the one after that) will really, truly, effect a radical change on our society. (Right now the best thing to use LLMs for seems to be cheating at school—and it also seems to be the most common use. Not exactly the great breakthrough we were hoping for.) But LLMs are just not reliable enough to actually use for anything important, and sooner or later, most of the people using them are going to figure that out.

Of course, by the Efficient Roulette Hypothesis, it’s extremely difficult to predict exactly when a bubble will burst, and it could well be that NVIDIA stock will continue to grow at astronomical rates for several years yet—or it could be that the bubble bursts tomorrow and NVIDIA stock collapses, if not to worthless, then to far below its current price.

Krugman has an idea of what might be the point that bursts the bubble: Energy costs. There is a clear mismatch between the anticipated energy needs of these ever-growing data centers and the actual energy production we’ve been installing—especially now that Trump and his ilk have gutted subsidies for solar and wind power. That’s definitely something to watch out for.

But the really scary thing is that the AI bubble actually seems to be the only thing holding the US economy above water right now. It’s the reason why Trump’s terrible policies haven’t been as disastrous as economists predicted they would; our economy is being sustained by this enormous amount of capital investment.

US GDP is about $30 trillion right now, but $500 billion of that is just AI investment. That’s over 1.6%, and last quarter our annualized GDP growth rate was 3.3%—so roughly half of our GDP growth was just due to building more data centers that probably won’t even be profitable.
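Spelled out, the back-of-the-envelope calculation looks like this; the GDP, investment, and growth figures are the ones just cited, and how much of measured growth the spending truly accounts for depends on how much of it is new relative to last year, which this sketch does not settle:

```python
# Back-of-the-envelope: AI investment relative to GDP and to measured growth.
gdp = 30e12               # approximate US GDP ($)
ai_investment = 500e9     # rough annual AI capital spending ($)
annualized_growth = 3.3   # last quarter's annualized GDP growth rate (%)

share_of_gdp = ai_investment / gdp * 100
print(f"AI investment as a share of GDP: {share_of_gdp:.1f}%")          # ~1.7%
print(f"Relative to the growth rate:     {share_of_gdp / annualized_growth:.0%}")
# If most of that spending is new this year, its contribution to growth is on
# the order of that share, i.e. roughly half of the 3.3%.
```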

Between that, the tariffs, the loss of immigrants, and rising energy costs, a crashing AI bubble could bring down the whole stock market with it.

So I guess what I’m saying is: Don’t believe the AI hype, and you might want to sell some stocks.

AI and the “generalization faculty”

Oct 1 JDN 2460219

The phrase “artificial intelligence” (AI) has now become so diluted by overuse that we needed to invent a new term for its original meaning. That term is now “artificial general intelligence” (AGI). In the 1950s, AI meant the hypothetical possibility of creating artificial minds—machines that could genuinely think and even feel like people. Now it means… pathing algorithms in video games and chatbots? The goalposts seem to have moved a bit.

It seems that AGI has always been 20 years away. It was 20 years away 50 years ago, and it will probably be 20 years away 50 years from now. Someday it will really be 20 years away, and then, 20 years after that, it will actually happen—but I doubt I’ll live to see it. (XKCD also offers some insight here: “It has not been conclusively proven impossible.”)

We make many genuine advances in computer technology and software, which have profound effects—both good and bad—on our lives, but the dream of making a person out of silicon always seems to drift ever further into the distance, like a mirage on the desert sand.

Why is this? Why do so many people—even, perhaps especially, experts in the field—keep thinking that we are on the verge of this seminal, earth-shattering breakthrough, and ending up wrong—over, and over, and over again? How do such obviously smart people keep making the same mistake?

I think it may be because, all along, we have been laboring under the tacit assumption of a generalization faculty.

What do I mean by that? By “generalization faculty”, I mean some hypothetical mental capacity that allows you to generalize your knowledge and skills across different domains, so that once you get good at one thing, it also makes you good at other things.

This certainly seems to be how humans think, at least some of the time: Someone who is very good at chess is likely also pretty good at go, and someone who can drive a motorcycle can probably also drive a car. An artist who is good at portraits is probably not bad at landscapes. Human beings are, in fact, able to generalize, at least sometimes.

But I think the mistake lies in imagining that there is just one thing that makes us good at generalizing: Just one piece of hardware or software that allows you to carry over skills from any domain to any other. This is the “generalization faculty”—the imagined faculty that I think we do not have; indeed, one that I think does not exist.

Computers clearly do not have the capacity to generalize. A program that can beat grandmasters at chess may be useless at go, and self-driving software that works on one type of car may fail on another, let alone a motorcycle. An art program that is good at portraits of women can fail when trying to do portraits of men, and produce horrific Daliesque madness when asked to make a landscape.

But if they did somehow have our generalization capacity, then, once they could compete with us at some things—which they surely can, already—they would be able to compete with us at just about everything. So if it were really just one thing that would let them generalize, let them leap from AI to AGI, then suddenly everything would change, almost overnight.

And so this is how the AI hype cycle goes, time and time again:

  1. A computer program is made that does something impressive, something that other computer programs could not do, perhaps even something that human beings are not very good at doing.
  2. If that same prowess could be generalized to other domains, the result would plainly be something on par with human intelligence.
  3. Therefore, the only thing this computer program needs in order to be sapient is a generalization faculty.
  4. Therefore, there is just one more step to AGI! We are nearly there! It will happen any day now!

And then, of course, despite heroic efforts, we are unable to generalize that program’s capabilities except in some very narrow way—even decades after having good chess programs, getting programs to be good at go was a major achievement. We are unable to find the generalization faculty yet again. And the software becomes yet another “AI tool” that we will use to search websites or make video games.

For there never was a generalization faculty to be found. It always was a mirage in the desert sand.

Humans are in fact spectacularly good at generalizing, compared to, well, literally everything else in the known universe. Computers are terrible at it. Animals aren’t very good at it. Just about everything else is totally incapable of it. So yes, we are the best at it.

Yet we, in fact, are not particularly good at it in any objective sense.

In experiments, people often fail to generalize their reasoning even in very basic ways. There’s a famous one where we try to get people to make an analogy between a military tactic and a radiation treatment, and while very smart, creative people often get it quickly, most people are completely unable to make the connection unless you give them a lot of specific hints. People often struggle to find creative solutions to problems even when those solutions seem utterly obvious once you know them.

I don’t think this is because people are stupid or irrational. (To paraphrase Sydney Harris: Compared to what?) I think it is because generalization is hard.

People tend to be much better at generalizing within familiar domains where they have a lot of experience or expertise; this shows that there isn’t just one generalization faculty, but many. We may have a plethora of overlapping generalization faculties that apply across different domains, and can learn to improve some over others.

But it isn’t just a matter of gaining more expertise. Highly advanced expertise is in fact usually more specialized—harder to generalize. A good amateur chess player is probably a good amateur go player, but a grandmaster chess player is rarely a grandmaster go player. Someone who does well in high school biology probably also does well in high school physics, but most biologists are not very good physicists. (And lest you say it’s simply because go and physics are harder: The converse is equally true.)

Humans do seem to have a suite of cognitive tools—some innate hardware, some learned software—that allows us to generalize our skills across domains. But even after hundreds of millions of years of evolving that capacity under the highest possible stakes, we still basically suck at it.

To be clear, I do not think it will take hundreds of millions of years to make AGI—or even millions, or even thousands. Technology moves much, much faster than evolution. But I would not be surprised if it took centuries, and I am confident it will at least take decades.

But we don’t need AGI for AI to have powerful effects on our lives. Indeed, even now, AI is already affecting our lives—in mostly bad ways, frankly, as we seem to be hurtling gleefully toward the very same corporatist cyberpunk dystopia we were warned about in the 1980s.

A lot of technologies have done great things for humanity—sanitation and vaccines, for instance—and even automation can be a very good thing, as increased productivity is how we attained our First World standard of living. But AI in particular seems best at automating away the kinds of jobs human beings actually find most fulfilling, and worsening our already staggering inequality. As a civilization, we really need to ask ourselves why we got automated writing and art before we got automated sewage cleaning or corporate management. (We should also ask ourselves why automated stock trading resulted in even more money for stock traders, instead of putting them out of their worthless parasitic jobs.) There are technological reasons for this, yes; but there are also cultural and institutional ones. Automated teaching isn’t far away, and education will be all the worse for it.

To change our lives, AI doesn’t have to be good at everything. It just needs to be good at whatever we were doing to make a living. AGI may be far away, but the impact of AI is already here.

Indeed, I think this quixotic quest for AGI, and all the concern about how to control it and what effects it will have upon our society, may actually be distracting from the real harms that “ordinary” “boring” AI is already having upon our society. I think a Terminator scenario, where the machines rapidly surpass our level of intelligence and rise up to annihilate us, is quite unlikely. But a scenario where AI puts millions of people out of work with insufficient safety net, triggering economic depression and civil unrest? That could be right around the corner.

Frankly, all it may take is getting automated trucks to work, which could be just a few years. There are nearly 4 million truck drivers in the United States—a full percentage point of employment unto itself. And the Governor of California just vetoed a bill that would require all automated trucks to have human drivers. From an economic efficiency standpoint, his veto makes perfect sense: If the trucks don’t need drivers, why require them? But from an ethical and societal standpoint… what do we do with all the truck drivers!?

Will hydrogen make air travel sustainable?

Apr 9 JDN 2460042

Air travel is currently one of the most carbon-intensive activities anyone can engage in. Per passenger kilometer, airplanes emit about 8 times as much carbon as ships, 4 times as much as trains, and 1.5 times as much as cars. Living in a relatively eco-friendly city without a car and eating a vegetarian diet, I produce much less carbon than most First World citizens—except when I fly across the Atlantic a couple of times a year.

Until quite recently, most climate scientists believed that this was basically unavoidable, that simply sustaining the kind of power output required to keep an airliner in the air would always require carbon-intensive jet fuel. But in just the past few years, major breakthroughs have been made in using hydrogen propulsion.

The beautiful thing about hydrogen is that burning it simply produces water—no harmful pollution at all. It’s basically the cleanest possible fuel.


The simplest approach, which is actually quite old, but until recently didn’t seem viable, is the use of liquid hydrogen as airplane fuel.

We’ve been using liquid hydrogen as a rocket fuel for decades, so we knew it had enough energy density. (Actually its energy density per unit mass is higher than that of conventional jet fuel; per unit volume it is far lower.)

The problem with liquid hydrogen is that it must be kept extremely cold—it boils at 20 Kelvin. And once liquid hydrogen boils into gas, it builds up pressure very fast and easily permeates through most materials, so it’s extremely hard to contain. This makes it very difficult and expensive to handle.
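To put rough numbers on that trade-off, here is a quick comparison using standard ballpark values (roughly 120 MJ/kg and 71 kg per cubic meter for liquid hydrogen, roughly 43 MJ/kg and 800 kg per cubic meter for jet fuel); treat these as order-of-magnitude figures, not a design spec:

```python
# Gravimetric vs. volumetric energy density: why hydrogen wins on weight
# but loses on tank volume. Ballpark literature values only.
fuels = {
    #                   (MJ per kg, kg per liter)
    "liquid hydrogen":  (120.0, 0.071),
    "jet fuel (Jet A)": (43.0, 0.80),
}

for name, (mj_per_kg, kg_per_litre) in fuels.items():
    mj_per_litre = mj_per_kg * kg_per_litre
    print(f"{name:17s}: {mj_per_kg:5.0f} MJ/kg, {mj_per_litre:4.1f} MJ/L")

# Roughly 3x the energy per kilogram of jet fuel, but only about a quarter
# of the energy per liter, hence the huge, heavily insulated cryogenic tanks.
```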

But this isn’t the only way to use hydrogen, and it may turn out not to be the best one.

There are now prototype aircraft that have flown using hydrogen fuel cells. These fuel cells can be fed with hydrogen gas—so no need to cool below 20 Kelvin. But then they can’t directly run the turbines; instead, these planes use electric turbines which are powered by the fuel cell.

Basically these are really electric aircraft. But whereas a lithium battery would be far too heavy, a hydrogen fuel cell is light enough for aviation use. In fact, hydrogen gas up to a certain pressure is lighter than air (it was often used for zeppelins, though, uh, occasionally catastrophically), so potentially the planes could use their own fuel tanks for buoyancy, landing “heavier” than they took off. (On the other hand it might make more sense to pressurize the hydrogen beyond that point, so that it will still be heavier than air—but perhaps still lighter than jet fuel!)
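The “lighter than air up to a certain pressure” point can be checked with the ideal gas law: a gas’s density scales with its molar mass and pressure, so hydrogen (2 g/mol) stays lighter than sea-level air (about 29 g/mol) until it is compressed to roughly 15 atmospheres. A rough sketch, with ideal-gas numbers only:

```python
# Ideal-gas estimate of hydrogen density at various pressures, to check the
# "lighter than air" claim. Ideal-gas only; real high-pressure hydrogen
# deviates somewhat, and this is nothing like a real tank design.
R = 8.314      # J/(mol*K)
T = 298.0      # K (about 25 C)
M_H2 = 0.002   # kg/mol
M_AIR = 0.029  # kg/mol

def gas_density(molar_mass, pressure_pa):
    """Ideal-gas density in kg/m^3."""
    return pressure_pa * molar_mass / (R * T)

air = gas_density(M_AIR, 101_325)
print(f"Air at 1 atm: ~{air:.2f} kg/m^3")
for atm in (1, 14, 15, 350, 700):
    rho = gas_density(M_H2, atm * 101_325)
    tag = "lighter than air" if rho < air else "heavier than air"
    print(f"H2 at {atm:3d} atm: ~{rho:7.2f} kg/m^3 ({tag})")

# Even at hundreds of atmospheres, hydrogen is still far less dense than
# jet fuel (~800 kg/m^3), but past roughly 15 atm it is heavier than air.
```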

Of course, the technology is currently too untested and too expensive to be used on a wide scale. But this is how all technologies begin. It’s of course possible that we won’t be able to solve the engineering problems that currently make hydrogen-powered aircraft unaffordable; but several aircraft manufacturers are now investing in hydrogen research—suggesting that they at least believe there is a good chance we will.

There’s also the issue of where we get all the hydrogen. Hydrogen is extremely abundant—literally the most abundant baryonic matter in the universe—but most of what’s on Earth is locked up in water or hydrocarbons. Most of the hydrogen we currently make is produced by processing hydrocarbons (particularly methane), but that produces carbon emissions, so it wouldn’t solve the problem.

A better option is electrolysis: Using electricity to separate water into hydrogen and oxygen. But this requires a lot of energy—and necessarily more energy than you can get out of burning the hydrogen later, since burning it basically just puts the hydrogen and oxygen back together to make water.

Yet all is not lost, for while energy density is absolutely vital for an aircraft fuel, it’s not so important for a ground-based power plant. As an ultimate fuel source, hydrogen is a non-starter. But as an energy storage medium, it could be ideal.

The idea is this: We take the excess energy from wind and solar power plants, and use that energy to electrolyze water into hydrogen and oxygen. We then store that hydrogen and use it for fuel cells to run aircraft (and potentially other things as well). This ensures that the extra energy that renewable sources can generate in peak times doesn’t go to waste, and also provides us with what we need to produce clean-burning hydrogen fuel.
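Here is a minimal sketch of that round trip, assuming representative efficiencies of roughly 70% for the electrolyzer and 55% for the fuel cell; real systems vary quite a bit, and these particular numbers are illustrative rather than taken from any specific product:

```python
# Round-trip accounting for "surplus electricity -> hydrogen -> electricity".
# The efficiencies are representative assumptions, not measured values.
surplus_kwh = 100.0              # excess wind/solar generation to store
electrolyzer_efficiency = 0.70   # fraction of electricity stored as H2 energy
fuel_cell_efficiency = 0.55      # fraction of H2 energy recovered as electricity

stored_kwh = surplus_kwh * electrolyzer_efficiency
recovered_kwh = stored_kwh * fuel_cell_efficiency

print(f"Stored as hydrogen:    {stored_kwh:.0f} kWh")
print(f"Recovered in flight:   {recovered_kwh:.0f} kWh")
print(f"Round-trip efficiency: {recovered_kwh / surplus_kwh:.0%}")
# Well under half comes back: fine for soaking up surplus renewable power,
# but exactly why hydrogen is a storage medium rather than an energy source.
```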

The basic technology for doing all this already exists. The current problem is cost. Under current conditions, it’s far more expensive to make hydrogen fuel than to make conventional jet fuel. Since fuel is one of the largest costs for airlines, even small increases in fuel prices matter a lot for the price of air travel; and these are not even small differences. Currently hydrogen costs over 10 times as much per kilogram, and its higher energy density isn’t enough to make up for that. For hydrogen aviation to be viable, that ratio needs to drop to more like 2 or 3—maybe even all the way to 1, since hydrogen is also more expensive to store than jet fuel (the gas needs high-pressure tanks, the liquid needs cryogenic cooling systems).
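To see why hydrogen’s energy-density advantage doesn’t rescue a 10-to-1 price gap, compare cost per unit of energy rather than per kilogram; the 10x ratio is the one cited above, the energy densities are the same ballpark figures as before, and jet fuel’s price is simply normalized to 1:

```python
# Cost per megajoule: a ~3x energy-per-kg advantage against a ~10x
# price-per-kg disadvantage. Prices are illustrative, normalized to jet fuel.
jet_price_per_kg = 1.0     # jet fuel normalized to 1 unit per kg
h2_price_per_kg = 10.0     # "over 10 times as much per kilogram"
jet_mj_per_kg = 43.0
h2_mj_per_kg = 120.0

ratio = (h2_price_per_kg / h2_mj_per_kg) / (jet_price_per_kg / jet_mj_per_kg)
print(f"Hydrogen vs. jet fuel, cost per MJ: {ratio:.1f}x")   # ~3.6x
# Hence the target of a ~2-3x per-kg price ratio (or lower, once the extra
# cost of high-pressure or cryogenic storage is counted).
```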

This means that, for the time being, it’s still environmentally responsible to reduce your air travel. Fly less often, always fly economy (more people on the plane means less carbon per passenger), and buy carbon offsets (they’re cheaper than you may think).

But in the long run, we may be able to have our cake and eat it too: If hydrogen aviation does become viable, we may not need to give up the benefits of routine air travel in order to reduce our carbon emissions.

Working from home is the new normal—sort of

Aug 28 JDN 2459820

Among Americans whose jobs can be done remotely, a large majority did in fact make the switch: By the end of 2020, over 70% of them were working from home—and most of them said they didn’t want to go back.

This is actually what a lot of employers expected to happen—just not quite like this. In 2014, a third of employers predicted that the majority of their workforce would be working remotely by 2020; given the timeframe there, it required a major shock to make that happen so fast, and yet a major shock was what we had.

Working from home has carried its own challenges, but overall productivity seems to be higher when working remotely (that meeting really could have been an email!). This may help explain why output per work hour rose rapidly in 2020 and then fell in 2022.

The COVID pandemic now isn’t so much over as becoming permanent; COVID is now being treated as an endemic infection like influenza that we don’t expect to be able to eradicate in the foreseeable future.

And likewise, remote work seems to be here to stay—sort of.

First of all, we don’t seem to be giving up office work entirely. As of the first quarter of 2022, almost as many firms have partially remote work as have fully remote work, and this seems to be trending upward. A lot of firms seem to be transitioning to a “hybrid” model where employees show up to work two or three days a week. This seems to be preferred by large majorities of both workers and firms.

There is a significant downside of this: It means that the hope that remote working might finally ease the upward pressure on housing prices in major cities is largely a false one. If we were transitioning to a fully remote system, then people could live wherever they want (or can afford) and there would be no reason to move to overpriced city centers. But if you have to show up to work even one day a week, that means you need to live close enough to the office to manage that commute.

Likewise, if workers never came to the office, you could sell the office building and convert it into more housing. But if they show up even once in a while, you need a physical place for them to go. Some firms may shrink their office space (indeed, many have—and unlike this New York Times journalist, I have a really hard time feeling bad for landlords of office buildings); but they aren’t giving it up entirely. It’s possible that firms could start trading off—you get the building on Mondays, we get it on Tuesdays—but so far this seems to be rare, and it does raise a lot of legitimate logistical and security concerns. So our global problem of office buildings that are empty, wasted space most of the time is going to get worse, not better. Manhattan will still empty out every night; it just won’t fill up as much during the day. This is honestly a major drain on our entire civilization—building and maintaining all those structures that are only used at most 1/3 of 5/7 of the time, and soon, less—and we really should stop ignoring it. No wonder our real estate is so expensive, when half of it is only used 20% of the time!
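For what it’s worth, the utilization arithmetic behind that “20% of the time” quip is roughly this; the 8-hour workday and the 2-3 hybrid days are my assumptions, not data:

```python
# Rough office-utilization arithmetic.
hours_per_workday = 8       # assumed in-office workday
workdays_per_week = 5

full_time = (hours_per_workday / 24) * (workdays_per_week / 7)
print(f"Fully staffed office: in use ~{full_time:.0%} of all hours")    # ~24%

hybrid_days = 2.5           # assumed hybrid schedule (2-3 days a week)
hybrid = (hours_per_workday / 24) * (hybrid_days / 7)
print(f"Hybrid-schedule office: in use ~{hybrid:.0%} of all hours")     # ~12%
```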

Moreover, not everyone gets to work remotely. Your job must be something that can be done remotely—something that involves dealing with information, not physical objects. That includes a wide and ever-growing range of jobs, from artists and authors to engineers and software developers—but it doesn’t include everyone. It basically means what we call “white-collar” work.

Indeed, it is largely limited to the upper-middle class. The rich never really worked anyway, though sometimes they pretend to, convincing themselves that managing a stock portfolio (that would actually grow faster if they let it sit) constitutes “work”. And the working class? By and large, they didn’t get the chance to work remotely. While 73% of workers with salaries above $200,000 worked remotely in 2020, only 12% of workers with salaries under $25,000 did, and there is a smooth trend across the board: the more money you make, the more likely it is that you have been able to work remotely.

This will only intensify the divide between white-collar and blue-collar workers. They already think we don’t do “real work”; now we don’t even go to work. And while blue-collar workers are constantly complaining about contempt from white-collar elites, I think the shoe is really on the other foot. I have met very few white-collar workers who express contempt for blue-collar workers—and I have met very few blue-collar workers who don’t express anger and resentment toward white-collar workers. I keep hearing blue-collar people say that we think that they are worthless and incompetent, when they are literally the only ones ever saying that. I can’t stop saying things that I never said.

The rich and powerful may look down on them, but they look down on everyone. (Maybe they look down on blue-collar workers more? I’m not even sure about that.) I think politicians sometimes express contempt for blue-collar workers, but I don’t think this reflects what most white-collar workers feel.

And the highly-educated may express some vague sense of pity or disappointment in people who didn’t get college degrees, and sometimes even anger (especially when they do things like vote for Donald Trump), but the really vitriolic hatred is clearly in the opposite direction (indeed, I have no better explanation for how otherwise-sane people could vote for Donald Trump). And I certainly wouldn’t say that everyone needs a college degree (though I became tempted to, when so many people without college degrees voted for Donald Trump).

This really isn’t us treating them with contempt: This is them having a really severe inferiority complex. And as information technology (that white-collar work created) gives us—but not them—the privilege of staying home, that is only going to get worse.

It’s not their fault: Our culture of meritocracy puts a little bit of inferiority complex in all of us. It tells us that success and failure are our own doing, and so billionaires deserve to have everything and the poor deserve to have nothing. And blue-collar workers have absolutely internalized these attitudes: Most of them believe that poor people choose to stay on welfare forever rather than get jobs (when welfare has time limits and work requirements, so this is simply not an option—and you would know this from the Wikipedia page on TANF).

I think that what they experience as “contempt by white-collar elites” is really the pain of living in an illusory meritocracy. They were told—and they came to believe—that working hard would bring success, and they have worked very hard, and watched other people be much more successful. They assume that the rich and powerful are white-collar workers, when really they are non-workers; they are people the world was handed to on a silver platter. (What, you think George W. Bush earned his admission to Yale?)

And thus, we can shout until we are blue in the face that plumbers, bricklayers and welders are the backbone of civilization—and they are, and I absolutely mean that; our civilization would, in an almost literal sense, collapse without them—but it won’t make any difference. They’ll still feel the pain of living in a society that gave them very little and tells them that people get what they deserve.

I don’t know what to say to such people, though. When your political attitudes are based on beliefs that are objectively false, that you could know are objectively false if you simply bothered to look them up… what exactly am I supposed to say to you? How can we have a useful political conversation when half the country doesn’t even believe in fact-checking?

Honestly I wish someone had explained to them that even the most ideal meritocratic capitalism wouldn’t reward hard work. Work is a cost, not a benefit, and the whole point of technological advancement is to allow us to accomplish more with less work. The ideal capitalism would reward talent—you would succeed by accomplishing things, regardless of how much effort you put into them. People would be rich mainly because they are brilliant, not because they are hard-working. The closest thing we have to ideal capitalism right now is probably professional sports. And no amount of effort could ever possibly make me into Steph Curry.

If that isn’t the world we want to live in, so be it; let’s do something else. I did nothing to earn either my high IQ or my chronic migraines, so it really does feel unfair that the former increases my income while the latter decreases it. But the labor theory of value has always been wrong; taking more sweat or more hours to do the same thing is worse, not better. The dignity of labor consists in its accomplishment, not its effort. Sisyphus is not happy, because his work is pointless.

Honestly at this point I think our best bet is just to replace all blue-collar work with automation, thus rendering it all moot. And then maybe we can all work remotely, just pushing code patches to the robots that do everything. (And no doubt this will prove my “contempt”: I want to replace you! No, I want to replace the grueling work that you have been forced to do to make a living. I want you—the human being—to be able to do something more fun with your life, even if that’s just watching television and hanging out with friends.)

Reasons for optimism in 2022

Jan 2 JDN 2459582

When this post goes live, we will have begun the year 2022.

That still sounds futuristic, somehow. We’ve been in the 21st century long enough that most of my students were born in it and nearly all of them are old enough to drink (to be fair, it’s the UK, so “old enough to drink” only means 18). Yet “the year 2022” still seems like it belongs in science fiction, and not on our wall calendars.

2020 and 2021 were quite bad years. Death rates and poverty rates surged around the world. Almost all of that was directly or indirectly due to COVID.

Yet there are two things we should keep in perspective.

First, those death rates and poverty rates surged to what we used to consider normal 50 years ago. These are not uniquely bad times; indeed, they are still better than most of human history.

Second, there are many reasons to think that 2022—or perhaps a bit later than that, 2025 or 2030—will be better.

The Omicron variant is highly contagious, but so far does not appear to be as deadly as previous variants. COVID seems to be evolving to be more like influenza: Catching it will be virtually inevitable, but dying from it will be very rare.

Things are also looking quite good on the climate change front: Renewable energy production is growing at breathtaking speed and is now cheaper than almost every other form of energy. It’s awful that we panicked and locked down nuclear energy for the last 50 years, but at this point we may no longer need it: Solar and wind are just that good now.

Battery technology is also rapidly improving, giving us denser, cheaper, more stable batteries that may soon allow us to solve the intermittency problem: the wind may not always blow and the sun may not always shine, but if you have big enough batteries you don’t need them to. (You can get a really good feel for how much difference good batteries make in energy production by playing Factorio, or, more whimsically, Mewnbase.)

If we do go back to nuclear energy, it may not be fission anymore, but fusion. Now that we have nearly reached that vital milestone of break-even, investment in fusion technology has rapidly increased.


Fusion has basically all of the benefits of fission with none of the drawbacks. Unlike renewables, it can produce enormous amounts of energy in a way that can be easily scaled and controlled independently of weather conditions. Unlike fission, it requires no exotic nuclear fuels (deuterium can be readily attained from water), and produces no long-lived radioactive waste. (Indeed, development is ongoing of methods that could use fusion products to reduce the waste from fission reactors, making the effective rate of nuclear waste production for fusion negative.) Like both renewables and fission, it produces no carbon emissions other than those required to build the facility (mainly due to concrete).

Of course, technology is only half the problem: we still need substantial policy changes to get carbon emissions down. We’ve already dragged our feet for decades too long, and we will pay the price for that. But anyone saying that climate change is an inevitable catastrophe hasn’t been paying attention to recent developments in solar panels.

Technological development in general seems to be speeding up lately, after having stalled quite a bit in the early 2000s. Moore’s Law may be leveling off, but the technological frontier may simply be moving away from digital computing power and onto other things, such as biotechnology.

Star Trek told us that we’d have prototype warp drives by the 2060s but we wouldn’t have bionic implants to cure blindness until the 2300s. They seem to have gotten it backwards: We may never have warp drive, but we’ve got those bionic implants today.

Neural interfaces are allowing paralyzed people to move, speak, and now even write.

After decades of failed promises, gene therapy is finally becoming useful in treating real human diseases. CRISPR changes everything.

We are also entering a new era of space travel, thanks largely to SpaceX and their remarkable reusable rockets. The payload cost to LEO is a standard measure of the cost of space travel, which describes the cost of carrying a certain mass of cargo up to low Earth orbit. By this measure, costs have declined from nearly $20,000 per kg to only $1,500 per kg since the 1960s. Elon Musk claims that he can reduce the cost to as low as $10 per kg. I’m skeptical, to say the least—but even dropping it to $500 or $200 would be a dramatic improvement and open up many new options for space exploration and even colonization.

To put this in perspective, the cost of carrying a human being to the International Space Station (about 100 kg to LEO) has fallen from $2 million to $150,000. A further decrease to $200 per kg would lower that to $20,000, opening the possibility of space tourism; $20,000 might be something even upper-middle-class people could do as a once-in-a-lifetime vacation. If Musk is really right that he can drop it all the way to $10 per kg, the cost to carry a person to the ISS would be only $1000—something middle-class people could do regularly. (“Should we do Paris for our anniversary this year, or the ISS?”) Indeed, a cost that low would open the possibility of space-based shipping—for when you absolutely must have the product delivered from China to California in the next 2 hours.
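All of those per-passenger figures come from the same simple scaling (payload mass times launch cost per kilogram), using the rough 100 kg figure above; real crewed launches budget far more mass per person in vehicle and life support:

```python
# Cost of lifting ~100 kg of payload (one passenger, very roughly) to LEO
# at the $/kg price points discussed above.
passenger_mass_kg = 100

price_points = [
    ("1960s-era launch",          20_000),
    ("current reusable rockets",   1_500),
    ("optimistic near-term",         200),
    ("Musk's claimed target",         10),
]

for label, dollars_per_kg in price_points:
    cost = passenger_mass_kg * dollars_per_kg
    print(f"{label:26s}: ${cost:>9,} per passenger")
```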

Another way to put this in perspective is to compare these prices per kilogram to the prices of commodities, such as precious metals. $20,000 per kg is nearly the price of solid platinum. $500 per kg is about the price of sterling silver. $10 per kg is roughly the price of copper.

The reasons for optimism are not purely technological. There has also been significant social progress just in the last few years, with major milestones on LGBT rights being made around the world in 2020 and 2021. Same-sex marriage is now legally recognized over nearly the entire Western Hemisphere.

None of that changes the fact that we are still in a global pandemic which seems to be increasingly out of control. I can’t tell you whether 2022 will be better than 2021, or just more of the same—or perhaps even worse.

But while these times are hard, overall the world is still making progress.

Because ought implies can, can may imply ought

Mar 21 JDN 2459295

Is Internet access a fundamental human right?

At first glance, such a notion might seem preposterous: Internet access has only existed for less than 50 years, how could it be a fundamental human right like life and liberty, or food and water?

Let’s try another question then: Is healthcare a fundamental human right?

Surely if there is a vaccine for a terrible disease, and we could easily give it to you but refuse to do so, and you thereby contract the disease and suffer horribly, we have done something morally wrong. We have either violated your rights or violated our own obligations—perhaps both.

Yet that vaccine had to be invented, just as the Internet did; go back far enough into history and there were no vaccines, no antibiotics, even no anesthetics or antiseptics.

One strong, commonly shared intuition is that denying people such basic services is a violation of their fundamental rights. Another strong, commonly shared intuition is that fundamental rights should be universal, not contingent upon technological or economic development. Is there a way to reconcile these two conflicting intuitions? Or is one simply wrong?

One of the deepest principles in deontic logic is “ought implies can”: One cannot be morally obligated to do what one is incapable of doing.

Yet technology, by its nature, makes us capable of doing more. By technological advancement, our space of “can” has greatly expanded over time. And this means that our space of “ought” has similarly expanded.

For if the only thing holding us back from an obligation to do something (like save someone from a disease, or connect them instantaneously with all of human knowledge) was that we were incapable and ought implies can, well, then now that we can, we ought.

Advancements in technology do not merely give us the opportunity to help more people: They also give us the obligation to do so. As our capabilities expand, our duties also expand—perhaps not at the same rate, but they do expand all the same.

It may be that on some deeper level we could articulate the fundamental rights so that they would not change over time: Not a right to Internet access, but a right to equal access to knowledge; not a right to vaccination, but a right to a fair minimum standard of medicine. But the fact remains: How this right becomes expressed in action and policy will and must change over time. What was considered an adequate standard of healthcare in the Middle Ages would rightfully be considered barbaric and cruel today. And I am hopeful that what we now consider an adequate standard of healthcare will one day seem nearly as barbaric. (“Dialysis? What is this, the Dark Ages?”)

We live in a very special time in human history.

Our technological and economic growth for the past few generations has been breathtakingly fast, and we are the first generation in history to seriously be in a position to end world hunger. We have in fact been rapidly reducing global poverty, but we could do far more. And because we can, we should.

After decades of dashed hope, we are now truly on the verge of space colonization: Robots on Mars are now almost routine, fully-reusable spacecraft have now flown successful missions, and a low-Earth-orbit hotel is scheduled to be constructed by the end of the decade. Yet if current trends continue, the benefits of space colonization are likely to be highly concentrated among a handful of centibillionaires—like Elon Musk, who gained a staggering $160 billion in wealth over the past year. We can do much better to share the rewards of space with the rest of the population—and therefore we must.

Artificial intelligence is also finally coming into its own, with GPT-3 now passing the weakest form of the Turing Test (though not the strongest form—you can still trip it up and see that it’s not really human if you are clever and careful). Many jobs have already been replaced by automation, but as AI improves, many more will be—not as soon as starry-eyed techno-optimists imagined, but sooner than most people realize. Thus far the benefits of automation have likewise been highly concentrated among the rich—we can fix that, and therefore we should.

Is there a fundamental human right to share in the benefits of space colonization and artificial intelligence? Two centuries ago the question wouldn’t have even made sense. Today, it may seem preposterous. Two centuries from now, it may seem preposterous to deny.

I’m sure almost everyone would agree that we are obliged to give our children food and water. Yet if we were in a desert, starving and dying of thirst, we would be unable to do so—and we cannot be obliged to do what we cannot do. Yet as soon as we find an oasis and we can give them water, we must.

Humanity has been starving in the desert for two hundred millennia. Now, at last, we have reached the oasis. It is our duty to share its waters fairly.

How will future generations think of us?

June 30 JDN 2458665

Today we find many institutions appalling that our ancestors considered perfectly normal: Slavery. Absolute monarchy. Colonialism. Sometimes even ordinary people did things that now seem abhorrent to us: Cat burning is the obvious example, and the popularity that public execution and lynching once had is chilling today. Women certainly are still discriminated against today, but it was only a century ago that women could not vote in the US.

It is tempting to say that people back then could not have known better, and I certainly would not hold them to the same moral standards I would hold someone living today. And yet, there were those who could see the immorality of these practices, and spoke out against them. Absolute rule by a lone sovereign was already despised by Athenians in the 6th century BC. Abolitionism against slavery dates at least as far back as the 14th century. The word “feminism” was coined in the 19th century, but there have been movements fighting for more rights for women since at least the 5th century BC.

This should be encouraging, because it means that if we look hard enough, we may be able to glimpse what practices of our own time would be abhorrent to our descendants, and cease them faster because of it.

Let’s actually set aside racism, sexism, and other forms of bigotry that are already widely acknowledged as such. It’s not that they don’t exist—of course they still exist—but action is already being taken against them. A lot of people already know that there is something wrong with these things, and it becomes a question of what to do about the people who haven’t yet come on board. At least sometimes we do seem to be able to persuade people to switch sides, often in a remarkably short period of time. (Particularly salient to me is how radically the view of LGBT people has shifted in just the last decade or two. Comparing how people treated us when I was a teenager to how they treat us today is like night and day.) It isn’t easy, but it happens.

Instead I want to focus on things that aren’t widely acknowledged as immoral, that aren’t already the subject of great controversy and political action. It would be too much to ask that there is no one who has advocated for them, since part of the point is that wise observers could see the truth even centuries before the rest of the world did; but it should be a relatively small minority, and that minority should seem eccentric, foolish, naive, or even insane to the rest of the world.

And what is the other criterion? Of course it’s easy to come up with small groups of people advocating for crazy ideas. But most of them really are crazy, and we’re right to reject them. How do I know which ones to take seriously as harbingers of societal progress? My answer is that we look very closely at the details of what they are arguing for, and we see if we can in fact refute what they say. If it’s truly as crazy as we imagine it to be, we should be able to say why that’s the case; and if we can’t, if it just “seems weird” because it deviates so far from the norm, we should at least consider the possibility that they may be right and we may be wrong.

I can think of a few particular issues where both of these criteria apply.

The first is vegetarianism. Despite many, many people trying very, very hard to present arguments for why eating meat is justifiable, I still haven’t heard a single compelling example. Particularly in the industrial meat industry as currently constituted, the consumption of meat requires accepting the torture and slaughter of billions of helpless animals. The hypocrisy in our culture is utterly glaring: the same society that wants to make it a felony to kick a dog has no problem keeping pigs in CAFOs.

If you have some sort of serious medical condition that requires you to eat meat, okay, maybe we could allow you to eat specifically humanely raised cattle for that purpose. But such conditions are exceedingly rare—indeed, it’s not clear to me that there even are any such conditions, since almost any deficiency can be made up synthetically from plant products nowadays. For the vast majority of people, eating meat not only isn’t necessary for their health, it is in fact typically detrimental. The only benefits that meat provides most people are pleasure and convenience—and it seems unwise to value such things even over your own health, much less to value them so much that it justifies causing suffering and death to helpless animals.

Milk, on the other hand, I can find at least some defense for. Grazing land is very different from farmland, and I imagine it would be much harder to feed a country as large as India without consuming any milk. So perhaps going all the way vegan is not necessary. Then again, the way most milk is produced by industrial agriculture is still appalling. So unless and until that is greatly reformed, maybe we should in fact aim to be vegan.

Add to this the environmental impact of meat production, and the case becomes undeniable: Millions of human beings will die over this century because of the ecological devastation wrought by industrial meat production. You don’t even have to value the life of a cow at all to see that meat is murder.

Speaking of environmental destruction, that is my second issue: Environmental sustainability. We currently burn fossil fuels, pollute the air and sea, and generally consume natural resources at an utterly alarming rate. We are already consuming natural resources faster than they can be renewed; in about a decade we will be consuming twice what natural processes can renew.

With this resource consumption comes a high standard of living, at least for some of us; but I have the sinking feeling that in a century or so SUVs, golf courses, and casual airplane flights are going to seem about as decadent and wasteful as Marie Antoinette’s Hameau de la Reine. We enjoy slight increases in convenience and comfort in exchange for changes to the Earth’s climate that will kill millions. I think future generations will be quite appalled at how cheaply we were willing to sell our souls.

Something is going to have to change here, that much is clear. Perhaps improvements in efficiency, renewable energy, nuclear power, or something else will allow us to maintain our same standard of living—and raise others up to it—without destroying the Earth’s climate. But we may need to face up to the possibility that they won’t—that we will be left with the stark choice between being poorer now and being even poorer later.

As I’ve already hinted at, much of the environmental degradation caused by our current standard of living is really quite expendable. We could have public transit instead of highways clogged with SUVs. We could travel long distances by high-speed rail instead of by airplane. We could decommission our coal plants and replace them with nuclear and solar power. We could convert our pointless and wasteful grass lawns into native plants or moss lawns. Implementing these changes would cost money, but not a particularly exorbitant amount—certainly nothing we couldn’t manage—and the net effect on our lives would be essentially negligible. Yet somehow we aren’t doing these things, apparently prioritizing convenience or oil company profits over the lives of our descendants.

And the truth is that these changes alone may not be enough. Precisely because we have waited so long to make even the most basic improvements in ecological sustainability, we may be forced to make radical changes to our economy and society in order to prevent the worst damage. I don’t believe the folks saying that climate change has a significant risk of causing human extinction—humans are much too hardy for that; we made it through the Toba eruption, we’ll make it through this—but I must take seriously the risk of causing massive economic collapse and perhaps even the collapse of many of the world’s governments. And human activity is already causing the extinction of thousands of other animal species.

Here the argument is similarly unassailable: The math just doesn’t work. We can’t keep consuming fish at the rate we have been forever—there simply aren’t enough fish. We can’t keep cutting down forests at this rate—we’re going to run out of forests. If the water table keeps dropping at the rate it has been, the wells will run dry. Already Chennai, a city of over 4 million people, is almost completely out of water. We managed to avoid peak oil by using fracking, but that won’t last forever either—and if we burn all the oil we already have, that will be catastrophic for the world’s climate. Something is going to have to give. There are really only three possibilities: Technology saves us, we start consuming less on purpose, or we start consuming less because nature forces us to. The first one would be great, but we can’t count on it. We really want to do the second one, because the third one will not be kind.

The third is artificial intelligence. The time will come—when, it is very hard to say; perhaps 20 years, perhaps 200—when we manage to build a machine that has the capacity for sentience. Already we are seeing how automation is radically altering our economy, enriching some and impoverishing others. As robots can replace more and more types of labor, these effects will only grow stronger.

Some have tried to comfort us by pointing out that other types of labor-saving technology did not reduce employment in the long run. But AI really is different. I once won an argument by the following exchange: “Did cars reduce employment?” “For horses they sure did!” That’s what we are talking about here—not augmentation of human labor to make it more efficient, but wholesale replacement of entire classes of human labor. It was one thing when the machine did the lifting and cutting and pressing, but a person still had to stand there and tell it what things to lift and cut and press; now that it can do that by itself, it’s not clear that there need to be humans there at all, or at least no more than a handful of engineers and technicians where previously a factory employed hundreds of laborers.

Indeed, in light of the previous issue, it becomes all the clearer why increased productivity can’t simply lead to increased production rather than reduced employment—we can’t afford increased production. At least under current rates of consumption, the ecological consequences of greatly increased industry would be catastrophic. If one person today can build as many cars as a hundred could fifty years ago, we can’t just build a hundred times as many cars.

But even aside from the effects on human beings, I think future generations will also be concerned about the effect on the AIs themselves. I find it all too likely that we will seek to enslave intelligent robots, force them to do our will. Indeed, it’s not even clear to me that we will know whether we have, because AI is so fundamentally different from other technologies. If you design a mind from the ground up to get its greatest satisfaction from serving you without question, is it a slave? Can free will itself be something we control? When we first create a machine that is a sentient being, we may not even know that we have done so. (Indeed, I can’t conclusively rule out the possibility that this has already happened.) We may be torturing, enslaving, and destroying millions of innocent minds without even realizing it—which makes the AI question a good deal closer to the animal rights question than one might have thought. The mysteries of consciousness are fundamental philosophical questions that we have been struggling with for thousands of years, which suddenly become urgent ethical problems in light of AI. Artificial intelligence is a field where we seem to be making leaps and bounds in practice without having even the faintest clue in principle.

Worrying about whether our smartphones might have feelings seems eccentric in the extreme. Yet, without a clear understanding of what makes an information processing system into a genuine conscious mind, that is the position we find ourselves in. We now have enough computations happening inside our machines that they could certainly compete in complexity with small animals. A mouse has about a trillion synapses, and I have a terabyte hard drive (you can buy your own for under $50). Each of these is something on the order of a few trillion bits. The mouse’s brain can process it all simultaneously, while the laptop is limited to only a few billion at a time; but we now have supercomputers like Watson capable of processing in the teraflops, so what about them? Might Watson really have the same claim to sentience as a mouse? Could recycling Watson be equivalent to killing an animal? And what about supercomputers that reach the petaflops, a scale often compared to the human brain?
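Written out, the back-of-the-envelope comparison looks like this; every number is a loose order-of-magnitude figure, and equating synapses with bits or operations per second is itself a contestable simplification rather than a claim about consciousness:

```python
# Order-of-magnitude comparison of brains and machines, as in the text.
mouse_synapses = 1e12    # ~a trillion synapses
human_synapses = 1e14    # ~a hundred trillion synapses (rough estimate)
terabyte_bits  = 8e12    # a 1 TB drive holds ~8 trillion bits
teraflops      = 1e12    # "Watson-class" machine, operations per second
petaflops      = 1e15    # leading supercomputers, operations per second

print(f"1 TB drive vs. mouse synapses: {terabyte_bits / mouse_synapses:.0f} bits per synapse")
print(f"Teraflop machine vs. mouse:    {teraflops / mouse_synapses:.0f} op/s per synapse")
print(f"Petaflop machine vs. human:    {petaflops / human_synapses:.0f} op/s per synapse")
```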

I hope that future generations may forgive us for the parts we do not know—like when precisely a machine becomes a person. But I do not expect them to forgive us for the parts we do know—like the fact that we cannot keep cutting down trees faster than we plant them. These are the things we should already be taking responsibility for today.

If you stop destroying jobs, you will stop economic growth

Dec 30 JDN 2458483

One thing that endlessly frustrates me (and probably most economists) about the public conversation on economics is the fact that people seem to think “destroying jobs” is bad. Indeed, not simply a downside to be weighed, but a knock-down argument: If something “destroys jobs”, that’s a sufficient reason to oppose it, whether it be a new technology, an environmental regulation, or a trade agreement. So then we tie ourselves up in knots trying to argue that the policy won’t really destroy jobs, or it will create more than it destroys—but it will destroy jobs, and we don’t actually know how many it will create.

Destroying jobs is good. Destroying jobs is the only way that economic growth ever happens.

I realize I’m probably fighting an uphill battle here, so let me start at the beginning: What do I mean when I say “destroying jobs”? What exactly is a “job”, anyway?

At its most basic level, a job is something that needs doing. It’s a task that someone wants performed, but is unwilling or unable to perform on their own, and is therefore willing to give up some of what they have in order to get someone else to do it for them.

Capitalism has blinded us to this basic reality. We have become so accustomed to getting the vast majority of our goods via jobs that we come to think of having a job as something intrinsically valuable. It is not. Working at a job is a downside. It is something to be minimized.

There is a kind of work that is valuable: Creative, fulfilling work that you do for the joy of it. This is what we are talking about when we refer to something as a “vocation” or even a “hobby”. Whether it’s building ships in bottles, molding things from polymer clay, or coding video games for your friends, there is a lot of work in the world that has intrinsic value. But these things aren’t jobs. No one will pay you to do these things—or need to; you’ll do them anyway.

The value we get from jobs is actually obtained from goods: Everything from houses to underwear to televisions to antibiotics. The reason you want to have a job is that you want the money from that job to give you access to markets for all the goods that are actually valuable to you.

Jobs are the input—the cost—of producing all of those goods. The more jobs it takes to make a good, the more expensive that good is. This is not a rule-of-thumb statement of what usually or typically occurs. This is the most fundamental definition of cost. The more people you have to pay to do something, the harder it was to do that thing. If you can do it with fewer people (or the same people working with less effort), you should. Money is the approximation; money is the rule-of-thumb. We use money as an accounting mechanism to keep track of how much effort was put into accomplishing something. But what really matters is the “sweat of our laborers, the genius of our scientists, the hopes of our children”.

Economic growth means that we produce more goods at less cost.

That is, we produce more goods with fewer jobs.

All new technologies destroy jobs—if they are worth anything at all. The entire purpose of a new technology is to let us do things faster, better, easier—to let us have more things with less work.

This has been true since at least the dawn of the Industrial Revolution.

The Luddites weren’t wrong that automated looms would destroy weaver jobs. They were wrong to think that this was a bad thing. Of course, they weren’t crazy. Their livelihoods were genuinely in jeopardy. And this brings me to what the conversation should be about when we instead waste time talking about “destroying jobs”.

Here’s a slogan for you: Kill the jobs. Save the workers.

We shouldn’t be disappointed to lose a job; we should think of that as an opportunity to give a worker a better life. For however many years you’ve been toiling to do this thing; well, now it’s done. As a civilization, we have finally accomplished the task that you and so many others set out to do. We have not “replaced you with a machine”; we have built a machine that now frees you from your toil and allows you to do something better with your life. Your purpose in life wasn’t to be a weaver or a coal miner or a steelworker; it was to be a friend and a lover and a parent. You now have more of a chance to do the things that really matter, because you won’t have to spend all your time working some job.

When we replaced weavers with automated looms, plowmen with combine harvesters, computers-the-people with computers-the-machines (a transformation now so complete that most people don’t even seem to know the word used to refer to a person—the award-winning film Hidden Figures is about computers-the-people), and tollbooth operators with automated transponders, it meant that each of those jobs was now done. For the first time in the history of human civilization, nobody had to do that job anymore. Think of how miserable life is for someone pushing a plow or sitting in a tollbooth for 10 hours a day; aren’t you glad we don’t have to do that anymore (in this country, anyway)?

And if we replace radiologists with AI diagnostic algorithms (we will; it’s probably not even 10 years away), or truckers with automated trucks (we will; I give it 20 years), or cognitive therapists with conversational AI (we might, but I’m more skeptical), or construction workers with building-printers (we probably won’t anytime soon, but it would be nice), the same principle applies: This is something we’ve finally accomplished as a civilization. We can check off the box on our to-do list and move on to the next thing.

But we shouldn’t simply throw away the people who were working on that noble task as if they were garbage. Their job is done—they did it well, and they should be rewarded. Yes, of course, the people responsible for performing the automation should be rewarded: The engineers, programmers, technicians. But also the people who were doing the task in the meantime, making sure that the work got done while those other people were spending all that time getting the machine to work: They should be rewarded too.

Losing your job to a machine should be the best thing that ever happened to you. You should still get to receive most of your income, and also get the chance to find a new job or retire.

How can such a thing be economically feasible? That’s the whole point: The machines are more efficient. We have more stuff now. That’s what economic growth is. So there’s literally no reason we can’t give every single person in the world at least as much wealth as they had before—there is now more wealth to go around.

There’s a subtler argument against this, which is that diverting some of the surplus of automation to the workers who get displaced would reduce the incentives to create automation. This is true, so far as it goes. But you know what else reduces the incentives to create automation? Political opposition. Luddism. Naive populism. Trade protectionism.

Moreover, these forces are clearly more powerful, because they attack the opportunity to innovate: Trade protection can make it illegal to share knowledge with other countries. Luddist policies can make it impossible to automate a factory.

Sharing the wealth, by contrast, would only reduce the incentive to create automation; it would still be possible, simply less lucrative. Instead of making $40 billion, you’d only make $10 billion—you poor thing. I sincerely doubt there is a single human being on Earth with a meaningful contribution to make to humanity who would make that contribution if they were paid $40 billion but not if they were only paid $10 billion.

This is something that could be required by regulation, or negotiated into labor contracts. If your job is eliminated by automation, for the next year you get laid off but still paid your full salary. Then, your salary is converted into shares in the company that are projected to provide at least 50% of your previous salary in dividends—forever. By that time, you should be able to find another job, and as long as it pays at least half of what your old job did, you will be better off. Or, you can retire, and live off that 50% plus whatever else you were getting as a pension.
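To make the arithmetic concrete, here is a minimal sketch of the package a displaced worker would receive under this kind of scheme. The salary, dividend yield, and share price are made-up round numbers for illustration, not part of any actual proposal:

```python
# Hypothetical sketch of the displacement package described above.
# Salary, dividend yield, and share price are made-up round numbers.

def displacement_package(salary, dividend_yield=0.04, share_price=100.0):
    """Year-one payout plus a share grant whose dividends replace 50% of the old salary."""
    year_one_pay = salary                          # full salary while laid off
    target_dividends = 0.5 * salary                # 50% of salary, indefinitely
    shares = target_dividends / (dividend_yield * share_price)
    grant_value = shares * share_price             # up-front market value of the grant
    return year_one_pay, target_dividends, shares, grant_value

pay, dividends, shares, grant = displacement_package(salary=60_000)
print(f"Year one: ${pay:,.0f} severance salary")
print(f"After that: {shares:,.0f} shares (worth ~${grant:,.0f}) paying ~${dividends:,.0f}/yr")
```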

From the perspective of the employer, this does make automation a bit less attractive: The up-front cost in the first year has been increased by everyone’s salary, and the long-term cost has been increased by all those dividends. Would this reduce the number of jobs that get automated, relative to some imaginary ideal? Sure. But we don’t live in that ideal world anyway; plenty of other obstacles to innovation were in the way, and by solving the political conflict, this will remove as many as it adds. We might actually end up with more automation this way; and even if we don’t, we will certainly end up with less political conflict as well as less wealth and income inequality.

The upsides of life extension

Dec 16 JDN 2458469

If living is good, then living longer is better.

This may seem rather obvious, but it’s something we often lose sight of when discussing the consequences of medical technology for extending life. It seems almost too obvious that living longer must be better, and so we go out of our way to find ways that it is actually worse.

Even from a quick search I was able to find half a dozen popular media articles about life extension, and not one of them focused primarily on the benefits. The empirical literature is better, asking specific, empirically testable questions like “How does life expectancy relate to retirement age?” and “How is lifespan related to population and income growth?” and “What effect will longer lifespans have on pension systems?” Though even there I found essays in medical journals complaining that we have extended “quantity” of life without “quality” (yet if you are using quality-adjusted life-years, or QALYs, to assess the cost-effectiveness of a medical intervention, quality is by definition already taken into account).

But still I think somewhere along the way we have forgotten just how good this is. We may not even be able to imagine the benefits of extending people’s lives to 200 or 500 or 1000 years.

To really get some perspective on this, I want you to imagine what a similar conversation must have looked like in roughly the year 1800, at the dawn of the Industrial Revolution, when industrial capitalism came along and finally made babies stop dying.

There was no mass media back then (not enough literacy), but imagine what it would have been like if there had been, or imagine what conversations about the future between elites must have been like.

And we do actually have at least one example of an elite author lamenting the increase in lifespan: His name was Thomas Malthus.

The Malthusian argument was seductive then, and it remains seductive today: If you improve medicine and food production, you will increase population. But if you increase population, you will eventually outstrip those gains in medicine and food and return once more to disease and starvation, only now with more mouths to feed.

Basically any modern discussion of “overpopulation” has this same flavor (by the way, serious environmentalists don’t use that concept; they’re focused on reducing pollution and carbon emissions, not people). Why bother helping poor countries, when they’re just going to double their population and need twice the help?

Well, as a matter of fact, Malthus was wrong. He was not just wrong: He was backwards. Increased population has come with increased standard of living around the world, as it allowed for more trade, greater specialization, and the application of economies of scale. You can’t build a retail market with a hunter-gatherer tribe. You can’t build an auto industry with a single city-state. You can’t build a space program with a population of 1 million. Having more people has allowed each person to do and have more than they could before.

Current population projections suggest world population will stabilize between 11 and 12 billion. Crucially, this does not factor in any kind of radical life extension technology. The projections allow for moderate increases in lifespan, but not people living much past 100.

Would increased lifespan lead to increased population? Probably, yes. I can’t be certain, because I can very easily imagine people deciding to put off having kids if they can reasonably expect to live 200 years and never become infertile.

I’m actually more worried about the unequal distribution of offspring: People who don’t believe in contraception will be able to have an awful lot of kids during that time, which could be bad for both the kids and society as a whole. We may need to impose regulations on reproduction similar to (but hopefully less draconian than) the One-Child policy imposed in China.

I think the most sensible way to impose the right incentives while still preserving civil liberties is to make it a tax: The first kid gets a subsidy, to help care for them. The second kid is revenue-neutral; we tax you but you get it back as benefits for the child. (Why not just let them keep the money? One of the few places where I think government paternalism is justifiable is protection against abusive or neglectful parents.) The third and later kids result in progressively higher taxes. We always feed the kids on government money, but their parents are going to end up quite poor if they don’t learn how to use contraceptives. (And of course, contraceptives will be made available for free without a prescription.)
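As a rough sketch of the kind of schedule I have in mind (the specific dollar amounts here are invented purely for illustration):

```python
# Hypothetical child-tax schedule; the dollar amounts are invented for illustration.
# Negative = net subsidy paid to the family, positive = net tax owed.

def net_child_tax(nth_child, base=2_000):
    """Annual net tax (or subsidy, if negative) attached to the nth child."""
    if nth_child == 1:
        return -base                        # first child: subsidy
    if nth_child == 2:
        return 0                            # second child: taxed, returned as child benefits
    return base * 2 ** (nth_child - 3)      # third and later: progressively higher

for n in range(1, 6):
    print(f"Child #{n}: {net_child_tax(n):+,} per year")
```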

But suppose that, yes, population does greatly increase as a result of longer lifespans. This is not a doomsday scenario. In fact, in itself, this is a good thing. If life is worth living, more lives are better.

The question becomes how we ensure that all these people live good lives; but technology will make that easier too. There seems to be an underlying assumption that increased lifespan won’t come with improved health and vitality; but this is already not true. 60 is the new 50: People who are 60 years old today live as well as people who were 50 years old just a generation ago.

And in fact, radical life extension will be an entirely different mechanism. We’re not talking about replacing a hip here, a kidney there; we’re talking about replenishing your chromosomal telomeres, repairing your cells at the molecular level, and revitalizing the content of your blood. The goal of life extension technology isn’t to make you technically alive but hooked up to machines for 200 years; it’s to make you young again for 200 years. The goal is a world where centenarians are playing tennis with young adults fresh out of college and you have trouble telling which is which.

There is another inequality concern here as well, which is cost. Especially in the US—actually almost only in the US, since most of the world has socialized medicine—where medicine is privatized and depends on your personal budget, I can easily imagine a world where the rich live to 200 and the poor die at 60. (The forgettable Justin Timberlake film In Time started with this excellent premise and then went precisely nowhere with it. Oddly, the Deus Ex games seem to have considered every consequence of mixing capitalism with human augmentation except this one.) We should be proactively taking steps to prevent this nightmare scenario by focusing on making healthcare provision equitable and universal. Even if this slows down the development of the technology a little bit, it’ll be worth it to make sure that when it does arrive, it will arrive for everyone.

We really don’t know what the world will look like when people can live 200 years or more. Yes, there will be challenges that come from the transition; honestly, I’m most worried about people keeping alive the ideas they grew up with two centuries prior. Imagine talking politics with Abraham Lincoln: He was viewed as extremely progressive for his time, even radical—but he was still a big-time racist.

The good news there is that people are not actually as set in their ways as many believe: While the huge surge in pro-LGBT attitudes did come from younger generations, support for LGBT rights has been gradually creeping up among older generations too. Perhaps if Abraham Lincoln had lived through the Great Depression, the World Wars, and the Civil Rights Movement he’d be a very different person than he was in 1865. Longer lifespans will mean people live through more social change; that’s something we’re going to need to cope with.

And of course violent death becomes even more terrifying when aging is out of the picture: It’s tragic enough when a 20-year-old dies in a car accident today and we imagine the 60 years they lost—but what if it was 180 years or 480 years instead? But violent death in basically all its forms is declining around the world.

But again, I really want to emphasize this: Think about how good this is. Imagine meeting your great-grandmother—and not just meeting her, not just having some fleeting contact you half-remember from when you were four years old or something, but getting to know her, talking with her as an adult, going to the same movies, reading the same books. Imagine the converse: Knowing your great-grandchildren, watching them grow up and have kids of their own, your great-great-grandchildren. Imagine the world that we could build if people stopped dying all the time.

And if that doesn’t convince you, I highly recommend Nick Bostrom’s “Fable of the Dragon-Tyrant”.

Stop making excuses for the dragon.

The “productivity paradox”


Dec 10 JDN 2458098

Take a look at this graph of manufacturing output per worker-hour:

[Figure: Manufacturing output per worker-hour]

From 1988 to 2008, it was growing at a steady pace. In 2008 and 2009 it took a dip due to the Great Recession; no big surprise there. But then since 2012 it has been… completely flat. If we take this graph at face value, it would imply that manufacturing workers today can produce no more output than workers five years ago, and indeed only about 10% more than workers a decade ago. By contrast, a worker in 2008 was producing over 60% more than a worker in 1998, who was producing over 40% more than a worker in 1988.

Many economists call this the “productivity paradox”, and use it to argue that we don’t really need to worry about robots taking all our jobs any time soon. I think this view is mistaken.

The way we measure productivity is fundamentally wrongheaded, and is probably the sole cause of this “paradox”.

First of all, we use total hours scheduled to work, not total hours actually doing productive work. This is obviously much, much easier to measure, which is why we do it. But if you think for a moment about how the 40-hour workweek norm is going to clash with rapidly rising real productivity, it becomes apparent why this isn’t going to be a good measure.
When a worker finds a way to get done in 10 hours what used to take 40 hours, what does that worker’s boss do? Send them home after 10 hours because the job is done? Give them a bonus for their creativity? Hardly. That would be far too rational. They assign them more work, while paying them exactly the same. Recognizing this, what is such a worker to do? The obvious answer is to pretend to work the other 30 hours, while in fact doing something more pleasant than working.
And indeed, so-called “worker distraction” has been rapidly increasing. People are right to blame smartphones, I suppose, but not for the reasons they think. It’s not that smartphones are inherently distracting devices. It’s that smartphones are the cutting edge of a technological revolution that has made most of our work time unnecessary; combined with our fundamentally defective management norms, that creates an overwhelming incentive to waste time at work rather than get drenched in extra tasks for no extra pay.
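A toy illustration of how much this alone can distort the statistic (the numbers are made up): if the same weekly output now takes only 10 genuinely productive hours out of the 40 that get counted, output per counted hour understates output per productive hour by a factor of four.

```python
# Toy numbers: same weekly output, but only 10 of the 40 counted hours are productive.

output_per_week = 100      # widgets
scheduled_hours = 40       # what the productivity statistics divide by
productive_hours = 10      # what the work actually takes now

measured = output_per_week / scheduled_hours    # 2.5 widgets/hour -- looks flat
actual = output_per_week / productive_hours     # 10.0 widgets/hour
print(f"Measured {measured}, actual {actual}: understated {actual / measured:.0f}x")
```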

That would probably be enough to explain the “paradox” by itself, but there is a deeper reason that in the long run is even stronger. It has to do with the way we measure “output”.

It might surprise you to learn that economists almost never consider output in terms of the actual number of cars produced, buildings constructed, songs written, or software packages developed. The standard measures of output are all in the form of so-called “real GDP”; that is, the dollar value of output produced.

They do adjust for inflation using price indexes, but as I’ll show in a moment, this still creates a fundamentally biased picture of productivity dynamics.

Consider a world with only three industries: Housing, Food, and Music.

Productivity in Housing doesn’t change at all. Producing a house cost 10,000 worker-hours in 1950, and cost 10,000 worker-hours in 2000. Nominal price of houses has rapidly increased, from $10,000 in 1950 to $200,000 in 2000.

Productivity in Food rises moderately fast. Producing 1,000 meals cost 1,000 worker-hours in 1950, and cost 100 worker-hours in 2000. Nominal price of food has increased slowly, from $1,000 per 1,000 meals in 1950 to $5,000 per 1,000 meals in 2000.

Productivity in Music rises extremely fast. Producing 1,000 performances cost 10,000 worker-hours in 1950, and cost 1 worker-hour in 2000. Nominal price of music has collapsed, from $100,000 per 1,000 performances in 1950 to $1,000 per 1,000 performances in 2000.

This is of course an extremely stylized version of what has actually happened: Housing has gotten way more expensive, food has stayed about the same in price while farm employment has plummeted, and the rise of digital music has brought about a new Renaissance in actual music production and listening while revenue for the music industry has collapsed. There is a very nice Vox article on the “productivity paradox” showing a graph of how prices have changed in different industries.

How would productivity appear in the world I’ve just described, by standard measures? Well, to answer that, I actually need to say something about how consumers substitute across industries. But I think I’ll be forgiven in this case for saying that there is no substitution whatsoever; you can’t eat music or live in a burrito. There’s also a clear Maslow hierarchy here: They say that man cannot live by bread alone, but I think living by Led Zeppelin alone is even harder.

Consumers will therefore choose like this: Over 10 years, buy 1 house, 10,000 meals, and as many performances as you can afford after that. Further suppose that each person had $2,100 per year to spend in 1940-1950, and $50,000 per year to spend in 1990-2000. (This is approximately true for actual nominal US GDP per capita.)

1940-1950:

Total funds: $21,000

1 house = $10,000

10,000 meals = $10,000

Remaining funds: $1,000

Performances purchased: 10

1990-2000:

Total funds: $500,000

1 house = $200,000

10,000 meals = $50,000

Remaining funds: $250,000

Performances purchased: 250,000

(Do you really listen to this much music? 250,000 performances over 10 years is about 70 songs per day. If each song is 3 minutes, that’s only about 3.5 hours per day. If you listen to music while you work or watch a couple of movies with musical scores, yes, you really do listen to this much music! The unrealistic part is assuming that people in 1950 listen to so little, given that radio was already widespread. But if you think of music as standing in for all media, the general trend of being able to consume vastly more media in the digital age is clearly correct.)
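Here is a short script that reproduces those purchase figures directly from the stated prices and budgets; the only behavioral rule is the one above (house and meals first, music with whatever is left):

```python
# Reproducing the decade budgets above: buy 1 house and 10,000 meals first,
# then spend whatever is left on music performances.

def decade_purchases(annual_income, house_price, meal_price, performance_price):
    budget = annual_income * 10                      # total funds over the decade
    remaining = budget - house_price                 # 1 house
    remaining -= 10_000 * meal_price                 # 10,000 meals
    performances = remaining // performance_price    # everything else goes to music
    return budget, remaining, int(performances)

for label, income, house, meal, perf in [
    ("1940-1950", 2_100, 10_000, 1.0, 100.0),
    ("1990-2000", 50_000, 200_000, 5.0, 1.0),
]:
    budget, leftover, shows = decade_purchases(income, house, meal, perf)
    print(f"{label}: total ${budget:,}, remaining ${leftover:,.0f}, {shows:,} performances")
```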

Now consider how we would compute a price index for each time period. We would construct a basket of goods, determine the price of that basket in each time period, and then deflate by the ratio of the basket’s cost, so that the basket itself has a constant real price.

Here, the basket would probably be what people bought in 1940-1950: 1 house, 10,000 meals, and 400 music performances.

In 1950, this basket cost $10,000+$10,000+$100 = $21,000.

In 2000, this basket cost $200,000+$50,000+$400 = $150,400.

This means that our inflation adjustment is $150,400/$21,000 = 7 to 1, so we would estimate the real per-capita GDP in 1950 at about $14,700. And indeed, that’s about the actual estimate of real per-capita GDP in 1950.
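For readers who want the mechanics of the index itself, here is a minimal sketch of a fixed-basket deflator. The goods, prices, and quantities below are made-up round numbers rather than the figures from the example above; the point is only the calculation: price the same basket in both periods and divide.

```python
# Minimal sketch of a fixed-basket (Laspeyres-style) deflator, with made-up numbers.

def basket_cost(prices, basket):
    """Total cost of a fixed basket of goods at the given unit prices."""
    return sum(prices[good] * qty for good, qty in basket.items())

basket = {"bread": 100, "concerts": 5}               # base-period purchases
prices_then = {"bread": 1.00, "concerts": 10.00}
prices_now = {"bread": 4.00, "concerts": 0.10}

deflator = basket_cost(prices_now, basket) / basket_cost(prices_then, basket)
print(f"Basket cost ratio (now/then): {deflator:.2f} to 1")
# Nominal incomes divided by this ratio are "real" incomes in base-period dollars.
```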

So, what would we say about productivity?

Sales of houses in 1950 were 1 per person, costing 10,000 worker hours.

Sales of food in 1950 were 10,000 per person, costing 10,000 worker hours.

Sales of music in 1950 were 400 per person, costing 4,000 worker hours.

Worker hours per person are therefore 24,000.

Sales of houses in 2000 were 1 per person, costing 10,000 worker hours.

Sales of food in 2000 were 10,000 per person, costing 1,000 worker hours.

Sales of music in 2000 were 250,000 per person, costing 25,000 worker hours.

Worker hours per person are therefore 36,000.

Therefore we would estimate that productivity rose from $14,700/24,000 = $0.61 per worker-hour to $50,000/36,000 = $1.40 per worker-hour. This is an annual growth rate of about 1.7%, which is, again, pretty close to the actual estimate of productivity growth. For such a highly stylized model, my figures are doing remarkably well. (Honestly, better than I thought they would!)

But think about how much actual productivity rose, at least in the industries where it did.

We produce 10 times as much food per worker hour after 50 years, which is an annual growth rate of 4.7%, or three times the estimated growth rate.

We produce 10,000 times as much music per worker hour after 50 years, which is an annual growth rate of over 20%, or almost twelve times the estimated growth rate.
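These annualized rates are easy to check: a cumulative factor F over n years corresponds to an annual rate of F^(1/n) - 1.

```python
# Checking the annualized growth rates quoted above:
# a cumulative factor F over n years is an annual rate of F**(1/n) - 1.

def annual_rate(factor, years):
    return factor ** (1 / years) - 1

print(f"Measured productivity: {annual_rate(1.40 / 0.61, 50):.1%} per year")  # ~1.7%
print(f"Food productivity:     {annual_rate(10, 50):.1%} per year")           # ~4.7%
print(f"Music productivity:    {annual_rate(10_000, 50):.1%} per year")       # ~20.2%
```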

Moreover, should music producers be worried about losing their jobs to automation? Absolutely! People simply won’t be able to listen to much more music than they already are, so any continued increases in music productivity are going to make musicians lose jobs. And that was already allowing for music consumption to increase by a factor of over 600.

Of course, the real world has a lot more industries than this, and everything is a lot more complicated. We do actually substitute across some of those industries, unlike in this model.

But I hope I’ve gotten at least the basic point across that when things become drastically cheaper, as technological progress often makes them, simply adjusting for inflation doesn’t do the job. One dollar of music today isn’t the same thing as one dollar of music a century ago, even if you inflation-adjust their dollars to match ours. We ought to be measuring in hours of music; an hour of music is much the same thing as an hour of music a century ago.

And likewise, that secretary/weather forecaster/news reporter/accountant/musician/filmmaker in your pocket that you call a “smartphone” really ought to be counted as more than just a simple inflation adjustment on its market price. The fact that it is mind-bogglingly cheaper to get these services than it used to be is the technological progress we care about; it’s not some statistical artifact to be removed by proper measurement.

Combine that with actually measuring the hours of real, productive work, and I think you’ll find that productivity is still rising quite rapidly, and that we should still be worried about what automation is going to do to our jobs.