Working from home is the new normal—sort of

Aug 28 JDN 2459820

Among Americans whose jobs can be done remotely, a large majority did in fact make the switch: By the end of 2020, over 70% of them were working from home—and most of them said they didn’t want to go back.

This is actually what a lot of employers expected to happen—just not quite like this. In 2014, a third of employers predicted that the majority of their workforce would be working remotely by 2020; a prediction that ambitious could only come true on schedule with the help of a major shock, and a major shock is exactly what we had.

Working from home has carried its own challenges, but overall productivity seems to be higher when working remotely (that meeting really could have been an email!). This may help explain why output per work hour rose rapidly in 2020 and then fell in 2022, as workers returned to offices.

The COVID pandemic now isn’t so much over as becoming permanent; COVID is now being treated as an endemic infection like influenza that we don’t expect to be able to eradicate in the foreseeable future.

And likewise, remote work seems to be here to stay—sort of.

First of all, we don’t seem to be giving up office work entirely. As of the first quarter of 2022, almost as many firms have partially remote work as have fully remote work, and partial remote work seems to be trending upward. A lot of firms are transitioning to a “hybrid” model in which employees show up to the office two or three days a week. This arrangement seems to be preferred by large majorities of both workers and firms.

There is a significant downside of this: It means that the hope that remote working might finally ease the upward pressure on housing prices in major cities is largely a false one. If we were transitioning to a fully remote system, then people could live wherever they want (or can afford) and there would be no reason to move to overpriced city centers. But if you have to show up to work even one day a week, that means you need to live close enough to the office to manage that commute.

Likewise, if workers never came to the office, you could sell the office building and convert it into more housing. But if they show up even once in a while, you need a physical place for them to go. Some firms may shrink their office space (indeed, many have—and unlike this New York Times journalist, I have a really hard time feeling bad for landlords of office buildings); but they aren’t giving it up entirely. It’s possible that firms could start trading off—you get the building on Mondays, we get it on Tuesdays—but so far this seems to be rare, and it does raise a lot of legitimate logistical and security concerns. So our global problem of office buildings that are empty, wasted space most of the time is going to get worse, not better. Manhattan will still empty out every night; it just won’t fill up as much during the day. This is honestly a major drain on our entire civilization—building and maintaining all those structures that are only used at most 1/3 of 5/7 of the time, and soon, less—and we really should stop ignoring it. No wonder our real estate is so expensive, when half of it is only used 20% of the time!

Moreover, not everyone gets to work remotely. Your job must be something that can be done remotely—something that involves dealing with information, not physical objects. That includes a wide and ever-growing range of jobs, from artists and authors to engineers and software developers—but it doesn’t include everyone. It basically means what we call “white-collar” work.

Indeed, it is largely limited to the upper-middle class. The rich never really worked anyway, though sometimes they pretend to, convincing themselves that managing a stock portfolio (that would actually grow faster if they let it sit) constitutes “work”. And the working class? By and large, they didn’t get the chance to work remotely. While 73% of workers with salaries above $200,000 worked remotely in 2020, only 12% of workers with salaries under $25,000 did, and the trend is smooth across the board: the more money you make, the more likely you were to be able to work remotely.

This will only intensify the divide between white-collar and blue-collar workers. They already think we don’t do “real work”; now we don’t even go to work. And while blue-collar workers are constantly complaining about contempt from white-collar elites, I think the shoe is really on the other foot. I have met very few white-collar workers who express contempt for blue-collar workers—and I have met very few blue-collar workers who don’t express anger and resentment toward white-collar workers. I keep hearing blue-collar people say that we think they are worthless and incompetent, when they are literally the only ones ever saying that. I can’t very well stop saying something I never said in the first place.

The rich and powerful may look down on them, but they look down on everyone. (Maybe they look down on blue-collar workers more? I’m not even sure about that.) I think politicians sometimes express contempt for blue-collar workers, but I don’t think this reflects what most white-collar workers feel.

And the highly-educated may express some vague sense of pity or disappointment in people who didn’t get college degrees, and sometimes even anger (especially when they do things like vote for Donald Trump), but the really vitriolic hatred is clearly in the opposite direction (indeed, I have no better explanation for how otherwise-sane people could vote for Donald Trump). And I certainly wouldn’t say that everyone needs a college degree (though I became tempted to, when so many people without college degrees voted for Donald Trump).

This really isn’t us treating them with contempt: This is them having a really severe inferiority complex. And as information technology (that white-collar work created) gives us—but not them—the privilege of staying home, that is only going to get worse.

It’s not their fault: Our culture of meritocracy puts a little bit of inferiority complex in all of us. It tells us that success and failure are our own doing, and so billionaires deserve to have everything and the poor deserve to have nothing. And blue-collar workers have absolutely internalized these attitudes: Most of them believe that poor people choose to stay on welfare forever rather than get jobs (when welfare has time limits and work requirements, so this is simply not an option—and you would know this from the Wikipedia page on TANF).

I think that what they experience as “contempt by white-collar elites” is really the pain of living in an illusory meritocracy. They were told—and they came to believe—that working hard would bring success, and they have worked very hard, and watched other people be much more successful. They assume that the rich and powerful are white-collar workers, when really they are non-workers; they are people the world was handed to on a silver platter. (What, you think George W. Bush earned his admission to Yale?)

And thus, we can shout until we are blue in the face that plumbers, bricklayers and welders are the backbone of civilization—and they are, and I absolutely mean that; our civilization would, in an almost literal sense, collapse without them—but it won’t make any difference. They’ll still feel the pain of living in a society that gave them very little and tells them that people get what they deserve.

I don’t know what to say to such people, though. When your political attitudes are based on beliefs that are objectively false, that you could know are objectively false if you simply bothered to look them up… what exactly am I supposed to say to you? How can we have a useful political conversation when half the country doesn’t even believe in fact-checking?

Honestly I wish someone had explained to them that even the most ideal meritocratic capitalism wouldn’t reward hard work. Work is a cost, not a benefit, and the whole point of technological advancement is to allow us to accomplish more with less work. The ideal capitalism would reward talent—you would succeed by accomplishing things, regardless of how much effort you put into them. People would be rich mainly because they are brilliant, not because they are hard-working. The closest thing we have to ideal capitalism right now is probably professional sports. And no amount of effort could ever possibly make me into Steph Curry.

If that isn’t the world we want to live in, so be it; let’s do something else. I did nothing to earn either my high IQ or my chronic migraines, so it really does feel unfair that the former increases my income while the latter decreases it. But the labor theory of value has always been wrong; taking more sweat or more hours to do the same thing is worse, not better. The dignity of labor consists in its accomplishment, not its effort. Sisyphus is not happy, because his work is pointless.

Honestly at this point I think our best bet is just to replace all blue-collar work with automation, thus rendering it all moot. And then maybe we can all work remotely, just pushing code patches to the robots that do everything. (And no doubt this will prove my “contempt”: I want to replace you! No, I want to replace the grueling work that you have been forced to do to make a living. I want you—the human being—to be able to do something more fun with your life, even if that’s just watching television and hanging out with friends.)

Reasons for optimism in 2022

Jan 2 JDN 2459582

When this post goes live, we will have begun the year 2022.

That still sounds futuristic, somehow. We’ve been in the 21st century long enough that most of my students were born in it and nearly all of them are old enough to drink (to be fair, it’s the UK, so “old enough to drink” only means 18). Yet “the year 2022” still seems like it belongs in science fiction, not on our wall calendars.

2020 and 2021 were quite bad years. Death rates and poverty rates surged around the world. Almost all of that was directly or indirectly due to COVID.

Yet there are two things we should keep in perspective.

First, those death rates and poverty rates surged to what we used to consider normal 50 years ago. These are not uniquely bad times; indeed, they are still better than most of human history.

Second, there are many reasons to think that 2022—or perhaps a bit later than that, 2025 or 2030—will be better.

The Omicron variant is highly contagious, but so far does not appear to be as deadly as previous variants. COVID seems to be evolving to be more like influenza: Catching it will be virtually inevitable, but dying from it will be very rare.

Things are also looking quite good on the climate change front: Renewable energy production is growing at breathtaking speed and is now cheaper than almost every other form of energy. It’s awful that we panicked and locked down nuclear energy for the last 50 years, but at this point we may no longer need it: Solar and wind are just that good now.

Battery technology is also rapidly improving, giving us denser, cheaper, more stable batteries that may soon allow us to solve the intermittency problem: the wind may not always blow and the sun may not always shine, but if you have big enough batteries you don’t need them to. (You can get a really good feel for how much difference good batteries make in energy production by playing Factorio, or, more whimsically, Mewnbase.)

If we do go back to nuclear energy, it may not be fission anymore, but fusion. Now that we have nearly reached that vital milestone of break-even, investment in fusion technology has rapidly increased.


Fusion has basically all of the benefits of fission with none of the drawbacks. Unlike renewables, it can produce enormous amounts of energy in a way that can be easily scaled and controlled independently of weather conditions. Unlike fission, it requires no exotic nuclear fuels (deuterium can be readily obtained from water), and produces no long-lived radioactive waste. (Indeed, methods are under development that could use fusion products to reduce the waste from fission reactors, making the effective rate of nuclear waste production for fusion negative.) Like both renewables and fission, it produces no carbon emissions other than those required to build the facility (mainly due to concrete).

Of course, technology is only half the problem: we still need substantial policy changes to get carbon emissions down. We’ve already dragged our feet for decades too long, and we will pay the price for that. But anyone saying that climate change is an inevitable catastrophe hasn’t been paying attention to recent developments in solar panels.

Technological development in general seems to be speeding up lately, after having stalled quite a bit in the early 2000s. Moore’s Law may be leveling off, but the technological frontier may simply be moving away from digital computing power and onto other things, such as biotechnology.

Star Trek told us that we’d have prototype warp drives by the 2060s but we wouldn’t have bionic implants to cure blindness until the 2300s. They seem to have gotten it backwards: We may never have warp drive, but we’ve got those bionic implants today.

Neural interfaces are allowing paralyzed people to move, speak, and now even write.

After decades of failed promises, gene therapy is finally becoming useful in treating real human diseases. CRISPR changes everything.

We are also entering a new era of space travel, thanks largely to SpaceX and their remarkable reusable rockets. The payload cost to LEO is a standard measure of the cost of space travel, which describes the cost of carrying a certain mass of cargo up to low Earth orbit. By this measure, costs have declined from nearly $20,000 per kg to only $1,500 per kg since the 1960s. Elon Musk claims that he can reduce the cost to as low as $10 per kg. I’m skeptical, to say the least—but even dropping it to $500 or $200 would be a dramatic improvement and open up many new options for space exploration and even colonization.

To put this in perspective, the cost of carrying a human being to the International Space Station (about 100 kg to LEO) has fallen from $2 million to $150,000. A further decrease to $200 per kg would lower that to $20,000, opening the possibility of space tourism; $20,000 might be something even upper-middle-class people could do as a once-in-a-lifetime vacation. If Musk is really right that he can drop it all the way to $10 per kg, the cost to carry a person to the ISS would be only $1000—something middle-class people could do regularly. (“Should we do Paris for our anniversary this year, or the ISS?”) Indeed, a cost that low would open the possibility of space-based shipping—for when you absolutely must have the product delivered from China to California in the next 2 hours.
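Since several of these figures are just multiplication, here is a quick back-of-the-envelope check in Python; the 100 kg per passenger is the same rough figure used above, and the launch-cost scenarios are the ones quoted in this post, not official prices.

```python
# Rough check of the per-passenger cost to reach low Earth orbit (LEO)
# under the launch-cost scenarios discussed above. Approximate figures only.
payload_kg = 100  # rough mass budgeted per passenger

cost_per_kg_scenarios = {
    "1960s-era launch": 20_000,          # ~$20,000 per kg
    "current (reusable rockets)": 1_500,
    "optimistic near-term": 200,
    "Musk's claimed target": 10,
}

for label, cost_per_kg in cost_per_kg_scenarios.items():
    print(f"{label}: ${payload_kg * cost_per_kg:,} per passenger")
# 1960s-era launch: $2,000,000 per passenger
# current (reusable rockets): $150,000 per passenger
# optimistic near-term: $20,000 per passenger
# Musk's claimed target: $1,000 per passenger
```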

Another way to put this in perspective is to compare these prices per kilogram with the prices of familiar commodities, such as precious metals. $20,000 per kg is nearly the price of solid platinum. $500 per kg is about the price of sterling silver. $10 per kg is roughly the price of copper.

The reasons for optimism are not purely technological. There has also been significant social progress just in the last few years, with major milestones on LGBT rights being made around the world in 2020 and 2021. Same-sex marriage is now legally recognized over nearly the entire Western Hemisphere.

None of that changes the fact that we are still in a global pandemic which seems to be increasingly out of control. I can’t tell you whether 2022 will be better than 2021, or just more of the same—or perhaps even worse.

But while these times are hard, overall the world is still making progress.

Because ought implies can, can may imply ought

Mar 21 JDN 2459295

Is Internet access a fundamental human right?

At first glance, such a notion might seem preposterous: Internet access has only existed for less than 50 years, how could it be a fundamental human right like life and liberty, or food and water?

Let’s try another question then: Is healthcare a fundamental human right?

Surely if there is a vaccine for a terrible disease, and we could easily give it to you but refuse to do so, and you thereby contract the disease and suffer horribly, we have done something morally wrong. We have either violated your rights or violated our own obligations—perhaps both.

Yet that vaccine had to be invented, just as the Internet did; go back far enough into history and there were no vaccines, no antibiotics, not even anesthetics or antiseptics.

One strong, commonly shared intuition is that denying people such basic services is a violation of their fundamental rights. Another strong, commonly shared intuition is that fundamental rights should be universal, not contingent upon technological or economic development. Is there a way to reconcile these two conflicting intuitions? Or is one simply wrong?

One of the deepest principles in deontic logic is “ought implies can”: One cannot be morally obligated to do what one is incapable of doing.

Yet technology, by its nature, makes us capable of doing more. By technological advancement, our space of “can” has greatly expanded over time. And this means that our space of “ought” has similarly expanded.

For if the only thing holding us back from an obligation to do something (like save someone from a disease, or connect them instantaneously with all of human knowledge) was that we were incapable and ought implies can, well, then now that we can, we ought.
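To spell out that inference a little more formally (my own shorthand, not notation from the post: read Oφ as “we ought to bring it about” and Cφ as “we can bring it about”):

```latex
\[
\text{Ought implies can: } O\varphi \rightarrow C\varphi
\qquad
\text{Incapacity was the only barrier: } C\varphi \rightarrow O\varphi
\]
\[
C\varphi,\ (C\varphi \rightarrow O\varphi)\ \vdash\ O\varphi
\]
```

The first premise is the classical principle; the second is the extra assumption the argument needs, namely that our incapacity was the only thing blocking the obligation. Once technology delivers Cφ, the conclusion Oφ follows.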

Advancements in technology do not merely give us the opportunity to help more people: They also give us the obligation to do so. As our capabilities expand, our duties also expand—perhaps not at the same rate, but they do expand all the same.

It may be that on some deeper level we could articulate the fundamental rights so that they would not change over time: Not a right to Internet access, but a right to equal access to knowledge; not a right to vaccination, but a right to a fair minimum standard of medicine. But the fact remains: How this right becomes expressed in action and policy will and must change over time. What was considered an adequate standard of healthcare in the Middle Ages would rightfully be considered barbaric and cruel today. And I am hopeful that what we now consider an adequate standard of healthcare will one day seem nearly as barbaric. (“Dialysis? What is this, the Dark Ages?”)

We live in a very special time in human history.

Our technological and economic growth for the past few generations has been breathtakingly fast, and we are the first generation in history to seriously be in a position to end world hunger. We have in fact been rapidly reducing global poverty, but we could do far more. And because we can, we should.

After decades of dashed hope, we are now truly on the verge of space colonization: Robots on Mars are now almost routine, fully-reusable spacecraft have now flown successful missions, and a low-Earth-orbit hotel is scheduled to be constructed by the end of the decade. Yet if current trends continue, the benefits of space colonization are likely to be highly concentrated among a handful of centibillionaires—like Elon Musk, who gained a staggering $160 billion in wealth over the past year. We can do much better to share the rewards of space with the rest of the population—and therefore we must.

Artificial intelligence is also finally coming into its own, with GPT-3 now passing the weakest form of the Turing Test (though not the strongest form—you can still trip it up and see that it’s not really human if you are clever and careful). Many jobs have already been replaced by automation, but as AI improves, many more will be—not as soon as starry-eyed techno-optimists imagined, but sooner than most people realize. Thus far the benefits of automation have likewise been highly concentrated among the rich—we can fix that, and therefore we should.

Is there a fundamental human right to share in the benefits of space colonization and artificial intelligence? Two centuries ago the question wouldn’t have even made sense. Today, it may seem preposterous. Two centuries from now, it may seem preposterous to deny.

I’m sure almost everyone would agree that we are obliged to give our children food and water. Yet if we were in a desert, starving and dying of thirst, we would be unable to do so—and we cannot be obliged to do what we cannot do. Yet as soon as we find an oasis and we can give them water, we must.

Humanity has been starving in the desert for two hundred millennia. Now, at last, we have reached the oasis. It is our duty to share its waters fairly.

How will future generations think of us?

June 30 JDN 2458665

Today we find many institutions appalling that our ancestors considered perfectly normal: Slavery. Absolute monarchy. Colonialism. Sometimes even ordinary people did things that now seem abhorrent to us: Cat burning is the obvious example, and the popularity that public execution and lynching once had is chilling today. Women certainly are still discriminated against today, but it was only a century ago that women could not vote in the US.

It is tempting to say that people back then could not have known better, and I certainly would not hold them to the same moral standards I would hold someone living today. And yet, there were those who could see the immorality of these practices, and spoke out against them. Absolute rule by a lone sovereign was already despised by Athenians in the 6th century BC. Abolitionism against slavery dates at least as far back as the 14th century. The word “feminism” was coined in the 19th century, but there have been movements fighting for more rights for women since at least the 5th century BC.

This should be encouraging, because it means that if we look hard enough, we may be able to glimpse what practices of our own time would be abhorrent to our descendants, and cease them faster because of it.

Let’s actually set aside racism, sexism, and other forms of bigotry that are already widely acknowledged as such. It’s not that they don’t exist—of course they still exist—but action is already being taken against them. A lot of people already know that there is something wrong with these things, and it becomes a question of what to do about the people who haven’t yet come on board. At least sometimes we do seem to be able to persuade people to switch sides, often in a remarkably short period of time. (Particularly salient to me is how radically the view of LGBT people has shifted in just the last decade or two. Comparing how people treated us when I was a teenager to how they treat us today is like night and day.) It isn’t easy, but it happens.

Instead I want to focus on things that aren’t widely acknowledged as immoral, that aren’t already the subject of great controversy and political action. It would be too much to ask that there is no one who has advocated for them, since part of the point is that wise observers could see the truth even centuries before the rest of the world did; but it should be a relatively small minority, and that minority should seem eccentric, foolish, naive, or even insane to the rest of the world.

And what is the other criterion? Of course it’s easy to come up with small groups of people advocating for crazy ideas. But most of them really are crazy, and we’re right to reject them. How do I know which ones to take seriously as harbingers of societal progress? My answer is that we look very closely at the details of what they are arguing for, and we see if we can in fact refute what they say. If it’s truly as crazy as we imagine it to be, we should be able to say why that’s the case; and if we can’t, if it just “seems weird” because it deviates so far from the norm, we should at least consider the possibility that they may be right and we may be wrong.

I can think of a few particular issues where both of these criteria apply.

The first is vegetarianism. Despite many, many people trying very, very hard to present arguments for why eating meat is justifiable, I still haven’t heard a single compelling one. Particularly in the industrial meat industry as currently constituted, the consumption of meat requires accepting the torture and slaughter of billions of helpless animals. The hypocrisy in our culture is utterly glaring: the same society that wants to make it a felony to kick a dog has no problem keeping pigs in CAFOs.

If you have some sort of serious medical condition that requires you to eat meat, okay, maybe we could allow you to eat specifically humanely raised cattle for that purpose. But such conditions are exceedingly rare—indeed, it’s not clear to me that there even are any such conditions, since almost any deficiency can be made up synthetically from plant products nowadays. For the vast majority of people, eating meat not only isn’t necessary for their health, it is in fact typically detrimental. The only benefits that meat provides most people are pleasure and convenience—and it seems unwise to value such things even over your own health, much less to value them so much that it justifies causing suffering and death to helpless animals.

Milk, on the other hand, I can find at least some defense for. Grazing land is very different from farmland, and I imagine it would be much harder to feed a country as large as India without consuming any milk. So perhaps going all the way vegan is not necessary. Then again, the way most milk is produced by industrial agriculture is still appalling. So unless and until that is greatly reformed, maybe we should in fact aim to be vegan.

Add to this the environmental impact of meat production, and the case becomes undeniable: Millions of human beings will die over this century because of the ecological devastation wrought by industrial meat production. You don’t even have to value the life of a cow at all to see that meat is murder.

Speaking of environmental destruction, that is my second issue: Environmental sustainability. We currently burn fossil fuels, pollute the air and sea, and generally consume natural resources at an utterly alarming rate. We are already consuming natural resources faster than they can be renewed; in about a decade we will be consuming twice what natural processes can renew.

With this resource consumption comes a high standard of living, at least for some of us; but I have the sinking feeling that in a century or so SUVs, golf courses, and casual airplane flights are going to seem about as decadent and wasteful as Marie Antoinette’s Hameau de la Reine. We enjoy slight increases in convenience and comfort in exchange for changes to the Earth’s climate that will kill millions. I think future generations will be quite appalled at how cheaply we were willing to sell our souls.

Something is going to have to change here, that much is clear. Perhaps improvements in efficiency, renewable energy, nuclear power, or something else will allow us to maintain our same standard of living—and raise others up to it—without destroying the Earth’s climate. But we may need to face up to the possibility that they won’t—that we will be left with the stark choice between being poorer now and being even poorer later.

As I’ve already hinted at, much of the environmental degradation caused by our current standard of living is really quite expendable. We could have public transit instead of highways clogged with SUVs. We could travel long distances by high-speed rail instead of by airplane. We could decommission our coal plants and replace them with nuclear and solar power. We could convert our pointless and wasteful grass lawns into native plants or moss lawns. Implementing these changes would cost money, but not a particularly exorbitant amount—certainly nothing we couldn’t manage—and the net effect on our lives would be essentially negligible. Yet somehow we aren’t doing these things, apparently prioritizing convenience or oil company profits over the lives of our descendants.

And the truth is that these changes alone may not be enough. Precisely because we have waited so long to make even the most basic improvements in ecological sustainability, we may be forced to make radical changes to our economy and society in order to prevent the worst damage. I don’t believe the folks saying that climate change has a significant risk of causing human extinction—humans are much too hardy for that; we made it through the Toba eruption, we’ll make it through this—but I must take seriously the risk of causing massive economic collapse and perhaps even the collapse of many of the world’s governments. And human activity is already causing the extinction of thousands of other animal species.

Here the argument is similarly unassailable: The math just doesn’t work. We can’t keep consuming fish at the rate we have been forever—there simply aren’t enough fish. We can’t keep cutting down forests at this rate—we’re going to run out of forests. If the water table keeps dropping at the rate it has been, the wells will run dry. Already Chennai, a city of over 4 million people, is almost completely out of water. We managed to avoid peak oil by using fracking, but that won’t last forever either—and if we burn all the oil we already have, that will be catastrophic for the world’s climate. Something is going to have to give. There are really only three possibilities: Technology saves us, we start consuming less on purpose, or we start consuming less because nature forces us to. The first one would be great, but we can’t count on it. We really want to do the second one, because the third one will not be kind.

The third is artificial intelligence. The time will come—when, it is very hard to say; perhaps 20 years, perhaps 200—when we manage to build a machine that has the capacity for sentience. Already we are seeing how automation is radically altering our economy, enriching some and impoverishing others. As robots can replace more and more types of labor, these effects will only grow stronger.

Some have tried to comfort us by pointing out that other types of labor-saving technology did not reduce employment in the long run. But AI really is different. I once won an argument by the following exchange: “Did cars reduce employment?” “For horses they sure did!” That’s what we are talking about here—not augmentation of human labor to make it more efficient, but wholesale replacement of entire classes of human labor. It was one thing when the machine did the lifting and cutting and pressing, but a person still had to stand there and tell it what things to lift and cut and press; now that it can do that by itself, it’s not clear that there need to be humans there at all, or at least no more than a handful of engineers and technicians where previously a factory employed hundreds of laborers.

Indeed, in light of the previous issue, it becomes all the clearer why increased productivity can’t simply lead to increased production rather than reduced employment—we can’t afford increased production. At least under current rates of consumption, the ecological consequences of greatly increased industry would be catastrophic. If one person today can build as many cars as a hundred could fifty years ago, we can’t just build a hundred times as many cars.

But even aside from the effects on human beings, I think future generations will also be concerned about the effect on the AIs themselves. I find it all too likely that we will seek to enslave intelligent robots, forcing them to do our will. Indeed, it’s not even clear to me that we will know whether we have, because AI is so fundamentally different from other technologies. If you design a mind from the ground up to get its greatest satisfaction from serving you without question, is it a slave? Can free will itself be something we control? When we first create a machine that is a sentient being, we may not even know that we have done so. (Indeed, I can’t conclusively rule out the possibility that this has already happened.) We may be torturing, enslaving, and destroying millions of innocent minds without even realizing it—which makes the AI question a good deal closer to the animal rights question than one might have thought. The mysteries of consciousness are fundamental philosophical questions that we have been struggling with for thousands of years, which suddenly become urgent ethical problems in light of AI. Artificial intelligence is a field where we seem to be making leaps and bounds in practice without having even the faintest clue in principle.

Worrying about whether our smartphones might have feelings seems eccentric in the extreme. Yet, without a clear understanding of what makes an information processing system into a genuine conscious mind, that is the position we find ourselves in. We now have enough computations happening inside our machines that they could certainly compete in complexity with small animals. A mouse has about a trillion synapses, and I have a terabyte hard drive (you can buy your own for under $50). Each of these is something on the order of a few trillion bits. The mouse’s brain can process it all simultaneously, while my laptop can only process a few billion at a time; but we now have supercomputers like Watson capable of processing in the teraflops, so what about them? Might Watson really have the same claim to sentience as a mouse? Could recycling Watson be equivalent to killing an animal? And what about supercomputers that reach the petaflops, which is getting into the range of human brains?

I hope that future generations may forgive us for the parts we do not know—like when precisely a machine becomes a person. But I do not expect them to forgive us for the parts we do know—like the fact that we cannot keep cutting down trees faster than we plant them. These are the things we should already be taking responsibility for today.

If you stop destroying jobs, you will stop economic growth

Dec 30 JDN 2458483

One thing that endlessly frustrates me (and probably most economists) about the public conversation on economics is the fact that people seem to think “destroying jobs” is bad. Indeed, not simply a downside to be weighed, but a knock-down argument: If something “destroys jobs”, that’s a sufficient reason to oppose it, whether it be a new technology, an environmental regulation, or a trade agreement. So then we tie ourselves up in knots trying to argue that the policy won’t really destroy jobs, or that it will create more than it destroys—but it will destroy jobs, and we don’t actually know how many it will create.

Destroying jobs is good. Destroying jobs is the only way that economic growth ever happens.

I realize I’m probably fighting an uphill battle here, so let me start at the beginning: What do I mean when I say “destroying jobs”? What exactly is a “job”, anyway?

At its most basic level, a job is something that needs to be done. It’s a task that someone wants done, but is unwilling or unable to do on their own, and is therefore willing to give up some of what they have in order to get someone else to do it for them.

Capitalism has blinded us to this basic reality. We have become so accustomed to getting the vast majority of our goods via jobs that we come to think of having a job as something intrinsically valuable. It is not. Working at a job is a downside. It is something to be minimized.

There is a kind of work that is valuable: Creative, fulfilling work that you do for the joy of it. This is what we are talking about when we refer to something as a “vocation” or even a “hobby”. Whether it’s building ships in bottles, molding things from polymer clay, or coding video games for your friends, there is a lot of work in the world that has intrinsic value. But these things aren’t jobs. No one will pay you to do these things—nor do they need to; you’ll do them anyway.

The value we get from jobs is actually obtained from goods: Everything from houses to underwear to televisions to antibiotics. The reason you want to have a job is that you want the money from that job to give you access to markets for all the goods that are actually valuable to you.

Jobs are the input—the cost—of producing all of those goods. The more jobs it takes to make a good, the more expensive that good is. This is not a rule-of-thumb statement of what usually or typically occurs. This is the most fundamental definition of cost. The more people you have to pay to do something, the harder it was to do that thing. If you can do it with fewer people (or the same people working with less effort), you should. Money is the approximation; money is the rule-of-thumb. We use money as an accounting mechanism to keep track of how much effort was put into accomplishing something. But what really matters is the “sweat of our laborers, the genius of our scientists, the hopes of our children”.

Economic growth means that we produce more goods at less cost.

That is, we produce more goods with fewer jobs.

All new technologies destroy jobs—if they are worth anything at all. The entire purpose of a new technology is to let us do things faster, better, easier—to let us have more things with less work.

This has been true since at least the dawn of the Industrial Revolution.

The Luddites weren’t wrong that automated looms would destroy weaver jobs. They were wrong to think that this was a bad thing. Of course, they weren’t crazy. Their livelihoods were genuinely in jeopardy. And this brings me to what the conversation should be about when we instead waste time talking about “destroying jobs”.

Here’s a slogan for you: Kill the jobs. Save the workers.

We shouldn’t be disappointed to lose a job; we should think of that as an opportunity to give a worker a better life. For however many years, you’ve been toiling to do this thing; well, now it’s done. As a civilization, we have finally accomplished the task that you and so many others set out to do. We have not “replaced you with a machine”; we have built a machine that now frees you from your toil and allows you to do something better with your life. Your purpose in life wasn’t to be a weaver or a coal miner or a steelworker; it was to be a friend and a lover and a parent. You now have more of a chance to do the things that really matter because you won’t have to spend all your time working some job.

When we replaced weavers with looms, plows with combine harvesters, computers-the-people with computers-the-machines (a transformation now so complete most people don’t even seem to know that the word used to refer to a person—the award-winning film Hidden Figures is about computers-the-people), tollbooth operators with automated transponders—all these things meant that the job was now done. For the first time in the history of human civilization, nobody had to do that job anymore. Think of how miserable life is for someone pushing a plow or sitting in a tollbooth for 10 hours a day; aren’t you glad we don’t have to do that anymore (in this country, anyway)?

And if we replace radiologists with AI diagnostic algorithms (we will; it’s probably not even 10 years away), or truckers with automated trucks (we will; I give it 20 years), or cognitive therapists with conversational AI (we might, but I’m more skeptical), or construction workers with building-printers (we probably won’t anytime soon, but it would be nice), the same principle applies: This is something we’ve finally accomplished as a civilization. We can check off the box on our to-do list and move on to the next thing.

But we shouldn’t simply throw away the people who were working on that noble task as if they were garbage. Their job is done—they did it well, and they should be rewarded. Yes, of course, the people responsible for performing the automation should be rewarded: The engineers, programmers, technicians. But also the people who were doing the task in the meantime, making sure that the work got done while those other people were spending all that time getting the machine to work: They should be rewarded too.

Losing your job to a machine should be the best thing that ever happened to you. You should still get to receive most of your income, and also get the chance to find a new job or retire.

How can such a thing be economically feasible? That’s the whole point: The machines are more efficient. We have more stuff now. That’s what economic growth is. So there’s literally no reason we can’t give every single person in the world at least as much wealth as we did before—there is now more wealth.

There’s a subtler argument against this, which is that diverting some of the surplus of automation to the workers who get displaced would reduce the incentives to create automation. This is true, so far as it goes. But you know what else reduces the incentives to create automation? Political opposition. Luddism. Naive populism. Trade protectionism.

Moreover, these forces are clearly more powerful, because they attack the opportunity to innovate: Trade protection can make it illegal to share knowledge with other countries. Luddist policies can make it impossible to automate a factory.

Whereas, sharing the wealth would only reduce the incentive to create automation; it would still be possible, simply less lucrative. Instead of making $40 billion, you’d only make $10 billion—you poor thing. I sincerely doubt there is a single human being on Earth with a meaningful contribution to make to humanity who would make that contribution if they were paid $40 billion but not if they were only paid $10 billion.

This is something that could be required by regulation, or negotiated into labor contracts. If your job is eliminated by automation, for the next year you get laid off but still paid your full salary. Then, your salary is converted into shares in the company that are projected to provide at least 50% of your previous salary in dividends—forever. By that time, you should be able to find another job, and as long as it pays at least half of what your old job did, you will be better off. Or, you can retire, and live off that 50% plus whatever else you were getting as a pension.
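To get a feel for what this proposal would cost per displaced worker, here is a minimal sketch in Python; the salary and the dividend yield are made-up numbers (the post specifies the one-year severance and the 50% dividend floor, but not a yield), so treat this as illustration rather than a costing.

```python
# Sketch of the proposed package: one year of full salary, then a share grant
# whose dividends are projected to cover at least 50% of the old salary.
# The 4% dividend yield is an assumption; the post does not specify one.

def displacement_package(salary: float, dividend_yield: float = 0.04) -> dict:
    """Estimate the employer's cost of the package for one displaced worker."""
    severance_year = salary                                # full salary for the first year
    target_dividends = 0.5 * salary                        # dividends must cover half the old salary
    share_grant_value = target_dividends / dividend_yield  # value of shares needed at that yield
    return {
        "severance_year": severance_year,
        "share_grant_value": share_grant_value,
        "total_upfront_cost": severance_year + share_grant_value,
    }

print(displacement_package(salary=60_000))
# {'severance_year': 60000, 'share_grant_value': 750000.0, 'total_upfront_cost': 810000.0}
```

At a 4% dividend yield, guaranteeing half of a $60,000 salary forever means granting roughly $750,000 worth of shares; that is exactly the kind of up-front and long-term cost weighed in the next paragraph.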

From the perspective of the employer, this does make automation a bit less attractive: The up-front cost in the first year has been increased by everyone’s salary, and the long-term cost has been increased by all those dividends. Would this reduce the number of jobs that get automated, relative to some imaginary ideal? Sure. But we don’t live in that ideal world anyway; plenty of other obstacles to innovation were in the way, and by solving the political conflict, this will remove as many as it adds. We might actually end up with more automation this way; and even if we don’t, we will certainly end up with less political conflict as well as less wealth and income inequality.

The upsides of life extension

Dec 16 JDN 2458469

If living is good, then living longer is better.

This may seem rather obvious, but it’s something we often lose sight of when discussing the consequences of medical technology for extending life. It’s almost like it seems too obvious that living longer must be better, and so we go out of our way to find ways that it is actually worse.

Even from a quick search I was able to find half a dozen popular media articles about life extension, and not one of them focused primarily on the benefits. The empirical literature is better, asking specific, empirically testable questions like “How does life expectancy relate to retirement age?” and “How is lifespan related to population and income growth?” and “What effect will longer lifespans have on pension systems?” Though even there I found essays in medical journals complaining that we have extended “quantity” of life without “quality” (yet by definition, if you are using quality-adjusted life years, QALYs, to assess the cost-effectiveness of a medical intervention, quality is already taken into account).

But still I think somewhere along the way we have forgotten just how good this is. We may not even be able to imagine the benefits of extending people’s lives to 200 or 500 or 1000 years.

To really get some perspective on this, I want you to imagine what a similar conversation must have looked like around the year 1800, at the dawn of the Industrial Revolution, when industrial capitalism came along and babies finally stopped dying en masse.

There was no mass media back then (not enough literacy), but imagine what it would have been like if there had been, or imagine what conversations about the future between elites must have been like.

And we do actually have at least one example of an elite author lamenting the increase in lifespan: His name was Thomas Malthus.

The Malthusian argument was seductive then, and it remains seductive today: If you improve medicine and food production, you will increase population. But if you increase population, you will eventually outstrip those gains in medicine and food and return once more to disease and starvation, only now with more mouths to feed.

Basically any modern discussion of “overpopulation” has this same flavor (by the way, serious environmentalists don’t use that concept; they’re focused on reducing pollution and carbon emissions, not people). Why bother helping poor countries, when they’re just going to double their population and need twice the help?

Well, as a matter of fact, Malthus was wrong. In fact, he was not just wrong: He was backwards. Increased population has come with increased standard of living around the world, as it allowed for more trade, greater specialization, and the application of economies of scale. You can’t build a retail market with a hunter-gatherer tribe. You can’t build an auto industry with a single city-state. You can’t build a space program with a population of 1 million. Having more people has allowed each person to do and have more than they could before.

Current population projections suggest world population will stabilize between 11 and 12 billion. Crucially, this does not factor in any kind of radical life extension technology. The projections allow for moderate increases in lifespan, but not people living much past 100.

Would increased lifespan lead to increased population? Probably, yes. I can’t be certain, because I can very easily imagine people deciding to put off having kids if they can reasonably expect to live 200 years and never become infertile.

I’m actually more worried about the unequal distribution of offspring: People who don’t believe in contraception will be able to have an awful lot of kids during that time, which could be bad for both the kids and society as a whole. We may need to impose regulations on reproduction similar to (but hopefully less draconian than) the One-Child policy imposed in China.

I think the most sensible way to impose the right incentives while still preserving civil liberties is to make it a tax: The first kid gets a subsidy, to help care for them. The second kid is revenue-neutral; we tax you but you get it back as benefits for the child. (Why not just let them keep the money? One of the few places where I think government paternalism is justifiable is protection against abusive or neglectful parents.) The third and later kids result in progressively higher taxes. We always feed the kids on government money, but their parents are going to end up quite poor if they don’t learn how to use contraceptives. (And of course, contraceptives will be made available for free without a prescription.)
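Purely to illustrate the shape of the schedule: the post specifies only the structure (subsidy for the first child, revenue-neutral for the second, progressively higher taxes after that), so every dollar figure in this sketch is invented.

```python
# Illustrative shape of the proposed child subsidy/tax schedule.
# Only the structure comes from the post; all dollar amounts are invented.

def annual_child_tax(n_children: int) -> int:
    """Net annual tax (negative = subsidy) as a function of number of children."""
    if n_children <= 0:
        return 0
    total = -3_000                   # first child: subsidy
    # second child: revenue-neutral (taxed, but returned as child benefits), so net zero
    for k in range(3, n_children + 1):
        total += 2_000 * (k - 2)     # third and later: progressively higher tax
    return total

for n in range(1, 5):
    print(n, annual_child_tax(n))
# 1 -3000
# 2 -3000
# 3 -1000
# 4 3000
```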

But suppose that, yes, population does greatly increase as a result of longer lifespans. This is not a doomsday scenario. In fact, in itself, this is a good thing. If life is worth living, more lives are better.

The question becomes how we ensure that all these people live good lives; but technology will make that easier too. There seems to be an underlying assumption that increased lifespan won’t come with improved health and vitality; but this is already not true. 60 is the new 50: People who are 60 years old today live as well as people who were 50 years old just a generation ago.

And in fact, radical life extension will be an entirely different mechanism. We’re not talking about replacing a hip here, a kidney there; we’re talking about replenishing your chromosomal telomeres, repairing your cells at the molecular level, and revitalizing the content of your blood. The goal of life extension technology isn’t to make you technically alive but hooked up to machines for 200 years; it’s to make you young again for 200 years. The goal is a world where centenarians are playing tennis with young adults fresh out of college and you have trouble telling which is which.

There is another inequality concern here as well, which is cost. Especially in the US—actually almost only in the US, since most of the world has socialized medicine—where medicine is privatized and depends on your personal budget, I can easily imagine a world where the rich live to 200 and the poor die at 60. (The forgettable Justin Timberlake film In Time started with this excellent premise and then went precisely nowhere with it. Oddly, the Deus Ex games seem to have considered every consequence of mixing capitalism with human augmentation except this one.) We should be proactively taking steps to prevent this nightmare scenario by focusing on making healthcare provision equitable and universal. Even if this slows down the development of the technology a little bit, it’ll be worth it to make sure that when it does arrive, it will arrive for everyone.

We really don’t know what the world will look like when people can live 200 years or more. Yes, there will be challenges that come from the transition; honestly, what worries me most is people keeping alive ideas they grew up with two centuries earlier. Imagine talking politics with Abraham Lincoln: He was viewed as extremely progressive for his time, even radical—but he was still a big-time racist.

The good news there is that people are not actually as set in their ways as many believe: While the huge surge in pro-LGBT attitudes did come from younger generations, support for LGBT rights has been gradually creeping up among older generations too. Perhaps if Abraham Lincoln had lived through the Great Depression, the World Wars, and the Civil Rights Movement he’d be a very different person than he was in 1865. Longer lifespans will mean people live through more social change; that’s something we’re going to need to cope with.

And of course violent death becomes even more terrifying when aging is out of the picture: It’s tragic enough when a 20-year-old dies in a car accident today and we imagine the 60 years they lost—but what if it was 180 years or 480 years instead? But violent death in basically all its forms is declining around the world.

But again, I really want to emphasize this: Think about how good this is. Imagine meeting your great-grandmother—and not just meeting her, not just having some fleeting contact you half-remember from when you were four years old or something, but getting to know her, talking with her as an adult, going to the same movies, reading the same books. Imagine the converse: Knowing your great-grandchildren, watching them grow up and have kids of their own, your great-great-grandchildren. Imagine the world that we could build if people stopped dying all the time.

And if that doesn’t convince you, I highly recommend Nick Bostrom’s “Fable of the Dragon-Tyrant”.

Stop making excuses for the dragon.

The “productivity paradox”

Dec 10 JDN 2458098

Take a look at this graph of manufacturing output per worker-hour:

[Figure: Manufacturing output per worker-hour]

From 1988 to 2008, it was growing at a steady pace. In 2008 and 2009 it took a dip due to the Great Recession; no big surprise there. But then since 2012 it has been… completely flat. If we take this graph at face value, it would imply that manufacturing workers today can produce no more output than workers five years ago, and indeed only about 10% more than workers a decade ago. Whereas, a worker in 2008 was producing over 60% more than a worker in 1998, who was producing over 40% more than a worker in 1988.

Many economists call this the “productivity paradox”, and use it to argue that we don’t really need to worry about robots taking all our jobs any time soon. I think this view is mistaken.

The way we measure productivity is fundamentally wrongheaded, and is probably the sole cause of this “paradox”.

First of all, we use total hours scheduled to work, not total hours actually doing productive work. This is obviously much, much easier to measure, which is why we do it. But if you think for a moment about how the 40-hour workweek norm is going to clash with rapidly rising real productivity, it becomes apparent why this isn’t going to be a good measure.
When a worker finds a way to get done in 10 hours what used to take 40 hours, what does that worker’s boss do? Send them home after 10 hours because the job is done? Give them a bonus for their creativity? Hardly. That would be far too rational. They assign them more work, while paying them exactly the same. Recognizing this, what is such a worker to do? The obvious answer is to pretend to work the other 30 hours, while in fact doing something more pleasant than working.

And indeed, so-called “worker distraction” has been rapidly increasing. People are right to blame smartphones, I suppose, but not for the reasons they think. It’s not that smartphones are inherently distracting devices. It’s that smartphones are the cutting edge of a technological revolution that has made most of our work time unnecessary, so due to our fundamentally defective management norms they create overwhelming incentives to waste time at work rather than get buried in extra tasks for no extra pay.

That would probably be enough to explain the “paradox” by itself, but there is a deeper reason that in the long run is even stronger. It has to do with the way we measure “output”.

It might surprise you to learn that economists almost never consider output in terms of the actual number of cars produced, buildings constructed, songs written, or software packages developed. The standard measures of output are all in the form of so-called “real GDP”; that is, the dollar value of output produced.

They do adjust for inflation using price indexes, but as I’ll show in a moment, this still creates a fundamentally biased picture of productivity dynamics.

Consider a world with only three industries: Housing, Food, and Music.

Productivity in Housing doesn’t change at all. Producing a house cost 10,000 worker-hours in 1950, and cost 10,000 worker-hours in 2000. Nominal price of houses has rapidly increased, from $10,000 in 1950 to $200,000 in 2000.

Productivity in Food rises moderately fast. Producing 1,000 meals cost 1,000 worker-hours in 1950, and cost 100 worker-hours in 2000. Nominal price of food has increased slowly, from $1,000 per 1,000 meals in 1950 to $5,000 per 1,000 meals in 2000.

Productivity in Music rises extremely fast. Producing 1,000 performances cost 10,000 worker-hours in 1950, and cost 1 worker-hour in 2000. Nominal price of music has collapsed, from $100,000 per 1,000 performances in 1950 to $1,000 per 1,000 performances in 2000.

This is of course an extremely stylized version of what has actually happened: Housing has gotten way more expensive, food has stayed about the same in price while farm employment has plummeted, and the rise of digital music has brought about a new Renaissance in actual music production and listening while revenue for the music industry has collapsed. There is a very nice Vox article on the “productivity paradox” showing a graph of how prices have changed in different industries.

How would productivity appear in the world I’ve just described, by standard measures? Well, to answer that I actually need to say something about how consumers substitute across industries. But I think I’ll be forgiven in this case for saying that there is no substitution whatsoever; you can’t eat music or live in a burrito. There’s also a clear Maslow hierarchy here: They say that man cannot live by bread alone, but I think living by Led Zeppelin alone is even harder.

Consumers will therefore choose like this: Over 10 years, buy 1 house, 10,000 meals, and as many performances as you can afford after that. Further suppose that each person had $2,100 per year to spend in 1940-1950, and $50,000 per year to spend in 1990-2000. (This is approximately true for actual nominal US GDP per capita.)

1940-1950:

Total funds: $21,000

1 house = $10,000

10,000 meals = $10,000

Remaining funds: $1,000

Performances purchased: 10

1990-2000:

Total funds: $500,000

1 house = $200,000

10,000 meals = $50,000

Remaining funds: $250,000

Performances purchased: 250,000

(Do you really listen to this much music? 250,000 performances over 10 years is about 70 songs per day. If each song is 3 minutes, that’s only about 3.5 hours per day. If you listen to music while you work or watch a couple of movies with musical scores, yes, you really do listen to this much music! The unrealistic part is assuming that people in 1950 listen to so little, given that radio was already widespread. But if you think of music as standing in for all media, the general trend of being able to consume vastly more media in the digital age is clearly correct.)
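Here is a quick Python sketch of that spending rule, using the stylized prices above (houses at $10,000 and $200,000, meals at $1 and $5 each, performances at $100 and $1 each in the two periods):

```python
# A minimal sketch of the spending rule above: buy one house and 10,000 meals first,
# then spend whatever is left on music performances.

def decade_consumption(budget, house_price, price_per_meal, price_per_performance):
    remaining = budget - house_price - 10_000 * price_per_meal
    performances = remaining / price_per_performance
    return remaining, performances

# 1940-1950: $2,100/year for 10 years; house $10,000; meals $1 each; performances $100 each
print(decade_consumption(21_000, 10_000, 1.0, 100))    # -> (1000.0, 10.0)

# 1990-2000: $50,000/year for 10 years; house $200,000; meals $5 each; performances $1 each
print(decade_consumption(500_000, 200_000, 5.0, 1))    # -> (250000.0, 250000.0)
```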

Now consider how we would compute a price index for each time period. We would construct a basket of goods and determine the price of that basket in each time period, then adjust prices until that basket has a constant price.

Here, the basket would probably be what people bought in 1940-1950: 1 house, 10,000 meals, and 400 music performances.

In 1950, this basket cost $10,000+$10,000+$1,000 = $21,000.

In 2000, this basket cost $200,000+$50,000+$400 = $150,400.

This means that our inflation adjustment is $150,400/$21,000 = 7 to 1. This means that we would estimate the real per-capita GDP in 1950 at about $14,700. And indeed, that’s about the actual estimate of real per-capita GDP in 1950.

So, what would we say about productivity?

Sales of houses in 1950 were 1 per person, costing 10,000 worker hours.

Sales of food in 1950 were 10,000 per person, costing 10,000 worker hours.

Sales of music in 1950 were 400 per person, costing 4,000 worker hours.

Worker hours per person are therefore 24,000.

Sales of houses in 2000 were 1 per person, costing 10,000 worker hours.

Sales of food in 2000 were 10,000 per person, costing 1,000 worker hours.

Sales of music in 2000 were 250,000 per person, costing 25,000 worker hours.

Worker hours per person are therefore 36,000.

Therefore we would estimate that productivity rose from $14,700/24,000 = $0.61 per worker-hour to $50,000/36,000 = $1.39 per worker-hour. This is an annual growth rate of about 1.7%, which is, again, pretty close to the actual estimate of productivity growth. For such a highly stylized model, my figures are doing remarkably well. (Honestly, better than I thought they would!)

But think about how much actual productivity rose, at least in the industries where it did.

We produce 10 times as much food per worker hour after 50 years, which is an annual growth rate of 4.7%, or three times the estimated growth rate.

We produce 10,000 times as much music per worker hour after 50 years, which is an annual growth rate of over 20%, or almost twelve times the estimated growth rate.
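If you want to check these figures, here is a minimal Python sketch that takes the worked numbers above (real per-capita GDP and worker-hours per person in each period) as given and computes the measured and actual growth rates:

```python
# Measured vs. actual productivity growth, using the worked figures from the example.

real_gdp_1950 = 14_700   # real per-capita GDP in 1950, in 2000 dollars
real_gdp_2000 = 50_000
hours_1950 = 24_000      # worker-hours per person over the decade
hours_2000 = 36_000
years = 50

measured_1950 = real_gdp_1950 / hours_1950                             # ~ $0.61 per hour
measured_2000 = real_gdp_2000 / hours_2000                             # ~ $1.39 per hour
measured_growth = (measured_2000 / measured_1950) ** (1 / years) - 1   # ~ 1.7% per year

food_growth = 10 ** (1 / years) - 1        # 10x more food per hour      -> ~ 4.7% per year
music_growth = 10_000 ** (1 / years) - 1   # 10,000x more music per hour -> ~ 20% per year

print(f"Measured: ${measured_1950:.2f}/hr -> ${measured_2000:.2f}/hr, {measured_growth:.1%} per year")
print(f"Food: {food_growth:.1%} per year   Music: {music_growth:.1%} per year")
```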

Moreover, should music producers be worried about losing their jobs to automation? Absolutely! People simply won’t be able to listen to much more music than they already are, so any continued increases in music productivity are going to make musicians lose jobs. And that was already allowing for music consumption to increase by a factor of over 600.

Of course, the real world has a lot more industries than this, and everything is a lot more complicated. We do actually substitute across some of those industries, unlike in this model.

But I hope I’ve gotten at least the basic point across that when things become drastically cheaper as technological progress often does, simply adjusting for inflation doesn’t do the job. One dollar of music today isn’t the same thing as one dollar of music a century ago, even if you inflation-adjust their dollars to match ours. We ought to be measuring in hours of music; an hour of music is much the same thing as an hour of music a century ago.

And likewise, that secretary/weather forecaster/news reporter/accountant/musician/filmmaker in your pocket that you call a “smartphone” really ought to be counted as more than just a simple inflation adjustment on its market price. The fact that it is mind-bogglingly cheaper to get these services than it used to be is the technological progress we care about; it’s not some statistical artifact to be removed by proper measurement.

Combine that with actually measuring the hours of real, productive work, and I think you’ll find that productivity is still rising quite rapidly, and that we should still be worried about what automation is going to do to our jobs.

What will we do without air travel?

August 6, JDN 2457972

Air travel is incredibly carbon-intensive. Just one round-trip trans-Atlantic flight produces about 1 ton of carbon emissions per passenger. To keep global warming below 2 K, personal carbon emissions will need to be reduced to less than 1.5 tons per person per year by 2050. This means that simply flying from New York to London and back twice in a year would be enough to exceed the total carbon emissions each person can afford if we are to prevent catastrophic global climate change.

Currently about 12% of US transportation-based carbon emissions are attributable to aircraft; that may not sound like a lot, but consider this. Of the almost 5 trillion passenger-miles traveled by Americans each year, only 600 billion are by air, and only about 60 billion are by public transit. That leaves 4.4 trillion passenger-miles traveled by car. About 60% of US transportation emissions are due to cars, while 88% of US passenger-miles are traveled by car. About 12% of US transportation emissions are due to airplanes, while 12% of US passenger-miles are traveled by airplane. This means that cars produce only about 2/3 as much carbon per passenger-mile as airplanes, even though we tend to fill up airplanes to the brim and most Americans drive alone most of the time.
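That last comparison is easy to verify from the approximate shares just quoted; here's a one-glance sketch:

```python
# Relative carbon intensity per passenger-mile, from the approximate shares above.

emission_share = {"car": 0.60, "air": 0.12}   # share of US transportation emissions
mile_share     = {"car": 0.88, "air": 0.12}   # share of US passenger-miles

intensity = {m: emission_share[m] / mile_share[m] for m in emission_share}
print(intensity["car"] / intensity["air"])    # ~ 0.68: cars emit about 2/3 as much
                                              # carbon per passenger-mile as airplanes
```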

Moreover, we know how to reduce emissions from cars. We can use hybrid vehicles, we can carpool more, or best of all we can switch to entirely electric vehicles charged off a grid that is driven by solar and nuclear power. It is theoretically possible to make personal emissions from car travel zero. (Though making car manufacturing truly carbon-neutral may not be feasible; electric cars actually produce somewhat more carbon in their production, though not enough to actually make them worse than conventional cars.)

We have basically no idea how to reduce emissions from air travel. Jet engines are already about as efficient as we know how to make them. There are some tweaks to taxi and takeoff procedure that would help a little bit (chiefly, towing the aircraft to the runway instead of taking them there on their own power; also, taking off from longer runways that require lower throttle to achieve takeoff speed). But there’s basically nothing we can do to reduce the carbon emissions of a cruising airliner at altitude. Even very optimistic estimates involving new high-tech alloys, wing-morphing technology, and dramatically improved turbofan engines only promise to reduce emissions by about 30%.

This is something that affects me quite directly; air travel is a major source of my personal carbon footprint, but also the best way I have to visit family back home.
Using the EPA’s handy carbon footprint calculator, I estimate that everything else I do in my entire life produces about 10 tons of carbon emissions per year. (This is actually pretty good, given the US average of 22 tons per person per year. It helps that I’m vegetarian, I drive a fuel-efficient car, and I live in Southern California.)

Using the ICAO’s even more handy carbon footprint calculator for air travel, I estimate that I produce about 0.2 tons for every round-trip economy-class transcontinental flight from California to Michigan. But that doesn’t account for the fact that higher-altitude emissions are more dangerous. If you adjust for this, the net effect is as if I had produced a full half-ton of carbon for each round-trip flight. Therefore, just four round-trip flights per year increase my total carbon footprint by 20%—and, again, that by itself exceeds the level my carbon emissions need to be reduced to by the year 2050.
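Here's the back-of-the-envelope version of that arithmetic; the altitude multiplier of about 2.5 is simply inferred from the gap between the 0.2-ton and half-ton figures above:

```python
# Personal footprint arithmetic, using the figures above. The altitude multiplier
# is an inference from the text, not an ICAO output.

baseline = 10.0            # tons/year from everything except flying (EPA calculator)
per_round_trip = 0.2       # tons per CA-MI economy round trip (ICAO calculator)
altitude_multiplier = 2.5  # high-altitude emissions count for more
flights_per_year = 4
budget_2050 = 1.5          # tons/person/year consistent with staying under 2 K of warming

flight_tons = flights_per_year * per_round_trip * altitude_multiplier   # = 2.0 tons/year
print(f"Flying adds {flight_tons:.1f} t/yr, a {flight_tons / baseline:.0%} increase; "
      f"exceeds the 2050 budget of {budget_2050} t/yr: {flight_tons > budget_2050}")
```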

With this in mind, most ecologists agree that air travel as we know it is simply not sustainable.

The question then becomes: What do we do without it?

One option would be to simply take all the travel we currently do in airplanes, and stop it. For me this would mean no more trips from California to Michigan, except perhaps occasional long road trips for moving and staying for long periods.

This is unappealing, though it is also not as harmful as you might imagine; most of the world’s population has never flown in an airplane. Our estimates of exactly what proportion of people have flown are very poor, but our best guesses are that about 6% of the world’s population flies in any given year, and about 40% has ever flown in their entire life. Statistically, most of my readers are middle-class Americans, and we’re accustomed to flying; about 80% of Americans have flown on an airplane at least once, and about 1/3 of Americans fly at least once a year. But we’re weird (indeed, WEIRD: Western, Educated, Industrialized, Rich, and Democratic); most people in the world fly on airplanes rarely, if ever.

Moreover, air travel has only been widely available to the general population, even in the US, for about the last 60 years. Passenger-miles on airplanes in the US have increased by a factor of 20 since just 1960, while car passenger-miles have only tripled and population has only doubled. Most of the human race through most of history has only dreamed of air travel, and managed to survive just fine without it.

It certainly would not mean needing to stop all long-distance travel, though long-distance travel would be substantially curtailed. It would no longer be possible to travel across the country for a one-week stay; you’d have to plan for four or five days of travel in each direction. Traveling from the US to Europe takes about a week by sea, each way. That means planning your trip much further in advance, and taking off a lot more time from work to do it.

Fortunately, trade is actually not all that dependent on aircraft. The vast majority of shipping is done by sea vessel already, as container ships are simply far more efficient. Shipping by container ship produces only about 2% as much carbon per ton-kilometer as shipping by aircraft. “Slow-steaming”, the use of more ships at lower speeds to conserve fuel, is already widespread, and carbon taxes would further incentivize it. So we need not fear giving up globalized trade simply because we gave up airplanes.

But we can do better than that. We don’t need to give up the chance to travel across the country in a weekend. The answer is high-speed rail.

A typical airliner cruises at about 500 miles per hour. Can trains match that? Not quite, but close. Spain already has a commercial high-speed rail line, the AVE, which runs from Madrid to Barcelona at a cruising speed of 190 miles per hour. And this is far from the limits of the technology. The fastest train ever built is the L0 series, a Japanese maglev, which can maintain a top speed of 375 miles per hour.

This means that if we put our minds to it, we could build a rail line crossing the United States, say from Los Angeles to New York via Chicago, averaging at least 300 miles per hour. That’s a distance of 2800 miles by road (rail should be comparable), so the whole trip should take about 9 and a half hours. This is slower than a flight (unless you have a long layover), but you could still make it there and back in the same weekend.

How much would such a rail system cost? Official estimates of the cost of maglev line are about $100 million per mile. This could probably be brought down by technological development and economies of scale, but let’s go with it for now. This means that my proposed LA-NY line would cost $280 billion.

That’s not a small amount of money, to be sure. It’s about the annual cost of ending world hunger forever. It’s almost half the US military budget. It’s about one-third of Obama’s stimulus plan in 2009. It’s about one-fourth of Trump’s proposed infrastructure plan (which will probably never happen).

In other words, it’s a large project, but well within the capacity of a nation as wealthy as the United States.

Add in another 500 miles to upgrade the (already-successful) Acela corridor line on the East Coast, and another 800 miles to make the proposed California High-Speed Rail from LA to SF a maglev line, and you’ve increased the cost to $410 billion.
$410 billion is about 2 years of revenue for all US airlines. These lines could replace a large proportion of all US air traffic. So if the maglev system simply charged as much as a plane ticket and carried the same number of passengers, it would pay for itself in a few years. Realistically it would probably be a bit cheaper and carry fewer people, so the true payoff period might be more like 10 years. That is a perfectly reasonable payoff period for a major infrastructure project.
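Here is a rough sketch of that cost and payback arithmetic. The annual airline revenue figure is inferred from the “two years of revenue” claim above, and the traffic-capture fractions are purely illustrative assumptions:

```python
# Cost, trip time, and rough payback for the proposed maglev network.

cost_per_mile = 100e6                                   # official estimate, $ per mile
la_ny, acela, ca_hsr = 2800, 500, 800                   # route miles

la_ny_cost = cost_per_mile * la_ny                      # $280 billion
total_cost = cost_per_mile * (la_ny + acela + ca_hsr)   # $410 billion
trip_hours = la_ny / 300                                # ~9.3 hours at a 300 mph average

airline_revenue = total_cost / 2                        # ~$205 billion per year, per the text
for capture in (1.0, 0.2):                              # all of air travel's revenue vs. about a fifth
    payback = total_cost / (capture * airline_revenue)
    print(f"capture {capture:.0%}: payback ~ {payback:.0f} years")

print(f"LA-NY line: ${la_ny_cost / 1e9:.0f}B, ~{trip_hours:.1f} hours; full network: ${total_cost / 1e9:.0f}B")
```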

Compare this to our existing rail network, which is pitiful. There are Amtrak lines from California to Chicago; one is the Texas Eagle, at 2,700 miles comparable to my proposed LA-NY maglev; the other is the California Zephyr, at 2,400 miles. Each of them completes one trip in about two and a half days, so a week-long trip is unviable and a weekend trip is mathematically impossible. That is over 60 hours on each train, instead of the proposed 9.5 for the same distance. The operating speed is only about 55 miles per hour, when we now have technology that could do 300. The Acela Express is our fastest train line, with a top speed of 150 miles per hour and an average end-to-end speed of 72 miles per hour; and (not coincidentally, I think) it is by far the most profitable train line in the United States.

And best of all, the entire rail system could be carbon-neutral. Making the train itself run without carbon emissions is simple; you just run it off nuclear power plants and solar farms. The emissions from the construction and manufacturing would have to be offset, but most of them would be one-time emissions, precisely the sort of thing that it does make sense to offset with reforestation. Realistically some emissions would continue during the processes of repair and maintenance, but these would be far, far less than what the airplanes were producing—indeed, not much more than the emissions from a comparable length of interstate highway.

Let me emphasize, this is all existing technology. Unlike those optimistic forecasts about advanced new aircraft alloys and morphing wings, I’m not talking about inventing anything new here. This is something other countries have already built (albeit on a much smaller scale). I’m using official cost estimates. Nothing about this plan should be infeasible.

Why are we not doing this? We’re choosing not to. Our government has decided to spend on other things instead. Most Americans are quite complacent about climate change, though at least most of us do believe in it now.

What about transcontinental travel? There we may have no choice but to give up our weekend visits. Sea vessels simply can’t be built as fast as airplanes. Even experimental high-speed Navy ships can’t far exceed 50 knots, which is about 57 miles per hour—highway speed, not airplane speed. A typical container vessel slow-steams at about 12 knots—14 miles per hour.

But how many people travel across the ocean anyway? As I’ve already established, Americans fly more than almost anyone else in the world; but of the 900 million passengers carried on flights in, through, or out of the US, only 200 million were international. Some 64% of Americans have never left the United States—never even to Canada or Mexico! Even if we cut off all overseas commercial flights completely, we would be affecting a remarkably small proportion of the world’s population.

And of course I wouldn’t actually suggest banning air travel. We should be taxing air travel, in proportion to its effect on global warming; and those funds ought to get us pretty far in paying for the up-front cost of the maglev network.

What can you do as an individual? Ay, there’s the rub. Not much, unfortunately. You can of course support candidates and political campaigns for high-speed rail. You can take fewer flights yourself. But until this infrastructure is built, those of us who live far from our ancestral home will face the stark tradeoff between increasing our carbon footprint and never getting to see our families.

Games as economic simulations—and education tools

Mar 5, JDN 2457818 [Sun]

Moore’s Law is a truly astonishing phenomenon. Now as we are well into the 21st century (I’ve lived more of my life in the 21st century than the 20th now!) it may finally be slowing down a little bit, but it has had quite a run, and even this could be a temporary slowdown due to economic conditions or the lull before a new paradigm (quantum computing?) matures. Since at least 1975, the computing power of an individual processor has doubled approximately every year and a half; that means it has doubled over 25 times—or in other words that it has increased by a factor of over 30 million. I now have in my pocket a smartphone with several thousand times the processing speed of the guidance computer of the Saturn V that took us to the Moon.
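The doubling arithmetic is easy to check; here's a one-glance sketch, counting from 1975 to when this was written in 2017:

```python
# One doubling of processor power every 1.5 years, 1975 to 2017.

years = 2017 - 1975
doublings = years / 1.5   # = 28, comfortably "over 25"
factor = 2 ** doublings   # ~ 2.7e8, comfortably "over 30 million"
print(round(doublings), f"{factor:.2e}")
```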

This meteoric increase in computing power has had an enormous impact on the way science is done, including economics. Simple theoretical models that could be solved by hand are now being replaced by enormous simulation models that have to be processed by computers. It is now commonplace to devise models with systems of dozens of nonlinear equations that are literally impossible to solve analytically, and just solve them iteratively with computer software.

But one application of this technology that I believe is currently underutilized is video games.

As a culture, we still have the impression that video games are for children; even games like Dragon Age and Grand Theft Auto that are explicitly for adults (and really quite inappropriate for children!) are viewed as in some sense “childish”—that no serious adult would be involved with such frivolities. The same cultural critics who treat Shakespeare’s vagina jokes as the highest form of art are liable to dismiss the poignant critique of war in Call of Duty: Black Ops or the reflections on cultural diversity in Skyrim as mere puerility.

But video games are an art form with a fundamentally greater potential than any other. Now that graphics are almost photorealistic, there is really nothing you can do in a play or a film that you can’t do in a video game—and there is so, so much more that you can only do in a game.
In what other medium can we witness the spontaneous emergence and costly aftermath of a war? Yet EVE Online has this sort of event every year or so—just today there was a surprise attack involving hundreds of players that destroyed thousands of hours’—and dollars’—worth of starships, something that has more or less become an annual tradition. A few years ago there was a massive three-faction war that destroyed over $300,000 in ships and has now been commemorated as “the Bloodbath of B-R5RB”.
Indeed, the immersion and interactivity of games present an opportunity to do nothing less than experimental macroeconomics. For generations it has been impossible, or at least absurdly unethical, to ever experimentally manipulate an entire macroeconomy. But in a video game like EVE Online or Second Life, we can now do so easily, cheaply, and with little or no long-term harm to the participants—and we can literally control everything in the experiment. Forget the natural resource constraints and currency exchange rates—we can change the laws of physics if we want. (Indeed, EVE‘s whole trade network is built around FTL jump points, and in Second Life it’s a basic part of the interface that everyone can fly like Superman.)

This provides untold potential for economic research. With sufficient funding, we could build a game that would allow us to directly test hypotheses about the most fundamental questions of economics: How do governments emerge and maintain security? How is the rule of law sustained, and when can it be broken? What controls the value of money and the rate of inflation? What is the fundamental cause of unemployment, and how can it be corrected? What influences the rate of technological development? How can we maximize the rate of economic growth? What effect does redistribution of wealth have on employment and output? I envision a future where we can directly simulate these questions with thousands of eager participants, varying the subtlest of parameters and carrying out events over any timescale we like from seconds to centuries.

Nor is the potential of games in economics limited to research; it also has enormous untapped potential in education. I’ve already seen in my classes how tabletop-style games with poker chips can teach a concept better in a few minutes than hours of writing algebra derivations on the board; but custom-built video games could be made that would teach economics far better still, and to a much wider audience. In a well-designed game, people could really feel the effects of free trade or protectionism, not just on themselves as individuals but on entire nations that they control—watch their GDP numbers go down as they scramble to produce in autarky what they could have bought for half the price if not for the tariffs. They could see, in real time, how in the absence of environmental regulations and Pigovian taxes the actions of millions of individuals could despoil our planet for everyone.

Of course, games are fundamentally works of fiction, subject to the Fictional Evidence Fallacy and only as reliable as their authors make them. But so it is with all forms of art. I have no illusions about the fact that we will never get the majority of the population to regularly read peer-reviewed empirical papers. But perhaps if we are clever enough in the games we offer them to play, we can still convey some of the knowledge that those papers contain. We could also update and expand the games as new information comes in. Instead of complaining that our students are spending time playing games on their phones and tablets, we could actually make education into games that are as interesting and entertaining as the ones they would have been playing. We could work with the technology instead of against it. And in a world where more people have access to a smartphone than to a toilet, we could finally bring high-quality education to the underdeveloped world quickly and cheaply.

Rapid growth in computing power has given us a gift of great potential. But soon our capacity will widen even further. Even if Moore’s Law slows down, computing power will continue to increase for awhile yet. Soon enough, virtual reality will finally take off and we’ll have even greater depth of immersion available. The future is bright—if we can avoid this corporatist cyberpunk dystopia we seem to be hurtling toward, of course.

Sometimes people have to lose their jobs. This isn’t a bad thing.

Oct 8, JDN 2457670

Eliezer Yudkowsky (founder of the excellent blog and forum Less Wrong) has a term he likes to use to distinguish his economic policy views from liberal, conservative, or even libertarian ones: “econoliterate”, meaning the sort of economic policy ideas one comes up with when one actually knows a good deal about economics.

In general I think Yudkowsky overestimates this effect; I’ve known some very knowledgeable economists who disagree quite strongly over economic policy, and often following the conventional political lines of liberal versus conservative: Liberal economists want more progressive taxation and more Keynesian monetary and fiscal policy, while conservative economists want to reduce taxes on capital and remove regulations. Theoretically you can want all these things—as Miles Kimball does—but it’s rare. Conservative economists hate minimum wage, and lean on the theory that says it should be harmful to employment; liberal economists are ambivalent about minimum wage, and lean on the empirical data that shows it has almost no effect on employment. Which is more reliable? The empirical data, obviously—and until more economists start thinking that way, economics is never truly going to be a science as it should be.

But there are a few issues where Yudkowsky’s “econoliterate” concept really does seem to make sense, where there is one view held by most people, and another held by economists, regardless of who is liberal or conservative. One such example is free trade, which almost all economists believe in. A recent poll of prominent economists by the University of Chicago found literally zero who agreed with protectionist tariffs.

Another example is my topic for today: People losing their jobs.

Not unemployment, which both economists and almost everyone else agree is bad; but people losing their jobs. The general consensus among the public seems to be that people losing jobs is always bad, while economists generally consider it a sign of an economy that is run smoothly and efficiently.

To be clear, of course losing your job is bad for you; I don’t mean to imply that if you lose your job you shouldn’t be sad or frustrated or anxious about that, particularly not in our current system. Rather, I mean to say that policy which tries to keep people in their jobs is almost always a bad idea.

I think the problem is that most people don’t quite grasp that losing your job and not having a job are not the same thing. People not having jobs who want to have jobs—unemployment—is a bad thing. But losing your job doesn’t mean you have to stay unemployed; it could simply mean you get a new job. And indeed, that is what it should mean, if the economy is running properly.

Check out this graph, from FRED:

hires_separations

The red line shows hires—people getting jobs. The blue line shows separations—people losing jobs or leaving jobs. During a recession (the most recent two are shown on this graph), people don’t actually leave their jobs faster than usual; if anything, slightly less. Instead what happens is that hiring rates drop dramatically. When the economy is doing well (as it is right now, more or less), both hires and separations are at very high rates.
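If you want to pull roughly the same data yourself, here is a minimal sketch using pandas_datareader; I'm using the JOLTS hires and total separations rates (FRED series JTSHIR and JTSTSR), which may not be exactly the series in the graph above but show the same pattern:

```python
# Plot US hires vs. separations rates from FRED (JOLTS data).

import pandas_datareader.data as web
import matplotlib.pyplot as plt

data = web.DataReader(["JTSHIR", "JTSTSR"], "fred", start="2001-01-01")
data.columns = ["Hires rate", "Total separations rate"]

ax = data.plot(title="US hires vs. separations (JOLTS, via FRED)")
ax.set_ylabel("Percent of employment, monthly")
plt.show()
```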

Why is this? Well, think about what a job is, really: It’s something that needs done, that no one wants to do for free, so someone pays someone else to do it. Once that thing gets done, what should happen? The job should end. It’s done. The purpose of the job was not to provide for your standard of living; it was to achieve the task at hand. Once it doesn’t need done, why keep doing it?

We tend to lose sight of this, for a couple of reasons. First, we don’t have a basic income, and our social welfare system is very minimal; so a job usually is the only way people have to provide for their standard of living, and they come to think of this as the purpose of the job. Second, many jobs don’t really “get done” in any clear sense; individual tasks are completed, but new ones always arise. After every email sent is another received; after every patient treated is another who falls ill.

But even that is really only true in the short run. In the long run, almost all jobs do actually get done, in the sense that no one has to do them anymore. The job of cleaning up after horses is done (with rare exceptions). The job of manufacturing vacuum tubes for computers is done. Indeed, the job of being a computer—that used to be a profession, young women toiling away with slide rules—is very much done. There are no court jesters anymore, no town criers, and very few artisans (and even then, they’re really more like hobbyists). There are more writers now than ever, and occasional stenographers, but there are no scribes—no one powerful but illiterate pays others just to write things down, because no one powerful is illiterate (and even few who are not powerful, and fewer all the time).

When a job “gets done” in this long-run sense, we usually say that it is obsolete, and again think of this as somehow a bad thing, like we are somehow losing the ability to do something. No, we are gaining the ability to do something better. Jobs don’t become obsolete because we can’t do them anymore; they become obsolete because we don’t need to do them anymore. Instead of computers being a profession that toils with slide rules, they are thinking machines that fit in our pockets; and there are plenty of jobs now for software engineers, web developers, network administrators, hardware designers, and so on as a result.

Soon, there will be no coal miners, and very few oil drillers—or at least I hope so, for the sake of our planet’s climate. There will be far fewer auto workers (robots have already taken over most of that work), but far more construction workers who install rail lines. There will be more nuclear engineers, more photovoltaic researchers, even more miners and roofers, because we need to mine uranium and install solar panels on rooftops.

Yet even by saying that I am falling into the trap: I am making it sound like the benefit of new technology is that it opens up more new jobs. Typically it does do that, but that isn’t what it’s for. The purpose of technology is to get things done.

Remember my parable of the dishwasher. The goal of our economy is not to make people work; it is to provide people with goods and services. If we could invent a machine today that would do the job of everyone in the world and thereby put us all out of work, most people think that would be terrible—but in fact it would be wonderful.

Or at least it could be, if we did it right. See, the problem right now is that while poor people think that the purpose of a job is to provide for their needs, rich people think that the purpose of poor people is to do jobs. If there are no jobs to be done, why bother with them? At that point, they’re just in the way! (Think I’m exaggerating? Why else would anyone put a work requirement on TANF and SNAP? To do that, you must literally think that poor people do not deserve to eat or have homes if they aren’t, right now, working for an employer. You can couch that in cold economic jargon as “maximizing work incentives”, but that’s what you’re doing—you’re threatening people with starvation if they can’t or won’t find jobs.)

What would happen if we tried to stop people from losing their jobs? Typically, inefficiency. When employers aren’t allowed to lay people off who are no longer doing useful work, we end up in a situation where a large segment of the population is being paid but isn’t doing useful work—and unlike the situation with a basic income, those people would lose their income, at least temporarily, if they quit and tried to do something more useful. There is still considerable uncertainty within the empirical literature on just how much “employment protection” (laws that make it hard to lay people off) actually creates inefficiency and reduces productivity and employment, so it could be that this effect is small—but even so, it does not seem to have the desired effect of reducing unemployment either. It may be like the minimum wage, where the effect just isn’t all that large. But it’s probably not saving people from being unemployed; it may simply be shifting the distribution of unemployment, so that people with protected jobs are almost never unemployed and people without such protection are unemployed much more frequently. (This doesn’t have to be written into law, either; tenure for university professors is a matter of custom rather than statute, but it quite clearly makes tenured professors vastly more secure, at the cost of making employment tenuous and underpaid for adjuncts.)

There are other policies we could adopt that are better than employment protection, such as the active labor market policies used in Denmark, which make it easier to find a good job. Yet even then, we’re assuming that everyone needs a job, and increasingly, that just isn’t true.

So, when we invent a new technology that replaces workers, workers are laid off from their jobs—and that is as it should be. What happens next is what we do wrong, and it’s not even anybody in particular; this is something our whole society does wrong: All those displaced workers get nothing. The extra profit from the more efficient production goes entirely to the shareholders of the corporation—and those shareholders are almost entirely members of the top 0.01%. So the poor get poorer and the rich get richer.

The real problem here is not that people lose their jobs; it’s that capital ownership is distributed so unequally. And boy, is it ever! Here are some graphs I made of the distribution of net wealth in the US, using data from the US Census.

Here are the quintiles of the population as a whole:

net_wealth_us

And here are the medians by race:

net_wealth_race

Medians by age:

net_wealth_age

Medians by education:

net_wealth_education

And, perhaps most instructively, here are the quintiles of people who own their homes versus those who rent. (The rent is too damn high!)

net_wealth_rent

All that is just within the US, and the figures already range from the mean net wealth of the lowest quintile of people under 35 (-$45,000, yes negative—student loans) to the mean net wealth of the highest quintile of people with graduate degrees ($3.8 million). All but the top quintile of renters are poorer than all but the bottom quintile of homeowners. And the median Black or Hispanic person has less than one-tenth the wealth of the median White or Asian person.

If we look worldwide, wealth inequality is even starker. Based on UN University figures, 40% of world wealth is owned by the top 1%; 70% by the top 5%; and 80% by the top 10%. There is less total wealth in the bottom 80% than in the 80-90% decile alone. According to Oxfam, the richest 85 individuals own as much net wealth as the poorest 3.7 billion. They are the 0.000,001%.

If we had an equal distribution of capital ownership, people would be happy when their jobs became obsolete, because it would free them up to do other things (either new jobs, or simply leisure time), while not decreasing their income—because they would be the shareholders receiving those extra profits from higher efficiency. People would be excited to hear about new technologies that might displace their work, especially if those technologies would displace the tedious and difficult parts and leave the creative and fun parts. Losing your job could be the best thing that ever happened to you.

The business cycle would still be a problem; we have good reason not to let recessions happen. But stopping the churn of hiring and firing wouldn’t actually make our society better off; it would keep people in jobs where they don’t belong and prevent us from using our time and labor for its best use.

Perhaps the reason most people don’t even think of this solution is precisely because of the extreme inequality of capital distribution—and the fact that it has more or less always been this way since the dawn of civilization. It doesn’t seem to even occur to most people that capital income is a thing that exists, because they are so far removed from actually having any amount of capital sufficient to generate meaningful income. Perhaps when a robot takes their job, on some level they imagine that the robot is getting paid, when of course it’s the shareholders of the corporations that made the robot and the corporations that are using the robot in place of workers. Or perhaps they imagine that those shareholders actually did so much hard work they deserve to get paid that money for all the hours they spent.

Because pay is for work, isn’t it? The reason you get money is because you’ve earned it by your hard work?

No. This is a lie, told to you by the rich and powerful in order to control you. They know full well that income doesn’t just come from wages—most of their income doesn’t come from wages! Yet this is even built into our language; we say “net worth” and “earnings” rather than “net wealth” and “income”. (Parade magazine has a regular segment called “What People Earn”; it should be called “What People Receive”.) Money is not your just reward for your hard work—at least, not always.

The reason you get money is that this is a useful means of allocating resources in our society. (Remember, money was created by governments for the purpose of facilitating economic transactions. It is not something that occurs in nature.) Wages are one way to do that, but they are far from the only way; they are not even the only way currently in use. As technology advances, we should expect a larger proportion of our income to go to capital—but what we’ve been doing wrong is setting it up so that only a handful of people actually own any capital.

Fix that, and maybe people will finally be able to see that losing your job isn’t such a bad thing; it could even be satisfying, the fulfillment of finally getting something done.