Reasons for optimism in 2022

Jan 2 JDN 2459582

When this post goes live, we will have begun the year 2022.

That still sounds futuristic, somehow. We’ve been in the 21st century long enough that most of my students were born in it and nearly all of them are old enough to drink (to be fair, it’s the UK, so “old enough to drink” only means 18). Yet “the year 2022” still seems like it belongs in science fiction, and not on our wall calendars.

2020 and 2021 were quite bad years. Death rates and poverty rates surged around the world. Almost all of that was directly or indirectly due to COVID.

Yet there are two things we should keep in perspective.

First, those death rates and poverty rates surged to what we used to consider normal 50 years ago. These are not uniquely bad times; indeed, they are still better than most of human history.

Second, there are many reasons to think that 2022—or perhaps a bit later than that, 2025 or 2030—will be better.

The Omicron variant is highly contagious, but so far does not appear to be as deadly as previous variants. COVID seems to be evolving to be more like influenza: Catching it will be virtually inevitable, but dying from it will be very rare.

Things are also looking quite good on the climate change front: Renewable energy production is growing at breathtaking speed and is now cheaper than almost every other form of energy. It’s awful that we panicked and turned away from nuclear energy for the last 50 years, but at this point we may no longer need it: Solar and wind are just that good now.

Battery technology is also rapidly improving, giving us denser, cheaper, more stable batteries that may soon allow us to solve the intermittency problem: the wind may not always blow and the sun may not always shine, but if you have big enough batteries you don’t need them to. (You can get a really good feel for how much difference good batteries make in energy production by playing Factorio, or, more whimsically, Mewnbase.)

If we do go back to nuclear energy, it may not be fission anymore, but fusion. Now that we have nearly reached that vital milestone of break-even, investment in fusion technology has rapidly increased.


Fusion has basically all of the benefits of fission with none of the drawbacks. Unlike renewables, it can produce enormous amounts of energy in a way that can be easily scaled and controlled independently of weather conditions. Unlike fission, it requires no exotic nuclear fuels (deuterium can be readily obtained from water), and produces no long-lived radioactive waste. (Indeed, methods are in development that could use fusion products to reduce the waste from fission reactors, making the effective rate of nuclear waste production for fusion negative.) Like both renewables and fission, it produces no carbon emissions other than those required to build the facility (mainly due to concrete).

Of course, technology is only half the problem: we still need substantial policy changes to get carbon emissions down. We’ve already dragged our feet for decades too long, and we will pay the price for that. But anyone saying that climate change is an inevitable catastrophe hasn’t been paying attention to recent developments in solar panels.

Technological development in general seems to be speeding up lately, after having stalled quite a bit in the early 2000s. Moore’s Law may be leveling off, but the technological frontier may simply be moving away from digital computing power and onto other things, such as biotechnology.

Star Trek told us that we’d have prototype warp drives by the 2060s but we wouldn’t have bionic implants to cure blindness until the 2300s. They seem to have gotten it backwards: We may never have warp drive, but we’ve got those bionic implants today.

Neural interfaces are allowing paralyzed people to move, speak, and now even write.

After decades of failed promises, gene therapy is finally becoming useful in treating real human diseases. CRISPR changes everything.

We are also entering a new era of space travel, thanks largely to SpaceX and their remarkable reusable rockets. The standard measure of the cost of space travel is the payload cost to LEO: the cost of carrying a given mass of cargo up to low Earth orbit. By this measure, costs have declined from nearly $20,000 per kg in the 1960s to only $1,500 per kg today. Elon Musk claims that he can reduce the cost to as low as $10 per kg. I’m skeptical, to say the least—but even dropping it to $500 or $200 would be a dramatic improvement and open up many new options for space exploration and even colonization.

To put this in perspective, the cost of carrying a human being to the International Space Station (about 100 kg to LEO) has fallen from $2 million to $150,000. A further decrease to $200 per kg would lower that to $20,000, opening the possibility of space tourism; $20,000 might be something even upper-middle-class people could do as a once-in-a-lifetime vacation. If Musk is really right that he can drop it all the way to $10 per kg, the cost to carry a person to the ISS would be only $1,000—something middle-class people could do regularly. (“Should we do Paris for our anniversary this year, or the ISS?”) Indeed, a cost that low would open the possibility of space-based shipping—for when you absolutely must have the product delivered from China to California in the next 2 hours.

Another way to put this in perspective is to compare these prices per kilogram with the prices of commodities, such as precious metals. $20,000 per kg is nearly the price of solid platinum. $500 per kg is about the price of sterling silver. $10 per kg is roughly the price of copper.
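Since all of these ticket prices are just the launch cost per kilogram times the payload mass, here is a trivial check of the figures above, using the same rough assumption as the text of about 100 kg to orbit per passenger:

```python
# Ticket price to LEO at various launch costs, assuming ~100 kg per passenger.
PASSENGER_KG = 100

for dollars_per_kg in (20_000, 1_500, 500, 200, 10):
    ticket = dollars_per_kg * PASSENGER_KG
    print(f"${dollars_per_kg:>6,}/kg  ->  ticket ~ ${ticket:,}")
# $20,000/kg -> $2,000,000 (1960s); $1,500/kg -> $150,000 (today);
# $10/kg -> $1,000 (Musk's aspirational figure)
```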

The reasons for optimism are not purely technological. There has also been significant social progress just in the last few years, with major milestones on LGBT rights reached around the world in 2020 and 2021. Same-sex marriage is now legally recognized across nearly the entire Western Hemisphere.

None of that changes the fact that we are still in a global pandemic which seems to be increasingly out of control. I can’t tell you whether 2022 will be better than 2021, or just more of the same—or perhaps even worse.

But while these times are hard, overall the world is still making progress.

What we can be thankful for

Nov 24 JDN 2458812

Thanksgiving is upon us, yet as more and more evidence is revealed implicating President Trump in grievous crimes, as US carbon emissions that had been declining are now trending upward again, and as our air quality deteriorates for the first time in decades, it may be hard to see what we should be thankful for.

But these are exceptions to a broader trend: The world is getting better, in almost every way, remarkably quickly. Homicide rates in the US are lower than they’ve been since the 1960s. Worldwide, the homicide rate has fallen 20% since 1990.

While world carbon emissions are still increasing, on a per capita basis they are actually starting to decline, and on an efficiency basis (kilograms of carbon-equivalent per dollar of GDP) they are at their lowest ever. This trend is likely to continue: The price of solar power has rapidly declined to the point where it is now the cheapest form of electric power.

The number—not just proportion, absolute number—of people in extreme poverty has declined by almost two-thirds within my own lifetime. The proportion is the lowest it has ever been in human history. World life expectancy is at its highest ever. Death rates from infectious disease fell by over 85% over the 20th century, and are now at their lowest ever.

I wouldn’t usually cite Reason as a source, but they’re right on this one: Defeat appears imminent for all four Horsemen of the Apocalypse. Pestilence, Famine, War, and even Death are all on the decline. We have a great deal to be grateful for: We are living in a golden age.

This is not to say that we should let ourselves become complacent and stop trying to make the world better: On the contrary, it proves that the world can be made better, which gives us every reason to redouble our efforts to do so.

The upsides of life extension

Dec 16 JDN 2458469

If living is good, then living longer is better.

This may seem rather obvious, but it’s something we often lose sight of when discussing the consequences of medical technology for extending life. It seems almost too obvious that living longer must be better, and so we go out of our way to find ways that it is actually worse.

Even from a quick search I was able to find half a dozen popular media articles about life extension, and not one of them focused primarily on the benefits. The empirical literature is better, asking specific, empirically testable questions like “How does life expectancy relate to retirement age?” and “How is lifespan related to population and income growth?” and “What effect will longer lifespans have on pension systems?” Though even there I found essays in medical journals complaining that we have extended “quantity” of life without “quality” (yet by definition, if you are using quality-adjusted life years, QALYs, to assess the cost-effectiveness of a medical intervention, quality is already taken into account).

But still I think somewhere along the way we have forgotten just how good this is. We may not even be able to imagine the benefits of extending people’s lives to 200 or 500 or 1000 years.

To really get some perspective on this, I want you to imagine what a similar conversation must have looked like around the year 1800, at the dawn of the Industrial Revolution, when industrial capitalism came along and babies finally stopped dying.

There was no mass media back then (not enough literacy), but imagine what it would have been like if there had been, or imagine what conversations about the future between elites must have been like.

And we do actually have at least one example of an elite author lamenting the increase in lifespan: His name was Thomas Malthus.

The Malthusian argument was seductive then, and it remains seductive today: If you improve medicine and food production, you will increase population. But if you increase population, you will eventually outstrip those gains in medicine and food and return once more to disease and starvation, only now with more mouths to feed.

Basically any modern discussion of “overpopulation” has this same flavor (by the way, serious environmentalists don’t use that concept; they’re focused on reducing pollution and carbon emissions, not people). Why bother helping poor countries, when they’re just going to double their population and need twice the help?

Well, as a matter of fact, Malthus was wrong. Indeed, he was not just wrong: He was backwards. Increased population has come with increased standard of living around the world, as it allowed for more trade, greater specialization, and the application of economies of scale. You can’t build a retail market with a hunter-gatherer tribe. You can’t build an auto industry with a single city-state. You can’t build a space program with a population of 1 million. Having more people has allowed each person to do and have more than they could before.

Current population projections suggest world population will stabilize between 11 and 12 billion. Crucially, this does not factor in any kind of radical life extension technology. The projections allow for moderate increases in lifespan, but not people living much past 100.

Would increased lifespan lead to increased population? Probably, yes. I can’t be certain, because I can very easily imagine people deciding to put off having kids if they can reasonably expect to live 200 years and never become infertile.

I’m actually more worried about the unequal distribution of offspring: People who don’t believe in contraception will be able to have an awful lot of kids during that time, which could be bad for both the kids and society as a whole. We may need to impose regulations on reproduction similar to (but hopefully less draconian than) the One-Child policy imposed in China.

I think the most sensible way to impose the right incentives while still preserving civil liberties is to make it a tax: The first kid gets a subsidy, to help care for them. The second kid is revenue-neutral; we tax you but you get it back as benefits for the child. (Why not just let them keep the money? One of the few places where I think government paternalism is justifiable is protection against abusive or neglectful parents.) The third and later kids result in progressively higher taxes. We always feed the kids on government money, but their parents are going to end up quite poor if they don’t learn how to use contraceptives. (And of course, contraceptives will be made available for free without a prescription.)
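Just to make the shape of that schedule concrete, here is a minimal sketch in Python; every dollar figure is a made-up placeholder, not a costed proposal:

```python
# Hypothetical child-tax schedule: subsidize the first child, make the
# second revenue-neutral, and tax the third and later at escalating rates.

def annual_net_transfer(children: int) -> int:
    """Net dollars per year from the state to the family (negative = tax)."""
    total = 0
    for n in range(1, children + 1):
        if n == 1:
            total += 3_000            # first child: subsidy
        elif n == 2:
            total += 0                # second child: revenue-neutral
        else:
            total -= 2_000 * (n - 2)  # third and later: escalating tax
    return total

for kids in range(6):
    print(kids, annual_net_transfer(kids))
# 0: 0 | 1: +3000 | 2: +3000 | 3: +1000 | 4: -3000 | 5: -9000
```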

But suppose that, yes, population does greatly increase as a result of longer lifespans. This is not a doomsday scenario. In fact, in itself, this is a good thing. If life is worth living, more lives are better.

The question becomes how we ensure that all these people live good lives; but technology will make that easier too. There seems to be an underlying assumption that increased lifespan won’t come with improved health and vitality; but this is already not true. 60 is the new 50: People who are 60 years old today live as well as people who were 50 years old just a generation ago.

And in fact, radical life extension will be an entirely different mechanism. We’re not talking about replacing a hip here, a kidney there; we’re talking about replenishing your chromosomal telomeres, repairing your cells at the molecular level, and revitalizing the content of your blood. The goal of life extension technology isn’t to make you technically alive but hooked up to machines for 200 years; it’s to make you young again for 200 years. The goal is a world where centenarians are playing tennis with young adults fresh out of college and you have trouble telling which is which.

There is another inequality concern here as well, which is cost. Especially in the US—actually almost only in the US, since nearly every other rich country has universal healthcare—where medicine is privatized and depends on your personal budget, I can easily imagine a world where the rich live to 200 and the poor die at 60. (The forgettable Justin Timberlake film In Time started with this excellent premise and then went precisely nowhere with it. Oddly, the Deus Ex games seem to have considered every consequence of mixing capitalism with human augmentation except this one.) We should be proactively taking steps to prevent this nightmare scenario by focusing on making healthcare provision equitable and universal. Even if this slows down the development of the technology a little bit, it’ll be worth it to make sure that when it does arrive, it will arrive for everyone.

We really don’t know what the world will look like when people can live 200 years or more. Yes, there will be challenges that come from the transition; honestly, what worries me most is people carrying the ideas they grew up with two centuries earlier. Imagine talking politics with Abraham Lincoln: He was viewed as extremely progressive for his time, even radical—but he was still a big-time racist.

The good news there is that people are not actually as set in their ways as many believe: While the huge surge in pro-LGBT attitudes did come from younger generations, support for LGBT rights has been gradually creeping up among older generations too. Perhaps if Abraham Lincoln had lived through the Great Depression, the World Wars, and the Civil Rights Movement he’d be a very different person than he was in 1865. Longer lifespans will mean people live through more social change; that’s something we’re going to need to cope with.

And of course violent death becomes even more terrifying when aging is out of the picture: It’s tragic enough when a 20-year-old dies in a car accident today and we imagine the 60 years they lost—but what if it was 180 years or 480 years instead? But violent death in basically all its forms is declining around the world.

But again, I really want to emphasize this: Think about how good this is. Imagine meeting your great-grandmother—and not just meeting her, not just having some fleeting contact you half-remember from when you were four years old or something, but getting to know her, talking with her as an adult, going to the same movies, reading the same books. Imagine the converse: Knowing your great-grandchildren, watching them grow up and have kids of their own, your great-great-grandchildren. Imagine the world that we could build if people stopped dying all the time.

And if that doesn’t convince you, I highly recommend Nick Bostrom’s “Fable of the Dragon-Tyrant”.

Stop making excuses for the dragon.

Fighting the zero-sum paradigm

Dec 2 JDN 2458455

It should be obvious at this point that there are deep, perhaps even fundamental, divides between the attitudes and beliefs of different political factions. It can be very difficult to even understand, much less sympathize, with the concerns of people who are racist, misogynistic, homophobic, xenophobic, and authoritarian.

But at the end of the day we still have to live in the same country as these people, so we’d better try to understand how they think. And maybe, just maybe, that understanding will help us to change them.

There is one fundamental belief system that I believe underlies almost all forms of extremism. Right now, right-wing extremism is the major threat to global democracy, but left-wing extremism subscribes to the same core paradigm (consistent with Horseshoe Theory).

I think the best term for this is the zero-sum paradigm. The idea is quite simple: There is a certain amount of valuable “stuff” (money, goods, land, status, happiness) in the world, and the only political question is who gets how much.

Thus, any improvement in anyone’s life must, necessarily, come at someone else’s expense. If I become richer, you become poorer. If I become stronger, you become weaker. Any improvement in my standard of living is a threat to your status.

If this belief were true, it would justify, or at least rationalize, all sorts of destructive behavior: Any harm I can inflict upon someone else will yield a benefit for me, by some fundamental conservation law of the universe.

Viewed in this light, beliefs like patriarchy and White supremacy suddenly become much more comprehensible: Why would you want to spend so much effort hurting women and Black people? Because, by the fundamental law of zero-sum, any harm to women is a benefit to men, and any harm to Black people is a benefit to White people. The world is made of “teams”, and you are fighting for your own against all the others.

And I can even see why such an attitude is seductive: It’s simple and easy to understand. And there are many circumstances where it can be approximately true.

When you are bargaining with your boss over a wage, one dollar more for you is one dollar less for your boss.

When your factory outsources production to China, one more job for China is one less job for you.

When we vote for President, one more vote for the Democrats is one less vote for the Republicans.

But of course the world is not actually zero-sum. Both you and your boss would be worse off if your job were to disappear; they need your work and you need their money. For every job that is outsourced to China, another job is created in the United States. And democracy itself is such a profound public good that it basically overwhelms all others.

In fact, it is precisely when a system is running well that the zero-sum paradigm becomes closest to true. In the space of all possible allocations, it is the efficient ones that behave in something like a zero-sum way, because when the system is efficient, we are already producing as much as we can.
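To make that concrete, take the wage-bargaining example again, with some hypothetical numbers: the split of the surplus is zero-sum, but the existence of the surplus is not.

```python
# Hypothetical wage bargain: the work is worth $30/hour to the employer,
# and the worker's next-best option pays $15/hour.
VALUE_TO_BOSS = 30.0
OUTSIDE_OPTION = 15.0

for wage in (18.0, 22.0, 27.0):
    worker_gain = wage - OUTSIDE_OPTION
    boss_gain = VALUE_TO_BOSS - wage
    print(f"wage ${wage:.0f}: worker +{worker_gain:.0f}, "
          f"boss +{boss_gain:.0f}, total surplus {worker_gain + boss_gain:.0f}")
# Moving the wage only shifts the fixed $15/hour surplus between the two
# parties (zero-sum); eliminating the job destroys it for both (not zero-sum).
```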

This may be part of why populist extremism always seems to assert itself during periods of global prosperity, as in the 1920s and today: It is precisely when the world is running at its full capacity that it feels most like someone else’s gain must come at your loss.

Yet if we live according to the zero-sum paradigm, we will rapidly destroy the prosperity that made that paradigm seem plausible. A trade war between the US and China would put millions out of work in both countries. A real war with conventional weapons would kill millions. A nuclear war would kill billions.

This is what we must convey: We must show people just how good things are right now.

This is not an easy task; when people want to believe the world is falling apart, they can very easily find excuses to do so. You can point to the statistics showing a global decline in homicide, but one dramatic shooting on the TV news will wipe that all away. You can show the worldwide rise in real incomes across the board, but that won’t console someone who just lost their job and blames outsourcing or immigrants.

Indeed, many people will be offended by the attempt—the mere suggestion that the world is actually in very good shape and overall getting better will be perceived as an attempt to deny or dismiss the problems and injustices that still exist.

I encounter this especially from the left: Simply pointing out the objective fact that the wealth gap between White and Black households is slowly closing is often taken as a claim that racism no longer exists or doesn’t matter. Congratulating the meteoric rise in women’s empowerment around the world is often paradoxically viewed as dismissing feminism instead of lauding it.

I think the best case against progress can be made with regard to global climate change: Carbon emissions are not falling nearly fast enough, and the world is getting closer to the brink of truly catastrophic ecological damage. Yet even here the zero-sum paradigm is clearly holding us back; workers in fossil-fuel industries think that the only way to reduce carbon emissions is to make their families suffer, but that’s simply not true. We can make them better off too.

Talking about injustice feels righteous. Talking about progress doesn’t. Yet I think what the world needs most right now—the one thing that might actually pull us back from the brink of fascism or even war—is people talking about progress.

If people think that the world is full of failure and suffering and injustice, they will want to tear down the whole system and start over with something else. In a world that is largely democratic, that very likely means switching to authoritarianism. If people think that this is as bad as it gets, they will be willing to accept or even instigate violence in order to change to almost anything else.

But if people realize that in fact the world is full of success and prosperity and progress, that things are right now quite literally better in almost every way for almost every person in almost every country than they were a hundred—or even fifty—years ago, they will not be so eager to tear the system down and start anew. Centrism is often mocked (partly because it is confused with false equivalence), but in a world where life is improving this quickly for this many people, “stay the course” sounds awfully attractive to me.

That doesn’t mean we should ignore the real problems and injustices that still exist, of course. There is still a great deal of progress left to be made. But I believe we are more likely to make progress if we acknowledge and seek to continue the progress we have already made, than if we allow ourselves to fall into despair as if that progress did not exist.

The “productivity paradox”

 

Dec 10, JDN 2458098

Take a look at this graph of manufacturing output per worker-hour:

[Graph: US manufacturing output per worker-hour]

From 1988 to 2008, it was growing at a steady pace. In 2008 and 2009 it took a dip due to the Great Recession; no big surprise there. But then since 2012 it has been… completely flat. If we take this graph at face value, it would imply that manufacturing workers today can produce no more output than workers five years ago, and indeed only about 10% more than workers a decade ago. Whereas, a worker in 2008 was producing over 60% more than a worker in 1998, who was producing over 40% more than a worker in 1988.
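In annualized terms, those cumulative changes work out as follows (a quick check; the 2012 endpoint is read off the graph, so treat it as approximate):

```python
# Annualized growth implied by the cumulative changes quoted above.
for label, ratio, years in [("1988-1998", 1.40, 10),
                            ("1998-2008", 1.60, 10),
                            ("2012-2017", 1.00, 5)]:
    print(label, f"{ratio ** (1 / years) - 1:.1%} per year")
# ~3.4%, ~4.8%, and 0.0% respectively
```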

Many economists call this the “productivity paradox”, and use it to argue that we don’t really need to worry about robots taking all our jobs any time soon. I think this view is mistaken.

The way we measure productivity is fundamentally wrongheaded, and is probably the sole cause of this “paradox”.

First of all, we use total hours scheduled to work, not total hours actually doing productive work. This is obviously much, much easier to measure, which is why we do it. But if you think for a moment about how the 40-hour workweek norm is going to clash with rapidly rising real productivity, it becomes apparent why this isn’t going to be a good measure.

When a worker finds a way to get done in 10 hours what used to take 40 hours, what does that worker’s boss do? Send them home after 10 hours because the job is done? Give them a bonus for their creativity? Hardly. That would be far too rational. They assign them more work, while paying them exactly the same. Recognizing this, what is such a worker to do? The obvious answer is to pretend to work the other 30 hours, while in fact doing something more pleasant than working.

And indeed, so-called “worker distraction” has been rapidly increasing. People are right to blame smartphones, I suppose, but not for the reasons they think. It’s not that smartphones are inherently distracting devices. It’s that smartphones are the cutting edge of a technological revolution that has made most of our work time unnecessary, so due to our fundamentally defective management norms they create overwhelming incentives to waste time at work to avoid getting drenched in extra tasks for no money.

That would probably be enough to explain the “paradox” by itself, but there is a deeper reason that in the long run is even stronger. It has to do with the way we measure “output”.

It might surprise you to learn that economists almost never consider output in terms of the actual number of cars produced, buildings constructed, songs written, or software packages developed. The standard measures of output are all in the form of so-called “real GDP”; that is, the dollar value of output produced.

They do adjust for inflation using price indexes, but as I’ll show in a moment this still creates a fundamentally biased picture of the productivity dynamics.

Consider a world with only three industries: Housing, Food, and Music.

Productivity in Housing doesn’t change at all. Producing a house cost 10,000 worker-hours in 1950, and cost 10,000 worker-hours in 2000. Nominal price of houses has rapidly increased, from $10,000 in 1950 to $200,000 in 2000.

Productivity in Food rises moderately fast. Producing 1,000 meals cost 1,000 worker-hours in 1950, and cost 100 worker-hours in 2000. Nominal price of food has increased slowly, from $1,000 per 1,000 meals in 1950 to $5,000 per 1,000 meals in 2000.

Productivity in Music rises extremely fast. Producing 1,000 performances cost 10,000 worker-hours in 1950, and cost 1 worker-hour in 2000. Nominal price of music has collapsed, from $100,000 per 1,000 performances in 1950 to $1,000 per 1,000 performances in 2000.

This is of course an extremely stylized version of what has actually happened: Housing has gotten way more expensive, food has stayed about the same in price while farm employment has plummeted, and the rise of digital music has brought about a new Renaissance in actual music production and listening while revenue for the music industry has collapsed. There is a very nice Vox article on the “productivity paradox” showing a graph of how prices have changed in different industries.

How would productivity appear in the world I’ve just described, by standard measures? Well, to say that, I actually need to say something about how consumers substitute across industries. But I think I’ll be forgiven in this case for saying that there is no substitution whatsoever; you can’t eat music or live in a burrito. There’s also a clear Maslow hierarchy here: They say that man cannot live by bread alone, but I think living by Led Zeppelin alone is even harder.

Consumers will therefore choose like this: Over 10 years, buy 1 house, 10,000 meals, and as many performances as you can afford after that. Further suppose that each person had $2,100 per year to spend in 1940-1950, and $50,000 per year to spend in 1990-2000. (This is approximately true for actual nominal US GDP per capita.)

1940-1950:

Total funds: $21,000

1 house = $10,000

10,000 meals = $10,000

Remaining funds: $1,000

Performances purchased: 10

1990-2000:

Total funds: $500,000

1 house = $200,000

10,000 meals = $50,000

Remaining funds: $250,000

Performances purchased: 250,000

(Do you really listen to this much music? 250,000 performances over 10 years is about 70 songs per day. If each song is 3 minutes, that’s only about 3.5 hours per day. If you listen to music while you work or watch a couple of movies with musical scores, yes, you really do listen to this much music! The unrealistic part is assuming that people in 1950 listen to so little, given that radio was already widespread. But if you think of music as standing in for all media, the general trend of being able to consume vastly more media in the digital age is clearly correct.)

Now consider how we would compute a price index for each time period. We would construct a basket of goods and determine the price of that basket in each time period, then adjust prices until that basket has a constant price.

Here, the basket would probably be what people bought in 1940-1950: 1 house, 10,000 meals, and 10 music performances.

In 1950, this basket cost $10,000 + $10,000 + $1,000 = $21,000.

In 2000, this basket cost $200,000 + $50,000 + $10 = $250,010.

This means that our inflation adjustment is $250,010/$21,000, or about 12 to 1, so we would estimate the real per-capita GDP in 1950 at about $25,000 in year-2000 dollars. That overshoots actual estimates of real per-capita GDP in 1950 (which are closer to $15,000), but it is at least the right order of magnitude.

So, what would we say about productivity?

Sales of houses in 1950 were 1 per person, costing 10,000 worker hours.

Sales of food in 1950 were 10,000 per person, costing 10,000 worker hours.

Sales of music in 1950 were 10 per person, costing 100 worker hours.

Worker hours per person are therefore 20,100.

Sales of houses in 2000 were 1 per person, costing 10,000 worker hours.

Sales of food in 2000 were 10,000 per person, costing 1,000 worker hours.

Sales of music in 2000 were 250,000 per person, costing 250 worker hours.

Worker hours per person are therefore 11,250.

Therefore we would estimate that productivity rose from $250,010/20,100 ≈ $12 per worker-hour in 1950 to $500,000/11,250 ≈ $44 per worker-hour in 2000 (both in year-2000 dollars). This is an annual growth rate of about 2.6%, which is in the neighborhood of actual estimates of long-run productivity growth. For such a highly stylized model, that is not bad at all.
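Because these figures depend on several steps of chained arithmetic, here is a short Python script that re-derives all of them from the model exactly as stated above (no extra assumptions), including the “true” industry growth rates discussed below:

```python
# Re-derive every number in the stylized three-industry model above.
# All constants are the assumptions stated in the text; nothing else.

BUDGET = {1950: 21_000, 2000: 500_000}      # dollars per person per decade
HOUSE_PRICE = {1950: 10_000, 2000: 200_000} # dollars per house
MEAL_PRICE = {1950: 1.0, 2000: 5.0}         # $1,000 -> $5,000 per 1,000 meals
PERF_PRICE = {1950: 100.0, 2000: 1.0}       # $100,000 -> $1,000 per 1,000
HOUSE_HOURS = {1950: 10_000, 2000: 10_000}  # worker-hours per house
MEAL_HOURS = {1950: 1.0, 2000: 0.1}         # worker-hours per meal
PERF_HOURS = {1950: 10.0, 2000: 0.001}      # worker-hours per performance

perfs, hours = {}, {}
for y in (1950, 2000):
    leftover = BUDGET[y] - HOUSE_PRICE[y] - 10_000 * MEAL_PRICE[y]
    perfs[y] = leftover / PERF_PRICE[y]       # 10 in 1950; 250,000 in 2000
    hours[y] = (HOUSE_HOURS[y] + 10_000 * MEAL_HOURS[y]
                + perfs[y] * PERF_HOURS[y])   # 20,100 in 1950; 11,250 in 2000

# Price the 1950 basket (1 house, 10,000 meals, 10 performances) in each year.
basket = {y: HOUSE_PRICE[y] + 10_000 * MEAL_PRICE[y]
             + perfs[1950] * PERF_PRICE[y] for y in (1950, 2000)}
inflation = basket[2000] / basket[1950]                        # ~12 to 1
print(f"real 1950 GDP per capita: ${2_100 * inflation:,.0f}")  # ~$25,000

# Measured productivity: real decade output over decade worker-hours.
prod_1950 = BUDGET[1950] * inflation / hours[1950]  # ~$12/hour
prod_2000 = BUDGET[2000] / hours[2000]              # ~$44/hour
print(f"measured growth: {(prod_2000 / prod_1950) ** (1/50) - 1:.1%}")  # ~2.6%

# True physical productivity growth in each industry, for comparison.
print(f"food:  {10 ** (1/50) - 1:.1%} per year")      # ~4.7%
print(f"music: {10_000 ** (1/50) - 1:.1%} per year")  # ~20%
```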

But think about how much actual productivity rose, at least in the industries where it did.

We produce 10 times as much food per worker hour after 50 years, which is an annual growth rate of 4.7%, or nearly twice the estimated growth rate.

We produce 10,000 times as much music per worker hour after 50 years, which is an annual growth rate of over 20%, or almost eight times the estimated growth rate.

Moreover, should music producers be worried about losing their jobs to automation? Absolutely! People simply won’t be able to listen to much more music than they already are, so any continued increases in music productivity are going to make musicians lose jobs. And that was already allowing for music consumption to increase by a factor of 25,000.

Of course, the real world has a lot more industries than this, and everything is a lot more complicated. We do actually substitute across some of those industries, unlike in this model.

But I hope I’ve gotten at least the basic point across that when things become drastically cheaper as technological progress often does, simply adjusting for inflation doesn’t do the job. One dollar of music today isn’t the same thing as one dollar of music a century ago, even if you inflation-adjust their dollars to match ours. We ought to be measuring in hours of music; an hour of music is much the same thing as an hour of music a century ago.

And likewise, that secretary/weather forecaster/news reporter/accountant/musician/filmmaker in your pocket that you call a “smartphone” really ought to be counted as more than just a simple inflation adjustment on its market price. The fact that it is mind-bogglingly cheaper to get these services than it used to be is the technological progress we care about; it’s not some statistical artifact to be removed by proper measurement.

Combine that with actually measuring the hours of real, productive work, and I think you’ll find that productivity is still rising quite rapidly, and that we should still be worried about what automation is going to do to our jobs.

Think of this as a moral recession

August 27, JDN 2457993

The Great Depression was, without doubt, the worst macroeconomic event of the last 200 years. Over 30 million people became unemployed. Unemployment exceeded 20%. Standard of living fell by as much as a third in the United States. Political unrest spread across the world, and the collapsing government of Germany ultimately became the Third Reich and triggered the Second World War. If we ignore the world war, however, the effect on mortality rates was surprisingly small. (“Other than that, Mrs. Lincoln, how was the play?”)

And yet, how long do you suppose it took for economic growth to repair the damage? 80 years? 50 years? 30 years? 20 years? Try ten to fifteen. By 1940, the US, UK, Germany, and Japan all had a per-capita GDP at least as high as in 1930. By 1945, every country in Europe had a per-capita GDP at least as high as they did before the Great Depression.

The moral of this story is this: Recessions are bad, and can have far-reaching consequences; but ultimately what really matters in the long run is growth.

Assuming the same growth otherwise, a country that had a recession as large as the Great Depression would be about 70% as rich as one that didn’t.

But over 100 years, a country that experienced 3% growth instead of 2% growth would be over two and a half times richer.

Therefore, in terms of standard of living only, if you were given the choice between having a Great Depression but otherwise growing at 3%, and having no recessions but growing at 2%, your grandchildren would be better off if you chose the former. (Of course, given the possibility of political unrest or even war, the depression could very well end up worse.)
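The compounding behind that comparison is easy to verify:

```python
# 3% vs. 2% annual growth compounded over a century, with and without a
# Great-Depression-sized loss (about 30% of output) along the way.
ratio = 1.03 ** 100 / 1.02 ** 100
print(f"3% vs 2% after 100 years: {ratio:.2f}x richer")         # ~2.65x
print(f"even after a Depression:  {0.70 * ratio:.2f}x richer")  # ~1.86x
```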

With that in mind, I want you to think of the last few years—and especially the last few months—as a moral recession. Donald Trump being President of the United States is clearly a step backward for human civilization, and it seems to have breathed new life into some of the worst ideologies our society has ever harbored, from extreme misogyny, homophobia, right-wing nationalism, and White supremacism to outright Neo-Nazism. When one of the central debates in our public discourse is what level of violence is justifiable against Nazis under what circumstances, something has gone terribly, terribly wrong.

But much as recessions are overwhelmed in the long run by economic growth, there is reason to be confident that this moral backslide is temporary and will be similarly overwhelmed by humanity’s long-run moral progress.

What moral progress, you ask? Let’s remind ourselves.

Just 100 years ago, women could not vote in the United States.

160 years ago, slavery was legal in 15 US states.

Just 2 years ago, same-sex marriage was illegal in 14 US states. Yes, you read that number correctly. I said two. There are gay couples graduating high school and getting married now who as freshmen didn’t think they would be allowed to get married.

That’s just the United States. What about the rest of the world?

100 years ago, almost all of the world’s countries were dictatorships. Today, half of the world’s countries are democracies. Indeed, thanks to India, the majority of the world’s population now lives under democracy.

35 years ago, the Soviet Union still ruled most of Eastern Europe and Northern Asia with an iron fist (or should I say “curtain”?).

30 years ago, the number of human beings in extreme poverty—note I said number, not just rate; the world population was two-thirds what it is today—was twice as large as it is today.

Over the last 65 years, the global death rate due to war has fallen from 250 per million to just 10 per million.

The global literacy rate has risen from 40% to 80% in just 50 years.

World life expectancy has increased by 6 years in just the last 20 years.

We are living in a golden age. Do not forget that.

Indeed, if there is anything that could destroy all these astonishing achievements, I think it would be our failure to appreciate them.

If you listen to what these Neo-Nazi White supremacists say about their grievances, they sound like the spoiled children of millionaires (I mean, they elected one President, after all). They are outraged because they only get 90% of what they want instead of 100%—or even outraged not because they didn’t get what they wanted but because someone else they don’t know also did.

If you listen to the far left, their complaints don’t make much more sense. If you didn’t actually know any statistics, you’d think that life is just as bad for Black people in America today as it was under Jim Crow or even slavery. Well, it’s not even close. I’m not saying racism is gone; it’s definitely still here. But the civil rights movement has made absolutely enormous strides, from banning school segregation and housing redlining to reforming prison sentences and instituting affirmative action programs. Simply the fact that “racist” is now widely considered a terrible thing to be is a major accomplishment in itself. A typical Black person today, despite having only about 60% of the income of a typical White person, is still richer than a typical White person was just 50 years ago. While the 71% high school completion rate Black people currently have may not sound great, it’s much higher than the 50% rate that the whole US population had as recently as 1950.

Yes, there are some things that aren’t going very well right now. The two that I think are most important are climate change and income inequality. As both the global mean temperature anomaly and the world top 1% income share continue to rise, millions of people will suffer and die needlessly from diseases of poverty and natural disasters.

And of course if Neo-Nazis manage to take hold of the US government and try to repeat the Third Reich, that could be literally the worst thing that ever happened. If it triggered a nuclear war, it unquestionably would be literally the worst thing that ever happened. Both these events are unlikely—but not nearly as unlikely as they should be. (FiveThirtyEight interviewed several nuclear experts who estimated a probability of imminent nuclear war at a horrifying five percent.) So I certainly don’t want to make anyone complacent about these very grave problems.

But I worry also that we go too far the other direction, and fail to celebrate the truly amazing progress humanity has made thus far. We hear so often that we are treading water, getting nowhere, or even falling backward, that we begin to feel as though the fight for moral progress is utterly hopeless. If all these centuries of fighting for justice really had gotten us nowhere, the only sensible thing to do at this point would be to give up. But on the contrary, we have made enormous progress in an incredibly short period of time. We are on the verge of finally winning this fight. The last thing we want to do now is give up.

Zootopia taught us constructive responses to bigotry

Sep 10, JDN 2457642

Zootopia wasn’t just a good movie; Zootopia was a great movie. I’m not just talking about its grosses (over $1 billion worldwide) or its ratings (8.1 on IMDB; 98% from critics and 93% from viewers on Rotten Tomatoes; 78 from critics and 8.8 from users on Metacritic). No, I’m talking about its impact on the world. This movie isn’t just a fun and adorable children’s movie (though it is that). This movie is a work of art that could have profound positive effects on our society.

Why? Because Zootopia is about bigotry—and more than that, it doesn’t just say “bigotry is bad, bigots are bad”; it provides us with a constructive response to bigotry, and forces us to confront the possibility that sometimes the bigots are us.

Indeed, it may be no exaggeration (though I’m sure I’ll get heat on the Internet for suggesting it) to say that Zootopia has done more to fight bigotry than most social justice activists will achieve in their entire lives. Don’t get me wrong, some social justice activists have done great things; and indeed, I may have to count myself in this “most activists” category, since I can’t point to any major accomplishments I’ve yet made in social justice.

But one of the biggest problems I see in the social justice community is the tendency to exclude and denigrate (in sociology jargon, “other” as a verb) people for acts of bigotry, even quite mild ones. Make one vaguely sexist joke, and you may as well be a rapist. Use racially insensitive language by accident, and clearly you are a KKK member. Say something ignorant about homosexuality, and you may as well be Rick Santorum. It becomes less about actually moving the world forward, and more about reaffirming our tribal unity as social justice activists. We are the pure ones. We never do wrong. All the rest of you are broken, and the only way to fix yourself is to become one of us in every way.

In the process of fighting tribal bigotry, we form our own tribe and become our own bigots.

Zootopia offers us another way. If you haven’t seen it, go rent it on DVD or stream it on Netflix right now. Seriously, this blog post will be here when you get back. I’m not going to play any more games with “spoilers!” though. It is definitely worth seeing, and from this point forward I’m going to presume you have.

The brilliance of Zootopia lies in the fact that it depicted bigotry as what it really is—not some evil force that infests us from outside, nor something that only cruel, evil individuals would ever partake in, but thoughts and attitudes that we all may have from time to time, that come naturally, and even in some cases might be based on a kernel of statistical truth. Judy Hopps is prey; she grew up in a rural town surrounded by others of her own species (with a population the size of New York City according to the sign, because this is still sometimes a silly Disney movie). She only knew a handful of predators growing up, yet when she moves to Zootopia suddenly she’s confronted with thousands of them, all around her. She doesn’t know what most predators are like, or how best to deal with them.

What she does know is that her ancestors were terrorized, murdered, and quite literally eaten by the ancestors of predators. Her instinctual fear of predators isn’t something utterly arbitrary; it was written into the fabric of her DNA by her ancestral struggle for survival. She has a reason to hate and fear predators that, on its face, actually seems to make sense.

And when there is a spree of murders, all committed by predators, it feels natural to us that Judy would fall back on her old prejudices; indeed, the brilliance of it is that they don’t immediately feel like prejudices. It takes us a moment to let her off-the-cuff comments at the press conference sink in (and Nick’s shocked reaction surely helps), before we realize that was really bigoted. Our adorable, innocent, idealistic, beloved protagonist is a bigot!

Or rather, she has done something bigoted. Because she is such a sympathetic character, we avoid the implication that she is a bigot, that this is something permanent and irredeemable about her. We have already seen the good in her, so we know that this bigotry isn’t what defines who she is. And in the end, she realizes where she went wrong and learns to do better. Indeed, it is ultimately revealed that the murders were orchestrated by someone whose goal was specifically to trigger those ancient ancestral feuds, and Judy reveals that plot and ultimately ends up falling in love with a predator herself.

What Zootopia is really trying to tell us is that we are all Judy Hopps. Every one of us most likely harbors some prejudiced attitude toward someone. If it’s not Black people or women or Muslims or gays, well, how about rednecks? Or Republicans? Or (perhaps the hardest for me) Trump supporters? If you are honest with yourself, there is probably some group of people on this planet that you harbor attitudes of disdain or hatred toward that nonetheless contains a great many good people who do not deserve your disdain.

And conversely, all bigots are Judy Hopps too, or at least the vast majority of them. People don’t wake up in the morning concocting evil schemes for the sake of being evil like cartoon supervillains. (Indeed, perhaps the greatest thing about Zootopia is that it is a cartoon in the sense of being animated, but it is not a cartoon in the sense of being morally simplistic. Compare Captain Planet, wherein polluters aren’t hardworking coal miners with no better options or even corrupt CEOs out to make an extra dollar to go with their other billion; no, they pollute on purpose, for no reason, because they are simply evil. Now that is a cartoon.) Normal human beings don’t plan to make the world a worse place. A handful of psychopaths might, but even then I think it’s more that they don’t care; they aren’t trying to make the world worse, they just don’t particularly mind if they do, as long as they get what they want. Robert Mugabe and Kim Jong-un are despicable human beings with the blood of millions on their hands, but even they aren’t trying to make the world worse.

And thus, if your theory of bigotry requires that bigots are inhuman monsters who harm others by their sheer sadistic evil, that theory is plainly wrong. Actually I think when stated outright, hardly anyone would agree with that theory; but the important thing is that we often act as if we do. When someone does something bigoted, we shun them, deride them, push them as far as we can to the fringes of our own social group or even our whole society. We don’t say that your statement was racist; we say you are racist. We don’t say your joke was sexist; we say you are sexist. We don’t say your decision was homophobic; we say you are homophobic. We define bigotry as part of your identity, something as innate and ineradicable as your race or sex or sexual orientation itself.

I think I know why we do this: It is to protect ourselves from the possibility that we ourselves might sometimes do bigoted things. Because only bigots do bigoted things, and we know that we are not bigots.

We laugh at this when someone else does it: “But some of my best friends are Black!” “Happy #CincoDeMayo; I love Hispanics!” But that is the very same psychological defense mechanism we’re using ourselves, albeit in a more extreme application. When we commit an act that is accused of being bigoted, we begin searching for contextual evidence outside that act to show that we are not bigoted. The truth we must ultimately confront is that this is irrelevant: The act can still be bigoted even if we are not overall bigots—for we are all Judy Hopps.

This seems like terrible news, even when delivered by animated animals (or fuzzy muppets in Avenue Q), because we tend to hear it as “We are all bigots.” We hear this as saying that bigotry is inevitable, inescapable, literally written into the fabric of our DNA. At that point, we may as well give up, right? It’s hopeless!

But that much we know can’t be true. It could be (indeed, likely is) true that some amount of bigotry is inevitable, just as no country has ever managed to reach zero homicide or zero disease. But just as rates of homicide and disease have precipitously declined with the advancement of human civilization (starting around industrial capitalism, as I pointed out in a previous post!), so indeed have rates of bigotry, at least in recent times.

For goodness’ sake, it used to be a legal, regulated industry to buy and sell other human beings in the United States! This was seen as normal; indeed many argued that it was economically indispensable.

Is 1865 too far back for you? How about racially segregated schools, which were only eliminated from US law in 1954, a time when my parents were both alive? (To be fair, only barely; my father was a month old.) Yes, even today the racial composition of our schools is far from evenly mixed; but it used to be a matter of law that Black children could not go to school with White children.

Women were only granted the right to vote in the US in 1920. My parents weren’t alive yet, but there definitely are people still alive today who were children when the Nineteenth Amendment was ratified.

Same-sex marriage was not legalized across the United States until last year. My own life plans were suddenly and directly affected by this change.

We have made enormous progress against bigotry, in a remarkably short period of time. It has been argued that social change progresses by the death of previous generations; but that simply can’t be true, because we are moving much too fast for that! Attitudes toward LGBT people have improved dramatically in just the last decade.

Instead, it must be that we are actually changing people’s minds. Not everyone’s, to be sure; and often not as quickly as we’d like. But bit by bit, we tear bigotry down, like people tearing off tiny pieces of the Berlin Wall in 1989.

It is important to understand what we are doing here. We are not getting rid of bigots; we are getting rid of bigotry. We want to convince people, “convert” them if you like, not shun them or eradicate them. And we want to strive to improve our own behavior, because we know it will not always be perfect. By forgiving others for their mistakes, we can learn to forgive ourselves for our own.

It is only by talking about bigoted actions and bigoted ideas, rather than bigoted people, that we can hope to make this progress. Someone can’t change who they are, but they can change what they believe and what they do. And along those same lines, it’s important to be clear about detailed, specific actions that people can take to make themselves and the world better.

Don’t just say “Check your privilege!” which at this point is basically a meaningless Applause Light. Instead say “Here are some articles I think you should read on police brutality, including this one from The American Conservative. And there’s a Black Lives Matter protest next weekend, would you like to join me there to see what we do?” Don’t just say “Stop being so racist toward immigrants!”; say “Did you know that a large share of undocumented immigrants entered the country legally and simply overstayed their visas? If we deport all these people, won’t that break up families?” Don’t try to score points. Don’t try to show that you’re the better person. Try to understand, inform, and persuade. You are talking to Judy Hopps, for we are all Judy Hopps.

And when you find false beliefs or bigoted attitudes in yourself, don’t deny them, don’t suppress them, don’t make excuses for them—but also don’t hate yourself for having them. Forgive yourself for your mistake, and then endeavor to correct it. For we are all Judy Hopps.

Believing in civilization without believing in colonialism

JDN 2457541

In a post last week I presented some of the overwhelming evidence that society has been getting better over time, particularly since the start of the Industrial Revolution. I focused mainly on infant mortality rates—babies not dying—but there are lots of other measures you could use as well. Despite popular belief, poverty is rapidly declining, and is now the lowest it’s ever been. War is rapidly declining. Crime is rapidly declining in First World countries, and to the best of our knowledge crime rates are stable worldwide. Public health is rapidly improving. Lifespans are getting longer. And so on, and so on. It’s not quite true to say that every indicator of human progress is on an upward trend, but the vast majority of really important indicators are.

Moreover, there is every reason to believe that this great progress is largely the result of what we call “civilization”, even Western civilization: Stable, centralized governments, strong national defense, representative democracy, free markets, openness to global trade, investment in infrastructure, science and technology, secularism, a culture that values innovation, and freedom of speech and the press. We did not get here by Marxism, nor agrarian socialism, nor primitivism, nor anarcho-capitalism. We did not get here by fascism, nor theocracy, nor monarchy. This progress was built by the center-left welfare state, “social democracy”, “modified capitalism”, the system where free, open markets are coupled with a strong democratic government to protect and steer them.

This fact is basically beyond dispute; the evidence is overwhelming. The serious debate in development economics is over which parts of the Western welfare state are most conducive to raising human well-being, and which parts of the package are more optional. And even then, some things are fairly obvious: Stable government is clearly necessary, while speaking English is clearly optional.

Yet many people are resistant to this conclusion, or even offended by it, and I think I know why: They are confusing the results of civilization with the methods by which it was established.

The results of civilization are indisputably positive: Everything I just named above, especially babies not dying.

But the methods by which civilization was established are not; indeed, some of the greatest atrocities in human history are attributable at least in part to attempts to “spread civilization” to “primitive” or “savage” people.

It is therefore vital to distinguish between the result, civilization, and the processes by which it was effected, such as colonialism and imperialism.

First, it’s important not to overstate the link between civilization and colonialism.

We tend to associate colonialism and imperialism with White people from Western European cultures conquering other people in other cultures; but in fact colonialism and imperialism are basically universal to any human culture that attains sufficient size and centralization. India engaged in colonialism, Persia engaged in imperialism, China engaged in imperialism, the Mongols were of course major imperialists, and don’t forget the Ottoman Empire; and did you realize that Tibet and Mali were at one time imperialists as well? And of course there are a whole bunch of empires you’ve probably never heard of, like the Parthians and the Ghaznavids and the Umayyads. Even many of the people we’re accustomed to thinking of as innocent victims of colonialism were themselves imperialists—the Aztecs certainly were (they even sold people into slavery and used them for human sacrifice!), as were the Pequot, and the Iroquois may not have outright conquered anyone but were definitely at least “soft imperialists” the way that the US is today, spreading their influence around and using economic and sometimes military pressure to absorb other cultures into their own.

Of course, those were all civilizations, at least in the broadest sense of the word; but before that, it’s not that there wasn’t violence, it just wasn’t organized enough to be worthy of being called “imperialism”. The more general concept of intertribal warfare is a human universal, and some hunter-gatherer tribes actually engage in an essentially constant state of warfare we call “endemic warfare”. People have been grouping together to kill other people they perceived as different for at least as long as there have been people to do so.

This is of course not to excuse what European colonial powers did when they set up bases on other continents and exploited, enslaved, or even murdered the indigenous population. And the absolute numbers of people enslaved or killed are typically larger under European colonialism, mainly because European cultures became so powerful and conquered almost the entire world. Even if European societies were not uniquely predisposed to be violent (and I see no evidence to say that they were—humans are pretty much humans), they were more successful in their violent conquering, and so more people suffered and died. It’s also a first-mover effect: If the Ming Dynasty had supported Zheng He more in his colonial ambitions, I’d probably be writing this post in Mandarin and reflecting on why Asian cultures have engaged in so much colonial oppression.

While there is a deeply condescending paternalism (and often post-hoc rationalization of your own self-interested exploitation) involved in saying that you are conquering other people in order to civilize them, humans are also perfectly capable of committing atrocities for far less noble-sounding motives. There are holy wars such as the Crusades and ethnic genocides like in Rwanda, and the Arab slave trade was purely for profit and didn’t even have the pretense of civilizing people (not that the Atlantic slave trade was ever really about that anyway).

Indeed, I think it’s important to distinguish between colonialists who really did make some effort at civilizing the populations they conquered (like Britain, and also the Mongols actually) and those that clearly were just using that as an excuse to rape and pillage (like Spain and Portugal). This is similar to but not quite the same thing as the distinction between settler colonialism, where you send colonists to live there and build up the country, and exploitation colonialism, where you send military forces to take control of the existing population and exploit them to get their resources. Countries that experienced settler colonialism (such as the US and Australia) have fared a lot better in the long run than countries that experienced exploitation colonialism (such as Haiti and Zimbabwe).

The worst consequences of colonialism weren’t even really anyone’s fault, actually. The reason something like 98% of all Native Americans died as a result of European colonization was not that Europeans killed them—they did kill thousands of course, and I hope it goes without saying that that’s terrible, but it was a small fraction of the total deaths. The reason such a huge number died and whole cultures were depopulated was disease, and the inability of medical technology in any culture at that time to handle such a catastrophic plague. The primary cause was therefore accidental, and not really foreseeable given the state of scientific knowledge at the time. (I therefore think it’s wrong to consider it genocide—maybe democide.) Indeed, what really would have saved these people would be if Europe had advanced even faster into industrial capitalism and modern science, or else waited to colonize until they had; and then they could have distributed vaccines and antibiotics when they arrived. (Of course, there is evidence that a few European colonists used the diseases intentionally as biological weapons, which no amount of vaccine technology would prevent—and that is indeed genocide. But again, this was a small fraction of the total deaths.)

However, even with all those caveats, I hope we can all agree that colonialism and imperialism were morally wrong. No nation has the right to invade and conquer other nations; no one has the right to enslave people; no one has the right to kill people based on their culture or ethnicity.

My point is that it is entirely possible to recognize that and still appreciate that Western civilization has dramatically improved the standard of human life over the last few centuries. It simply doesn’t follow from the fact that British government and culture were more advanced and pluralistic that British soldiers can just go around taking over other people’s countries and planting their own flag (follow the link if you need some comic relief from this dark topic). That was the moral failing of colonialism; not that they thought their society was better—for in many ways it was—but that they thought that gave them the right to terrorize, slaughter, enslave, and conquer people.

Indeed, the “justification” of colonialism is a lot like that bizarre pseudo-utilitarianism I mentioned in my post on torture, where the mere presence of some benefit is taken to justify any possible action toward achieving that benefit. No, that’s not how morality works. You can’t justify unlimited evil by any good—it has to be a greater good, as in actually greater.

So let’s suppose that you do find yourself encountering another culture which is clearly more primitive than yours; their inferior technology results in them living in poverty and having very high rates of disease and death, especially among infants and children. What, if anything, are you justified in doing to intervene to improve their condition?

One idea would be to hold to the Prime Directive: No intervention, no sir, not ever. This is clearly what Gene Roddenberry thought of imperialism, hence why he built it into the Federation’s core principles.

But does that really make sense? Even as the Star Trek shows progressed, the writers kept coming up with situations where the Prime Directive really seemed like it should have an exception, and sometimes decided that the honorable crew of the Enterprise or Voyager really should intervene in this more primitive society to save them from some terrible fate. And I hope I'm not committing a Fictional Evidence Fallacy when I say that if your fictional universe, specifically designed not to let that happen, keeps making it happen, well… maybe it's something we should be considering.

What if people are dying of a terrible disease that you could easily cure? Should you really deny them access to your medicine to avoid intervening in their society?

What if the primitive culture is ruled by a horrible tyrant that you could easily depose with little or no bloodshed? Should you let him continue to rule with an iron fist?

What if the natives are engaged in slavery, or even their own brand of imperialism against other indigenous cultures? Can you fight imperialism with imperialism?

And then we have to ask, does it really matter whether their babies are being murdered by the tyrant or simply dying from malnutrition and infection? The babies are just as dead, aren’t they? Even if we say that being murdered by a tyrant is worse than dying of malnutrition, it can’t be that much worse, can it? Surely 10 babies dying of malnutrition is at least as bad as 1 baby being murdered?

But then it begins to seem like we have a duty to intervene, and moreover a duty that applies in almost every circumstance! If you are on opposite sides of the technology threshold where infant mortality drops from 30% to 1%, how can you justify not intervening?

I think the best answer here is to keep in mind the very large costs of intervention as well as the potentially large benefits. The answer sounds simple, but is actually perhaps the hardest possible answer to apply in practice: You must do a cost-benefit analysis. Furthermore, you must do it well. We can’t demand perfection, but it must actually be a serious good-faith effort to predict the consequences of different intervention policies.
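To make that a bit more concrete, here is a minimal sketch of what such an analysis looks like in Python. Every number in it is invented purely for illustration; a real analysis would require serious empirical estimates and sensitivity checks, and the dollar value placed on a life is itself contestable.

```python
# Toy cost-benefit comparison of intervention policies.
# Every number below is invented purely for illustration.

VALUE_PER_LIFE = 5_000_000  # a (contestable) dollar value per life saved

policies = {
    "do nothing":            {"lives_saved": 0,      "lives_lost": 0,      "cost": 0},
    "medicine and trade":    {"lives_saved": 50_000, "lives_lost": 100,    "cost": 100e6},
    "military intervention": {"lives_saved": 60_000, "lives_lost": 20_000, "cost": 10e9},
}

def net_benefit(p: dict) -> float:
    """Net benefit in dollars: lives saved minus lives lost, minus direct costs."""
    return (p["lives_saved"] - p["lives_lost"]) * VALUE_PER_LIFE - p["cost"]

# Rank the policies from best to worst by estimated net benefit.
for name, p in sorted(policies.items(), key=lambda kv: net_benefit(kv[1]), reverse=True):
    print(f"{name:>22}: net benefit = ${net_benefit(p):,.0f}")
```

The point of the exercise is not the particular numbers, but the discipline: you are forced to state your assumptions about lives saved, lives lost, and costs, and anyone can challenge them.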

We know that people tend to resist most outside interventions, especially if you have the intention of toppling their leaders (even if they are indeed tyrannical). Even the simple act of offering people vaccines could be met with resistance, as the native people might think you are poisoning them or somehow trying to control them. But in general, opening contact with gifts and trade is almost certainly going to trigger less hostility and therefore be more effective than going in guns blazing.

If you do use military force, it must be targeted at the particular leaders who are most harmful, and it must be designed to achieve swift, decisive victory with minimal collateral damage. (Basically I’m talking about just war theory.) If you really have such an advanced civilization, show it by exhibiting total technological dominance and minimizing the number of innocent people you kill. The NATO interventions in Kosovo and Libya mostly got this right. The Vietnam War and Iraq War got it totally wrong.

As you change their society, you should be prepared to bear most of the cost of transition; you are, after all, much richer than they are, and also the ones responsible for effecting the transition. You should not expect to see short-term gains for your own civilization, only long-term gains once their culture has advanced to a level near your own. You can't bear all the costs of course—transition is just painful, no matter what you do—but at least the fungible economic costs should be borne by you, not by the native population. Examples of doing this wrong include basically all the standard examples of exploitation colonialism: Africa, the Caribbean, South America. Examples of doing this right include West Germany and Japan after WW2, and South Korea after the Korean War—which is to say, the greatest economic successes in the history of the human race. That was humanity winning at development. Do this again everywhere and we will have not only ended world hunger, but achieved global prosperity.

What happens if we apply these principles to real-world colonialism? It does not fare well. Nor should it, as we’ve already established that most if not all real-world colonialism was morally wrong.

15th and 16th century colonialism fails immediately; it offers no benefit to speak of. Europe's technological superiority was enough to give them gunpowder, but not enough to drop their infant mortality rate. Maybe life was better in 16th century Spain than it was in the Aztec Empire, but honestly not by all that much; and life in the Iroquois Confederacy was in many ways better than life in 15th century England. (Though maybe that justifies some Iroquois imperialism, at least their "soft imperialism"?)

If these principles did justify any real-world imperialism—and I am not convinced that they do—it would only be much later imperialism, like the British Empire in the 19th and 20th centuries. And even then, it's not clear that the talk of "civilizing" people and "the White Man's Burden" was much more than rationalization, an attempt to give a humanitarian justification for what were really acts of self-interested economic exploitation. Even though India and South Africa are probably better off now than they were when the British first took them over, it's not at all clear that this was really the goal of the British government so much as a side effect. There are a lot of things the British could have done differently that would obviously have made them better off still—you know, like not implementing the precursors to apartheid, or making India a parliamentary democracy immediately instead of starting with the Raj and only conceding to democracy after decades of protest. What actually happened doesn't look like Britain cared nothing for improving the lives of people in India and South Africa (they did build a lot of schools and railroads, and sought to undermine slavery and the caste system), but it also doesn't look like that was their only goal; it was more like one goal among several, which also included the strategic and economic interests of Britain. It isn't enough that Britain was a better society, or even that they made South Africa and India better societies than they were; if the goal wasn't really about making people's lives better where you are intervening, it's clearly not justified intervention.

And that’s the relatively beneficent imperialism; the really horrific imperialists throughout history made only the barest pretense of spreading civilization and were clearly interested in nothing more than maximizing their own wealth and power. This is probably why we get things like the Prime Directive; we saw how bad it can get, and overreacted a little by saying that intervening in other cultures is always, always wrong, no matter what. It was only a slight overreaction—intervening in other cultures is usually wrong, and almost all historical examples of it were wrong—but it is still an overreaction. There are exceptional cases where intervening in another culture can be not only morally right but obligatory.

Indeed, one underappreciated consequence of colonialism and imperialism is that they have triggered a backlash against real good-faith efforts toward economic development. People in Africa, Asia, and Latin America see economists from the US and the UK (and most of the world’s top economists are in fact educated in the US or the UK) come in and tell them that they need to do this and that to restructure their society for greater prosperity, and they understandably ask: “Why should I trust you this time?” The last two or four or seven batches of people coming from the US and Europe to intervene in their countries exploited them or worse, so why is this time any different?

It is different, of course; the UNDP is not the East India Company, not by a long shot. Even for all their faults, the IMF isn't the East India Company either. Indeed, while these people largely come from the same places as the imperialists, and may be descended from them, they are in fact completely different people, and moral responsibility does not inherit across generations. While the suspicion is understandable, it is ultimately unjustified; whatever happened hundreds of years ago, this time most of us really are trying to help—and it's working.

What is progress? How far have we really come?

JDN 2457534

It is a controversy that has lasted throughout the ages: Is the world getting better? Is it getting worse? Or is it more or less staying the same, changing in ways that don’t really constitute improvements or detriments?

The most obvious and indisputable change in human society over the course of history has been the advancement of technology. At one extreme there are techno-utopians, who believe that technology will solve all the world’s problems and bring about a glorious future; at the other extreme are anarcho-primitivists, who maintain that civilization, technology, and industrialization were all grave mistakes, removing us from our natural state of peace and harmony.

I am not a techno-utopian—I do not believe that technology will solve all our problems—but I am much closer to that end of the scale. Technology has solved a lot of our problems, and will continue to solve a lot more. My aim in this post is to convince you that progress is real, that things really are, on the whole, getting better.

One of the more baffling arguments against progress comes from none other than Jared Diamond, the social scientist most famous for Guns, Germs and Steel (which oddly enough is mainly about horses and goats). About seven months before I was born, Diamond wrote an essay for Discover magazine arguing quite literally that agriculture—and by extension, civilization—was a mistake.

Diamond fortunately avoids the usual argument based solely on modern hunter-gatherers, which is a selection bias if ever I heard one. Instead his main argument seems to be that paleontological evidence shows an overall decrease in health around the same time as agriculture emerged. But that’s still an endogeneity problem, albeit a subtler one. Maybe agriculture emerged as a response to famine and disease. Or maybe they were both triggered by rising populations; higher populations increase disease risk, and are also basically impossible to sustain without agriculture.

I am similarly dubious of the claim that hunter-gatherers are always peaceful and egalitarian. It does seem to be the case that herders are more violent than other cultures, as they tend to form honor cultures that punish all slights with overwhelming violence. Even after the Industrial Revolution there were herder honor cultures—the Wild West. Yet as Steven Pinker keeps trying to tell people, the death rates due to homicide in all human cultures appear to have steadily declined for thousands of years.

I read an article just a few days ago on the Scientific American blog which included a claim so astonishingly nonsensical that it makes me wonder whether the authors can even do arithmetic or read statistical tables correctly:

As I keep reminding readers (see Further Reading), the evidence is overwhelming that war is a relatively recent cultural invention. War emerged toward the end of the Paleolithic era, and then only sporadically. A new study by Japanese researchers published in the Royal Society journal Biology Letters corroborates this view.

Six Japanese scholars led by Hisashi Nakao examined the remains of 2,582 hunter-gatherers who lived 12,000 to 2,800 years ago, during Japan’s so-called Jomon Period. The researchers found bashed-in skulls and other marks consistent with violent death on 23 skeletons, for a mortality rate of 0.89 percent.

That is supposed to be evidence that ancient hunter-gatherers were peaceful? The global homicide rate today is 62 homicides per million people per year. Using the worldwide life expectancy of 71 years (which is biasing against modern civilization because our life expectancy is longer), that means that the worldwide lifetime homicide rate is 4,400 homicides per million people, or 0.44%—that's less than half the homicide rate of these "peaceful" hunter-gatherers.

If you compare just against First World countries, the difference is even starker; let's use the US, which has the highest homicide rate in the First World. Our homicide rate is 38 homicides per million people per year, which at our life expectancy of 79 years is 3,000 homicides per million people, or an overall homicide rate of 0.3%, slightly more than a third of this "peaceful" ancient culture.

The most peaceful societies today—notably Japan, where these remains were found—have homicide rates as low as 3 per million people per year, which is a lifetime homicide rate of 0.02%, forty times smaller than their supposedly utopian ancestors. (Yes, all of Japan has fewer total homicides than Chicago. I'm sure it has nothing to do with their extremely strict gun control laws.)

Indeed, to get a modern homicide rate as high as these hunter-gatherers, you need to go to a country like Congo, Myanmar, or the Central African Republic. To get a substantially higher homicide rate, you essentially have to be in Latin America. Honduras, the murder capital of the world, has a lifetime homicide rate of about 6.7%.
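The arithmetic is simple enough to check in a few lines of Python, using the rates quoted above and the same life expectancies (71 and 79 years):

```python
# Lifetime homicide risk (%) = annual homicide rate x life expectancy.

def lifetime_rate_pct(per_million_per_year: float, life_expectancy: float) -> float:
    """Chance of dying by homicide over a whole lifetime, in percent."""
    return per_million_per_year / 1_000_000 * life_expectancy * 100

jomon = 23 / 2582 * 100            # ~0.89%: violent deaths in the Jomon sample
world = lifetime_rate_pct(62, 71)  # ~0.44%: less than half the Jomon rate
us    = lifetime_rate_pct(38, 79)  # ~0.30%: about a third of the Jomon rate
japan = lifetime_rate_pct(3, 71)   # ~0.02%: roughly forty times lower

print(f"Jomon {jomon:.2f}%, world {world:.2f}%, US {us:.2f}%, Japan {japan:.3f}%")
```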

Again, how did I figure these things out? By reading basic information from publicly-available statistical tables and then doing some simple arithmetic. Apparently these paleoanthropologists couldn’t be bothered to do that, or didn’t know how to do it correctly, before they started proclaiming that human nature is peaceful and civilization is the source of violence. After an oversight as egregious as that, it feels almost petty to note that a sample size of a few thousand people from one particular region and culture isn’t sufficient data to draw such sweeping judgments or speak of “overwhelming” evidence.

Of course, in order to decide whether progress is a real phenomenon, we need a clearer idea of what we mean by progress. It would be presumptuous to use per-capita GDP, though there can be absolutely no doubt that technology and capitalism do in fact raise per-capita GDP. If we measure by inequality, modern society clearly fares much worse (our top 1% share and Gini coefficient may be higher than Classical Rome's!), but that is clearly biased in the opposite direction, because the main way we have raised inequality is by raising the ceiling, not lowering the floor. Most of our really good measures (like the Human Development Index) only exist for the last few decades and can barely even be extrapolated back through the 20th century.

How about babies not dying? This is my preferred measure of a society’s value. It seems like something that should be totally uncontroversial: Babies dying is bad. All other things equal, a society is better if fewer babies die.

I suppose it doesn’t immediately follow that all things considered a society is better if fewer babies die; maybe the dying babies could be offset by some greater good. Perhaps a totalitarian society where no babies die is in fact worse than a free society in which a few babies die, or perhaps we should be prepared to accept some small amount of babies dying in order to save adults from poverty, or something like that. But without some really powerful overriding reason, babies not dying probably means your society is doing something right. (And since most ancient societies were in a state of universal poverty and quite frequently tyranny, these exceptions would only strengthen my case.)

Well, get ready for some high-yield truth bombs about infant mortality rates.

It’s hard to get good data for prehistoric cultures, but the best data we have says that infant mortality in ancient hunter-gatherer cultures was about 20-50%, with a best estimate around 30%. This is statistically indistinguishable from early agricultural societies.

Indeed, 30% seems to be the figure humanity had for most of history. Just shy of a third of all babies died for most of history.

In Medieval times, infant mortality was about 30%.

This same rate (fluctuating based on various plagues) persisted into the Enlightenment—Sweden has the best records, and their infant mortality rate in 1750 was about 30%.

The decline in infant mortality began slowly: During the Industrial Era, infant mortality was about 15% in isolated villages, but still as high as 40% in major cities due to high population densities with poor sanitation.

Even as recently as 1900, there were US cities with infant mortality rates as high as 30%, though the overall rate was more like 10%.

Most of the decline was recent and rapid: Just within the US since WW2, infant mortality fell from about 5.5% to 0.7%, though there remains a substantial disparity between White and Black people.

Globally, the infant mortality rate fell from 6.3% to 3.2% within my lifetime, and in Africa today, the region where it is worst, it is about 5.5%—or what it was in the US in the 1940s.

This extremely high rate of babies dying is the main reason ancient societies had such low life expectancies; actually, once people reached adulthood they lived to be about 70 years old, not much worse than we do today. So my multiplying everything by 71 actually isn't too far off even for ancient societies.

Let me make a graph for you here, of the approximate rate of babies dying over time from 10,000 BC to today:

[Figure: Infant mortality rate, 10,000 BC to today]

Let’s zoom in on the last 250 years, where the data is much more solid:

[Figure: Infant mortality rate, last 250 years]
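The underlying numbers are rough enough that you can reproduce the shape of these graphs yourself. Here is a minimal matplotlib sketch using the approximate figures quoted above; the anchor points come from the text, and everything in between is my own interpolation:

```python
import matplotlib.pyplot as plt

# Stylized infant mortality series (percent). The anchor points come from the
# rough figures quoted above; intermediate values are interpolated guesses.
years = [-10000, -5000, 0, 1000, 1500, 1750, 1800, 1850, 1900, 1950, 1990, 2020]
rates = [30, 30, 30, 30, 30, 30, 28, 22, 15, 10, 6.3, 3.2]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(years, rates)
ax1.set_title("Infant mortality, 10,000 BC to today")
ax1.set_xlabel("Year")
ax1.set_ylabel("Infant mortality (%)")

# Zoom in on the last 250 years, where the data are much more solid.
recent = [(y, r) for y, r in zip(years, rates) if y >= 1770]
ax2.plot([y for y, _ in recent], [r for _, r in recent])
ax2.set_title("Infant mortality, last 250 years")
ax2.set_xlabel("Year")
ax2.set_ylabel("Infant mortality (%)")

plt.tight_layout()
plt.show()
```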

I think you may notice something in these graphs. There is quite literally a turning point for humanity, a kink in the curve where we suddenly begin a rapid decline from an otherwise constant mortality rate.

That point occurs around or shortly before 1800—that is, it occurs at industrial capitalism. Adam Smith (not to mention Thomas Jefferson) was writing at just about the point in time when humanity made a sudden and unprecedented shift toward saving the lives of millions of babies.

So now, think about that the next time you are tempted to say that capitalism is an evil system that destroys the world; the evidence points to capitalism quite literally saving babies from dying.

How would it do so? Well, there’s that rising per-capita GDP we previously ignored, for one thing. But more important seems to be the way that industrialization and free markets support technological innovation, and in this case especially medical innovation—antibiotics and vaccines. Our higher rates of literacy and better communication, also a result of raised standard of living and improved technology, surely didn’t hurt. I’m not often in agreement with the Cato Institute, but they’re right about this one: Industrial capitalism is the chief source of human progress.

Billions of babies would have died but we saved them. So yes, I’m going to call that progress. Civilization, and in particular industrialization and free markets, have dramatically improved human life over the last few hundred years.

In a future post I’ll address one of the common retorts to this basically indisputable fact: “You’re making excuses for colonialism and imperialism!” No, I’m not. Saying that modern capitalism is a better system (not least because it saves babies) is not at all the same thing as saying that our ancestors were justified in using murder, slavery, and tyranny to force people into it.

Oppression is quantitative.

JDN 2457082 EDT 11:15.

Economists are often accused of assigning dollar values to everything, of being Oscar Wilde’s definition of a cynic, someone who knows the price of everything and the value of nothing. And there is more than a little truth to this, particularly among neoclassical economists; I was alarmed a few days ago to receive an email response from an economist that included the word ‘altruism’ in scare quotes as though this were somehow a problematic or unrealistic concept. (Actually, altruism is already formally modeled by biologists, and my claim that human beings are altruistic would be so uncontroversial among evolutionary biologists as to be considered trivial.)

But sometimes this accusation is based upon things economists do that are actually tremendously useful, even necessary to good policymaking: We make everything quantitative. Nothing is ever "yes" or "no" to an economist (sometimes even when it probably should be; the debate among economists in the 1960s over whether slavery is economically efficient does seem rather beside the point), but always more or less; never good or bad, but always better or worse. For example, as I discussed in my post on minimum wage, the mainstream position among economists is not that minimum wage is always harmful nor that minimum wage is always beneficial, but that minimum wage is a policy with costs and benefits, one that on average neither increases nor decreases unemployment. The mainstream position among economists about climate policy is that we should institute either a high carbon tax or a system of cap-and-trade permits; no economist I know wants us to either do nothing and let the market decide (a position most Republicans currently seem to take) or suddenly ban coal and oil (the latter is a strawman position I've heard environmentalists accused of, but never actually heard advocated; even Greenpeace wants to ban offshore drilling, not oil in general).

This makes people uncomfortable, I think, because they want moral issues to be simple. They want "good guys" who are always right and "bad guys" who are always wrong. (Speaking of strawman environmentalism, a good example of this is Captain Planet, in which no one ever seems to pollute the environment in order to help people or even in order to make money; no, they simply do it because they hate clean water and baby animals.) They don't want to talk about options that are more good or less bad; they want one option that is good and all other options that are bad.

This attitude tends to become infused with righteousness, such that anyone who disagrees is an agent of the enemy. Politics is the mind-killer, after all. If you acknowledge that there might be some downside to a policy you agree with, that’s like betraying your team.

But in reality, the failure to acknowledge downsides can lead to disaster. Problems that could have been prevented are instead ignored and denied. Getting the other side to recognize the downsides of their own policies might actually help you persuade them to your way of thinking. And appreciating that there is a continuum of possibilities that are better and worse in various ways to various degrees is what allows us to make the world a better place even as we know that it will never be perfect.

There is a common refrain you’ll hear from a lot of social justice activists which sounds really nice and egalitarian, but actually has the potential to completely undermine the entire project of social justice.

This is the idea that oppression can’t be measured quantitatively, and we shouldn’t try to compare different levels of oppression. The notion that some people are more oppressed than others is often derided as the Oppression Olympics. (Some use this term more narrowly to mean when a discussion is derailed by debate over who has it worse—but then the problem is really discussions being derailed, isn’t it?)

This sounds nice, because it means we don’t have to ask hard questions like, “Which is worse, sexism or racism?” or “Who is worse off, people with cancer or people with diabetes?” These are very difficult questions, and maybe they aren’t the right ones to ask—after all, there’s no reason to think that fighting racism and fighting sexism are mutually exclusive; they can in fact be complementary. Research into cancer only prevents us from doing research into diabetes if our total research budget is fixed—this is more than anything else an argument for increasing research budgets.

But we must not throw out the baby with the bathwater. Oppression is quantitative. Some kinds of oppression are clearly worse than others.

Why is this important? Because otherwise you can’t measure progress. If you have a strictly qualitative notion of oppression where it’s black-and-white, on-or-off, oppressed-or-not, then we haven’t made any progress on just about any kind of oppression. There is still racism, there is still sexism, there is still homophobia, there is still religious discrimination. Maybe these things will always exist to some extent. This makes the fight for social justice a hopeless Sisyphean task.

But in fact, that’s not true at all. We’ve made enormous progress. Unbelievably fast progress. Mind-boggling progress. For hundreds of millennia humanity made almost no progress at all, and then in the last few centuries we have suddenly leapt toward justice.

Sexism used to mean that women couldn’t own property, they couldn’t vote, they could be abused and raped with impunity—or even beaten or killed for being raped (which Saudi Arabia still does by the way). Now sexism just means that women aren’t paid as well, are underrepresented in positions of power like Congress and Fortune 500 CEOs, and they are still sometimes sexually harassed or raped—but when men are caught doing this they go to prison for years. This change happened in only about 100 years. That’s fantastic.

Racism used to mean that Black people were literally property to be bought and sold. They were slaves. They had no rights at all, they were treated like animals. They were frequently beaten to death. Now they can vote, hold office—one is President!—and racism means that our culture systematically discriminates against them, particularly in the legal system. Racism used to mean you could be lynched; now it just means that it’s a bit harder to get a job and the cops will sometimes harass you. This took only about 200 years. That’s amazing.

Homophobia used to mean that gay people were criminals. We could be sent to prison or even executed for the crime of making love in the wrong way. If we were beaten or murdered, it was our fault for being faggots. Now, homophobia means that we can’t get married in some states (and fewer all the time!), we’re depicted on TV in embarrassing stereotypes, and a lot of people say bigoted things about us. This has only taken about 50 years! That’s astonishing.

And above all, the most extreme example: Religious discrimination used to mean you could be burned at the stake for not being Catholic. It used to mean—and in some countries still does mean—that it’s illegal to believe in certain religions. Now, it means that Muslims are stereotyped because, well, to be frank, there are some really scary things about Muslim culture and some really scary people who are Muslim leaders. (Personally, I think Muslims should be more upset about Ahmadinejad and Al Qaeda than they are about being profiled in airports.) It means that we atheists are annoyed by “In God We Trust”, but we’re no longer burned at the stake. This has taken longer, more like 500 years. But even though it took a long time, I’m going to go out on a limb and say that this progress is wonderful.

Obviously, there’s a lot more progress remaining to be made on all these issues, and others—like economic inequality, ableism, nationalism, and animal rights—but the point is that we have made a lot of progress already. Things are better than they used to be—a lot betterand keeping this in mind will help us preserve the hope and dedication necessary to make things even better still.

If you think that oppression is either-or, on-or-off, you can’t celebrate this progress, and as a result the whole fight seems hopeless. Why bother, when it’s always been on, and will probably never be off? But we started with oppression that was absolutely horrific, and now it’s considerably milder. That’s real progress. At least within the First World we have gone from 90% oppressed to 25% oppressed, and we can bring it down to 10% or 1% or 0.1% or even 0.01%. Those aren’t just numbers, those are the lives of millions of people. As democracy spreads worldwide and poverty is eradicated, oppression declines. Step by step, social changes are made, whether by protest marches or forward-thinking politicians or even by lawyers and lobbyists (they aren’t all corrupt).

And indeed, a four-year-old Black girl with a mental disability living in Ghana whose entire family’s income is $3 a day is more oppressed than I am, and not only do I have no qualms about saying that, it would feel deeply unseemly to deny it. I am not totally unoppressed—I am a bisexual atheist with chronic migraines and depression in a country that is suspicious of atheists, systematically discriminates against LGBT people, and does not make proper accommodations for chronic disorders, particularly mental ones. But I am far less oppressed, and that little girl (she does exist, though I know not her name) could be made much less oppressed than she is even by relatively simple interventions (like a basic income). In order to make her fully and totally unoppressed, we would need such a radical restructuring of human society that I honestly can’t really imagine what it would look like. Maybe something like The Culture? Even then as Iain Banks imagines it, there is inequality between those within The Culture and those outside it, and there have been wars like the Idiran-Culture War which killed billions, and among those trillions of people on thousands of vast orbital habitats someone, somewhere is probably making a speciesist remark. Yet I can state unequivocally that life in The Culture would be better than my life here now, which is better than the life of that poor disabled girl in Ghana.

To be fair, we can’t actually put a precise number on it—though many economists try, and one of my goals is to convince them to improve their methods so that they stop using willingness-to-pay and instead try to actually measure utility by something like QALY. A precise number would help, actually—it would allow us to do cost-benefit analyses to decide where to focus our efforts. But while we don’t need a precise number to tell when we are making progress, we do need to acknowledge that there are degrees of oppression, some worse than others.

Oppression is quantitative. And our goal should be minimizing that quantity.