The upsides of life extension

Dec 16 JDN 2458469

If living is good, then living longer is better.

This may seem rather obvious, but it’s something we often lose sight of when discussing the consequences of medical technology for extending life. Indeed, it seems so obvious that living longer must be better that we go out of our way to find ways that it is actually worse.

Even from a quick search I was able to find half a dozen popular media articles about life extension, and not one of them focused primarily on the benefits. The empirical literature is better, asking specific, empirically testable questions like “How does life expectancy relate to retirement age?” and “How is lifespan related to population and income growth?” and “What effect will longer lifespans have on pension systems?” Though even there I found essays in medical journals complaining that we have extended “quantity” of life without “quality” (yet by definition, if you are using QALYs, quality-adjusted life years, to assess the cost-effectiveness of a medical intervention, that’s already taken into account).

But still I think somewhere along the way we have forgotten just how good this is. We may not even be able to imagine the benefits of extending people’s lives to 200 or 500 or 1000 years.

To really get some perspective on this, I want you to imagine what a similar conversation must have looked like in roughly the year 1800, at the start of the Industrial Revolution, when industrial capitalism came along and made babies finally stop dying.

There was no mass media back then (not enough literacy), but imagine what it would have been like if there had been, or imagine what conversations about the future between elites must have been like.

And we do actually have at least one example of an elite author lamenting the increase in lifespan: His name was Thomas Malthus.

The Malthusian argument was seductive then, and it remains seductive today: If you improve medicine and food production, you will increase population. But if you increase population, you will eventually outstrip those gains in medicine and food and return once more to disease and starvation, only now with more mouths to feed.

Basically any modern discussion of “overpopulation” has this same flavor (by the way, serious environmentalists don’t use that concept; they’re focused on reducing pollution and carbon emissions, not people). Why bother helping poor countries, when they’re just going to double their population and need twice the help?

Well, as a matter of fact, Malthus was wrong. He was not just wrong: He was backwards. Increased population has come with increased standard of living around the world, as it allowed for more trade, greater specialization, and the application of economies of scale. You can’t build a retail market with a hunter-gatherer tribe. You can’t build an auto industry with a single city-state. You can’t build a space program with a population of 1 million. Having more people has allowed each person to do and have more than they could before.

Current population projections suggest world population will stabilize between 11 and 12 billion. Crucially, this does not factor in any kind of radical life extension technology. The projections allow for moderate increases in lifespan, but not people living much past 100.

Would increased lifespan lead to increased population? Probably, yes. I can’t be certain, because I can very easily imagine people deciding to put off having kids if they can reasonably expect to live 200 years and never become infertile.

I’m actually more worried about the unequal distribution of offspring: People who don’t believe in contraception will be able to have an awful lot of kids during that time, which could be bad for both the kids and society as a whole. We may need to impose regulations on reproduction similar to (but hopefully less draconian than) the One-Child policy imposed in China.

I think the most sensible way to impose the right incentives while still preserving civil liberties is to make it a tax: The first kid gets a subsidy, to help care for them. The second kid is revenue-neutral; we tax you but you get it back as benefits for the child. (Why not just let them keep the money? One of the few places where I think government paternalism is justifiable is protection against abusive or neglectful parents.) The third and later kids result in progressively higher taxes. We always feed the kids on government money, but their parents are going to end up quite poor if they don’t learn how to use contraceptives. (And of course, contraceptives will be made available for free without a prescription.)
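To make the incentive structure concrete, here is a toy sketch in Python. Every number in it (the subsidy amount, the doubling schedule) is invented purely for illustration; the paragraph above only fixes the ordering: subsidy for the first child, revenue-neutral for the second, progressively higher taxes after that.

```python
# Toy model of the proposed child tax/subsidy schedule.
# All dollar amounts and the doubling rule are hypothetical,
# invented purely to illustrate the shape of the incentives.

def child_tax(n_children: int) -> float:
    """Net annual tax for a family with n children (negative = net subsidy)."""
    total = 0.0
    if n_children >= 1:
        total -= 2_000.0  # first child: subsidy to help care for them
    # Second child: taxed but fully returned as benefits for the child,
    # so it is revenue-neutral and contributes nothing to the net total.
    for k in range(3, n_children + 1):
        total += 1_000.0 * 2 ** (k - 3)  # third and later: progressively higher
    return total

for n in range(6):
    print(n, child_tax(n))
```

The key design property is that the marginal cost of each additional child past the second keeps rising, while the first child is always subsidized.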

But suppose that, yes, population does greatly increase as a result of longer lifespans. This is not a doomsday scenario. In fact, in itself, this is a good thing. If life is worth living, more lives are better.

The question becomes how we ensure that all these people live good lives; but technology will make that easier too. There seems to be an underlying assumption that increased lifespan won’t come with improved health and vitality; but this is already not true. 60 is the new 50: People who are 60 years old today live as well as people who were 50 years old just a generation ago.

And in fact, radical life extension will be an entirely different mechanism. We’re not talking about replacing a hip here, a kidney there; we’re talking about replenishing your chromosomal telomeres, repairing your cells at the molecular level, and revitalizing the content of your blood. The goal of life extension technology isn’t to make you technically alive but hooked up to machines for 200 years; it’s to make you young again for 200 years. The goal is a world where centenarians are playing tennis with young adults fresh out of college and you have trouble telling which is which.

There is another inequality concern here as well, which is cost. Especially in the US—actually almost only in the US, since most of the world has socialized medicine—where medicine is privatized and depends on your personal budget, I can easily imagine a world where the rich live to 200 and the poor die at 60. (The forgettable Justin Timberlake film In Time started with this excellent premise and then went precisely nowhere with it. Oddly, the Deus Ex games seem to have considered every consequence of mixing capitalism with human augmentation except this one.) We should be proactively taking steps to prevent this nightmare scenario by focusing on making healthcare provision equitable and universal. Even if this slows down the development of the technology a little bit, it’ll be worth it to make sure that when it does arrive, it will arrive for everyone.

We really don’t know what the world will look like when people can live 200 years or more. Yes, there will be challenges that come from the transition; honestly I’m most worried about people keeping alive the ideas they grew up with two centuries prior. Imagine talking politics with Abraham Lincoln: He was viewed as extremely progressive for his time, even radical—but he was still a big-time racist.

The good news there is that people are not actually as set in their ways as many believe: While the huge surge in pro-LGBT attitudes did come from younger generations, support for LGBT rights has been gradually creeping up among older generations too. Perhaps if Abraham Lincoln had lived through the Great Depression, the World Wars, and the Civil Rights Movement he’d be a very different person than he was in 1865. Longer lifespans will mean people live through more social change; that’s something we’re going to need to cope with.

And of course violent death becomes even more terrifying when aging is out of the picture: It’s tragic enough when a 20-year-old dies in a car accident today and we imagine the 60 years they lost—but what if it was 180 years or 480 years instead? But violent death in basically all its forms is declining around the world.

But again, I really want to emphasize this: Think about how good this is. Imagine meeting your great-grandmother—and not just meeting her, not just having some fleeting contact you half-remember from when you were four years old or something, but getting to know her, talking with her as an adult, going to the same movies, reading the same books. Imagine the converse: Knowing your great-grandchildren, watching them grow up and have kids of their own, your great-great-grandchildren. Imagine the world that we could build if people stopped dying all the time.

And if that doesn’t convince you, I highly recommend Nick Bostrom’s “Fable of the Dragon-Tyrant”.

Stop making excuses for the dragon.

Fighting the zero-sum paradigm

Dec 2 JDN 2458455

It should be obvious at this point that there are deep, perhaps even fundamental, divides between the attitudes and beliefs of different political factions. It can be very difficult to even understand, much less sympathize with, the concerns of people who are racist, misogynistic, homophobic, xenophobic, and authoritarian.
But at the end of the day we still have to live in the same country as these people, so we’d better try to understand how they think. And maybe, just maybe, that understanding will help us to change them.

There is one fundamental belief system that I believe underlies almost all forms of extremism. Right now right-wing extremism is the major threat to global democracy, but left-wing extremism subscribes to the same core paradigm (consistent with Horseshoe Theory).

I think the best term for this is the zero-sum paradigm. The idea is quite simple: There is a certain amount of valuable “stuff” (money, goods, land, status, happiness) in the world, and the only political question is who gets how much.

Thus, any improvement in anyone’s life must, necessarily, come at someone else’s expense. If I become richer, you become poorer. If I become stronger, you become weaker. Any improvement in my standard of living is a threat to your status.

If this belief were true, it would justify, or at least rationalize, all sorts of destructive behavior: Any harm I can inflict upon someone else will yield a benefit for me, by some fundamental conservation law of the universe.

Viewed in this light, beliefs like patriarchy and White supremacy suddenly become much more comprehensible: Why would you want to spend so much effort hurting women and Black people? Because, by the fundamental law of zero-sum, any harm to women is a benefit to men, and any harm to Black people is a benefit to White people. The world is made of “teams”, and you are fighting for your own against all the others.

And I can even see why such an attitude is seductive: It’s simple and easy to understand. And there are many circumstances where it can be approximately true.
When you are bargaining with your boss over a wage, one dollar more for you is one dollar less for your boss.
When your factory outsources production to China, one more job for China is one less job for you.

When we vote for President, one more vote for the Democrats is one less vote for the Republicans.

But of course the world is not actually zero-sum. Both you and your boss would be worse off if your job were to disappear; they need your work and you need their money. For every job that is outsourced to China, another job is created in the United States. And democracy itself is such a profound public good that it basically overwhelms all others.

In fact, it is precisely when a system is running well that the zero-sum paradigm becomes closest to true. In the space of all possible allocations, it is the efficient ones that behave in something like a zero-sum way, because when the system is efficient, we are already producing as much as we can.

This may be part of why populist extremism always seems to assert itself during periods of global prosperity, as in the 1920s and today: It is precisely when the world is running at its full capacity that it feels most like someone else’s gain must come at your loss.

Yet if we live according to the zero-sum paradigm, we will rapidly destroy the prosperity that made that paradigm seem plausible. A trade war between the US and China would put millions out of work in both countries. A real war with conventional weapons would kill millions. A nuclear war would kill billions.

This is what we must convey: We must show people just how good things are right now.

This is not an easy task; when people want to believe the world is falling apart, they can very easily find excuses to do so. You can point to the statistics showing a global decline in homicide, but one dramatic shooting on the TV news will wipe that all away. You can show the worldwide rise in real incomes across the board, but that won’t console someone who just lost their job and blames outsourcing or immigrants.

Indeed, many people will be offended by the attempt—the mere suggestion that the world is actually in very good shape and overall getting better will be perceived as an attempt to deny or dismiss the problems and injustices that still exist.

I encounter this especially from the left: Simply pointing out the objective fact that the wealth gap between White and Black households is slowly closing is often taken as a claim that racism no longer exists or doesn’t matter. Congratulating the meteoric rise in women’s empowerment around the world is often paradoxically viewed as dismissing feminism instead of lauding it.

I think the best case against progress can be made with regard to global climate change: Carbon emissions are not falling nearly fast enough, and the world is getting closer to the brink of truly catastrophic ecological damage. Yet even here the zero-sum paradigm is clearly holding us back; workers in fossil-fuel industries think that the only way to reduce carbon emissions is to make their families suffer, but that’s simply not true. We can make them better off too.

Talking about injustice feels righteous. Talking about progress doesn’t. Yet I think what the world needs most right now—the one thing that might actually pull us back from the brink of fascism or even war—is people talking about progress.

If people think that the world is full of failure and suffering and injustice, they will want to tear down the whole system and start over with something else. In a world that is largely democratic, that very likely means switching to authoritarianism. If people think that this is as bad as it gets, they will be willing to accept or even instigate violence in order to change to almost anything else.

But if people realize that in fact the world is full of success and prosperity and progress, that things are right now quite literally better in almost every way for almost every person in almost every country than they were a hundred—or even fifty—years ago, they will not be so eager to tear the system down and start anew. Centrism is often mocked (partly because it is confused with false equivalence), but in a world where life is improving this quickly for this many people, “stay the course” sounds awfully attractive to me.
That doesn’t mean we should ignore the real problems and injustices that still exist, of course. There is still a great deal of progress left to be made. But I believe we are more likely to make progress if we acknowledge and seek to continue the progress we have already made, than if we allow ourselves to fall into despair as if that progress did not exist.

The “productivity paradox”

 

Dec 10, JDN 2458098

Take a look at this graph of manufacturing output per worker-hour:

[Figure: US manufacturing output per worker-hour]

From 1988 to 2008, it was growing at a steady pace. In 2008 and 2009 it took a dip due to the Great Recession; no big surprise there. But then since 2012 it has been… completely flat. If we take this graph at face value, it would imply that manufacturing workers today can produce no more output than workers five years ago, and indeed only about 10% more than workers a decade ago. Whereas, a worker in 2008 was producing over 60% more than a worker in 1998, who was producing over 40% more than a worker in 1988.

Many economists call this the “productivity paradox”, and use it to argue that we don’t really need to worry about robots taking all our jobs any time soon. I think this view is mistaken.

The way we measure productivity is fundamentally wrongheaded, and is probably the sole cause of this “paradox”.

First of all, we use total hours scheduled to work, not total hours actually doing productive work. This is obviously much, much easier to measure, which is why we do it. But if you think for a moment about how the 40-hour workweek norm is going to clash with rapidly rising real productivity, it becomes apparent why this isn’t going to be a good measure.
When a worker finds a way to get done in 10 hours what used to take 40 hours, what does that worker’s boss do? Send them home after 10 hours because the job is done? Give them a bonus for their creativity? Hardly. That would be far too rational. They assign them more work, while paying them exactly the same. Recognizing this, what is such a worker to do? The obvious answer is to pretend to work the other 30 hours, while in fact doing something more pleasant than working.
And indeed, so-called “worker distraction” has been rapidly increasing. People are right to blame smartphones, I suppose, but not for the reasons they think. It’s not that smartphones are inherently distracting devices. It’s that smartphones are the cutting edge of a technological revolution that has made most of our work time unnecessary; combined with our fundamentally defective management norms, that creates an overwhelming incentive to waste time at work rather than get drenched in extra tasks for no extra pay.

That would probably be enough to explain the “paradox” by itself, but there is a deeper reason that in the long run is even stronger. It has to do with the way we measure “output”.

It might surprise you to learn that economists almost never consider output in terms of the actual number of cars produced, buildings constructed, songs written, or software packages developed. The standard measures of output are all in the form of so-called “real GDP”; that is, the dollar value of output produced.

They do adjust for indexes of inflation, but as I’ll show in a moment this still creates a fundamentally biased picture of the productivity dynamics.

Consider a world with only three industries: Housing, Food, and Music.

Productivity in Housing doesn’t change at all. Producing a house cost 10,000 worker-hours in 1950, and cost 10,000 worker-hours in 2000. Nominal price of houses has rapidly increased, from $10,000 in 1950 to $200,000 in 2000.

Productivity in Food rises moderately fast. Producing 1,000 meals cost 1,000 worker-hours in 1950, and cost 100 worker-hours in 2000. Nominal price of food has increased slowly, from $1,000 per 1,000 meals in 1950 to $5,000 per 1,000 meals in 2000.

Productivity in Music rises extremely fast. Producing 1,000 performances cost 10,000 worker-hours in 1950, and cost 1 worker-hour in 2000. Nominal price of music has collapsed, from $100,000 per 1,000 performances in 1950 to $1,000 per 1,000 performances in 2000.

This is of course an extremely stylized version of what has actually happened: Housing has gotten way more expensive, food has stayed about the same in price while farm employment has plummeted, and the rise of digital music has brought about a new Renaissance in actual music production and listening while revenue for the music industry has collapsed. There is a very nice Vox article on the “productivity paradox” showing a graph of how prices have changed in different industries.

How would productivity appear in the world I’ve just described, by standard measures? Well, to say that, I actually need to say something about how consumers substitute across industries. But I think I’ll be forgiven in this case for saying that there is no substitution whatsoever; you can’t eat music or live in a burrito. There’s also a clear Maslow hierarchy here: They say that man cannot live by bread alone, but I think living by Led Zeppelin alone is even harder.

Consumers will therefore choose like this: Over 10 years, buy 1 house, 10,000 meals, and as many performances as you can afford after that. Further suppose that each person had $2,100 per year to spend in 1940-1950, and $50,000 per year to spend in 1990-2000. (This is approximately true for actual nominal US GDP per capita.)

1940-1950:
Total funds: $21,000

1 house = $10,000

10,000 meals = $10,000

Remaining funds: $1,000

Performances purchased: 10

1990-2000:

Total funds: $500,000

1 house = $200,000

10,000 meals = $50,000

Remaining funds: $250,000

Performances purchased: 250,000

(Do you really listen to this much music? 250,000 performances over 10 years is about 70 songs per day. If each song is 3 minutes, that’s only about 3.5 hours per day. If you listen to music while you work or watch a couple of movies with musical scores, yes, you really do listen to this much music! The unrealistic part is assuming that people in 1950 listen to so little, given that radio was already widespread. But if you think of music as standing in for all media, the general trend of being able to consume vastly more media in the digital age is clearly correct.)

Now consider how we would compute a price index for each time period. We would construct a basket of goods and determine the price of that basket in each time period, then adjust prices until that basket has a constant price.

Here, the basket would probably be what people bought in 1940-1950: 1 house, 10,000 meals, and 10 music performances.

In 1950, this basket cost $10,000 + $10,000 + $1,000 = $21,000.

In 2000, this basket cost $200,000 + $50,000 + $10 = $250,010.

Our inflation adjustment is therefore $250,010/$21,000, or about 12 to 1. That would put real per-capita GDP in 1950 at about $25,000 per year in 2000 dollars, within a factor of two of the actual estimate of real per-capita GDP in 1950.

So, what would we say about productivity?

Sales of houses in 1950 were 1 per person, costing 10,000 worker hours.

Sales of food in 1950 were 10,000 meals per person, costing 10,000 worker hours.

Sales of music in 1950 were 10 performances per person, costing 100 worker hours.

Worker hours per person are therefore 20,100.

Sales of houses in 2000 were 1 per person, costing 10,000 worker hours.

Sales of food in 2000 were 10,000 meals per person, costing 1,000 worker hours.

Sales of music in 2000 were 250,000 performances per person, costing 250 worker hours.

Worker hours per person are therefore 11,250.

Therefore we would estimate that productivity rose from $250,010/20,100 ≈ $12.44 per worker-hour to $500,000/11,250 ≈ $44.44 per worker-hour (both in 2000 dollars). This is an annual growth rate of about 2.6%, which is at least in the right ballpark for actual estimates of measured productivity growth. For such a highly stylized model, that is not bad at all.

But think about how much actual productivity rose, at least in the industries where it did.

We produce 10 times as much food per worker hour after 50 years, which is an annual growth rate of 4.7%, nearly twice the measured growth rate.

We produce 10,000 times as much music per worker hour after 50 years, which is an annual growth rate of over 20%, nearly eight times the measured growth rate.
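Because the hand arithmetic has several moving parts, it is worth recomputing the model directly from its primitives, the prices and worker-hour requirements stated at the start. The following Python sketch is only illustrative; the dictionaries and helper functions are my own framing of the stylized model, not anything from the official statistics:

```python
# Stylized three-industry model: recompute the productivity numbers
# directly from the stated primitives (prices and worker-hours per unit).
# All figures are the model's, not real data.

# Per-unit nominal prices ($) and labor requirements (worker-hours)
prices_1950 = {"house": 10_000, "meal": 1.0, "performance": 100.0}
prices_2000 = {"house": 200_000, "meal": 5.0, "performance": 1.0}
hours_1950 = {"house": 10_000, "meal": 1.0, "performance": 10.0}
hours_2000 = {"house": 10_000, "meal": 0.1, "performance": 0.001}

def consume(income, prices):
    """1 house and 10,000 meals first; all remaining income goes to music."""
    basket = {"house": 1, "meal": 10_000}
    spent = sum(prices[g] * q for g, q in basket.items())
    basket["performance"] = (income - spent) / prices["performance"]
    return basket

def cost(basket, prices):
    return sum(prices[g] * q for g, q in basket.items())

def labor(basket, hours):
    return sum(hours[g] * q for g, q in basket.items())

decade_income_1950, decade_income_2000 = 21_000, 500_000
c50 = consume(decade_income_1950, prices_1950)
c00 = consume(decade_income_2000, prices_2000)

# Laspeyres-style deflator: price the 1950 basket in both years
deflator = cost(c50, prices_2000) / cost(c50, prices_1950)

# Measured real output (2000 dollars) per worker-hour in each period
prod_1950 = decade_income_1950 * deflator / labor(c50, hours_1950)
prod_2000 = decade_income_2000 / labor(c00, hours_2000)
measured_growth = (prod_2000 / prod_1950) ** (1 / 50) - 1

# True physical productivity growth, industry by industry
food_growth = (hours_1950["meal"] / hours_2000["meal"]) ** (1 / 50) - 1
music_growth = (hours_1950["performance"] / hours_2000["performance"]) ** (1 / 50) - 1

print(f"1950 performances bought: {c50['performance']:.0f}")
print(f"Deflator (2000/1950): {deflator:.1f}")
print(f"Measured productivity growth: {measured_growth:.1%}/yr")
print(f"True food productivity growth: {food_growth:.1%}/yr")
print(f"True music productivity growth: {music_growth:.1%}/yr")
```

Notice what the deflator does: because the 1950 basket pins music at its tiny 1950 quantity, the hundredfold collapse in the price of music barely registers in the index, which is exactly the bias being described.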

Moreover, should music producers be worried about losing their jobs to automation? Absolutely! People simply won’t be able to listen to much more music than they already are, so any continued increases in music productivity are going to make musicians lose jobs. And that was already allowing for music consumption to increase by a factor of 25,000.

Of course, the real world has a lot more industries than this, and everything is a lot more complicated. We do actually substitute across some of those industries, unlike in this model.

But I hope I’ve gotten at least the basic point across that when things become drastically cheaper as technological progress often does, simply adjusting for inflation doesn’t do the job. One dollar of music today isn’t the same thing as one dollar of music a century ago, even if you inflation-adjust their dollars to match ours. We ought to be measuring in hours of music; an hour of music is much the same thing as an hour of music a century ago.

And likewise, that secretary/weather forecaster/news reporter/accountant/musician/filmmaker in your pocket that you call a “smartphone” really ought to be counted as more than just a simple inflation adjustment on its market price. The fact that it is mind-bogglingly cheaper to get these services than it used to be is the technological progress we care about; it’s not some statistical artifact to be removed by proper measurement.

Combine that with actually measuring the hours of real, productive work, and I think you’ll find that productivity is still rising quite rapidly, and that we should still be worried about what automation is going to do to our jobs.

Think of this as a moral recession

August 27, JDN 2457993

The Great Depression was, without doubt, the worst macroeconomic event of the last 200 years. Over 30 million people became unemployed. Unemployment exceeded 20%. Standard of living fell by as much as a third in the United States. Political unrest spread across the world, and the collapsing government of Germany ultimately became the Third Reich and triggered the Second World War. If we ignore the world war, however, the effect on mortality rates was surprisingly small. (“Other than that, Mrs. Lincoln, how was the play?”)

And yet, how long do you suppose it took for economic growth to repair the damage? 80 years? 50 years? 30 years? 20 years? Try ten to fifteen. By 1940, the US, Germany, and Japan all had a per-capita GDP at least as high as in 1930. By 1945, every country in Europe had a per-capita GDP at least as high as they did before the Great Depression.

The moral of this story is this: Recessions are bad, and can have far-reaching consequences; but ultimately what really matters in the long run is growth.

Assuming the same growth otherwise, a country that had a recession as large as the Great Depression would be about 70% as rich as one that didn’t.

But over 100 years, a country that experienced 3% growth instead of 2% growth would be over two and a half times richer.

Therefore, in terms of standard of living only, if you were given the choice between having a Great Depression but otherwise growing at 3%, and having no recessions but growing at 2%, your grandchildren will be better off if you chose the former. (Of course, given the possibility of political unrest or even war, the depression could very well end up worse.)
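Both claims above are easy to check by direct compounding; here is a quick sketch (the 70% figure is the one-time level shift assumed above):

```python
# Compare a one-time Great-Depression-sized loss against a persistent
# one-point difference in the growth rate, compounded over a century.

years = 100
depression_level_shift = 0.70   # one-off hit: the country ends up ~70% as rich

fast = 1.03 ** years            # 3% annual growth
slow = 1.02 ** years            # 2% annual growth

print(f"3% vs 2% over {years} years: {fast / slow:.2f}x richer")  # ~2.65x
print(f"Depression but 3% growth, vs smooth 2%: {depression_level_shift * fast / slow:.2f}x")
```

Even after taking the one-time 30% hit, the faster-growing country ends the century well ahead, which is the whole point of the comparison.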

With that in mind, I want you to think of the last few years—and especially the last few months—as a moral recession. Donald Trump being President of the United States is clearly a step backward for human civilization, and it seems to have breathed new life into some of the worst ideologies our society has ever harbored, from extreme misogyny, homophobia, right-wing nationalism, and White supremacism to outright Neo-Nazism. When one of the central debates in our public discourse is what level of violence is justifiable against Nazis under what circumstances, something has gone terribly, terribly wrong.

But much as recessions are overwhelmed in the long run by economic growth, there is reason to be confident that this moral backslide is temporary and will be similarly overwhelmed by humanity’s long-run moral progress.

What moral progress, you ask? Let’s remind ourselves.

Just 100 years ago, women could not vote in the United States.

160 years ago, slavery was legal in 15 US states.

Just 3 years ago, same-sex marriage was illegal in 14 US states. Yes, you read that number correctly. I said three. There are gay couples graduating high school and getting married now who as freshmen didn’t think they would be allowed to get married.

That’s just the United States. What about the rest of the world?

100 years ago, almost all of the world’s countries were dictatorships. Today, half of the world’s countries are democracies. Indeed, thanks to India, the majority of the world’s population now lives under democracy.

35 years ago, the Soviet Union still ruled most of Eastern Europe and Northern Asia with an iron fist (or should I say “curtain”?).

30 years ago, the number of human beings in extreme poverty—note I said number, not just rate; the world population was two-thirds what it is today—was twice as large as it is today.

Over the last 65 years, the global death rate due to war has fallen from 250 per million to just 10 per million.

The global literacy rate has risen from 40% to 80% in just 50 years.

World life expectancy has increased by 6 years in just the last 20 years.

We are living in a golden age. Do not forget that.

Indeed, if there is anything that could destroy all these astonishing achievements, I think it would be our failure to appreciate them.

If you listen to what these Neo-Nazi White supremacists say about their grievances, they sound like the spoiled children of millionaires (I mean, they elected one President, after all). They are outraged because they only get 90% of what they want instead of 100%—or even outraged not because they didn’t get what they wanted but because someone else they don’t know also did.

If you listen to the far left, their complaints don’t make much more sense. If you didn’t actually know any statistics, you’d think that life is just as bad for Black people in America today as it was under Jim Crow or even slavery. Well, it’s not even close. I’m not saying racism is gone; it’s definitely still here. But the civil rights movement has made absolutely enormous strides, from banning school segregation and housing redlining to reforming prison sentences and instituting affirmative action programs. Simply the fact that “racist” is now widely considered a terrible thing to be is a major accomplishment in itself. A typical Black person today, despite having only about 60% of the income of a typical White person, is still richer than a typical White person was just 50 years ago. While the 71% high school completion rate Black people currently have may not sound great, it’s much higher than the 50% rate that the whole US population had as recently as 1950.

Yes, there are some things that aren’t going very well right now. The two that I think are most important are climate change and income inequality. As both the global mean temperature anomaly and the world top 1% income share continue to rise, millions of people will suffer and die needlessly from diseases of poverty and natural disasters.

And of course if Neo-Nazis manage to take hold of the US government and try to repeat the Third Reich, that could be literally the worst thing that ever happened. If it triggered a nuclear war, it unquestionably would be literally the worst thing that ever happened. Both these events are unlikely—but not nearly as unlikely as they should be. (FiveThirtyEight interviewed several nuclear experts who estimated the probability of imminent nuclear war at a horrifying five percent.) So I certainly don’t want to make anyone complacent about these very grave problems.

But I worry also that we go too far the other direction, and fail to celebrate the truly amazing progress humanity has made thus far. We hear so often that we are treading water, getting nowhere, or even falling backward, that we begin to feel as though the fight for moral progress is utterly hopeless. If all these centuries of fighting for justice really had gotten us nowhere, the only sensible thing to do at this point would be to give up. But on the contrary, we have made enormous progress in an incredibly short period of time. We are on the verge of finally winning this fight. The last thing we want to do now is give up.

Zootopia taught us constructive responses to bigotry

Sep 10, JDN 2457642

Zootopia wasn’t just a good movie; Zootopia was a great movie. I’m not just talking about its grosses (over $1 billion worldwide) or its ratings (8.1 on IMDB; 98% from critics and 93% from viewers on Rotten Tomatoes; 78 from critics and 8.8 from users on Metacritic). No, I’m talking about its impact on the world. This movie isn’t just a fun and adorable children’s movie (though it is that). This movie is a work of art that could have profound positive effects on our society.

Why? Because Zootopia is about bigotry—and more than that, it doesn’t just say “bigotry is bad, bigots are bad”; it provides us with a constructive response to bigotry, and forces us to confront the possibility that sometimes the bigots are us.

Indeed, it may be no exaggeration (though I’m sure I’ll get heat on the Internet for suggesting it) to say that Zootopia has done more to fight bigotry than most social justice activists will achieve in their entire lives. Don’t get me wrong, some social justice activists have done great things; and indeed, I may have to count myself in this “most activists” category, since I can’t point to any major accomplishments I’ve yet made in social justice.

But one of the biggest problems I see in the social justice community is the tendency to exclude and denigrate (in sociology jargon, “other” as a verb) people for acts of bigotry, even quite mild ones. Make one vaguely sexist joke, and you may as well be a rapist. Use racially insensitive language by accident, and clearly you are a KKK member. Say something ignorant about homosexuality, and you may as well be Rick Santorum. It becomes less about actually moving the world forward, and more about reaffirming our tribal unity as social justice activists. We are the pure ones. We never do wrong. All the rest of you are broken, and the only way to fix yourself is to become one of us in every way.

In the process of fighting tribal bigotry, we form our own tribe and become our own bigots.

Zootopia offers us another way. If you haven’t seen it, go rent it on DVD or stream it on Netflix right now. Seriously, this blog post will be here when you get back. I’m not going to play any more games with “spoilers!” though. It is definitely worth seeing, and from this point forward I’m going to presume you have.

The brilliance of Zootopia lies in the fact that it made bigotry what it is—not some evil force that infests us from outside, nor something that only cruel, evil individuals would ever partake in, but thoughts and attitudes that we all may have from time to time, that come naturally, and even in some cases might be based on a kernel of statistical truth. Judy Hopps is prey; she grew up in a rural town surrounded by others of her own species (with a population the size of New York City according to the sign, because this is still sometimes a silly Disney movie). She only knew a handful of predators growing up, yet when she moves to Zootopia suddenly she’s confronted with thousands of them, all around her. She doesn’t know what most predators are like, or how best to deal with them.

What she does know is that her ancestors were terrorized, murdered, and quite literally eaten by the ancestors of predators. Her instinctual fear of predators isn’t something utterly arbitrary; it was written into the fabric of her DNA by her ancestral struggle for survival. She has a reason to hate and fear predators that, on its face, actually seems to make sense.

And when there is a spree of murders, all committed by predators, it feels natural to us that Judy would fall back on her old prejudices; indeed, the brilliance of it is that they don’t immediately feel like prejudices. It takes us a moment to let her off-the-cuff comments at the press conference sink in (and Nick’s shocked reaction surely helps), before we realize that was really bigoted. Our adorable, innocent, idealistic, beloved protagonist is a bigot!

Or rather, she has done something bigoted. Because she is such a sympathetic character, we avoid the implication that she is a bigot, that this is something permanent and irredeemable about her. We have already seen the good in her, so we know that this bigotry isn’t what defines who she is. And in the end, she realizes where she went wrong and learns to do better. Indeed, it is ultimately revealed that the murders were orchestrated by someone whose goal was specifically to trigger those ancient ancestral feuds, and Judy reveals that plot and ultimately ends up falling in love with a predator herself.

What Zootopia is really trying to tell us is that we are all Judy Hopps. Every one of us most likely harbors some prejudiced attitude toward someone. If it’s not Black people or women or Muslims or gays, well, how about rednecks? Or Republicans? Or (perhaps the hardest for me) Trump supporters? If you are honest with yourself, there is probably some group of people on this planet that you harbor attitudes of disdain or hatred toward that nonetheless contains a great many good people who do not deserve your disdain.

And conversely, all bigots are Judy Hopps too, or at least the vast majority of them. People don’t wake up in the morning concocting evil schemes for the sake of being evil like cartoon supervillains. (Indeed, perhaps the greatest thing about Zootopia is that it is a cartoon in the sense of being animated, but it is not a cartoon in the sense of being morally simplistic. Compare Captain Planet, wherein polluters aren’t hardworking coal miners with no better options or even corrupt CEOs out to make an extra dollar to go with their other billion; no, they pollute on purpose, for no reason, because they are simply evil. Now that is a cartoon.) Normal human beings don’t plan to make the world a worse place. A handful of psychopaths might, but even then I think it’s more that they don’t care; they aren’t trying to make the world worse, they just don’t particularly mind if they do, as long as they get what they want. Robert Mugabe and Kim Jong-un are despicable human beings with the blood of millions on their hands, but even they aren’t trying to make the world worse.

And thus, if your theory of bigotry requires that bigots are inhuman monsters who harm others by their sheer sadistic evil, that theory is plainly wrong. Actually I think when stated outright, hardly anyone would agree with that theory; but the important thing is that we often act as if we do. When someone does something bigoted, we shun them, deride them, push them as far as we can to the fringes of our own social group or even our whole society. We don’t say that your statement was racist; we say you are racist. We don’t say your joke was sexist; we say you are sexist. We don’t say your decision was homophobic; we say you are homophobic. We define bigotry as part of your identity, something as innate and ineradicable as your race or sex or sexual orientation itself.

I think I know why we do this: It is to protect ourselves from the possibility that we ourselves might sometimes do bigoted things. Because only bigots do bigoted things, and we know that we are not bigots.

We laugh at this when someone else does it: “But some of my best friends are Black!” “Happy #CincoDeMayo; I love Hispanics!” But that is the very same psychological defense mechanism we’re using ourselves, albeit in a more extreme application. When we commit an act that is accused of being bigoted, we begin searching for contextual evidence outside that act to show that we are not bigoted. The truth we must ultimately confront is that this is irrelevant: The act can still be bigoted even if we are not overall bigots—for we are all Judy Hopps.

This seems like terrible news, even when delivered by animated animals (or fuzzy muppets in Avenue Q), because we tend to hear it as “We are all bigots.” We hear this as saying that bigotry is inevitable, inescapable, literally written into the fabric of our DNA. At that point, we may as well give up, right? It’s hopeless!

But that much we know can’t be true. It could be (indeed, likely is) true that some amount of bigotry is inevitable, just as no country has ever managed to reach zero homicide or zero disease. But just as rates of homicide and disease have precipitously declined with the advancement of human civilization (starting around industrial capitalism, as I pointed out in a previous post!), so indeed have rates of bigotry, at least in recent times.

For goodness’ sake, it used to be a legal, regulated industry to buy and sell other human beings in the United States! This was seen as normal; indeed many argued that it was economically indispensable.

Is 1865 too far back for you? How about racially segregated schools, which were only eliminated from US law in 1954, a time when my parents were both alive? (To be fair, only barely; my father was a month old.) Yes, even today the racial composition of our schools is far from evenly mixed; but it used to be a matter of law that Black children could not go to school with White children.

Women were only granted the right to vote in the US in 1920. My parents weren’t alive yet, but there definitely are people still alive today who were children when the Nineteenth Amendment was ratified.

Same-sex marriage was not legalized across the United States until last year. My own life plans were suddenly and directly affected by this change.

We have made enormous progress against bigotry, in a remarkably short period of time. It has been argued that social change progresses by the death of previous generations; but that simply can’t be true, because we are moving much too fast for that! Attitudes toward LGBT people have improved dramatically in just the last decade.

Instead, it must be that we are actually changing people’s minds. Not everyone’s, to be sure; and often not as quickly as we’d like. But bit by bit, we tear bigotry down, like people tearing off tiny pieces of the Berlin Wall in 1989.

It is important to understand what we are doing here. We are not getting rid of bigots; we are getting rid of bigotry. We want to convince people, “convert” them if you like, not shun them or eradicate them. And we want to strive to improve our own behavior, because we know it will not always be perfect. By forgiving others for their mistakes, we can learn to forgive ourselves for our own.

It is only by talking about bigoted actions and bigoted ideas, rather than bigoted people, that we can hope to make this progress. Someone can’t change who they are, but they can change what they believe and what they do. And along those same lines, it’s important to be clear about detailed, specific actions that people can take to make themselves and the world better.

Don’t just say “Check your privilege!” which at this point is basically a meaningless Applause Light. Instead say “Here are some articles I think you should read on police brutality, including this one from The American Conservative. And there’s a Black Lives Matter protest next weekend, would you like to join me there to see what we do?” Don’t just say “Stop being so racist toward immigrants!”; say “Did you know that about a third of undocumented immigrants are college students on overstayed visas? If we deport all these people, won’t that break up families?” Don’t try to score points. Don’t try to show that you’re the better person. Try to understand, inform, and persuade. You are talking to Judy Hopps, for we are all Judy Hopps.

And when you find false beliefs or bigoted attitudes in yourself, don’t deny them, don’t suppress them, don’t make excuses for them—but also don’t hate yourself for having them. Forgive yourself for your mistake, and then endeavor to correct it. For we are all Judy Hopps.

Believing in civilization without believing in colonialism

JDN 2457541

In a post last week I presented some of the overwhelming evidence that society has been getting better over time, particularly since the start of the Industrial Revolution. I focused mainly on infant mortality rates—babies not dying—but there are lots of other measures you could use as well. Despite popular belief, poverty is rapidly declining, and is now the lowest it’s ever been. War is rapidly declining. Crime is rapidly declining in First World countries, and to the best of our knowledge crime rates are stable worldwide. Public health is rapidly improving. Lifespans are getting longer. And so on, and so on. It’s not quite true to say that every indicator of human progress is on an upward trend, but the vast majority of really important indicators are.

Moreover, there is every reason to believe that this great progress is largely the result of what we call “civilization”, even Western civilization: Stable, centralized governments, strong national defense, representative democracy, free markets, openness to global trade, investment in infrastructure, science and technology, secularism, a culture that values innovation, and freedom of speech and the press. We did not get here by Marxism, nor agrarian socialism, nor primitivism, nor anarcho-capitalism. We did not get here by fascism, nor theocracy, nor monarchy. This progress was built by the center-left welfare state, “social democracy”, “modified capitalism”, the system where free, open markets are coupled with a strong democratic government to protect and steer them.

This fact is basically beyond dispute; the evidence is overwhelming. The serious debate in development economics is over which parts of the Western welfare state are most conducive to raising human well-being, and which parts of the package are more optional. And even then, some things are fairly obvious: Stable government is clearly necessary, while speaking English is clearly optional.

Yet many people are resistant to this conclusion, or even offended by it, and I think I know why: They are confusing the results of civilization with the methods by which it was established.

The results of civilization are indisputably positive: Everything I just named above, especially babies not dying.

But the methods by which civilization was established are not; indeed, some of the greatest atrocities in human history are attributable at least in part to attempts to “spread civilization” to “primitive” or “savage” people. It is therefore vital to distinguish between the result, civilization, and the processes by which it was effected, such as colonialism and imperialism.

First, it’s important not to overstate the link between civilization and colonialism.

We tend to associate colonialism and imperialism with White people from Western European cultures conquering other people in other cultures; but in fact colonialism and imperialism are basically universal to any human culture that attains sufficient size and centralization. India engaged in colonialism, Persia engaged in imperialism, China engaged in imperialism, the Mongols were of course major imperialists, and don’t forget the Ottoman Empire; and did you realize that Tibet and Mali were at one time imperialists as well? And of course there are a whole bunch of empires you’ve probably never heard of, like the Parthians and the Ghaznavids and the Umayyads. Even many of the people we’re accustomed to thinking of as innocent victims of colonialism were themselves imperialists—the Aztecs certainly were (they even sold people into slavery and used them for human sacrifice!), as were the Pequot, and the Iroquois may not have outright conquered anyone but were definitely at least “soft imperialists” the way that the US is today, spreading their influence around and using economic and sometimes military pressure to absorb other cultures into their own.

Of course, those were all civilizations, at least in the broadest sense of the word; but before that, it’s not that there wasn’t violence; it just wasn’t organized enough to be worthy of being called “imperialism”. The more general concept of intertribal warfare is a human universal, and some hunter-gatherer tribes actually engage in an essentially constant state of warfare we call “endemic warfare”. People have been grouping together to kill other people they perceived as different for at least as long as there have been people to do so.

This is of course not to excuse what European colonial powers did when they set up bases on other continents and exploited, enslaved, or even murdered the indigenous population. And the absolute numbers of people enslaved or killed were typically larger under European colonialism, mainly because European cultures became so powerful and conquered almost the entire world. Even if European societies were not uniquely predisposed to be violent (and I see no evidence to say that they were—humans are pretty much humans), they were more successful in their violent conquering, and so more people suffered and died. It’s also a first-mover effect: If the Ming Dynasty had supported Zheng He more in his colonial ambitions, I’d probably be writing this post in Mandarin and reflecting on why Asian cultures have engaged in so much colonial oppression.

While there is a deeply condescending paternalism (and often post-hoc rationalization of your own self-interested exploitation) involved in saying that you are conquering other people in order to civilize them, humans are also perfectly capable of committing atrocities for far less noble-sounding motives. There are holy wars such as the Crusades and ethnic genocides like in Rwanda, and the Arab slave trade was purely for profit and didn’t even have the pretense of civilizing people (not that the Atlantic slave trade was ever really about that anyway).

Indeed, I think it’s important to distinguish between colonialists who really did make some effort at civilizing the populations they conquered (like Britain, and also the Mongols actually) and those that clearly were just using that as an excuse to rape and pillage (like Spain and Portugal). This is similar to but not quite the same thing as the distinction between settler colonialism, where you send colonists to live there and build up the country, and exploitation colonialism, where you send military forces to take control of the existing population and exploit them to get their resources. Countries that experienced settler colonialism (such as the US and Australia) have fared a lot better in the long run than countries that experienced exploitation colonialism (such as Haiti and Zimbabwe).

The worst consequences of colonialism weren’t even really anyone’s fault, actually. The reason something like 98% of all Native Americans died as a result of European colonization was not that Europeans killed them—they did kill thousands of course, and I hope it goes without saying that that’s terrible, but it was a small fraction of the total deaths. The reason such a huge number died and whole cultures were depopulated was disease, and the inability of medical technology in any culture at that time to handle such a catastrophic plague. The primary cause was therefore accidental, and not really foreseeable given the state of scientific knowledge at the time. (I therefore think it’s wrong to consider it genocide—maybe democide.) Indeed, what really would have saved these people would be if Europe had advanced even faster into industrial capitalism and modern science, or else waited to colonize until they had; and then they could have distributed vaccines and antibiotics when they arrived. (Of course, there is evidence that a few European colonists used the diseases intentionally as biological weapons, which no amount of vaccine technology would prevent—and that is indeed genocide. But again, this was a small fraction of the total deaths.)

However, even with all those caveats, I hope we can all agree that colonialism and imperialism were morally wrong. No nation has the right to invade and conquer other nations; no one has the right to enslave people; no one has the right to kill people based on their culture or ethnicity.

My point is that it is entirely possible to recognize that and still appreciate that Western civilization has dramatically improved the standard of human life over the last few centuries. It simply doesn’t follow from the fact that British government and culture were more advanced and pluralistic that British soldiers can just go around taking over other people’s countries and planting their own flag (follow the link if you need some comic relief from this dark topic). That was the moral failing of colonialism; not that they thought their society was better—for in many ways it was—but that they thought that gave them the right to terrorize, slaughter, enslave, and conquer people.

Indeed, the “justification” of colonialism is a lot like that bizarre pseudo-utilitarianism I mentioned in my post on torture, where the mere presence of some benefit is taken to justify any possible action toward achieving that benefit. No, that’s not how morality works. You can’t justify unlimited evil by any good—it has to be a greater good, as in actually greater.

So let’s suppose that you do find yourself encountering another culture which is clearly more primitive than yours; their inferior technology results in them living in poverty and having very high rates of disease and death, especially among infants and children. What, if anything, are you justified in doing to intervene to improve their condition?

One idea would be to hold to the Prime Directive: No intervention, no sir, not ever. This is clearly what Gene Roddenberry thought of imperialism, hence why he built it into the Federation’s core principles.

But does that really make sense? Even as Star Trek shows progressed, the writers kept coming up with situations where the Prime Directive really seemed like it should have an exception, and sometimes decided that the honorable crew of Enterprise or Voyager really should intervene in this more primitive society to save them from some terrible fate. And I hope I’m not committing a Fictional Evidence Fallacy when I say that if your fictional universe, which was specifically designed not to let that happen, makes it happen anyway, well… maybe it’s something we should be considering.

What if people are dying of a terrible disease that you could easily cure? Should you really deny them access to your medicine to avoid intervening in their society?

What if the primitive culture is ruled by a horrible tyrant that you could easily depose with little or no bloodshed? Should you let him continue to rule with an iron fist?

What if the natives are engaged in slavery, or even their own brand of imperialism against other indigenous cultures? Can you fight imperialism with imperialism?

And then we have to ask, does it really matter whether their babies are being murdered by the tyrant or simply dying from malnutrition and infection? The babies are just as dead, aren’t they? Even if we say that being murdered by a tyrant is worse than dying of malnutrition, it can’t be that much worse, can it? Surely 10 babies dying of malnutrition is at least as bad as 1 baby being murdered?

But then it begins to seem like we have a duty to intervene, and moreover a duty that applies in almost every circumstance! If you are on opposite sides of the technology threshold where infant mortality drops from 30% to 1%, how can you justify not intervening?

I think the best answer here is to keep in mind the very large costs of intervention as well as the potentially large benefits. The answer sounds simple, but is actually perhaps the hardest possible answer to apply in practice: You must do a cost-benefit analysis. Furthermore, you must do it well. We can’t demand perfection, but it must actually be a serious good-faith effort to predict the consequences of different intervention policies.

We know that people tend to resist most outside interventions, especially if you have the intention of toppling their leaders (even if they are indeed tyrannical). Even the simple act of offering people vaccines could be met with resistance, as the native people might think you are poisoning them or somehow trying to control them. But in general, opening contact with gifts and trade is almost certainly going to trigger less hostility and therefore be more effective than going in guns blazing.

If you do use military force, it must be targeted at the particular leaders who are most harmful, and it must be designed to achieve swift, decisive victory with minimal collateral damage. (Basically I’m talking about just war theory.) If you really have such an advanced civilization, show it by exhibiting total technological dominance and minimizing the number of innocent people you kill. The NATO interventions in Kosovo and Libya mostly got this right. The Vietnam War and Iraq War got it totally wrong.

As you change their society, you should be prepared to bear most of the cost of transition; you are, after all, much richer than they are, and also the ones responsible for effecting the transition. You should not expect to see short-term gains for your own civilization, only long-term gains once their culture has advanced to a level near your own. You can’t bear all the costs of course—transition is just painful, no matter what you do—but at least the fungible economic costs should be borne by you, not by the native population. Examples of doing this wrong include basically all the standard examples of exploitation colonialism: Africa, the Caribbean, South America. Examples of doing this right include West Germany and Japan after WW2, and South Korea after the Korean War—which is to say, the greatest economic successes in the history of the human race. This was us winning development, humanity. Do this again everywhere and we will have not only ended world hunger, but achieved global prosperity.

What happens if we apply these principles to real-world colonialism? It does not fare well. Nor should it, as we’ve already established that most if not all real-world colonialism was morally wrong.

15th and 16th century colonialism fails immediately; it offers no benefit to speak of. Europe’s technological superiority was enough to give them gunpowder but not enough to drop their infant mortality rate. Maybe life was better in 16th century Spain than it was in the Aztec Empire, but honestly not by all that much; and life in the Iroquois Confederacy was in many ways better than life in 15th century England. (Though maybe that justifies some Iroquois imperialism, at least their “soft imperialism”?)

If these principles did justify any real-world imperialism—and I am not convinced that it does—it would only be much later imperialism, like the British Empire in the 19th and 20th century. And even then, it’s not clear that the talk of “civilizing” people and “the White Man’s Burden” was much more than rationalization, an attempt to give a humanitarian justification for what were really acts of self-interested economic exploitation. Even though India and South Africa are probably better off now than they were when the British first took them over, it’s not at all clear that this was really the goal of the British government so much as a side effect, and there are a lot of things the British could have done differently that would obviously have made them better off still—you know, like not implementing the precursors to apartheid, or making India a parliamentary democracy immediately instead of starting with the Raj and only conceding to democracy after decades of protest. What actually happened doesn’t exactly look like Britain cared nothing for actually improving the lives of people in India and South Africa (they did build a lot of schools and railroads, and sought to undermine slavery and the caste system), but it also doesn’t look like that was their only goal; it was more like one goal among several which also included the strategic and economic interests of Britain. It isn’t enough that Britain was a better society or even that they made South Africa and India better societies than they were; if the goal wasn’t really about making people’s lives better where you are intervening, it’s clearly not justified intervention.

And that’s the relatively beneficent imperialism; the really horrific imperialists throughout history made only the barest pretense of spreading civilization and were clearly interested in nothing more than maximizing their own wealth and power. This is probably why we get things like the Prime Directive; we saw how bad it can get, and overreacted a little by saying that intervening in other cultures is always, always wrong, no matter what. It was only a slight overreaction—intervening in other cultures is usually wrong, and almost all historical examples of it were wrong—but it is still an overreaction. There are exceptional cases where intervening in another culture can be not only morally right but obligatory.

Indeed, one underappreciated consequence of colonialism and imperialism is that they have triggered a backlash against real good-faith efforts toward economic development. People in Africa, Asia, and Latin America see economists from the US and the UK (and most of the world’s top economists are in fact educated in the US or the UK) come in and tell them that they need to do this and that to restructure their society for greater prosperity, and they understandably ask: “Why should I trust you this time?” The last two or four or seven batches of people coming from the US and Europe to intervene in their countries exploited them or worse, so why is this time any different?

It is different, of course; UNDP is not the East India Company, not by a long shot. Even for all their faults, the IMF isn’t the East India Company either. Indeed, while these people largely come from the same places as the imperialists, and may be descended from them, they are in fact completely different people, and moral responsibility does not inherit across generations. While the suspicion is understandable, it is ultimately unjustified; whatever happened hundreds of years ago, this time most of us really are trying to help—and it’s working.

What is progress? How far have we really come?

JDN 2457534

It is a controversy that has lasted throughout the ages: Is the world getting better? Is it getting worse? Or is it more or less staying the same, changing in ways that don’t really constitute improvements or detriments?

The most obvious and indisputable change in human society over the course of history has been the advancement of technology. At one extreme there are techno-utopians, who believe that technology will solve all the world’s problems and bring about a glorious future; at the other extreme are anarcho-primitivists, who maintain that civilization, technology, and industrialization were all grave mistakes, removing us from our natural state of peace and harmony.

I am not a techno-utopian—I do not believe that technology will solve all our problems—but I am much closer to that end of the scale. Technology has solved a lot of our problems, and will continue to solve a lot more. My aim in this post is to convince you that progress is real, that things really are, on the whole, getting better.

One of the more baffling arguments against progress comes from none other than Jared Diamond, the social scientist most famous for Guns, Germs and Steel (which oddly enough is mainly about horses and goats). About seven months before I was born, Diamond wrote an essay for Discover magazine arguing quite literally that agriculture—and by extension, civilization—was a mistake.

Diamond fortunately avoids the usual argument based solely on modern hunter-gatherers, which is a selection bias if ever I heard one. Instead his main argument seems to be that paleontological evidence shows an overall decrease in health around the same time as agriculture emerged. But that’s still an endogeneity problem, albeit a subtler one. Maybe agriculture emerged as a response to famine and disease. Or maybe they were both triggered by rising populations; higher populations increase disease risk, and are also basically impossible to sustain without agriculture.

I am similarly dubious of the claim that hunter-gatherers are always peaceful and egalitarian. It does seem to be the case that herders are more violent than other cultures, as they tend to form honor cultures that punish all slights with overwhelming violence. Even after the Industrial Revolution there were herder honor cultures—the Wild West. Yet as Steven Pinker keeps trying to tell people, the death rates due to homicide in all human cultures appear to have steadily declined for thousands of years.

I read an article just a few days ago on the Scientific American blog that included a claim so astonishingly nonsensical it makes me wonder whether the authors can even do arithmetic or read statistical tables correctly:

As I keep reminding readers (see Further Reading), the evidence is overwhelming that war is a relatively recent cultural invention. War emerged toward the end of the Paleolithic era, and then only sporadically. A new study by Japanese researchers published in the Royal Society journal Biology Letters corroborates this view.

Six Japanese scholars led by Hisashi Nakao examined the remains of 2,582 hunter-gatherers who lived 12,000 to 2,800 years ago, during Japan’s so-called Jomon Period. The researchers found bashed-in skulls and other marks consistent with violent death on 23 skeletons, for a mortality rate of 0.89 percent.

That is supposed to be evidence that ancient hunter-gatherers were peaceful? The global homicide rate today is 62 homicides per million people per year. Using the worldwide life expectancy of 71 years (which is biasing against modern civilization, because our life expectancy is longer), that means the worldwide lifetime homicide rate is 4,400 homicides per million people, or 0.44%—less than half the homicide rate of these “peaceful” hunter-gatherers.

If you compare just against First World countries, the difference is even starker; let’s use the US, which has the highest homicide rate in the First World. Our homicide rate is 38 homicides per million people per year, which at our life expectancy of 79 years is 3,000 homicides per million people, or an overall homicide rate of 0.3%—slightly more than a third of this “peaceful” ancient culture.

The most peaceful societies today—notably Japan, where these remains were found—have homicide rates as low as 3 per million people per year, which is a lifetime homicide rate of 0.02%, forty times smaller than their supposedly utopian ancestors. (Yes, all of Japan has fewer total homicides than Chicago. I’m sure it has nothing to do with their extremely strict gun control laws.)

Indeed, to get a modern homicide rate as high as these hunter-gatherers, you need to go to a country like Congo, Myanmar, or the Central African Republic. To get a substantially higher homicide rate, you essentially have to be in Latin America: Honduras, the murder capital of the world, has a lifetime homicide rate of about 6.7%.

Again, how did I figure these things out? By reading basic information from publicly-available statistical tables and then doing some simple arithmetic. Apparently these paleoanthropologists couldn’t be bothered to do that, or didn’t know how to do it correctly, before they started proclaiming that human nature is peaceful and civilization is the source of violence. After an oversight as egregious as that, it feels almost petty to note that a sample size of a few thousand people from one particular region and culture isn’t sufficient data to draw such sweeping judgments or speak of “overwhelming” evidence.
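The arithmetic really is this simple. A minimal sketch in Python, using the annual rates and life expectancies quoted above (the Japan calculation uses the worldwide 71-year figure, which matches the rounding in the text):

```python
# Convert an annual homicide rate into an approximate lifetime risk,
# assuming the annual rate stays constant over a full life expectancy.
def lifetime_homicide_pct(annual_rate_per_million, life_expectancy_years):
    return annual_rate_per_million * life_expectancy_years / 1_000_000 * 100

# (annual homicides per million people, life expectancy in years)
populations = {
    "World": (62, 71),
    "US":    (38, 79),
    "Japan": (3, 71),   # using the worldwide life expectancy figure
}

for name, (rate, life_exp) in populations.items():
    print(f"{name}: {lifetime_homicide_pct(rate, life_exp):.2f}%")

# Jomon-period sample: 23 violent deaths among 2,582 skeletons
print(f"Jomon hunter-gatherers: {23 / 2582 * 100:.2f}%")
```

Running this reproduces the 0.44%, 0.30%, 0.02%, and 0.89% figures: publicly available numbers and one multiplication each.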

Of course, in order to decide whether progress is a real phenomenon, we need a clearer idea of what we mean by progress. It would be presumptuous to use per-capita GDP, though there can be absolutely no doubt that technology and capitalism do in fact raise per-capita GDP. If we measure by inequality, modern society clearly fares much worse (our top 1% share and Gini coefficient may be higher than Classical Rome!), but that is clearly biased in the opposite direction, because the main way we have raised inequality is by raising the ceiling, not lowering the floor. Most of our really good measures (like the Human Development Index) only exist for the last few decades and can barely even be extrapolated back through the 20th century.

How about babies not dying? This is my preferred measure of a society’s value. It seems like something that should be totally uncontroversial: Babies dying is bad. All other things equal, a society is better if fewer babies die.

I suppose it doesn’t immediately follow that all things considered a society is better if fewer babies die; maybe the dying babies could be offset by some greater good. Perhaps a totalitarian society where no babies die is in fact worse than a free society in which a few babies die, or perhaps we should be prepared to accept some small amount of babies dying in order to save adults from poverty, or something like that. But without some really powerful overriding reason, babies not dying probably means your society is doing something right. (And since most ancient societies were in a state of universal poverty and quite frequently tyranny, these exceptions would only strengthen my case.)

Well, get ready for some high-yield truth bombs about infant mortality rates.

It’s hard to get good data for prehistoric cultures, but the best data we have says that infant mortality in ancient hunter-gatherer cultures was about 20-50%, with a best estimate around 30%. This is statistically indistinguishable from early agricultural societies.

Indeed, 30% seems to be the figure humanity had for most of history. Just shy of a third of all babies died for most of history.

In Medieval times, infant mortality was about 30%.

This same rate (fluctuating based on various plagues) persisted into the Enlightenment—Sweden has the best records, and their infant mortality rate in 1750 was about 30%.

The decline in infant mortality began slowly: During the Industrial Era, infant mortality was about 15% in isolated villages, but still as high as 40% in major cities due to high population densities with poor sanitation.

Even as recently as 1900, there were US cities with infant mortality rates as high as 30%, though the overall rate was more like 10%.

Most of the decline was recent and rapid: Just within the US since WW2, infant mortality fell from about 5.5% to 0.7%, though there remains a substantial disparity between White and Black infants.

Globally, the infant mortality rate fell from 6.3% to 3.2% within my lifetime, and in Africa today, the region where it is worst, it is about 5.5%—or what it was in the US in the 1940s.

This high rate of babies dying is the main reason ancient societies have such low life expectancies; once people reached adulthood, they actually lived to be about 70 years old, not much worse than we do today. So my multiplying everything by 71 actually isn’t too far off even for ancient societies.
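To see why infant mortality dominates the average, here is a deliberately crude toy model: assume infants who die do so at age zero and everyone who survives infancy lives to about 70, ignoring deaths at all intermediate ages. Both assumptions are simplifications I am introducing for illustration, not figures from the historical record:

```python
# Toy model of life expectancy at birth: infant deaths occur at age ~0,
# and everyone who survives infancy lives to the same average age.
# Deaths at intermediate ages are ignored, so this is only illustrative.
def life_expectancy_at_birth(infant_mortality, survivor_avg_age):
    return (1 - infant_mortality) * survivor_avg_age

# Ancient society: ~30% infant mortality, survivors living to ~70
print(life_expectancy_at_birth(0.30, 70))   # -> 49.0 years

# Cut infant mortality to the modern global 3.2%, holding adult
# lifespans fixed, and life expectancy at birth jumps by ~19 years.
print(life_expectancy_at_birth(0.032, 70))  # -> ~67.8 years
```

Under these assumptions, nearly all of the gap between an ancient life expectancy near 50 and a modern one near 70 comes from babies not dying, without any change in how long adults live.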

Let me make a graph for you here, of the approximate rate of babies dying over time from 10,000 BC to today:

[Figure: Infant_mortality.png — approximate infant mortality rate, 10,000 BC to the present]

Let’s zoom in on the last 250 years, where the data is much more solid:

[Figure: Infant_mortality_recent.png — infant mortality rate over the last 250 years]

I think you may notice something in these graphs. There is quite literally a turning point for humanity, a kink in the curve where we suddenly begin a rapid decline from an otherwise constant mortality rate.

That point occurs around or shortly before 1800—that is, at the rise of industrial capitalism. Adam Smith (not to mention Thomas Jefferson) was writing at just about the point in time when humanity made a sudden and unprecedented shift toward saving the lives of millions of babies.

So now, think about that the next time you are tempted to say that capitalism is an evil system that destroys the world; the evidence points to capitalism quite literally saving babies from dying.

How would it do so? Well, there’s that rising per-capita GDP we previously ignored, for one thing. But more important seems to be the way that industrialization and free markets support technological innovation, and in this case especially medical innovation—antibiotics and vaccines. Our higher rates of literacy and better communication, also results of a raised standard of living and improved technology, surely didn’t hurt. I’m not often in agreement with the Cato Institute, but they’re right about this one: Industrial capitalism is the chief source of human progress.

Billions of babies would have died but we saved them. So yes, I’m going to call that progress. Civilization, and in particular industrialization and free markets, have dramatically improved human life over the last few hundred years.

In a future post I’ll address one of the common retorts to this basically indisputable fact: “You’re making excuses for colonialism and imperialism!” No, I’m not. Saying that modern capitalism is a better system (not least because it saves babies) is not at all the same thing as saying that our ancestors were justified in using murder, slavery, and tyranny to force people into it.