The real cost of high rent

Jan 26 JDN 2458875

The average daily commute time in the United States is about 26 minutes each way—for a total of 52 minutes every weekday. Public transit commute times are substantially longer in most states than driving commute times: In California, the average driving commute is 28 minutes each way, while the average public transit commute is 51 minutes each way. Adding this up over 5 workdays per week, working 50 weeks per year, means that on average Americans spend over 216 hours each year commuting.

Median annual income in the US is about $33,000. Assuming about 2000 hours of work per year for a full-time job, that’s a wage of $16.50 per hour. This makes the total cost of commute time in the United States over $3500 per worker per year. Multiplied by a labor force of 205 million, this makes the total cost of commute time over $730 billion per year. That’s not even counting the additional carbon emissions and road fatalities. This is all pure waste. The optimal commute time is zero minutes; the closer we can get to that, the better. Telecommuting might finally make this a reality, at least for a large swath of workers. Already over 40% of US workers telecommute at least some of the time.
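Those figures are easy to reproduce; here is the arithmetic as a short Python sketch, using only the numbers quoted above:

```python
# Commute-cost arithmetic from the paragraphs above.
# All inputs are the article's own figures.
minutes_per_day = 26 * 2                  # 26 minutes each way, round trip
workdays_per_year = 5 * 50                # 5 days/week, 50 weeks/year
hours_per_year = minutes_per_day * workdays_per_year / 60   # ~216.7 hours

hourly_wage = 33_000 / 2_000              # median income over full-time hours: $16.50
cost_per_worker = hours_per_year * hourly_wage              # ~$3,575/year

labor_force = 205e6                       # stated US labor force
total_cost = cost_per_worker * labor_force                  # ~$733 billion/year
print(f"{hours_per_year:.0f} hours, ${cost_per_worker:,.0f}/worker, "
      f"${total_cost / 1e9:.0f} billion total")
```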

Let me remind you that it would cost about $200 billion per year to end world hunger. We could end world hunger three times over with the effort we currently waste in commute time.

Where is this cost coming from? Why are commutes so long? The answer is obvious: The rent is too damn high. People have long commutes because they can’t afford to live closer to where they work.

Almost half of all renter households in the US pay more than 30% of their income in rent—and 25% pay more than half of their income. The average household rent in the US is over $1400 per month, almost $17,000 per year—more than the per-capita GDP of China.

Not that buying a home solves the problem: In many US cities the price-to-rent ratio of homes is over 20 to 1, and in Manhattan and San Francisco it’s as high as 50 to 1. If you already bought your home years ago, this is great for you; for the rest of us, not so much. Interestingly, high rents seem to correlate with higher price-to-rent ratios, so it seems like purchase prices are responding even more to whatever economic pressure is driving up rents.

Overall about a third of all US consumer spending is on housing; out of our total consumption spending of $13 trillion, this means we are spending over $4 trillion per year on housing, about the GDP of Germany. Of course, some of this is actually worth spending: Housing costs a lot to build, and provides many valuable benefits.

What should we be spending on housing, if the housing market were competitive and efficient?

I think Chicago’s housing market looks fairly healthy. Homes there go for about $250,000, with prices that are relatively stable; and the price-to-rent ratio is about 20 to 1. Chicago is a large city with a population density of about 6,000 people per square kilometer, so it’s not as if I’m using a tiny rural town as my comparison. If the entire population of the United States were concentrated at the same density as the city of Chicago, we’d all fit in only 55,000 square kilometers—less than the area of West Virginia.
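As a quick check on that thought experiment (the ~330 million US population is my approximation; the density figure is from the text):

```python
# If the whole US lived at Chicago's density, how much land would it take?
us_population = 330e6          # approximate US population (assumption)
chicago_density = 6_000        # people per square kilometer, from the article
area_needed_km2 = us_population / chicago_density
print(f"{area_needed_km2:,.0f} km^2")  # 55,000 km^2, vs. ~62,750 km^2 for West Virginia
```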
Compare this to the median housing price in California ($550,000), New York ($330,000), or Washington, D.C. ($630,000). There are metro areas with housing prices far above even this: In San Jose the median home price is $1.1 million. I find it very hard to believe that it is literally four times as hard to build homes in San Jose as it is in Chicago. Something is distorting that price—maybe it’s over-regulation, maybe it’s monopoly power, maybe it’s speculation—I’m not sure what exactly, but there’s definitely something out of whack here.

This suggests that a more efficient housing market would probably cut prices in California by 50% and prices in New York by 25%. Since about 40% of all spending in California is on housing, this price change would effectively free up 20% of California’s GDP—and 20% of $3 trillion is $600 billion per year. The additional 8% of New York’s GDP gets us another $130 billion, bringing us to the $730 billion I calculated for the total cost of commuting, from New York and California alone.
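Written out, the savings arithmetic looks like this (the New York figure is taken directly from the text; the California GDP is the article's round number):

```python
# California: 40% of spending on housing, prices cut 50% -> 20% of GDP freed.
ca_gdp = 3e12
ca_savings = ca_gdp * 0.40 * 0.50        # $600 billion/year

# New York: the text gives the freed-up amount directly as 8% of GDP.
ny_savings = 130e9

total_savings = ca_savings + ny_savings  # $730 billion/year
print(f"${total_savings / 1e9:.0f} billion per year")
```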

This means that the total amount of waste—including both time and money—due to housing being too expensive probably exceeds $1.5 trillion per year. This is an enormous sum of money: We’re spending an Australia here. We could just about pay for a single-payer healthcare system with this.

Green New Deal Part 3: Guaranteeing education and healthcare is easy—why aren’t we doing it?

Apr 21 JDN 2458595

Last week was one of the “hard parts” of the Green New Deal. Today it’s back to one of the “easy parts”: Guaranteed education and healthcare.

“Providing all people of the United States with – (i) high-quality health care; […]

“Providing resources, training, and high-quality education, including higher education, to all people of the United States.”

Many Americans seem to think that providing universal healthcare would be prohibitively expensive. In fact, it would have literally negative net cost.
The US currently has the most bloated, expensive, inefficient healthcare system in the entire world. We spend almost $10,000 per person per year on healthcare, and get outcomes no better than France or the UK, where they spend less than $5,000.
In fact, our public healthcare expenditures are currently higher than almost every other country. Our private expenditures are therefore pure waste; all they are doing is providing returns for the shareholders of corporations. If we were to simply copy the UK National Health Service and spend money in exactly the same way as they do, we would spend the same amount in public funds and almost nothing in private funds—and the UK has a higher mean lifespan than the US.
This is absolutely a no-brainer. Burn the whole system of private insurance down. Copy a healthcare system that actually works, like they use in every other First World country.
It wouldn’t even be that complicated to implement: We already have a single-payer healthcare system in the US; it’s called Medicare. Currently only old people get it; but old people use the most healthcare anyway. Hence, Medicare for All: Just lower the eligibility age for Medicare to 18 (if not zero). In the short run there would be additional costs for the transition, but in the long run we would save mind-boggling amounts of money, all while improving healthcare outcomes and extending our lifespans. Current estimates say that the net savings of Medicare for All would be about $5 trillion over the next 10 years. We can afford this. Indeed, the question is, as it was for infrastructure: How can we afford not to do this?
Isn’t this socialism? Yeah, I suppose it is. But healthcare is one of the few things that socialist countries consistently do extremely well. Cuba is a socialist country—a real socialist country, not a social democratic welfare state like Norway but a genuinely authoritarian centrally-planned economy. Cuba’s per-capita GDP PPP is a third of ours. Yet their life expectancy is actually higher than ours, because their healthcare system is just that good. Their per-capita healthcare spending is one-fourth of ours, and their health outcomes are better. So yeah, let’s be socialist in our healthcare. Socialists seem really good at healthcare.
And this makes sense, if you think about it. Doctors can do their jobs a lot better when they’re focused on just treating everyone who needs help, rather than arguing with insurance companies over what should and shouldn’t be covered. Preventative medicine is extremely cost-effective, yet it’s usually the first thing that people skimp on when trying to save money on health insurance. A variety of public health measures (such as vaccination and air quality regulation) are extremely cost-effective, but they are public goods that the private sector would not pay for by itself.
It’s not as if healthcare was ever really a competitive market anyway: When you get sick or injured, do you shop around for the best or cheapest hospital? How would you even go about that, when hospitals don’t post most of their prices, and the prices they do post are often wildly different from what you’ll actually pay?
The only serious argument I’ve heard against single-payer healthcare is a moral one: “Why should I have to pay for other people’s healthcare?” Well, I guess, because… you’re a human being? You should care about other human beings, and not want them to suffer and die from easily treatable diseases?
I don’t know how to explain to you that you should care about other people.

Single-payer healthcare is not only affordable: It would be cheaper and better than what we are currently doing. (In fact, almost anything would be cheaper and better than what we are currently doing—Obamacare was an improvement over the previous mess, but it’s still a mess.)
What about public education? Well, we already have that up to the high school level, and it works quite well.
Contrary to popular belief, the average public high school has better outcomes in terms of test scores and college placements than the average private high school. There are some elite private schools that do better, but they are extraordinarily expensive and they self-select only the best students. Public schools have to take all students, and they have a limited budget; but they have high quality standards and they require their teachers to be certified.
The flaws in our public school system are largely from it being not public enough, which is to say that schools are funded by their local property taxes instead of having their costs equally shared across whole states. This gives them the same basic problem as private schools: Rich kids get better schools.
If we removed that inequality, our educational outcomes would probably be among the best in the world—indeed, in our most well-funded school districts, they are. The state of Massachusetts, which actually funds its public schools equally and well, gets international test scores just as good as the supposedly “superior” educational systems of Asian countries. In fact, this is probably even unfair to Massachusetts, as we know that China specifically selects the regions that have the best students to be the ones to take these international tests. Massachusetts is the best the US has to offer, but Shanghai is also the best China has to offer, so it’s only fair we compare apples to apples.
Public education has benefits for our whole society. We want to have a population of citizens, workers, and consumers who are well-educated. There are enormous benefits of primary and secondary education in terms of reducing poverty, improving public health, and increased economic growth.
So there’s my impassioned argument for why we should continue to support free, universal public education up to high school.
When it comes to college, I can’t be quite so enthusiastic. While there are societal benefits of college education, most of the benefits of college accrue to the individuals who go to college themselves.
The median weekly income of someone with a high school diploma is about $730; with a bachelor’s degree this rises to $1200; and with a doctoral or professional degree it rises above $1800. Higher education also greatly reduces your risk of being unemployed: while about 4% of the overall labor force is unemployed, only 1.5% of people with doctorates or professional degrees are. Add that up over all the weeks of your life, and it’s a lot of money.
The net present value of a college education has been estimated at approximately $1 million. This result is quite sensitive to the choice of discount rate; at a higher discount rate you can get the net present value as “low” as $250,000.
With this in mind, the fact that the median student loan debt for a college graduate is about $30,000 doesn’t sound so terrible, does it? You’re taking out a loan for $30,000 to get something that will earn you between $250,000 and $1 million over the course of your life.
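To see how sensitive that estimate is to the discount rate, here is an illustrative present-value sketch. The $470/week earnings premium comes from the income figures above; the 45-year career length and the 2% and 8% discount rates are my own illustrative assumptions, and the calculation ignores tuition, taxes, and wage growth:

```python
def npv_of_premium(annual_premium, years, discount_rate):
    """Present value of a constant annual earnings premium
    (illustrative assumptions, not the published estimates)."""
    return sum(annual_premium / (1 + discount_rate) ** t
               for t in range(1, years + 1))

annual_premium = (1200 - 730) * 52   # bachelor's vs. high-school income: $24,440/year
low_rate = npv_of_premium(annual_premium, 45, 0.02)    # ~$720,000
high_rate = npv_of_premium(annual_premium, 45, 0.08)   # ~$296,000
print(f"${low_rate:,.0f} at 2%, ${high_rate:,.0f} at 8%")
```

Even this crude sketch reproduces the pattern: a low discount rate lands near the high end of the published range, a high discount rate near the low end.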
There is some evidence that having student loans delays homeownership; but this is a problem with our mortgage system, not our education system. It’s mainly the inability to finance a down payment that prevents people from buying homes. We should implement a system of block grants for first-time homeowners that gives them a chunk of money to make a down payment, perhaps $50,000. This would cost about as much as the mortgage interest tax deduction which mainly benefits the upper-middle class.
Higher education does have societal benefits as well. Perhaps the starkest I’ve noticed is how categorically higher education decided people’s votes on Donald Trump: Counties with high rates of college education almost all voted for Clinton, and counties with low rates of college education almost all voted for Trump. This was true even controlling for income and a lot of other demographic factors. Only authoritarianism, sexism and racism were better predictors of voting for Trump—and those could very well be mediating variables, if education reduces such attitudes.
If indeed it’s true that higher education makes people less sexist, less racist, less authoritarian, and overall better citizens, then it would be worth every penny to provide universal free college.
But it’s worth noting that even countries like Germany and Sweden which ostensibly do that don’t really do that: While college tuition is free for Swedish citizens and Germany provides free college for all students of any nationality, nevertheless the proportion of people in Sweden and Germany with bachelor’s degrees is actually lower than that of the United States. In Sweden the gap largely disappears if you restrict to younger cohorts—but in Germany it’s still there.
Indeed, from where I’m sitting, “universal free college” looks an awful lot like “the lower-middle class pays for the upper-middle class to go to college”. Social class is still a strong predictor of education level in Sweden. Among OECD countries, education seems to be the best at promoting upward mobility in Australia, and average college tuition in Australia is actually higher than average college tuition in the US (yes, even adjusting for currency exchange: Australian dollars are worth only slightly less than US dollars).
What does Australia do? They have a really good student loan system. You have to reach an annual income of about $40,000 per year before you need to make payments at all, and the loans are subsidized to be interest-free. Once you do owe payments, the debt is repaid at a rate proportional to your income—so effectively it’s not a debt at all but an equity stake.
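A minimal sketch of such an income-contingent rule. The $40,000 threshold is from the text; the 5% repayment rate is an illustrative placeholder, not the actual Australian schedule (which rises in brackets with income):

```python
def annual_repayment(income, balance, threshold=40_000, rate=0.05):
    """Income-contingent student loan repayment (illustrative parameters).
    No payment below the income threshold; above it, pay a fixed share
    of income, capped at the remaining balance. No interest accrues."""
    if income < threshold:
        return 0.0
    return min(rate * income, balance)

print(annual_repayment(30_000, balance=20_000))   # 0.0: below the threshold
print(annual_repayment(60_000, balance=20_000))   # 3000.0
```

Because the payment scales with income rather than with the outstanding balance, the repayment stream behaves like the equity stake described above.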
In the US, students have been taking the desperate (and very cyberpunk) route of selling literal equity stakes in their education to Wall Street banks; this is a terrible idea for a hundred reasons. But having the government have something like an equity stake in students makes a lot of sense.
Because of the subsidies and generous repayment plans, the Australian government loses money on their student loan system, but so what? In order to implement universal free college, they would have spent an awful lot more than they are losing now. This way, the losses are specifically on students who got a lot of education but never managed to raise their income high enough—which means the government is actually incentivized to improve the quality of education or job-matching.
The cost of universal free college is considerable: That $1.3 trillion currently owed as student loans would be additional government debt or tax liability instead. Is this utterly unaffordable? No. But it’s not trivial either. We’re talking about roughly $60 billion per year in additional government spending, a bit less than what we currently spend on food stamps. An expenditure like that should have a large public benefit (as food stamps absolutely, definitely do!); I’m not convinced that free college would have such a benefit.
It would benefit me personally enormously: I currently owe over $100,000 in debt (about half from my undergrad and half from my first master’s). But I’m fairly privileged. Once I finally make it through this PhD, I can expect to make something like $100,000 per year until I retire. I’m not sure that benefiting people like me should be a major goal of public policy.
That said, I don’t think universal free college is a terrible policy. Done well, it could be a good thing. But it isn’t the no-brainer that single-payer healthcare is. We can still make sure that students are not overburdened by debt without making college tuition actually free.

Daylight Savings Time is pointless and harmful

Nov 12, JDN 2458069

As I write this, Daylight Savings Time has just ended.

Sleep deprivation costs the developed world about 2% of GDP—on the order of $1 trillion per year. The US alone loses enough productivity from sleep deprivation that recovering this loss would give us enough additional income to end world hunger.

So, naturally, we have a ritual every year where we systematically impose an hour of sleep deprivation on the entire population for six months. This makes sense somehow.
The start of Daylight Savings Time each year is associated with a spike in workplace injuries, heart attacks, and suicide.

Nor does the “extra” hour of sleep we get in the fall compensate; in fact, it comes with its own downsides. Pedestrian fatalities spike immediately after the end of Daylight Savings Time; the rate of assault also rises at the end of DST, though it does also seem to fall when DST starts.

Daylight Savings Time was created to save energy. It does do that… technically. The total energy savings for the United States due to DST amounts to about 0.3% of our total electricity consumption. In some cases it can even increase energy use, though it does seem to smooth out electricity consumption over the day in a way that is useful for solar and wind power.

But this is a trivially small amount of energy savings, and there are far better ways to achieve it.

Simply due to new technologies and better policies, manufacturing in the US has reduced its energy costs per dollar of output by over 4% in the last few years. Simply getting all US states to use energy as efficiently as it is used in New York or California (not much climate similarity between those two states, but hmm… something about politics comes to mind…) would cut our energy consumption by about 30%.

The total amount of energy saved by DST is comparable to the amount of electricity now produced by small-scale residential photovoltaics—so simply doubling residential solar power production (which we’ve been doing every few years lately) would yield the same benefits as DST without the downsides. If we really got serious about solar power and adopted the policies necessary to get a per-capita solar power production comparable to Germany (not a very sunny place, mind you—Sacramento gets over twice the hours of sun per year that Berlin does), we would increase our solar power production by a factor of 10—five times the benefits of DST, none of the downsides.

Alternatively we could follow France’s model and get serious about nuclear fission. France produces over three hundred times as much energy from nuclear power as the US saves via Daylight Savings Time. Not coincidentally, France produces half as much CO2 per dollar of GDP as the United States.

Why would we persist in such a ridiculous policy, with such terrible downsides and almost no upside? To a first approximation, all human behavior is social norms.

Think of this as a moral recession

August 27, JDN 2457993

The Great Depression was, without doubt, the worst macroeconomic event of the last 200 years. Over 30 million people became unemployed. Unemployment exceeded 20%. Standard of living fell by as much as a third in the United States. Political unrest spread across the world, and the collapsing government of Germany ultimately became the Third Reich and triggered the Second World War. If we ignore the world war, however, the effect on mortality rates was surprisingly small. (“Other than that, Mrs. Lincoln, how was the play?”)

And yet, how long do you suppose it took for economic growth to repair the damage? 80 years? 50 years? 30 years? 20 years? Try ten to fifteen. By 1940, the US, UK, Germany, and Japan all had a per-capita GDP at least as high as in 1930. By 1945, every country in Europe had a per-capita GDP at least as high as before the Great Depression.

The moral of this story is this: Recessions are bad, and can have far-reaching consequences; but ultimately what really matters in the long run is growth.

Assuming the same growth otherwise, a country that had a recession as large as the Great Depression would be about 70% as rich as one that didn’t.

But over 100 years, a country that experienced 3% growth instead of 2% growth would be over two and a half times richer.

Therefore, in terms of standard of living only, if you were given the choice between having a Great Depression but otherwise growing at 3%, and having no recessions but growing at 2%, your grandchildren would be better off if you chose the former. (Of course, given the possibility of political unrest or even war, the depression could very well end up worse.)
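The compounding arithmetic behind those last three paragraphs, as a quick sketch:

```python
# A Depression-sized recession leaves you ~70% as rich (a one-time level hit).
depression_factor = 0.70

# A century of 3% growth vs. 2% growth (a persistent rate difference):
growth_gap = 1.03 ** 100 / 1.02 ** 100    # ~2.65x richer

# The faster-growing economy comes out ahead even after a Great Depression:
combined = depression_factor * growth_gap  # ~1.86x
print(f"{growth_gap:.2f}x from growth alone, {combined:.2f}x even with a depression")
```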

With that in mind, I want you to think of the last few years—and especially the last few months—as a moral recession. Donald Trump being President of the United States is clearly a step backward for human civilization, and it seems to have breathed new life into some of the worst ideologies our society has ever harbored, from extreme misogyny, homophobia, right-wing nationalism, and White supremacism to outright Neo-Nazism. When one of the central debates in our public discourse is what level of violence is justifiable against Nazis under what circumstances, something has gone terribly, terribly wrong.

But much as recessions are overwhelmed in the long run by economic growth, there is reason to be confident that this moral backslide is temporary and will be similarly overwhelmed by humanity’s long-run moral progress.

What moral progress, you ask? Let’s remind ourselves.

Just 100 years ago, women could not vote in the United States.

160 years ago, slavery was legal in 15 US states.

Just 3 years ago, same-sex marriage was illegal in 14 US states. Yes, you read that number correctly. I said three. There are gay couples graduating high school and getting married now who as freshmen didn’t think they would be allowed to get married.

That’s just the United States. What about the rest of the world?

100 years ago, almost all of the world’s countries were dictatorships. Today, half of the world’s countries are democracies. Indeed, thanks to India, the majority of the world’s population now lives under democracy.

35 years ago, the Soviet Union still ruled most of Eastern Europe and Northern Asia with an iron fist (or should I say “curtain”?).

30 years ago, the number of human beings in extreme poverty—note I said number, not just rate; the world population was two-thirds what it is today—was twice as large as it is today.

Over the last 65 years, the global death rate due to war has fallen from 250 per million to just 10 per million.

The global literacy rate has risen from 40% to 80% in just 50 years.

World life expectancy has increased by 6 years in just the last 20 years.

We are living in a golden age. Do not forget that.

Indeed, if there is anything that could destroy all these astonishing achievements, I think it would be our failure to appreciate them.

If you listen to what these Neo-Nazi White supremacists say about their grievances, they sound like the spoiled children of millionaires (I mean, they elected one President, after all). They are outraged because they only get 90% of what they want instead of 100%—or they are outraged not because they didn’t get what they wanted, but because someone else they don’t know also did.

If you listen to the far left, their complaints don’t make much more sense. If you didn’t actually know any statistics, you’d think that life is just as bad for Black people in America today as it was under Jim Crow or even slavery. Well, it’s not even close. I’m not saying racism is gone; it’s definitely still here. But the civil rights movement has made absolutely enormous strides, from banning school segregation and housing redlining to reforming prison sentences and instituting affirmative action programs. Simply the fact that “racist” is now widely considered a terrible thing to be is a major accomplishment in itself. A typical Black person today, despite having only about 60% of the income of a typical White person, is still richer than a typical White person was just 50 years ago. While the 71% high school completion rate Black people currently have may not sound great, it’s much higher than the 50% rate that the whole US population had as recently as 1950.

Yes, there are some things that aren’t going very well right now. The two that I think are most important are climate change and income inequality. As both the global mean temperature anomaly and the world top 1% income share continue to rise, millions of people will suffer and die needlessly from diseases of poverty and natural disasters.

And of course if Neo-Nazis manage to take hold of the US government and try to repeat the Third Reich, that could be literally the worst thing that ever happened. If it triggered a nuclear war, it unquestionably would be literally the worst thing that ever happened. Both these events are unlikely—but not nearly as unlikely as they should be. (FiveThirtyEight interviewed several nuclear experts who estimated a probability of imminent nuclear war at a horrifying five percent.) So I certainly don’t want to make anyone complacent about these very grave problems.

But I worry also that we go too far the other direction, and fail to celebrate the truly amazing progress humanity has made thus far. We hear so often that we are treading water, getting nowhere, or even falling backward, that we begin to feel as though the fight for moral progress is utterly hopeless. If all these centuries of fighting for justice really had gotten us nowhere, the only sensible thing to do at this point would be to give up. But on the contrary, we have made enormous progress in an incredibly short period of time. We are on the verge of finally winning this fight. The last thing we want to do now is give up.

What Brexit means for you, Britain, and the world

July 6, JDN 2457576

It’s a stupid portmanteau, but it has stuck, so I guess I’ll suck it up and use the word “Brexit” to refer to the narrowly-successful referendum declaring that the United Kingdom will exit the European Union.

In this post I’ll try to answer the most googled question in the UK after the vote finished: “What does it mean to leave the EU?”

First of all, let’s answer the second-most googled question: “What is the EU?”

The European Union is one of those awkward international institutions, like the UN, NATO, and the World Bank, that doesn’t really have a lot of actual power, but is meant to symbolize international unity and ultimately work toward forming a more cohesive international government. This is probably how people felt about national government maybe 500 years ago, when feudalism was the main system of government and nation-states hadn’t really established themselves yet. Oh, sure, there’s a King of England and all that; but what does he really do? The real decisions are all made by the dukes and the earls and whatnot. Likewise today, the EU and NATO don’t really do all that much; the real decisions are made by the UK and the US.

The biggest things that the EU does are all economic; it creates a unified trade zone called the single market that is meant to allow free movement of people and goods between countries in Europe with little if any barrier. The ultimate goal was actually to make it as unified as internal trade within the United States, but it never quite made it that far. More realistically, it’s like NAFTA, but more so, and with ten times as many countries (yet, oddly enough, almost exactly the same number of people). Starting in 1999, the EU also created the Euro, a common currency shared across most of its member states, which to this day remains one of the world’s strongest, most stable currencies—right up there with the dollar and the pound.

Wait, the pound? Yes, the pound. While the UK entered the EU, they did not enter the Eurozone, and therefore retained their own national currency rather than joining the Euro. One of the first pieces of fallout from Brexit was a sudden drop in the pound’s value as investors around the world got skittish about the UK’s ability to support its current level of trade.
There are in fact several layers of “EU-ness”, if you will, several levels of commitment to the project of the European Union. The strongest commitment is from the Inner Six, the six founding countries (Belgium, France, the Netherlands, Luxembourg, Italy, and Germany), followed by the aforementioned Eurozone, followed by the Schengen Area (which bans passport controls among citizens of member countries), followed by the EU member states as a whole, followed by candidate states (such as Turkey), which haven’t joined yet but are trying to. The UK was never all that fully committed to the EU to begin with; they aren’t even in the Schengen Area, much less the Eurozone. So by this vote, the UK is essentially saying that they’d dipped their toes in the water, and it was too cold, so they’re going home.

Despite the fear of many xenophobic English people (yes, specifically English—Scotland and Northern Ireland overwhelmingly voted against leaving the EU), the EU already had very little control over the UK. Though I suppose they will now have even less.

Countries in the Eurozone were subject to a lot more control, via the European Central Bank controlling their money supply. The strong Euro is great for countries like Germany and France… and one of the central problems facing countries like Portugal and Greece. Strong currencies aren’t always a good thing—they cause trade deficits. And Greece has so little influence over European monetary policy that it’s essentially as if they were pegged to someone else’s currency. But the UK really can’t use this argument, because they’ve stayed on the pound all along.

The real question is what’s going to happen to the UK’s participation in the single market. I can outline four possible scenarios, from best to worst:

  1. Brexit doesn’t actually happen: Parliament could use (some would say “abuse”) their remaining authority to override the referendum and keep the UK in the EU. After a brief period of uncertainty, everything returns to normal. Probably the best outcome, but fairly unlikely, and rather undemocratic. Probability: 10%
  2. The single market is renegotiated, making Brexit more bark than bite: At this point, a more likely way for the UK to stop the bleeding would be to leave the EU formally, but renegotiate all the associated treaties and trade agreements so that most of the EU rules about free trade, labor standards, environmental regulations, and so on actually remain in force. This would result in a brief recession in the UK as policies take time to be re-established and markets are overwhelmed by uncertainty, but its long-term economic trajectory would remain the same. The result would be similar to the current situation in Norway, and hey, #ScandinaviaIsBetter. Probability: 40%
  3. Brexit is fully carried out, but the UK remains whole: If UKIP attains enough of a mandate and a majority coalition in Parliament, they could really push through their full agenda of withdrawing from European trade. If this happens, the UK would withdraw from the single market and could implement any manner of tariffs, quotas, and immigration restrictions. Hundreds of thousands of Britons living in Europe and Europeans living in Britain would be displaced. Trade between the UK and EU would dry up. Krugman argues that it won’t be as bad as the most alarmist predictions, but it will still be pretty bad—and he definitely should know, since this is the sort of thing he got a Nobel for. The result would be a severe recession, with an immediate fall in UK GDP of somewhere between 2% and 4%, and a loss of long-run potential GDP between 6% and 8%. (For comparison, the Great Recession in the US was a loss of about 5% of GDP over 2 years.) The OECD has run a number of models on this, and the Bank of England is especially worried because they have little room to lower interest rates to fight such a recession. Their best bet would probably be to print an awful lot of pounds, but with the pound already devalued and so much national pride wrapped up in the historical strength of the pound, that seems unlikely. The result would therefore be a loss of about $85 billion in wealth immediately and more like $200 billion per year in the long run—for basically no reason. Sadly, this is the most likely scenario. Probability: 45%
  4. Balkanization of the UK: As I mentioned earlier, Scotland and Northern Ireland overwhelmingly voted against Brexit, and want no part of it. As a result, they have actually been making noises about leaving the UK if the UK decides to leave the EU. The First Minister of Scotland has proposed an “independence referendum” on Scotland leaving the UK in order to stay in the EU, and a grassroots movement in Northern Ireland is pushing for unification of all of Ireland in order to stay in the EU with the Republic of Ireland. This sort of national shake-up is basically unprecedented: parts of one state breaking off in order to stay in a larger international union? The closest example I can think of is West Germany and East Germany splitting to join NATO and the Eastern Bloc respectively, and I think we all know how well that went for East Germany. But really this is much more radical than that. NATO was a military alliance, not an economic union; nuclear weapons understandably make people do drastic things. Moreover, Germany hadn’t unified in the first place until Bismarck in 1871, and thus was less than a century old when it split again. Scotland joined England to form the United Kingdom in 1707, three centuries ago, at a time when the United States didn’t even exist—indeed, George Washington hadn’t even been born. Scotland leaving the UK to stay with the EU would be like Texas leaving the US to stay in NAFTA—nay, more like Massachusetts doing that, because Scotland was a founding member of the UK and Texas didn’t become a state until 1845. While Scotland might actually be better off this way than if they go along with Brexit (and England of course even worse), this Balkanization would cast a dark shadow over all projects of international unification for decades to come, at a level far beyond what any mere Brexit could do. It would essentially mean declaring that all national unity is up for grabs; that there is no such thing as a permanently unified state.
I never thought I would see such a policy even being considered, much less passed; but I can’t be sure it won’t happen. My best hope is that Scotland can use this threat to keep the UK in the EU, or at least in the single market—but what if UKIP calls their bluff? Probability: 5%
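For readers who want to check the arithmetic in scenario 3: the dollar figures follow from the percentage estimates once you plug in a GDP figure. Here is a quick sanity check in Python; the $2.9 trillion UK GDP value is my own rough assumption for the period around the referendum, not a number taken from the OECD models.

```python
# Rough sanity check of the scenario-3 loss figures.
# Assumption: UK GDP of roughly $2.9 trillion around 2016.
uk_gdp = 2.9e12  # USD (assumed)

immediate_loss = 0.03 * uk_gdp  # midpoint of the 2-4% immediate fall in GDP
long_run_loss = 0.07 * uk_gdp   # midpoint of the 6-8% loss of long-run potential GDP

# These land close to the post's $85 billion and $200 billion/year figures.
print(f"Immediate loss: ${immediate_loss / 1e9:.0f} billion")
print(f"Long-run loss:  ${long_run_loss / 1e9:.0f} billion per year")
```

The exact outputs depend on which GDP figure you assume, but any reasonable value for UK GDP puts the losses in the same ballpark as the figures above.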

Options 2 and 3 are the most likely, and actually there are intermediate cases between them; they could only implement immigration restrictions but not tariffs, for example, and that would lessen the economic fallout but still displace hundreds of thousands of people. They could only remove a few of the most stringent EU regulations, but still keep most of the good ones; that wouldn’t be so bad. Or they could be idiots and remove the good regulations (like environmental sustainability and freedom of movement) while keeping the more questionable ones (like the ban on capital controls).

Only time will tell, and the most important thing to keep in mind here is that trade is nonzero-sum. If and when England loses that $200 billion per year in trade, where will it go? Nowhere. It will disappear. That wealth—about enough to end world hunger—will simply never be created, because xenophobia reintroduced inefficiencies into the global market. Yes, it might not all disappear—Europe’s scramble for import sources and export markets could lead to say $50 billion per year in increased US trade, for example, because we’re the obvious substitute—but the net effect on the whole world will almost certainly be negative. The world will become poorer, and Britain will feel it the most.

Still, like most economists, I feel another emotion besides “What have they done!? This is terrible!”; another part of my brain is saying, “Wow, this is an amazing natural experiment in free trade!” Maybe the result will be bad enough to make people finally wake up about free trade, but not bad enough to cause catastrophic damage. If nothing else, it’ll give economists something to work on for years.

Believing in civilization without believing in colonialism

JDN 2457541

In a post last week I presented some of the overwhelming evidence that society has been getting better over time, particularly since the start of the Industrial Revolution. I focused mainly on infant mortality rates—babies not dying—but there are lots of other measures you could use as well. Contrary to popular belief, poverty is rapidly declining, and is now the lowest it’s ever been. War is rapidly declining. Crime is rapidly declining in First World countries, and to the best of our knowledge crime rates are stable worldwide. Public health is rapidly improving. Lifespans are getting longer. And so on, and so on. It’s not quite true to say that every indicator of human progress is on an upward trend, but the vast majority of really important indicators are.

Moreover, there is every reason to believe that this great progress is largely the result of what we call “civilization”, even Western civilization: Stable, centralized governments, strong national defense, representative democracy, free markets, openness to global trade, investment in infrastructure, science and technology, secularism, a culture that values innovation, and freedom of speech and the press. We did not get here by Marxism, nor agrarian socialism, nor primitivism, nor anarcho-capitalism. We did not get here by fascism, nor theocracy, nor monarchy. This progress was built by the center-left welfare state, “social democracy”, “modified capitalism”, the system where free, open markets are coupled with a strong democratic government to protect and steer them.

This fact is basically beyond dispute; the evidence is overwhelming. The serious debate in development economics is over which parts of the Western welfare state are most conducive to raising human well-being, and which parts of the package are more optional. And even then, some things are fairly obvious: Stable government is clearly necessary, while speaking English is clearly optional.

Yet many people are resistant to this conclusion, or even offended by it, and I think I know why: They are confusing the results of civilization with the methods by which it was established.

The results of civilization are indisputably positive: Everything I just named above, especially babies not dying.

But the methods by which civilization was established are not; indeed, some of the greatest atrocities in human history are attributable at least in part to attempts to “spread civilization” to “primitive” or “savage” people.
It is therefore vital to distinguish between the result, civilization, and the processes by which it was effected, such as colonialism and imperialism.

First, it’s important not to overstate the link between civilization and colonialism.

We tend to associate colonialism and imperialism with White people from Western European cultures conquering other people in other cultures; but in fact colonialism and imperialism are basically universal to any human culture that attains sufficient size and centralization. India engaged in colonialism, Persia engaged in imperialism, China engaged in imperialism, the Mongols were of course major imperialists, and don’t forget the Ottoman Empire; and did you realize that Tibet and Mali were at one time imperialists as well? And of course there are a whole bunch of empires you’ve probably never heard of, like the Parthians and the Ghaznavids and the Umayyads. Even many of the people we’re accustomed to thinking of as innocent victims of colonialism were themselves imperialists—the Aztecs certainly were (they even sold people into slavery and used them for human sacrifice!), as were the Pequot, and the Iroquois may not have outright conquered anyone but were definitely at least “soft imperialists” the way that the US is today, spreading their influence around and using economic and sometimes military pressure to absorb other cultures into their own.

Of course, those were all civilizations, at least in the broadest sense of the word; but before that, it’s not that there wasn’t violence, it just wasn’t organized enough to be worthy of being called “imperialism”. The more general concept of intertribal warfare is a human universal, and some hunter-gatherer tribes actually engage in an essentially constant state of warfare we call “endemic warfare”. People have been grouping together to kill other people they perceived as different for at least as long as there have been people to do so.

This is of course not to excuse what European colonial powers did when they set up bases on other continents and exploited, enslaved, or even murdered the indigenous population. And the absolute numbers of people enslaved or killed are typically larger under European colonialism, mainly because European cultures became so powerful and conquered almost the entire world. Even if European societies were not uniquely predisposed to be violent (and I see no evidence to say that they were—humans are pretty much humans), they were more successful in their violent conquering, and so more people suffered and died. It’s also a first-mover effect: If the Ming Dynasty had supported Zheng He more in his colonial ambitions, I’d probably be writing this post in Mandarin and reflecting on why Asian cultures have engaged in so much colonial oppression.

While there is a deeply condescending paternalism (and often post-hoc rationalization of your own self-interested exploitation) involved in saying that you are conquering other people in order to civilize them, humans are also perfectly capable of committing atrocities for far less noble-sounding motives. There are holy wars such as the Crusades and ethnic genocides like in Rwanda, and the Arab slave trade was purely for profit and didn’t even have the pretense of civilizing people (not that the Atlantic slave trade was ever really about that anyway).

Indeed, I think it’s important to distinguish between colonialists who really did make some effort at civilizing the populations they conquered (like Britain, and also the Mongols actually) and those that clearly were just using that as an excuse to rape and pillage (like Spain and Portugal). This is similar to but not quite the same thing as the distinction between settler colonialism, where you send colonists to live there and build up the country, and exploitation colonialism, where you send military forces to take control of the existing population and exploit them to get their resources. Countries that experienced settler colonialism (such as the US and Australia) have fared a lot better in the long run than countries that experienced exploitation colonialism (such as Haiti and Zimbabwe).

The worst consequences of colonialism weren’t even really anyone’s fault, actually. The reason something like 98% of all Native Americans died as a result of European colonization was not that Europeans killed them—they did kill thousands of course, and I hope it goes without saying that that’s terrible, but it was a small fraction of the total deaths. The reason such a huge number died and whole cultures were depopulated was disease, and the inability of medical technology in any culture at that time to handle such a catastrophic plague. The primary cause was therefore accidental, and not really foreseeable given the state of scientific knowledge at the time. (I therefore think it’s wrong to consider it genocide—maybe democide.) Indeed, what really would have saved these people would be if Europe had advanced even faster into industrial capitalism and modern science, or else waited to colonize until they had; and then they could have distributed vaccines and antibiotics when they arrived. (Of course, there is evidence that a few European colonists used the diseases intentionally as biological weapons, which no amount of vaccine technology would prevent—and that is indeed genocide. But again, this was a small fraction of the total deaths.)

However, even with all those caveats, I hope we can all agree that colonialism and imperialism were morally wrong. No nation has the right to invade and conquer other nations; no one has the right to enslave people; no one has the right to kill people based on their culture or ethnicity.

My point is that it is entirely possible to recognize that and still appreciate that Western civilization has dramatically improved the standard of human life over the last few centuries. It simply doesn’t follow from the fact that British government and culture were more advanced and pluralistic that British soldiers can just go around taking over other people’s countries and planting their own flag (follow the link if you need some comic relief from this dark topic). That was the moral failing of colonialism; not that they thought their society was better—for in many ways it was—but that they thought that gave them the right to terrorize, slaughter, enslave, and conquer people.

Indeed, the “justification” of colonialism is a lot like that bizarre pseudo-utilitarianism I mentioned in my post on torture, where the mere presence of some benefit is taken to justify any possible action toward achieving that benefit. No, that’s not how morality works. You can’t justify unlimited evil by any good—it has to be a greater good, as in actually greater.

So let’s suppose that you do find yourself encountering another culture which is clearly more primitive than yours; their inferior technology results in them living in poverty and having very high rates of disease and death, especially among infants and children. What, if anything, are you justified in doing to intervene to improve their condition?

One idea would be to hold to the Prime Directive: No intervention, no sir, not ever. This is clearly what Gene Roddenberry thought of imperialism, hence why he built it into the Federation’s core principles.

But does that really make sense? Even as Star Trek shows progressed, the writers kept coming up with situations where the Prime Directive really seemed like it should have an exception, and sometimes decided that the honorable crew of Enterprise or Voyager really should intervene in this more primitive society to save them from some terrible fate. And I hope I’m not committing a Fictional Evidence Fallacy when I say that if your fictional universe, specifically designed not to let that happen, makes that happen, well… maybe it’s something we should be considering.

What if people are dying of a terrible disease that you could easily cure? Should you really deny them access to your medicine to avoid intervening in their society?

What if the primitive culture is ruled by a horrible tyrant that you could easily depose with little or no bloodshed? Should you let him continue to rule with an iron fist?

What if the natives are engaged in slavery, or even their own brand of imperialism against other indigenous cultures? Can you fight imperialism with imperialism?

And then we have to ask, does it really matter whether their babies are being murdered by the tyrant or simply dying from malnutrition and infection? The babies are just as dead, aren’t they? Even if we say that being murdered by a tyrant is worse than dying of malnutrition, it can’t be that much worse, can it? Surely 10 babies dying of malnutrition is at least as bad as 1 baby being murdered?

But then it begins to seem like we have a duty to intervene, and moreover a duty that applies in almost every circumstance! If you are on opposite sides of the technology threshold where infant mortality drops from 30% to 1%, how can you justify not intervening?

I think the best answer here is to keep in mind the very large costs of intervention as well as the potentially large benefits. The answer sounds simple, but is actually perhaps the hardest possible answer to apply in practice: You must do a cost-benefit analysis. Furthermore, you must do it well. We can’t demand perfection, but it must actually be a serious good-faith effort to predict the consequences of different intervention policies.

We know that people tend to resist most outside interventions, especially if you have the intention of toppling their leaders (even if they are indeed tyrannical). Even the simple act of offering people vaccines could be met with resistance, as the native people might think you are poisoning them or somehow trying to control them. But in general, opening contact with gifts and trade is almost certainly going to trigger less hostility and therefore be more effective than going in guns blazing.

If you do use military force, it must be targeted at the particular leaders who are most harmful, and it must be designed to achieve swift, decisive victory with minimal collateral damage. (Basically I’m talking about just war theory.) If you really have such an advanced civilization, show it by exhibiting total technological dominance and minimizing the number of innocent people you kill. The NATO interventions in Kosovo and Libya mostly got this right. The Vietnam War and Iraq War got it totally wrong.

As you change their society, you should be prepared to bear most of the cost of transition; you are, after all, much richer than they are, and also the ones responsible for effecting the transition. You should not expect to see short-term gains for your own civilization, only long-term gains once their culture has advanced to a level near your own. You can’t bear all the costs of course—transition is just painful, no matter what you do—but at least the fungible economic costs should be borne by you, not by the native population. Examples of doing this wrong include basically all the standard examples of exploitation colonialism: Africa, the Caribbean, South America. Examples of doing this right include West Germany and Japan after WW2, and South Korea after the Korean War—which is to say, the greatest economic successes in the history of the human race. This was us winning development, humanity. Do this again everywhere and we will have not only ended world hunger, but achieved global prosperity.

What happens if we apply these principles to real-world colonialism? It does not fare well. Nor should it, as we’ve already established that most if not all real-world colonialism was morally wrong.

15th and 16th century colonialism fails immediately; it offers no benefit to speak of. Europe’s technological superiority was enough to give them gunpowder but not enough to drop their infant mortality rate. Maybe life was better in 16th century Spain than it was in the Aztec Empire, but honestly not by all that much; and life in the Iroquois Confederacy was in many ways better than life in 15th century England. (Though maybe that justifies some Iroquois imperialism, at least their “soft imperialism”?)

If these principles did justify any real-world imperialism—and I am not convinced that they do—it would only be much later imperialism, like the British Empire in the 19th and 20th century. And even then, it’s not clear that the talk of “civilizing” people and “the White Man’s Burden” was much more than rationalization, an attempt to give a humanitarian justification for what were really acts of self-interested economic exploitation. Even though India and South Africa are probably better off now than they were when the British first took them over, it’s not at all clear that this was really the goal of the British government so much as a side effect, and there are a lot of things the British could have done differently that would obviously have made them better off still—you know, like not implementing the precursors to apartheid, or making India a parliamentary democracy immediately instead of starting with the Raj and only conceding to democracy after decades of protest. What actually happened doesn’t exactly look like Britain cared nothing for actually improving the lives of people in India and South Africa (they did build a lot of schools and railroads, and sought to undermine slavery and the caste system), but it also doesn’t look like that was their only goal; it was more like one goal among several which also included the strategic and economic interests of Britain. It isn’t enough that Britain was a better society or even that they made South Africa and India better societies than they were; if the goal wasn’t really about making people’s lives better where you are intervening, it’s clearly not justified intervention.

And that’s the relatively beneficent imperialism; the really horrific imperialists throughout history made only the barest pretense of spreading civilization and were clearly interested in nothing more than maximizing their own wealth and power. This is probably why we get things like the Prime Directive; we saw how bad it can get, and overreacted a little by saying that intervening in other cultures is always, always wrong, no matter what. It was only a slight overreaction—intervening in other cultures is usually wrong, and almost all historical examples of it were wrong—but it is still an overreaction. There are exceptional cases where intervening in another culture can be not only morally right but obligatory.

Indeed, one underappreciated consequence of colonialism and imperialism is that they have triggered a backlash against real good-faith efforts toward economic development. People in Africa, Asia, and Latin America see economists from the US and the UK (and most of the world’s top economists are in fact educated in the US or the UK) come in and tell them that they need to do this and that to restructure their society for greater prosperity, and they understandably ask: “Why should I trust you this time?” The last two or four or seven batches of people coming from the US and Europe to intervene in their countries exploited them or worse, so why is this time any different?

It is different, of course; UNDP is not the East India Company, not by a long shot. Even for all their faults, the IMF isn’t the East India Company either. Indeed, while these people largely come from the same places as the imperialists, and may be descended from them, they are in fact completely different people, and moral responsibility does not inherit across generations. While the suspicion is understandable, it is ultimately unjustified; whatever happened hundreds of years ago, this time most of us really are trying to help—and it’s working.

What really happened in Greece

JDN 2457506

I said I’d get back to this issue, so here goes.

Let’s start with what is uncontroversial: Greece is in trouble.

Their per-capita GDP PPP has fallen from a peak of over $32,000 in 2007 to a trough of just over $24,000 in 2013, and only just began to recover over the last 2 years. That’s a fall of 29 log points. Put another way, the average person in Greece has about the same real income now that they had in the year 2000—a decade and a half of economic growth disappeared.
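A quick note on the arithmetic, since “log points” trips people up: the figure is just the natural log of the peak-to-trough ratio, times 100. Here is the calculation in Python, using the rounded GDP figures above:

```python
import math

peak, trough = 32_000, 24_000  # per-capita GDP PPP (USD), rounded figures from above

# Log points: 100 * ln(peak / trough). Unlike ordinary percentages,
# log points are symmetric between rises and falls.
fall_log_points = 100 * math.log(peak / trough)
fall_percent = 100 * (peak - trough) / peak

print(f"{fall_log_points:.0f} log points")  # 29
print(f"{fall_percent:.0f}% in ordinary percentage terms")  # 25%
```

The same decline reads as 25% in ordinary percentage terms; log points are slightly larger for falls, which is why the post says 29.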

Their unemployment rate surged from about 7% in 2007 to almost 28% in 2013. It remains over 24%. That is, almost one quarter of the Greek labor force is seeking jobs and not finding them. The US has not seen an unemployment rate that high since the Great Depression.

Most shocking of all, over 40% of the population in Greece is now below the national poverty line. They define poverty as 60% of the inflation-adjusted average income in 2009, which works out to 665 Euros per person ($756 at current exchange rates) per month, or about $9000 per year. They also have an absolute poverty line, which 14% of Greeks now fall below, but only 2% did before the crash.
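The annual figure is just the monthly dollar figure times 12; here is the arithmetic in Python, using the post’s own numbers:

```python
eur_per_month = 665  # the relative poverty line quoted above
usd_per_month = 756  # the same figure at the exchange rate quoted above
usd_per_year = usd_per_month * 12

print(f"${usd_per_year:,} per year")  # $9,072, i.e. "about $9000"
print(f"Implied exchange rate: {usd_per_month / eur_per_month:.2f} USD/EUR")
```

The implied exchange rate of about 1.14 dollars per euro is consistent with rates in the period when the post was written.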

So now, let’s talk about why.

There’s a standard narrative you’ve probably heard many times, which goes something like this:

The Greek government spent too profligately, heaping social services on the population without the tax base to support them. Unemployment insurance was too generous; pensions were too large; it was too hard to fire workers or cut wages. Thus, work incentives were too weak, and there was no way to sustain a high GDP. But they refused to cut back on these social services, and as a result went further and further into debt until it finally became unsustainable. Now they are cutting spending and raising taxes like they needed to, and it will eventually allow them to repay their debt.

Here’s a fellow of the Cato Institute spreading this narrative on the BBC. Here’s ABC with a five bullet-point list: Pension system, benefits, early retirement, “high unemployment and work culture issues” (yes, seriously), and tax evasion. Here the Telegraph says that Greece “went on a spending spree” and “stopped paying taxes”.

That story is almost completely wrong. Almost nothing about it is true. Cato and the Telegraph got basically everything wrong. The only one ABC got right was tax evasion.

Here’s someone else arguing that Greece has a problem with corruption and failed governance; there is something to be said for this, as Greece is fairly corrupt by European standards—though hardly by world standards. For being only a generation removed from an authoritarian military junta, they’re doing quite well actually. They’re about as corrupt as a typical upper-middle income country like Libya or Botswana; and Botswana is widely regarded as the shining city on a hill of transparency as far as Sub-Saharan Africa is concerned. So corruption may have made things worse, but it can’t be the whole story.

First of all, social services in Greece were not particularly extensive compared to the rest of Europe.

Before the crisis, Greece’s government spending was about 44% of GDP.

That was about the same as Germany. It was slightly more than the UK. It was less than Denmark and France, both of which have government spending of about 50% of GDP.

Greece even tried to cut spending to pay down their debt—it didn’t work, because they simply ended up worsening the economic collapse and undermining the tax base they needed to do that.

Europe has fairly extensive social services by world standards—but that’s a major part of why it’s the First World. Even the US, despite spending far less than Europe on social services, still spends a great deal more than most countries—about 36% of GDP.

Second, if work incentives were a problem, you would not have high unemployment. People don’t seem to grasp what the word unemployment actually means, which is part of why I can’t stand it when news outlets just arbitrarily substitute “jobless” to save a couple of syllables. Unemployment does not mean simply that you don’t have a job. It means that you don’t have a job and are trying to get one.

The word you’re looking for to describe simply not having a job is nonemployment, and that’s such a rarely used term my spell-checker complains about it. Yet economists rarely use this term precisely because it doesn’t matter; a high nonemployment rate is not a symptom of a failing economy but a result of high productivity moving us toward the post-scarcity future (kicking and screaming, evidently). If the problem with Greece were that they were too lazy and they retire too early (which is basically what ABC was saying in slightly more polite language), there would be high nonemployment, but there would not be high unemployment. “High unemployment and work culture issues” is actually a contradiction.

Before the crisis, Greece had an employment-to-population ratio of 49%, meaning a nonemployment rate of 51%. If that sounds ludicrously high, you’re not accustomed to nonemployment figures. During the same time, the United States had an employment-to-population ratio of 52% and thus a nonemployment rate of 48%. So the number of people in Greece who were voluntarily choosing to drop out of work before the crisis was just slightly larger than the number in the US—and actually when you adjust for the fact that the US is full of young immigrants and Greece is full of old people (their median age is 10 years older than ours), it begins to look like it’s we Americans who are lazy. (Actually, it’s that we are studious—the US has an extremely high rate of college enrollment and the best colleges in the world. Full-time students are nonemployed, but they are certainly not unemployed.)

But Greece does have an enormously high debt, right? Yes—but it was actually not as bad before the crisis. Their government debt surged from 105% of GDP to almost 180% today. 105% of GDP is about what we have right now in the US; it’s less than what we had right after WW2. This is a little high, but really nothing to worry about, especially if you’ve incurred the debt for the right reasons. (The famous paper by Reinhart and Rogoff arguing that 90% of GDP is a horrible point of no return was literally based on math errors.)

Moreover, Ireland and Spain suffered much the same fate as Greece, despite running primary budget surpluses.

So… what did happen? If it wasn’t their profligate spending that put them in this mess, what was it?

Well, first of all, there was the Second Depression, a worldwide phenomenon triggered by the collapse of derivatives markets in the United States. (You want unsustainable debt? Try 20 to 1 leveraged CDO-squareds and one quadrillion dollars in notional value. Notional value isn’t everything, but it’s a lot.) So it’s mainly our fault, or rather the fault of our largest banks. As far as us voters, it’s “our fault” in the way that if your car gets stolen it’s “your fault” for not locking the doors and installing a LoJack. We could have regulated against this and enforced those regulations, but we didn’t. (Fortunately, Dodd-Frank looks like it might be working.)

Greece was hit particularly hard because they are highly dependent on trade, particularly in services like tourism that are highly sensitive to the business cycle. Before the crash they imported 36% of GDP and exported 23% of GDP. Now they import 35% of GDP and export 33% of GDP—but it’s a much smaller GDP. Their exports have only slightly increased while their imports have plummeted. (This has reduced their “trade deficit”, but that has always been a silly concept. I guess it’s less silly if you don’t control your own currency, but it’s still silly.)
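To see why these shares imply “exports up slightly, imports plummeting,” multiply each share by the relevant GDP. Here is a quick sketch in Python, using the per-capita GDP figures from earlier in the post as a stand-in for total GDP (the shapes of the changes are what matter, not the absolute units):

```python
gdp_before, gdp_after = 32_000, 24_000  # per-capita GDP PPP from earlier in the post

exports_before = 0.23 * gdp_before  # 23% of the pre-crash GDP
exports_after = 0.33 * gdp_after    # 33% of a much smaller GDP
imports_before = 0.36 * gdp_before
imports_after = 0.35 * gdp_after

# Exports barely moved in absolute terms; imports fell by more than a quarter.
print(f"Exports: {exports_before:.0f} -> {exports_after:.0f}")  # 7360 -> 7920
print(f"Imports: {imports_before:.0f} -> {imports_after:.0f}")  # 11520 -> 8400
```

So the rising export share is almost entirely an artifact of the shrinking denominator, exactly as the post says.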

Once the crash happened, the US had sovereign monetary policy and the wherewithal to actually use that monetary policy effectively, so we weathered the crash fairly well, all things considered. Our unemployment rate barely went over 10%. But Greece did not have sovereign monetary policy—they are tied to the Euro—and that severely limited their options for expanding the money supply as a result of the crisis. Raising spending and cutting taxes was the best thing they could do.

But the bank(st?)ers and their derivatives schemes caused the Greek debt crisis a good deal more directly than just that. Part of the condition of joining the Euro was that countries must limit their fiscal deficit to no more than 3% of GDP (which is a totally arbitrary figure with no economic basis in case you were wondering). Greece was unwilling or unable to do so, but wanted to look like they were following the rules—so they called up Goldman Sachs and got them to make some special derivatives that Greece could use to continue borrowing without looking like they were borrowing. The bank could have refused; they could have even reported it to the European Central Bank. But of course they didn’t; they got their brokerage fee, and they knew they’d sell it off to some other bank long before they had to worry about whether Greece could ever actually repay it. And then (as I said I’d get back to in a previous post) they paid off the credit rating agencies to get them to rate these newfangled securities as low-risk.

In other words, Greece is not broke; they are being robbed.

Like homeowners in the US, Greece was offered loans they couldn’t afford to pay, but the banks told them they could, because the banks had lost all incentive to actually bother with the question of whether loans can be repaid. They had “moved on”; their “financial innovation” of securitization and collateralized debt obligations meant that they could collect origination fees and brokerage fees on loans that could never possibly be repaid, then sell them off to some Greater Fool down the line who would end up actually bearing the default. As long as the system was complex enough and opaque enough, the buyers would never realize the garbage they were getting until it was too late. The entire concept of loans was thereby broken: The basic assumption that you only loan money you expect to be repaid no longer held.

And it worked, for a while, until finally the unpayable loans tried to create more money than there was in the world, and people started demanding repayment that simply wasn’t possible. Then the whole scheme fell apart, and banks began to go under—but of course we saved them, because you’ve got to save the banks, how can you not save the banks?

Honestly, I don’t even disagree with saving the banks; it was probably necessary. What bothers me is that we did nothing to save everyone else. We did nothing to keep people in their homes, nothing to stop businesses from collapsing and workers losing their jobs. Precisely because of the absurd over-leveraging of the financial system, the cost to simply refinance every mortgage in America would have been less than the amount we loaned out in bank bailouts. The banks probably would have done fine anyway, but if they didn’t, so what? The banks exist to serve the people—not the other way around.

We can stop this from happening again—here in the US, in Greece, in the rest of Europe, everywhere. But in order to do that we must first understand what actually happened; we must stop blaming the victims and start blaming the perpetrators.

How we can best help refugees

JDN 2457376

Though the debate seems to have simmered down a little over the past few weeks, the fact remains that we are in the middle of a global refugee crisis. There are 4 million refugees from Syria alone, part of 10 million refugees worldwide from various conflicts.

The ongoing occupation of the terrorist group / totalitarian state Daesh (also known as Islamic State, ISIS and ISIL, but like John Kerry, I like to use Daesh precisely because they seem to hate it) has displaced almost 14 million people, 3.3 million of them refugees from Syria.

Most of these refugees have fled to Lebanon, Jordan, Turkey, and Iraq, for the obvious reason that these countries are both geographically closest and culturally best equipped to handle them.

There is another reason, however: Some of the other countries in the region, notably Saudi Arabia, have taken no refugees at all. In an upcoming post I intend to excoriate Saudi Arabia for a number of reasons, but this one is perhaps the most urgent. Their response? They simply deny it outright, claiming they’ve taken millions of refugees and somehow nobody noticed.

Turkey and Lebanon are stretched to capacity, however; they simply do not have the resources to take on more refugees. This gives the other nations of the world only two morally legitimate options:

1. We could take more refugees ourselves.

2. We could supply funding and support to Turkey and Lebanon for them to take on more refugees.

Most of the debate has centered around option (1), and in particular around Obama’s plan to take on about 10,000 refugees to the United States, which Ted Cruz calls “lunacy” (to be fair, if it takes one to know one…).

This debate has actually served more to indict the American population for paranoia and xenophobia than anything else. The fact that 17 US states—including some with Democratic governors—have unilaterally declared that they will not accept refugees (despite having absolutely no Constitutional authority to make such a declaration) is truly appalling.

Even if everything that the xenophobic bigots say were true—even if we really were opening ourselves to increased risk of terrorism and damaging our economy and subjecting ourselves to mass unemployment—we would still have a moral duty as human beings to help these people.

And of course almost all of it is false.

Only a tiny fraction of refugees are terrorists, indeed very likely smaller than the fraction of the native population or the fraction of those who arrive on legal visas, meaning that we would actually be diluting our risk of terrorism by accepting more refugees. And as you may recall from my post on 9/11, our risk of terrorism is already so small that the only thing we have to fear is fear itself.

There is a correlation between terrorism and refugees, but it’s almost entirely driven by the opposite effect: terrorism causes refugee crises.

The net aggregate economic effect of immigration is most likely positive. The effect on employment is more ambiguous; immigration does appear to create a small increase in unemployment in the short run as all those new people try to find jobs, and there is some evidence that it may reduce wages for local low-skill workers. But the employment effect is small and temporary, and there is a long-run boost in overall productivity. However, it may not have much effect on overall growth: the positive correlation between immigration and economic growth is primarily due to the fact that higher growth triggers more immigration.

And of course, it’s important to keep in mind that the reason wages are depressed at all is that people come from places where wages are even lower, so they improve their standard of living, but may also reduce the standard of living of some of the workers who were already here. The paradigmatic example is immigrants who leave a wage of $4 per hour in Mexico, arrive in California, and end up reducing wages in California from $10 to $8. While this certainly hurts some people who went from $10 to $8, it’s so narrow-sighted as to border on racism to ignore the fact that it also raised other people from $4 to $8. The overall effect is not simply to redistribute wealth from some to others, but actually to create more wealth. If there are things we can do to prevent low-skill wages from falling, perhaps we should; but systematically excluding people who need work is not the way to do that.

Accepting 10,000 more refugees would have a net positive effect on the American economy—though given our huge population and GDP, probably a negligible one. It has been pointed out that Germany’s relatively open policy advances the interests of Germany as much as it does those of the refugees; but so what? They are doing the right thing, even if it’s not for entirely altruistic reasons. One of the central insights of economics is that the universe is nonzero-sum; helping someone else need not mean sacrificing your own interests, and when it doesn’t, the right thing to do should be a no-brainer. Instead of castigating Germany for doing what needs to be done for partially selfish reasons, we should be castigating everyone else for not even doing what’s in their own self-interest because they are so bigoted and xenophobic they’d rather harm themselves than help someone else. (Also, it does not appear to be in Angela Merkel’s self-interest to take more refugees; she is spending a lot of political capital to make this happen.)

We could follow Germany’s example, and Obama’s plan would move us in that direction.

But the fact remains that we could go through with Obama’s plan, indeed double, triple, quadruple it—and still not make a significant dent in the actual population of refugees who need help. When 1,500,000 people need help and the most powerful nation in the world offers to help 10,000, that isn’t an act of great openness and generosity; it’s almost literally the least we could do. 10,000 is only 0.7% of 1.5 million; even if we simply accepted a number of refugees proportional to our own population, it would be more like 70,000. If we instead accepted a number proportional to our GDP, we should be taking on closer to 400,000.
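That arithmetic can be checked with a quick back-of-envelope sketch. The US figures come from the post; the world population and world GDP totals are my own assumed round numbers for illustration:

```python
# Back-of-envelope shares of 1.5 million refugees.
refugees = 1_500_000
us_pop, world_pop = 322e6, 7.3e9    # world population: assumed figure
us_gdp, world_gdp = 18e12, 78e12    # world GDP: assumed figure

print(f"10,000 as a share of refugees:  {10_000 / refugees:.1%}")                # ~0.7%
print(f"Population-proportional intake: {refugees * us_pop / world_pop:,.0f}")   # ~66,000
print(f"GDP-proportional intake:        {refugees * us_gdp / world_gdp:,.0f}")   # ~346,000
```

Under these assumptions the population-proportional figure comes out near 70,000 and the GDP-proportional figure near 400,000, matching the rounded numbers in the text.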

This is why in fact I think option (2) may be the better choice.

There actually are real cultural and linguistic barriers to assimilation for Syrian people in the United States, barriers which are much lower in Turkey and Lebanon. Immigrant populations always inevitably assimilate eventually, but there is a period of transition which is painful for both immigrants and locals, often lasting a decade or more. On top of this there is the simple logistical cost of moving all those people that far; crossing the border into Lebanon is difficult enough without having to raft across the Mediterranean, let alone being airlifted or shipped all the way across the Atlantic afterward. The fact that many refugees are willing to bear such a cost serves to emphasize their desperation; but it also suggests that there may be alternatives that would work out better for everyone.

The United States has a large population at 322 million; but Turkey (78 million) has about a quarter of our population and Jordan (8 million) and Lebanon (6 million) are about the size of our largest cities.

Our GDP, on the other hand, is vastly larger. At $18 trillion, we have 12 times the GDP of Turkey ($1.5 T), and there are individual American billionaires with wealth larger than the GDPs of Lebanon ($50 B) and Jordan ($31 B).

This means that while we have an absolute advantage in population, we have a comparative advantage in wealth—and the benefits of trade depend on comparative advantage. It therefore makes sense for us to in a sense “trade” wealth for population; in exchange for taking on fewer refugees, we would offer to pay a larger share of the expenses involved in housing, feeding, and ultimately assimilating those refugees.

Another thing we could offer (and have a comparative as well as absolute advantage in) is technology. These surprisingly-nice portable shelters designed by IKEA are an example of how First World countries can contribute to helping refugees without necessarily accepting them into their own borders (as well as an example of why #Scandinaviaisbetter). We could be sending equipment and technicians to provide electricity, Internet access, or even plumbing to the refugee camps. We could ship them staple foods or even MREs. (On the other hand, I am not impressed by the tech entrepreneurs whose “solutions” apparently involve selling more smartphone apps.)

The idea of actually taking on 400,000 or even 70,000 additional people into the United States is daunting even for those of us who strongly believe in helping the refugees—in the former case we’re adding another Cleveland, and even in the latter we’d be almost doubling Dearborn. But if we estimate the cost of simply providing money to support the refugee camps, the figures come out a lot less demanding.
Charities are currently providing money on the order of millions—which is to say on the order of single dollars per person. GBP 887,000 sounds like a lot of money until you realize it’s less than $0.50 per Syrian refugee.

Suppose we were to grant $5,000 per refugee per year. That’s surely more than enough. The UN is currently asking for $6.5 billion, which is only about $1,500 per refugee.

Yet to supply that much for all 4 million refugees would cost us only $20 billion per year, a mere 0.1% of our GDP. (Or if you like, a mere 3% of our military budget, which is probably smaller than what the increase would be if we stepped up our military response to Daesh.)
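As a sanity check on those figures (the roughly $600 billion military budget is my assumed round number, not from the post):

```python
# Cost of a $5,000-per-refugee-per-year grant, checked against US totals.
grant, refugees = 5_000, 4_000_000
us_gdp = 18e12
military_budget = 600e9   # assumed round figure

cost = grant * refugees
print(f"Total cost:        ${cost / 1e9:.0f} billion/year")   # $20 billion
print(f"Share of GDP:      {cost / us_gdp:.2%}")              # ~0.11%
print(f"Share of military: {cost / military_budget:.0%}")     # ~3%
```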

I say we put it to a vote among the American people: Are you willing to accept a flat 0.1% increase in income tax in order to help the refugees? (Would you even notice?) This might create an incentive to become a refugee when you’d otherwise have tried to stay in Syria, but is that necessarily a bad thing? Daesh, like any state, depends upon its tax base to function, so encouraging emigration undermines Daesh taxpayer by taxpayer. We could make it temporary and tied to the relief efforts—or, more radically, we could not do that, and use it as a starting point to build an international coalition for a global basic income.

Right now a global $5,000 per person per year would not be feasible (that would be almost half of the world’s GDP); but something like $1,000 would be, and would eliminate world hunger immediately and dramatically reduce global poverty. The US alone could in fact provide a $1,000 global basic income, though it would cost $7.2 trillion, which is about 40% of our $18.1 trillion GDP—not beyond our means, but definitely stretching them to the limit. Yet simply by including Europe ($18.5 T), China ($12.9 T), Japan ($4.2 T), India ($2.2 T), and Brazil ($1.8 T), we’d reduce the burden among the whole $57.7 trillion coalition to 12.5% of GDP. That’s roughly what we already spend on Medicare and Social Security. Not a small amount, to be sure; but this would get us within arm’s reach of permanently ending global poverty.
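The coalition arithmetic works out as follows, using the GDP figures quoted in the paragraph:

```python
# Burden of a $1,000-per-person global basic income: US alone vs. coalition.
world_pop = 7.2e9
gbi = 1_000 * world_pop   # $7.2 trillion/year

gdps = {"US": 18.1e12, "Europe": 18.5e12, "China": 12.9e12,
        "Japan": 4.2e12, "India": 2.2e12, "Brazil": 1.8e12}

print(f"US alone:  {gbi / gdps['US']:.1%} of GDP")           # ~39.8%
print(f"Coalition: {gbi / sum(gdps.values()):.1%} of GDP")   # ~12.5%
```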

Think of the goodwill we’d gain around the world; think of how much it would undermine Daesh’s efforts to recruit followers if everyone knew that just across the border is a guaranteed paycheck from that same United States that Daesh keeps calling the enemy. This isn’t necessarily contradictory to a policy of accepting more refugees, but it would be something we could implement immediately, with minimal cost to ourselves.

And I’m sure there’d be people complaining that we were only doing it to make ourselves look good and stabilize the region economically, and it will all ultimately benefit us eventually—which is very likely true. But again, I say: So what? Would you rather we do the right thing and benefit from it, or do the wrong thing just to make sure we don’t help ourselves?

To truly honor veterans, end war

JDN 2457339 EST 20:00 (Nov 11, 2015)

Today is Veterans’ Day, on which we are asked to celebrate the service of military veterans, particularly those who have died as a result of war. We tend to focus on those who die in combat, but actually these have always been relatively uncommon; throughout history, most soldiers have died later of their wounds or of infections. More recently, as a result of advances in body armor and medicine, relatively few soldiers die even of war wounds or infections—instead, they are permanently maimed and psychologically damaged, and the most common way that war kills soldiers now is by making them commit suicide.

Even adjusting for the fact that soldiers are mostly young men (the group of people most likely to commit suicide), military veterans still have about 50 excess suicides per million people per year, for a total of about 300 suicides per million per year. Using the total number, that’s over 8000 veteran suicides per year, or 22 per day. Using only the excess compared to men of the same ages, it’s still an additional 1300 suicides per year.

While the 14-years-and-counting Afghanistan War has killed 2,271 American soldiers and the 11-year Iraq War has killed 4,491 American soldiers directly (or as a result of wounds), during that same time period from 2001 to 2015 there have been about 18,000 excess suicides as a result of the military—excess in the sense that they would not have occurred if those men had been civilians. Altogether that means there would be nearly 25,000 additional American soldiers alive today were it not for these two wars.
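These figures fit together as a quick consistency check. The veteran population of roughly 27 million is my assumed figure, backed out from the rates quoted above, not a number from the post:

```python
# Veteran suicide arithmetic from the rates quoted above.
veterans = 27e6          # assumed US veteran population (implied by the rates)
total_rate = 300e-6      # suicides per veteran per year
excess_rate = 50e-6      # excess over demographically similar civilians

print(f"Total:  {veterans * total_rate:,.0f}/year, "
      f"{veterans * total_rate / 365:.0f}/day")            # ~8,100/year, ~22/day
print(f"Excess: {veterans * excess_rate:,.0f}/year")       # ~1,350/year
print(f"Excess over 14 years: {veterans * excess_rate * 14:,.0f}")  # close to ~18,000
print(f"Combat deaths + excess suicides: {2_271 + 4_491 + 18_000:,}")  # ~25,000
```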

War does not only kill soldiers while they are on the battlefield—indeed, most of the veterans it kills die here at home.

There is a reason Woodrow Wilson chose November 11 as the date for Armistice Day, the holiday that became Veterans’ Day: It was on this day in 1918 that World War 1, up to that point the deadliest war in modern history, was officially ended. Sadly, it did not remain the deadliest war, but was surpassed by World War 2 a generation later. Fortunately, no other war has ever exceeded World War 2—at least, not yet.

We tend to celebrate holidays like this with a lot of ritual and pageantry (or even in the most inane and American way possible, with free restaurant meals and discounts on various consumer products), and there’s nothing inherently wrong with that. Nor is there anything wrong with taking a moment to salute the flag or say “Thank you for your service.” But that is not how I believe veterans should be honored. If I were a veteran, that is not how I would want to be honored.

We are getting much closer to how I think they should be honored when the White House announces reforms at Veterans’ Affairs hospitals and guaranteed in-state tuition at public universities for families of veterans—things that really do in a concrete and measurable way improve the lives of veterans and may even save some of them from that cruel fate of suicide.

But ultimately there is only one way that I believe we can truly honor veterans and the spirit of the holiday as Wilson intended it, and that is to end war once and for all.

Is this an ambitious goal? Absolutely. But is it an impossible dream? I do not believe so.

In just the last half century, we have already made most of the progress that needed to be made. In this brilliant video animation, you can see two things: First, the mind-numbingly horrific scale of World War 2, the worst war in human history; but second, the incredible progress we have made since then toward world peace. It was as if the world needed that one time to be so unbearably horrible in order to finally realize just what war is and why we need a better way of solving conflicts.

This is part of a very long-term trend in declining violence, for a variety of reasons that are still not thoroughly understood. In simplest terms, human beings just seem to be getting better at not killing each other.

Nassim Nicholas Taleb argues that this is just a statistical illusion, because technologies like nuclear weapons create the possibility of violence on a previously unimaginable scale, and it simply hasn’t happened yet. For nuclear weapons in particular, I think he may be right—the consequences of nuclear war are simply so catastrophic that even a small risk of it is worth paying almost any price to avoid.

Fortunately, nuclear weapons are not necessary to prevent war: South Africa has no designs on attacking Japan anytime soon, but neither has nuclear weapons. Germany and Poland lack nuclear arsenals and were the first countries to fight in World War 2, but now that both are part of the European Union, war between them today seems almost unthinkable. When American commentators fret about China today it is always about wage competition and Treasury bonds, not aircraft carriers and nuclear missiles. Conversely, North Korea’s acquisition of nuclear weapons has by no means stabilized the region against future conflicts, and the fact that India and Pakistan have nuclear missiles pointed at one another has hardly prevented them from killing each other over Kashmir. We do not need nuclear weapons as a constant threat of annihilation in order to learn to live together; political and economic ties achieve that goal far more reliably.

And I think Taleb is wrong about the trend in general. He argues that the only reason violence is declining is that concentration of power has made violence rarer but more catastrophic when it occurs. Yet we know that many forms of violence which used to occur no longer do, not because of the overwhelming force of a Leviathan to prevent them, but because people simply choose not to do them anymore. There are no more gladiator fights, no more cat-burnings, no more public lynchings—not because of the expansion in government power, but because our society seems to have grown out of that phase.

Indeed, what horrifies us about ISIS and Boko Haram would have been considered quite normal, even civilized, in the Middle Ages. (If you’ve ever heard someone say we should “bring back chivalry”, you should explain to them that the system of knight chivalry in the 12th century had basically the same moral code as ISIS today—one of the commandments Gautier’s La Chevalerie attributes as part of the chivalric code is literally “Thou shalt make war against the infidel without cessation and without mercy.”) It is not so much that they are uniquely evil by historical standards, as that we grew out of that sort of barbaric violence a while ago but they don’t seem to have gotten the memo.

In fact, one thing people don’t seem to understand about Steven Pinker’s “Long Peace” argument is that it still works if you include the world wars. The reason World War 2 killed so many people was not that it was uniquely brutal, nor even simply because its weapons were more technologically advanced. It also had to do with the scale of integration—we called it a single war even though it involved dozens of countries because those countries were all united into one of two sides, whereas in centuries past that many countries could be constantly fighting each other in various combinations but it would never be called the same war. But the primary reason World War 2 killed the largest raw number of people was simply because the world population was so much larger. Controlling for world population, World War 2 was not even among the top 5 worst wars—it barely makes the top 10. The worst war in history by proportion of the population killed was almost certainly the An Lushan Rebellion in 8th-century China, which many of you may not even have heard of until today.

Though it may not seem so as ISIS kidnaps Christians and drone strikes continue, shrouded in secrecy, we really are on track to end war. Not today, not tomorrow, maybe not in any of our lifetimes—but someday, we may finally be able to celebrate Veterans’ Day as it was truly intended: To honor our soldiers by making it no longer necessary for them to die.

What makes a nation wealthy?

JDN 2457251 EDT 10:17

One of the central questions of economics—perhaps the central question, the primary reason why economics is necessary and worthwhile—is development: How do we raise a nation from poverty to prosperity?

We have done it before: France and Germany rose from the quite literal ashes of World War 2 to some of the most prosperous societies in the world. Their per-capita GDP over the 20th century rose like this (all of these figures are from the World Bank World Development Indicators; France is green, Germany is blue):

[Figure: GDP per capita at market exchange rates, France (green) and Germany (blue)]

[Figure: GDP per capita at purchasing power parity, France and Germany]

The top graph is at market exchange rates, the bottom is correcting for purchasing power parity (PPP). The PPP figures are more meaningful, but unfortunately they only began collecting good data on purchasing power around 1990.

Around the same time, but even more spectacularly, Japan and South Korea rose from poverty-stricken Third World backwaters to high-tech First World powers in only a couple of generations. Check out their per-capita GDP over the 20th century (Japan is green, South Korea is blue):

[Figures: GDP per capita at market exchange rates and at PPP, Japan (green) and South Korea (blue)]


This is why I am only half-joking when I define development economics as “the ongoing project to figure out what happened in South Korea and make it happen everywhere in the world”.

More recently China has been on a similar upward trajectory, which is particularly important since China comprises such a huge portion of the world’s population—but they are far from finished:

[Figures: GDP per capita at market exchange rates and at PPP, China]

Compare these to societies that have not achieved economic development, such as Zimbabwe (green), India (black), Ghana (red), and Haiti (blue):

[Figures: GDP per capita at market exchange rates and at PPP, Zimbabwe (green), India (black), Ghana (red), and Haiti (blue)]

They’re so poor that you can barely see them on the same scale, so I’ve rescaled so that the top is $5,000 per person per year instead of $50,000:

[Figures: the same four countries, rescaled to a $5,000 maximum]

Only India actually manages to get above $5,000 per person per year at purchasing power parity, and then not by much, reaching $5,243 per person per year in 2013, the most recent data.

I had wanted to compare North Korea and South Korea, because the two countries were united as recently as 1945 and were not all that different to begin with, yet have taken completely different development trajectories. Unfortunately, North Korea is so impoverished, corrupt, and authoritarian that the World Bank doesn’t even report data on their per-capita GDP. Perhaps that is contrast enough?

And then of course there are the countries in between, which have made some gains but still have a long way to go, such as Uruguay (green) and Botswana (blue):

[Figures: GDP per capita at market exchange rates and at PPP, Uruguay (green) and Botswana (blue)]

But despite the fact that we have observed successful economic development, we still don’t really understand how it works. A number of theories have been proposed, involving a wide range of factors including exports, corruption, disease, institutions of government, liberalized financial markets, and natural resources (counter-intuitively, more natural resources tend to make development worse—the so-called resource curse).

I’m not going to resolve that whole debate in a single blog post. (I may not be able to resolve that whole debate in a single career, though I am definitely trying.) We may ultimately find that economic development is best conceived as like “health”; what factors determine your health? Well, a lot of things, and if any one thing goes badly enough wrong the whole system can break down. Economists may need to start thinking of ourselves as akin to doctors (or as Keynes famously said, dentists), diagnosing particular disorders in particular patients rather than seeking one unifying theory. On the other hand, doctors depend upon biologists, and it’s not clear that we yet understand development even at that level.

Instead I want to take a step back, and ask a more fundamental question: What do we mean by prosperity?

My hope is that if we can better understand what it is we are trying to achieve, we can also better understand the steps we need to take in order to get there.

Thus far it has sort of been “I know it when I see it”; we take it as more or less given that the United States and the United Kingdom are prosperous while Ghana and Haiti are not. I certainly don’t disagree with that particular conclusion; I’m just asking what we’re basing it on, so that we can hopefully better apply it to more marginal cases.


For example: Is France more or less prosperous than Saudi Arabia? If we go solely by GDP per capita PPP, clearly Saudi Arabia is more prosperous at $53,100 per person per year than France is at $37,200 per person per year.

But people actually live longer in France, on average, than they do in Saudi Arabia. Overall reported happiness is higher in France than Saudi Arabia. I think France is actually more prosperous.


In fact, I think the United States is not as prosperous as we pretend ourselves to be. We are certainly more prosperous than most other countries; we are definitely still well within First World status. But we are not the most prosperous nation in the world.

Our total GDP is astonishingly high (highest in the world nominally, second only to China at PPP). Our GDP per-capita is higher than any other country of comparable size; no nation with higher GDP per capita at PPP than the US has a population larger than the Chicago metropolitan area. (You may be surprised to find that in order from largest to smallest population the countries with higher GDP per capita PPP are the United Arab Emirates, Switzerland, Hong Kong, Singapore, and then Norway, followed by Kuwait, Qatar, Luxembourg, Brunei, and finally San Marino—which is smaller than Ann Arbor.) Our per-capita GDP PPP of $51,300 is markedly higher than that of France ($37,200), Germany ($42,900), or Sweden ($43,500).

But at the same time, if you compare the US to other First World countries, we have nearly the highest rate of child poverty and higher infant mortality. We have shorter life expectancy and dramatically higher homicide rates. Our inequality is the highest in the First World. In France and Sweden, the top 0.01% receive about 1% of the income (i.e. 100 times as much as the average person), while in the United States they receive almost 4%, making someone in the top 0.01% nearly 400 times as rich as the average person.
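The “100 times” and “400 times” figures follow directly from the income shares (the helper function is mine, purely for illustration):

```python
# If the top 0.01% of earners receive share s of all income, the average
# member of that group earns s / 0.0001 times the overall average income.
def top_ratio(share, group_fraction=0.0001):
    return share / group_fraction

print(top_ratio(0.01))   # France/Sweden: 100x the average income
print(top_ratio(0.04))   # United States: 400x the average income
```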

By ranking countries solely on GDP per capita, we are effectively rigging the game in our own favor. Or rather, the rich in the United States are rigging the game in their own favor (what else is new?), by convincing all the world’s economists to rank countries based on a measure that favors them.

Amartya Sen, one of the greats of development economics, helped develop a scale called the Human Development Index that attempts to take broader factors into account. It’s far from perfect, but it’s definitely a step in the right direction.

In particular, France’s HDI is higher than that of Saudi Arabia, fitting my intuition about which country is truly more prosperous. However, the US still does extremely well, with only Norway, Australia, Switzerland, and the Netherlands above us. I think the index might still be biased toward high average incomes rather than overall happiness.

In practice, we still use GDP an awful lot, probably because it’s much easier to measure. It’s sort of like IQ tests and SAT scores; we know damn well it’s not measuring what we really care about, but because it’s so much easier to work with we keep using it anyway.

This is a problem, because the better you get at optimizing toward the wrong goal, the worse your overall outcomes are going to be. If you are just sort of vaguely pointed at several reasonable goals, you will probably be improving your situation overall. But when you start precisely optimizing to a specific wrong goal, it can drag you wildly off course.

This is what we mean when we talk about “gaming the system”. Consider test scores, for example. If you do things that will probably increase your test scores among other things, you are likely to engage in generally good behaviors like getting enough sleep, going to class, studying the content. But if your single goal is to maximize your test score at all costs, what will you do? Cheat, of course.

This is also related to the Friendly AI Problem: It is vitally important to know precisely what goals we want our artificial intelligences to have, because whatever goals we set, they will probably be very good at achieving them. Already computers can do many things that were previously impossible, and as they improve over time we will reach the point where in a meaningful sense our AIs are even smarter than we are. When that day comes, we will want to make very, very sure that we have designed them to want the same things that we do—because if our desires ever come into conflict, theirs are likely to win. The really scary part is that right now most of our AI research is done by for-profit corporations or the military, and “maximize my profit” and “kill that target” are most definitely not the ultimate goals we want in a superintelligent AI. It’s trivially easy to see what’s wrong with these goals: For the former, hack into the world banking system and transfer trillions of dollars to the company accounts. For the latter, hack into the nuclear launch system and launch a few ICBMs in the general vicinity of the target. Yet these are the goals we’ve been programming into the actual AIs we build!

If we set GDP per capita as our ultimate goal to the exclusion of all other goals, there are all sorts of bad policies we would implement: We’d ignore inequality until it reached staggering heights, ignore work stress even as it began to kill us, ratchet up the pressure for everyone to work constantly, use poverty as a stick to force people to work even if people starve, inundate everyone with ads to get them to spend as much as possible, repeal regulations that protect the environment, workers, and public health… wait. This isn’t actually hypothetical, is it? We are doing those things.

At least we’re not trying to maximize nominal GDP, or we’d have long since ended up like Zimbabwe. No, our economists are at least smart enough to adjust for purchasing power. But they’re still designing an economic system that works us all to death to maximize the number of gadgets that come off assembly lines. The purchasing-power adjustment doesn’t include the value of our health or free time.

This is why the Human Development Index is a major step in the right direction; it reminds us that society has other goals besides maximizing the total amount of money that changes hands (because that’s actually all that GDP is measuring; if you get something for free, it isn’t counted in GDP). More recent refinements include things like “natural resource services” that include environmental degradation in estimates of investment. Unfortunately there is no accepted way of doing this, and surprisingly little research on how to improve our accounting methods. Many nations seem resistant to doing so precisely because they know it would make their economic policy look bad—this is almost certainly why China canceled its “green GDP” initiative. This is in fact all the more reason to do it; if it shows that our policy is bad, that means our policy is bad and should be fixed. But people have allowed themselves to value image over substance.

We can do better still, and in fact I think something like QALY is probably the way to go. Rather than some weird arbitrary scaling of income with lifespan and education (which is essentially what the HDI is), we need to put everything in the same units, and those units must be directly linked to human happiness. At the very least, we should make some sort of adjustment to our GDP calculation that includes the distribution of wealth and its marginal utility; adding $1,000 to the economy and handing it to someone in poverty should count for a great deal, but adding $1,000,000 and handing it to a billionaire should count for basically nothing. (It’s not bad to give a billionaire another million; but it’s hardly good either, as no one’s real standard of living will change.) Calculating that could be as simple as dividing the gain by the recipient’s current income; if your annual income is $10,000 and you receive $1,000, you’ve added about 0.1 QALY. If your annual income is $1 billion and you receive $1 million, you’ve added only 0.001 QALY. Maybe we should simply separate out all individual (or household, to be simpler?) incomes, take their logarithms, and then use that sum as our “utility-adjusted GDP”. The results would no doubt be quite different.
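The divide-by-income rule and the sum-of-logarithms rule are really the same idea: since the derivative of ln(y) is 1/y, a small dollar gain divided by current income is just the change in log income. A minimal sketch of the calculation (the function names and the toy incomes are my own, purely illustrative):

```python
import math

def marginal_qaly(gain, income):
    """Crude QALY-style weight: a dollar gain divided by the recipient's
    current income -- the discrete version of d(ln y) = dy / y."""
    return gain / income

def utility_adjusted_gdp(incomes):
    """The proposed 'utility-adjusted GDP': the sum of log incomes."""
    return sum(math.log(y) for y in incomes)

# $1,000 to someone earning $10,000 per year:
print(marginal_qaly(1_000, 10_000))                # 0.1
# $1,000,000 to someone earning $1 billion per year:
print(marginal_qaly(1_000_000, 1_000_000_000))     # 0.001

# Two toy economies with the SAME total income ($200,000), one equal,
# one highly unequal -- the adjusted measure scores the equal one higher.
equal = [50_000, 50_000, 50_000, 50_000]
unequal = [5_000, 5_000, 5_000, 185_000]
print(utility_adjusted_gdp(equal) > utility_adjusted_gdp(unequal))  # True
```

The last comparison is the whole point: plain GDP cannot distinguish the two economies at all, while the log-income measure immediately rewards the more even distribution.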

This would create a strong pressure for policy to be directed at reducing inequality even at the expense of some economic output—which is exactly what we should be willing to do. If it’s really true that a redistribution policy would hurt the overall economy so much that the harms would outweigh the benefits, then we shouldn’t do that policy; but that is what you need to show. Reducing total GDP is not a sufficient reason to reject a redistribution policy, because it’s quite possible—easy, in fact—to improve the overall prosperity of a society while still reducing its GDP. There are in fact redistribution policies so disastrous they make things worse: The Soviet Union had them. But a 90% tax on million-dollar incomes would not be such a policy—because we had that in 1960 with little or no ill effect.
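To see how a society can become better off even as its GDP shrinks, consider a toy three-person economy (the numbers are invented for illustration, using the log-income measure described above): redistribution shrinks total output by about 12%, yet the utility-adjusted measure rises, because the gains to the two poor households dwarf the loss to the rich one.

```python
import math

# Incomes before and after a hypothetical redistribution.
before = [10_000, 10_000, 1_000_000]   # total GDP: $1,020,000
after = [100_000, 100_000, 700_000]    # total GDP:   $900,000 (about 12% less)

print(sum(after) < sum(before))        # True: total output fell

log_before = sum(math.log(y) for y in before)
log_after = sum(math.log(y) for y in after)
print(log_after > log_before)          # True: utility-adjusted GDP rose
```

This is exactly the asymmetry the argument turns on: a sufficiently disastrous redistribution could still lower both numbers, but merely lowering GDP is not evidence of harm.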

Of course, even this has problems; one way to minimize poverty would be to exclude, relocate, or even murder all your poor people. (The Black Death increased per-capita GDP.) Open immigration generally increases poverty rates in the short term, because most of the immigrants are poor. Somehow we’d need to correct for that, raising the score only when you actually improve people’s lives, not when you simply remove people from the calculation.

In any case it’s not enough to have the alternative measures; we must actually use them. We must get policymakers to stop talking about “economic growth” and start talking about “human development”; a policy that raises GDP but reduces lifespan should be immediately rejected, as should one that further enriches a few at the expense of many others. We must shift the discussion away from “creating jobs”—jobs are only a means—to “creating prosperity”.