“DSGE or GTFO”: Macroeconomics took a wrong turn somewhere

Dec 31, JDN 2458119

“The state of macro is good,” wrote Olivier Blanchard—in August 2008. This is rather like the turkey who is so pleased with how the farmer has been feeding him lately, the day before Thanksgiving.

It’s not easy to say exactly where macroeconomics went wrong, but I think Paul Romer is right when he makes the analogy between DSGE (dynamic stochastic general equilibrium) models and string theory. They are mathematically complex and difficult to understand, and people can make their careers by being the only ones who grasp them; therefore they must be right! Never mind if they have no empirical support whatsoever.

To be fair, DSGE models are at least a little better than string theory; they can at least be fit to real-world data, which is more than string theory can say. But being fit to data and actually predicting data are fundamentally different things, and DSGE models typically forecast no better than far simpler models without their bold assumptions. You don’t need to assume all this stuff about a “representative agent” maximizing a well-defined utility function, or an Euler equation (that doesn’t even fit the data), or this ever-proliferating list of “random shocks” that end up taking up all the degrees of freedom your model was supposed to explain. Just regressing the variables on a few years of previous values of each other (a “vector autoregression” or VAR) generally gives you an equally-good forecast. The fact that these models can be made to fit the data well if you add enough degrees of freedom doesn’t actually make them good models. As von Neumann warned us, with enough free parameters, you can fit an elephant.
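(For the curious: here is a minimal sketch, in Python with made-up data, of the kind of reduced-form VAR forecast I mean. The number of variables, the lag length, and the data are all illustrative assumptions; in practice you would probably use a library such as statsmodels, but hand-rolling it shows how little machinery is involved.)

```python
import numpy as np

# Toy quarterly "macro data": three made-up series standing in for GDP growth,
# inflation, and unemployment. Nothing here is real data.
rng = np.random.default_rng(42)
T, k, p = 120, 3, 4                                    # 120 quarters, 3 variables, 4 lags
data = rng.normal(size=(T, k)).cumsum(axis=0) * 0.1    # random-walk-ish series

def fit_var(y, p):
    """Fit a VAR(p) by ordinary least squares: y_t = c + A_1 y_{t-1} + ... + A_p y_{t-p}."""
    T, k = y.shape
    X = np.hstack([np.ones((T - p, 1))] +
                  [y[p - i - 1:T - i - 1] for i in range(p)])  # [1, y_{t-1}, ..., y_{t-p}]
    Y = y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)                  # (1 + k*p) x k coefficients
    return B

def forecast_var(y, B, p, steps):
    """Iterate the fitted VAR forward to produce multi-step forecasts."""
    hist = list(y[-p:])
    out = []
    for _ in range(steps):
        x = np.concatenate([[1.0]] + [hist[-i - 1] for i in range(p)])
        y_next = x @ B
        out.append(y_next)
        hist.append(y_next)
    return np.array(out)

B = fit_var(data, p)
print(forecast_var(data, B, p, steps=8))   # 8-quarter-ahead forecasts
```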

But really what bothers me is not the DSGE but the GTFO (“get the [expletive] out”); it’s not that DSGE models are used, but that it’s almost impossible to get published as a macroeconomic theorist using anything else. Defenders of DSGE typically don’t even argue anymore that it is good; they argue that there are no credible alternatives. They characterize their opponents as “dilettantes” who aren’t opposing DSGE because we disagree with it; no, it must be because we don’t understand it. (Also, regarding that post, I’d just like to note that I now officially satisfy the Athreya Axiom of Absolute Arrogance: I have passed my qualifying exams in a top-50 economics PhD program. Yet my enmity toward DSGE has, if anything, only intensified.)

Of course, that argument only makes sense if you haven’t been actively suppressing all attempts to formulate an alternative, which is precisely what DSGE macroeconomists have been doing for the last two or three decades. And yet, despite this suppression, alternatives are emerging, particularly on the empirical side: there are now approaches to macroeconomics that don’t rely on DSGE models at all. Regression discontinuity methods and other “natural experiment” designs—not to mention actual experiments—are quickly rising in popularity as economists realize that these methods allow us to actually empirically test our models instead of just adding more and more mathematical complexity to them.

But there still seems to be a lingering attitude that there is no other way to do macro theory. This is very frustrating for me personally, because deep down I think what I would like to do as a career is macro theory: By temperament I have always viewed the world through a very abstract, theoretical lens, and the issues I care most about—particularly inequality, development, and unemployment—are all fundamentally “macro” issues. I left physics when I realized I would be expected to do string theory. I don’t want to leave economics now that I’m expected to do DSGE. But I also definitely don’t want to do DSGE.

Fortunately with economics I have a backup plan: I can always be an “applied microeconomist” (rather the opposite of a theoretical macroeconomist I suppose), directly attached to the data in the form of empirical analyses or even direct, randomized controlled experiments. And there certainly is plenty of work to be done along the lines of Akerlof and Roth and Shiller and Kahneman and Thaler in cognitive and behavioral economics, which is also generally considered applied micro. I was never going to be an experimental physicist, but I can be an experimental economist. And I do get to use at least some theory: In particular, there’s an awful lot of game theory in experimental economics these days. Some of the most exciting stuff is actually in showing how human beings don’t behave the way classical game theory predicts (particularly in the Ultimatum Game and the Prisoner’s Dilemma), and trying to extend game theory into something that would fit our actual behavior. Cognitive science suggests that the result is going to end up looking quite different from game theory as we know it, and with my cognitive science background I may be particularly well-positioned to lead that charge.

Still, I don’t think I’ll be entirely satisfied if I can’t somehow bring my career back around to macroeconomic issues, and particularly the great elephant in the room of all economics, which is inequality. Underlying everything from Marxism to Trumpism, from the surging rents of Silicon Valley to the crushing poverty of Burkina Faso, to the Great Recession itself, is inequality. It is, in my view, the central question of economics: Who gets what, and why?

That is a fundamentally macro question, but you can’t even talk about that issue in DSGE as we know it; a “representative agent” inherently smooths over all inequality in the economy as though total GDP were all that mattered. A fundamentally new approach to macroeconomics is needed. Hopefully I can be part of that, but from my current position I don’t feel much empowered to fight this status quo. Maybe I need to spend at least a few more years doing something else, making a name for myself, and then I’ll be able to come back to this fight with a stronger position.

In the meantime, I guess there’s plenty of work to be done on cognitive biases and deviations from game theory.

Influenza vaccination, herd immunity, and the Tragedy of the Commons

Dec 24, JDN 2458112

Usually around this time of year I do a sort of “Christmas special” blog post, something about holidays or gifts. But this year I have a rather different seasonal idea in mind. It’s not just the holiday season; it’s also flu season.

Each year, influenza kills up to 56,000 people in the US, and between 300,000 and 600,000 people worldwide, mostly in the winter months. And yet, in any given year, only about 40% of adults and 60% of children get the flu vaccine.

The reason for this should be obvious to any student of economics: It’s a Tragedy of the Commons. If enough people got vaccinated that we attained reliable herd immunity (which would take about 90%), then almost nobody would get influenza, and the death rate would plummet. But for any given individual, the vaccine is actually not all that effective. Your risk of getting the flu only drops by about half if you receive the vaccine. The effectiveness is particularly low among the elderly, who are also at the highest risk for serious complications due to influenza.

Thus, for any given individual, the incentive to get vaccinated isn’t all that strong, even though society as a whole would be much better off if we all got vaccinated. Your probability of suffering serious complications from influenza is quite low, and wouldn’t be reduced all that much if you got the vaccine; so even though flu vaccines aren’t that costly in terms of time, money, discomfort, and inconvenience, the cost is just high enough that a lot of us don’t bother to get the shot each year.
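Here is a rough back-of-the-envelope version of that individual decision; every number below is an illustrative assumption, not a careful estimate:

```python
# Back-of-the-envelope expected value of a flu shot to one individual.
# All of these numbers are rough assumptions for illustration only.
p_flu_unvaccinated = 0.10     # assumed chance of catching the flu in a season without the shot
vaccine_effectiveness = 0.50  # the shot roughly halves your personal risk
cost_of_flu = 400.0           # assumed cost of a bout of flu (lost work, misery), in dollars
cost_of_shot = 30.0           # assumed cost of getting the shot (time, copay, hassle)

risk_reduction = p_flu_unvaccinated * vaccine_effectiveness
expected_benefit = risk_reduction * cost_of_flu

print(f"Expected personal benefit: ${expected_benefit:.2f}")   # $20.00
print(f"Cost of getting the shot:  ${cost_of_shot:.2f}")       # $30.00
# With numbers like these the private benefit barely covers (or falls short of) the
# private cost, even though the social benefit of herd immunity is far larger.
```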

On an individual level, my advice is simple: Go get a flu shot. Don’t do it just for yourself; do it for everyone around you. You are protecting the most vulnerable people in our society.

But if we really want everyone to get vaccinated, we need a policy response. I can think of two policies that might work, which can be broadly called a “stick” and a “carrot”.

The “stick” approach would be to make vaccination mandatory, as it already is for many childhood vaccines. Some sort of penalty would have to be introduced, but that’s not the real challenge. The real challenge would be how to actually enforce that penalty: How do we tell who is vaccinated and who isn’t?

When schools make vaccination mandatory, they require vaccination records for admission. It would be simple enough to add annual flu vaccines to the list of required shots for high schools and colleges (though no doubt the anti-vax crowd would make a ruckus). But can you make vaccination mandatory for work? That seems like a much larger violation of civil liberties. Alternatively, we could require that people submit medical records with their tax returns to avoid a tax penalty—but the privacy violations there are quite substantial as well.

Hence, I would favor the “carrot” approach: Use government subsidies to provide a positive incentive for vaccination. Don’t simply make vaccination free; actually pay people to get vaccinated. Make the subsidy larger than the actual cost of the shots, and require that the doctors and pharmacies administering them remit the extra to the customers. Something like $20 per shot ought to do it; since the cost of the shots is also around $20, vaccinating the full 300 million people of the United States every year would cost about $12 billion; this is less than the estimated economic cost of influenza, so it would essentially pay for itself.
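The arithmetic behind that $12 billion figure is simple enough to spell out (a sketch using the rough $20 figures above):

```python
# Rough annual cost of paying everyone to get a flu shot, using the figures above.
population = 300_000_000       # approximate US population covered
cost_per_shot = 20.0           # approximate cost of the vaccine itself
subsidy_per_shot = 20.0        # proposed cash payment remitted to the patient

total_cost = population * (cost_per_shot + subsidy_per_shot)
print(f"Annual program cost: ${total_cost / 1e9:.0f} billion")   # about $12 billion
```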

$20 isn’t a lot of money for most people; but then, like I said, the time and inconvenience of a flu shot aren’t that large either. There have been moderately successful (but expensive) programs incentivizing doctors to perform vaccinations, but that’s stupid; frankly I’m amazed it worked at all. It’s patients who need to be incentivized. Doctors will give you a flu shot if you ask them. The problem is that most people don’t ask.

Do this, and we could potentially save tens of thousands of lives every year, for essentially zero net cost. And that sounds to me like a Christmas wish worth making.

The Irvine Company needs some serious antitrust enforcement

Dec 17, JDN 2458105

I probably wouldn’t even have known about this issue if I hadn’t ended up living in Irvine.

The wealthiest real estate magnate in the United States is Donald Bren, sole owner of the Irvine Company. His net wealth is estimated at $15 billion, which puts him behind the likes of Jeff Bezos and Bill Gates, but well above Donald Trump even by Trump’s own most optimistic estimates.

Where did he get all this wealth?

The Irvine Company isn’t even particularly shy about its history, though of course they put a positive spin on it. Right there on their own website they talk about how it used to be a series of ranches farmed by immigrants. Look a bit deeper into their complaints about “squatters” and it becomes apparent that the main reason they were able to get so rich is that the immigrant tenant farmers who worked their land were barred by law from owning real estate themselves. (Not to mention how it was originally taken from Native American tribes, as most of the land in the US was.) Then of course the land has increased in price and been passed down from generation to generation.

This isn’t capitalism. Capitalism requires a competitive market with low barriers to entry and trade in real physical capital—machines, vehicles, factories. The ownership of land by a single family that passes down its title through generations while extracting wealth from tenant farmers who aren’t allowed to own anything has another name. We call it feudalism.

The Irvine Company is privately-held, and thus not required to publish its finances the way a publicly-traded company would be, so I can’t tell you exactly what assets it owns or how much profit it makes. But I can tell you that it owns over 57,000 housing units—and there are only 96,000 housing units in the city of Irvine, so that means they literally own 60% of the city. They don’t just own houses either; they also own most of the commercial districts, parks, and streets.

As a proportion of all the housing in the United States, that isn’t so much. Even compared to Southern California (the most densely populated region in North America), it may not seem all that extravagant. But within the city of Irvine itself, this is getting dangerously close to a monopoly. Housing is expensive all over California, so they can’t be entirely blamed—but is it really that hard to believe that letting one company own 60% of your city is going to increase rents?

This is the sort of thing that calls for a bold and unequivocal policy response. The Irvine Company should be forced to subdivide itself into multiple companies—perhaps Irvine Residential, Irvine Commercial, and Irvine Civic—and then those companies should be made publicly-traded, and a majority of their shares immediately distributed to the residents of the city. Unlike most land reform proposals, selecting who gets shares is actually quite straightforward: Anyone who pays rent on an Irvine Company property receives a share.

Land reform has a checkered history to say the least, which is probably why policymakers are reluctant to take this sort of approach. But this is a land reform that could be handled swiftly, by a very simple mechanism, with very clear rules. Moreover, it is entirely within the rule of law, as the Irvine Company is obviously at this point an illegitimate monopoly in violation of the Sherman Antitrust Act, Clayton Antitrust Act, and Federal Trade Commission Act. The Herfindahl-Hirschman Index for real estate in the city of Irvine would be at least 3600, well over the standard threshold of 2500 that FTC guidelines consider prima facie evidence of an antitrust violation in the market. Formally, the land reform could be accomplished by collecting damages in an amount necessary to purchase the shares at the (mandatory) IPO, with the damages paid out in shares to the residents of Irvine. The Justice Department is also empowered to bring criminal antitrust charges if necessary.
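(For reference, the Herfindahl-Hirschman Index is just the sum of squared market shares, measured in percentage points. A quick sketch; the split of the remaining 40% of Irvine’s housing among smaller owners here is purely hypothetical:)

```python
def hhi(shares_in_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares, in percentage points."""
    return sum(s ** 2 for s in shares_in_percent)

# The Irvine Company's ~60% share contributes 3600 all by itself.
print(hhi([60]))                  # 3600

# Splitting the remaining 40% among (hypothetical) small owners barely changes that.
print(hhi([60] + [1] * 40))       # 3640
# Anything above 2500 counts as "highly concentrated" under the FTC/DOJ merger guidelines.
```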

Oddly, most of the talk about the Irvine Company among residents of Irvine centers on its detailed policy decisions: whether expanding road X was a good idea, or how you feel about the fact that they built complex Y. (There’s also a bizarre reverence for the Irvine Master Plan; people speak of it as if it were the US Constitution, when it’s actually more like Amazon.com’s five-year revenue targets. This is a for-profit company. Their plan is about taking your money.) This is rather like debating whether or not you have a good king; even if you do, you’re still a feudal subject. No single individual or corporation should have that kind of power over the population of an entire city. This is not a small city, either; Irvine has about three-quarters of the population of Iceland, or a third of the population of Boston. Take half of Donald Bren’s $15 billion, divide it evenly over the 250,000 people of the city, and each one gets $30,000. That’s a conservative estimate of how much monopolistic rent the Irvine Company has extracted from the people of Irvine.

By itself, redistributing the assets of the Irvine Company wouldn’t solve the problem of high rents in Southern California. But I think it would help, and I’m honestly having trouble seeing the downsides. The only people who seem to be harmed are billionaires who inherited wealth that was originally extracted from serfs. Like I said, this is within the law, and wouldn’t require new legislation. We would only need to aggressively enforce laws that have been on the books for a century. It doesn’t even seem like it should be politically unpopular, as you’re basically giving a check for tens of thousands of dollars to each voting resident in the city.

Of course, it won’t happen. As usual, I’m imagining more justice in the world than there actually has ever been.

The “productivity paradox”

 

Dec 10, JDN 2458098

Take a look at this graph of manufacturing output per worker-hour:

[Figure: Manufacturing output per worker-hour]

From 1988 to 2008, it was growing at a steady pace. In 2008 and 2009 it took a dip due to the Great Recession; no big surprise there. But then since 2012 it has been… completely flat. If we take this graph at face value, it would imply that manufacturing workers today can produce no more output than workers five years ago, and indeed only about 10% more than workers a decade ago. A worker in 2008, by contrast, was producing over 60% more than a worker in 1998, who was producing over 40% more than a worker in 1988.

Many economists call this the “productivity paradox”, and use it to argue that we don’t really need to worry about robots taking all our jobs any time soon. I think this view is mistaken.

The way we measure productivity is fundamentally wrongheaded, and is probably the sole cause of this “paradox”.

First of all, we use total hours scheduled for work, not total hours actually spent doing productive work. The former is obviously much, much easier to measure, which is why we do it. But think for a moment about how the 40-hour workweek norm clashes with rapidly rising real productivity, and it becomes apparent why this isn’t going to be a good measure.

When a worker finds a way to get done in 10 hours what used to take 40 hours, what does that worker’s boss do? Send them home after 10 hours because the job is done? Give them a bonus for their creativity? Hardly. That would be far too rational. They assign them more work, while paying them exactly the same. Recognizing this, what is such a worker to do? The obvious answer is to pretend to work the other 30 hours, while in fact doing something more pleasant than working.

And indeed, so-called “worker distraction” has been rapidly increasing. People are right to blame smartphones, I suppose, but not for the reasons they think. It’s not that smartphones are inherently distracting devices. It’s that smartphones are the cutting edge of a technological revolution that has made most of our work time unnecessary; given our fundamentally defective management norms, that creates overwhelming incentives to waste time at work rather than get drenched in extra tasks for no extra pay.

That would probably be enough to explain the “paradox” by itself, but there is a deeper reason that in the long run is even stronger. It has to do with the way we measure “output”.

It might surprise you to learn that economists almost never consider output in terms of the actual number of cars produced, buildings constructed, songs written, or software packages developed. The standard measures of output are all in the form of so-called “real GDP”; that is, the dollar value of output produced.

They do adjust for indexes of inflation, but as I’ll show in a moment this still creates a fundamentally biased picture of the productivity dynamics.

Consider a world with only three industries: Housing, Food, and Music.

Productivity in Housing doesn’t change at all. Producing a house cost 10,000 worker-hours in 1950, and cost 10,000 worker-hours in 2000. Nominal price of houses has rapidly increased, from $10,000 in 1950 to $200,000 in 2000.

Productivity in Food rises moderately fast. Producing 1,000 meals cost 1,000 worker-hours in 1950, and cost 100 worker-hours in 2000. Nominal price of food has increased slowly, from $1,000 per 1,000 meals in 1950 to $5,000 per 1,000 meals in 2000.

Productivity in Music rises extremely fast. Producing 1,000 performances cost 10,000 worker-hours in 1950, and cost 1 worker-hour in 2000. Nominal price of music has collapsed, from $100,000 per 1,000 performances in 1950 to $1,000 per 1,000 performances in 2000.

This is of course an extremely stylized version of what has actually happened: Housing has gotten way more expensive, food has stayed about the same in price while farm employment has plummeted, and the rise of digital music has brought about a new Renaissance in actual music production and listening while revenue for the music industry has collapsed. There is a very nice Vox article on the “productivity paradox” showing a graph of how prices have changed in different industries.

How would productivity appear in the world I’ve just described, by standard measures? Well, to answer that I actually need to say something about how consumers substitute across industries. But I think I’ll be forgiven in this case for saying that there is no substitution whatsoever; you can’t eat music or live in a burrito. There’s also a clear Maslow hierarchy here: They say that man cannot live by bread alone, but I think living by Led Zeppelin alone is even harder.

Consumers will therefore choose like this: Over 10 years, buy 1 house, 10,000 meals, and as many performances as you can afford after that. Further suppose that each person had $2,100 per year to spend in 1940-1950, and $50,000 per year to spend in 1990-2000. (This is approximately true for actual nominal US GDP per capita.)

1940-1950:
Total funds: $21,000

1 house = $10,000

10,000 meals = $10,000

Remaining funds: $1,000

Performances purchased: 10

1990-2000:

Total funds: $500,000

1 house = $200,000

10,000 meals = $50,000

Remaining funds: $250,000

Performances purchased: 250,000

(Do you really listen to this much music? 250,000 performances over 10 years is about 70 songs per day. If each song is 3 minutes, that’s only about 3.5 hours per day. If you listen to music while you work or watch a couple of movies with musical scores, yes, you really do listen to this much music! The unrealistic part is assuming that people in 1950 listened to so little, given that radio was already widespread. But if you think of music as standing in for all media, the general trend of being able to consume vastly more media in the digital age is clearly correct.)

Now consider how we would compute a price index for each time period. We would construct a basket of goods and determine the price of that basket in each time period, then adjust prices until that basket has a constant price.

Here, the basket would probably be what people bought in 1940-1950: 1 house, 10,000 meals, and 400 music performances.

In 1950, this basket cost $10,000+$10,000+$100 = $21,000.

In 2000, this basket cost $200,000+$50,000+$400 = $150,400.

This means that our inflation adjustment is $150,400/$21,000 = 7 to 1. This means that we would estimate the real per-capita GDP in 1950 at about $14,700. And indeed, that’s about the actual estimate of real per-capita GDP in 1950.

So, what would we say about productivity?

Sales of houses in 1950 were 1 per person, costing 10,000 worker hours.

Sales of food in 1950 were 10,000 per person, costing 10,000 worker hours.

Sales of music in 1950 were 400 per person, costing 4,000 worker hours.

Worker hours per person are therefore 24,000.

Sales of houses in 2000 were 1 per person, costing 10,000 worker hours.

Sales of food in 2000 were 10,000 per person, costing 1,000 worker hours.

Sales of music in 2000 were 250,000 per person, costing 25,000 worker hours.

Worker hours per person are therefore 36,000.

Therefore we would estimate that productivity rose from $14,700/24,000 = $0.61 per worker-hour to $50,000/36,000 = $1.40 per worker-hour. This is an annual growth rate of about 1.7%, which is again, pretty close to the actual estimate of productivity growth. For such a highly stylized model, my figures are doing remarkably well. (Honestly, better than I thought they would!)

But think about how much actual productivity rose, at least in the industries where it did.

We produce 10 times as much food per worker hour after 50 years, which is an annual growth rate of 4.7%, or three times the estimated growth rate.

We produce 10,000 times as much music per worker hour after 50 years, which is an annual growth rate of over 20%, or almost twelve times the estimated growth rate.
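If you want to check the growth-rate arithmetic yourself, here is a quick sketch that just re-derives the figures quoted above from the stylized example’s own numbers:

```python
# Re-deriving the growth rates quoted above from the stylized example's own numbers.
def annual_growth(ratio, years):
    """Constant annual growth rate implied by a total 'ratio' increase over 'years' years."""
    return ratio ** (1 / years) - 1

years = 50

# Measured (GDP per worker-hour) productivity: $0.61/hour in 1950 -> $1.40/hour in 2000.
print(f"Measured productivity growth:  {annual_growth(1.40 / 0.61, years):.1%}")   # ~1.7%

# True physical productivity, industry by industry:
print(f"Food (10x more per hour):      {annual_growth(10, years):.1%}")            # ~4.7%
print(f"Music (10,000x more per hour): {annual_growth(10_000, years):.1%}")        # ~20%

# Consumption-side checks:
print(250_000 / (10 * 365))   # ~68 performances per day, about 3.5 hours at 3 minutes each
print(250_000 / 400)          # music consumption up by a factor of 625
```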

Moreover, should music producers be worried about losing their jobs to automation? Absolutely! People simply won’t be able to listen to much more music than they already are, so any continued increases in music productivity are going to make musicians lose jobs. And that was already allowing for music consumption to increase by a factor of over 600.

Of course, the real world has a lot more industries than this, and everything is a lot more complicated. We do actually substitute across some of those industries, unlike in this model.

But I hope I’ve gotten at least the basic point across that when things become drastically cheaper as technological progress often does, simply adjusting for inflation doesn’t do the job. One dollar of music today isn’t the same thing as one dollar of music a century ago, even if you inflation-adjust their dollars to match ours. We ought to be measuring in hours of music; an hour of music is much the same thing as an hour of music a century ago.

And likewise, that secretary/weather forecaster/news reporter/accountant/musician/filmmaker in your pocket that you call a “smartphone” really ought to be counted as more than just a simple inflation adjustment on its market price. The fact that it is mind-bogglingly cheaper to get these services than it used to be is the technological progress we care about; it’s not some statistical artifact to be removed by proper measurement.

Combine that with actually measuring the hours of real, productive work, and I think you’ll find that productivity is still rising quite rapidly, and that we should still be worried about what automation is going to do to our jobs.

Why do so many people equate “natural” with “good”?

Dec 3, JDN 2458091

Try searching sometime for “all-natural” products. It doesn’t matter whether you’re looking for dog food, skin cream, clothing, or even furniture polish; you will find some out there that proudly declare themselves “all-natural”. There is a clear sense that there is something good about being natural, some kind of purity that comes from being unsullied by industrial technology. (Of course, when you buy something online that is shipped to you in a box carried on a truck because it’s “all-natural”….)

Food is the most extreme case, where it is by now almost universally agreed that processed food is inherently harmful and the source of all of our dietary problems if not all our social ills.

This is a very strange state of affairs, as there is no particular reason for “natural” and “good” to be in any way related.

First of all, I can clearly come up with examples of all four possible cases: Motherhood is natural and good, but gamma ray bursts are natural and bad. Vaccination is artificial and good, but nuclear weapons are artificial and bad.

         Natural             Artificial
Good     Motherhood          Vaccination
Bad      Gamma ray bursts    Nuclear weapons

But even more than that, it’s difficult to even find a correlation between being natural and being good. If anything, I would expect the correlation to run the other way: Artificial things were created by humans to serve some human purpose, while natural things are simply whatever happens to exist. Most of the harmful artificial things are the result of mistakes, or unintended consequences of otherwise beneficial things—while plenty of harmful natural things are simply inherently harmful and never benefited anyone in any way. Nuclear weapons helped end World War 2. Gamma ray bursts will either hardly affect us at all, or instantly and completely annihilate our entire civilization. I guess they might also lead to some valuable discoveries in astrophysics, but if I were asked to fund a research project with the same risk-reward profile as a gamma ray burst, I would tear up the application and make sure no one else ever saw it again. The kind of irrational panic people had about the possibility of LHC black holes would be a rational panic if applied to a research project with some risk of causing gamma ray bursts.

The current obsession with “natural” products (which is really an oxymoron, if you think about it; it can’t be natural if it’s a product) seems to have arisen as its own unintended consequence of something good, namely the environmentalist movement in the 1960s and 1970s. The very real problems of pollution, natural resource depletion, extinction, global warming, desertification, and ocean acidification led people to rightly ask how the very same industrial processes that brought us our high standard of living could ultimately destroy it if we left them unchecked.

But the best solutions to these problems are themselves artificial: Solar power, nuclear energy, carbon taxes. Trying to go back to some ancient way of life where we didn’t destroy the environment is simply not a viable option at this point; even if such a way of life once existed, there’s no way it could sustain our current population, much less our current standard of living. And given the strong correlation between human migrations and extinction events of large mammals, I’m not convinced that such a way of life ever existed.

So-called “processed food” is really just industrially processed food—which is to say, food processed by the most efficient and productive technologies available. Humans have been processing food for thousands of years, and with very good reason; much of what we eat would be toxic if it weren’t threshed or boiled or fermented. The fact that there are people who complain about “processed food” but eat tofu and cheese is truly quite remarkable—think for a moment about how little resemblance Cheddar bears to the cow from whence it came, or what ingenuity it must have taken people in ancient China to go all the way from soybean to silken tofu. Similarly, anyone who is frightened by “genetically modified organisms” should give some serious thought to what is involved in creating their seedless bananas.

There may be some kernel of truth in the opposition to industrially processed food, however. The problem is not that we process food, nor that we do so by industrial machines. The problem is who processes the food, and why.

Humans have been processing food for thousands of years, yes; but only for the last few hundred have corporations been doing that processing. For most of human history, you processed food to feed your family, or your village, or perhaps to trade with a few neighboring villages or sell to the nearest city. What makes tofu different from, say, Froot Loops isn’t that the former is less processed; it’s that the latter was designed and manufactured for profit.

Don’t get me wrong; corporations have made many valuable contributions to our society, including our food production, and it is largely their doing that food is now so cheap and plentiful that we could easily feed the entire world’s population. It’s just that, well, it’s also largely their doing that we don’t feed the entire world’s population, because they see no profit in doing so.

The incentives that a peasant village faces in producing its food are pretty well optimized for making the most nutritious food available with the least cost in labor and resources. When your own children and those of your friends and neighbors are going to be eating what you make, you work pretty hard to make sure that the food you make is good for them. And you don’t want to pollute the surrounding water or destroy the forest, because your village depends upon those things too.

The incentives that a corporation faces in producing food are wildly different. Nobody you know is going to be eating this stuff, most likely, and certainly not as their primary diet. You aren’t concerned about nutrition unless you think your customers are; more likely, you expect them to care about taste, so you optimize your designs to make things taste as good as possible regardless of their nutrition. You care about minimizing labor inputs only insofar as they cost you wages—from your perspective, cutting wages is as good as actually saving labor. You want to conserve only the resources that are expensive; resources that are cheap, like water and (with subsidies) corn syrup, you may as well use as much as you like. And above all, you couldn’t care less about the environmental damage you’re causing by your production, because those costs will be borne entirely by someone else, most likely the government or the citizens of whatever country you’re producing in.

Responsible consumers could reduce these effects, but only somewhat, because there is a fundamental asymmetry of information. The corporation “knows” (in that each of the administrators in each of the components that needs to know, knows) what production processes they are using and what subcontractors they are hiring, and could easily figure out how much they are exploiting workers and damaging the environment; but the consumers who care about these things can find out that information with great difficulty, if at all. Consumers who want to be responsible, but don’t have very good information, create incentives for so-called “greenwashing”: Corporations have many good profit-making reasons to say they are environmentally responsible, but far fewer reasons to actually be environmentally responsible.

And that is why you should be skeptical of “all-natural” products, especially if you are skeptical of the role of corporations in our society and our food system. “All-natural” is an adjective that has no legal meaning. The word “organic” can have a legally-defined meaning, if coupled with a certification like the USDA Organic standard. The word “non-toxic” has a legally-defined meaning—there is a long list of toxic compounds it can’t contain in more than trace amounts. There are now certifications for “carbon-neutral”. But “all-natural” offers no such protection. Basically anything can call itself “all-natural”, and if corporations expect you to be willing to pay more for such products, they have no reason not to slap it on everything. This is a problem that I think can only be solved by stringent regulation. Consumer pressure can’t work if there is no transparency in the production chain.

Even taken as something like its common meaning, “not synthetic or artificial”, there’s no reason to think that simply because something is natural, that means it is better, or even more ecologically sustainable. The ecological benefits of ancient methods of production come from the incentives of small-scale local production, not from something inherently more destructive about high-tech industry. (Indeed, water pollution was considerably worse from Medieval peasant villages—especially on a per-capita basis—than it is from modern water treatment systems.)

What exactly is “gentrification”? How should we deal with it?

Nov 26, JDN 2458083

“Gentrification” is a word that is used in a variety of mutually-inconsistent ways. If you compare the way social scientists use it to the way journalists use it, for example, they are almost completely orthogonal.

The word “gentrification” is meant to invoke the concept of a feudal gentry—a hereditary landed class that extracts rents from the rest of the population while contributing little or nothing themselves.

If indeed that is what we are talking about, then obviously this is bad. Moreover, it’s not an entirely unfounded fear; there are some remarkably strong vestiges of feudalism in the developed world, even in the United States where we never formally had a tradition of feudal titles. There really is a significant portion of the world’s wealth held by a handful of billionaire landowner families.

But usually when people say “gentrification” they mean something much broader. Almost any kind of increase in urban real estate prices gets characterized as “gentrification” by at least somebody, and herein lies the problem.

In fact, the kind of change that is most likely to get characterized as “gentrification” isn’t even the rising real estate prices we should be most worried about. People aren’t concerned when the prices of suburban homes double in 20 years. You might think that things that are already too expensive getting more expensive would be the main concern, but on the contrary, people are most likely to cry “gentrification” when housing prices rise in poor areas where housing is cheap.

One of the most common fears about gentrification is that it will displace local residents. In fact, the best quasi-experimental studies show little or no displacement effect. It’s actually mainly middle-class urbanites who get displaced by rising rents. Poor people typically own their homes, and actually benefit from rising housing prices. Young upwardly-mobile middle-class people move to cities to rent apartments near where they work, and tend to assume that’s how everyone lives, but it’s not. Rising rents in a city are far more likely to push out its grad students than they are poor families that have lived there for generations. Part of why displacement does not occur may be because of policies specifically implemented to fight it, such as subsidized housing and rent control. If that’s so, let’s keep on subsidizing housing (though rent control will always be a bad idea).

Nor is gentrification actually a very widespread phenomenon. The majority of poor neighborhoods remain poor indefinitely. In most studies, only about 30% of neighborhoods classified as “gentrifiable” actually end up “gentrifying”. Less than 10% of the neighborhoods that had high poverty rates in 1970 had low poverty rates in 2010.

Most people think gentrification reduces crime, but in the short run the opposite is the case. Robbery and larceny are higher in gentrifying neighborhoods. Criminals are already there, and suddenly they get much more valuable targets to steal from, so they do.

There is also a general perception that gentrification involves White people pushing Black people out, but this is also an overly simplistic view. First of all, a lot of gentrification is led by upwardly-mobile Black and Latino people. Black people who live in gentrified neighborhoods seem to be better off than Black people who live in non-gentrified neighborhoods; though selection bias may contribute to this effect, it can’t be all that strong, or we’d observe a much stronger displacement effect. Moreover, some studies have found that gentrification actually tends to increase the racial diversity of neighborhoods, and may actually help fight urban self-segregation, though it does also tend to increase racial polarization by forcing racial mixing.

What should we conclude from all this? I think the right conclusion is we are asking the wrong question.

Rising housing prices in poor areas aren’t inherently good or inherently bad, and policies designed specifically to increase or decrease housing prices are likely to have harmful side effects. What we need to be focusing on is not houses or neighborhoods but people. Poverty is definitely a problem. Therefore we should be fighting poverty, not “gentrification”. Directly transfer wealth from the rich to the poor, and then let the housing market fall where it may.

There is still some role for government in urban planning more generally, regarding things like disaster preparedness, infrastructure development, and transit systems. It may even be worthwhile to design regulations or incentives that directly combat racial segregation at the neighborhood level, for, as the Schelling Segregation Model shows, it doesn’t take a large amount of discriminatory preference to have a large impact on socioeconomic outcomes. But don’t waste effort fighting “gentrification”; directly design policies that will incentivize desegregation.
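(Since I just invoked it: here is a minimal toy version of the Schelling model, with arbitrary parameters of my own choosing. It shows how agents who are each content with only 30% of their neighbors being like them can still end up in heavily sorted neighborhoods.)

```python
import numpy as np

rng = np.random.default_rng(0)

SIZE = 40            # 40x40 grid of housing lots
EMPTY_FRAC = 0.10    # fraction of lots left vacant
THRESHOLD = 0.30     # an agent is content if at least 30% of its neighbors match its group

# 0 = vacant; 1 and 2 = the two groups, in equal proportions.
grid = rng.choice([0, 1, 2], size=(SIZE, SIZE),
                  p=[EMPTY_FRAC, (1 - EMPTY_FRAC) / 2, (1 - EMPTY_FRAC) / 2])

def same_group_share(grid, i, j):
    """Share of occupied neighboring lots holding the same group as the agent at (i, j)."""
    me = grid[i, j]
    block = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel().tolist()
    block.remove(me)                      # drop the agent itself
    nbrs = [x for x in block if x != 0]   # ignore vacant lots
    if not nbrs:
        return 1.0                        # no neighbors: trivially content
    return sum(1 for x in nbrs if x == me) / len(nbrs)

def average_sorting(grid):
    shares = [same_group_share(grid, i, j)
              for i in range(SIZE) for j in range(SIZE) if grid[i, j] != 0]
    return float(np.mean(shares))

print("average same-group neighbor share, before:", round(average_sorting(grid), 3))

for sweep in range(100):
    unhappy = [(i, j) for i in range(SIZE) for j in range(SIZE)
               if grid[i, j] != 0 and same_group_share(grid, i, j) < THRESHOLD]
    if not unhappy:
        break
    vacant = [tuple(v) for v in np.argwhere(grid == 0)]
    for (i, j) in unhappy:                # each unhappy agent moves to a random vacant lot
        k = int(rng.integers(len(vacant)))
        vi, vj = vacant.pop(k)
        grid[vi, vj] = grid[i, j]
        grid[i, j] = 0
        vacant.append((i, j))

print("average same-group neighbor share, after: ", round(average_sorting(grid), 3))
```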

Rising rent as a proportion of housing prices is still bad, and the fundamental distortions in our mortgage system that prevent people from buying houses are a huge problem. But rising housing prices are most likely to be harmful in rich neighborhoods, where housing is already overpriced; in poor neighborhoods where housing is cheap, rising prices might well be a good thing.

In fact, I have a proposal to rapidly raise homeownership across the United States, which is almost guaranteed to work, directly corrects an enormous distortion in financial markets, and would cost about as much as the mortgage interest deduction (which should probably be eliminated, as most economists agree). Give each US adult a one-time grant voucher which gives them $40,000 that can only be spent as a down payment on purchasing a home. Each time someone turns 18, they get a voucher. You only get one over your lifetime, so use it wisely (otherwise the policy could become extremely expensive); but this is an immediate direct transfer of wealth that also reduces your credit constraint. I know I for one would be house-hunting right now if I were offered such a voucher. The mortgage interest deduction means nothing to me, because I can’t afford a down payment. Where the mortgage interest deduction is regressive, benefiting the rich more than the poor, this policy gives everyone the same amount, like a basic income.

In the short run, this policy would probably be expensive, as we’d have to pay out a large number of vouchers at once; but with our current long-run demographic trends, the amortized cost is basically the same as the mortgage interest deduction. And the US government especially should care about the long-run amortized cost, as it is an institution that has lasted over 200 years without ever missing a payment and can currently borrow at negative real interest rates.

Why risking nuclear war should be a war crime

Nov 19, JDN 2458078

“What is the value of a human life?” is a notoriously difficult question, probably because people keep trying to answer it in terms of dollars, and it rightfully offends our moral sensibilities to do so. We shouldn’t be valuing people in terms of dollars—we should be valuing dollars in terms of their benefits to people.

So let me ask a simpler question: Does the value of an individual human life increase, decrease, or stay the same, as we increase the number of people in the world?

A case can be made that it should stay the same: Why should my value as a person depend upon how many other people there are? Everything that I am, I still am, whether there are a billion other people or a thousand.

But in fact I think the correct answer is that it decreases. This is for two reasons: First, anything that I can do is less valuable if there are other people who can do it better. This is true whether we’re talking about writing blog posts or ending world hunger. Second, and most importantly, if the number of humans in the world gets small enough, we begin to face danger of total permanent extinction.

If the value of a human life is constant, then 1,000 deaths is equally bad whether it happens in a population of 10,000 or a population of 10 billion. That doesn’t seem right, does it? It seems more reasonable to say that losing ten percent should have a roughly constant effect; in that case losing 1,000 people in a population of 10,000 is equally bad as losing 1 billion in a population of 10 billion. If that seems too strong, we could choose some value in between, and say perhaps that losing 1,000 out of 10,000 is equally bad as losing 1 million out of 1 billion. This would mean that the value of 1 person’s life today is about 1/1,000 of what it was immediately after the Toba Event.
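For the mathematically inclined, here is one way to make that interpolation precise; this is just my own formalization of the paragraph above, not a standard result. Suppose the badness of D deaths in a population of N is proportional to D times a per-life value V(N) proportional to N^(−α):

```latex
% alpha = 0 is the constant-value view; alpha = 1 is the constant-fraction view.
% The intermediate assumption above (1,000 of 10,000 is as bad as 1 million of 1 billion)
% pins down alpha:
\[
  10^{3}\,(10^{4})^{-\alpha} \;=\; 10^{6}\,(10^{9})^{-\alpha}
  \;\Longrightarrow\; 10^{5\alpha} = 10^{3}
  \;\Longrightarrow\; \alpha = \tfrac{3}{5},
\]
\[
  \frac{V(10^{4})}{V(10^{9})} \;=\; \left(\frac{10^{9}}{10^{4}}\right)^{3/5} \;=\; 10^{3},
\]
% which is the factor of roughly 1,000 between a post-Toba population of about ten thousand
% and a population measured in the billions.
```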

Of course, with such uncertainty, perhaps it’s safest to assume constant value. This seems the fairest, and it is certainly a reasonable approximation.

In any case, I think it should be obvious that the inherent value of a human life does not increase as you add more human lives. Losing 1,000 people out of a population of 7 billion is not worse than losing 1,000 people out of a population of 10,000. That way lies nonsense.

Yet if we agree that the value of a human life is not increasing, this has a very important counter-intuitive consequence: It means that increasing the risk of a global catastrophe is at least as bad as causing a proportional number of deaths. Specifically, it implies that a 1% risk of global nuclear war is worse than killing 10 million people outright.

The calculation is simple: If the value of a human life is a constant V, then the expected utility (admittedly, expected utility theory has its flaws) from killing 10 million people is -10 million V. But the expected utility from a 1% risk of global nuclear war is 1% times -V times the expected number of deaths from such a nuclear war—and I think even 2 billion is a conservative estimate. (0.01)(-2 billion) V = -20 million V.

This probably sounds too abstract, or even cold, so let me put it another way. Suppose we had the choice between two worlds, and these were the only worlds we could choose from. In world A, there are 100 leaders who each make choices that result in 10 million deaths. In world B, there are 100 leaders who each make choices that result in a 1% chance of nuclear war. Which world should we choose?

The choice is a terrible one, to be sure.

In world A, 1 billion people die.

Yet what happens in world B?

If the risks are independent, we can’t just multiply by 100 to get a guarantee of nuclear war. The actual probability is 1-(1-0.01)^100 = 63%. Yet even so, (0.63)(2 billion) = 1.26 billion. The expected number of deaths is higher in world B. Indeed, the most likely scenario is that 2 billion people die.
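A quick sketch to check those numbers (the 2 billion casualty figure is the same conservative assumption as above):

```python
# Probability that at least one of 100 independent 1% risks goes off, and the expected deaths.
p_war = 1 - (1 - 0.01) ** 100
deaths_if_war = 2_000_000_000        # conservative assumption from above
print(f"P(at least one nuclear war): {p_war:.0%}")                     # ~63%
print(f"Expected deaths in world B:  {p_war * deaths_if_war:,.0f}")    # ~1.27 billion
print(f"Deaths in world A:           {100 * 10_000_000:,}")            # 1 billion
```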

Yet this is probably too conservative. The risks are most likely positively correlated; two world leaders who each take a 1% chance of nuclear war probably do so in response to one another. Therefore maybe adding up the chances isn’t actually so unreasonable—for all practical intents and purposes, we may be best off considering nuclear war in world B as guaranteed to happen. In that case, world B is even worse.

And that is all assuming that the nuclear war is relatively contained. Major cities are hit, then a peace treaty is signed, and we manage to rebuild human civilization more or less as it was. This is what most experts on the issue believe would happen; but I for one am not so sure. The nuclear winter and total collapse of institutions and infrastructure could produce a global apocalypse resulting in human extinction—not 2 billion deaths but 7 billion, and an end to all of humanity’s projects once and forever. This is the kind of outcome we should be prepared to do almost anything to prevent.

What does this imply for global policy? It means that we should be far more aggressive in punishing any action that seems to bring the world closer to nuclear war. Even tiny increases in risk, of the sort that would ordinarily be considered negligible, are as bad as murder. A measurably large increase is as bad as genocide.

Of course, in practice, we have to be able to measure something in order to punish it. We can’t have politicians imprisoned over 0.000001% chances of nuclear war, because such a chance is so tiny that there would be no way to attain even reasonable certainty that such a change had even occurred, much less who was responsible.

Even for very large chances—and in this context, 1% is very large—it would be highly problematic to directly penalize increasing the probability, as we have no consistent, fair, objective measure of that probability.

Therefore in practice what I think we must do is severely and mercilessly penalize certain types of actions that would be reasonably expected to increase the probability of catastrophic nuclear war.

If we had the chance to start over from the Manhattan Project, maybe simply building a nuclear weapon should be considered a war crime. But at this point, nuclear proliferation has already proceeded far enough that this is no longer a viable option. At least the US and Russia for the time being seem poised to maintain their nuclear arsenals, and in fact it’s probably better for them to keep maintaining and updating them rather than leaving decades-old ICBMs to rot.

What can we do instead?

First, we probably need to penalize speech that would tend to incite war between nuclear powers. Normally I am fiercely opposed to restrictions on speech, but this is nuclear war we’re talking about. We can’t take any chances on this one. If there is even a slight chance that a leader’s rhetoric might trigger a nuclear conflict, they should be censored, punished, and probably even imprisoned. Making even a veiled threat of nuclear war is like pointing a gun at someone’s head and threatening to shoot them—only the gun is pointed at everyone’s head simultaneously. This isn’t just yelling “fire” in a crowded theater; it’s literally threatening to burn down every theater in the world at once.

Such a regulation must be designed to allow speech that is necessary for diplomatic negotiations, as conflicts will invariably arise between any two countries. We need to find a way to draw the line so that it’s possible for a US President to criticize Russia’s intervention in the Ukraine or for a Chinese President to challenge US trade policy, without being accused of inciting war between nuclear powers. But one thing is quite clear: Wherever we draw that line, President Trump’s statement about “fire and fury” definitely crosses it. This is a direct threat of nuclear war, and it should be considered a war crime. That reason by itself—let alone his web of Russian entanglements and violations of the Emoluments Clause—should be sufficient to not only have Trump removed from office, but to have him tried at the Hague. Impulsiveness and incompetence are no excuse when weapons of mass destruction are involved.

Second, any nuclear policy that would tend to increase first-strike capability rather than second-strike capability should be considered a violation of international law. In case you are unfamiliar with such terms: First-strike capability consists of weapons such as ICBMs that are only viable to use as the opening salvo of an attack, because their launch sites can be easily located and targeted. Second-strike capability consists of weapons such as submarines that are more concealable, so it’s much more likely that they could wait for an attack to happen, confirm who was responsible and how much damage was done, and then retaliate afterward.

Even that retaliation would be difficult to justify: It’s effectively answering genocide with genocide, the ultimate expression of “an eye for an eye” writ large upon humanity’s future. I’ve previously written about my Credible Targeted Conventional Response strategy that makes it both more ethical and more credible to respond to a nuclear attack with a non-nuclear retaliation. But at least second-strike weapons are not inherently only functional at starting a nuclear war. A first-strike weapon can theoretically be fired in response to a surprise attack, but only before the attack hits you—which gives you literally minutes to decide the fate of the world, most likely with only the sketchiest of information upon which to base your decision. Second-strike weapons allow deliberation. They give us a chance to think carefully for a moment before we unleash irrevocable devastation.

All the launch codes should of course be randomized one-time pads for utmost security. But in addition to the launch codes themselves, I believe that anyone who wants to launch a nuclear weapon should be required to type, letter by letter (no copy-pasting), and then have the machine read aloud, Oppenheimer’s line about Shiva, “Now I am become Death, the destroyer of worlds.” Perhaps the passphrase should conclude with something like “I hereby sentence millions of innocent children to death by fire, and millions more to death by cancer.” I want it to be as salient as possible in the heads of every single soldier and technician just exactly how many innocent people they are killing. And if that means they won’t turn the key—so be it. (Indeed, I wouldn’t mind if every Hellfire missile required a passphrase of “By the authority vested in me by the United States of America, I hereby sentence you to death or dismemberment.” Somehow I think our drone strike numbers might go down. And don’t tell me they couldn’t; this isn’t like shooting a rifle in a firefight. These strikes are planned days in advance and specifically designed to be unpredictable by their targets.)

If everyone is going to have guns pointed at each other, at least in a second-strike world they’re wearing body armor and the first one to pull the trigger won’t automatically be the last one left standing.

Third, nuclear non-proliferation treaties need to be strengthened into disarmament treaties, with rapid but achievable timelines for disarmament of all nuclear weapons, starting with the nations that have the largest arsenals. Random inspections of the disarmament should be performed without warning on a frequent schedule. Any nation that is so much as a day late on their disarmament deadlines needs to have its leaders likewise hauled off to the Hague. If there is any doubt at all in your mind whether your government will meet its deadlines, you need to double your disarmament budget. And if your government is too corrupt or too bureaucratic to meet its deadlines even if they try, well, you’d better shape up fast. We’ll keep removing and imprisoning your leaders until you do. Once again, nothing can be left to chance.

We might want to maintain some small nuclear arsenal for the sole purpose of deflecting asteroids from colliding with the Earth. If so, that arsenal should be jointly owned and frequently inspected by both the United States and Russia—not just the nuclear superpowers, but also the only two nations with sufficient rocket launch capability in any case. The launch of the deflection missiles should require joint authorization from the presidents of both nations. But in fact nuclear weapons are probably not necessary for such a deflection; nuclear rockets would probably be a better option. Vaporizing the asteroid wouldn’t accomplish much, even if you could do it; what you actually want to do is impart as much sideways momentum as possible.

What I’m saying probably sounds extreme. It may even seem unjust or irrational. But look at those numbers again. Think carefully about the value of a human life. When we are talking about a risk of total human extinction, this is what rationality looks like. Zero tolerance for drug abuse or even terrorism is a ridiculous policy that does more harm than good. Zero tolerance for risk of nuclear war may be the only hope for humanity’s ongoing survival.

Throughout the vastness of the universe, there are probably billions of civilizations—I need only assume one civilization for every hundred galaxies. Of the civilizations that were unwilling to adopt zero tolerance policies on weapons of mass destruction and bear any cost, however unthinkable, to prevent their own extinction, there is almost boundless diversity, but they all have one thing in common: None of them will exist much longer. The only civilizations that last are the ones that refuse to tolerate weapons of mass destruction.