“DSGE or GTFO”: Macroeconomics took a wrong turn somewhere

Dec 31, JDN 2458119

“The state of macro is good,” wrote Olivier Blanchard—in August 2008. This is rather like the turkey remarking on how well the farmer has been feeding him lately, the day before Thanksgiving.

It’s not easy to say exactly where macroeconomics went wrong, but I think Paul Romer is right when he draws the analogy between DSGE (dynamic stochastic general equilibrium) models and string theory. They are mathematically complex and difficult to understand, and people can make their careers by being the only ones who grasp them; therefore they must be right! Never mind that they have no empirical support whatsoever.

To be fair, DSGE models are at least a little better than string theory; they can at least be fit to real-world data, which is more than string theory can say. But being fit to data and actually predicting data are fundamentally different things, and DSGE models typically forecast no better than far simpler models without their bold assumptions. You don’t need to assume all this stuff about a “representative agent” maximizing a well-defined utility function, or an Euler equation (that doesn’t even fit the data), or an ever-proliferating list of “random shocks” that end up taking up all the degrees of freedom your model was supposed to explain. Just regressing the variables on a few years of previous values of each other (a “vector autoregression” or VAR) generally gives you an equally good forecast. The fact that these models can be made to fit the data well if you add enough degrees of freedom doesn’t actually make them good models. As von Neumann warned us, with enough free parameters, you can fit an elephant.
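
If you’re not familiar with VARs, here is a minimal sketch of what I mean, using Python’s statsmodels. The series names, the stand-in random data, and the four-lag specification are all illustrative assumptions, not a serious specification; a real exercise would use actual quarterly macro data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
# Stand-in data; in practice you'd pull real GDP growth, inflation,
# and the policy rate from FRED or similar.
data = pd.DataFrame({
    "gdp_growth": rng.normal(0.5, 0.5, 200),
    "inflation": rng.normal(0.6, 0.3, 200),
    "fed_funds": rng.normal(1.2, 0.8, 200),
})

model = VAR(data)
results = model.fit(4)  # regress each variable on 4 lags of all three

# Forecast 8 quarters ahead from the last 4 observations.
forecast = results.forecast(data.values[-results.k_ar:], steps=8)
print(forecast)
```

That’s the whole model: no representative agent, no Euler equation, just lags.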

But really what bothers me is not the DSGE but the GTFO (“get the [expletive] out”); it’s not that DSGE models are used, but that it’s almost impossible to get published as a macroeconomic theorist using anything else. Defenders of DSGE typically don’t even argue anymore that it is good; they argue that there are no credible alternatives. They characterize their opponents as “dilettantes” who aren’t opposing DSGE because we disagree with it; no, it must be because we don’t understand it. (Also, regarding that post, I’d just like to note that I now officially satisfy the Athreya Axiom of Absolute Arrogance: I have passed my qualifying exams in a top-50 economics PhD program. Yet my enmity toward DSGE has, if anything, only intensified.)

Of course, that argument only makes sense if you haven’t been actively suppressing all attempts to formulate an alternative, which is precisely what DSGE macroeconomists have been doing for the last two or three decades. And yet despite this suppression, alternatives are emerging, particularly on the empirical side. Regression discontinuity methods and other “natural experiment” designs—not to mention actual experiments—are quickly rising in popularity as economists realize that these methods allow us to actually test our models empirically instead of just adding more and more mathematical complexity to them.

But there still seems to be a lingering attitude that there is no other way to do macro theory. This is very frustrating for me personally, because deep down I think what I would like to do as a career is macro theory: By temperament I have always viewed the world through a very abstract, theoretical lens, and the issues I care most about—particularly inequality, development, and unemployment—are all fundamentally “macro” issues. I left physics when I realized I would be expected to do string theory. I don’t want to leave economics now that I’m expected to do DSGE. But I also definitely don’t want to do DSGE.

Fortunately with economics I have a backup plan: I can always be an “applied microeconomist” (rather the opposite of a theoretical macroeconomist, I suppose), directly attached to the data in the form of empirical analyses or even direct, randomized controlled experiments. And there certainly is plenty of work to be done along the lines of Akerlof and Roth and Shiller and Kahneman and Thaler in cognitive and behavioral economics, which is also generally considered applied micro. I was never going to be an experimental physicist, but I can be an experimental economist. And I do get to use at least some theory: In particular, there’s an awful lot of game theory in experimental economics these days. Some of the most exciting work shows how human beings don’t behave the way classical game theory predicts (particularly in the Ultimatum Game and the Prisoner’s Dilemma), and tries to extend game theory into something that fits our actual behavior. Cognitive science suggests that the result is going to look quite different from game theory as we know it, and with my cognitive science background I may be particularly well-positioned to lead that charge.

Still, I don’t think I’ll be entirely satisfied if I can’t somehow bring my career back around to macroeconomic issues, and particularly the great elephant in the room of all economics, which is inequality. Underlying everything from Marxism to Trumpism, from the surging rents in Silicon Valley and the crushing poverty of Burkina Faso, to the Great Recession itself, is inequality. It is, in my view, the central question of economics: Who gets what, and why?

That is a fundamentally macro question, but you can’t even talk about that issue in DSGE as we know it; a “representative agent” inherently smooths over all inequality in the economy as though total GDP were all that mattered. A fundamentally new approach to macroeconomics is needed. Hopefully I can be part of that, but from my current position I don’t feel much empowered to fight this status quo. Maybe I need to spend at least a few more years doing something else, making a name for myself, and then I’ll be able to come back to this fight with a stronger position.

In the meantime, I guess there’s plenty of work to be done on cognitive biases and deviations from game theory.

Influenza vaccination, herd immunity, and the Tragedy of the Commons

Dec 24, JDN 2458112

Usually around this time of year I do a sort of “Christmas special” blog post, something about holidays or gifts. But this year I have a rather different seasonal idea in mind. It’s not just the holiday season; it’s also flu season.

Each year, influenza kills over 56,000 people in the US, and between 300,000 and 600,000 people worldwide, mostly in the winter months. And yet, in any given year, only about 40% of adults and 60% of children get the flu vaccine.

The reason for this should be obvious to any student of economics: It’s a Tragedy of the Commons. If enough people got vaccinated that we attained reliable herd immunity (which would take about 90% coverage), then almost nobody would get influenza, and the death rate would plummet. But for any given individual, the vaccine is actually not all that effective. Your risk of getting the flu only drops by about half if you receive the vaccine. The effectiveness is particularly low among the elderly, who are also at the highest risk for serious complications due to influenza.

Thus, for any given individual, the incentive to get vaccinated isn’t all that strong, even though society as a whole would be much better off if we all got vaccinated. Your probability of suffering serious complications from influenza is quite low, and wouldn’t be reduced all that much if you got the vaccine; so even though flu vaccines aren’t that costly in terms of time, money, discomfort, and inconvenience, the cost is just high enough that a lot of us don’t bother to get the shot each year.
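
To see the size of the problem, here is a back-of-the-envelope version of that incentive calculation. Every number in it is an illustrative assumption, not an estimate:

```python
# All figures are made-up but plausible orders of magnitude.
flu_risk = 0.10                # chance an unvaccinated person catches the flu
vaccine_effectiveness = 0.5    # the shot cuts your risk roughly in half
cost_of_flu = 300.0            # private cost of a week of misery, in dollars
cost_of_shot = 25.0            # money, time, and inconvenience of the shot

# Private benefit: your own reduced risk, and nothing else.
private_benefit = flu_risk * vaccine_effectiveness * cost_of_flu
print(private_benefit)         # 15.0 < 25.0, so a selfish agent skips the shot

# Social benefit: add the infections you would have passed on.
secondary_infections = 1.5     # assumed infections per case you don't cause
social_benefit = private_benefit * (1 + secondary_infections)
print(social_benefit)          # 37.5 > 25.0, so society wants you vaccinated
```

The exact numbers don’t matter; what matters is that the private benefit can easily fall below the private cost even when the social benefit is well above it.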

On an individual level, my advice is simple: Go get a flu shot. Don’t do it just for yourself; do it for everyone around you. You are protecting the most vulnerable people in our society.

But if we really want everyone to get vaccinated, we need a policy response. I can think of two policies that might work, which can be broadly called a “stick” and a “carrot”.

The “stick” approach would be to make vaccination mandatory, as it already is for many childhood vaccines. Some sort of penalty would have to be introduced, but that’s not the real challenge. The real challenge would be how to actually enforce that penalty: How do we tell who is vaccinated and who isn’t?

When schools make vaccination mandatory, they require vaccination records for admission. It would be simple enough to add annual flu vaccines to the list of required shots for high schools and colleges (though no doubt the anti-vax crowd would make a ruckus). But can you make vaccination mandatory for work? That seems like a much larger violation of civil liberties. Alternatively, we could require that people submit medical records with their tax returns to avoid a tax penalty—but the privacy violations there are quite substantial as well.

Hence, I would favor the “carrot” approach: Use government subsidies to provide a positive incentive for vaccination. Don’t simply make vaccination free; actually pay people to get vaccinated. Make the subsidy larger than the actual cost of the shots, and require that the doctors and pharmacies administering them remit the extra to the customers. Something like $20 per shot ought to do it; since the cost of the shots is also around $20, vaccinating the full 300 million people of the United States every year would cost about $12 billion. This is less than the estimated economic cost of influenza, so it would essentially pay for itself.
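
The arithmetic, spelled out (using the round figures above):

```python
population = 300_000_000   # everyone in the US, approximately
cost_per_shot = 20.0       # rough cost of the vaccine itself, in dollars
subsidy_per_shot = 20.0    # proposed cash payment to the patient

total_cost = population * (cost_per_shot + subsidy_per_shot)
print(f"${total_cost / 1e9:.0f} billion per year")  # $12 billion
```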

$20 isn’t a lot of money for most people; but then, like I said, the time and inconvenience of a flu shot aren’t that large either. There have been moderately successful (but expensive) programs incentivizing doctors to perform vaccinations, but that’s stupid; frankly I’m amazed it worked at all. It’s patients who need to be incentivized. Doctors will give you a flu shot if you ask them. The problem is that most people don’t ask.

Do this, and we could potentially save tens of thousands of lives every year, for essentially zero net cost. And that sounds to me like a Christmas wish worth making.

The Irvine Company needs some serious antitrust enforcement

Dec 17, JDN 2458105

I probably wouldn’t even have known about this issue if I hadn’t ended up living in Irvine.

The wealthiest real estate magnate in the United States is Donald Bren, sole owner of the Irvine Company. His net wealth is estimated at $15 billion, which puts him behind the likes of Jeff Bezos or Bill Gates, but well above Donald Trump even at his most optimistic estimates.

Where did he get all this wealth?

The Irvine Company isn’t even particularly shy about its history, though of course they put a positive spin on it. Right there on their own website they talk about how it used to be a series of ranches farmed by immigrants. Look a bit deeper into their complaints about “squatters” and it becomes apparent that the main reason they were able to get so rich is that the immigrant tenant farmers who worked their land were barred by law from owning real estate. (Not to mention that the land was originally taken from Native American tribes, as most of the land in the US was.) Then of course the land has increased in price and been passed down from generation to generation.

This isn’t capitalism. Capitalism requires a competitive market with low barriers to entry and trade in real physical capital—machines, vehicles, factories. The ownership of land by a single family that passes down its title through generations while extracting wealth from tenant farmers who aren’t allowed to own anything has another name. We call it feudalism.

The Irvine Company is privately held, and thus not required to publish its finances the way a publicly traded company would be, so I can’t tell you exactly what assets it owns or how much profit it makes. But I can tell you that it owns over 57,000 housing units—and there are only 96,000 housing units in the city of Irvine, so they literally own about 60% of the city’s housing. They don’t just own houses either; they also own most of the commercial districts, parks, and streets.

As a proportion of all the housing in the United States, that isn’t so much. Even compared to Southern California (one of the most densely populated regions in North America), it may not seem all that extravagant. But within the city of Irvine itself, this is getting dangerously close to a monopoly. Housing is expensive all over California, so the Irvine Company can’t be entirely blamed—but is it really that hard to believe that letting one company own 60% of your city’s housing is going to increase rents?

This is the sort of thing that calls for a bold and unequivocal policy response. The Irvine Company should be forced to subdivide itself into multiple companies—perhaps Irvine Residential, Irvine Commercial, and Irvine Civic—and those companies should then be made publicly traded, with a majority of their shares immediately distributed to the residents of the city. Unlike most land reform proposals, selecting who gets shares is actually quite straightforward: Anyone who pays rent on an Irvine Company property receives a share.

Land reform has a checkered history to say the least, which is probably why policymakers are reluctant to take this sort of approach. But this is a land reform that could be handled swiftly, by a very simple mechanism, with very clear rules. Moreover, it is entirely within the rule of law, as the Irvine Company is obviously at this point an illegitimate monopoly in violation of the Sherman Antitrust Act, the Clayton Antitrust Act, and the Federal Trade Commission Act. The Herfindahl-Hirschman Index for real estate in the city of Irvine would be at least 3600 (a 60% market share contributes 60² = 3600 all by itself), well over the threshold of 2500 that FTC guidelines consider prima facie evidence of an antitrust violation in the market. Formally, the land reform could be accomplished by collecting damages in an amount sufficient to purchase the shares at the (mandatory) IPO, with those damages then paid out in shares to the residents of Irvine. The FTC is also empowered to bring criminal charges if necessary.
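
For anyone who hasn’t seen the HHI before: you square each firm’s market share (in percentage points) and sum. The breakdown of the remaining 40% below is an assumption for illustration; only the roughly 60% share comes from the figures above.

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares."""
    return sum(s ** 2 for s in shares_percent)

print(hhi([60]))             # 3600: the Irvine Company's share alone
print(hhi([60] + [1] * 40))  # 3640: even with forty tiny 1% competitors
print(hhi([60]) > 2500)      # True: "highly concentrated" by FTC standards
```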

Oddly, most of the talk about the Irvine Company among residents of Irvine centers on its detailed policy decisions: whether expanding road X was a good idea, or how you feel about the fact that they built complex Y. (There’s also a bizarre reverence for the Irvine Master Plan; people speak of it as if it were the US Constitution, when it’s actually more like Amazon.com’s five-year revenue targets. This is a for-profit company. Their plan is about taking your money.) This is rather like debating whether or not you have a good king; even if you do, you’re still a feudal subject. No single individual or corporation should have that kind of power over the population of an entire city. This is not a small city, either; Irvine has about three-quarters of the population of Iceland, or a third the population of Boston. Take half of Donald Bren’s $15 billion, divide it evenly over the 250,000 people of the city, and each one gets $30,000. That’s a conservative estimate of how much monopolistic rent the Irvine Company has extracted from the people of Irvine.

By itself, redistributing the assets of the Irvine Company wouldn’t solve the problem of high rents in Southern California. But I think it would help, and I’m honestly having trouble seeing the downsides. The only people who seem to be harmed are billionaires who inherited wealth that was originally extracted from serfs. Like I said, this is within the law, and wouldn’t require new legislation. We would only need to aggressively enforce laws that have been on the books for a century. It doesn’t even seem like it should be politically unpopular, as you’re basically giving a check for tens of thousands of dollars to each voting resident in the city.

Of course, it won’t happen. As usual, I’m imagining more justice in the world than there actually has ever been.

The “productivity paradox”

Dec 10, JDN 2458098

Take a look at this graph of manufacturing output per worker-hour:

[Graph: US manufacturing output per worker-hour]

From 1988 to 2008, it was growing at a steady pace. In 2008 and 2009 it took a dip due to the Great Recession; no big surprise there. But then since 2012 it has been… completely flat. If we take this graph at face value, it would imply that manufacturing workers today can produce no more output than workers five years ago, and indeed only about 10% more than workers a decade ago. Whereas, a worker in 2008 was producing over 60% more than a worker in 1998, who was producing over 40% more than a worker in 1988.

Many economists call this the “productivity paradox”, and use it to argue that we don’t really need to worry about robots taking all our jobs any time soon. I think this view is mistaken.

The way we measure productivity is fundamentally wrongheaded, and is probably the sole cause of this “paradox”.

First of all, we use total hours scheduled to work, not total hours actually doing productive work. This is obviously much, much easier to measure, which is why we do it. But if you think for a moment about how the 40-hour workweek norm is going to clash with rapidly rising real productivity, it becomes apparent why this isn’t going to be a good measure.

When a worker finds a way to get done in 10 hours what used to take 40 hours, what does that worker’s boss do? Send them home after 10 hours because the job is done? Give them a bonus for their creativity? Hardly. That would be far too rational. No: the boss assigns them more work, while paying them exactly the same. Recognizing this, what is such a worker to do? The obvious answer is to pretend to work the other 30 hours, while in fact doing something more pleasant than working.

And indeed, so-called “worker distraction” has been rapidly increasing. People are right to blame smartphones, I suppose, but not for the reasons they think. It’s not that smartphones are inherently distracting devices. It’s that smartphones are the cutting edge of a technological revolution that has made most of our work time unnecessary, and, given our fundamentally defective management norms, this creates overwhelming incentives to waste time at work rather than get drenched in extra tasks for no extra pay.

That would probably be enough to explain the “paradox” by itself, but there is a deeper reason that in the long run is even stronger. It has to do with the way we measure “output”.

It might surprise you to learn that economists almost never consider output in terms of the actual number of cars produced, buildings constructed, songs written, or software packages developed. The standard measures of output are all in the form of so-called “real GDP”; that is, the dollar value of output produced.

They do adjust for inflation using price indexes, but as I’ll show in a moment, this still creates a fundamentally biased picture of the productivity dynamics.

Consider a world with only three industries: Housing, Food, and Music.

Productivity in Housing doesn’t change at all. Producing a house cost 10,000 worker-hours in 1950, and still cost 10,000 worker-hours in 2000. The nominal price of houses rose rapidly, from $10,000 in 1950 to $200,000 in 2000.

Productivity in Food rises moderately fast. Producing 1,000 meals cost 1,000 worker-hours in 1950, and cost 100 worker-hours in 2000. The nominal price of food rose slowly, from $1,000 per 1,000 meals in 1950 to $5,000 per 1,000 meals in 2000.

Productivity in Music rises extremely fast. Producing 1,000 performances cost 10,000 worker-hours in 1950, and cost 1 worker-hour in 2000. The nominal price of music collapsed, from $100,000 per 1,000 performances in 1950 to $1,000 per 1,000 performances in 2000.

This is of course an extremely stylized version of what has actually happened: Housing has gotten way more expensive, food has stayed about the same in price while farm employment has plummeted, and the rise of digital music has brought about a new Renaissance in actual music production and listening while revenue for the music industry has collapsed. There is a very nice Vox article on the “productivity paradox” showing a graph of how prices have changed in different industries.

How would productivity appear in the world I’ve just described, by standard measures? Well, to say that, I first need to say something about how consumers substitute across industries. But I think I’ll be forgiven in this case for saying that there is no substitution whatsoever; you can’t eat music or live in a burrito. There’s also a clear Maslow hierarchy here: They say that man cannot live by bread alone, but I think living by Led Zeppelin alone is even harder.

Consumers will therefore choose like this: Over 10 years, buy 1 house, 10,000 meals, and as many performances as you can afford after that. Further suppose that each person had $2,100 per year to spend in 1940-1950, and $50,000 per year to spend in 1990-2000. (This is approximately true for actual nominal US GDP per capita.)

1940-1950:

Total funds: $21,000

1 house = $10,000

10,000 meals = $10,000

Remaining funds: $1,000

Performances purchased: 10

1990-2000:

Total funds: $500,000

1 house = $200,000

10,000 meals = $50,000

Remaining funds: $250,000

Performances purchased: 250,000

(Do you really listen to this much music? 250,000 performances over 10 years is about 70 songs per day. If each song is 3 minutes, that’s only about 3.5 hours per day. If you listen to music while you work or watch a couple of movies with musical scores, yes, you really do listen to this much music! The unrealistic part is assuming that people in 1950 listen to so little, given that radio was already widespread. But if you think of music as standing in for all media, the general trend of being able to consume vastly more media in the digital age is clearly correct.)
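
Here is the budgeting exercise above as a quick Python check (the prices and incomes are just the stylized figures from this example):

```python
def performances_bought(decade_income, house_price, price_per_meal,
                        price_per_performance):
    """Buy 1 house and 10,000 meals, then spend the rest on music."""
    remaining = decade_income - house_price - 10_000 * price_per_meal
    return remaining / price_per_performance

# 1940-1950: $21,000 per decade; $10,000 house; $1 meals; $100 performances
print(performances_bought(21_000, 10_000, 1.0, 100.0))    # 10.0
# 1990-2000: $500,000 per decade; $200,000 house; $5 meals; $1 performances
print(performances_bought(500_000, 200_000, 5.0, 1.0))    # 250000.0
```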

Now consider how we would compute a price index for each time period. We would construct a basket of goods and determine the price of that basket in each time period, then adjust prices until that basket has a constant price.
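
In code, the method looks something like this (a generic sketch of a fixed-basket, Laspeyres-style index, not any official formula):

```python
def inflation_factor(basket, prices_then, prices_now):
    """Ratio of the basket's cost now to its cost then."""
    cost_then = sum(qty * prices_then[good] for good, qty in basket.items())
    cost_now = sum(qty * prices_now[good] for good, qty in basket.items())
    return cost_now / cost_then

# Usage: pass the base-period basket plus a price dictionary for each
# period; dividing nominal GDP by the result converts it to base-period
# ("real") dollars.
```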

Here, the basket would probably be what people bought in 1940-1950: 1 house, 10,000 meals, and 400 music performances.

In 1950, this basket cost $10,000 + $10,000 + $1,000 = $21,000.

In 2000, this basket cost $200,000+$50,000+$400 = $150,400.

This means that our inflation adjustment is $150,400/$21,000, or about 7 to 1, so we would estimate the real per-capita GDP in 1950 at about $14,700 in 2000 dollars. And indeed, that’s about the actual estimate of real per-capita GDP in 1950.

So, what would we say about productivity?

Sales of houses in 1950 were 1 per person, costing 10,000 worker hours.

Sales of food in 1950 were 10,000 per person, costing 10,000 worker hours.

Sales of music in 1950 were 400 per person, costing 4,000 worker hours.

Worker hours per person are therefore 24,000.

Sales of houses in 2000 were 1 per person, costing 10,000 worker hours.

Sales of food in 2000 were 10,000 per person, costing 1,000 worker hours.

Sales of music in 2000 were 250,000 per person, costing 25,000 worker hours.

Worker hours per person are therefore 36,000.

Therefore we would estimate that productivity rose from $14,700/24,000 = $0.61 per worker-hour to $50,000/36,000 = $1.40 per worker-hour. This is an annual growth rate of about 1.7%, which is, again, pretty close to the actual estimate of productivity growth. For such a highly stylized model, my figures are doing remarkably well. (Honestly, better than I thought they would!)

But think about how much actual productivity rose, at least in the industries where it did.

We produce 10 times as much food per worker hour after 50 years, which is an annual growth rate of 4.7%, or three times the estimated growth rate.

We produce 10,000 times as much music per worker hour after 50 years, which is an annual growth rate of over 20%, or almost twelve times the estimated growth rate.
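
You can verify those annualized rates yourself; they just come from taking the 50th root of each total productivity ratio:

```python
def annual_growth(ratio, years=50):
    """Annualized growth rate implied by a total growth ratio."""
    return ratio ** (1 / years) - 1

print(f"{annual_growth(1.40 / 0.61):.1%}")  # ~1.7%: measured (dollar) productivity
print(f"{annual_growth(10):.1%}")           # ~4.7%: food, in meals per worker-hour
print(f"{annual_growth(10_000):.1%}")       # ~20.2%: music, in performances per hour
```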

Moreover, should music producers be worried about losing their jobs to automation? Absolutely! People simply won’t be able to listen to much more music than they already are, so any continued increases in music productivity are going to make musicians lose jobs. And that was already allowing for music consumption to increase by a factor of over 600.

Of course, the real world has a lot more industries than this, and everything is a lot more complicated. We do actually substitute across some of those industries, unlike in this model.

But I hope I’ve gotten at least the basic point across that when technological progress makes things drastically cheaper, as it often does, simply adjusting for inflation doesn’t do the job. One dollar of music today isn’t the same thing as one dollar of music a century ago, even if you inflation-adjust their dollars to match ours. We ought to be measuring in hours of music; an hour of music is much the same thing as an hour of music a century ago.

And likewise, that secretary/weather forecaster/news reporter/accountant/musician/filmmaker in your pocket that you call a “smartphone” really ought to be counted as more than just a simple inflation adjustment on its market price. The fact that it is mind-bogglingly cheaper to get these services than it used to be is the technological progress we care about; it’s not some statistical artifact to be removed by proper measurement.

Combine that with actually measuring the hours of real, productive work, and I think you’ll find that productivity is still rising quite rapidly, and that we should still be worried about what automation is going to do to our jobs.

Why do so many people equate “natural” with “good”?

Dec 3, JDN 2458091

Try searching sometime for “all-natural” products. It doesn’t matter whether you’re looking for dog food, skin cream, clothing, or even furniture polish; you will find some out there that proudly declare themselves “all-natural”. There is a clear sense that there is something good about being natural, some kind of purity that comes from being unsullied by industrial technology. (Of course, when you buy something online that is shipped to you in a box carried on a truck because it’s “all-natural”….)

Food is the most extreme case, where it is by now almost universally agreed that processed food is inherently harmful and the source of all of our dietary problems if not all our social ills.

This is a very strange state of affairs, as there is no particular reason for “natural” and “good” to be in any way related.

First of all, I can clearly come up with examples of all four possible cases: Motherhood is natural and good, but gamma ray bursts are natural and bad. Vaccination is artificial and good, but nuclear weapons are artificial and bad.

|      | Natural          | Artificial      |
|------|------------------|-----------------|
| Good | Motherhood       | Vaccination     |
| Bad  | Gamma ray bursts | Nuclear weapons |

But even more than that, it’s difficult to even find a correlation between being natural and being good. If anything, I would expect the correlation to run the other way: Artificial things were created by humans to serve some human purpose, while natural things are simply whatever happens to exist. Most of the harmful artificial things are the result of mistakes, or unintended consequences of otherwise beneficial things—while plenty of harmful natural things are simply inherently harmful and never benefited anyone in any way. Nuclear weapons helped end World War 2. Gamma ray bursts will either hardly affect us at all, or instantly and completely annihilate our entire civilization. I guess they might also lead to some valuable discoveries in astrophysics, but if I were asked to fund a research project with the same risk-reward profile as a gamma ray burst, I would tear up the application and make sure no one else ever saw it again. The kind of irrational panic people had about the possibility of LHC black holes would be a rational panic if applied to a research project with some risk of causing gamma ray bursts.

The current obsession with “natural” products (which is really an oxymoron, if you think about it; it can’t be natural if it’s a product) seems to have arisen as its own unintended consequence of something good, namely the environmentalist movement in the 1960s and 1970s. The very real problems of pollution, natural resource depletion, extinction, global warming, desertification, and ocean acidification led people to rightly ask how the very same industrial processes that brought us our high standard of living could ultimately destroy it if we left them unchecked.

But the best solutions to these problems are themselves artificial: Solar power, nuclear energy, carbon taxes. Trying to go back to some ancient way of life where we didn’t destroy the environment is simply not a viable option at this point; even if such a way of life once existed, there’s no way it could sustain our current population, much less our current standard of living. And given the strong correlation between human migrations and extinction events of large mammals, I’m not convinced that such a way of life ever existed.

So-called “processed food” is really just industrially processed food—which is to say, food processed by the most efficient and productive technologies available. Humans have been processing food for thousands of years, and with very good reason; much of what we eat would be toxic if it weren’t threshed or boiled or fermented. The fact that there are people who complain about “processed food” but eat tofu and cheese is truly quite remarkable—think for a moment about how little resemblance Cheddar bears to the cow whence it came, or what ingenuity it must have taken people in ancient China to go all the way from soybean to silken tofu. Similarly, anyone who is frightened by “genetically modified organisms” should give some serious thought to what is involved in creating their seedless bananas.

There may be some kernel of truth in the opposition to industrially processed food, however. The problem is not that we process food, nor that we do so by industrial machines. The problem is who processes the food, and why.

Humans have been processing food for thousands of years, yes; but only for the last few hundred have corporations been doing that processing. For most of human history, you processed food to feed your family, or your village, or perhaps to trade with a few neighboring villages or sell to the nearest city. What makes tofu different from, say, Froot Loops isn’t that the former is less processed; it’s that the latter was designed and manufactured for profit.

Don’t get me wrong; corporations have made many valuable contributions to our society, including our food production, and it is largely their doing that food is now so cheap and plentiful that we could easily feed the entire world’s population. It’s just that, well, it’s also largely their doing that we don’t feed the entire world’s population, because they see no profit in doing so.

The incentives that a peasant village faces in producing its food are pretty well optimized for making the most nutritious food available with the least cost in labor and resources. When your own children and those of your friends and neighbors are going to be eating what you make, you work pretty hard to make sure that the food you make is good for them. And you don’t want to pollute the surrounding water or destroy the forest, because your village depends upon those things too.

The incentives that a corporation faces in producing food are wildly different. Nobody you know is going to be eating this stuff, most likely, and certainly not as their primary diet. You aren’t concerned about nutrition unless you think your customers are; more likely, you expect them to care about taste, so you optimize your designs to make things taste as good as possible regardless of their nutrition. You care about minimizing labor inputs only insofar as they cost you wages—from your perspective, cutting wages is as good as actually saving labor. You want to conserve only the resources that are expensive; resources that are cheap, like water and (with subsidies) corn syrup, you may as well use as much as you like. And above all, you couldn’t care less about the environmental damage you’re causing by your production, because those costs will be borne entirely by someone else, most likely the government or the citizens of whatever country you’re producing in.

Responsible consumers could reduce these effects, but only somewhat, because there is a fundamental asymmetry of information. The corporation “knows” (in the sense that each administrator in each division that needs to know, knows) what production processes it is using and what subcontractors it is hiring, and could easily figure out how much it is exploiting workers and damaging the environment; but the consumers who care about these things can only find out that information with great difficulty, if at all. Consumers who want to be responsible, but don’t have very good information, create incentives for so-called “greenwashing”: Corporations have many good profit-making reasons to say they are environmentally responsible, but far fewer reasons to actually be environmentally responsible.

And that is why you should be skeptical of “all-natural” products, especially if you are skeptical of the role of corporations in our society and our food system. “All-natural” is an adjective that has no legal meaning. The word “organic” can have a legally-defined meaning, if coupled with a certification like the USDA Organic standard. The word “non-toxic” has a legally-defined meaning—there is a long list of toxic compounds it can’t contain in more than trace amounts. There are now certifications for “carbon-neutral”. But “all-natural” offers no such protection. Basically anything can call itself “all-natural”, and if corporations expect you to be willing to pay more for such products, they have no reason not to slap it on everything. This is a problem that I think can only be solved by stringent regulation. Consumer pressure can’t work if there is no transparency in the production chain.

Even taken as something like its common meaning, “not synthetic or artificial”, there’s no reason to think that simply because something is natural, that means it is better, or even more ecologically sustainable. The ecological benefits of ancient methods of production come from the incentives of small-scale local production, not from high-tech industry being inherently more destructive. (Indeed, water pollution was considerably worse from medieval peasant villages—especially on a per-capita basis—than it is from modern water treatment systems.)