Free trade is not the problem. Billionaires are the problem.

JDN 2457468

One thing that really stuck out to me about the analysis of the outcome of the Michigan primary elections was that people kept talking about trade; when Bernie Sanders, a center-left social democrat, and Donald Trump, a far-right populist nationalist (and maybe even crypto-fascist), are the winners, something strange is at work. The one common element that the two victors seemed to have was their opposition to free trade agreements. And while people give many reasons to support Trump, many quite baffling, his staunch protectionism is one of the stronger ones. While Sanders is not as staunchly protectionist, he has definitely opposed many free-trade agreements.

Most of the American middle class feels as though they are running in place, working as hard as they can to stay where they are and never moving forward. The income statistics back them up on this; as you can see in this graph from FRED, real median household income in the US is actually lower than it was a decade ago; it never really did recover from the Second Depression:

[Figure: US real median household income (FRED)]

As I talk to people about why they think this is, one of the biggest reasons they always give is some variant of “We keep sending our jobs to China.” There is this deep-seated intuition most Americans seem to have that the degradation of the middle class is the result of trade globalization. Bernie Sanders speaks about ending this by changes in tax policy and stronger labor regulations (which actually makes some sense); Donald Trump speaks of ending this by keeping out all those dirty foreigners (which appeals to the worst in us); but ultimately, they both are working from the narrative that free trade is the problem.

But free trade is not the problem. Like almost all economists, I support free trade. Free trade agreements might be part of the problem—but that’s because a lot of free trade agreements aren’t really about free trade. Many trade agreements, especially the infamous TRIPS accord, were primarily about restricting trade—specifically on “intellectual property” goods like patented drugs and copyrighted books. They were about expanding the monopoly power of corporations over their products so that the monopoly applied not just to the United States, but indeed to the whole world. This is the opposite of free trade and everything that it stands for. The TPP was a mixed bag, with some genuinely free-trade provisions (removing tariffs on imported cars) and some awful anti-trade provisions (making patents on drugs even stronger).

Every product we buy as an import is another product we sell as an export. This is not quite true, as the US does run a trade deficit; but our trade deficit is small compared to our overall volume of trade (which is ludicrously huge). Total US exports for 2014, the last full year we’ve fully tabulated, were $3.306 trillion—roughly the entire budget of the federal government. Total US imports for 2014 were $3.578 trillion. This makes our trade deficit $272 billion, which is 7.6% of our imports, or about 1.5% of our GDP of $18.148 trillion. So to be more precise, every 100 products we buy as imports are 92 products we sell as exports.
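
If you want to check the arithmetic, here is a quick sketch using the figures just cited:

```python
# Back-of-envelope check of the 2014 US trade figures cited above.
exports = 3.306e12   # total US exports, 2014
imports = 3.578e12   # total US imports, 2014
gdp     = 18.148e12  # US GDP

deficit = imports - exports
print(f"Trade deficit: ${deficit/1e9:.0f} billion")           # ~$272 billion
print(f"Deficit as share of imports: {deficit/imports:.1%}")  # ~7.6%
print(f"Deficit as share of GDP: {deficit/gdp:.1%}")          # ~1.5%
print(f"Exports per 100 imports: {100*exports/imports:.0f}")  # ~92
```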

If we stopped making all these imports, what would happen? Well, for one thing, millions of people in China would lose their jobs and fall back into poverty. But even if you’re just looking at the US specifically, there’s no reason to think that domestic production would increase nearly as much as the volume of trade was reduced, because the whole point of trade is that it’s more efficient than domestic production alone. It is actually generous to think that by switching to autarky we’d have even half the domestic production that we’re currently buying in imports. And then of course countries we export to would retaliate, and we’d lose all those exports. The net effect of cutting ourselves off from world trade would be a loss of about $1.5 trillion in GDP—average income would drop by 8%.
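
To make the back-of-envelope reasoning explicit, here is one way to put rough numbers on it; the 50% replacement rate is the deliberately generous assumption from the paragraph above, and the dollar figures are the ones already cited:

```python
# Rough sketch of the autarky thought experiment. The 50% replacement rate is the
# deliberately generous assumption from the text, not an estimate.
imports = 3.578e12
gdp     = 18.148e12
replacement_rate = 0.5   # fraction of import value we could plausibly produce ourselves

import_side_loss = imports * (1 - replacement_rate)
print(f"Lost value from the import side alone: ${import_side_loss/1e12:.1f} trillion")  # ~$1.8 trillion
print(f"As a share of GDP: {import_side_loss/gdp:.0%}")                                 # ~10%

# The ~$1.5 trillion / ~8% figure quoted above is in this same ballpark
# (and arguably conservative, since it ignores retaliation against our exports).
print(f"Check: $1.5 trillion is {1.5e12/gdp:.0%} of GDP")                               # ~8%
```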

Now, to be fair, there are winners and losers. Offshoring of manufacturing does destroy the manufacturing jobs that are offshored; but at least when done properly, it also creates new jobs by improved efficiency. These two effects are about the same size, so the overall effect is a small decline in the overall number of US manufacturing jobs. It’s not nearly large enough to account for the collapsing middle class.

Globalization may be one contributor to rising inequality, as may changes in technology that make some workers (software programmers) wildly more productive as they make other workers (cashiers, machinists, and soon truck drivers) obsolete. But those of us who have looked carefully at the causes of rising income inequality know that this is at best a small part of what’s really going on.

The real cause is what Bernie Sanders is always on about: The 1%. Gains in income in the US for the last few decades (roughly as long as I’ve been alive) have been concentrated in a very small minority of the population—in fact, even 1% may be too coarse. Most of the income gains have actually gone to more like the top 0.5% or top 0.25%, and the most spectacular increases in income have all been concentrated in the top 0.01%.

The story that we’ve been told—I dare say sold—by the mainstream media (which is, let’s face it, owned by a handful of corporations) is that new technology has made it so that anyone who works hard (or at least anyone who is talented and works hard and gets a bit lucky) can succeed or even excel in this new tech-driven economy.

I just gave up on a piece of drivel called Bold that was seriously trying to argue that anyone with a brilliant idea can become a billionaire if they just try hard enough. (It also seemed positively gleeful about the possibility of a cyberpunk dystopia in which corporations use mass surveillance on their customers and competitors—yes, seriously, this was portrayed as a good thing.) If you must read it, please, don’t give these people any more money. Find it in a library, or find a free ebook version, or something. Instead you should give money to the people who wrote the book I switched to, Raw Deal, whose authors actually understand what’s going on here (though I maintain that the book should in fact be called Uber Capitalism).

When you look at where all the money from the tech-driven “new economy” is going, it’s not to the people who actually make things run. A typical wage for a web developer is about $35 per hour, and that’s relatively good as far as entry-level tech jobs go. A typical wage for a social media intern is about $11 per hour, which is probably less than what the minimum wage ought to be. The “sharing economy” doesn’t produce outstandingly high incomes for workers, just outstandingly high income risk, because you aren’t given a full-time salary. Uber has claimed that its drivers earn $90,000 per year, but in fact their real take-home pay is about $25 per hour. A typical employee at Airbnb makes $28 per hour. If you do manage to find full-time hours at those rates, you can make a middle-class salary; but that’s a big “if”. “Sharing economy”? Robert Reich has aptly renamed it the “share-the-scraps economy”.

So where’s all this money going? CEOs. The CEO of Uber has net wealth of $8 billion. The CEO of Airbnb has net wealth of $3.3 billion. But they are paupers compared to the true giants of the tech industry: Larry Page of Google has $36 billion. Jeff Bezos of Amazon has $49 billion. And of course who can forget Bill Gates, founder of Microsoft, and his mind-boggling $77 billion.

Can we seriously believe that this is because their ideas were so brilliant, or because they are so talented and skilled? Uber’s “brilliant” idea is just to monetize carpooling and automate linking people up. Airbnb’s “revolutionary” concept is an app to advertise your bed-and-breakfast. At least Google invented some very impressive search algorithms, Amazon created one of the most competitive product markets in the world, and Microsoft democratized business computing. Of course, none of these would be possible without the invention of the Internet by government and university projects.

As for what these CEOs do that is so skilled? At this point they basically don’t do… anything. Any real work they did was in the past, and now it’s all delegated to other people; they just rake in money because they own things. They can manage if they want, but most of them have figured out that the best CEOs do very little while CEOs who micromanage typically fail. While I can see some argument for the idea that working hard in the past could merit you owning capital in the future, I have a very hard time seeing how being very good at programming and marketing makes you deserve to have so much money you could buy a new Ferrari every day for the rest of your life.

That’s the heuristic I like to tell people, to help them see the absolutely enormous difference between a millionaire and a billionaire: A millionaire is someone who can buy a Ferrari. A billionaire is someone who can buy a new Ferrari every day for the rest of their life. A high double-digit billionaire like Bezos or Gates could buy a new Ferrari every hour for the rest of their life. (Do the math; a Ferrari is about $250,000. Remember that they get a return on capital typically between 5% and 15% per year. With $1 billion, you get $50 to $150 million just in interest and dividends every year, and $100 million is enough to buy 365 Ferraris. As long as you don’t have several very bad years in a row on your stocks, you can keep doing this more or less forever—and that’s with only $1 billion.)
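
Here is the Ferrari arithmetic spelled out, using the same rough figures:

```python
# The "Ferrari heuristic" arithmetic from the paragraph above.
ferrari_price = 250_000
returns = (0.05, 0.15)   # typical range of annual return on capital

for wealth in (1e9, 77e9):   # $1 billion, and roughly Gates-level wealth
    low, high = (wealth * r for r in returns)
    print(f"Wealth ${wealth/1e9:.0f}B: annual return ${low/1e6:,.0f}M-${high/1e6:,.0f}M, "
          f"i.e. {low/ferrari_price:,.0f}-{high/ferrari_price:,.0f} Ferraris per year")

print("Ferraris needed for one per day :", 365)
print("Ferraris needed for one per hour:", 365 * 24)   # 8,760
```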

Immigration and globalization are not what is killing the American middle class. Corporatization is what’s killing the American middle class. Specifically, the use of regulatory capture to enforce monopoly power and thereby appropriate almost all the gains of new technologies into the hands of a few dozen billionaires. Typically this is achieved through intellectual property, since corporate-owned patents basically just are monopolistic regulatory capture.

Since 1984, US real GDP per capita rose from $28,416 to $46,405 (in 2005 dollars). In that same time period, real median household income only rose from $48,664 to $53,657 (in 2014 dollars). That means that the total amount of income per person in the US rose by 49 log points (63%), while the amount of income that a typical family received only rose 10 log points (10%). If median income had risen at the same rate as per-capita GDP (as it would have, had inequality remained constant), it would now be over $79,000, instead of $53,657. That is, a typical family would have $25,000 more than they actually do. The poverty line for a family of 4 is $24,300; so if you’re a family of 4 or less, the billionaires owe you a poverty line. You should have three times the poverty line, and in fact you have only two—because they took the rest.
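
Here is the same calculation written out, using the figures just cited (log points are simply 100 times the natural log of the ratio):

```python
from math import log

# Per-capita GDP (2005 dollars) and median household income (2014 dollars), as cited above.
gdp_pc_1984, gdp_pc_now = 28_416, 46_405
median_1984, median_now = 48_664, 53_657

def log_points(old, new):
    return 100 * log(new / old)

print(f"Per-capita GDP: {log_points(gdp_pc_1984, gdp_pc_now):.0f} lp "
      f"({gdp_pc_now/gdp_pc_1984 - 1:.0%})")   # ~49 lp, ~63%
print(f"Median income:  {log_points(median_1984, median_now):.0f} lp "
      f"({median_now/median_1984 - 1:.0%})")   # ~10 lp, ~10%

counterfactual_median = median_1984 * (gdp_pc_now / gdp_pc_1984)
print(f"Median income if it had tracked per-capita GDP: ${counterfactual_median:,.0f}")  # ~$79,000
print(f"Gap versus actual median: ${counterfactual_median - median_now:,.0f}")           # ~$25,000
```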

And let me be very clear: I mean took. I mean stole, in a very real sense. This is not wealth that they created by their brilliance and hard work. This is wealth that they expropriated by exploiting people and manipulating the system in their favor. There is no way that the top 1% deserves to have as much wealth as the bottom 95% combined. They may be talented; they may work hard; but they are not that talented, and they do not work that hard. You speak of “confiscation of wealth” and you mean income taxes? No, this is the confiscation of our nation’s wealth.

Those of us who voted for Bernie Sanders voted for someone who is trying to stop it.

Those of you who voted for Donald Trump? Congratulations on supporting someone who epitomizes it.

This is why we must vote our consciences.

JDN 2457465

As I write, Bernie Sanders has just officially won the Michigan Democratic Primary. It was a close race—he was ahead by about 2% the entire time—so the delegates will be split; but he won.

This is notable because so many forecasters said it was impossible. Before the election, Nate Silver, one of the best political forecasters in the world (and he still deserves that title) had predicted a less than 1% chance Bernie Sanders could win. In fact, had he taken his models literally, he would have predicted a less than 1 in 10 million chance Bernie Sanders could win—I think it speaks highly of him that he was not willing to trust his models quite that far. I got into one of the wonkiest flamewars of all time earlier today debating whether this kind of egregious statistical error should call into question many of our standard statistical methods (I think it should; another good example is the total failure of the Black-Scholes model during the 2008 financial crisis).

Had we trusted the forecasters, held our noses and voted for the “electable” candidate, this would not have happened. But instead we voted our consciences, and the candidate we really wanted won.

It is an unfortunate truth that our system of plurality “first-past-the-post” voting does actually strongly incentivize strategic voting. Indeed, did it not, we wouldn’t need primaries in the first place. With a good range voting or even Condorcet voting system, you could basically just vote honestly among all candidates and expect a good outcome. Technically it’s still possible to vote strategically in range and Condorcet systems, but it’s not necessary the way it is in plurality vote systems.

The reason we need primaries is that plurality voting is not cloneproof; if two very similar candidates (“clones”) run that everyone likes, votes will be split between them and the two highly-favored candidates can lose to a less-favored candidate. Condorcet voting is cloneproof in most circumstances, and range voting is provably cloneproof everywhere and always. (Have I mentioned that we should really have range voting?)

Hillary Clinton and Bernie Sanders are not clones by any means, but they are considerably more similar to one another than either is to Donald Trump or Ted Cruz. If all the Republicans were to immediately drop out besides Trump while Clinton and Sanders stayed in the race, Trump could end up winning because votes were split between Clinton and Sanders. Primaries exist to prevent this outcome; either Sanders or Clinton will be in the final election, but not both (the #BernieOrBust people notwithstanding), so it will be a simple matter of whether they are preferred to Trump, which of course both Clinton and Sanders are. Don’t put too much stock in head-to-head polls this early, as they are wildly unreliable; but I think they at least give us some sense of which direction the outcome is likely to be.
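
To see how clone vote-splitting works concretely, here is a toy example; the electorate shares and ballot scores are invented purely for illustration, not taken from any poll:

```python
# Toy illustration of vote-splitting between two "clones" under plurality vs. range voting.
# The voter blocs and scores below are invented for illustration only.
electorate = [
    (30, {"A1": 10, "A2": 8,  "B": 0}),   # 30 voters who slightly prefer clone A1
    (30, {"A1": 8,  "A2": 10, "B": 0}),   # 30 voters who slightly prefer clone A2
    (40, {"A1": 0,  "A2": 0,  "B": 10}),  # 40 voters who prefer the opposing candidate B
]

# Plurality: each bloc votes only for its top choice.
plurality = {}
for voters, scores in electorate:
    top = max(scores, key=scores.get)
    plurality[top] = plurality.get(top, 0) + voters
print("Plurality totals:", plurality)   # B wins with 40 votes, despite 60% preferring a clone

# Range voting: sum everyone's scores for each candidate.
range_totals = {c: sum(voters * scores[c] for voters, scores in electorate)
                for c in ("A1", "A2", "B")}
print("Range totals:", range_totals)    # both clones beat B
```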

Ideally, we wouldn’t need to worry about that, and we could just vote our consciences all the time. But in the general election, you really do need to vote a little strategically and choose the better (or less-bad) option among the two major parties. No third-party Presidential candidate has ever gotten close to actually winning an election, and the best they ever seem to do is acting as weak clones undermining other similar candidates, as Ross Perot and Ralph Nader did. (Still, if you were thinking of not voting at all, it is obviously preferable for you to vote for a third-party candidate. If everyone who didn’t vote had instead voted for Ralph Nader, Nader would have won by a landslide—and US climate policy would be at least a decade ahead of where it is now, and we might not be already halfway to the 2 C global warming threshold.)

But in the primary? Vote your conscience. Primaries exist to make this possible, and we just showed that it can work. When people actually turn out to vote and support candidates they believe in, they win elections. If the same thing happens in several other states that just happened in Michigan, Bernie Sanders could win this election. And even if he doesn’t, he’s already gone a lot further than most of the pundits ever thought he could. (Sadly, so has Trump.)

We do not benefit from economic injustice.

JDN 2457461

Recently I think I figured out why so many middle-class White Americans express so much guilt about global injustice: A lot of people seem to think that we actually benefit from it. Thus, they feel caught between a rock and a hard place; conquering injustice would mean undermining their own already precarious standard of living, while leaving it in place is unconscionable.

The compromise, apparently, is to feel really, really guilty about it, constantly tell people to “check their privilege” in this bizarre form of trendy autoflagellation, and then… never really get around to doing anything about the injustice.

(I guess that’s better than the conservative interpretation, which seems to be that since we benefit from this, we should keep doing it, and make sure we elect big, strong leaders who will make that happen.)

So let me tell you in no uncertain words: You do not benefit from this.

If anyone does—and as I’ll get to in a moment, that is not even necessarily true—then it is the billionaires who own the multinational corporations that orchestrate these abuses. Billionaires and billionaires only stand to gain from the exploitation of workers in the US, China, and everywhere else.

How do I know this with such certainty? Allow me to explain.

First of all, it is a common perception that prices of goods would be unattainably high if they were not produced on the backs of sweatshop workers. This perception is mistaken. The primary effect of the exploitation is simply to raise the profits of the corporation; there is a secondary effect of raising the price a moderate amount; and even this would be overwhelmed by the long-run dynamic effect of the increased consumer spending if workers were paid fairly.

Let’s take an iPad, for example. The price of iPads varies around the world in a combination of purchasing power parity and outright price discrimination; but the top model almost never sells for less than $500. The raw material expenditure involved in producing one is about $370—and the labor expenditure? Just $11. Not $110; $11. If it had been $110, the price could still be kept under $500 and turn a profit; the profit would simply be much smaller. That is, even if demand were really so elastic that Americans would refuse to buy an iPad at any price above $500, Apple could still afford to raise the wages they pay (or rather, their subcontractors pay) workers by an order of magnitude. A worker who currently works 50 hours a week for $10 per day could now make $10 per hour. And the price would not have to change; Apple would simply lose profit, which is why they don’t do this. In the absence of pressure to the contrary, corporations will do whatever they can to maximize profits.

Now, in fact, the price probably would go up, because Apple fans are among the most inelastic technology consumers in the world. But suppose it went up to $600, which would mean a 1:1 absorption of these higher labor expenditures into price. Does that really sound like “Americans could never afford this”? A few people right on the edge might decide they couldn’t buy it at that price, but it wouldn’t be very many—indeed, like any well-managed monopoly, Apple knows to stop raising the price at the point where they start losing more revenue than they gain.
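
Here is the cost arithmetic spelled out, using the iPad figures cited above:

```python
# iPad cost arithmetic from the paragraphs above.
price, materials, labor = 500, 370, 11

margin = price - materials - labor
print(f"Current per-unit margin: ${margin}")   # $119

labor_10x = labor * 10   # pay workers ten times as much
print(f"Margin at the same $500 price with 10x labor cost: ${price - materials - labor_10x}")  # $20
print(f"Price needed to keep the full ${margin} margin: ${materials + labor_10x + margin}")    # ~$599
```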

Similarly, half the price of an iPhone is pure profit for Apple, and only 2% goes into labor. Once again, wages could be raised by an order of magnitude and the price would not need to change.

Apple is a particularly obvious example, but it’s quite simple to see why exploitative labor cannot be the source of improved economic efficiency. Paying workers less does not make them do better work. Treating people more harshly does not improve their performance. Quite the opposite: People work much harder when they are treated well. In addition, at the levels of income we’re talking about, small improvements in wages would result in substantial improvements in worker health, further improving performance. Finally, substitution effect dominates income effect at low incomes. At very high incomes, income effect can dominate substitution effect, so higher wages might result in less work—but it is precisely when we’re talking about poor people that it makes the least sense to say they would work less if you paid them more and treated them better.

At most, paying higher wages can redistribute existing wealth, if we assume that the total amount of wealth does not increase. So it’s theoretically possible that paying higher wages to sweatshop workers would result in them getting some of the stuff that we currently have (essentially by a price mechanism where the things we want get more expensive, but our own wages don’t go up). But in fact our wages are most likely too low as well—wages in the US became unlinked from productivity around the time of Reagan—so there’s reason to think that a more just system would improve our standard of living also. Where would all the extra wealth come from? Well, there’s an awful lot of room at the top.

The top 1% in the US own 35% of net wealth, about as much as the bottom 95%. The 400 billionaires of the Forbes list have more wealth than the entire African-American population combined. (We’re double-counting Oprah—but that’s it, she’s the only African-American billionaire in the US.) So even assuming that the total amount of wealth remains constant (which is too conservative, as I’ll get to in a moment), improving global labor standards wouldn’t need to pull any wealth from the middle class; it could get plenty just from the top 0.01%.

In surveys, most Americans are willing to pay more for goods in order to improve labor standards—and the amounts that people are willing to pay, while they may seem small (on the order of 10% to 20% more), are in fact clearly enough that they could substantially increase the wages of sweatshop workers. The biggest problem is that corporations are so good at covering their tracks that it’s difficult to know whether you are really supporting higher labor standards. The multiple layers of international subcontractors make things even more complicated; the people who directly decide the wages are not the people who ultimately profit from them, because subcontractors are competitive while the multinationals that control them are monopsonists.

But for now I’m not going to deal with the thorny question of how we can actually regulate multinational corporations to stop them from using sweatshops. Right now, I just really want to get everyone on the same page and be absolutely clear about cui bono. If there is a benefit at all, it’s not going to you and me.

Why do I keep saying “if”? As so many people will ask me: “Isn’t it obvious that if one person gets less money, someone else must get more?” If you’ve been following my blog at all, you know that the answer is no.

On a single transaction, with everything else held constant, that is true. But we’re not talking about a single transaction. We’re talking about a system of global markets. Indeed, we’re not really talking about money at all; we’re talking about wealth.

By paying their workers so little that those workers can barely survive, corporations are making it impossible for those workers to go out and buy things of their own. Since the costs of higher wages are concentrated in one corporation while the benefits of higher wages are spread out across society, there is a Tragedy of the Commons where each corporation acting in its own self-interest undermines the consumer base that would have benefited all corporations (not to mention people who don’t own corporations). It does depend on some parameters we haven’t measured very precisely, but under a wide range of plausible values, it works out that literally everyone is worse off under this system than they would have been under a system of fair wages.
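
Here is a deliberately stylized toy model of that dynamic; every number in it (ten firms, the two wage levels, the demand multiplier) is made up purely for illustration, and the only point is the strategic structure:

```python
# Toy model of the wage Tragedy of the Commons described above. All parameters are
# invented for illustration; the point is the incentive structure, not the magnitudes.
N = 10                 # identical firms, each employing one (normalized) worker
LOW, HIGH = 1.0, 2.0   # the two possible wage levels
k = 1.5                # each dollar of wages generates k dollars of total consumer demand

def profit(my_wage, others_wages):
    """Profit of one firm: an equal share of aggregate demand, minus its own wage bill."""
    total_wages = my_wage + sum(others_wages)
    revenue_share = k * total_wages / N
    return revenue_share - my_wage

all_low  = [LOW]  * (N - 1)
all_high = [HIGH] * (N - 1)

print(f"Everyone pays low:   {profit(LOW,  all_low):.2f}")   # 0.50
print(f"Everyone pays high:  {profit(HIGH, all_high):.2f}")  # 1.00 -> every firm better off
print(f"I raise wages alone: {profit(HIGH, all_low):.2f}")   # -0.35 -> unilateral raise hurts me
print(f"I cut wages alone:   {profit(LOW,  all_high):.2f}")  # 1.85 -> unilateral cut helps me
```

In this toy version, paying low wages is each firm’s individually dominant strategy even though every firm (and every worker) would be better off if they all paid high wages; that is exactly the Tragedy of the Commons structure described above.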

This is not simply theoretical. We have empirical data about what happened when companies (in the US at least) stopped using an even more extreme form of labor exploitation: slavery.

Because we were on the classical gold standard, GDP growth in the US in the 19th century was extremely erratic, jumping up and down as high as 10 log points (lp) and as low as -5 lp. But if you try to smooth out this roller-coaster business cycle, you can see that our growth rate did not appear to be slowed by the ending of slavery:

[Figure: US GDP growth rate, 19th century]

Looking at the level of real per capita GDP (on a log scale) shows a continuous growth trend as if nothing had changed at all:

[Figure: US real per capita GDP (log scale), 19th century]

In fact, if you average the growth rates (in log points, averaging makes sense) from 1800 to 1860 as antebellum and from 1865 to 1900 as postbellum, you find that the antebellum growth rate averaged 1.04 lp, while the postbellum growth rate averaged 1.77 lp. Over a period of 50 years, that’s the difference between growing by a factor of 1.7 and growing by a factor of 2.4. Of course, there were a lot of other factors involved besides the end of slavery—but at the very least it seems clear that ending slavery did not reduce economic growth, which it would have if slavery were actually an efficient economic system.
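
Here is that growth-factor arithmetic spelled out, using the two averages just computed:

```python
from math import exp

# Average growth rates in log points per year, as computed above.
antebellum_lp, postbellum_lp = 1.04, 1.77
years = 50

for label, lp in (("Antebellum", antebellum_lp), ("Postbellum", postbellum_lp)):
    factor = exp(lp / 100 * years)
    print(f"{label}: {lp} lp/year for {years} years -> growth by a factor of {factor:.1f}")
# Antebellum: ~1.7x.  Postbellum: ~2.4x.
```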

This is a different question from whether slaveowners were irrational in continuing to own slaves. Purely on the basis of individual profit, it was most likely rational to own slaves. But the broader effects on the economic system as a whole were strongly negative. I think that part of why the debate on whether slavery is economically inefficient has never been settled is a confusion between these two questions. One side says “Slavery damaged overall economic growth.” The other says “But owning slaves produced a rate of return for investors as high as manufacturing!” Yeah, those… aren’t answering the same question. They are in fact probably both true. Something can be highly profitable for individuals while still being tremendously damaging to society.

I don’t mean to imply that sweatshops are as bad as slavery; they are not. (Though there is still slavery in the world, and some sweatshops tread a fine line.) What I’m saying is that showing that sweatshops are profitable (no doubt there) or even that they are better than most of the alternatives for their workers (probably true in most cases) does not show that they are economically efficient. Sweatshops are beneficent exploitation—they make workers better off, but in an obviously unjust way. And they only make workers better off compared to the current alternatives; if they were replaced with industries paying fair wages, workers would obviously be much better off still.

And my point is, so would we. While the prices of goods would increase slightly in the short run, in the long run the increased consumer spending by people in Third World countries—which soon would cease to be Third World countries, as happened in Korea and Japan—would result in additional trade with us that would raise our standard of living, not lower it. The only people it is even plausible to think would be harmed are the billionaires who own our multinational corporations; and yet even they might stand to benefit from the improved efficiency of the global economy.

No, you do not benefit from sweatshops. So stop feeling guilty, stop worrying so much about “checking your privilege”—and let’s get out there and do something about it.

The real Existential Risk we should be concerned about

JDN 2457458

There is a rather large subgroup within the rationalist community (loosely defined because organizing freethinkers is like herding cats) that focuses on existential risks, also called global catastrophic risks. Prominent examples include Nick Bostrom and Eliezer Yudkowsky.

Their stated goal in life is to save humanity from destruction. And when you put it that way, it sounds pretty darn important. How can you disagree with wanting to save humanity from destruction?

Well, there are actually people who do (the Voluntary Human Extinction movement), but they are profoundly silly. It should be obvious to anyone with even a basic moral compass that saving humanity from destruction is a good thing.

It’s not the goal of fighting existential risk that bothers me. It’s the approach. Specifically, they almost all seem to focus on exotic existential risks, vivid and compelling existential risks that are the stuff of great science fiction stories. In particular, they have a rather odd obsession with AI.

Maybe it’s the overlap with Singularitarians, and their inability to understand that exponentials are not arbitrarily fast; if you just keep projecting the growth in computing power forward forever, surely eventually we’ll have a computer powerful enough to solve all the world’s problems, right? Well, yeah, I guess… if we can actually maintain the progress that long, which we almost certainly can’t, and if the problems turn out to be computationally tractable at all (the fastest possible computer that could fit inside the observable universe could not brute-force solve the game of Go, though a heuristic AI did just beat one of the world’s best players), and/or if we find really good heuristic methods of narrowing down the solution space… but that’s an awful lot of “if”s.

But AI isn’t what we need to worry about in terms of saving humanity from destruction. Nor is it asteroid impacts; NASA has been doing a good job watching for asteroids lately, and estimates the current risk of a serious impact (by which I mean something like a city-destroyer or global climate shock, not even a global killer) at around 1/10,000 per year. Alien invasion is right out; we can’t even find clear evidence of bacteria on Mars, and the skies are so empty of voices it has been called a paradox. Gamma ray bursts could kill us, and we aren’t sure about the probability of that (we think it’s small?), but much like brain aneurysms, there really isn’t a whole lot we can do to prevent them.

There is one thing that we really need to worry about destroying humanity, and one other thing that could potentially get close over a much longer timescale. The long-range threat is ecological collapse; as global climate change gets worse and the oceans become more acidic and the aquifers are drained, we could eventually reach the point where humanity cannot survive on Earth, or at least where our population collapses so severely that civilization as we know it is destroyed. This might not seem like such a threat, since we would see this coming decades or centuries in advance—but we are seeing it coming decades or centuries in advance, and yet we can’t seem to get the world’s policymakers to wake up and do something about it. So that’s clearly the second-most important existential risk.

But the most important existential risk, by far, no question, is nuclear weapons.

Nuclear weapons are the only foreseeable, preventable means by which humanity could be destroyed in the next twenty minutes.

Yes, that is approximately the time it takes an ICBM to hit its target after launch. There are almost 4,000 nuclear warheads currently deployed on missiles and bombers, mostly by the US and Russia. Once we include warheads held in reserve, the total number of global nuclear weapons is over 15,000. I apologize for terrifying you by saying that these weapons could be launched at a moment’s notice to wipe out most of human civilization within half an hour, followed by a global ecological collapse and fallout that would endanger the future of the entire human race—but it’s the truth. If you’re not terrified, you’re not paying attention.

I’ve intentionally linked the Union of Concerned Scientists as one of those sources. Now they are people who understand existential risk. They don’t talk about AI and asteroids and aliens (how alliterative). They talk about climate change and nuclear weapons.

We must stop this. We must get rid of these weapons. Next to that, literally nothing else matters.

“What if we’re conquered by tyrants?” It won’t matter. “What if there is a genocide?” It won’t matter. “What if there is a global economic collapse?” None of these things will matter, if the human race wipes itself out with nuclear weapons.

To speak like an economist for a moment, the utility of a global nuclear war must be set at negative infinity. Any detectable reduction in the probability of that event must be considered worth paying any cost to achieve. I don’t care if it costs $20 trillion and results in us being taken over by genocidal fascists—we are talking about the destruction of humanity. We can spend $20 trillion (actually the US as a whole does every 14 months!). We can survive genocidal fascists. We cannot survive nuclear war.

The good news is, we shouldn’t actually have to pay that sort of cost. All we have to do is dismantle our nuclear arsenal, and get other countries—particularly Russia—to dismantle theirs. In the long run, we will increase our wealth as our efforts are no longer wasted maintaining doomsday machines.

The main challenge is actually a matter of game theory. The surprisingly-sophisticated 1990s cartoon show the Animaniacs basically got it right when they sang: “We’d beat our swords into liverwurst / Down by the East Riverside / But no one wants to be the first!”

The thinking, anyway, is that this is basically a Prisoner’s Dilemma. If the US disarms and Russia doesn’t, Russia can destroy the US. Conversely, if Russia disarms and the US doesn’t, the US can destroy Russia. If neither disarms, we’re left where we are. Whether or not the other country disarms, you’re always better off not disarming. So neither country disarms.

But I contend that it is not, in fact, a Prisoner’s Dilemma. It could be a Stag Hunt; if that’s the case, then only multilateral disarmament makes sense, because the best outcome is if we both disarm, but the worst outcome is if we disarm and they don’t. Once we expect them to disarm, we have no temptation to renege on the deal ourselves; but if we think there’s a good chance they won’t, we might not want to either. Stag Hunts have two stable Nash equilibria; one is where both arm, the other where both disarm.

But in fact, I think it may be simply the trivial game.

There aren’t actually that many possible symmetric two-player nonzero-sum games (basically it’s a question of ordering 4 possibilities, and it’s symmetric, so 12 possible games), and one that we never talk about (because it’s sort of boring) is the trivial game: If I do the right thing and you do the right thing, we’re both better off. If you do the wrong thing and I do the right thing, I’m better off. If we both do the wrong thing, we’re both worse off. So, obviously, we both do the right thing, because we’d be idiots not to. Formally, we say that cooperation is a strictly dominant strategy. There’s no dilemma, no paradox; the self-interested strategy is the optimal strategy. (I find it kind of amusing that laissez-faire economics basically amounts to assuming that all real-world games are the trivial game.)
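
To make that concrete, here is a small sketch that finds the pure-strategy Nash equilibria of the three games discussed above; the payoff numbers are ordinal placeholders chosen to match the verbal descriptions, not estimates of anything real:

```python
from itertools import product

STRATEGIES = ("disarm", "arm")

def pure_nash(payoff):
    """Pure-strategy Nash equilibria of a symmetric 2x2 game.
    payoff[(mine, theirs)] is my payoff when I play `mine` and they play `theirs`."""
    equilibria = []
    for a, b in product(STRATEGIES, repeat=2):
        row_best = all(payoff[(a, b)] >= payoff[(x, b)] for x in STRATEGIES)
        col_best = all(payoff[(b, a)] >= payoff[(y, a)] for y in STRATEGIES)
        if row_best and col_best:
            equilibria.append((a, b))
    return equilibria

# Ordinal placeholder payoffs chosen to match the verbal descriptions above.
games = {
    "Prisoner's Dilemma": {("disarm", "disarm"): 3, ("disarm", "arm"): 0,
                           ("arm", "disarm"): 4,    ("arm", "arm"): 1},
    "Stag Hunt":          {("disarm", "disarm"): 4, ("disarm", "arm"): 0,
                           ("arm", "disarm"): 3,    ("arm", "arm"): 1},
    "Trivial game":       {("disarm", "disarm"): 4, ("disarm", "arm"): 2,
                           ("arm", "disarm"): 3,    ("arm", "arm"): 1},
}

for name, payoff in games.items():
    print(f"{name}: {pure_nash(payoff)}")
# Prisoner's Dilemma: only (arm, arm) is stable.
# Stag Hunt: both (disarm, disarm) and (arm, arm) are stable.
# Trivial game: disarming is strictly dominant, so (disarm, disarm) is the only equilibrium.
```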

That is, I don’t think the US would actually benefit from nuking Russia, even if we could do so without retaliation. Likewise, I don’t think Russia would actually benefit from nuking the US. One of the things we’ve discovered—the hardest way possible—through human history is that working together is often better for everyone than fighting. Russia could nuke NATO, and thereby destroy all of their largest trading partners, or they could continue trading with us. Even if they are despicable psychopaths who think nothing of committing mass murder (Putin might be, but surely there are people under his command who aren’t?), it’s simply not in Russia’s best interest to nuke the US and Europe. Likewise, it is not in our best interest to nuke them.

Nuclear war is a strange game: The only winning move is not to play.

So I say, let’s stop playing. Yes, let’s unilaterally disarm, the thing that so many policy analysts are terrified of because they’re so convinced we’re in a Prisoner’s Dilemma or a Stag Hunt. “What’s to stop them from destroying us, if we make it impossible for us to destroy them!?” I dunno, maybe basic human decency, or failing that, rationality?

Several other countries have already done this—South Africa unilaterally disarmed, and nobody nuked them. Japan refused to build nuclear weapons in the first place—and I think it says something that they’re the only people to ever have them used against them.

Our conventional military is plenty large enough to defend us against all realistic threats, and could even be repurposed to defend against nuclear threats as well, by a method I call credible targeted conventional response. Instead of building ever-larger nuclear arsenals to threaten devastation in the world’s most terrifying penis-measuring contest, you deploy covert operatives (perhaps Navy SEALS in submarines, or double agents, or these days even stealth drones) around the world, with the standing order that if they have reason to believe a country initiated a nuclear attack, they will stop at nothing to hunt down and kill the specific people responsible for that attack. Not the country they came from; not the city they live in; those specific people. If a leader is enough of a psychopath to be willing to kill 300 million people in another country, he’s probably enough of a psychopath to be willing to lose 150 million people in his own country. He likely has a secret underground bunker that would allow him to survive, at least if humanity as a whole does. So you should be threatening the one thing he does care about—himself. You make sure he knows that if he pushes that button, you’ll find that bunker, drop in from helicopters, and shoot him in the face.

The “targeted conventional response” should be clear by now—you use non-nuclear means to respond, and you target the particular leaders responsible—but let me say a bit more about the “credible” part. The threat of mutually-assured destruction is actually not a credible one. It’s not what we call in game theory a subgame perfect Nash equilibrium. If you know that Russia has launched 1500 ICBMs to destroy every city in America, you actually have no reason at all to retaliate with your own 1500 ICBMs, and the most important reason imaginable not to. Your people are dead either way; you can’t save them. You lose. The only question now is whether you risk taking the rest of humanity down with you. If you have even the most basic human decency, you will not push that button. You will not “retaliate” in useless vengeance that could wipe out human civilization. Thus, your threat is a bluff—it is not credible.

But if your response is targeted and conventional, it suddenly becomes credible. It’s exactly reversed; you now have every reason to retaliate, and no reason not to. Your covert operation teams aren’t being asked to destroy humanity; they’re being tasked with finding and executing the greatest mass murderer in history. They don’t have some horrific moral dilemma to resolve; they have the opportunity to become the world’s greatest heroes. Indeed, they’d very likely have the whole world (or what’s left of it) on their side; even the population of the attacking country would rise up in revolt and the double agents could use the revolt as cover. Now you have no reason to even hesitate; your threat is completely credible. The only question is whether you can actually pull it off, and if we committed the full resources of the United States military to preparing for this possibility, I see no reason to doubt that we could. If a US President can be assassinated by a lone maniac (and yes, that is actually what happened), then the world’s finest covert operations teams can assassinate whatever leader pushed that button.

This is a policy that works both unilaterally and multilaterally. We could even assemble an international coalition—perhaps make the UN “peacekeepers” put their money where their mouth is and train the finest special operatives in the history of the world tasked with actually keeping the peace.

Let’s not wait for someone else to save humanity from destruction. Let’s be the first.

Is America uniquely… mean?

JDN 2457454

I read this article yesterday which I found both very resonant and very disturbing: At least among First World countries, the United States really does seem uniquely, for lack of a better word, mean.

The formal psychological terminology is social dominance orientation; the political science term is authoritarianism. In economics, we notice the difference due to its effect on income inequality. But all of these concepts are capturing part of a deeper underlying reality that in the age of Trump I am finding increasingly hard to deny. The best predictor of support for Trump is authoritarianism.

Of course I’ve already talked about our enormous military budget; but then Tennessee had to make their official state rifle a .50-caliber weapon capable of destroying light tanks. There is something especially dominant, aggressive, and violent about American culture.

We are certainly not unique in the world as a whole—actually I think the amount of social dominance orientation, authoritarianism, and inequality in the US is fairly similar to the world average. We are unique in our gun ownership, but our military spending proportional to GDP is not particularly high by world standards—we’re just an extremely rich country. But in all these respects we are a unique outlier among First World countries; in many ways we resemble a rich authoritarian petrostate like Qatar rather than a European social democracy like France or the UK. (At least we’re not Saudi Arabia?)

More than other First World cultures, Americans believe in hierarchy; they believe that someone should be on top and other people should be on the bottom. More than that, they believe that people “like us” should be on top and people “not like us” should be on the bottom, however that is defined—often in terms of race or religion, but not necessarily.

Indeed, one of the things I find most baffling about this is that it is often more important to people that others be held down than that they themselves be lifted up. This is the only way I can make sense of the fact that people who have watched their wages be drained into the pockets of billionaires for a generation can think that the most important things to do right now are block out illegal immigrants and deport Muslims.

It seems to be that people become convinced that their own status, whatever it may be, is deserved: If they are rich, it is obviously because they are so brilliant and hard-working (something Trump clearly believes about himself, being a textbook example of Narcissistic Personality Disorder); if they are poor, it is obviously because they are so incompetent and lazy. Thus, being lifted up doesn’t make sense; why would you give me things I don’t deserve?

But then when they see people who are different from them, they know automatically that those people must be by definition inferior, as all who are Not of Our Tribe are by definition inferior. And therefore, any of them who are rich gained their position through corruption or injustice, and all of them who are poor deserve their fate for being so inferior. Thus, it is most vital to ensure that these Not of Our Tribe are held down from reaching high positions they so obviously do not deserve.

I’m fairly sure that most of this happens at a very deep unconscious level; it calls upon ancient evolutionary instincts to love our own tribe, to serve the alpha male, to fear and hate those of other tribes. These instincts may well have served us 200,000 years ago (then again, they may just have been the best our brains could manage at the time); but they are becoming a dangerous liability today.

As E.O. Wilson put it: “The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology.”

Yet this cannot be a complete explanation, for there is variation in these attitudes. A purely instinctual theory should say that all human cultures have this to an essentially equal degree; but I started this post by pointing out that the United States appears to have a particularly large amount relative to Europe.

So, there must be something in the cultures or institutions of different nations that makes them either enhance or suppress this instinctual tribalism. There must be something that Europe is doing right, the US is doing wrong, and Saudi Arabia is doing very, very wrong.

Well, the obvious one that sticks out at me is religion. It seems fairly obvious to me that Sweden is less religious than the US, which is less religious than Saudi Arabia.

Data does back me up on this. Religiosity isn’t easy to measure, but we have methods of doing so. If we ask people in various countries if religion is very important in their lives, the percentage of people who say yes gives us an indication of how religious that country is.

In Saudi Arabia, 93% say yes. In the United States, 65% say yes. In Sweden, only 17% say yes.

Religiosity tends to be highest in the poorest countries, but the US is an outlier, far too rich for our religion (or too religious for our wealth).

Religiosity also tends to be highest in countries with high inequality—this time, the US fits right in.

The link between religion and inequality is quite clear. It’s harder to say which way the causation runs. Perhaps high inequality makes people cling more to religion as a comfort, and getting rid of religion would only mean taking that comfort away. Or, perhaps religion actually makes people believe more in social dominance, and thus is part of what keeps that high inequality in place. It could also be a feedback loop, in which higher inequality leads to higher religiosity which leads to higher inequality.

That said, I think we actually have some evidence that causality runs from religion to inequality, rather than the other way around. The secularization of France took place around the same time as the French Revolution that overthrew the existing economic system and replaced it with one that had substantially less inequality. Iran’s government became substantially more based on religion in the latter half of the 20th century, and their inequality soared thereafter.

Above all, Donald Trump dominates the evangelical vote, which makes absolutely no sense if religion is a comfort against inequality—but perfect sense if religion solidifies the tendency of people to think in terms of hierarchy and authoritarianism.

This also makes sense in terms of the content of religion, especially Abrahamic religion; read the Bible and the Qur’an, and you will see that their primary goal seems to be to convince you that some people, namely people who believe in this book, are just better than other people, and we should be in charge because God says so. (And you wouldn’t try to argue with God, would you?) They really make no particular effort to convince you that God actually exists; they spend all their argumentative effort on what God wants you to do and who God wants you to put in charge—and for some strange reason it always seems to be the same guys who are writing down “God’s words” in the book! What a coincidence!

If religion is indeed the problem, or a large part of the problem, what can we do about it? That’s the most difficult part. We’ve been making absolutely conclusive rational arguments against religion since literally 300 years before Jesus was even born (there has never been a time in human history in which it was rational for an educated person to believe in Christianity or Islam, for the religions did not come into existence until well after the arguments to refute them were well-known!), and the empirical evidence against theism has only gotten stronger ever since; so that clearly isn’t enough.

I think what we really need to do at this point is confront the moral monopoly that religion has asserted for itself. The “Moral Majority” was neither, but its name still sort of makes sense to us because we so strongly associate being moral with being religious. We use terms like “Christian” and “generous” almost interchangeably. And whenever you get into a debate about religion, shortly after you have thoroughly demolished any shred of empirical credibility religion still had left, you can basically guarantee that the response will be: “But without God, how can you know right from wrong?”

What is perhaps most baffling about this concept of morality so commonplace in our culture is that not only is the command of a higher authority that rewards and punishes you not the highest level of moral development—it is literally the lowest. Of the six stages of moral thinking Kohlberg documented in children, the reward and punishment orientation exemplified by the Bible and the Qur’an is the very first. I think many of these people really truly haven’t gotten past level 1, which is why when you start trying to explain how you base your moral judgments on universal principles of justice and consequences (level 6) they don’t seem to have any idea what you’re talking about.

Perhaps this is a task for our education system (philosophy classes in middle school?), perhaps we need something more drastic than that, or perhaps it is enough that we keep speaking about it in public. But somehow we need to break up the monopoly that religion has on moral concepts, so that people no longer feel ashamed to say that something is morally wrong without being able to cite a particular passage from a particular book from the Iron Age. Perhaps once we can finally make people realize that morality does not depend on religion, we can finally free them from the grip of religion—and therefore from the grip of authoritarianism and social dominance.

If this is right, then the reason America is so mean is that we are so Christian—and people need to realize that this is not a paradoxical statement.

Will robots take our jobs?

JDN 2457451

I briefly discussed this topic before, but I thought it deserved a little more depth. Also, the SF author in me really likes writing this sort of post where I get to speculate about futures that are utopian, dystopian, or (most likely) somewhere in between.

The fear is quite widespread, but how realistic is it? Will robots in fact take all our jobs?

Most economists do not think so. Robert Solow famously quipped, “You can see the computer age everywhere but in the productivity statistics.” (It never quite seemed to occur to him that this might be a flaw in the way we measure productivity statistics.)

By the usual measure of labor productivity, robots do not appear to have had a large impact. Indeed, their impact appears to have been smaller than almost any other major technological innovation.

Using BLS data (which was formatted badly and thus a pain to clean, by the way—albeit not as bad as the World Bank data I used on my master’s thesis, which was awful), I made this graph of the growth rate of labor productivity as usually measured:

[Figure: US labor productivity growth rate]

The fluctuations are really jagged due to measurement errors, so I also made an annually smoothed version:

[Figure: US labor productivity growth rate, annually smoothed]

Based on this standard measure, productivity has grown more or less steadily during my lifetime, fluctuating with the business cycle around a value of about 3.5% per year (3.4 log points). If anything, the growth rate seems to be slowing down; in recent years it’s been around 1.5% (1.5 lp).

This was clearly the time during which robots became ubiquitous—autonomous robots did not emerge until the 1970s and 1980s, and robots became widespread in factories in the 1980s. Then there’s the fact that computing power has been doubling every 1.5 years during this period, which is an annual growth rate of 59% (46 lp). So why hasn’t productivity grown at anywhere near that rate?
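
For reference, here is how a 1.5-year doubling time converts into those figures:

```python
from math import log

doubling_time = 1.5   # years per doubling of computing power

annual_growth = 2 ** (1 / doubling_time) - 1
annual_lp = 100 * log(2) / doubling_time
print(f"Annual growth rate: {annual_growth:.0%}")   # ~59%
print(f"In log points: {annual_lp:.0f} lp")         # ~46 lp
```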

I think the main problem is that we’re measuring productivity all wrong. We measure it in terms of money instead of in terms of services. Yes, we try to correct for inflation; but we fail to account for the fact that computers have allowed us to perform literally billions of services every day that could not have been performed without them. You can’t adjust that away by plugging into the CPI or the GDP deflator.

Think about it: Your computer provides you the services of all the following:

  1. A decent typesetter and layout artist
  2. A truly spectacular computer (remember, that used to be a profession!)
  3. A highly skilled statistician (who takes no initiative—you must tell her what calculations to do)
  4. A painting studio
  5. A photographer
  6. A video camera operator
  7. A professional orchestra of the highest quality
  8. A decent audio recording studio
  9. Thousands of books, articles, and textbooks
  10. Ideal seats at every sports stadium in the world

And that’s not even counting things like social media and video games that can’t even be readily compared to services that were provided before computers.

If you added up the value of all of those jobs, the amount you would have had to pay in order to hire all those people to do all those things for you before computers existed, your computer easily provides you with at least $1 million in professional services every year. Put another way, your computer has taken jobs that would have provided $1 million in wages. You do the work of a hundred people with the help of your computer.

This isn’t counted in our productivity statistics precisely because it’s so efficient. If we still had to pay that much for all these services, it would be included in our GDP and then our GDP per worker would properly reflect all this work that is getting done. But then… whom would we be paying? And how would we have enough to pay that? Capitalism isn’t actually set up to handle this sort of dramatic increase in productivity—no system is, really—and thus the market price for work has almost no real relation to the productive capacity of the technology that makes that work possible.

Instead it has to do with scarcity of work—if you are the only one in the world who can do something (e.g. write Harry Potter books), you can make an awful lot of money doing that thing, while something that is far more important but can be done by almost anyone (e.g. feed babies) will pay nothing or next to nothing. At best we could say it has to do with marginal productivity, but marginal in the sense of your additional contribution over and above what everyone else could already do—not in the sense of the value actually provided by the work that you are doing. Anyone who thinks that markets automatically reward hard work or “pay you what you’re worth” clearly does not understand how markets function in the real world.

So, let’s ask again: Will robots take our jobs?

Well, they’ve already taken many jobs. There isn’t even a clear high-skill/low-skill dichotomy here; robots are just as likely to make pharmacists obsolete as they are truck drivers, just as likely to replace surgeons as they are cashiers.

Labor force participation is declining, though slowly:

[Figure: US labor force participation rate]

Yet I think this also underestimates the effect of technology. As David Graeber points out, most of the new jobs we’ve been creating seem to be, for lack of a better term, bullshit jobs—jobs that really don’t seem like they need to be done, other than to provide people with something to do so that we can justify paying them salaries.

As he puts it:

Again, an objective measure is hard to find, but one easy way to get a sense is to ask: what would happen were this entire class of people to simply disappear? Say what you like about nurses, garbage collectors, or mechanics, it’s obvious that were they to vanish in a puff of smoke, the results would be immediate and catastrophic. A world without teachers or dock-workers would soon be in trouble, and even one without science fiction writers or ska musicians would clearly be a lesser place. It’s not entirely clear how humanity would suffer were all private equity CEOs, lobbyists, PR researchers, actuaries, telemarketers, bailiffs or legal consultants to similarly vanish. (Many suspect it might markedly improve.)

The paragon of all bullshit jobs is sales. Sales is a job that simply should not exist. If something is worth buying, you should be able to present it to the market and people should choose to buy it. If there are many choices for a given product, maybe we could have some sort of independent product rating agencies that decide which ones are the best. But sales means trying to convince people to buy your product—you have an absolutely overwhelming conflict of interest that makes your statements to customers so utterly unreliable that they are literally not even information anymore. The vast majority of advertising, marketing, and sales is thus, in a fundamental sense, literally noise. Sales contributes absolutely nothing to our economy, and because we spend so much effort on it and advertising occupies so much of our time and attention, takes a great deal away. But sales is one of our most steadily growing labor sectors; once we figure out how to make things without people, we employ the people in trying to convince customers to buy the new things we’ve made. Sales is also absolutely miserable for many of the people who do it, as I know from personal experience in two different sales jobs that I had to quit before the end of the first week.

Fortunately we have not yet reached the point where sales is the fastest growing labor sector. Currently the fastest-growing jobs fall into three categories: Medicine, green energy, and of course computers—but actually mostly medicine. Yet even this is unlikely to last; one of the easiest ways to reduce medical costs would be to replace more and more medical staff with automated systems. A nursing robot may not be quite as pleasant as a real professional nurse—but if by switching to robots the hospital can save several million dollars a year, they’re quite likely to do so.

Certain tasks are harder to automate than others—particularly anything requiring creativity and originality is very hard to replace, which is why I believe that in the 2050s or so there will be a Revenge of the Humanities Majors as all the supposedly so stable and forward-thinking STEM jobs disappear and the only jobs that are left are for artists, authors, musicians, game designers and graphic designers. (Also, by that point, very likely holographic designers, VR game designers, and perhaps even neurostim artists.) Being good at math won’t mean anything anymore—frankly it probably shouldn’t right now. No human being, not even great mathematical savants, is anywhere near as good at arithmetic as a pocket calculator. There will still be a place for scientists and mathematicians, but it will be the creative aspects of science and math that persist—design of experiments, development of new theories, mathematical intuition to develop new concepts. The grunt work of cleaning data and churning through statistical models will be fully automated.

Most economists appear to believe that we will continue to find tasks for human beings to perform, and this improved productivity will simply raise our overall standard of living. As any ECON 101 textbook will tell you, “scarcity is a fundamental fact of the universe, because human needs are unlimited and resources are finite.”

In fact, neither of those claims is true. Human needs are not unlimited; indeed, on Maslow’s hierarchy of needs, First World countries have essentially reached the point where we could provide the entire population with the whole pyramid, guaranteed, all the time—if we were willing and able to fundamentally reform our economic system.

Resources are not even finite; what constitutes a “resource” depends on technology, as does how accessible or available any given source of resources will be. When we were hunter-gatherers, our only resources were the plants and animals around us. Agriculture turned seeds and arable land into a vital resource. Whale oil used to be a major scarce resource, until we found ways to use petroleum. Petroleum in turn is becoming increasingly irrelevant (and cheap) as solar and wind power mature. Soon the waters of the oceans themselves will be our power source as we refine the deuterium for fusion. Eventually we’ll find we need something for interstellar travel that we used to throw away as garbage (perhaps it will in fact be dilithium!). I suppose that if the universe is finite or if FTL is impossible, we will be bound by what is available in the cosmic horizon… but even that is not finite, as the universe continues to expand! If the universe is open (as it probably is) and one day we can harness the dark energy that seethes through the ever-expanding vacuum, our total energy consumption can grow without bound just as the universe does. Perhaps we could even stave off the heat death of the universe this way—we have, after all, billions of years to figure out how.

If scarcity were indeed this fundamental law that we could rely on, then more jobs would always continue to emerge, producing whatever is next on the list of needs ordered by marginal utility. Life would always get better, but there would always be more work to be done. But in fact, we are basically already at the point where our needs are satiated; we continue to try to make more not because there isn’t enough stuff, but because nobody will let us have it unless we do enough work to convince them that we deserve it.

We could continue on this route, making more and more bullshit jobs, pretending that this is work that needs to be done so that we don’t have to adjust our moral framework, which requires that people be constantly working for money in order to deserve to live. It’s quite likely in fact that we will, at least for the foreseeable future. In this future, robots will not take our jobs, because we’ll make up excuses to create more.

But that future is more on the dystopian end, in my opinion; there is another way, a better way, the world could be. As technology makes it ever easier to produce as much wealth as we need, we could learn to share that wealth. As robots take our jobs, we could get rid of the idea of jobs as something people must have in order to live. We could build a new economic system: One where we don’t ask ourselves whether children deserve to eat before we feed them, where we don’t expect adults to spend most of their waking hours pushing papers around in order to justify letting them have homes, where we don’t require students to take out loans they’ll need decades to repay before we teach them history and calculus.

This second vision is admittedly utopian, and perhaps in the worst way—perhaps there’s simply no way to make human beings actually live like this. Perhaps our brains, evolved for the all-too-real scarcity of the ancient savannah, simply are not plastic enough to live without that scarcity, and so create imaginary scarcity by whatever means they can. It is indeed hard to believe that we can make so fundamental a shift. But for a Homo erectus in 500,000 BP, the idea that our descendants would one day turn rocks into thinking machines that travel to other worlds would be pretty hard to believe too.

Will robots take our jobs? Let’s hope so.

Why is our diet so unhealthy?

JDN 2457447

One of the most baffling facts about the world, particularly to a development economist, is that the leading causes of death around the world broadly cluster into two categories: Obesity, in First World countries, and starvation, in Third World countries. At first glance, it seems like the rich are eating too much and there isn’t enough left for the poor.

Yet in fact it’s not quite so simple as that, because in fact obesity is most common among the poor in First World countries, and in Third World countries obesity rates are rising rapidly and co-existing with starvation. It is becoming recognized that there are many different kinds of obesity, and that a past history of starvation is actually a major risk factor in future obesity.

Indeed, the really fundamental problem is malnutrition—people are not necessarily eating too much or too little, they are eating the wrong things. So, my question is: Why?

It is widely thought that foods which are nutritious are also unappetizing, and conversely that foods which are delicious are unhealthy. There is a clear kernel of truth here, as a comparison of Brussels sprouts versus ice cream will surely indicate. But this is actually somewhat baffling. We are an evolved organism; one would think that natural selection would shape us so that we enjoy foods which are good for us and avoid foods which are bad for us.

I think it did, actually; the problem is, we have changed our situation so drastically by means of culture and technology that evolution hasn’t had time to catch up. We have evolved significantly since the dawn of civilization, but we haven’t had any time to evolve since one event in particular: The Green Revolution. Indeed, many people are still alive today who were born while the Green Revolution was still underway.

The Green Revolution is the culmination of a long process of development in agriculture and industrialization, but it would be difficult to overstate its importance as an epoch in the history of our species. We now have essentially unlimited food.

Not literally unlimited, of course; we do still need land, and water, and perhaps most notably energy (oil-driven machines are a vital part of modern agriculture). But we can produce vastly more food than was previously possible, and food supply is no longer a binding constraint on human population. Indeed, we already produce enough food to feed 10 billion people. People who say that some new agricultural technology will end world hunger don’t understand what world hunger actually is. Food production is not the problem—distribution of wealth is the problem.

I often speak about the possibility of reaching post-scarcity in the future; but we have essentially already done so in the domain of food production. If everyone ate what would be optimally healthy, and we distributed food evenly across the world, there would be plenty of food to go around and no such thing as obesity or starvation.

So why hasn’t this happened? Well, the main reason, like I said, is distribution of wealth.

But that doesn’t explain why so many people who do have access to good foods nonetheless don’t eat them.

The first thing to note is that healthy food is more expensive. It isn’t a huge difference by First World standards—about $550 per year extra per person. But when we compare the cost of a typical nutritious diet to that of the typical diet people actually eat, the nutritious diet is significantly more expensive. Worse yet, this gap appears to be growing over time.

But why is this the case? It’s actually quite baffling on its face. Nutritious foods are typically fruits and vegetables that one can simply pluck off plants. Unhealthy foods are typically complex processed foods that require machines and advanced technology. There should be “value added”, at least in the economic sense; additional labor must go in, additional profits must come out. Why is it cheaper?

In a word? Subsidies.

Somehow, huge agribusinesses have convinced governments around the world that they deserve to be paid extra money, either simply for existing or based on how much they produce. Of course, when I say “somehow”, I mean lobbying.

In the US, these subsidies overwhelmingly go toward corn, followed by cotton, followed by soybeans.

In fact, they don’t actually even go to corn as you would normally think of it, like sweet corn or corn on the cob. No, they go to feed corn—really awful stuff that includes the entire plant, is barely even recognizable as corn, and has its “quality” literally rated by scales and sieves. No living organism was ever meant to eat this stuff.

Humans don’t, of course. Cows do. But they didn’t evolve for this stuff either; they can’t digest it properly, and it’s because of this terrible food we force-feed them that they need so many antibiotics.

Thus, these corn subsidies are really primarily beef subsidies—they are a means of externalizing the cost of beef production and keeping the price of hamburgers artificially low. In all, 2/3 of US agricultural subsidies ultimately go to meat production. I haven’t been able to find any really good estimates, but as a ballpark figure it seems that meat would cost about twice as much if we didn’t subsidize it.

Fortunately a lot of these subsidies have been decreased under the Obama administration, particularly “direct payments” which are sort of like a basic income, but for agribusinesses. (That is not what basic incomes are for.) You can see the decline in US corn subsidies here.

Despite all this, however, subsidies cannot explain obesity. Removing them would have only a small effect.

An often overlooked consideration is that nutritious food can be more expensive for a family even if the actual pricetag is the same.

Why? Because kids won’t eat it.

To raise kids on a nutritious diet, you have to feed them small amounts of good food over a long period of time, until they acquire the taste. In order to do this, you need to be prepared to waste a lot of food, and that costs money. It’s cheaper to simply feed them something unhealthy, like ice cream or hot dogs, that you know they’ll eat.

And this brings me to what I think is the real ultimate cause of our awful diet: We evolved for a world of starvation, and our bodies cannot cope with abundance.

It’s important to be clear about what we mean by “unhealthy food”; people don’t enjoy consuming lead and arsenic. Rather, we enjoy consuming fat and sugar. Contrary to what fad diets will tell you, fat and sugar are not inherently bad for human health; indeed, we need a certain amount of fat and sugar in order to survive. What we call “unhealthy food” is actually food that we desperately need—in small quantities.

Under the conditions in which we evolved, fat and sugar were extremely scarce. Eating fat meant hunting a large animal, which required the cooperation of the whole tribe (a quite literal Stag Hunt) and carried risk of life and limb, not to mention simply failing and getting nothing. Eating sugar meant finding fruit trees and gathering fruit from them—and fruit trees are not all that common in nature. These foods also spoil quite quickly, so you eat them right away or not at all.

As such, we evolved to really crave these things, to ensure that we would eat them whenever they are available. Since they weren’t available all that often, this was just about right to ensure that we managed to eat enough, and rarely meant that we ate too much.


But now fast-forward to the Green Revolution. They aren’t scarce anymore. They’re everywhere. There are whole buildings we can go to with shelves upon shelves of them, which we ourselves can claim simply by swiping a little plastic card through a reader. We don’t even need to understand how that system of encrypted data networks operates, or what exactly is involved in maintaining our money supply (and most people clearly don’t); all we need to do is perform the right ritual and we will receive an essentially unlimited abundance of fat and sugar.

Even worse, this food is in processed form, so we can extract the parts that make it taste good, while separating them from the parts that actually make it nutritious. If fruits were our main source of sugar, that would be fine. But instead we get it from corn syrup and sugarcane, and even when we do get it from fruit, we extract the sugar instead of eating the whole fruit.

Natural selection had no particular reason to give us that level of discrimination; since eating apples and oranges was good for us, we evolved to like the taste of apples and oranges. There wasn’t a sufficient selection pressure to make us actually eat the whole fruit as opposed to extracting the sugar, because extracting the sugar was not an option available to our ancestors. But it is available to us now.

Vegetables, on the other hand, are also more abundant now, but were already fairly abundant. Indeed, it may be significant that we’ve had enough time to evolve since agriculture, but not enough time since fertilizer. Agriculture allowed us to make plenty of wheat and carrots; but it wasn’t until fertilizer that we could make enough hamburgers for people to eat them regularly. It could be that our hunter-gatherer ancestors actually did crave carrots in much the same way they and we crave sugar; but since agriculture we have no further reason to do so because carrots have always been widely available.

One thing I do still find a bit baffling: Why are so many green vegetables so bitter? It would be one thing if they simply weren’t as appealing as fat and sugar; but it honestly seems like a lot of green vegetables, such as broccoli, spinach, and Brussels sprouts, are really quite actively aversive, at least until you acquire the taste for them. Given how nutritious they are, it seems like there should have been a selective pressure in favor of liking the taste of green vegetables; but there wasn’t. I wonder if it’s actually coevolution—if perhaps broccoli has been evolving to not be eaten as quickly as we were evolving to eat it. This wouldn’t happen with apples and oranges, because in an evolutionary sense apples and oranges “want” to be eaten; they spread their seeds in the droppings of animals. But for any given stalk of broccoli, becoming lunch is definitely bad news.

Yet even this is pretty weird, because broccoli has definitely evolved substantially since agriculture—indeed, broccoli as we know it would not exist otherwise. Ancestral Brassica oleracea was bred to become cabbage, broccoli, cauliflower, kale, Brussels sprouts, collard greens, savoy, kohlrabi and kai-lan—and looks like none of them.

It looks like I still haven’t solved the mystery. In short, we get fat because kids hate broccoli; but why in the world do kids hate broccoli?

Why are all our Presidents war criminals?

JDN 2457443

Today I take on a topic that we really don’t like to talk about. It creates grave cognitive dissonance in our minds, forcing us to deeply question the moral character of our entire nation.

Yet it is undeniably a fact:

Most US Presidents are war criminals.

There is a long tradition of war crimes by US Presidents which includes Obama, Bush, Nixon, and above all Johnson and Truman.

Barack Obama has ordered so-called “double-tap” drone strikes, which kill medics and first responders, in express violation of the Geneva Convention.

George W. Bush orchestrated a global program of torture and indefinite detention.

Bill Clinton ordered “extraordinary renditions” in which suspects were detained without trial and transferred to other countries for interrogation, where we knew they would most likely be tortured.

I actually had trouble finding any credible accusations of war crimes by George H.W. Bush (there are definitely accusations, but none of them are credible—seriously, people are listening to Manuel Noriega?), even as Director of the CIA. He might not be a war criminal.

Ronald Reagan supported a government in Guatemala that was engaged in genocide. He knew this was happening and did not seem to care. This was only one of many tyrannical, murderous regimes supported by Reagan’s administration. Indeed, in Nicaragua v. United States the International Court of Justice ruled that the United States had violated international law through its support of the Contras under Reagan; Chomsky isn’t wrong about this one. The ICJ tries states rather than individuals, but the conduct it condemned was directed from the very top.

Jimmy Carter is a major exception to the rule; not only are there no credible accusations of war crimes against him, he has actively fought to pursue war crimes investigations against Israel and even publicly discussed the war crimes of George W. Bush.

I also wasn’t able to find any credible accusations of war crimes by Gerald Ford, so he might be clean.

But then we get to Richard Nixon, who deployed chemical weapons against civilians in Vietnam. (Calling Agent Orange “herbicide” probably shouldn’t matter morally—but it might legally, as tactical “herbicides” are not always war crimes.) But Nixon does deserve some credit for banning biological weapons.

Indeed, most of the responsibility for war crimes in Vietnam falls upon Johnson. The US deployed something very close to a “total war” strategy involving carpet bombing—more bombs were dropped by the US in Vietnam than by all countries in WW2—as well as napalm and of course chemical weapons; basically it was everything short of nuclear weapons. Kennedy and Johnson also substantially expanded the US biological weapons program.

Speaking of weapons of mass destruction, I’m not sure if it was actually illegal to expand the US nuclear arsenal as dramatically as Kennedy did, but it definitely should have been. Kennedy brought our nuclear arsenal up to its greatest peak, a horrifying 30,000 deployable warheads—more than enough to wipe out human civilization, and possibly enough to destroy the entire human race.

While Eisenhower was accused of the gravest war crime on this list, namely the genocide of over 1 million people in Germany, most historians do not consider this accusation credible. Rather, his war crimes were committed as Supreme Allied Commander in Europe, in the form of carpet bombing, especially of Dresden, which had no apparent military significance and even held a number of Allied POWs. (The firebombing of Tokyo, which killed as many as 200,000 people, was carried out in the Pacific theater, outside Eisenhower’s command.)

But then we get to Truman, the coup de grace, the only man in history to order the use of nuclear weapons in warfare. Truman gave the order to deploy nuclear weapons against civilians. He was the only person in the history of the world to ever give such an order. It wasn’t Hitler; it wasn’t Stalin. It was Harry S. Truman.

Then of course there’s Roosevelt’s internment of over 100,000 Japanese Americans. It really pales in comparison to Truman’s order to vaporize an equal number of Japanese civilians in the blink of an eye.

I think it will suffice to end the list here, though I could definitely go on. I think Truman is a really good one to focus on, for two reasons that pull quite strongly in opposite directions.

1. The use of nuclear weapons against civilians is among the gravest possible crimes. It may be second to genocide, but then again it may not, as genocide does not risk the destruction of the entire human race. If we only had the option of outlawing one thing in war, and had to allow everything else, we would have no choice but to ban the use of nuclear weapons against civilians.

2. Truman’s decision may have been justified. To this day it is still hotly debated whether the atomic bombings were justifiable; mainstream historians have taken both sides. On Debate.org, the vote is almost exactly divided—51% yes, 49% no. Many historians believe that had Truman not deployed nuclear weapons, there would have been an additional 5 million deaths as a result of the continuation of the war.

Perhaps now you can see why this matter makes me so ambivalent.

There is a part of me that wants to take an absolute hard line against war crimes, and say that they must never be tolerated, that even otherwise good Presidents like Clinton and Obama deserve to be tried at the Hague for what they have done. (Truman and Eisenhower are dead, so it’s too late for them.)

But another part of me wonders what would happen if we did this. What if the world really is so dangerous that we have no choice but to allow our leaders to commit horrible atrocities in order to defend us?

There are easy cases—Bush’s torture program didn’t even result in very much useful intelligence, so it was simply a pointless degradation of our national character. The same amount of effort invested in more humane intelligence gathering would very likely have provided more reliable information. And in any case, terrorism is such a minor threat in the scheme of things that the effort would be better spent on improving environmental regulations or auto safety.

Similarly, there’s no reason to engage in “extraordinary rendition” to a country that tortures people when you could simply conduct a legitimate trial in absentia and then arrest the convicted terrorist with special forces and imprison him in a US maximum-security prison until his execution. (Or even carry out the execution directly by the special forces; as long as the trial is legitimate, I see no problem with that.) At that point, the atrocities are being committed simply to avoid inconvenience.

But especially when we come to the WW2 examples, where the United States—nay, the world—was facing a genuine threat of being conquered by genocidal tyrants, I do begin to wonder if “victory by any means necessary” is a legitimate choice.

There is a way to cut the Gordian knot here, and say that yes, these are crimes, and should be punished; but yes, they were morally justified. Then, the moral calculus any President must undergo when contemplating such an atrocity is that he himself will be tried and executed if he goes through with it. If your situation is truly so dire that you are willing to kill 100,000 civilians, perhaps you should be willing to go down with the ship. (Roger Fisher made a similar argument when he suggested implanting the nuclear launch codes inside the body of a US military officer. If you’re not willing to tear one man apart with a knife, why are you willing to vaporize an entire city?)

But if your actions really were morally justified… what sense does it make to punish you for them? And if we hold up this threat of punishment, could it cause a President to flinch when we really need him to take such drastic action?

Another possibility to consider is that perhaps our standards for war crimes really are too strict, and some—not all, but some—of the actions I just listed are in fact morally justifiable and should be made legal under international law. Perhaps the US government is right to fight the UN convention against cluster munitions; maybe we need cluster bombs to successfully defend national security. Perhaps it should not be illegal to kill the combat medics who directly serve under the command of enemy military forces—as opposed to civilian first-responders or Medecins Sans Frontieres. Perhaps our tolerance for civilian casualties is unrealistically low, and it is impossible to fight a war in the real world without killing a large number of civilians.

Then again, perhaps not. Perhaps we are too willing to engage in war in the first place, too accustomed to deploying military force as our primary response to international conflict. Perhaps the prospect of facing a war crimes tribunal in a couple of years should be an extra layer of deterrent against any President ordering yet another war—by some estimates we have been at war 93% of the time since our founding as a nation, and it is a well-documented fact that we have by far the highest military spending in the world. Why is it that so many Americans see diplomacy as foolish, see compromise as weakness?

Perhaps the most terrifying thing is not that so many US Presidents are war criminals; it is that so many Americans don’t seem to have any problem with that.

We all know lobbying is corrupt. What can we do about it?

JDN 2457439

It’s so well-known as to almost seem cliche: Our political lobbying system is clearly corrupt.

Juan Cole, a historian and public intellectual from the University of Michigan, even went so far as to say that the United States is the most corrupt country in the world. He clearly went too far, or else left out a word; the US may well be the most corrupt country in the First World, though most rankings say Italy holds that distinction. In any case, the US is definitely not the most corrupt country in the whole world; no, that title goes to Somalia and/or North Korea.

Still, lobbying in the US is clearly a major source of corruption. Indeed, economists who study corruption often have trouble coming up with a sound definition of “corruption” that doesn’t end up including lobbying, despite the fact that lobbying is quite legal. Bribery means giving politicians money to get them to do things for you. Lobbying means giving politicians money and asking them to do things. In the letter of the law, that makes all the difference.

One thing that does make a difference is that lobbyists are required to register who they are and record their campaign contributions (unless, of course, they launder—I mean reallocate—them through a Super PAC). Many corporate lobbyists claim that it’s not that they go around trying to find politicians to influence, but rather that politicians call them up demanding money.

One of the biggest problems with lobbying is what’s called the revolving door: politicians are often re-hired as lobbyists, or lobbyists as politicians, based on the personal connections formed in the lobbying process—or possibly actual deals between lobbying companies over legislation, though if done explicitly that would be illegal. Almost 400 lobbyists working right now used to be legislators; almost 3,000 more worked as Congressional staff. Many lobbyists will do a tour as a Congressional staffer as a resume-builder, like an internship.

Studies have shown that lobbying does have an impact on policy—in terms of carving out tax loopholes it offers a huge return on investment.

Our current systems to disincentivize the revolving door are not working. While there is reason to think that establishing a “cooling-off period” of a few years could make a difference, under current policy we already have some cooling-off periods and they are clearly not enough.

So, now that we know the problem, let’s start talking about solutions.

Option 1: Ban campaign contributions

One possibility would be to eliminate campaign contributions entirely, which we could do by establishing a law that nobody can ever give money or in-kind favors to politicians ever under any circumstances. It would still be legal to meet with politicians and talk to them about issues, but if you take a Senator out for dinner we’d have to require that the Senator pay for their own food and transportation, lest wining-and-dining still be an effective means of manipulation. Then all elections would have to be completely publicly financed. This is a radical solution, but it would almost certainly work. MoveOn has a petition you can sign if you like this solution, and there’s a site called public-campaign-financing.org that will tell you how it could realistically be implemented (beware, their webmaster appears to be a time traveler from the 1990s who thinks that automatic music and tiled backgrounds constitute good web design).

There are a couple of problems with this solution, however:

First, it would be declared Unconstitutional by the Supreme Court. Under the (literally Orwellian) dicta that “corporations are people” and “money is speech” established in Citizens United vs. FEC, any restrictions on donating money to politicians constitute restrictions on free speech, and are therefore subject to strict scrutiny.

Second, there is actually a real restriction on freedom here, not because money is speech, but because money facilitates speech. Since eliminating all campaign donations would require total public financing of elections, we would need some way of deciding which candidates to finance publicly, because obviously you can’t give the same amount of money to everyone in the country or even everyone who decides to run. It simply doesn’t make sense to provide the same campaign financing for Hillary Clinton that you would for Vermin Supreme. But then, however this mechanism works, it could readily be manipulated to give even more advantages to the two major parties (not that they appear to need any more). If you’re fine with having exactly two parties to choose from, then providing funding for their, say, top 5 candidates in each primary, and then for their nominee in the general election, would work. But I for one would like to have more options than that, and that means devising some mechanism for funding third parties that have a realistic shot (like Ralph Nader or Ross Perot) but not those who don’t (like the aforementioned Vermin Supreme)—but at the same time we need to make sure that it’s not biased or self-fulfilling.

So let’s suppose we don’t eliminate campaign contributions completely. What else could we do that would curb corruption?

Option 2: Donation caps and “Democracy Credits”

I particularly like this proposal, self-titled the American Anti-Corruption Act (beware self-titled laws: USA PATRIOT ACT, anyone?), which would require full transparency—yes, even you, Super PACs—and place reasonable caps on donations so that large amounts of funds must be raised from large numbers of people rather than from a handful of people with a huge amount of money. It also includes an interesting proposal called “Democracy Credits” (again, the titles are a bit heavy-handed), which are basically an independent monetary system, used only to finance elections, and doled out exactly equally to all US citizens to spend on the candidates they like. The credits would then be exchangeable for real money, but only by the candidates themselves. This is a great idea, but sadly I doubt anyone in our political system is likely to go for it.

Actually, I would like to see these “Democracy Credits” used as votes—whoever gets the most credits wins the election, automatically. This is not quite as good as range voting, because it is not cloneproof or independent of irrelevant alternatives (briefly put, if you run two candidates that are exactly alike, their votes get split and they both lose, even if everyone likes them; and similarly, if you add a new candidate that doesn’t win you can still affect who does end up winning. Range voting is basically the only system that doesn’t have these problems, aside from a few really weird “voting” systems like “random ballot”). But still, it would be much better than our current plurality “first past the post” system, and would give third-party candidates a much fairer shot at winning elections. Indeed, it is very similar to CTT monetary voting, which is provably optimal in certain (idealized) circumstances. Of course, that’s even more of a pipe dream.
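To make the vote-splitting problem concrete, here is a minimal sketch in Python with made-up ballots (the 60/40 split and the candidate names are purely illustrative, not data): under plurality, running two nearly identical candidates splits their support and hands the win to a less-preferred rival, while under range voting the clones do no such damage.

```python
from collections import Counter

# Made-up electorate: 60 voters prefer the "A" platform, 40 prefer "B".
# The A platform is represented by two nearly identical candidates, A1 and A2.

# Range voting: every voter scores every candidate on a 0-10 scale.
range_ballots = (
    [{"A1": 9, "A2": 9, "B": 2}] * 60 +   # A-platform voters like both clones
    [{"A1": 3, "A2": 3, "B": 8}] * 40     # B-platform voters
)
totals = Counter()
for ballot in range_ballots:
    totals.update(ballot)                 # adds each candidate's score to the running total
print("Range voting winner:", totals.most_common(1)[0][0])   # A1 (tied with its clone A2)

# Plurality: each voter names only one favorite, so the A vote splits evenly.
plurality_votes = Counter(["A1"] * 30 + ["A2"] * 30 + ["B"] * 40)
print("Plurality winner:  ", plurality_votes.most_common(1)[0][0])  # B, with only 40 of 100 voters
```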

The donation caps are realistic, however; we used to have them, in fact, before Citizens United vs. FEC. Perhaps future Supreme Court decisions can overturn it and restore some semblance of balance in our campaign finance system.

Option 3: Treat campaign contributions as a conflict of interest

Jack Abramoff, a former lobbyist who was actually so corrupt he got convicted for it, has somewhat ironically made another proposal for how to reduce corruption in the lobbying system. I suppose he would know, though I must wonder what incentives he has to actually do this properly (and corrupt people are precisely the sort of people for whom you should definitely be looking at the monetary incentives).

Abramoff would essentially use Option 1, but applied only to individuals and corporations with direct interests in the laws being made. As Gawker put it, “If you get money or perks from elected officials, […] you shouldn’t be permitted to give them so much as one dollar.” The way it avoids requiring total public financing is by saying that if you don’t get perks, you can still donate.

His plan would also extend the “cooling off” idea to its logical limit—once you work for Congress, you can never work for a lobbying organization for the rest of your life, and vice versa. That seems like a lot of commitment to ask of twentysomething Congressional interns (“If you take this job, unemployed graduate, you can never ever take that other job!”), but I suppose if it works it might be worth it.

He also wants to establish term limits for Congress, which seems pretty reasonable to me. If we’re going to have term limits for the Executive branch, why not the other branches as well? They could be longer, but if term limits are necessary at all we should use them consistently.

Abramoff also says we should repeal the 17th Amendment, because apparently making our Senators less representative of the population will somehow advance democracy. Best I can figure, he’s coming from an aristocratic attitude here, this notion that we should let “better people” make the important decisions if we want better decisions. And this sounds seductive, given how many really bad decisions people make in this world. But of course which people were the better people was precisely the question representative democracy was intended to answer. At least if Senators are chosen by state legislatures there’s a sort of meta-representation going on, which is obviously better than no representation at all; but still, adding layers of non-democracy by definition cannot make a system more democratic.

But Abramoff really goes off the rails when he proposes making it a conflict of interest to legislate about your own state. “Pork-barrel spending”, as it is known, or earmarks as they are formally called, are actually a tiny portion of our budget (about 0.1% of our GDP) and really not worth worrying about. Sure, sometimes a Senator gets a bridge built that only three people will ever use, but it’s not that much money in the scheme of things, and there’s no harm in keeping our construction workers at work. The much bigger problem would be if legislators could no longer represent their own constituents in any way, thus defeating the basic purpose of having a representative legislature. (There is a thorny question about how much a Senator is responsible for their own state versus the country as a whole; but clearly their responsibility to their own state is not zero.)

Even aside from that ridiculous last part, there’s a serious problem with this idea of “no contributions from anyone who gets perks”: What constitutes a “perk”? Is a subsidy for solar power a perk for solar companies, or a smart environmental policy (can it be both?)? Does paying for road construction “affect” auto manufacturers in the relevant sense? What about policies that harm particular corporations? Since a carbon tax would hurt oil company profits, are oil companies allowed to lobby against it on the ground that it is the opposite of a “perk”?

Voting for representatives who will do things you want is kind of the point of representative democracy. (No, New York Post, it is not “pandering” to support women’s rights and interests; women are the majority of our population. If there is one group of people that our government should represent, it is women.) Taken to its logical extreme, this policy would mean that once the government ever truly acts in the public interest, all campaign contributions are henceforth forever banned. I presume that’s not actually what Abramoff intends, but he offers no clear guidelines on how we would distinguish a special interest to be excluded from donations as opposed to a legitimate public interest that creates no such exclusion. Could we flesh this out in the actual legislative process? Is this something courts would decide?

In all, I think the best reform right now is to put the cap back on campaign contributions. It’s simple to do, and we had it before and it seemed to work (mostly). We could also combine that with longer cooling-off periods, perhaps three or five years instead of only one, and potentially even term limits for Congress. These reforms would certainly not eliminate corruption in the lobbying system, but they would most likely reduce it substantially, without stepping on fundamental freedoms.

Of course I’d really like to see those “Democracy Credits”; but that’s clearly not going to happen.

Do we always want to internalize externalities?

JDN 2457437

I often talk about the importance of externalities (a full discussion is in this earlier post, and one of their important implications, the tragedy of the commons, is covered in another). Briefly, externalities are consequences of actions incurred upon people who did not perform those actions. Anything I do that affects you and that you had no say in is an externality.

Usually I’m talking about how we want to internalize externalities, meaning that we set up a system of incentives to make it so that the consequences fall upon the people who chose the actions instead of anyone else. If you pollute a river, you should have to pay to clean it up. If you assault someone, you should serve jail time as punishment. If you invent a new technology, you should be rewarded for it. These are all attempts to internalize externalities.

But today I’m going to push back a little, and ask whether we really always want to internalize externalities. If you think carefully, it’s not hard to come up with scenarios where it actually seems fairer to leave the externality in place, or perhaps reduce it somewhat without eliminating it.

For example, suppose indeed that someone invents a great new technology. To be specific, let’s think about Jonas Salk, inventing the polio vaccine. This vaccine saved the lives of thousands of people and saved millions more from pain and suffering. Its value to society is enormous, and of course Salk deserved to be rewarded for it.

But we did not actually fully internalize the externality. If we had, every family whose child was saved from polio would have had to pay Jonas Salk an amount equal to what they saved on medical treatments as a result, or even an amount somehow equal to the value of their child’s life (imagine how offended people would get if you asked that on a survey!). Those millions of people spared from suffering would each need to pay, at minimum, thousands of dollars to Jonas Salk, making him of course a billionaire.

And indeed this is more or less what would have happened, if he had been willing and able to enforce a patent on the vaccine. The inability of some to pay for the vaccine at its monopoly prices would add some deadweight loss, but even that could be removed if Salk Industries had found a way to offer targeted price vouchers that let them precisely price-discriminate so that every single customer paid exactly what they could afford to pay. If that had happened, we would have fully internalized the externality and therefore maximized economic efficiency.

But doesn’t that sound awful? Doesn’t it sound much worse than what we actually did, where Jonas Salk received a great deal of funding and support from governments and universities, and lived out his life comfortably upper-middle class as a tenured university professor?

Now, perhaps he should have been awarded a Nobel Prize—I take that back, there’s no “perhaps” about it, he definitely should have been awarded a Nobel Prize in Medicine, it’s absurd that he did not—which means that I at least do feel the externality should have been internalized a bit more than it was. But a Nobel Prize is only 10 million SEK, about $1.1 million. That’s about enough to be independently wealthy and live comfortably for the rest of your life; but it’s a small fraction of the roughly $7 billion he could have gotten if he had patented the vaccine. Yet while the possible world in which he wins a Nobel is better than this one, I’m fairly well convinced that the possible world in which he patents the vaccine and becomes a billionaire is considerably worse.

Internalizing externalities makes sense if your goal is to maximize total surplus (a concept I explain further in the linked post), but total surplus is actually a terrible measure of human welfare.

Total surplus counts every dollar of willingness-to-pay exactly the same across different people, regardless of whether they live on $400 per year or $4 billion.

It also takes no account whatsoever of how wealth is distributed. Suppose a new technology adds $10 billion in wealth to the world. As far as total surplus, it makes no difference whether that $10 billion is spread evenly across the entire planet, distributed among a city of a million people, concentrated in a small town of 2,000, or even held entirely in the bank account of a single man.

Particularly a propos of the Salk example, total surplus makes no distinction between these two scenarios: a perfectly-competitive market where everything is sold at a fair price, and a perfectly price-discriminating monopoly, where everything is sold at the very highest possible price each person would be willing to pay.

This is a perfectly-competitive market, where the benefits are shared more or less equally (in this case exactly equally, but that need not be true in real life) between sellers and buyers:

elastic_supply_competitive_labeled

This is a perfectly price-discriminating monopoly, where the benefits accrue entirely to the corporation selling the good:

elastic_supply_price_discrimination

In the former case, the company profits, consumers are better off, everyone is happy. In the latter case, the company reaps all the benefits and everyone else is left exactly as they were. In real terms those are obviously very different outcomes—the former being what we want, the latter being the cyberpunk dystopia we seem to be hurtling mercilessly toward. But in terms of total surplus, and therefore the kind of “efficiency” that is maximized by internalizing all externalities, they are indistinguishable.
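To put illustrative numbers on the comparison in the two figures above, here is a small sketch with a made-up set of buyers and sellers (all the dollar figures are invented for the example): total surplus comes out identical in both cases, but in the competitive case it is split between buyers and sellers, while under perfect price discrimination the seller captures all of it.

```python
# All numbers are invented for illustration.
buyer_wtp   = [50, 45, 40, 35, 30, 25, 20, 15]   # each buyer's willingness-to-pay ($)
seller_cost = [10, 15, 20, 25, 30, 35, 40, 45]   # each seller's cost of one unit ($)

# Competitive market: with these numbers, supply meets demand at a price of $30.
# Five units trade, and the gains are split between buyers and sellers.
price = 30
consumer_surplus = sum(w - price for w in buyer_wtp if w >= price)     # 50
producer_surplus = sum(price - c for c in seller_cost if c <= price)   # 50

# Perfectly price-discriminating monopolist: the same five efficient trades occur,
# but each buyer is charged exactly their willingness-to-pay, so the seller
# captures the entire surplus.
trades = [(w, c) for w, c in zip(buyer_wtp, seller_cost) if w >= c]
monopoly_producer_surplus = sum(w - c for w, c in trades)               # 100
monopoly_consumer_surplus = 0

print("Competitive:    CS =", consumer_surplus, "PS =", producer_surplus,
      "total =", consumer_surplus + producer_surplus)
print("Discriminating: CS =", monopoly_consumer_surplus, "PS =", monopoly_producer_surplus,
      "total =", monopoly_consumer_surplus + monopoly_producer_surplus)
```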

In fact (as I hope to publish a paper about at some point), because of the way willingness-to-pay works, total surplus weights rich people more heavily. Redistributing goods from the poor to the rich will typically increase total surplus.

Here’s an example. Suppose there is a cake, which is sufficiently delicious that it offers 2 milliQALY in utility to whoever consumes it (this is a truly fabulous cake). Suppose there are two people to whom we might give this cake: Richie, who has $10 million in annual income, and Hungry, who has only $1,000 in annual income. How much will each of them be willing to pay?

Well, assuming logarithmic utility of wealth, so that the marginal utility of money is inversely proportional to income (an assumption which, if anything, is probably biased slightly in favor of the rich), 1 milliQALY is about $1 to Hungry, so Hungry will be willing to pay $2 for the cake. To Richie, however, 1 milliQALY is about $10,000; so he will be willing to pay a whopping $20,000 for this cake.

What this means is that the cake will almost certainly be sold to Richie; and if we proposed a policy to redistribute the cake from Richie to Hungry, economists would emerge to tell us that we have just reduced total surplus by $19,998 and thereby committed a great sin against economic efficiency. They will cajole us into returning the cake to Richie and thus raising total surplus by $19,998 once more.

This despite the fact that I stipulated that the cake is worth just as much in real terms to Hungry as it is to Richie; the difference is due to their wildly differing marginal utility of wealth.

Indeed, it gets worse, because even if we suppose that the cake is worth much more in real utility to Hungry—because he is in fact hungry—it can still easily turn out that Richie’s willingness-to-pay is substantially higher. Suppose that Hungry actually gets 20 milliQALY out of eating the cake, while Richie still only gets 2 milliQALY. Hungry’s willingness-to-pay is now $20, but Richie is still going to end up with the cake.

Now, if your thought is, “Why would Richie pay $20,000, when he can go to another store and get another cake that’s just as good for $20?” Well, he wouldn’t—but in the sense we mean for total surplus, willingness-to-pay isn’t just what you’d actually be willing to pay given the actual prices of the goods; it is the absolute maximum price you’d be willing to pay to get that good under any circumstances. Formally, it is the marginal utility of the good divided by your marginal utility of wealth. In this sense the cake is “worth” $20,000 to Richie, and “worth” substantially less to Hungry—not because it’s actually worth less in real terms, but simply because Richie has so much more money.
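For concreteness, here is the cake arithmetic as a tiny sketch. It assumes, as stipulated above, logarithmic utility of wealth, normalized so that the marginal utility of money is (1 / income) QALY per dollar; the function name and numbers are only for illustration.

```python
# Assumption from the text: utility is logarithmic in income, normalized so the
# marginal utility of money is (1 / income) QALY per dollar. Then a small
# utility gain of u QALY is worth roughly u * income dollars.

def willingness_to_pay(utility_gain_qaly, annual_income):
    marginal_utility_of_wealth = 1.0 / annual_income   # QALY per dollar
    return utility_gain_qaly / marginal_utility_of_wealth

cake_utility = 0.002   # 2 milliQALY, as stipulated above

print(willingness_to_pay(cake_utility, 1_000))        # Hungry:  $2
print(willingness_to_pay(cake_utility, 10_000_000))   # Richie:  $20,000

# Even if the cake is worth ten times as much in real utility to Hungry,
# Richie's willingness-to-pay is still a thousand times higher:
print(willingness_to_pay(0.020, 1_000))               # Hungry:  $20
```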

Even economists often equate these two, implicitly assuming that we are spending our money up to the point where our marginal willingness-to-pay is the actual price we choose to pay; but in general our willingness-to-pay is higher than the price if we are willing to buy the good at all. The consumer surplus we get from goods is in fact equal to the difference between willingness-to-pay and actual price paid, summed up over all the goods we have purchased.

Internalizing all externalities would definitely maximize total surplus—but would it actually maximize happiness? Probably not.

If you asked most people what their marginal utility of wealth is, they’d have no idea what you’re talking about. But most people do actually have an intuitive sense that a dollar is worth more to a homeless person than it is to a millionaire, and that’s really all we mean by diminishing marginal utility of wealth.

I think the reason we’re uncomfortable with the idea of Jonas Salk getting $7 billion from selling the polio vaccine, rather than the same number of people getting the polio vaccine and Jonas Salk only getting the $1.1 million from a Nobel Prize, is that we intuitively grasp that after that $1.1 million makes him independently wealthy, the rest of the money is just going to sit in some stock account and continue making even more money, while if we’d let the families keep it they would have put it to much better use raising their children who are now protected from polio. We do want to reward Salk for his great accomplishment, but we don’t see why we should keep throwing cash at him when it could obviously be spent in better ways.

And indeed I think this intuition is correct; great accomplishments—which is to say, large positive externalities—should be rewarded, but not in direct proportion. Maybe there should be some threshold above which we say, “You know what? You’re rich enough now; we can stop giving you money.” Or maybe it should simply damp down very quickly, so that a contribution which is worth $10 billion to the world pays only slightly more than one that is worth $100 million, but a contribution that is worth $100,000 pays considerably more than one which is only worth $10,000.
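To give one concrete (and entirely hypothetical) shape to that idea, here is a sketch of a reward schedule that pays out contributions in full up to a threshold and only logarithmically beyond it; the threshold and scaling constant are made up, and this is meant only to illustrate the damping idea, not as a policy proposal.

```python
import math

def damped_reward(social_value, threshold=1_000_000, scale=100_000):
    """Hypothetical reward schedule (made-up constants): contributions are
    rewarded in full up to `threshold`, and only logarithmically beyond it."""
    if social_value <= threshold:
        return social_value
    return threshold + scale * math.log10(social_value / threshold)

for value in [10_000, 100_000, 100_000_000, 10_000_000_000]:
    print(f"worth ${value:>14,} -> reward ${damped_reward(value):>12,.0f}")
```

With these made-up constants, a contribution worth $100,000 is rewarded ten times as much as one worth $10,000, while a contribution worth $10 billion is rewarded only about 17% more than one worth $100 million.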

What it ultimately comes down to is that if we make all the benefits accrue to the person who created them, there aren’t any benefits for anyone else anymore. The whole point of Jonas Salk inventing the polio vaccine (or Einstein discovering relativity, or Darwin figuring out natural selection, or any great achievement) is that it will benefit the rest of humanity, preferably on to future generations. If you managed to fully internalize that externality, this would no longer be true; Salk and Einstein and Darwin would have become fabulously wealthy, and then we’d all somehow have to keep paying into their estates an amount equal to the benefits we received from their discoveries. (Every time you use your GPS, pay a royalty to the Einsteins. Every time you take a pill, pay a royalty to the Darwins.) At some point we’d probably get fed up and decide we’re no better off with them than without them—which is exactly, by construction, how we should feel if the externality were fully internalized.

Internalizing negative externalities is much less problematic—it’s your mess, clean it up. We don’t want other people to be harmed by your actions, and if we can pull that off that’s fantastic. (In reality, we usually can’t fully internalize negative externalities, but we can at least try.)

But maybe internalizing positive externalities really isn’t so great after all.