Why is our diet so unhealthy?

JDN 2457447

One of the most baffling facts about the world, particularly to a development economist, is that the leading causes of death around the world broadly cluster into two categories: Obesity, in First World countries, and starvation, in Third World countries. At first glance, it seems like the rich are eating too much and there isn’t enough left for the poor.

Yet it’s not quite so simple as that, because obesity is most common among the poor in First World countries, while in Third World countries obesity rates are rising rapidly and coexisting with starvation. It is becoming recognized that there are many different kinds of obesity, and that a past history of starvation is actually a major risk factor for future obesity.

Indeed, the really fundamental problem is malnutrition—people are not necessarily eating too much or too little, they are eating the wrong things. So, my question is: Why?

It is widely thought that foods which are nutritious are also unappetizing, and conversely that foods which are delicious are unhealthy. There is a clear kernel of truth here, as a comparison of Brussels sprouts versus ice cream will surely indicate. But this is actually somewhat baffling. We are an evolved organism; one would think that natural selection would shape us so that we enjoy foods which are good for us and avoid foods which are bad for us.

I think it did, actually; the problem is, we have changed our situation so drastically by means of culture and technology that evolution hasn’t had time to catch up. We have evolved significantly since the dawn of civilization, but we haven’t had any time to evolve since one event in particular: The Green Revolution. Indeed, many people are still alive today who were born while the Green Revolution was still underway.

The Green Revolution is the culmination of a long process of development in agriculture and industrialization, but it would be difficult to overstate its importance as an epoch in the history of our species. We now have essentially unlimited food.

Not literally unlimited, of course; we do still need land, and water, and perhaps most notably energy (oil-driven machines are a vital part of modern agriculture). But we can produce vastly more food than was previously possible, and food supply is no longer a binding constraint on human population. Indeed, we already produce enough food to feed 10 billion people. People who say that some new agricultural technology will end world hunger don’t understand what world hunger actually is. Food production is not the problem—distribution of wealth is the problem.

I often speak about the possibility of reaching post-scarcity in the future; but we have essentially already done so in the domain of food production. If everyone ate what would be optimally healthy, and we distributed food evenly across the world, there would be plenty of food to go around and no such thing as obesity or starvation.

So why hasn’t this happened? Well, the main reason, like I said, is distribution of wealth.

But that doesn’t explain why so many people who do have access to good foods nonetheless don’t eat them.

The first thing to note is that healthy food is more expensive. The difference isn’t huge by First World standards—about $550 extra per person per year—but a typical nutritious diet costs significantly more than a typical diet. Worse yet, this gap appears to be growing over time.

But why is this the case? It’s actually quite baffling on its face. Nutritious foods are typically fruits and vegetables that one can simply pluck off plants. Unhealthy foods are typically complex processed foods that require machines and advanced technology. There should be “value added”, at least in the economic sense; additional labor must go in, additional profits must come out. Why is it cheaper?

In a word? Subsidies.

Somehow, huge agribusinesses have convinced governments around the world that they deserve to be paid extra money, either simply for existing or based on how much they produce. When I say “somehow”, I of course mean lobbying.

In the US, these subsidies overwhelmingly go toward corn, followed by cotton, followed by soybeans.

In fact, they don’t actually even go to corn as you would normally think of it, like sweet corn or corn on the cob. No, they go to feed corn—really awful stuff that includes the entire plant, is barely even recognizable as corn, and has its “quality” literally rated by scales and sieves. No living organism was ever meant to eat this stuff.

Humans don’t, of course. Cows do. But they didn’t evolve for this stuff either; they can’t digest it properly, and it’s because of this terrible food we force-feed them that they need so many antibiotics.

Thus, these corn subsidies are really primarily beef subsidies—they are a means of externalizing the cost of beef production and keeping the price of hamburgers artificially low. In all, 2/3 of US agricultural subsidies ultimately go to meat production. I haven’t been able to find any really good estimates, but as a ballpark figure it seems that meat would cost about twice as much if we didn’t subsidize it.

Fortunately a lot of these subsidies have been decreased under the Obama administration, particularly “direct payments” which are sort of like a basic income, but for agribusinesses. (That is not what basic incomes are for.) You can see the decline in US corn subsidies here.

Despite all this, however, subsidies cannot explain obesity. Removing them would have only a small effect.

An often overlooked consideration is that nutritious food can be more expensive for a family even if the actual price tag is the same.

Why? Because kids won’t eat it.

To raise kids on a nutritious diet, you have to feed them small amounts of good food over a long period of time, until they acquire the taste. In order to do this, you need to be prepared to waste a lot of food, and that costs money. It’s cheaper to simply feed them something unhealthy, like ice cream or hot dogs, that you know they’ll eat.

And this brings me to what I think is the real ultimate cause of our awful diet: We evolved for a world of starvation, and our bodies cannot cope with abundance.

It’s important to be clear about what we mean by “unhealthy food”; people don’t enjoy consuming lead and arsenic. Rather, we enjoy consuming fat and sugar. Contrary to what fad diets will tell you, fat and sugar are not inherently bad for human health; indeed, we need a certain amount of fat and sugar in order to survive. What we call “unhealthy food” is actually food that we desperately need—in small quantities.

Under the conditions in which we evolved, fat and sugar were extremely scarce. Eating fat meant hunting a large animal, which required the cooperation of the whole tribe (a quite literal Stag Hunt) and carried risk of life and limb, not to mention the risk of simply failing and getting nothing. Eating sugar meant finding fruit trees and gathering fruit from them—and fruit trees are not all that common in nature. These foods also spoil quite quickly, so you had to eat them right away or not at all.

As such, we evolved to really crave these things, to ensure that we would eat them whenever they were available. Since they weren’t available all that often, this was just about right to ensure that we managed to eat enough, and it rarely meant that we ate too much.


But now fast-forward to the Green Revolution. They aren’t scarce anymore. They’re everywhere. There are whole buildings we can go to with shelves upon shelves of them, which we ourselves can claim simply by swiping a little plastic card through a reader. We don’t even need to understand how that system of encrypted data networks operates, or what exactly is involved in maintaining our money supply (and most people clearly don’t); all we need to do is perform the right ritual and we will receive an essentially unlimited abundance of fat and sugar.

Even worse, this food is in processed form, so we can extract the parts that make it taste good, while separating them from the parts that actually make it nutritious. If fruits were our main source of sugar, that would be fine. But instead we get it from corn syrup and sugarcane, and even when we do get it from fruit, we extract the sugar instead of eating the whole fruit.

Natural selection had no particular reason to give us that level of discrimination; since eating apples and oranges was good for us, we evolved to like the taste of apples and oranges. There wasn’t a sufficient selection pressure to make us actually eat the whole fruit as opposed to extracting the sugar, because extracting the sugar was not an option available to our ancestors. But it is available to us now.

Vegetables, on the other hand, are also more abundant now, but were already fairly abundant. Indeed, it may be significant that we’ve had enough time to evolve since agriculture, but not enough time since fertilizer. Agriculture allowed us to make plenty of wheat and carrots; but it wasn’t until fertilizer that we could make enough hamburgers for people to eat them regularly. It could be that our hunter-gatherer ancestors actually did crave carrots in much the same way they and we crave sugar; but since agriculture there has been no further pressure to do so, because carrots have been widely available ever since.

One thing I do still find a bit baffling: Why are so many green vegetables so bitter? It would be one thing if they simply weren’t as appealing as fat and sugar; but it honestly seems like a lot of green vegetables, such as broccoli, spinach, and Brussels sprouts, are really quite actively aversive, at least until you acquire the taste for them. Given how nutritious they are, it seems like there should have been a selective pressure in favor of liking the taste of green vegetables; but there wasn’t. I wonder if it’s actually coevolution—if perhaps broccoli has been evolving to not be eaten as quickly as we were evolving to eat it. This wouldn’t happen with apples and oranges, because in an evolutionary sense apples and oranges “want” to be eaten; they spread their seeds in the droppings of animals. But for any given stalk of broccoli, becoming lunch is definitely bad news.

Yet even this is pretty weird, because broccoli has definitely evolved substantially since agriculture—indeed, broccoli as we know it would not exist otherwise. Ancestral Brassica oleracea was bred to become cabbage, broccoli, cauliflower, kale, Brussels sprouts, collard greens, savoy, kohlrabi and kai-lan—and looks like none of them.

It looks like I still haven’t solved the mystery. In short, we get fat because kids hate broccoli; but why in the world do kids hate broccoli?

Why are all our Presidents war criminals?

JDN 2457443

Today I take on a topic that we really don’t like to talk about. It creates grave cognitive dissonance in our minds, forcing us to deeply question the moral character of our entire nation.

Yet it is undeniably a fact:

Most US Presidents are war criminals.

There is a long tradition of war crimes by US Presidents which includes Obama, Bush, Nixon, and above all Johnson and Truman.

Barack Obama has ordered so-called “double-tap” drone strikes, which kill medics and first responders, in express violation of the Geneva Convention.

George W. Bush orchestrated a global program of torture and indefinite detention.

Bill Clinton ordered “extraordinary renditions” in which suspects were detained without trial and transferred to other countries for interrogation, where we knew they would most likely be tortured.

I actually had trouble finding any credible accusations of war crimes by George H.W. Bush (there are definitely accusations, but none of them are credible—seriously, people are listening to Manuel Noriega?), even as Director of the CIA. He might not be a war criminal.

Ronald Reagan supported a government in Guatemala that was engaged in genocide. He knew this was happening and did not seem to care. This was only one of many tyrannical, murderous regimes supported by Reagan’s administration. Indeed, in Nicaragua v. United States, the International Court of Justice ruled that the US under Reagan had violated international law through its support of the Contras. Chomsky isn’t wrong about this one: the ICJ tries states rather than individuals, but Reagan’s administration was, in effect, judged guilty of war crimes.

Jimmy Carter is a major exception to the rule; not only are there no credible accusations of war crimes against him, he has actively fought to pursue war crimes investigations against Israel and even publicly discussed the war crimes of George W. Bush.

I also wasn’t able to find any credible accusations of war crimes by Gerald Ford, so he might be clean.

But then we get to Richard Nixon, who deployed chemical weapons against civilians in Vietnam. (Calling Agent Orange “herbicide” probably shouldn’t matter morally—but it might legally, as tactical “herbicides” are not always war crimes.) But Nixon does deserve some credit for banning biological weapons.

Indeed, most of the responsibility for war crimes in Vietnam falls upon Johnson. The US deployed something very close to a “total war” strategy involving carpet bombing—more bombs were dropped by the US in Vietnam than by all countries in WW2—as well as napalm and of course chemical weapons; basically it was everything short of nuclear weapons. Kennedy and Johnson also substantially expanded the US biological weapons program.

Speaking of weapons of mass destruction, I’m not sure if it was actually illegal to expand the US nuclear arsenal as dramatically as Kennedy did, but it definitely should have been. Kennedy brought our nuclear arsenal up to its greatest peak, a horrifying 30,000 deployable warheads—more than enough to wipe out human civilization, and possibly enough to destroy the entire human race.

While Eisenhower was accused of the gravest war crime on this list, namely the genocide of over 1 million people in Germany after the war, most historians do not consider this accusation credible. Rather, the war crimes attributable to him were committed as Supreme Allied Commander in Europe, in the form of carpet bombing, especially of Dresden, which had no apparent military significance and even held a number of Allied POWs. (The firebombing of Tokyo, which killed some 100,000 people, was ordered by the Pacific commands rather than by Eisenhower.)

But then we get to Truman, the coup de grâce, the only man in history to order the use of nuclear weapons in warfare. Truman gave the order to deploy nuclear weapons against civilians. He was the only person in the history of the world ever to give such an order. It wasn’t Hitler; it wasn’t Stalin. It was Harry S. Truman.

Then of course there’s Roosevelt’s internment of over 100,000 Japanese Americans. It really pales in comparison to Truman’s order to vaporize an equal number of Japanese civilians in the blink of an eye.

I think it will suffice to end the list here, though I could definitely go on. I think Truman is a really good one to focus on, for two reasons that pull quite strongly in opposite directions.

1. The use of nuclear weapons against civilians is among the gravest possible crimes. It may be second to genocide, but then again it may not, as genocide does not risk the destruction of the entire human race. If we only had the option of outlawing one thing in war, and had to allow everything else, we would have no choice but to ban the use of nuclear weapons against civilians.

2. Truman’s decision may have been justified. To this day it is still hotly debated whether the atomic bombings were justifiable; mainstream historians have taken both sides. On Debate.org, the vote is almost exactly divided—51% yes, 49% no. Many historians believe that had Truman not deployed nuclear weapons, there would have been an additional 5 million deaths as a result of the continuation of the war.

Perhaps now you can see why this matter makes me so ambivalent.

There is a part of me that wants to take an absolute hard line against war crimes, and say that they must never be tolerated, that even otherwise good Presidents like Clinton and Obama deserve to be tried at The Hague for what they have done. (Truman and Eisenhower are dead, so it’s too late for them.)

But another part of me wonders what would happen if we did this. What if the world really is so dangerous that we have no choice but to allow our leaders to commit horrible atrocities in order to defend us?

There are easy cases—Bush’s torture program didn’t even result in very much useful intelligence, so it was simply a pointless degradation of our national character. The same amount of effort invested in more humane intelligence gathering would very likely have provided more reliable information. And in any case, terrorism is such a minor threat in the scheme of things that the effort would be better spent on improving environmental regulations or auto safety.

Similarly, there’s no reason to engage in “extraordinary rendition” to a country that tortures people when you could simply conduct a legitimate trial in absentia and then arrest the convicted terrorist with special forces and imprison him in a US maximum-security prison until his execution. (Or even carry out the execution directly by the special forces; as long as the trial is legitimate, I see no problem with that.) At that point, the atrocities are being committed simply to avoid inconvenience.

But especially when we come to the WW2 examples, where the United States—nay, the world—was facing a genuine threat of being conquered by genocidal tyrants, I do begin to wonder if “victory by any means necessary” is a legitimate choice.

There is a way to cut the Gordian knot here, and say that yes, these are crimes, and should be punished; but yes, they were morally justified. Then, the moral calculus any President must undergo when contemplating such an atrocity is that he himself will be tried and executed if he goes through with it. If your situation is truly so dire that you are willing to kill 100,000 civilians, perhaps you should be willing to go down with the ship. (Roger Fisher made a similar argument when he suggested implanting the nuclear launch codes inside the body of a US military officer. If you’re not willing to tear one man apart with a knife, why are you willing to vaporize an entire city?)

But if your actions really were morally justified… what sense does it make to punish you for them? And if we hold up this threat of punishment, could it cause a President to flinch when we really need him to take such drastic action?

Another possibility to consider is that perhaps our standards for war crimes really are too strict, and some—not all, but some—of the actions I just listed are in fact morally justifiable and should be made legal under international law. Perhaps the US government is right to fight the UN convention against cluster munitions; maybe we need cluster bombs to successfully defend national security. Perhaps it should not be illegal to kill the combat medics who directly serve under the command of enemy military forces—as opposed to civilian first-responders or Médecins Sans Frontières. Perhaps our tolerance for civilian casualties is unrealistically low, and it is impossible to fight a war in the real world without killing a large number of civilians.

Then again, perhaps not. Perhaps we are too willing to engage in war in the first place, too accustomed to deploying military force as our primary response to international conflict. Perhaps the prospect of facing a war crimes tribunal in a couple of years should be an extra layer of deterrent against any President ordering yet another war—by some estimates we have been at war 93% of the time since our founding as a nation, and it is a well-documented fact that we have by far the highest military spending in the world. Why is it that so many Americans see diplomacy as foolish, see compromise as weakness?

Perhaps the most terrifying thing is not that so many US Presidents are war criminals; it is that so many Americans don’t seem to have any problem with that.

We all know lobbying is corrupt. What can we do about it?

JDN 2457439

It’s so well-known as to almost seem cliche: Our political lobbying system is clearly corrupt.

Juan Cole, a historian and public intellectual from the University of Michigan, even went so far as to say that the United States is the most corrupt country in the world. He clearly went too far, or else left out a word; the US may well be the most corrupt country in the First World, though most rankings give that title to Italy. In any case, the US is definitely not the most corrupt country in the whole world; no, that title goes to Somalia and/or North Korea.

Still, lobbying in the US is clearly a major source of corruption. Indeed, economists who study corruption often have trouble coming up with a sound definition of “corruption” that doesn’t end up including lobbying, despite the fact that lobbying is quite legal. Bribery means giving politicians money to get them to do things for you. Lobbying means giving politicians money and asking them to do things. In the letter of the law, that makes all the difference.

One thing that does make a difference is that lobbyists are required to register who they are and record their campaign contributions (unless of course they launder—I mean reallocate—them through a Super PAC). Many corporate lobbyists claim that it’s not that they go around trying to find politicians to influence, but rather that politicians call them up demanding money.

One of the biggest problems with lobbying is what’s called the revolving door: politicians are often re-hired as lobbyists, or lobbyists as politicians, based on the personal connections formed in the lobbying process—or possibly actual deals between lobbying companies over legislation, though if done explicitly that would be illegal. Almost 400 lobbyists working right now used to be legislators; almost 3,000 more worked as Congressional staff. Many lobbyists will do a tour as a Congressional staffer as a resume-builder, like an internship.

Studies have shown that lobbying does have an impact on policy—in terms of carving out tax loopholes it offers a huge return on investment.

Our current systems to disincentivize the revolving door are not working. While there is reason to think that establishing a “cooling-off period” of a few years could make a difference, under current policy we already have some cooling-off periods and they are clearly not enough.

So, now that we know the problem, let’s start talking about solutions.

Option 1: Ban campaign contributions

One possibility would be to eliminate campaign contributions entirely, which we could do by establishing a law that nobody can ever give money or in-kind favors to politicians ever under any circumstances. It would still be legal to meet with politicians and talk to them about issues, but if you take a Senator out for dinner we’d have to require that the Senator pay for their own food and transportation, lest wining-and-dining still be an effective means of manipulation. Then all elections would have to be completely publicly financed. This is a radical solution, but it would almost certainly work. MoveOn has a petition you can sign if you like this solution, and there’s a site called public-campaign-financing.org that will tell you how it could realistically be implemented (beware, their webmaster appears to be a time traveler from the 1990s who thinks that automatic music and tiled backgrounds constitute good web design).

There are a couple of problems with this solution, however:

First, it would be declared Unconstitutional by the Supreme Court. Under the (literally Orwellian) dicta that “corporations are people” and “money is speech” established in Citizens United vs. FEC, any restrictions on donating money to politicians constitute restrictions on free speech, and are therefore subject to strict scrutiny.

Second, there is actually a real restriction on freedom here, not because money is speech, but because money facilitates speech. Since eliminating all campaign donations would require total public financing of elections, we would need some way of deciding which candidates to finance publicly, because obviously you can’t give the same amount of money to everyone in the country or even everyone who decides to run. It simply doesn’t make sense to provide the same campaign financing for Hillary Clinton that you would for Vermin Supreme. But then, however this mechanism works, it could readily be manipulated to give even more advantages to the two major parties (not that they appear to need any more). If you’re fine with having exactly two parties to choose from, then providing funding for their, say, top 5 candidates in each primary, and then for their nominee in the general election, would work. But I for one would like to have more options than that, and that means devising some mechanism for funding third parties that have a realistic shot (like Ralph Nader or Ross Perot) but not those who don’t (like the aforementioned Vermin Supreme)—but at the same time we need to make sure that it’s not biased or self-fulfilling.

So let’s suppose we don’t eliminate campaign contributions completely. What else could we do that would curb corruption?

Option 2: Donation caps and “Democracy Credits”

I particularly like this proposal, self-titled the American Anti-Corruption Act (beware self-titled laws: USA PATRIOT ACT, anyone?), which would require full transparency—yes, even you, Super PACs—and place reasonable caps on donations so that large amounts of funds must be raised from large numbers of people rather than from a handful of people with a huge amount of money. It also includes an interesting proposal called “Democracy Credits” (again, the titles are a bit heavy-handed), which are basically an independent monetary system, used only to finance elections, and doled out exactly equally to all US citizens to spend on the candidates they like. The credits would then be exchangeable for real money, but only by the candidates themselves. This is a great idea, but sadly I doubt anyone in our political system is likely to go for it.
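
For concreteness, here is a minimal sketch in Python of how such a “Democracy Credits” ledger might work, following the rules described above: every citizen gets the same allotment, credits can only be given to candidates, and only candidates can redeem them for real campaign money. The class, names, allotment size, and exchange rate are all made up for illustration; none of this comes from the actual proposal text.

```python
# Minimal illustrative sketch of a "Democracy Credits" system, as described above:
# credits are doled out equally to citizens, may only be given to candidates,
# and may only be redeemed for dollars by the candidates themselves.
# The allotment size and exchange rate are made-up placeholders.

class DemocracyCredits:
    def __init__(self, citizens, candidates, allotment=50, dollars_per_credit=1.0):
        self.balances = {c: allotment for c in citizens}      # every citizen gets the same allotment
        self.candidate_credits = {c: 0 for c in candidates}   # credits received by each candidate
        self.dollars_per_credit = dollars_per_credit

    def contribute(self, citizen, candidate, amount):
        """A citizen transfers credits (not dollars) to a candidate."""
        if amount <= 0 or self.balances.get(citizen, 0) < amount:
            raise ValueError("invalid or insufficient credits")
        if candidate not in self.candidate_credits:
            raise ValueError("unknown candidate")
        self.balances[citizen] -= amount
        self.candidate_credits[candidate] += amount

    def redeem(self, candidate):
        """Only candidates can convert credits into real campaign dollars."""
        credits = self.candidate_credits[candidate]
        self.candidate_credits[candidate] = 0
        return credits * self.dollars_per_credit

# Example: three citizens, two candidates.
system = DemocracyCredits(["Alice", "Bob", "Carol"], ["Candidate X", "Candidate Y"])
system.contribute("Alice", "Candidate X", 50)
system.contribute("Bob", "Candidate X", 30)
system.contribute("Carol", "Candidate Y", 50)
print(system.redeem("Candidate X"))  # 80.0 dollars
```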

Actually, I would like to see these “Democracy Credits” used as votes—whoever gets the most credits wins the election, automatically. This is not quite as good as range voting, because it is not cloneproof or independent of irrelevant alternatives (briefly put, if you run two candidates that are exactly alike, their votes get split and they both lose, even if everyone likes them; and similarly, if you add a new candidate that doesn’t win you can still affect who does end up winning. Range voting is basically the only system that doesn’t have these problems, aside from a few really weird “voting” systems like “random ballot”). But still, it would be much better than our current plurality “first past the post” system, and would give third-party candidates a much fairer shot at winning elections. Indeed, it is very similar to CTT monetary voting, which is provably optimal in certain (idealized) circumstances. Of course, that’s even more of a pipe dream.
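
To illustrate why plurality voting isn’t cloneproof while range voting is, here is a toy simulation; the candidates, voter blocs, and scores are all invented for the example.

```python
from collections import Counter

# Toy electorate: each voter scores every candidate from 0 to 10.
# "Prog A" and "Prog B" are near-identical clones; "Status Quo" is the third option.
# All numbers are illustrative, not real polling data.
voters = (
    [{"Prog A": 9, "Prog B": 8, "Status Quo": 2}] * 30 +
    [{"Prog A": 8, "Prog B": 9, "Status Quo": 2}] * 30 +
    [{"Prog A": 1, "Prog B": 1, "Status Quo": 9}] * 40
)

# Plurality: each voter names only their single favorite.
plurality = Counter(max(scores, key=scores.get) for scores in voters)
print("Plurality:", plurality)
# The 60% who prefer a progressive split their votes 30/30,
# so "Status Quo" wins with 40 votes despite being the majority's last choice.

# Range voting: sum the scores each candidate receives.
range_totals = Counter()
for scores in voters:
    range_totals.update(scores)
print("Range voting:", range_totals)
# Both clones score 550 versus 480 for "Status Quo": cloning doesn't hurt them.
```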

The donation caps are realistic, however; we used to have them, in fact, before Citizens United vs. FEC. Perhaps future Supreme Court decisions can overturn it and restore some semblance of balance in our campaign finance system.

Option 3: Treat campaign contributions as a conflict of interest

Jack Abramoff, a former lobbyist who was actually so corrupt he got convicted for it, has somewhat ironically made another proposal for how to reduce corruption in the lobbying system. I suppose he would know, though I must wonder what incentive he has to actually do this properly (and corrupt people are precisely the sort of people whose monetary incentives you should definitely be examining).

Abramoff would essentially use Option 1, but applied only to individuals and corporations with direct interests in the laws being made. As Gawker put it, “If you get money or perks from elected officials, […] you shouldn’t be permitted to give them so much as one dollar.” The way it avoids requiring total public financing is by saying that if you don’t get perks, you can still donate.

His plan would also extend the “cooling off” idea to its logical limit—once you work for Congress, you can never work for a lobbying organization for the rest of your life, and vice versa. That seems like a lot of commitment to ask of twentysomething Congressional interns (“If you take this job, unemployed graduate, you can never ever take that other job!”), but I suppose if it works it might be worth it.

He also wants to establish term limits for Congress, which seems pretty reasonable to me. If we’re going to have term limits for the Executive branch, why not the other branches as well? They could be longer, but if term limits are necessary at all we should use them consistently.

Abramoff also says we should repeal the 17th Amendment, because apparently making our Senators less representative of the population will somehow advance democracy. Best I can figure, he’s coming from an aristocratic attitude here, this notion that we should let “better people” make the important decisions if we want better decisions. And this sounds seductive, given how many really bad decisions people make in this world. But of course which people were the better people was precisely the question representative democracy was intended to answer. At least if Senators are chosen by state legislatures there’s a sort of meta-representation going on, which is obviously better than no representation at all; but still, adding layers of non-democracy by definition cannot make a system more democratic.

But Abramoff really goes off the rails when he proposes making it a conflict of interest to legislate about your own state. “Pork-barrel spending”, as it is known—or earmarks, as they are formally called—is actually a tiny portion of our budget (about 0.1% of our GDP) and really not worth worrying about. Sure, sometimes a Senator gets a bridge built that only three people will ever use, but it’s not that much money in the scheme of things, and there’s no harm in keeping our construction workers at work. The much bigger problem would be if legislators could no longer represent their own constituents in any way, thus defeating the basic purpose of having a representative legislature. (There is a thorny question about how much a Senator is responsible for their own state versus the country as a whole; but clearly their responsibility to their own state is not zero.)

Even aside from that ridiculous last part, there’s a serious problem with this idea of “no contributions from anyone who gets perks”: What constitutes a “perk”? Is a subsidy for solar power a perk for solar companies, or a smart environmental policy (can it be both?)? Does paying for road construction “affect” auto manufacturers in the relevant sense? What about policies that harm particular corporations? Since a carbon tax would hurt oil company profits, are oil companies allowed to lobby against it on the ground that it is the opposite of a “perk”?

Voting for representatives who will do things you want is kind of the point of representative democracy. (No, New York Post, it is not “pandering” to support women’s rights and interests—women are the majority of our population. If there is one group of people that our government should represent, it is women.) Taken to its logical extreme, this policy would mean that once the government ever truly acts in the public interest, all campaign contributions are henceforth forever banned. I presume that’s not actually what Abramoff intends, but he offers no clear guidelines on how we would distinguish a special interest to be excluded from donations from a legitimate public interest that creates no such exclusion. Could we flesh this out in the actual legislative process? Is this something courts would decide?

In all, I think the best reform right now is to put the cap back on campaign contributions. It’s simple to do, and we had it before and it seemed to work (mostly). We could also combine that with longer cooling-off periods, perhaps three or five years instead of only one, and potentially even term limits for Congress. These reforms would certainly not eliminate corruption in the lobbying system, but they would most likely reduce it substantially, without stepping on fundamental freedoms.

Of course I’d really like to see those “Democracy Credits”; but that’s clearly not going to happen.

Do we always want to internalize externalities?

JDN 2457437

I often talk about the importance of externalities—there is a full discussion in this earlier post, and a discussion of one of their important implications, the tragedy of the commons, in another. Briefly, externalities are consequences of actions incurred upon people who did not perform those actions. Anything I do affecting you that you had no say in is an externality.

Usually I’m talking about how we want to internalize externalities, meaning that we set up a system of incentives to make it so that the consequences fall upon the people who chose the actions instead of anyone else. If you pollute a river, you should have to pay to clean it up. If you assault someone, you should serve jail time as punishment. If you invent a new technology, you should be rewarded for it. These are all attempts to internalize externalities.

But today I’m going to push back a little, and ask whether we really always want to internalize externalities. If you think carefully, it’s not hard to come up with scenarios where it actually seems fairer to leave the externality in place, or perhaps reduce it somewhat without eliminating it.

For example, suppose indeed that someone invents a great new technology. To be specific, let’s think about Jonas Salk, inventing the polio vaccine. This vaccine saved the lives of thousands of people and saved millions more from pain and suffering. Its value to society is enormous, and of course Salk deserved to be rewarded for it.

But we did not actually fully internalize the externality. If we had, every family whose child was saved from polio would have had to pay Jonas Salk an amount equal to what they saved on medical treatments as a result, or even an amount somehow equal to the value of their child’s life (imagine how offended people would get if you asked that on a survey!). Those millions of people spared from suffering would need to each pay, at minimum, thousands of dollars to Jonas Salk, making him of course a billionaire.

And indeed this is more or less what would have happened, if he had been willing and able to enforce a patent on the vaccine. The inability of some to pay for the vaccine at its monopoly prices would add some deadweight loss, but even that could be removed if Salk Industries had found a way to offer targeted price vouchers that let them precisely price-discriminate so that every single customer paid exactly what they could afford to pay. If that had happened, we would have fully internalized the externality and therefore maximized economic efficiency.

But doesn’t that sound awful? Doesn’t it sound much worse than what we actually did, where Jonas Salk received a great deal of funding and support from governments and universities, and lived out his life comfortably upper-middle class as a tenured university professor?

Now, perhaps he should have been awarded a Nobel Prize—I take that back, there’s no “perhaps” about it, he definitely should have been awarded a Nobel Prize in Medicine, it’s absurd that he did not—which means that I at least do feel the externality should have been internalized a bit more than it was. But a Nobel Prize is only 10 million SEK, about $1.1 million. That’s about enough to be independently wealthy and live comfortably for the rest of your life; but it’s a small fraction of the roughly $7 billion he could have gotten if he had patented the vaccine. Yet while the possible world in which he wins a Nobel is better than this one, I’m fairly well convinced that the possible world in which he patents the vaccine and becomes a billionaire is considerably worse.

Internalizing externalities makes sense if your goal is to maximize total surplus (a concept I explain further in the linked post), but total surplus is actually a terrible measure of human welfare.

Total surplus counts every dollar of willingness-to-pay exactly the same across different people, regardless of whether they live on $400 per year or $4 billion.

It also takes no account whatsoever of how wealth is distributed. Suppose a new technology adds $10 billion in wealth to the world. As far as total surplus, it makes no difference whether that $10 billion is spread evenly across the entire planet, distributed among a city of a million people, concentrated in a small town of 2,000, or even held entirely in the bank account of a single man.

Particularly apropos of the Salk example, total surplus makes no distinction between these two scenarios: a perfectly-competitive market where everything is sold at a fair price, and a perfectly price-discriminating monopoly, where everything is sold at the very highest possible price each person would be willing to pay.

This is a perfectly-competitive market, where the benefits are shared more or less equally (in this case exactly equally, but that need not be true in real life) between sellers and buyers:

[Figure: perfectly competitive market with elastic supply, surplus shared between consumers and producers]

This is a perfectly price-discriminating monopoly, where the benefits accrue entirely to the corporation selling the good:

[Figure: perfectly price-discriminating monopoly with elastic supply, all surplus captured by the seller]

In the former case, the company profits, consumers are better off, everyone is happy. In the latter case, the company reaps all the benefits and everyone else is left exactly as they were. In real terms those are obviously very different outcomes—the former being what we want, the latter being the cyberpunk dystopia we seem to be hurtling mercilessly toward. But in terms of total surplus, and therefore the kind of “efficiency” that is maximized by internalizing all externalities, they are indistinguishable.

In fact (as I hope to publish a paper about at some point), the way willingness-to-pay works, it weights rich people more. Redistributing goods from the poor to the rich will typically increase total surplus.

Here’s an example. Suppose there is a cake, which is sufficiently delicious that it offers 2 milliQALY in utility to whoever consumes it (this is a truly fabulous cake). Suppose there are two people to whom we might give this cake: Richie, who has $10 million in annual income, and Hungry, who has only $1,000 in annual income. How much will each of them be willing to pay?

Well, assuming logarithmic utility of wealth (an assumption which itself probably biases things slightly in favor of the rich), 1 milliQALY is about $1 to Hungry, so Hungry will be willing to pay $2 for the cake. To Richie, however, 1 milliQALY is about $10,000; so he will be willing to pay a whopping $20,000 for this cake.
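
For anyone who wants to see the arithmetic, here is a quick sketch of that calculation, under the same assumption that utility is logarithmic in income (so the marginal utility of a dollar is roughly one over annual income, measured in QALY); the function name is mine, purely for illustration:

```python
# Willingness-to-pay for a 2 milliQALY cake under logarithmic utility of wealth.
# With log utility calibrated in QALY, the marginal utility of a dollar is
# roughly (1 / annual income) QALY, so a fixed utility gain is "worth" an
# amount of money proportional to your income.

def willingness_to_pay(utility_gain_qaly, annual_income):
    marginal_utility_per_dollar = 1.0 / annual_income  # QALY per marginal dollar
    return utility_gain_qaly / marginal_utility_per_dollar

cake_utility = 0.002  # 2 milliQALY

print(willingness_to_pay(cake_utility, 1_000))       # Hungry: $2
print(willingness_to_pay(cake_utility, 10_000_000))  # Richie: $20,000
```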

What this means is that the cake will almost certainly be sold to Richie; and if we proposed a policy to redistribute the cake from Richie to Hungry, economists would emerge to tell us that we have just reduced total surplus by $19,998 and thereby committed a great sin against economic efficiency. They will cajole us into returning the cake to Richie and thus raising total surplus by $19,998 once more.

This despite the fact that I stipulated that the cake is worth just as much in real terms to Hungry as it is to Richie; the difference is due to their wildly differing marginal utility of wealth.

Indeed, it gets worse, because even if we suppose that the cake is worth much more in real utility to Hungry—because he is in fact hungry—it can still easily turn out that Richie’s willingness-to-pay is substantially higher. Suppose that Hungry actually gets 20 milliQALY out of eating the cake, while Richie still only gets 2 milliQALY. Hungry’s willingness-to-pay is now $20, but Richie is still going to end up with the cake.

Now, if your thought is, “Why would Richie pay $20,000, when he can go to another store and get another cake that’s just as good for $20?”—well, he wouldn’t. But in the sense we mean for total surplus, willingness-to-pay isn’t what you’d actually pay given the market prices of goods; it is the absolute maximum price you’d be willing to pay to get that good under any circumstances, which works out to the marginal utility of the good divided by your marginal utility of wealth. In this sense the cake is “worth” $20,000 to Richie, and “worth” substantially less to Hungry—not because it’s actually worth less to him in real terms, but simply because Richie has so much more money.

Even economists often equate these two, implicitly assuming that we are spending our money up to the point where our marginal willingness-to-pay is the actual price we choose to pay; but in general our willingness-to-pay is higher than the price if we are willing to buy the good at all. The consumer surplus we get from goods is in fact equal to the difference between willingness-to-pay and actual price paid, summed up over all the goods we have purchased.
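
In symbols (notation mine, added purely for clarity): writing MU_i for your marginal utility of good i, MU_W for your marginal utility of wealth, and p_i for the price you actually paid,

```latex
\[
\mathrm{WTP}_i = \frac{MU_i}{MU_W},
\qquad
\text{Consumer surplus} = \sum_{i\ \text{purchased}} \bigl(\mathrm{WTP}_i - p_i\bigr).
\]
```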

Internalizing all externalities would definitely maximize total surplus—but would it actually maximize happiness? Probably not.

If you asked most people what their marginal utility of wealth is, they’d have no idea what you’re talking about. But most people do actually have an intuitive sense that a dollar is worth more to a homeless person than it is to a millionaire, and that’s really all we mean by diminishing marginal utility of wealth.

I think the reason we’re uncomfortable with the idea of Jonas Salk getting $7 billion from selling the polio vaccine, rather than the same number of people getting the polio vaccine and Jonas Salk only getting the $1.1 million from a Nobel Prize, is that we intuitively grasp that after that $1.1 million makes him independently wealthy, the rest of the money is just going to sit in some stock account and continue making even more money, while if we’d let the families keep it they would have put it to much better use raising their children who are now protected from polio. We do want to reward Salk for his great accomplishment, but we don’t see why we should keep throwing cash at him when it could obviously be spent in better ways.

And indeed I think this intuition is correct; great accomplishments—which is to say, large positive externalities—should be rewarded, but not in direct proportion. Maybe there should be some threshold above which we say, “You know what? You’re rich enough now; we can stop giving you money.” Or maybe it should simply damp down very quickly, so that a contribution which is worth $10 billion to the world pays only slightly more than one that is worth $100 million, but a contribution that is worth $100,000 pays considerably more than one which is only worth $10,000.

What it ultimately comes down to is that if we make all the benefits incur to the person who did it, there aren’t any benefits anymore. The whole point of Jonas Salk inventing the polio vaccine (or Einstein discovering relativity, or Darwin figuring out natural selection, or any great achievement) is that it will benefit the rest of humanity, preferably on to future generations. If you managed to fully internalize that externality, this would no longer be true; Salk and Einstein and Darwin would have become fabulously wealthy, and then somehow we’d all have to continue paying into their estates or something an amount equal to the benefits we received from their discoveries. (Every time you use your GPS, pay a royalty to the Einsteins. Every time you take a pill, pay a royalty to the Darwins.) At some point we’d probably get fed up and decide we’re no better off with them than without them—which is exactly by construction how we should feel if the externality were fully internalized.

Internalizing negative externalities is much less problematic—it’s your mess, clean it up. We don’t want other people to be harmed by your actions, and if we can pull that off that’s fantastic. (In reality, we usually can’t fully internalize negative externalities, but we can at least try.)

But maybe internalizing positive externalities really isn’t so great after all.

Bet five dollars for maximum performance

JDN 2457433

One of the more surprising findings from the study of human behavior under stress is the Yerkes-Dodson curve:

[Figure: the original Yerkes-Dodson curves for simple and complex tasks]
This curve shows how well humans perform at a given task, as a function of how high the stakes are on whether or not they do it properly.

For simple tasks, it says what most people intuitively expect—and what neoclassical economists appear to believe: As the stakes rise, the more highly incentivized you are to do it, and the better you do it.

But for complex tasks, it says something quite different: While increased stakes do raise performance to a point—with nothing at stake at all, people hardly work at all—it is possible to become too incentivized. Formally we say the curve is not monotonic; it has a local maximum.
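
As a purely illustrative toy model (not the actual Yerkes-Dodson data, and the functional forms are my own invention), you can capture the qualitative shape like this: performance on simple tasks rises monotonically with the stakes, while performance on complex tasks rises to an optimum and then falls.

```python
import math

# Toy model of the Yerkes-Dodson curves (illustrative only; not fitted to data).
# "stakes" is some measure of incentive/arousal; both curves are scaled to [0, 1].

def simple_task_performance(stakes):
    # Monotonically increasing: more incentive, better performance, with diminishing returns.
    return 1 - math.exp(-stakes)

def complex_task_performance(stakes, optimum=2.0):
    # Rises to a maximum at `optimum`, then declines: too much pressure
    # inhibits the higher cognitive functions the task requires.
    return (stakes / optimum) * math.exp(1 - stakes / optimum)

for stakes in [0.5, 1, 2, 4, 8]:
    print(stakes, round(simple_task_performance(stakes), 2),
          round(complex_task_performance(stakes), 2))
# Simple-task performance keeps climbing; complex-task performance peaks at stakes = 2
# and then falls off, which is the non-monotonicity described above.
```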

This is one of many reasons why it’s ridiculous to say that top CEOs should make tens of millions of dollars a year on the rise and fall of their company’s stock price (as a great many economists do in fact say). Even if I believed that stock prices accurately reflect the company’s viability (they do not), and believed that the CEO has a great deal to do with the company’s success, it would still be a case of overincentivizing. When a million dollars rides on a decision, that decision is going to be worse than if the stakes had only been $100. With this in mind, it’s really not surprising that higher CEO pay is correlated with worse company performance. Stock options are terrible motivators, but do offer a subtle way of making wages adjust to the business cycle.

The reason for this is that as the stakes get higher, we become stressed, and that stress response inhibits our ability to use higher cognitive functions. The sympathetic nervous system evolved to make us very good at fighting or running away in the face of danger, which works well should you ever be attacked by a tiger. It did not evolve to make us good at complex tasks under high stakes, the sort of skill we’d need when calculating the trajectory of an errant spacecraft or disarming a nuclear warhead.

To be fair, most of us never have to worry about piloting errant spacecraft or disarming nuclear warheads—indeed, even in today’s world you’re about as likely to be attacked by a tiger as to do either. (The rate of tiger attacks in the US is just under 2 per year, and the rate of manned space launches in the US was about 5 per year until the Space Shuttle was retired.)

There are certain professions, such as pilots and surgeons, where performing complex tasks under life-or-death pressure is commonplace, but only a small fraction of people take such professions for precisely that reason. And if you’ve ever wondered why we use checklists for pilots and there is discussion of also using checklists for surgeons, this is why—checklists convert a single complex task into many simple tasks, allowing high performance even at extreme stakes.

But we do have to do a fair number of quite complex tasks with stakes that are, if not urgent life-or-death scenarios, then at least actions that affect our long-term life prospects substantially. In my tutoring business I encounter one in particular quite frequently: Standardized tests.

Tests like the SAT, ACT, GRE, LSAT, GMAT, and other assorted acronyms are not literally life-or-death, but they often feel that way to students because they really do have a powerful impact on where you’ll end up in life. Will you get into a good college? Will you get into grad school? Will you get the job you want? Even subtle deviations from the path of optimal academic success can make it much harder to achieve career success in the future.

Of course, these are hardly the only examples. Many jobs require us to complete tasks properly on tight deadlines, or else risk being fired. Working in academia infamously requires publishing in journals in time to rise up the tenure track, or else falling off the track entirely. (This incentivizes the production of huge numbers of papers, whether they’re worth writing or not; yes, the number of papers published goes down after tenure, but is that a bad thing? What we need to know is whether the number of good papers goes down. My suspicion is that most if not all of the reduction in publications is due to not publishing things that weren’t worth publishing.)

So if you are faced with this sort of task, what can you do? If you realize that you are faced with a high-stakes complex task, you know your performance will be bad—which only makes your stress worse!

My advice is to pretend you’re betting five dollars on the outcome.

Ignore all other stakes, and pretend you’re betting five dollars. $5.00 USD. Do it right and you get a Lincoln; do it wrong and you lose one.

What this does is ensure that you care enough—you don’t want to lose $5 for no reason—but not too much—if you do lose $5, you don’t feel like your life is ending. We want to put you near that peak of the Yerkes-Dodson curve.

The great irony here is that you most want to do this when it is most untrue. If you actually do have a task for which you’ve bet $5 and nothing else rides on it, you don’t need this technique, and any technique to improve your performance is not particularly worthwhile. It’s when you have a standardized test to pass that you really want to use this—and part of me even hopes that people know to do this whenever they have nuclear warheads to disarm. It is precisely when the stakes are highest that you must put those stakes out of your mind.

Why five dollars? Well, the exact amount is arbitrary, but this is at least about the right order of magnitude for most First World individuals. If you really want to get precise, I think the optimal stakes level for maximum performance is something like 100 microQALY per task, and assuming logarithmic utility of wealth, $5 at the US median household income of $53,600 is approximately 100 microQALY. If you have a particularly low or high income, feel free to adjust accordingly. Literally you should be prepared to bet about an hour of your life; but we are not accustomed to thinking that way, so use $5. (I think most people, if asked outright, would radically overestimate what an hour of life is worth to them. “I wouldn’t give up an hour of my life for $1,000!” Then why do you work at $20 an hour?)
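
Here is that back-of-the-envelope calculation spelled out, under the same logarithmic-utility assumption; the numbers are the ones cited above, and the rest is just arithmetic:

```python
# Back-of-the-envelope calibration of the "$5 bet" heuristic.
# Assumes logarithmic utility of wealth, so the marginal utility of a dollar
# is roughly (1 / annual income) QALY per dollar.

median_household_income = 53_600                  # US median household income, $/year
marginal_utility = 1 / median_household_income    # QALY per marginal dollar

stake_dollars = 5
stake_qaly = stake_dollars * marginal_utility
print(stake_qaly * 1e6)          # ~93 microQALY, i.e. roughly the 100 microQALY target

# 100 microQALY is 1/10,000 of a (quality-adjusted) year -- about an hour of life:
hours_per_year = 365.25 * 24
print(100e-6 * hours_per_year)   # ~0.88 hours
```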

It’s a simple heuristic, easy to remember, and sometimes effective. Give it a try.

Why Millennials feel “entitled”

JDN 2457064

I’m sure you’ve already heard this plenty of times before, but just in case here are a few particularly notable examples: “Millennials are entitled.” “Millennials are narcissistic.” “Millennials expect instant gratification.”

Fortunately there are some more nuanced takes as well: One survey shows that we are perceived as “entitled” and “self-centered” but also “hardworking” and “tolerant”. This article convincingly argues that Baby Boomers show at least as much ‘entitlement’ as we do. Another article points out that young people have been called these sorts of names for decades—though actually the proper figure is centuries.

Though some of the ‘defenses’ leave a lot to be desired: “OK, admittedly, people do live at home. But that’s only because we really like our parents. And why shouldn’t we?” Uh, no, that’s not it. Nor is it that we’re holding off on getting married. The reason we live with our parents is that we have no money and can’t pay for our own housing. And why aren’t we getting married? Because we can’t afford to pay for a wedding, much less buy a home and start raising kids. (Since the time I drafted this for Patreon and it went live, yet another article hand-wringing over why we’re not getting married was published—in Scientific American, of all places.)

Are we not buying cars because we don’t like cars? No, we’re not buying cars because we can’t afford to pay for them.

The defining attributes of the Millennial generation are that we are young (by definition) and broke (with very few exceptions). We’re not uniquely narcissistic or even tolerant; younger generations always have these qualities.

But there may be some kernel of truth here, which is that we were promised a lot more than we got.

Educational attainment in the United States is the highest it has ever been. Take a look at this graph from the US Department of Education:

Percentage of 25- to 29-year-olds who completed a bachelor’s or higher degree, by race/ethnicity: Selected years, 1990–2014

[Figure: US Department of Education chart of bachelor’s degree attainment by race/ethnicity, 1990–2014]

More young people of every demographic except American Indians now have college degrees (and those figures fluctuate a lot because of small samples—whether my high school had an achievement gap for American Indians depended upon how I self-identified on the form, because there were only two others and I was tied for the highest GPA).

Even the IQ of Millennials is higher than that of our parents’ generation, which is higher than their parents’ generation; (measured) intelligence rises over time in what is called the Flynn Effect. IQ tests have to be adjusted to be harder by about 3 points every 10 years because otherwise the average score would stop being 100.

As your level of education increases, your income tends to go up and your unemployment tends to go down. In 2014, while people with doctorates or professional degrees had about 2% unemployment and made a median income of $1590 per week, people without even high school diplomas had about 9% unemployment and made a median income of only $490 per week. The Bureau of Labor Statistics has a nice little bar chart of these differences:

[Figure: BLS bar chart of unemployment rates and median weekly earnings by educational attainment, 2014]

Now the difference is not quite as stark. With the most recent data, the unemployment rate is 6.7% for people without a high school diploma and 2.5% for people with a bachelor’s degree or higher.

But that’s for the population as a whole. What about the population of people 18 to 35, those of us commonly known as Millennials?

Well, first of all, our unemployment rate overall is much higher. With the most recent data, unemployment among people ages 20-24 is a whopping 9.4%. For ages 25 to 34 it gets better, 5.3%; but it’s still much worse than unemployment at ages 35-44 (4.0%), 45-54 (3.6%), or 55+ (3.2%). Overall, unemployment among Millennials is about 6.7% while unemployment among Baby Boomers is about 3.2%, half as much. (Gen X is in between, but a lot closer to the Boomers at around 3.8%.)

It was hard to find data specifically breaking it down by both age and education at the same time, but the hunt was worth it.

Among people age 20-24 not in school:

Without a high school diploma, 328,000 are unemployed, out of 1,501,000 in the labor force. That’s an unemployment rate of 21.9%. Not a typo, that’s 21.9%.

With only a high school diploma, 752,000 are unemployed, out of 5,498,000 in the labor force. That’s an unemployment rate of 13.7%.

With some college but no bachelor’s degree, 281,000 are unemployed, out of 3,620,000 in the labor force. That’s an unemployment rate of 7.7%.

With a bachelor’s degree, 90,000 are unemployed, out of 2,313,000 in the labor force. That’s an unemployment rate of 3.9%.
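
If you want to check the arithmetic, the rates above are just the number unemployed divided by the labor force in each group; here is that check, using the figures quoted above (in thousands):

```python
# Unemployment rates for people age 20-24 not in school, from the figures above:
# (unemployed, labor force), in thousands.
groups = {
    "No high school diploma":   (328, 1_501),
    "High school diploma only": (752, 5_498),
    "Some college, no degree":  (281, 3_620),
    "Bachelor's degree":        (90, 2_313),
}

for label, (unemployed, labor_force) in groups.items():
    rate = 100 * unemployed / labor_force
    print(f"{label}: {rate:.1f}%")
# Prints roughly 21.9%, 13.7%, 7.8%, and 3.9% -- matching the quoted rates up to rounding.
```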

What this means is that someone 24 or under needs to have a bachelor’s degree in order to have the same overall unemployment rate that people from Gen X have in general, and even with a bachelor’s degree, people under 24 still have a higher unemployment rate than what Baby Boomers simply have by default. If someone under 24 doesn’t even have a high school diploma, forget it; their unemployment rate is comparable to the population unemployment rate at the trough of the Great Depression.

In other words, we need to have college degrees just to match the general population older than us, of whom only 20% have a college degree; and there is absolutely nothing a Millennial can do in terms of education to ever have the tiny unemployment rate (about 1.5%) of Baby Boomers with professional degrees. (Be born White, be in perfect health, have a professional degree, have rich parents, and live in a city with very high employment, and you just might be able to pull it off.)

So, why do Millennials feel like a college degree should “entitle” us to a job?

Because it does for everyone else.

Why do we feel “entitled” to a higher standard of living than the one we have?

Take a look at this graph of GDP per capita in the US:

[Figure: US real GDP per capita]

You may notice a rather sudden dip in 2009, around the time most Millennials graduated from college and entered the labor force. On the next graph, I’ve added a curve approximating what it would look like if the previous trend had continued:

[Figure: US real GDP per capita, with the pre-recession trend line added]

(There’s a lot on this graph for wonks like me. You can see how the unit-root hypothesis seemed to fail in the previous four recessions, where economic output rose back up to potential; but it clearly held in this recession, and there was a permanent loss of output. It also failed in the recession before that. So what’s the deal? Why do we recover from some recessions and take a permanent blow from others?)

If the Great Recession hadn’t happened, per-capita GDP would be closer to $51,000 in 2005 dollars, instead of its actual value of about $46,000. In today’s money, that means our current $56,000 would instead be closer to $62,000. If we had simply stayed on the growth trajectory we were promised, we’d be almost 10 log points richer (about 11%, for the uninitiated).
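
For anyone who wants to see where those log points come from, here is the back-of-the-envelope calculation, using the rounded dollar figures above (approximations, not official data):

import math

actual = 56_000  # rough current GDP per capita, today's dollars
trend = 62_000   # rough value if the pre-2008 trend had continued

log_points = 100 * math.log(trend / actual)  # difference in natural logs
percent = 100 * (trend / actual - 1)         # ordinary percent difference

print(f"{log_points:.1f} log points, or {percent:.1f}%")  # about 10 log points, about 11%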

So, why do Millennials feel “entitled” to things we don’t have? In a word, macroeconomics.

People anchored their expectations of what the world would be like on forecasts. The forecasts said that the skies were clear and economic growth would continue apace; so naturally we assumed that this was true. When the floor fell out from under our economy, only a few brilliant and/or lucky economists saw it coming; even people who were paying quite close attention were blindsided. We were raised in a world where economic growth promised rising standard of living and steady employment for the rest of our lives. And then the storm hit, and we were thrown into a world of poverty and unemployment—and especially poverty and unemployment for us.

We are angry that we were promised more than we were given, and angry that the distribution of what wealth we do have grows ever more unequal. We are angry that our parents’ generation promised what they could not deliver, and angry that it was their own blind worship of the corrupt banking system that allowed the crash to happen.

And because we are angry and demand a fairer share, they have the audacity to call us “narcissistic”.

“Polarization” is not symmetric

I titled the previous post using the word “polarization”, because that’s the simplest word we have for this phenomenon; but its connotations really aren’t right. “Polarization” suggests that both sides have gotten more extreme, and as a result they are now more fiercely in conflict. In fact what has happened is that Democrats have more or less stayed where they were, while Republicans have veered off into insane far-right crypto-fascist crazyland.

If you don’t believe my graph from ThinkProgress, take it from The Washington Post.

Even when pollsters try to spin it so that maybe the Democrats have also polarized, the dimensions along which Democrats have gotten “more extreme” amount to no longer being bigoted toward women, racial minorities, immigrants, and gay people. So while the Republicans have in fact gotten more extreme, Democrats have simply gotten less bigoted.

Yes, I suppose you can technically say that this means we’ve gotten “more extremely liberal on social issues”; but we live in a world in which “liberal on social issues” means “you don’t hate and oppress people based on who they are”. Democrats did not take on some crazy far-left social view like, I don’t know, legalizing marriage between humans and dogs. They just stopped being quite so racist, sexist, and homophobic.

It’s on issues where there is no obvious moral imperative that it makes sense to talk about whether someone has become more extreme to the left or the right.

Many economic issues are of this character; one could certainly go too far left economically if one were to talk about nationalizing the auto industry (because it worked so well in East Germany!) or repossessing and redistributing all farmland (what could possibly go wrong?). But Bernie Sanders’ “radical socialism” sounds a lot like FDR’s New Deal—which worked quite well, and is largely responsible for the rise of the American middle class.

Meanwhile, Donald Trump’s economic policy proposals (if you can even call them that) are so radical and so ad hoc that they would turn back the clock on decades of economic development and international trade. He wants to wage a trade war with China that would throw the US into recession and send millions of people in China back into poverty. And that’s not even counting the human rights violations required to carry out the deportation of 11 million immigrants that Trump has been clamoring for since day one.

Or how about national defense? There is room for reasonable disagreement here, and there definitely is a vein of naive leftist pacifism that tells us to simply stay out of it when other countries are invaded by terrorists or commit genocide.

FDR’s view on national defense can be found in his “Day of Infamy” speech after Pearl Harbor:

The attack yesterday on the Hawaiian Islands has caused severe damage to American naval and military forces. I regret to tell you that very many American lives have been lost. In addition, American ships have been reported torpedoed on the high seas between San Francisco and Honolulu.

Yesterday the Japanese Government also launched an attack against Malaya.
Last night Japanese forces attacked Hong Kong.
Last night Japanese forces attacked Guam.
Last night Japanese forces attacked the Philippine Islands.
Last night the Japanese attacked Wake Island.
And this morning the Japanese attacked Midway Island.

Japan has therefore undertaken a surprise offensive extending throughout the Pacific area. The facts of yesterday and today speak for themselves. The people of the United States have already formed their opinions and well understand the implications to the very life and safety of our nation.

As Commander-in-Chief of the Army and Navy I have directed that all measures be taken for our defense. But always will our whole nation remember the character of the onslaught against us.

No matter how long it may take us to overcome this premeditated invasion, the American people, in their righteous might, will win through to absolute victory.

When Hillary Clinton lived through a similar event—9/11—this was her response:

We will also stand united behind our President as he and his advisors plan the necessary actions to demonstrate America’s resolve and commitment. Not only to seek out and exact punishment on the perpetrators, but to make very clear that not only those who harbor terrorists, but those who in any way aid or comfort them whatsoever will now face the wrath of our country. And I hope that that message has gotten through to everywhere it needs to be heard. You are either with America in our time of need or you are not.

We also stand united behind our resolve — as this resolution so clearly states — to recover and rebuild in the aftermath of these tragic acts. You know, New York was not an accidental choice for these madmen, these terrorists, and these instruments of evil. They deliberately chose to strike at a city, which is a global city — it is the city of the Twenty First century, it epitomizes who we are as Americans. And so this in a very real sense was an attack on America, on our values, on our power, and on who we are as a people. And I know — because I know America — that America will stand behind New York. That America will offer whatever resources, aid, comfort, support that New Yorkers and New York require. Because the greatest rebuke we can offer to those who attack our way of life is to demonstrate clearly that we are not cowed in any way whatsoever.

Sounds pretty similar to me.

Now, compare Eisenhower’s statements on the military to Ted Cruz’s.

First, Eisenhower, in his famous “Cross of Iron” speech:

The best would be this: a life of perpetual fear and tension; a burden of arms draining the wealth and the labor of all peoples; a wasting of strength that defies the American system or the Soviet system or any system to achieve true abundance and happiness for the peoples of this earth.

Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed.

This world in arms is not spending money alone.

It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children.

The cost of one modern heavy bomber is this: a modern brick school in more than 30 cities.

It is two electric power plants, each serving a town of 60,000 population.

It is two fine, fully equipped hospitals.

It is some 50 miles of concrete highway.

We pay for a single fighter with a half million bushels of wheat.

We pay for a single destroyer with new homes that could have housed more than 8,000 people.

This, I repeat, is the best way of life to be found on the road the world has been taking.

This is not a way of life at all, in any true sense. Under the cloud of threatening war, it is humanity hanging from a cross of iron.

That is the most brilliant exposition of the opportunity cost of military spending I’ve ever heard. Let me remind you that Eisenhower was a Republican and a five-star general (we don’t even have those anymore; we stop at four stars except in major wars). He was not a naive pacifist, but a soldier who understood the real cost of war.

Now, Ted Cruz, in his political campaign videos:

Instead we will have a President who will make clear we will utterly destroy ISIS. We will carpet bomb them into oblivion. I don’t know if sand can glow in the dark, but we’re gonna find out. And we’re gonna make abundantly clear to any militant on the face of the planet, that if you go and join ISIS, if you wage jihad and declare war on America, you are signing your death warrant.

Under President Obama and Secretary Clinton the world is more dangerous, and America is less safe. If I’m elected to serve as commander-in-chief, we won’t cower in the face of evil. America will lead. We will rebuild our military, we will kill the terrorists, and every Islamic militant will know, if you wage jihad against us, you’re signing your death warrant.
And under no circumstances will I ever apologize for America.

In later debates Cruz tried to soften this a bit, but it ended up making him sound like he doesn’t understand what words mean. He tried to redefine “carpet bombing” to mean “precision missile strikes” (but of course, precision missile strikes are what we’re already doing). He tried to walk back the “sand can glow in the dark” line, but it’s pretty clear that the only way that line makes sense is if you intend to deploy nuclear weapons. (I’m pretty sure he didn’t mean bioluminescent phytoplankton.) He gave a speech declaring his desire to commit mass murder, and is now trying to Humpty Dumpty his way out of the outrage it provoked.

This is how far the Republican Party has fallen.

Medicaid expansion and the human cost of political polarization

JDN 2457422

As of this writing, 22 of our 50 US states have still refused to expand Medicaid under the Affordable Care Act. Several other states (including Michigan) expanded Medicaid, but on an intentionally slowed timetable. The way the law was written, people below the poverty line in those states are not eligible for subsidized private insurance (because it was assumed they’d be on Medicaid!), so almost 3 million people are without health insurance because of the refused expansions.

Why? Would expanding Medicaid on the original timetable be too arduous to accomplish? If so, explain why 13 states managed to do it on time.

Would expanding Medicaid be expensive, and put a strain on state budgets? No; the federal government pays at least 90% of the cost (100% through 2016, phasing down to 90% by 2020 and staying there). Some states claim that even the remaining 10% is unbearable, but when you figure in the reduced strain on emergency rooms and public health, expanding Medicaid would most likely save state money, especially with that much federal funding.

To really understand why so many states are digging in their heels, I’ve made you a little table. It includes three pieces of information about each state. The first column is whether it accepted the Medicaid expansion immediately (“Yes”); accepted it with delays or conditions, or hasn’t officially accepted it yet but is negotiating to do so (“Maybe”); or refused it completely (“No”). The second column is the political party of the state governor, and the third column is the majority party of the state legislature: “D” for Democrat, “R” for Republican, “I” for Independent, or “M” for mixed, if one house has one majority and the other house has the other.

State Medicaid? Governor Legislature
Alabama No R R
Alaska Maybe I R
Arizona Yes R R
Arkansas Maybe R R
California Yes D D
Colorado Yes D M
Connecticut Yes D D
Delaware Yes D D
Florida No R R
Georgia No R R
Hawaii Yes D D
Idaho No R R
Illinois Yes R D
Indiana Maybe R R
Iowa Maybe R M
Kansas No R R
Kentucky Yes R M
Louisiana Maybe D R
Maine No R M
Maryland Yes R D
Massachusetts Yes R D
Michigan Maybe R R
Minnesota No D M
Mississippi No R R
Missouri No D M
Montana Maybe D M
Nebraska No R R
Nevada Yes R R
New Hampshire Maybe D R
New Jersey Yes R D
New Mexico Yes R M
New York Yes D D
North Carolina No R R
North Dakota Yes R R
Ohio Yes R R
Oklahoma No R R
Oregon Yes D D
Pennsylvania Maybe D R
Rhode Island Yes D D
South Carolina No R R
South Dakota Maybe R R
Tennessee No R R
Texas No R R
Utah No R R
Vermont Yes D D
Virginia Maybe D R
Washington Yes D D
West Virginia Yes D R
Wisconsin No R R
Wyoming Maybe R R

I have taken the liberty of some color-coding.

The states highlighted in red are states that refused the Medicaid expansion which have Republican governors and Republican majorities in both legislatures; that’s Alabama, Florida, Georgia, Idaho, Kansas, Mississippi, Nebraska, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, Utah, and Wisconsin.

The states highlighted in purple are states that refused the Medicaid expansion which have mixed party representation between Democrats and Republicans; that’s Maine, Minnesota, and Missouri.

And I would have highlighted in blue the states that refused the Medicaid expansion which have Democrat governors and Democrat majorities in both legislatures—but there aren’t any.

There were Republican-led states which said “Yes” (Arizona, Nevada, North Dakota, and Ohio). There were Republican-led states which said “Maybe” (Arkansas, Indiana, Michigan, South Dakota, and Wyoming).

Mixed states were across the board, some saying “Yes” (Colorado, Illinois, Kentucky, Maryland, Massachusetts, New Jersey, New Mexico, and West Virginia), some saying “Maybe” (Alaska, Iowa, Louisiana, Montana, New Hampshire, Pennsylvania, and Virginia), and a few saying “No” (Maine, Minnesota, and Missouri).

But every single Democrat-led state said “Yes”. California, Connecticut, Delaware, Hawaii, New York, Oregon, Rhode Island, Vermont, and Washington. There aren’t even any Democrat-led states that said “Maybe”.

Perhaps it is simplest to summarize this in another table. Each row is a party configuration (“Democrat”, “Republican”, or “mixed”); each column is a Medicaid decision (“Yes”, “Maybe”, or “No”); and each cell is the count of states fitting that description:

            Yes  Maybe  No
Democrat      9      0   0
Republican    4      5  14
Mixed         8      7   3

Shall I do a chi-square test? Sure, why not? A chi-square test of independence produces a p-value of 0.00001. This is not a coincidence. Being a Republican-led state is strongly correlated with rejecting the Medicaid expansion.
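
For the wonks: here is a minimal sketch of that test, built from the summary table above. The exact p-value depends a bit on which variant you run (Pearson versus likelihood-ratio), but every variant comes out far below any conventional significance threshold:

import numpy as np
from scipy.stats import chi2_contingency

# Rows: Democrat-led, Republican-led, Mixed; columns: Yes, Maybe, No
table = np.array([
    [9, 0, 0],
    [4, 5, 14],
    [8, 7, 3],
])

# Standard Pearson chi-square test of independence
chi2, p, dof, _ = chi2_contingency(table)
print(f"Pearson chi-square = {chi2:.1f}, dof = {dof}, p = {p:.1g}")

# Likelihood-ratio (G-test) variant; on this table, with its empty cells,
# it gives a somewhat smaller p-value.
g, p_g, _, _ = chi2_contingency(table, lambda_="log-likelihood")
print(f"G statistic = {g:.1f}, p = {p_g:.1g}")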

Indeed, because the elected officials were there first, I can say that there is Granger causality from being a Republican-led state to rejecting the Medicaid expansion. Based on the fact that mixed states were much less likely to reject Medicaid than Republican states, I could even estimate a dose-response curve on how having more Republicans makes you more likely to reject Medicaid.

Republicans did this, is basically what I’m getting at here.

Obamacare itself was legitimately controversial (though the Republicans never quite seemed to grasp that they needed a counterproposal for their argument to make sense), but once it was passed, accepting the Medicaid expansion should have been a no-brainer. The federal government is giving you money in order to give healthcare to poor people. It will not be expensive for your state budget; in fact it will probably save you money in the long run. It will help thousands or millions of your constituents. Its impact on the federal budget is negligible.

But no, 14 Republican-led states couldn’t let themselves get caught implementing a Democrat’s policy, especially if it would actually work. If it failed catastrophically, they could say “See? We told you so.” But if it succeeded, they’d have to admit that their opponents sometimes have good ideas. (You know, just like the Democrats did, when they copied most of Mitt Romney’s healthcare system.)

As a result of their stubbornness, almost 3 million Americans don’t have healthcare. Some of those people will die as a result; economists estimate about 7,000 deaths. Hundreds of thousands more will suffer. All needlessly.

When 3,000 people are killed in a terrorist attack, Republicans clamor to kill millions in response with carpet bombing and nuclear weapons.

But when 7,000 people will die without healthcare, Republicans say we can’t afford it.