How (not) to talk about the defense budget

JDN 2457927 EDT 20:20.

This week on Facebook I ran into a couple of memes about the defense budget that I thought were worth addressing. While the core message that the United States spends too much on the military is sound, these particular memes are so massively misleading that I think it would be irresponsible to let them go unanswered.

Tax_dollars_meme

First of all, this graph is outdated; it appears to be from about five years ago. If you use nominal figures for just direct military spending, the budget has been cut from just under $700 billion in 2010 (which appears to be what this figure shows) to only about $600 billion today. If you include veterans’ benefits, again nominally, we haven’t been below $700 billion since 2007; today we are now above $800 billion. I think the most meaningful measure is actually military spending as a percent of GDP, by which measure we’ve cut military spending from its peak of 4.7% of GDP in 2010 to 3.5% of GDP today.

It’s also a terrible way to draw a graph; using images instead of bars may be visually appealing, but it undermines the most important aspect of a bar graph, which is that you can easily visually compare relative magnitudes.

But the most important reason this graph is misleading is that it uses only the so-called “discretionary budget”, which includes almost all military spending but only a small fraction of spending on healthcare and social services. This creates a wildly inflated sense of how much we spend on the military relative to other priorities.

In particular, we’re excluding Medicare and Social Security, which are on the “mandatory budget”; each of these alone is comparable to total military spending. Here’s a very nice table of all US government spending broken down by category.

Let’s just look at federal spending for now. Including veterans’ benefits, we currently spend $814 billion per year on defense. On Social Security, we spend $959 billion. On healthcare, we spend $1,018 billion per year, of which $536 billion is Medicare.

We also spend $376 billion on social welfare programs and unemployment, along with $149 billion on education, $229 billion servicing the national debt, and $214 billion on everything else (such as police, transportation, and administration).
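Before looking at the graph, it’s worth checking the shares these figures imply. Here is a minimal sketch in Python; the amounts are the ones quoted above, in billions of dollars per year:

```python
# Federal spending figures quoted above, in billions of dollars per year.
spending = {
    "Defense (incl. veterans)": 814,
    "Social Security": 959,
    "Healthcare (incl. Medicare/Medicaid)": 1018,
    "Welfare & unemployment": 376,
    "Education": 149,
    "Debt service": 229,
    "Everything else": 214,
}

total = sum(spending.values())
for category, amount in spending.items():
    share = 100 * amount / total
    print(f"{category}: ${amount}B ({share:.1f}%)")
print(f"Total federal spending: ${total}B")
```

Defense comes out to about 22% of total federal spending: a major budget item, but well behind pensions and healthcare combined.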

I’ve made you a graph that accurately reflects these relative quantities:

US_federal_spending

As you can see, the military is one of our major budget items, but the largest categories are actually pensions (i.e. Social Security) and healthcare (i.e. Medicare and Medicaid).

Given the right year and properly adjusted bars on the graph, the meme may be strictly accurate about the discretionary budget, but it gives an extremely distorted sense of our overall government spending.

The next meme is even worse:

Lee_Camp_meme

Again the figures aren’t strictly wrong if you use the right year, but we’re only looking at the federal discretionary budget. Since basically all military spending is federal and discretionary, but most education spending is mandatory and done at the state and local level, this is an even more misleading picture.

Total annual US military spending (including veteran benefits) is about $815 billion.
Total US education spending (at all levels) is about $922 billion.

Here’s an accurate graph of total US government spending at all levels:

US_total_spending

That is, we spend more on education than we do on the military, and dramatically more on healthcare.

However, the United States clearly does spend far too much on the military and probably too little on education; the proper comparison to make is to other countries.

Most other First World countries spend dramatically more on education than they do on the military.

France, for example, spends about $160 billion per year on education, but only about $53 billion per year on the military—and France is actually a relatively militaristic country, with the 6th-highest total military spending in the world.

Germany spends about $172 billion per year on education, but only about $44 billion on the military.

In absolute figures, the United States overwhelms all other countries in the world—we spend as much as at least the next 10 countries combined.

Using figures from the Stockholm International Peace Research Institute (SIPRI), the US spends $610 billion of the world’s total $1,776 billion, meaning that over a third of the world’s military spending is by the United States.
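That fraction is straightforward to verify; a trivial sketch, using the two SIPRI totals quoted above (in billions of dollars):

```python
us_spending = 610      # US military spending, $ billions (SIPRI)
world_total = 1776     # world military spending, $ billions (SIPRI)

us_share = us_spending / world_total
print(f"US share of world military spending: {us_share:.1%}")  # 34.3%
```

A bit over a third, as stated.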

This is a graph of the top 15 largest military budgets in the world.

world_military_spending

One of these things is not like the other ones…

It probably makes the most sense to compare military spending as a portion of GDP, which makes the US no longer an outlier worldwide, but still very high by First World standards:

world_military_spending_GDP

If we do want to compare military spending to other forms of spending, I think we should do that in international perspective as well. Here is a graph of education spending versus military spending as a portion of GDP, in several First World countries (military from SIPRI and the CIA, and education from the UNDP):

world_military_education

Our education spending is about average (though somehow we do it so inefficiently that we don’t provide college for free, unlike Germany, France, Finland, Sweden, or Norway), but our military spending is by far the highest.

How about a meme about that?

Elasticity and the Law of Supply

JDN 2457292 EDT 16:16.

Today’s post is kind of a mirror image of the previous post earlier this week; I was talking about demand before, and now I’m talking about supply. (In the next post, I’ll talk about how the two work together to determine the actual price of goods.)

Just as there is an elasticity of demand which describes how rapidly the quantity demanded changes with changes in price, likewise there is an elasticity of supply which describes how much the quantity supplied changes with changes in price.

The elasticity of supply is defined as the proportional change in quantity supplied divided by the proportional change in price; so for example if the number of cars produced increases 10% when the price of cars increases by 5%, the elasticity of supply of cars would be 10%/5% = 2.
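That definition translates directly into code. A minimal sketch in Python, using simple percent changes:

```python
def supply_elasticity(pct_change_quantity: float, pct_change_price: float) -> float:
    """Proportional change in quantity supplied divided by
    proportional change in price."""
    return pct_change_quantity / pct_change_price

# The car example from the text: quantity supplied rises 10%
# when the price rises 5%.
print(supply_elasticity(0.10, 0.05))  # 2.0
```

The units cancel, so it works the same whether you pass decimals (0.10, 0.05) or percentage points (10, 5).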

Goods that have high elasticity of supply will rapidly flood the market if the price increases even a small amount; goods that have low elasticity of supply will sell at about the same rate as ever even if the price increases dramatically.

Generally, the more initial investment of capital a good requires, the lower its elasticity of supply is going to be.

If most of the cost of production is in the actual marginal cost of producing each new gizmo, then elasticity of supply will be high, because it’s easy to produce more or produce less as the market changes.

But if most of the cost is in building machines, inventing technologies, or training employees (investments that have to be made before you can produce any at all), while the cost of each individual gizmo is unimportant, the elasticity of supply will be low, because there’s no sense letting all that capital you invested go to waste.
We can see these differences in action by comparing different sources of electric power.

Photovoltaic solar power has a high elasticity of supply, because building new solar panels is cheap and fast. As the price of solar energy fluctuates, the number of solar panels produced changes rapidly. Technically this is actually a “fixed capital” cost, but it’s so modular that you can install as little or as much solar power capacity as you like, which makes it behave a lot more like a variable cost than a fixed cost. As a result, a 1% increase in the price paid for solar power increases the amount supplied by a whopping 2.7%, a supply elasticity of 2.7.

Oil has a moderate elasticity of supply, because finding new oil reserves is expensive but feasible. A lot of oil in the US is produced by small wells; 18% of US oil is produced by wells that put out less than 10 barrels per day. Those small wells can be turned on and off as the price of oil changes, and new ones can be built if it becomes profitable. As a result, investment in oil production is very strongly correlated with oil prices. Still, overall oil production changes only by moderate amounts; in the US it had been steadily decreasing since 1970 until very recently, when new technologies and weakened regulations resulted in a rapid increase to near-1970s levels. We sort of did hit peak oil; but it’s never quite that simple.

Nuclear fission has a very low elasticity of supply, because building a nuclear reactor is extremely expensive and requires highly advanced expertise. Building a nuclear power plant costs upward of $35 billion. Once a reactor is built, the cost of generating more power is relatively trivial; three-fourths of the total cost a nuclear power plant will ever incur is paid simply to build it (or to pay back the debt incurred by doing so). Even if the price of uranium plummets or the price of oil skyrockets, it would take a long time before more nuclear power plants would be built in response.

Elasticity of supply is generally a lot larger in the long run than in the short run. Over a period of a few days or months, many types of production can’t be changed significantly. If you have a corn field, you grow as much corn as you can this season; even if the price rose substantially you couldn’t actually grow any more than your field will allow. But over a period of a year to a few years, most types of production can be changed; continuing with the corn example, you could buy new land to plant corn next season.

The Law of Supply is actually a lot closer to a true law than the Law of Demand. A negative elasticity of supply is almost unheard of; at worst elasticity of supply can sometimes drop close to zero. It really is true that elasticity of supply is almost always positive.

Land has an elasticity of supply near zero; it’s extremely expensive (albeit not impossible; Singapore does it rather frequently) to actually create new land. As a result there’s really no good reason to ever raise the price of land; higher land prices don’t incentivize new production, they just transfer wealth to landowners. That’s why a land tax is such a good idea; it would transfer some of that wealth away from landowners and let us use it for public goods like infrastructure or research, or even just give it to the poor. A few countries actually have tried this; oddly enough, they include Singapore and Denmark, two of the few places in the world where the elasticity of land supply is appreciably above zero!

Real estate in general (which is what most property taxes are imposed on) is much trickier: In the short run it seems to have a very low elasticity, because building new houses or buildings takes a lot of time and money. But in the long run it actually has a high elasticity of supply, because there is a lot of profit to be made in building new structures if you can fund projects 10 or 15 years out. The short-run elasticity is something like 0.2, meaning a 1% increase in price only yields a 0.2% increase in supply; but the long-run elasticity may be as high as 8, meaning that a 1% increase in price yields an 8% increase in supply. This is why property taxes and rent controls seem like a really good idea at the time but actually probably have the effect of making housing more expensive. The economics of real estate has a number of fundamental differences from the economics of most other goods.
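To see how different those two responses are, here is a hypothetical sketch using the short-run (0.2) and long-run (8) elasticities quoted above, under a constant-elasticity approximation; the 1,000-unit base quantity is made up for illustration:

```python
def quantity_response(base_quantity: float, price_change: float,
                      elasticity: float) -> float:
    """New quantity supplied after a proportional price change,
    assuming constant elasticity of supply."""
    return base_quantity * (1 + elasticity * price_change)

# A 1% price increase applied to 1,000 housing units:
print(round(quantity_response(1000, 0.01, 0.2)))  # short run: 1002
print(round(quantity_response(1000, 0.01, 8.0)))  # long run:  1080
```

The same 1% price rise elicits only 2 extra units in the short run but 80 in the long run, which is why housing supply is far more responsive than it looks from year to year.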

Many important policy questions ultimately hinge upon the elasticity of supply: If elasticity is high, then taxing or regulating something is likely to cause large distortions of the economy, while if elasticity is low, taxes and regulations can be used to support public goods or redistribute wealth without significant distortion to the economy. On the other hand, if elasticity is high, markets generally function well on their own, while if elasticity is low, prices can get far out of whack. As a general rule of thumb, government intervention in markets is most useful and most necessary when elasticity is low.

Advertising: Someone is being irrational

JDN 2457285 EDT 12:52

I’m working on moving toward a slightly different approach to posting; instead of one long 3000-word post once a week, I’m going to try to do two more bite-sized posts of about 1500 words or less spread throughout the week. I’m actually hoping to work toward setting up a Patreon and making blogging into a source of income.

Today’s bite-sized post is about advertising, and a rather simple, basic argument that shows that irrational economic behavior is widespread.

First, there are advertisements that don’t make sense. They don’t tell you anything about the product, they are often completely absurd, and while sometimes entertaining they are rarely so entertaining that people would pay to see them in theaters or buy them on DVD—which means that any entertainment value they had is outweighed by the opportunity cost of seeing them instead of the actual TV show, movie, or whatever else it was you wanted to see.

If you doubt that there are advertisements that don’t make sense, I have one example in particular for you which I think will settle this matter:

If you didn’t actually watch it, you must. It is too absurd to be explained.

And of course there are many other examples, from Coca-Cola’s weird associations with polar bears to the series of GEICO TV spots about Neanderthals that they thought were so entertaining as to deserve a TV show (the world proved them wrong), to M&M commercials that present a terrifying world in which humans regularly consume the chocolatey flesh of other sapient citizens (and I thought beef was bad!).

Or here’s another good one:

In the above commercial, Walmart attempts to advertise themselves by showing a heartwarming story of a child who works hard to make money by doing odd jobs, including using the model of door-to-door individual sales that Walmart exists to make obsolete. The only contribution Walmart makes to the story is apparently “we have affordable bicycles for children”. Coca-Cola is also thrown in for some reason.

Certain products seem to attract nonsensical advertising more than others, with car insurance being the prime culprit of totally nonsensical and irrelevant commercials, perhaps because of GEICO in particular, which does not actually seem to be any good at providing car insurance but instead spends all of its resources making commercials.

Commercials for cars themselves are an interesting case, as certain ads actually appeal in at least a general way to the quality of the vehicle itself:

Then there are those that vaguely allude to qualities of their vehicles, but mostly immerse us in optimistic cyberpunk:

Others, however, make no attempt to say anything about the vehicle, instead spinning us exciting tales of giant hamsters who use the car and the power of dance to somehow form a truce between warring robot factions in a dystopian future (if you haven’t seen this commercial, none of that is a joke; see for yourself below):

So, I hope that I have satisfied you that there are in fact advertisements which don’t make sense, which could not possibly give anyone a rational reason to purchase the product contained within.

Therefore, at least one of the following statements must be true:

1. Consumers behave irrationally by buying products for irrational reasons
2. Corporations behave irrationally by buying advertisements that don’t work

Both could be true (in fact I think both are true), but at least one must be, on pain of contradiction, as long as you accept that there are advertisements which don’t provide rational reasons to buy products. There’s no wiggling out of this one, neoclassicists.

Advertising forms a large part of our economy—Americans spend $171 billion per year on ads, more than the federal government spends on education, and also more than the nominal GDP of Hungary or Vietnam. This figure is growing thanks to the Internet and its proliferation of “free” ad-supported content. Insofar as advertising is irrational, this money is being thrown down the drain.

The waste from spending on ads that don’t work is limited; you can’t waste more than you actually spent. But the waste from buying things you don’t actually need is not limited in the same way; an ad that cost $1 million to air (cheaper than a typical Super Bowl ad) could lead to $10 million in worthless purchases.

I wouldn’t say that all advertising is irrational; some ads do actually provide enough meaningful information about a product that they could reasonably motivate you to buy it (or at least look into buying it), and it is in both your best interest and the company’s best interest for you to have such information.

But I think it’s not unreasonable to estimate that about half of our advertising spending is irrational, either by making people buy things for bad reasons or by making corporations waste time and money on buying ads that don’t work. This amounts to some $85 billion per year, or enough to pay every undergraduate tuition at every public university in the United States.

This state of affairs is not inevitable.

Most meaningless ads could be undermined by regulation; instead of the current “blacklist” model where an ad is legal as long as it doesn’t explicitly state anything that is verifiably false, we could move to a “whitelist” model where an ad is illegal if it states anything that isn’t verifiably true. Red Bull cannot give you wings, Maxwell House isn’t good to the last drop, and Volkswagen needs to be more specific than “round for a reason”. We may never be able to completely eliminate irrelevant emotionally-salient allusions (pictures of families, children, puppies, etc.), but as long as the actual content of the words is regulated it would be much harder to deluge people with advertisements that provide no actual information.

We have a choice, as a civilization: Do we want to continue to let meaningless ads invade our brains and waste the resources of our society?

9/11, 14 years on—and where are our civil liberties?

JDN 2457278 (09/11/2015) EDT 20:53

Today is the 14th anniversary of the 9/11 attacks. A lot has changed since then—yet it’s quite remarkable what hasn’t. In particular, we still don’t have our civil liberties back.

In our immediate panicked response to the attacks, the United States almost unanimously passed the USA PATRIOT ACT, giving unprecedented power to our government in surveillance, searches, and even arrests and detentions. Most of those powers have been renewed repeatedly and remain in effect; the only major change has been a slight weakening of the NSA’s authority to use mass dragnet surveillance on Internet traffic and phone metadata. And this change in turn was almost certainly only made because of Edward Snowden, who is still forced to live in Russia for fear of being executed if he returns to the US. That is, the man most responsible for the only significant improvement in civil liberties in the United States in the last decade is living in Russia because he has been branded a traitor.

No, the traitors here are the over one hundred standing US Congress members who voted for an act that is in explicit and direct violation of the Constitution. At the very least every one of them should be removed from office, and we as voters have the power to do that—so why haven’t we?

In particular, why are Dan Lipinski and Steny Hoyer, both Democrats from non-southern states who voted every single time to extend provisions of the PATRIOT ACT, still in office? At least Carl Levin had the courtesy to resign after sponsoring the act allowing indefinite detention—I hope we would have voted him out anyway, since I’d much rather have a Republican (and all the absurd economic policy that entails) than someone who apparently doesn’t believe the Fourth and Sixth Amendments have any meaning at all.

We have become inured to this loss of liberty; it feels natural or inevitable to us. But these are not minor inconveniences; they are not small compromises. Giving our government the power to surveil, search, arrest, imprison, torture, and execute anyone they want at any time without the system of due process—and make no mistake, that is what the PATRIOT ACT and the indefinite detention law do—means giving away everything that separates us from tyranny. Bypassing the justice system and the rule of law means bypassing everything that America stands for.

So far, these laws have actually mostly been used against people reasonably suspected of terrorism, that much is true; but it’s also irrelevant. Democracy doesn’t mean you give the government extreme power and they uphold your trust and use it benevolently. Democracy means you don’t give them that power in the first place.

If there’s really sufficient evidence to support an arrest for terrorism, get a warrant. If you don’t have enough evidence for a warrant, you don’t have enough evidence for an arrest. If there’s really sufficient evidence to justify imprisoning someone for terrorism, get a jury to convict. If you don’t have enough evidence to convince a jury, guess what? You don’t have enough evidence to imprison them. These are not negotiable. They are not “political opinions” in any ordinary sense. The protection of due process is so fundamental to democracy that without it political opinions lose all meaning.

People talk about “Big Government” when we suggest increasing taxes on capital gains or expanding Medicare. No, that isn’t Big Government. Searching without warrants is Big Government. Imprisoning people without trial is Big Government. From all the decades of crying wolf in which any policy someone doesn’t like is accused of being “tyranny”, we seem to have lost the ability to recognize actual tyranny. I hope you understand the full force of my meaning when I say that the PATRIOT ACT is literally fascist. Fascism has come to America, and as predicted it was wrapped in the flag and carrying a cross.

In this sort of situation, a lot of people like to quote (or misquote) Benjamin Franklin:

“Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”

With the qualifiers “essential” and “temporary”, this quote seems right; but a lot of people forget them and quote him as saying:

“Those who would give up liberty to purchase safety, deserve neither liberty nor safety.”

That’s clearly wrong. We do in fact give up liberty to purchase safety, and as well we should. We give up our liberty to purchase weapons-grade plutonium; we give up our liberty to drive at 220 mph. The question we need to be asking is: How much liberty are we giving up to gain how much safety?

Spoken like an economist, the question is not whether you will give up liberty to purchase safety—the question is at what price you’re willing to make the purchase. The price we’ve been paying in response to terrorism is far too high. Indeed, the price we are paying is tantamount to America itself.

As horrific as 9/11 was, it’s important to remember: It only killed 3,000 people.

This statement probably makes you uncomfortable; it may even offend you. How dare I say “only”?

I don’t mean to minimize the harm of those deaths. I don’t mean to minimize the suffering of people who lost friends, colleagues, parents, siblings, children. The death of any human being is the permanent destruction of something irreplaceable, a spark of life that can never be restored; it is always a tragedy and there is never any way to repay it.

But I think people are actually doing the opposite—they are ignoring or minimizing millions of other deaths because those deaths didn’t happen to be dramatic enough. A parent killed by a heart attack is just as lost as a parent who died in 9/11. A friend who died of brain cancer is just as gone as a friend who was killed in a terrorist attack. A child killed in a car accident is just as much a loss as a child killed by suicide bombers. If you really care about human suffering, I contend that you should care about all human suffering, not just the kind that makes the TV news.

Here is a list, from the CDC, of things that kill more Americans per month than terrorists have killed in the last three decades:

Heart disease: 50,900 per month

Cancer: 48,700 per month

Lung disease: 12,400 per month

Accidents: 10,800 per month

Stroke: 10,700 per month

Alzheimer’s: 7,000 per month

Diabetes: 6,300 per month

Influenza: 4,700 per month

Kidney failure: 3,900 per month

Terrorism deaths since 1985: 3,455

Yes, that’s right; influenza kills more Americans per month (on average; flu is seasonal, after all) than terrorism has killed in the last thirty years.

And for comparison, other violent deaths, not quite but almost as many per month as terrorism has killed in my entire life so far:

Suicide: 3,400 per month

Homicide: 1,300 per month

Now, with those figures in mind, I want you to ask yourself the following question: Would you be willing to give up basic, fundamental civil liberties in order to avoid any of these things?

Would you want the government to be able to arrest you and imprison you without trial for eating too many cheeseburgers, so as to reduce the risk of heart disease and stroke?

Would you want the government to monitor your phone calls and Internet traffic to make sure you don’t smoke, so as to avoid lung disease? Or to watch for signs of depression, to reduce the rate of suicide?

Would you want the government to be able to use targeted drone strikes, ordered directly by the President, pre-emptively against probable murderers (with a certain rate of collateral damage, of course), to reduce the rate of homicide?

I presume that the answer to all the above questions is “no”. So now I have to ask you: Why are you willing to give up those same civil liberties to prevent a risk that is three hundred times smaller?

And then of course there’s the Iraq War, which killed 4,400 Americans and at least 100,000 civilians, and the Afghanistan War, which killed 3,400 allied soldiers and over 90,000 civilians.

In response to the horrific murder of 3,000 people, we sacrificed another 7,800 soldiers and killed another 190,000 innocent civilians. What exactly did that accomplish? What benefit did we get for such an enormous cost?

The people who sold us these deadly wars and draconian policies did so based on the threat that terrorism could somehow become vastly worse, involving the release of some unstoppable bioweapon or the detonation of a full-scale nuclear weapon, killing millions of people—but that has never happened, has never gotten close to happening, and would be thousands of times worse than the worst terrorist attacks that have ever actually happened.

If we’re worried about millions of people dying, it is far more likely that there would be a repeat of the 1918 influenza pandemic, or an accidental detonation of a nuclear weapon, or a flashpoint event with Russia or China triggering World War III; it’s probably more likely that there would be an asteroid impact large enough to kill a million people than there would be a terrorist attack large enough to do the same.

As it is, heart disease is already killing millions of people—about a million every two years—and we aren’t so panicked about that as to give up civil liberties. Elsewhere in the world, malnutrition kills over 3 million children per year, essentially all of it due to extreme poverty, which we could eliminate by spending between a quarter ($150 billion) and a half ($300 billion) of our current military budget ($600 billion); but we haven’t even done that even though it would require no loss of civil liberties at all.

Why is terrorism different? In short, the tribal paradigm.

There are in fact downsides to not being infinite identical psychopaths, and this is one of them. An infinite identical psychopath would simply maximize their own probability of survival; but finite diverse tribalists such as we underreact to some threats (such as heart disease) and overreact to others (such as terrorism). We’ll do almost anything to stop the latter—and almost nothing to stop the former.

Terrorists are perceived as a threat not just to our individual survival like heart disease or stroke, but a threat to our tribe from another tribe. This triggers a deep, instinctual sense of panic and hatred that makes us willing to ignore principles we would otherwise uphold and commit acts of violence we would otherwise find unimaginable.

Indeed, it’s precisely that instinct which motivates the terrorists in the first place. From their perspective, we are the other tribe that threatens their tribe, and they are therefore willing to stop at nothing until we are destroyed.

In a fundamental way, when we respond to terrorism in this way we do not defeat them—we become them.

If you ask people who support the PATRIOT ACT, it’s very clear that they don’t see themselves as imposing upon the civil liberties of Americans. Instead, they see themselves as protecting Americans (our tribe), and they think the impositions upon civil liberties will only harm those who don’t count as Americans (other tribes). This is a pretty bizarre notion if you think about it carefully—if you don’t need a warrant or probable cause to imprison people, then what stops you from imprisoning people who aren’t terrorists?—but people don’t think about it carefully. They act on emotion, on instinct.

The odds of terrorists actually destroying America by killing people are basically negligible. Even the most deadly terrorist attack in recorded history—9/11—killed fewer Americans than die every month from diabetes, or every week from heart disease. Even the most extreme attacks feared (which are extremely unlikely) wouldn’t be any worse than World War II, which of course we won.

But the odds of terrorists destroying America by making us give up the rights and freedoms that define us as a nation? That’s well underway.

Free trade, fair trade, or what?

JDN 2457271 EDT 11:34.

As I mentioned in an earlier post, almost all economists are opposed to protectionism. In a survey of 264 AEA economists, 87% opposed tariffs to protect US workers against foreign competition.

(By the way, 58% said they usually vote Democrat and only 23% said they usually vote Republican. Given that economists are overwhelmingly middle-aged rich White males—only 12% of tenured faculty economists are women and the median income of economists is over $90,000—that’s saying something. Dare I suggest it’s saying that Democrat economic policy is usually better?)

There are a large number of published research papers showing large positive effects of free trade agreements, such as this paper, and this paper, and this paper, and this paper. It’s hard to find any good papers showing any significant negative effects. This is probably why the consensus is so strong; the empirical evidence is overwhelming.

Yet protectionism is very popular among the general public. The majority of both Democrat and Republican voters believe that free trade agreements have harmed the United States. For decades, protectionism has always been the politically popular answer.

To be fair, it’s actually possible to think that free trade harms the US but still support free trade; actually there are some economists who argue that free trade has harmed the US, but has benefited other countries like China and India so much more that it is worth it, making free trade an act of global altruism and good will (for the opposite view, here’s a pretty good article about how “free trade” in principle is often mercantilism in practice, and by no means altruistic). As Krugman talks about, there is some evidence that income inequality in the First World has been exacerbated by globalization—but it’s clearly not the primary reason for rising inequality.

What’s going on here? Are economists ignoring the negative impacts of free trade because it doesn’t fit their elegant mathematical models? Is the general public ignorant of how trade actually works? Does the way free trade works, or its interaction with human psychology, inherently obscure its benefits while emphasizing its harms?

Yes. All of the above.

One of the central mistakes of neoclassical economics is the tendency to over-aggregate. Instead of looking at the impact on individuals, it’s much easier to look at the impact on aggregated abstractions like trade flows and GDP. To some extent this is inevitable—there are simply too many people in the world to keep track of them all. But we need to be aware of what we lose when we aggregate, and we need to test the robustness of our theories by applying different models of aggregation (such as comparing “how does this affect Americans” with “how does this affect the First World middle class”).

It is absolutely unambiguous that free trade increases trade flows and GDP, and for small countries these benefits can be mind-bogglingly huge. A key part of the amazing success story of economic development that is Korea is that they dramatically increased their openness to global trade.

The reason for this is absolutely fundamental to economics, and in grasping it in 1776 Adam Smith basically founded the field: Voluntary trade benefits both parties.

As most economists would put it today, comparative advantage leads to Pareto-improving gains from trade. Or as I’d tend to put it, more succinctly yet just as thoroughly based in modern game theory: Trade is nonzero-sum.
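To make that concrete, here’s a toy Ricardian model in Python; the countries, goods, and hours-per-unit numbers are all made up purely for illustration:

```python
# Toy Ricardian model of gains from trade. Countries, goods, and the
# hours-per-unit numbers are invented for illustration.
labor = 100  # labor-hours available in each country
hours_per_unit = {
    "A": {"wine": 10, "cloth": 20},  # A is relatively better at wine
    "B": {"wine": 20, "cloth": 10},  # B is relatively better at cloth
}

def output(country, good, hours_spent):
    """Units produced when a country spends hours_spent on a good."""
    return hours_spent / hours_per_unit[country][good]

# Autarky: each country splits its labor evenly between both goods.
autarky = {good: output("A", good, labor / 2) + output("B", good, labor / 2)
           for good in ("wine", "cloth")}

# Free trade: each country specializes in its comparative advantage.
trade = {"wine": output("A", "wine", labor),
         "cloth": output("B", "cloth", labor)}

print(autarky)  # {'wine': 7.5, 'cloth': 7.5}
print(trade)    # {'wine': 10.0, 'cloth': 10.0} -- more of both goods
```

Nobody worked harder, nobody got worse at their job—yet the world ends up with more wine and more cloth. That’s nonzero-sum.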

When you sell a product to someone, it is because the money they’re offering you is worth more to you than the product—and because the product is worth more to them than the money. You each lose something you value less and gain something you value more—so you are both better off.

This mutual benefit occurs whether you are individuals, corporations, or nations. It’s a fundamental principle of economics that underlies the operation of markets at every scale.

This is what I think most people don’t understand when they say they want to “stop sending jobs overseas”. If by that all you mean is ensuring that there aren’t incentives to offshore and outsource, that’s quite reasonable. Even some degree of incentive to keep businesses in the US might make sense, to avoid a race-to-the-bottom in global wages. But I get the sense that it is more than this, that people have a general notion that jobs are zero-sum and if we hire a million people in China that means a million people must lose their jobs in the US. This is not simply wrong, it is fundamentally wrong; it misses the entire point of economics. If there is one core principle that defines economics, I think it would be that the universe is nonzero-sum; gains for some can also be gains for others. There is not a fixed amount of stuff in the world that we distribute; we can make more stuff. Handled properly, a trade that results in a million people hired in China can mean an extra million people hired in the US.

Once you introduce a competitive market, things get more complicated, because there aren’t just winners—there are also losers. When you have competitors, someone can buy from them instead of you, and the two of them benefit, but you are harmed. By the standard methods of calculating benefits and harms (which admittedly leave much to be desired), we can show quite clearly that in general, on average, the benefits outweigh the harms.

But of course we don’t live “in general, on average”. Despite the overwhelming, unambiguous benefit to the economy as a whole, there is some evidence that free trade can produce a good deal of harm to specific individuals.

Suppose you live in the US and your job is to assemble iPads. You’re good at it, you like it, it pays pretty well. But now Apple says that they want to “reduce labor costs” (they are in fact doing nothing of the sort; to really reduce labor costs in a deep economic sense you’d have to make work easier, more productive, or more fun—the wage and the cost are fundamentally different things), so they outsource production to Foxconn in China, who pay wages 1/30 of what you were being paid.

The net result of this change to the economy as a whole is almost certainly positive—the price of iPads goes down, we all get to have iPads. (There’s a meme going around claiming that the price of an iPad would be almost $15,000 if it were made in the US; no, it would cost about $1000 even if our productivity were no higher and Apple could keep their current profit margin intact—and both of those assumptions overstate the true cost. But since it’s currently selling for about $500, that’s still a big difference.) Apple makes more profits, which is why they did it—and we do have to count that in our GDP. Most importantly, workers in China get employed in safe, high-skill jobs instead of working in coal mines, subsistence farming, or turning to drugs and prostitution. More stuff, more profits, better jobs for some of the world’s poorest workers. These are all good things, and overall they outweigh the harm of you losing your job.

Well, from a global perspective, anyway. I doubt they outweigh the harm from your perspective. You still lost a good job; you’re now unemployed, and may have skills so specific that they can’t be transferred to anything else. You’ll need to retrain, which means going back to school or else finding one of those rare far-sighted companies that actually trains their workers. Since the social welfare system in the US is such a quagmire of nonsensical programs, you may be ineligible for support, or eligible in theory and unable to actually get it in practice. (Recently I got a notice from Medicaid that I need to prove again that my income is sufficiently low. Apparently it’s because I got hired at a temporary web development gig, which paid me a whopping $700 over a few weeks—why, that’s almost the per-capita GDP of Ghana, so clearly I am a high-roller who doesn’t need help affording health insurance. I wonder how much they spend sending out these notices.)

If we had a basic income—I know I harp on this a lot, but seriously, it solves almost every economic problem you can think of—losing your job wouldn’t make you feel so desperate, and owning a share in GDP would mean that the rising tide actually would lift all boats. This might make free trade more popular.

But even with ideal policies (which we certainly do not have), the fact remains that human beings are loss-averse. We care more about losses than we do about gains. The pain you feel from losing $100 is about the same as the joy you feel from gaining $200. The pain you feel from losing your job is about twice as intense as the joy you feel from finding a new one.
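Here’s how behavioral economists typically formalize this, as a stylized prospect-theory value function (after Kahneman and Tversky); I’ve set the loss-aversion coefficient to 2 to match the $100/$200 comparison and kept the curve linear for simplicity:

```python
# A stylized prospect-theory value function (after Kahneman & Tversky).
# lambda_ = 2 is the loss-aversion coefficient implied by the $100/$200
# comparison above; alpha = 1 (linear) keeps the sketch simple.
def value(x, alpha=1.0, lambda_=2.0):
    """Subjective value of a gain or loss x relative to the status quo."""
    if x >= 0:
        return x ** alpha
    return -lambda_ * ((-x) ** alpha)

print(value(200))   # 200.0: the joy of gaining $200...
print(value(-100))  # -200.0: ...feels as large as the pain of losing $100
```

(Kahneman and Tversky’s own estimates are closer to alpha ≈ 0.88 and lambda ≈ 2.25, which bends the curve but preserves the asymmetry that matters here.)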

Because of loss aversion, the constant churn of innovation and change, the “creative destruction” that Schumpeter considered the defining advantage of capitalism—well, it hurts. The constant change and uncertainty is painful, and we want to run away from it.

But the truth is, we can’t. There’s no way to stop the change in the global economy, and most of our attempts to insulate ourselves from it only end up hurting us more. This, I think, is the fundamental reason why protectionism is popular among the general public but not economists: The general public sees protectionism as a way of holding onto the past, while economists recognize that it is simply a way of damaging the future. That constant churning of people gaining and losing jobs isn’t a bug, it’s a feature—it’s the reason that capitalism is so efficient in the first place.

There are a few ways we can reduce the pain of this churning, but we need to focus on that—reducing the pain—rather than trying to stop the churning itself. We should provide social welfare programs that allow people to survive while they are unemployed. We should use active labor market policies to train new workers and match them with good jobs. We may even want to provide some sort of subsidy or incentive to companies that don’t outsource—a small one, to make sure they don’t do so needlessly, but not a large one, so they’ll still do it when it’s actually necessary.

But the one thing we must not do is stop creating jobs overseas. And yes, that is what we are doing, creating jobs. We are not sending jobs that already exist, we are creating new ones. In the short run we also destroy some jobs here, but if we do it right we can replace them—and usually we do okay.

If we stop creating jobs in India and China and around the world, millions of people will starve.

Yes, it is as stark as that. Millions of lives depend upon continued open trade. We in the United States are a manufacturing, technological and agricultural superpower—we could wall ourselves off from the world and only see a few percentage points shaved off of GDP. But a country like Nicaragua or Ghana or Vietnam doesn’t have that option; if they cut off trade, people start dying.

This is actually the main reason why our trade agreements are often so unfair; we are in by far the stronger bargaining position, so we can make them cut their tariffs on textiles even as we maintain our subsidies on agriculture. We are Mr. Bumble dishing out gruel and they are Oliver Twist begging for another bite.

We can’t afford to stop free trade. We can’t even afford to significantly slow it down. A global economy is the best hope we have for global peace and global prosperity.

That is not to say that we should leave trade completely unregulated; trade policy can and should be used to enforce human rights standards. That enormous asymmetry in bargaining power doesn’t have to be used to maximize profits; it can be used to advance human rights.

This is not as simple as saying we should never trade with nations that have bad human rights records, by the way. First of all that would require we cut off Saudi Arabia and China, which is totally unrealistic and would impoverish millions of people; second it doesn’t actually solve the problem. Instead we should use sanctions, tariffs, and trade agreements to provide incentives to improve human rights, rewarding governments that do and punishing governments that don’t. We could have a sliding tariff that decreases every time you show improvement in human rights standards. Think of it like behavioral reinforcement; reward good behavior and you’ll get more of it.
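To be concrete about what a sliding tariff might look like, here’s a hypothetical sketch; the base rate and the 0–100 “human-rights score” are entirely invented for illustration:

```python
# A hypothetical sliding tariff: the rate falls linearly as a country's
# score on some agreed human-rights index (0-100) improves. The base
# rate and the index itself are invented for illustration.
def sliding_tariff(rights_score, base_rate=0.25):
    """Tariff rate as a function of a 0-100 human-rights score."""
    score = max(0, min(100, rights_score))  # clamp to the index range
    return base_rate * (1 - score / 100)

print(sliding_tariff(0))    # 0.25: worst record pays the full base rate
print(sliding_tariff(100))  # 0.0: a clean record trades tariff-free
```

Every point of improvement is immediately rewarded with cheaper market access, which is exactly the reinforcement schedule you want.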

We do need to have sweatshops—but as Krugman has come around to realizing, we can make sweatshops safer. We can put pressure on other countries to treat their workers better, pay them more—and actually make the global economy more efficient, because right now their wages are held down below the efficient level by the power that corporations wield over them. We should not demand that they pay the same they would here in the First World—that’s totally unrealistic, given the difference in productivity—but we should demand that they pay what their workers actually deserve.

Similar incentives should apply to individual corporations, which these days are as powerful as some governments. For example, as part of a zero-tolerance program against forced labor, any company caught using or outsourcing to forced labor should have its profits garnished for damages and the executives who made the decision imprisoned. Sometimes #Scandinaviaisnotbetter; IKEA was involved in such outsourcing during the Cold War, and it is currently being litigated just how much they knew and what they could have done about it. If they knew and did nothing, some IKEA executive should be going to prison. If that seems extreme, let me remind you what they did: They used slaves.

My standard for penalizing human rights violations, whether by corporations or governments, is basically like this: Follow the decision-making up the chain of command, stopping only when the next-higher executive can clearly show to the preponderance of evidence that they were kept out of the loop. If no executive can provide sufficient evidence, the highest-ranking executive at the time the crime was committed will be held responsible. If you don’t want to be held responsible for crimes committed by people who work for you, it’s your responsibility to bring them to justice. Negligence in oversight will not be exonerating because you didn’t know; it will be incriminating because you should have. When your bank is caught laundering money for terrorists and drug lords, it isn’t enough to have your chief of compliance resign; he should be imprisoned—and if his superiors knew about it, so should they.
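That decision rule is simple enough to write down explicitly. Here’s a sketch, with entirely hypothetical names and chain of command:

```python
# A sketch of the rule above, with hypothetical names: walk up the
# chain of command from the person who committed the act, holding each
# level responsible until someone proves they were out of the loop.
def responsible_parties(chain_bottom_up):
    """chain_bottom_up: (name, proved_out_of_loop) pairs, ordered from
    the decision-maker up to the highest-ranking executive."""
    held = []
    for name, proved_out_of_loop in chain_bottom_up:
        if proved_out_of_loop:
            break  # this person and everyone above them is off the hook
        held.append(name)
    return held

chain = [("trader", False), ("compliance chief", False), ("CEO", True)]
print(responsible_parties(chain))  # ['trader', 'compliance chief']
```

If nobody in the chain can prove ignorance, the list runs all the way to the top—which is exactly the clause about the highest-ranking executive being held responsible.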

In fact maybe the focus should be on corporations, because we have the legal authority to do that. When dealing with other countries, there are United Nations rules and simply the de facto power of large trade flows and national standing armies. With Saudi Arabia or China, there’s a very real chance that they’ll simply tell us where we can shove it; but if we get that same kind of response from HSBC or Goldman Sachs (which, actually, we did), we can start taking out handcuffs (that, we did not do—but I think we should have).

We can also use consumer pressure to change the behavior of corporations, such as Fair Trade. There’s some debate about just how effective these things are, but the comparison that is often made between Fair Trade and tariffs is ridiculous; this is a change in consumer behavior, not a change in government policy. There is absolutely no loss of freedom. Choosing not to buy something does not constitute coercion against someone else. Maybe there are more efficient ways to spend money (like donating it directly to the best global development charities), but if you start going down that road you quickly turn into Peter Singer and start saying that wearing nicer shoes means you’re committing murder. By all means, let’s empirically study different methods of fighting poverty and focus on the ones that work best; but there’s a perverse smugness to criticisms of Fair Trade that says to me this isn’t actually about that at all. Instead, I think most people who criticize Fair Trade don’t support the idea of altruism at all—they’re far-right Randian libertarians who honestly believe that selfishness is the highest form of human morality. (It is in fact the second-lowest, according to Kohlberg.) Maybe it will turn out that Fair Trade is actually ineffective at fighting poverty, but it’s clear that an unregulated free market isn’t good at that either. Those aren’t the only options, and the best way to find out which methods work is to give them a try. Consumer pressure clearly can work in some cases, and it’s a low-cost zero-regulation solution. They say the road to Hell is paved with good intentions—but would you rather we have bad intentions instead?

By these two methods we could send a clear message to multinational corporations that if they want to do business in the US—and trust me, they do—they have to meet certain standards of human rights. This in turn will make those corporations put pressure on their suppliers, all the way down the supply chain, to uphold the standards lest they lose their contracts. With some companies upholding labor standards in Third World countries, others will be forced to, as workers refuse to work for companies that don’t. This could make life better for many millions of people.

But this whole plan only works on one condition: We need to have trade.

What is socialism?

JDN 2457265 EDT 10:47

Last night I was having a political discussion with some friends (as I am wont to do), and it became a little heated, though never uncongenial. A key point of contention was the fact that Bernie Sanders is a socialist, and what exactly that entails.

One of my friends was arguing that this makes him far-left, and thus it is fair when the news media often likes to make a comparison between Sanders on the left and Trump on the right. Donald Trump is actually oddly liberal on some issues, but his attitudes on racial purity, nativism, military unilateralism, and virtually unlimited executive power are literally fascist. Even his “liberal” views are more like the kind of populism that fascists have often used to win support in the past: Don’t you hate being disenfranchised? Give me absolute power and I’ll fix everything for you! Don’t like how our democracy has become corrupt? Don’t worry, I’ll get rid of it! (The democracy, that is.) While he certainly doesn’t align well with the Republican Party platform, I think it’s quite fair to say that Donald Trump is a far-right candidate.

Bernie Sanders, however, is not a far-left candidate. He is a center-left candidate. His views are basically consonant with the Labour Party of the UK and the Social Democratic Party of Germany. He has spoken often about the Scandinavian model (because, well, #Scandinaviaisbetter—Denmark, Sweden, and Norway are some of the happiest places on Earth). When we talk about Bernie Sanders we aren’t talking about following Cuba and the Soviet Union; we’re talking about following Norway and Sweden. As Jon Stewart put it, he isn’t a “crazy-pants cuckoo bird” as some would have you think.

But he’s a socialist, right? Well… sort of—we have to be very clear what that means.

The word “socialism” has been used to mean many things; it has been a cover for genocidal fascism (“National Socialism”) and tyrannical Communism (“Union of Soviet Socialist Republics”). It has become a pejorative thrown at Social Security, Medicare, banking regulations—basically any policy left of Milton Friedman. So apparently it means something between Medicare and the Holocaust.

Social democracy is often classified as a form of socialism—but one can actually make a pretty compelling case that social democracy is not socialism, but in fact a form of capitalism.

If we want a simple, consistent definition of “socialism”, I think I would put it thus: Socialism is a system in which the majority of economic activity is directly controlled by the government. Most, if not all, industries are nationalized; production and distribution are handled by centrally-planned quotas instead of market supply and demand. Under this definition, the USSR, Venezuela, Cuba, and (at least until recently) China are socialist—and under this definition, socialism is a very bad idea. The best-case scenario is inefficiency; the worst-case scenario is mass murder.

Social democracy, the position that Bernie Sanders espouses (and I basically agree with), is as follows: Social democracy is a system in which markets are taxed and regulated by a democratically-elected government to ensure that they promote general welfare, public goods are provided by the government, and transfer programs are used to reduce poverty and inequality.

Let’s also try to define “capitalism”: Capitalism is a system in which the majority of economic activity is handled by private sector markets.

Under the Scandinavian model, the majority of economic activity is handled by private sector markets, which are in turn regulated and taxed to promote the general welfare—that is, at least on these definitions, Scandinavia is both capitalist and social democratic.

In fact, so is the United States; while our taxes are lower and our regulations weaker, we still have substantial taxes and regulations. We do have transfer programs like WIC, SNAP, and Social Security that attempt to redistribute wealth and reduce poverty.

We could define “socialism” more broadly to mean any government intervention in the economy, in which case Bernie Sanders is a socialist and so is… almost everyone else, including most economists.

The majority of the most eminent American economists are in favor of social democracy. I don’t intend this as an argument from authority, but rather to give a sense of the scientific consensus. The consensus in economics is by no means as strong as that in biology or physics (or climatology, ahem), but there is still broad agreement on many issues.

In a survey of 264 members of the American Economics Association [pdf link], 77% opposed government ownership of enterprise (14% mixed feelings, 8% favor) but 71% favored redistribution of wealth in some form (7% mixed feelings, 20% opposed). That’s social democracy in a nutshell. 67% favored public schools (14% mixed feelings, 17% opposed); 75% favored Keynesian monetary policy (12% mixed feelings, 12% opposed); 51% favored Keynesian fiscal policy (19% mixed feelings, 30% opposed). 58% opposed tighter immigration restrictions (16% mixed feelings, 25% in favor). 79% support anti-discrimination laws. 68% favor gun control.

The major departure from left-wing views that the majority of economists make is a near-universal opposition to protectionism, with 86.8% opposed, 7.6% with mixed feelings, and only 5.3% in favor. It seems I am not the only economist to cringe when politicians say they want to “stop sending jobs overseas”, which they do left and right. This view is quite popular; but the evidence says that it is wrong. Protectionism is not the answer; you make your trading partners poorer, they retaliate with their own protections, and you both end up worse off. We need open trade. I’ll save the details on why open trade is so important for a later post.

One issue that economists are very divided on right now is minimum wage; 47.3% favor minimum wage, 38.3% oppose it, and 14.4% have mixed feelings. This division likely reflects the ambiguity of empirical results on the employment effect of minimum wage, which have a wide margin of error but effect sizes that cluster around zero. Economists are also somewhat divided on military aid, with 36.8% in favor, 33% opposed, and 29.9% with mixed feelings. This I attribute more to the fact that military aid, like most military action, can be justified in principle but is typically unjustified in practice. And indeed perhaps “mixed feelings” is the most reasonable view to have on war and its instruments.

Since Bernie Sanders strongly supports raising minimum wage and some of his statements verge on protectionism, I do have to place him to the left of the economic consensus. A lot of economists would probably disagree on the particulars of his tax plans and such. But his core policies are entirely in line with that consensus, and being a social democrat is absolutely part of that. Compare this to the Republicans, who keep trying to out-crazy each other (apparently Scott Walker thinks we should not only build a wall against Mexico, but also against Canada?) and want policies that were abandoned decades ago by mainstream economists (like the gold standard, or a balanced-budget amendment), or simply would never be taken seriously by mainstream economists at all (the aforementioned border wall, eliminating all environmental regulation, or ending all transfer payments and social welfare programs). Even the things they supposedly agree on I’m not sure they do; when economists say they want “deregulation” Republicans seem to think that means “no rules at all” when in fact it’s supposed to mean “simple, transparent rules that can be tightly and fairly enforced”. (I think we need a new term for it, though there is a slogan I like: “Deregulate with a scalpel, not a chainsaw.”) Obama has done a very good job of deregulating in the sense that economists intend, and I think in general most economists view him positively as a leader who made the best of a bad situation.

In any case, the broad consensus of American economists (and I think most economists around the world) is that some form of capitalist social democracy is the best system we have so far. There is dispute about particular policies—how much should the tax rates be, should we tax income, consumption, real estate, capital, etc.; how large should the transfers be; what regulations should be added or removed—but the basic concept of a market economy with a government that taxes, transfers, and regulates is not in serious dispute.

Indeed, social democracy is the economic system of the free world.

Even using the conservative Heritage Foundation’s data, the correlation between tax burden and economic freedom—that’s economic freedom—is small but positive. (I’m excluding missing data, as well as Timor-Leste because it has a “tax burden” larger than its GDP due to weird accounting of its tourism-based economy, and North Korea because they lie to us: they theoretically have “zero taxes”, while the Heritage Foundation reports them as 100% taxes, and neither figure is remotely true.) See for yourself:

Graph: Heritage Foundation Economic Freedom Index and tax burden

Why is this? Do taxes automatically make you more free? No, they make you less free, because you have to pay for things you didn’t choose to buy (which I admit, and which the Heritage Foundation accounts for in its index). But taxes are how you manage a free economy. You need to control monetary policy somehow, which means adding and removing money. The way that social democracies do this is by spending on public goods and transfers to add money, and taxing income, consumption, or assets to remove money. Even if you tie your money to the gold standard, you still need to pay for public goods like military and police; and with a fixed money supply that means spending must be matched by taxes.

There are other ways to do this. You could be like Zimbabwe and print as much money as you feel like. You could be like Venezuela, and have government-owned industries form the majority of your economy. Or, actually, you could not do it; you could fail to manage your country’s economy and leave it wallowing in poverty, like Ghana. All of the countries I just listed have lower tax burdens than the United States.

Within the framework of social democracy, there are higher taxes so that spending and transfers can be higher, which means that more public goods are provided and poverty is lower, which means that real equality of opportunity and thus, real economic freedom, are higher. It’s not that raising taxes automatically makes people more free; rather, the kind of policies that make people more free tend to be the kind of social-democratic policies that involve relatively high taxes.

Worldwide, the US is 12th in terms of economic freedom and 62nd in terms of tax burden, which currently stands at 24% of GDP. That’s quite low for a First World country, but still relatively high by world standards. The highest tax burden is in Eritrea at 50%; the lowest is in Kuwait at an astonishing 0.7% (I don’t even know how that’s possible). Neither is a really wonderful place to live (though Kuwait is better).

Indeed, if you restrict the sample to North America and Europe, the correlation basically disappears; all the countries are fairly free, all the taxes are fairly high, and within that the two aren’t very much related. (It’s been a long time since I’ve seen a trendline that flat, actually!)

Graph: Heritage Foundation Economic Freedom Index and tax burden, Europe and North America

Switzerland, Canada, and Denmark all have higher economic freedom scores than the United States, as well as higher tax burdens; but on the other hand, Greece, Spain, and Austria have higher tax burdens but lower freedom scores. All of them are variations on social democracy.

Is that socialism? I’m really not sure. Why does it matter, really?

What’s wrong with academic publishing?

JDN 2457257 EDT 14:23.

I just finished expanding my master’s thesis into a research paper that is, I hope, suitable for publication in an economics journal. As part of this process I’ve been looking into the process of submitting articles for publication in academic journals… and what I’ve found is disgusting and horrifying. It is astonishingly bad, and my biggest question is why researchers put up with it.

Thus, the subject of this post is what’s wrong with the system—and what we might do instead.

Before I get into it, let me say that I don’t actually disagree with “publish or perish” in principle—as SMBC points out, it’s a lot like “do your job or get fired”. Researchers should publish in peer-reviewed journals; that’s a big part of what doing research means. The problem is how most peer-reviewed journals are currently operated.

First of all, in case you didn’t know, most scientific journals are owned by for-profit corporations. The largest, Elsevier, owns The Lancet and all of ScienceDirect, and has a net income of over 1 billion euros a year. Then there’s Springer and Wiley-Blackwell; between the three of them, these publishers account for over 40% of all scientific publications. These for-profit publishers retain the full copyright to most of the papers they publish, and tightly control access with paywalls; the cost to get through these paywalls is generally thousands of dollars a year for individuals and millions of dollars a year for universities. Their monopoly power is so great it “makes Rupert Murdoch look like a socialist.”

For-profit journals do often offer an “open-access” option in which you basically buy back your own copyright, but the price is high—the most common I’ve seen are $1800 or $3000 per paper—and very few researchers do this, for obvious financial reasons. In fact I think for a full-time tenured faculty researcher it’s probably worth it, given the alternatives. (Then again, full-time tenured faculty are becoming an endangered species lately; what might be worth it in the long run can still be very difficult for a cash-strapped adjunct to afford.) Open-access means people can actually read your paper and potentially cite your paper. Closed-access means it may languish in obscurity.

And of course it isn’t just about the benefits for the individual researcher. The scientific community as a whole depends upon the free flow of information; the reason we publish in the first place is that we want people to read papers, discuss them, replicate them, challenge them. Publication isn’t the finish line; it’s at best a checkpoint. Actually one thing that does seem to be wrong with “publish or perish” is that there is so much pressure for publication that we publish too many pointless papers and nobody has time to read the genuinely important ones.

These prices might be justifiable if the for-profit corporations actually did anything. But in fact they are basically just aggregators. They don’t do the peer-review, they farm it out to other academic researchers. They don’t even pay those other researchers; they just expect them to do it. (And they do! Like I said, why do they put up with this?) They don’t pay the authors who have their work published (on the contrary, they often charge submission fees—about $100 seems to be typical—simply to look at them). It’s been called “the world’s worst restaurant”, where you pay to get in, bring your own ingredients and recipes, cook your own food, serve other people’s food while they serve yours, and then have to pay again if you actually want to be allowed to eat.

They pay for the printing of paper copies of the journal, which basically no one reads; and they pay for the electronic servers that host the digital copies that everyone actually reads. They also provide some basic copyediting services (copyediting APA style is a job people advertise on Craigslist—so you can guess how much they must be paying).

And even supposing that they actually provided some valuable and expensive service, the fact would remain that we are making for-profit corporations the gatekeepers of the scientific community. Entities that exist only to make money for their owners are given direct control over the future of human knowledge. If you look at Cracked’s “reasons why we can’t trust science anymore”, all of them have to do with the for-profit publishing system. p-hacking might still happen in a better system, but publishers that really had the best interests of science in mind would be more motivated to fight it than publishers that are simply trying to raise revenue by getting people to buy access to their papers.

Then there’s the fact that most journals do not allow authors to submit to multiple journals at once, yet take 30 to 90 days to respond and only publish a fraction of what is submitted—it’s almost impossible to find good figures on acceptance rates (which is itself a major problem!), but the highest figures I’ve seen are 30% acceptance, a more typical figure seems to be 10%, and some top journals go as low as 3%. In the worst-case scenario you are locked into a journal for 90 days with only a 3% chance of it actually publishing your work. At that rate publishing an article could take years.
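
To see how those numbers compound, here is a rough sketch (the figures come from the acceptance rates and response times above; treating each exclusive submission as an independent trial is a simplification):

```python
# Hypothetical sketch: expected time to acceptance when journals demand
# exclusive submission. Each submission is modeled as an independent
# trial with acceptance probability p and a fixed response time in days.
def expected_days_to_publish(p, response_days):
    """Expected days until acceptance.

    The number of submissions needed follows a geometric distribution
    with mean 1/p; each attempt costs response_days of waiting.
    """
    return response_days / p

# Best case cited above: 30% acceptance, 30-day turnaround.
print(expected_days_to_publish(0.30, 30))   # 100.0 days
# Worst case cited: 3% acceptance, 90-day turnaround.
print(expected_days_to_publish(0.03, 90))   # 3000.0 days -- over 8 years
```

Even the optimistic scenario averages over three months of dead time per paper; the pessimistic one really does stretch into years.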

Is open-access the solution? Yes… well, part of it, anyway.

There are a large number of open-access journals, some of which do not charge submission fees, but very few of them are prestigious, and many are outright predatory. Predatory journals charge exorbitant fees, often after accepting papers for publication; many do little or no real peer review. There are almost seven hundred known predatory open-access journals; over one hundred have even been caught publishing hoax papers. These predatory journals are corrupting the process of science.

There are a few reputable open-access journals, such as BMC Biology and PLOS ONE. Though not actually a journal, arXiv serves a similar role. These will be part of the solution, most definitely. Yet even legitimate open-access journals often charge each author over $1,000 to publish an article. There is a small but significant positive correlation between publication fees and journal impact factor.

We need to found more open-access journals which are funded by either governments or universities, so that neither author nor reader ever pays a cent. Science is a public good and should be funded as such. Even if copyright makes sense for other forms of content (I’m not so sure about that), it most certainly does not make sense for scientific knowledge, which by its very nature is only doing its job if it is shared with the world.

These journals should be specifically structured to be method-sensitive but results-blind. (It’s a very good thing that medical trials are usually registered before they are completed, so that publication is assured even if the results are negative—the same should be done with other sciences. Unfortunately, even in medicine there is significant publication bias.) If you could sum up the scientific method in one phrase, it might just be that: Method-sensitive but results-blind. If you think you know what you’re going to find beforehand, you may not be doing science. If you are certain what you’re going to find beforehand, you’re definitely not doing science.

The process should still be highly selective, but it should be possible—indeed, expected—to submit to multiple journals at once. If journals want to start paying their authors to entice them to publish in that journal rather than take another offer, that’s fine with me. Researchers are the ones who produce the content; if anyone is getting paid for it, it should be us.

This is not some wild and fanciful idea; it’s already the way that book publishing works. Very few literary agents or book publishers would ever have the audacity to say you can’t submit your work elsewhere; those that try are rapidly outcompeted as authors stop submitting to them. It’s fundamentally unreasonable to expect anyone to hang all their hopes on a particular buyer months in advance—and that is what you are, publishers, you are buyers. You are not sellers, you did not create this content.

But new journals face a fundamental problem: Good researchers will naturally want to publish in journals that are prestigious—that is, journals that are already prestigious. When all of the prestige is in journals that are closed-access and owned by for-profit companies, the best research goes there, and the prestige becomes self-reinforcing. Journals are prestigious because they are prestigious; welcome to tautology club.

Somehow we need to get good researchers to start boycotting for-profit journals and start investing in high-quality open-access journals. If Elsevier and Springer can’t get good researchers to submit to them, they’ll change their ways or wither and die. Research should be funded and published by governments and nonprofit institutions, not by for-profit corporations.

This may in fact highlight a much deeper problem in academia, the very concept of “prestige”. I have no doubt that Harvard is a good university, a better university than most; but is it actually the best, as most people seem to assume? Might Stanford or UC Berkeley be better, or University College London, or even the University of Michigan? How would we tell? Are the students better? Even if they are, might that just be because all the better students went to the schools that had better reputations? Controlling for the quality of the student, attending a more prestigious university is almost uncorrelated with better outcomes. Those who get accepted to Ivies but attend other schools do just as well in life as those who actually attend Ivies. (Good news for me, getting into Columbia but going to Michigan.) Yet once a university acquires such a high reputation, it can be very difficult for it to lose that reputation, and even more difficult for others to catch up.

Prestige is inherently zero-sum; for me to get more prestige you must lose some. For one university or research journal to rise in rankings, another must fall. Aside from simply feeding on other prestige, the prestige of a university is largely based upon the students it rejects—its “selectivity” score. What does it say about our society that we value educational institutions based upon the number of people they exclude?

Zero-sum ranking is always easier to do than nonzero-sum absolute scoring. Actually that’s a mathematical theorem, and one of the few good arguments against range voting (still not nearly good enough, in my opinion); if you have a list of scores you can always turn them into ranks (potentially with ties); but from a list of ranks there is no way to turn them back into scores.
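
The one-way nature of that transformation is easy to demonstrate (a minimal sketch; the dense-ranking convention for ties is an arbitrary choice):

```python
# Scores determine ranks, but ranks alone cannot recover scores.
def ranks_from_scores(scores):
    """Rank 1 = highest score; tied scores share the same rank."""
    distinct = sorted(set(scores), reverse=True)
    return [distinct.index(s) + 1 for s in scores]

print(ranks_from_scores([9.5, 7.0, 9.5, 3.2]))  # [1, 2, 1, 3]
# Going the other way is impossible: the ranking [1, 2, 1, 3] is equally
# consistent with scores [100, 50, 100, 1] -- the magnitudes are lost.
```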

Yet ultimately it is absolute scores that must drive humanity’s progress. If life were simply a matter of ranking, then progress would be by definition impossible. No matter what we do, there will always be top-ranked and bottom-ranked people.

There is simply no way mathematically for more than 1% of human beings to be in the top 1% of the income distribution. (If you’re curious where exactly that lies today, I highly recommend this interactive chart by the New York Times.) But we could raise the standard of living for the majority of people to a level that only the top 1% once had—and in fact, within the First World we have already done this. We could in fact raise the standard of living for everyone in the First World to a level that only the top 1%—or less—had as recently as the 16th century, by the simple change of implementing a basic income.

There is no way for more than 0.14% of people to have an IQ above 145, because IQ is defined to have a mean of 100 and a standard deviation of 15, regardless of how intelligent people are. People could get dramatically smarter over time (and in fact have), and yet it would still be the case that by definition, only 0.14% can be above 145.
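
That figure is just the upper tail of the normal distribution; here is a quick sanity check, assuming IQ is exactly normal with the stated mean and standard deviation:

```python
import math

def fraction_above(iq, mean=100.0, sd=15.0):
    """Fraction of the population above a given IQ (upper normal tail)."""
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# 145 is three standard deviations above the mean.
print(round(100 * fraction_above(145), 2))  # 0.13 -- i.e. roughly the 0.14% cited
```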

Similarly, there is no way for much more than 1% of people to go to the top 1% of colleges. There is no way for more than 1% of people to be in the highest 1% of their class. But we could increase the number of college degrees (which we have); we could dramatically increase literacy rates (which we have).

We need to find a way to think of science in the same way. I wouldn’t suggest simply using number of papers published or even number of drugs invented; both of those are skyrocketing, but I can’t say that most of the increase is actually meaningful. I don’t have a good idea of what an absolute scale for scientific quality would look like, even at an aggregate level; and it is likely to be much harder still to make one that applies on an individual level.

But I think that ultimately this is the only way, the only escape from the darkness of cutthroat competition. We must stop thinking in terms of zero-sum rankings and start thinking in terms of nonzero-sum absolute scales.

The Warren Rule is a good start

JDN 2457243 EDT 10:40.

As far back as 2010, Elizabeth Warren proposed a simple regulation on the reporting of CEO compensation that was then built into Dodd-Frank—but the SEC has resisted actually applying that rule for five years; only now will it actually take effect (and by “now” I mean over the next two years). For simplicity I’ll refer to that rule as the Warren Rule, though I don’t see a lot of other people doing that (most people don’t give it a name at all).

Two things are important to understand about this rule, which both undercut its effectiveness and make all the right-wing whinging about it that much more ridiculous.

1. It doesn’t actually place any limits on CEO compensation or employee salaries; it merely requires corporations to consistently report the ratio between them. Specifically, the rule says that every publicly-traded corporation must report the ratio between the “total compensation” of their CEO and the median salary (with benefits) of their employees; wisely, it includes foreign workers (with a few minor exceptions—lobbyists fought for more but fortunately Warren stood firm), so corporations can’t simply outsource everything but management to make it look like they pay their employees more. Unfortunately, it does not include contractors, which is awful; expect to see corporations working even harder to outsource their work to “contractors” who are actually employees without benefits (not that they weren’t already). The greatest victory here will be for economists, who now will have more reliable data on CEO compensation; and for consumers, who will now find it more salient just how overpaid America’s CEOs really are.

2. While it does wisely cover “total compensation”, that isn’t actually all the money that CEOs receive for owning and operating corporations. It includes salaries, bonuses, benefits, and newly granted stock options—it does not include the value of stock options previously exercised or dividends received from stock the CEO already owns.

TIME screwed this up; they took it at face value when Larry Page reported a $1 “total compensation”, which technically is true by how “total compensation” is defined; he received a $1 token salary and no new stock awards. But Larry Page has net wealth of over $38 billion; about half of that is Google stock, so even if we ignore all others, on Google’s PE ratio of about 25, Larry Page received at least $700 million in Google retained earnings alone. (In my personal favorite unit of wealth, Page receives about 3 romneys a year in retained earnings.) No, TIME, he is not the lowest-paid CEO in the world; he has simply structured his income so that it comes entirely from owning shares instead of receiving a salary. Most top CEOs do this, so be wary when it says a Fortune 500 CEO received only $2 million, and completely ignore it when it says a CEO received only $1. Probably in the former case and definitely in the latter, their real money is coming from somewhere else.
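
The arithmetic behind that estimate can be sketched as follows (all figures are the rough approximations cited above, not filings data):

```python
# A P/E ratio of 25 means each dollar of stock corresponds to
# 1/25 of a dollar of annual earnings.
def implied_earnings(stock_value, pe_ratio):
    """Annual earnings attributable to a shareholding, given the P/E ratio."""
    return stock_value / pe_ratio

page_google_stock = 19e9   # roughly half of a $38 billion net wealth
print(implied_earnings(page_google_stock, 25))  # 760000000.0 -- about $760 million
```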

Of course, the complaints about how this is an unreasonable demand on businesses are totally absurd. Most of them keep track of all this data anyway; it’s simply a matter of porting it from one spreadsheet to another. (I also love the argument that only “idiosyncratic investors” will care; yeah, what sort of idiot would care about income inequality or be concerned how much of their investment money is going directly to line a single person’s pockets?) They aren’t complaining because it will be a large increase in bureaucracy or a serious hardship on their businesses; they’re complaining because they think it might work. Corporations are afraid that if they have to publicly admit how overpaid their CEOs are, they might actually be pressured to pay them less. I hope they’re right.

CEO pay is set in a very strange way; instead of being based on an estimate of how much they are adding to the company, a CEO’s pay is typically set as a certain margin above what the average CEO is receiving. But then as the process iterates and everyone tries to be above average, pay keeps rising, more or less indefinitely. Anyone with a basic understanding of statistics could have seen this coming, but somehow thousands of corporations didn’t—or else simply didn’t care.
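
The ratchet dynamic can be sketched with a toy simulation (the 10% premium and the starting pay are purely illustrative assumptions):

```python
# If every board targets pay "a margin above the current average",
# the average itself climbs each round, with no upper bound.
def simulate_ratchet(start_pay, premium, years):
    avg = start_pay
    for _ in range(years):
        avg *= 1 + premium   # everyone resets to average plus a margin
    return avg

print(round(simulate_ratchet(1_000_000, 0.10, 20)))  # 6727500 -- about 6.7x in 20 years
```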

Most people around the world want the CEO-to-employee pay ratio to be dramatically lower than it is. Indeed, unrealistically lower, in my view. Most countries say only 6 to 1, while Scandinavia says only 2 to 1. I want you to think about that for a moment; if the average employee at a corporation makes $50,000, people in Scandinavia think the CEO should only make $100,000, and people elsewhere think the CEO should only make $300,000? I’m honestly not sure what would happen to our economy if we made such a rule. There would be very little incentive to want to become a CEO; why bear all that fierce competition and get blamed for everything to make only twice as much as you would as an average employee?

On the other hand, most CEOs don’t actually do all that much; CEO pay is basically uncorrelated with company performance. Maybe it would be better if they weren’t paid very much, or even if we didn’t have them at all. But under our current system, capping CEO pay also caps the pay of basically everyone else; the CEO is almost always the highest-paid individual in any corporation.

I guess that’s really the problem. We need to find ways to change the overall attitude of our society that higher authority necessarily comes with higher pay; that isn’t a rational assessment of marginal productivity, it’s a recapitulation of our primate instincts for a mating hierarchy. He’s the alpha male, of course he gets all the bananas.

The president of a university should make next to nothing compared to the top scientists at that university, because the president is a useless figurehead and scientists are the foundation of universities—and human knowledge in general. Scientists are actually the one example I can think of where one individual truly can be one million times as productive as another—though even then I don’t think that justifies paying them one million times as much.

Most corporations should be structured so that managers make moderate incomes and the highest incomes go to engineers and designers, the people who have the highest skills and do the most important work. A car company without managers seems like an interesting experiment in employee ownership. A car company without engineers seems like an oxymoron.

Finally, people who work in finance should make very low incomes, because they don’t actually do very much. Bank tellers are probably paid about what they should be; stock traders and hedge fund managers should be paid like bank tellers. (Or rather, there shouldn’t be stock traders and hedge funds as we know them; this is all pure waste. A really efficient financial system would be extremely simple, because finance actually is very simple—people who have money loan it to people who need it, and in return receive more money later. Everything else is just elaborations on that, and most of these elaborations are really designed to obscure, confuse, and manipulate.)

Oddly enough, the place where we do this best is the nation as a whole; the President of the United States would be astonishingly low-paid if we thought of him as a CEO. Only about $450,000 including expense accounts, for a “corporation” with revenue of nearly $3 trillion? (Suppose instead we gave the President 1% of tax revenue; that would be $30 billion per year. Think about how absurdly wealthy our leaders would be if we gave them stock options, and be glad that we don’t do that.)

But placing a hard cap at 2 or even 6 strikes me as unreasonable. Even during the 1950s the ratio was about 20 to 1, and it’s been rising ever since. I like Robert Reich’s proposal of a sliding scale of corporate taxes; I also wouldn’t mind a hard cap at a higher figure, like 50 or 100. Currently the average CEO makes about 350 times as much as the average employee, so even a cap of 100 would substantially reduce inequality.

A pay ratio cap could actually be a better alternative to a minimum wage, because it can adapt to market conditions. If the economy is really so bad that you must cut the pay of most of your workers, well, you’d better cut your own pay as well. If things are going well and you can afford to raise your own pay, your workers should get a share too. We never need to set some arbitrary amount as the minimum you are allowed to pay someone—but if you want to pay your employees that little, you won’t be paid very much yourself.

The biggest reason to support the Warren Rule, however, is awareness. Most people simply have no idea of how much CEOs are actually paid. When asked to estimate the ratio between CEO and employee pay, most people around the world underestimate by a full order of magnitude.

Here are some graphs from a sampling of First World countries. I used data from this paper in Perspectives on Psychological Science; the fact that it’s published in a psychology journal tells you a lot about the academic turf wars involved in cognitive economics.

The first shows the absolute amount of average worker pay (not adjusted for purchasing power) in each country. Notice how the US is actually near the bottom, despite having one of the strongest overall economies and not particularly high purchasing power:

worker_pay

The second shows the absolute amount of average CEO pay in each country; I probably don’t even need to mention how the US is completely out of proportion with every other country.

CEO_pay

And finally, the ratio of the two. One of these things is not like the other ones…

CEO_worker_ratio

So obviously the ratio in the US is far too high. But notice how even in Poland, the ratio is still 28 to 1. In order to drop to the 6 to 1 ratio that most people seem to think would be ideal, we would need to dramatically reform even the most equal nations in the world. Denmark and Norway should particularly think about whether they really believe that 2 to 1 is the proper ratio, since they are currently some of the most equal (not to mention happiest) nations in the world, but their current ratios are still 48 and 58 respectively. You can sustain a ratio that high and still have universal prosperity; every adult citizen in Norway is a millionaire in local currency. (Adjusting for purchasing power, it’s not quite as impressive; instead the guaranteed wealth of a Norwegian citizen is “only” about $100,000.)

Most of the world’s population simply has no grasp of how extreme economic inequality has become. Putting the numbers right there in people’s faces should help with this, though if the figures only need to be reported to investors that probably won’t make much difference. But hey, it’s a start.

How much should we save?

JDN 2457215 EDT 15:43.

One of the most basic questions in macroeconomics has oddly enough received very little attention: How much should we save? What is the optimal level of saving?

At the microeconomic level, how much you should save basically depends on what you think your income will be in the future. If you have more income now than you think you’ll have later, you should save now to spend later. If you have less income now than you think you’ll have later, you should spend now and dissave—save negatively, otherwise known as borrowing—and pay it back later. The life-cycle hypothesis says that people save when they are young in order to retire when they are old—in its strongest form, it says that we keep our level of spending constant across our lifetime at a value equal to our average income. The strongest form is utterly ridiculous and disproven by even the most basic empirical evidence, so usually the hypothesis is studied in a weaker form that basically just says that people save when they are young and spend when they are old—and even that runs into some serious problems.

The biggest problem, I think, is that the interest rate you receive on savings is always vastly less than the interest rate you pay on borrowing, which in turn is related to the fact that people are credit-constrained: they generally would like to borrow more than they actually can. It also has a lot to do with the fact that our financial system is an oligopoly; banks make more profits if they can pay savers less and charge borrowers more, and by colluding with each other they can control enough of the market that no major competitors can seriously undercut them. (There is some competition, however, particularly from credit unions—and if you compare these two credit card offers from University of Michigan Credit Union at 8.99%/12.99% and Bank of America at 12.99%/22.99% respectively, you can see the oligopoly in action as the tiny competitor charges you a much fairer price than the oligopoly beast. 9% means doubling in just under eight years, 13% means doubling in a little over five years, and 23% means doubling in three years.) Another very big problem with the life-cycle theory is that human beings are astonishingly bad at predicting the future, and thus our expectations about our future income can vary wildly from the actual future income we end up receiving. People who are wise enough to know that they do not know generally save more than they think they’ll need, which is called precautionary saving. Combine that with our limited capacity for self-control, and I’m honestly not sure the life-cycle hypothesis is doing any work for us at all.
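
The doubling times in that parenthetical follow from requiring (1 + r)^t = 2; a quick check, assuming annual compounding:

```python
import math

def doubling_time(annual_rate):
    """Years for a balance to double at a given annual interest rate."""
    return math.log(2) / math.log(1 + annual_rate)

for rate in (0.09, 0.13, 0.23):
    print(f"{rate:.0%}: {doubling_time(rate):.1f} years")
# 9%: 8.0 years, 13%: 5.7 years, 23%: 3.3 years
```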

But okay, let’s suppose we had a theory of optimal individual saving. That would still leave open a much larger question, namely optimal aggregate saving. The amount of saving that is best for each individual may not be best for society as a whole, and it becomes a difficult policy challenge to provide incentives to make people save the amount that is best for society.

Or it would be, if we had the faintest idea what the optimal amount of saving for society is. There’s a very simple rule-of-thumb that a lot of economists use, often called the golden rule (not to be confused with the actual Golden Rule, though I guess the idea is that a social optimum is a moral optimum), which is that we should save exactly the same amount as the share of capital in income: if capital receives one third of income, then one third of income should be saved to make more capital for next year. (This figure of one third has been called a “law”, but as with most “laws” in economics it’s really more like the Pirate Code; labor’s share of income varies across countries and years. I doubt you’ll be surprised to learn that it is falling around the world, meaning more income is going to capital owners and less is going to workers.)
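
The golden rule can be illustrated with a minimal Solow-style model (a sketch, not the full theory; the Cobb-Douglas production function and the 5% depreciation rate are standard textbook assumptions, not figures from this discussion):

```python
# Per-worker output is Y = K**alpha, with alpha = capital's share of income.
# In steady state, saving just offsets depreciation: s * k**alpha = d * k.
# Consumption is what's left over: c = (1 - s) * k**alpha.
def steady_state_consumption(s, alpha=1/3, depreciation=0.05):
    k = (s / depreciation) ** (1 / (1 - alpha))   # steady-state capital per worker
    return (1 - s) * k ** alpha

# Search saving rates from 1% to 99% for the consumption-maximizing one.
rates = [i / 100 for i in range(1, 100)]
best = max(rates, key=steady_state_consumption)
print(best)  # 0.33 -- consumption peaks when s equals capital's share
```

This is exactly the rule-of-thumb above: in this model, long-run consumption is maximized when the saving rate equals capital’s share of income.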

When you hear that, you should be thinking: “Wait. Saved to make more capital? You mean invested to make more capital.” And this is the great sleight of hand in the neoclassical theory of economic growth: Saving and investment are made to be the same by definition. It’s called the savings-investment identity. As I talked about in an earlier post, the model seems to be that there is only one kind of good in the world, and you either use it up or save it to make more.

But of course that’s not actually how the world works; there are different kinds of goods, and if people stop buying tennis shoes that doesn’t automatically lead to more factories built to make tennis shoes—indeed, quite the opposite. If people reduce their spending, the products they no longer buy will now accumulate on shelves and the businesses that make those products will start downsizing their production. If people increase their spending, the products they now buy will fly off the shelves and the businesses that make them will expand their production to keep up.

In order to make the savings-investment identity true by definition, the definition of investment has to be changed. Inventory accumulation, products building up on shelves, is counted as “investment” when of course it is nothing of the sort. Inventory accumulation is a bad sign for an economy; indeed the time when we see the most inventory accumulation is right at the beginning of a recession.

As a result of this bizarre definition of “investment” and its equation with saving, we get the famous Paradox of Thrift, which does indeed sound paradoxical in its usual formulation: “A global increase in marginal propensity to save can result in a reduction in aggregate saving.” But if you strip out the jargon, it makes a lot more sense: “If people suddenly stop spending money, companies will stop investing, and the economy will grind to a halt.” There’s still a bit of feeling of paradox from the fact that we tried to save more money and ended up with less money, but that isn’t too hard to understand once you consider that if everyone else stops spending, where are you going to get your money from?
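
The Paradox of Thrift falls out of even the simplest Keynesian model; here is a stripped-down sketch (fixed investment and a linear consumption function are simplifying assumptions, and the figures are illustrative):

```python
# Income Y solves Y = C + I with consumption C = (1 - s) * Y and
# investment I held fixed. Then s * Y = I, so Y = I / s: a higher
# saving rate lowers income, while total saving stays pinned at I.
def equilibrium(s, investment=100.0):
    y = investment / s          # Y = (1 - s) * Y + I  =>  s * Y = I
    return y, s * y             # (income, aggregate saving)

print(equilibrium(0.10))  # income 1000, saving 100
print(equilibrium(0.20))  # income 500, saving still 100 -- more thrift, same saving
```

In this toy economy doubling everyone’s propensity to save halves income without adding a cent of aggregate saving; in richer models where investment also responds to income, aggregate saving can actually fall.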

So what if something like this happens, we all try to save more and end up having no money? The government could print a bunch of money and give it to people to spend, and then we’d have money, right? Right. Exactly right, in fact. You now understand monetary policy better than most policymakers. Like a basic income, for many people it seems too simple to be true; but in a nutshell, that is Keynesian monetary policy. When spending falls and the economy slows down as a result, the government should respond by expanding the money supply so that people start spending again. In practice they usually expand the money supply by a really bizarre roundabout way, buying and selling bonds in open market operations in order to change the interest rate that banks charge each other for loans of reserves, the Fed funds rate, in the hopes that banks will change their actual lending interest rates and more people will be able to borrow, thus, ultimately, increasing the money supply (because, remember, banks don’t have the money they lend you—they create it).

We could actually just print some money and give it to people (or rather, change a bunch of numbers in an IRS database), but this is very unpopular, particularly among people like Ron Paul and other gold-bug Republicans who don’t understand how monetary policy works. So instead we try to obscure the printing of money behind a bizarre chain of activities, opening many more opportunities for failure: Chiefly, we can hit the zero lower bound where interest rates are zero and can’t go any lower (or can they?), or banks can be too stingy and decide not to lend, or people can be too risk-averse and decide not to borrow; and that’s not even to mention the redistribution of wealth that happens when all the money you print is given to banks. When that happens we turn to “unconventional monetary policy”, which basically just means that we get a little bit more honest about the fact that we’re printing money. (Even then you get articles like this one insisting that quantitative easing isn’t really printing money.)

I don’t know, maybe there’s actually some legitimate reason to do it this way—I do have to admit that when governments start openly printing money it often doesn’t end well. But really the question is why you’re printing money, whom you’re giving it to, and above all how much you are printing. Weimar Germany printed money to pay off odious war debts (because it totally makes sense to force a newly-established democratic government to pay the debts incurred by belligerent actions of the monarchy they replaced; surely one must repay one’s debts). Hungary printed money to pay for rebuilding after the devastation of World War 2. Zimbabwe printed money to pay for a war (I’m sensing a pattern here) and compensate for failed land reform policies. In all three cases the amount of money they printed was literally billions of times their original money supply. Yes, billions. They found their inflation cascading out of control and instead of stopping the printing, they printed even more. The United States has so far printed only about three times our original monetary base, still only about a third of our total money supply. (Monetary base is the part that the Federal Reserve controls; the rest is created by banks. Typically 90% of our money is not monetary base.) Moreover, we did it for the right reasons—in response to deflation and depression. That is why, as Matthew O’Brien of The Atlantic put it so well, the US can never be Weimar.

I was supposed to be talking about saving and investment; why am I talking about money supply? Because investment is driven by the money supply. It’s not driven by saving, it’s driven by lending.

Now, part of the underlying theory was that lending and saving are supposed to be tied together, with money lent coming out of money saved; this is true if you assume that things are in a nice tidy equilibrium. But we never are, and frankly I’m not sure we’d want to be. In order to reach that equilibrium, we’d either need to have full-reserve banking, or banks would have to otherwise have their lending constrained by insufficient reserves; either way, we’d need to have a constant money supply. Any dollar that could be lent, would have to be lent, and the whole debt market would have to be entirely constrained by the availability of savings. You wouldn’t get denied for a loan because your credit rating is too low; you’d get denied for a loan because the bank would literally not have enough money available to lend you. Banking would have to be perfectly competitive, so if one bank can’t do it, no bank can. Interest rates would have to precisely match the supply and demand of money in the same way that prices are supposed to precisely match the supply and demand of products (and I think we all know how well that works out). This is why it’s such a big problem that most macroeconomic models literally do not include a financial sector. They simply assume that the financial sector is operating at such perfect efficiency that money in equals money out always and everywhere.

So, recognizing that saving and investment are in fact not equal, we now have two separate questions: What is the optimal rate of saving, and what is the optimal rate of investment? For saving, I think the question is almost meaningless; individuals should save according to their future income (since they’re so bad at predicting it, we might want to encourage people to save extra, as in programs like Save More Tomorrow), but the aggregate level of saving isn’t an important question. The important question is the aggregate level of investment, and for that, I think there are two ways of looking at it.

The first way is to go back to that original neoclassical growth model and realize it makes a lot more sense when the s term we called “saving” actually is a funny way of writing “investment”; in that case, perhaps we should indeed invest the same proportion of income as the income that goes to capital. An interesting, if draconian, way to do so would be to actually require this—all and only capital income may be used for business investment. Labor income must be used for other things, and capital income can’t be used for anything else. The days of yachts bought on stock options would be over forever—though so would the days of striking it rich by putting your paycheck into a tech stock. Due to the extreme restrictions on individual freedom, I don’t think we should actually do such a thing; but it’s an interesting thought that might lead to an actual policy worth considering.

But a second way that might actually be better—since even though the model makes more sense this way, it still has a number of serious flaws—is to think about what we might actually do in order to increase or decrease investment, and then consider the costs and benefits of each of those policies. The simplest case to analyze is if the government invests directly—and since the most important investments like infrastructure, education, and basic research are usually done this way, it’s definitely a useful example. How is the government going to fund this investment in, say, a nuclear fusion project? They have four basic ways: Cut spending somewhere else, raise taxes, print money, or issue debt. If you cut spending, the question is whether the spending you cut is more or less important than the investment you’re making. If you raise taxes, the question is whether the harm done by the tax (which is generally of two flavors; first there’s the direct effect of taking someone’s money so they can’t use it now, and second there’s the distortions created in the market that may make it less efficient) is outweighed by the new project. If you print money or issue debt, it’s a subtler question, since you are no longer pulling from any individual person or project but rather from the economy as a whole. Actually, if your economy has unused capacity as in a depression, you aren’t pulling from anywhere—you’re simply adding new value basically from thin air, which is why deficit spending in depressions is such a good idea. (More precisely, you’re putting resources to use that were otherwise going to lie fallow—to go back to my earlier example, the tennis shoes will no longer rest on the shelves.) But if you do not have sufficient unused capacity, you will get crowding-out; new debt will raise interest rates and make other investments more expensive, while printing money will cause inflation and make everything more expensive. So you need to weigh that cost against the benefit of your new investment and decide whether it’s worth it.

This second way is of course a lot more complicated, a lot messier, a lot more controversial. It would be a lot easier if we could just say: “The target investment rate should be 33% of GDP.” But even then the question would remain as to which investments to fund, and which consumption to pull from. The abstraction of simply dividing the economy into “consumption” versus “investment” leaves out matters of the utmost importance; Paul Allen’s 400-foot yacht and food stamps for children are both “consumption”, but taxing the former to pay for the latter seems not only justified but outright obligatory. The Bridge to Nowhere and the Human Genome Project are both “investment”, but I think we all know which one had a higher return for human society. The neoclassical model basically assumes that the optimal choices for consumption and investment are decided automatically (automagically?) by the inscrutable churnings of the free market, but clearly that simply isn’t true.

In fact, it’s not always clear what exactly constitutes “consumption” versus “investment”, and the particulars of answering that question may distract us from answering the questions that actually matter. Is a refrigerator investment because it’s a machine you buy that sticks around and does useful things for you? Or is it consumption because consumers buy it and you use it for food? Is a car an investment because it’s vital to getting a job? Or is it consumption because you enjoy driving it? Someone could probably argue that the appreciation on Paul Allen’s yacht makes it an investment, for instance. Feeding children really is an investment, in their so-called “human capital” that will make them more productive for the rest of their lives. Part of the money that went to the Human Genome Project surely paid some graduate student who then spent part of his paycheck on a keg of beer, which would make it consumption. And so on. The important question really isn’t “is this consumption or investment?” but “Is this worth doing?” And thus, the best answer to the question, “How much should we save?” may be: “Who cares?”

What are we celebrating today?

JDN 2457208 EDT 13:35 (July 4, 2015)

As all my American readers will know (and unsurprisingly 79% of my reader trackbacks come from the United States), today is Independence Day. I’m curious how my British readers feel about this day (and the United Kingdom is my second-largest source of reader trackbacks); we are in a sense celebrating the fact that we’re no longer ruled by you.

Every nation has some notion of patriotism; in the simplest sense we could say that patriotism is simply nationalism, yet another reflection of our innate tribal nature. As Obama said when asked about American exceptionalism, the British also believe in British exceptionalism. If that is all we are dealing with, then there is no particular reason to celebrate; Saudi Arabia or China could celebrate just as well (and very likely does). Independence Day then becomes something parochial, something that is at best a reflection of local community and culture, and at worst a reaffirmation of nationalistic divisiveness.

But in fact I think we are celebrating something more than that. The United States of America is not just any country. It is not just a richer Brazil or a more militaristic United Kingdom. There really is something exceptional about the United States, and it really did begin on July 4, 1776.

In fact we should probably celebrate June 21, 1788 and December 15, 1791, the ratification of the Constitution and the Bill of Rights respectively. But neither of these would have been possible without that Declaration of Independence on July 4, 1776. (In fact, even that date isn’t as clear-cut as commonly imagined.)

What makes the United States unique?

From the dawn of civilization around 5000 BC up to the mid-18th century AD, there were basically two ways to found a nation. The most common was to grow the nation organically, formulating an ethnic identity over untold generations and then making up an appealing backstory later. The second way, not entirely exclusive of the first, was for a particular leader, usually a psychopathic king, to gather a superior army, conquer territory, and annex the people there, making them part of his nation whether they wanted it or not. Variations on these two themes were what happened in Rome, in Greece, in India, in China; they were done by the Sumerians, by the Egyptians, by the Aztecs, by the Maya. All the ancient civilizations have founding myths that are distorted so far from the real history that the real history has become basically unknowable. All the more recent powers were formed by warlords and usually ruled with iron fists.

The United States of America started with a war, make no mistake; and George Washington really was more a charismatic warlord than he ever was a competent statesman. But Washington was not a psychopath, and refused to rule with an iron fist. Instead he was instrumental in establishing a fundamentally new approach to the building of nations.

This is literally what happened—myths have grown around it, but it is itself documented history. Washington and his compatriots gathered a group of some of the most intelligent and wise individuals they could find, sat them down in a room, and tasked them with answering the basic question: “What is the best possible country?” They argued and debated, considering absolutely the most cutting-edge economics (The Wealth of Nations was released in 1776) and political philosophy (Thomas Paine’s Common Sense also came out in 1776). And then, when they had reached some kind of consensus on what the best sort of country would be—they created that country. They were conscious of building a new tradition, of being the founders of the first nation built as part of the Enlightenment. Previously nations were built from immemorial tradition or the whims of warlords—the United States of America was the first nation in the world that was built on principle.

It would not be the last; in fact, with a terrible interlude that we call Napoleon, France would soon become the second nation of the Enlightenment. A slower process of reform would eventually bring the United Kingdom itself to a similar state (though the UK is still a monarchy and has no codified constitution, only an ever-growing body of statutes, conventions, and common law). As the centuries passed and the United States became more and more powerful, its system of government attained global influence, with now almost every nation in the world nominally a “democracy” and about half actually recognizable as such. We now see it as unexceptional to have a democratically-elected government bound by a constitution, and even think of the United States as a relatively poor example compared to, say, Sweden or Norway (because #Scandinaviaisbetter), and this assessment is not entirely wrong; but it’s important to keep in mind that this was not always the case, and on July 4, 1776 the Founding Fathers truly were building something fundamentally new.

Of course, the Founding Fathers were not the demigods they are often imagined to be; Washington himself was a slaveholder, and not just any slaveholder, but in fact almost a billionaire in today’s terms—the wealthiest man in America by far and actually a rival to the King of England. Thomas Jefferson somehow managed to read Thomas Paine and write “all men are created equal” without thinking that this obligated him to release his own slaves. Benjamin Franklin was a misogynist and womanizer. James Madison’s concept of formalizing armed rebellion bordered on insanity (and ultimately resulted in our worst amendment, the Second). The system that they built disenfranchised women, enshrined the slavery of Black people into law, and consisted of dozens of awkward compromises (like the Senate) that would prove disastrous in the future. The Founding Fathers were human beings with human flaws and human hypocrisy, and they did many things wrong.

But they also did one thing very, very right: They created a new model for how nations should be built. In a very real sense they redefined what it means to be a nation. That is what we celebrate on Independence Day.

1200px-Flag_of_the_United_States.svg