Moral responsibility does not inherit across generations

JDN 2457548

In last week’s post I made a sharp distinction between believing in human progress and believing that colonialism was justified. To make this argument, I relied upon a moral assumption that seems to me perfectly obvious, and probably would to most ethicists as well: Moral responsibility does not inherit across generations, and people are only responsible for their individual actions.

But in fact this principle is not uncontroversial in many circles. When I read utterly nonsensical arguments like this one from the aptly-named Race Baitr saying that White people have no role to play in the liberation of Black people, apparently because our blood is somehow tainted by the crimes of our ancestors, it becomes apparent to me that this principle is not obvious to everyone, and is therefore worth defending. Indeed, many applications of the concept of “White Privilege” seem to ignore this principle, speaking as though racism is not something one does or participates in, but something that one simply is by being born with less melanin. Here’s a Salon interview specifically rejecting the proposition that racism is something one does:

For white people, their identities rest on the idea of racism as about good or bad people, about moral or immoral singular acts, and if we’re good, moral people we can’t be racist – we don’t engage in those acts. This is one of the most effective adaptations of racism over time—that we can think of racism as only something that individuals either are or are not “doing.”

If racism isn’t something one does, then what in the world is it? It’s all well and good to talk about systems and social institutions, but ultimately systems and social institutions are made of human behaviors. If you think most White people aren’t doing enough to combat racism (which sounds about right to me!), say that—don’t make some bizarre accusation that simply by existing we are inherently racist. (Also: We? I’m only 75% White, so am I only 75% inherently racist?) And please, stop redefining the word “racism” to mean something other than what everyone uses it to mean; “White people are snakes” is in fact a racist sentiment (and yes, one I’ve actually heard–indeed, here is the late Muhammad Ali comparing all White people to rattlesnakes, and Huffington Post fawning over him for it).

Racism is clearly more common and typically worse when performed by White people against Black people—but contrary to the claims of some social justice activists the White perpetrator and Black victim are not part of the definition of racism. Similarly, sexism is more common and more severe committed by men against women, but that doesn’t mean that “men are pigs” is not a sexist statement (and don’t tell me you haven’t heard that one). I don’t have a good word for bigotry by gay people against straight people (“heterophobia”?) but it clearly does happen on occasion, and similarly cannot be defined out of existence.

I wouldn’t care so much that you make this distinction between “racism” and “racial prejudice”, except that it’s not the normal usage of the word “racism” and therefore confuses people, and also this redefinition clearly is meant to serve a political purpose that is quite insidious, namely making excuses for the most extreme and hateful prejudice as long as it’s committed by people of the appropriate color. If “White people are snakes” is not racism, then the word has no meaning.

Not all discussions of “White Privilege” are like this, of course; this article from Occupy Wall Street actually does a fairly good job of making “White Privilege” into a sensible concept, albeit still not a terribly useful one in my opinion. I think the useful concept is oppression—the problem here is not how we are treating White people, but how we are treating everyone else. “What privilege gives you is the freedom to be who you are.” Shouldn’t everyone have that?

Almost all the so-called “benefits” or “perks” associated with “privilege” are actually forgone harms—they are not good things done to you, but bad things not done to you. “But benefitting from racist systems doesn’t mean that everything is magically easy for us. It just means that as hard as things are, they could always be worse.” No, that is not what the word “benefit” means. The word “benefit” means you would be worse off without it—and in most cases that simply isn’t true. Many White people obviously think that it is true—which is probably a big reason why so many White people fight so hard to defend racism: you’ve convinced them it is in their self-interest. But, with rare exceptions, it is not; most racial discrimination has literally zero long-run benefit. It’s just bad. Maybe if we helped people appreciate that more, they would be less resistant to fighting racism!

The only features of “privilege” that really make sense as benefits are those that occur in a state of competition—like being more likely to be hired for a job or get a loan—but one of the most important insights of economics is that competition is nonzero-sum, and fairer competition ultimately means a more efficient economy and thus more prosperity for everyone.

But okay, let’s set that aside and talk about this core question of what sort of responsibility we bear for the acts of our ancestors. Many White people clearly do feel deep shame about what their ancestors (or people the same color as their ancestors!) did hundreds of years ago. The psychological reactance to that shame may actually be what makes so many White people deny that racism even exists (or exists anymore)—though a majority of Americans of all races do believe that racism is still widespread.

We also apply some sense of moral responsibility to whole races quite frequently. We speak of a policy “benefiting White people” or “harming Black people” and quickly elide the distinction between harming specific people who are Black, and somehow harming “Black people” as a group. The former happens all the time—the latter is utterly nonsensical. Similarly, we speak of a “debt owed by White people to Black people” (which might actually make sense in the very narrow sense of economic reparations, because people do inherit money! They probably shouldn’t, that is literally feudalist, but in the existing system they in fact do), which makes about as much sense as a debt owed by tall people to short people. As Walter Michaels pointed out in The Trouble with Diversity (which I highly recommend), because of this bizarre sense of responsibility we are often in the habit of “apologizing for something you didn’t do to people to whom you didn’t do it (indeed to whom it wasn’t done)”. It is my responsibility to condemn colonialism (which I indeed do), and to fight to ensure that it never happens again; it is not my responsibility to apologize for colonialism.

This makes some sense in evolutionary terms; it’s part of the all-encompassing tribal paradigm, wherein human beings come to identify themselves with groups and treat those groups as the meaningful moral agents. It’s much easier to maintain the cohesion of a tribe against the slings and arrows (sometimes quite literal) of outrageous fortune if everyone believes that the tribe is one moral agent worthy of ultimate concern.

This concept of racial responsibility is clearly deeply ingrained in human minds, for it appears in some of our oldest texts, including the Bible: “You shall not bow down to them or worship them; for I, the Lord your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me,” (Exodus 20:5)

Why is inheritance of moral responsibility across generations nonsensical? Any number of reasons, take your pick. The economist in me leaps to “Ancestry cannot be incentivized.” There’s no point in holding people responsible for things they can’t control, because in doing so you will not in any way alter behavior. The Stanford Encyclopedia of Philosophy article on moral responsibility takes it as so obvious that people are only responsible for actions they themselves did that they don’t even bother to mention it as an assumption. (Their big question is how to reconcile moral responsibility with determinism, which turns out to be not all that difficult.)

An interesting counter-argument might be that descent can be incentivized: You could use rewards and punishments applied to future generations to motivate current actions. But this is actually one of the ways that incentives clearly depart from moral responsibilities; you could incentivize me to do something by threatening to murder 1,000 children in China if I don’t, but even if it was in fact something I ought to do, it wouldn’t be those children’s fault if I didn’t do it. They wouldn’t deserve punishment for my inaction—I might, and you certainly would for using such a cruel incentive.

Moreover, there’s a problem with dynamic consistency here: Once the action is already done, what’s the sense in carrying out the punishment? This is why a moral theory of punishment can’t merely be based on deterrence—the fact that you could deter a bad action by some other less-bad action doesn’t make the less-bad action necessarily a deserved punishment, particularly if it is applied to someone who wasn’t responsible for the action you sought to deter. In any case, people aren’t thinking that we should threaten to punish future generations if people are racist today; they are feeling guilty that their ancestors were racist generations ago. That doesn’t make any sense even on this deterrence theory.

There’s another problem with trying to inherit moral responsibility: People have lots of ancestors. Some of my ancestors were most likely rapists and murderers; most were ordinary folk; a few may have been great heroes—and this is true of just about anyone anywhere. We all have bad ancestors, great ancestors, and, mostly, pretty good ancestors. 75% of my ancestors are European, but 25% are Native American; so if I am to apologize for colonialism, should I be apologizing to myself? (Only 75%, perhaps?) If you go back enough generations, literally everyone is related—and you may only have to go back about 4,000 years. That’s historical time.
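The arithmetic behind that “everyone is related” claim is easy to check. Here is a rough sketch (the 25-year generation length and the past-population figure are my own illustrative assumptions, not figures from any genealogy dataset): the number of ancestor *slots* in a family tree doubles each generation, so it quickly outgrows any plausible past world population—which is only possible if lineages overlap and the same people appear in everyone’s tree.

```python
# Illustrative back-of-the-envelope arithmetic, not real demography:
# ancestor slots double each generation (2 parents, 4 grandparents, ...).

GENERATION_YEARS = 25          # rough length of one human generation (assumption)
PAST_POPULATION = 50_000_000   # conservative guess at the ancient world population (assumption)

def ancestor_slots(years_back: int) -> int:
    """Nominal number of ancestors at a given depth in the family tree."""
    generations = years_back // GENERATION_YEARS
    return 2 ** generations

for years in (500, 1000, 2000, 4000):
    slots = ancestor_slots(years)
    print(f"{years:>5} years back: 2^{years // GENERATION_YEARS} = {slots:,} ancestor slots")
```

Already around 1,000 years back the tree demands 2^40 (over a trillion) slots—far more people than were alive—so the same individuals must fill many slots at once, for every person alive today. That shared overlap is why common ancestry of all living humans plausibly reaches back only a few thousand years.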

Of course, we wouldn’t be different colors in the first place if there weren’t some differences in ancestry, but there is a huge amount of gene flow between different human populations. The US is a particularly mixed place; because most Black Americans are quite genetically mixed, it is about as likely that any randomly-selected Black person in the US is descended from a slaveowner as it is that any randomly-selected White person is. (Especially since there were a large number of Black slaveowners in Africa and even some in the United States.) What moral significance does this have? Basically none! That’s the whole point; your ancestors don’t define who you are.

If these facts do have any moral significance, it is to undermine the sense most people seem to have that there are well-defined groups called “races” that exist in reality, to which culture responds. No; races were created by culture. I’ve said this before, but it bears repeating: The “races” we hold most dear in the US, White and Black, are in fact the most nonsensical. “Asian” and “Native American” at least almost make sense as categories, though Chippewa are more closely related to Ainu than Ainu are to Papuans. “Latino” isn’t utterly incoherent, though it includes as much Aztec as it does Iberian. But “White” is a club one can join or be kicked out of, while “Black” lumps together the majority of all human genetic diversity.

Sex is a real thing—while there are intermediate cases of course, broadly speaking humans, like most metazoa, are sexually dimorphic and come in “male” and “female” varieties. So sexism took a real phenomenon and applied cultural dynamics to it; but that’s not what happened with racism. Insofar as there was a real phenomenon, it was extremely superficial—quite literally skin deep. In that respect, race is more like class—a categorization that is itself the result of social institutions.

To be clear: Does the fact that we don’t inherit moral responsibility from our ancestors absolve us from doing anything to rectify the inequities of racism? Absolutely not. Not only is there plenty of present discrimination going on we should be fighting, there are also inherited inequities due to the way that assets and skills are passed on from one generation to the next. If my grandfather stole a painting from your grandfather and both our grandfathers are dead but I am now hanging that painting in my den, I don’t owe you an apology—but I damn well owe you a painting.

The further we get from the past discrimination, the harder it becomes to make reparations, but all hope is not lost; we still have the option of trying to reset everyone’s status to the same at birth and maintaining equality of opportunity from there. Of course we’ll never achieve total equality of opportunity—but we can get much closer than we presently are.

We could start by establishing an extremely high estate tax—on the order of 99%—because no one has a right to be born rich. Free public education is another good way of equalizing the distribution of “human capital” that would otherwise be concentrated in particular families, and expanding it to higher education would make it that much better. It even makes sense, at least in the short run, to establish some affirmative action policies that are race-conscious and sex-conscious, because there are so many biases in the opposite direction that sometimes you must fight bias with bias.

Actually what I think we should do in hiring, for example, is assemble a pool of applicants based on demographic quotas to ensure a representative sample, and then anonymize the applications and assess them on merit. This way we ensure representation and reduce bias, but never end up hiring anyone other than the most qualified candidate. But nowhere should we think that this is something that White men “owe” to women or Black people; it’s something that people should do in order to correct the biases that otherwise exist in our society. Similarly with regard to sexism: Women exhibit just as much unconscious bias against other women as men do. This is not “men” hurting “women”—this is a set of unconscious biases and social structures, found almost everywhere, that systematically discriminate against people because they are women.
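That two-stage procedure can be sketched in a few lines. This is a minimal toy illustration, not a real HR system: the applicant records with `id`, `group`, and `score` fields, and the quota scheme, are all hypothetical names of my own invention.

```python
# Toy sketch of two-stage hiring: (1) build a demographically representative
# pool via quotas, (2) anonymize and rank purely on a merit score.
import random

def build_pool(applicants, quotas, rng=random):
    """Stage 1: sample applicants per demographic group according to quotas."""
    pool = []
    for group, count in quotas.items():
        members = [a for a in applicants if a["group"] == group]
        pool.extend(rng.sample(members, min(count, len(members))))
    return pool

def anonymize(applicant):
    """Stage 2a: keep only merit-relevant fields; drop name and group."""
    return {"id": applicant["id"], "score": applicant["score"]}

def select_best(applicants, quotas, n_hires=1):
    """Stage 2b: rank the anonymized pool by score and take the top n."""
    pool = [anonymize(a) for a in build_pool(applicants, quotas)]
    ranked = sorted(pool, key=lambda a: a["score"], reverse=True)
    return [a["id"] for a in ranked[:n_hires]]
```

The design point is that representation is enforced only at the pool-building stage; by the time anyone is comparing candidates, the demographic information is gone, so the final choice is blind.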

Perhaps by understanding that this is not about which “team” you’re on (which tribe you’re in), but what policy we should have, we can finally make these biases disappear, or at least fade until they are negligible.

Is Equal Unfair?

JDN 2457492

Much as you are officially a professional when people start paying you for what you do, I think you are officially a book reviewer when people start sending you books for free asking you to review them for publicity. This has now happened to me, with the book Equal Is Unfair by Don Watkins and Yaron Brook. This post is longer than usual, but in order to be fair to the book’s virtues as well as its flaws, I felt a need to explain quite thoroughly.

It’s a very frustrating book, because at times I find myself agreeing quite strongly with the first part of a paragraph, and then reaching the end of that same paragraph and wanting to press my forehead firmly into the desk in front of me. It makes some really good points, and for the most part uses economic statistics reasonably accurately—but then it rides gleefully down a slippery-slope fallacy like a waterslide. But I guess that’s what I should have expected; it’s by leaders of the Ayn Rand Institute, and my experience with reading Ayn Rand is similar to that of Randall Munroe (I’m mainly referring to the alt-text, which uses slightly foul language).

As I kept being jostled between “That’s a very good point.”, “Hmm, that’s an interesting perspective.”, and “How can anyone as educated as you believe anything that stupid!?” I realized that there are actually three books here, interleaved:

1. A decent economics text on the downsides of taxation and regulation and the great success of technology and capitalism at raising the standard of living in the United States, which could have been written by just about any mainstream centrist neoclassical economist—I’d say it reads most like John Taylor or Ken Galbraith. My reactions to this book were things like “That’s a very good point.”, and “Sure, but any economist would agree with that.”

2. An interesting philosophical treatise on the meanings of “equality” and “opportunity” and their application to normative economic policy, as well as about the limitations of statistical data in making political and ethical judgments. It could have been written by Robert Nozick (actually I think much of it was based on Robert Nozick). Some of the arguments are convincing, others are not, and many of the conclusions are taken too far; but it’s well within the space of reasonable philosophical arguments. My reactions to this book were things like “Hmm, that’s an interesting perspective.” and “Your argument is valid, but I think I reject the second premise.”

3. A delusional rant of the sort that could only be penned by a True Believer in the One True Gospel of Ayn Rand, about how poor people are lazy moochers, billionaires are world-changing geniuses whose superior talent and great generosity we should all bow down before, and anyone who would dare suggest that perhaps Steve Jobs got lucky or owes something to the rest of society is an authoritarian Communist who hates all achievement and wants to destroy the American Dream. It was this book that gave me reactions like “How can anyone as educated as you believe anything that stupid!?” and “You clearly have no idea what poverty is like, do you?” and “[expletive] you, you narcissistic ingrate!”

Given that the two co-authors are Executive Director and a fellow of the Ayn Rand Institute, I suppose I should really be pleasantly surprised that books 1 and 2 exist, rather than disappointed by book 3.

As evidence of each of the three books interleaved, I offer the following quotations:

Book 1:

“All else being equal, taxes discourage production and prosperity.” (p. 30)

No reasonable economist would disagree. The key is all else being equal—it rarely is.

“For most of human history, our most pressing problem was getting enough food. Now food is abundant and affordable.” (p.84)

Correct! And worth pointing out, especially to anyone who thinks that economic progress is an illusion or we should go back to pre-industrial farming practices—and such people do exist.

“Wealth creation is first and foremost knowledge creation. And this is why you can add to the list of people who have created the modern world, great thinkers: people such as Euclid, Aristotle, Galileo, Newton, Darwin, Einstein, and a relative handful of others.” (p.90, emph. in orig.)

Absolutely right, though as I’ll get to below there’s something rather notable about that list.

“To be sure, there is competition in an economy, but it’s not a zero-sum game in which some have to lose so that others can win—not in the big picture.” (p. 97)

Yes! Precisely! I wish I could explain to more people—on both the Left and the Right, by the way—that economics is nonzero-sum, and that in the long run competitive markets improve the standard of living of society as a whole, not just the people who win that competition.

Book 2:

“Even opportunities that may come to us without effort on our part—affluent parents, valuable personal connections, a good education—require enormous effort to capitalize on.” (p. 66)

This is sometimes true, but clearly doesn’t apply to things like the Waltons’ inherited billions, for which all they had to do was be born in the right family and not waste their money too extravagantly.

“But life is not a game, and achieving equality of initial chances means forcing people to play by different rules.” (p. 79)

This is an interesting point, and one that I think we should acknowledge; we must treat those born rich differently from those born poor, because their unequal starting positions mean that treating them equally from this point forward would lead to a wildly unfair outcome. If my grandfather stole your grandfather’s wealth and passed it on to me, the fair thing to do is not to treat you and I equally from this point forward—it’s to force me to return what was stolen, insofar as that is possible. And even if we suppose that my grandfather earned far vaster wealth than yours, I think a more limited redistribution remains justified simply to put you and I on a level playing field and ensure fair competition and economic efficiency.

“The key error in this argument is that it totally mischaracterizes what it means to earn something. For the egalitarians, the results of our actions don’t merely have to be under our control, but entirely of our own making. […] But there is nothing like that in reality, and so what the egalitarians are ultimately doing is wiping out the very possibility of earning something.” (p. 193)

The way they use “egalitarian” as an insult is a bit grating, but there clearly are some actual egalitarian philosophers whose views are this extreme, such as G.A. Cohen, James Kwak and Peter Singer. I strongly agree that we need to make a principled distinction between gains that are earned and gains that are unearned, such that both sets are nonempty. Yet while Cohen would seem to make “earned” an empty set, Watkins and Brook very nearly make “unearned” empty—you get what you get, and you deserve it. The only exceptions they seem willing to make are outright theft and, what they consider equivalent, taxation. They have no concept of exploitation, excessive market power, or arbitrage—and while they claim they oppose fraud, they seem to think that only government is capable of it.

Book 3:

“What about government handouts (usually referred to as ‘transfer payments’)?” (p. 23)

Because Social Security is totally just a handout—it’s not like you pay into it your whole life or anything.

“No one cares whether the person who fixes his car or performs his brain surgery or applies for a job at his company is male or female, Indian or Pakistani—he wants to know whether they are competent.” (p.61)

Yes they do. We have direct experimental evidence of this.

“The notion that ‘spending drives the economy’ and that rich people spend less than others isn’t a view seriously entertained by economists,[…]” (p. 110)

The New Synthesis is Keynesian! This is what Milton Friedman was talking about when he said, “We’re all Keynesians now.”

“Because mobility statistics don’t distinguish between those who don’t rise and those who can’t, they are useless when it comes to assessing how healthy mobility is.” (p. 119)

So, if Black people have much lower odds of achieving high incomes even controlling for education, we can’t assume that they are disadvantaged or discriminated against; maybe Black people are just lazy or stupid? Is that what you’re saying here? (I think it might be.)

“Payroll taxes alone amount to 15.3 percent of your income; money that is taken from you and handed out to the elderly. This means that you have to spend more than a month and a half each year working without pay in order to fund other people’s retirement and medical care.” (p. 127)

That is not even close to how taxes work. Taxes are not “taken” from money you’d otherwise get—taxation changes prices and the monetary system depends upon taxation.

“People are poor, in the end, because they have not created enough wealth to make themselves prosperous.” (p. 144)

This sentence was so awful that when I showed it to my boyfriend, he assumed it must be out of context. When I showed him the context, he started swearing the most I’ve heard him swear in a long time, because the context was even worse than it sounds. Yes, this book is literally arguing that the reason people are poor is that they’re just too lazy and stupid to work their way out of poverty.

“No society has fully implemented the egalitarian doctrine, but one came as close as any society can come: Cambodia’s Khmer Rouge.” (p. 207)

Because obviously the problem with the Khmer Rouge was their capital gains taxes. They were just too darn fair, and if they’d been more selfish they would never have committed genocide. (The authors literally appear to believe this.)

 

So there are my extensive quotations, to show that this really is what the book is saying. Now, a little more summary of the good, the bad, and the ugly.

One good thing is that the authors really do seem to understand fairly well the arguments of their opponents. They quote their opponents extensively, and only a few times did it feel meaningfully out of context. Their use of economic statistics is also fairly good, though occasionally they present misleading numbers or compare two obviously incomparable measures.

One of the core points in Equal is Unfair is quite weak: They argue against the “shared-pie assumption”, which is that we create wealth as a society, and thus the rest of society is owed some portion of the fruits of our efforts. They maintain that this is fundamentally authoritarian and immoral; essentially they assert a totalizing false dichotomy between absolute laissez-faire and Stalinist Communism.

But the “shared-pie assumption” is not false; we do create wealth as a society. Human cognition is fundamentally social cognition; they said themselves that we depend upon the discoveries of people like Newton and Einstein for our way of life. But it should be obvious we can never pay Einstein back; so instead we must pay forward, to help some child born in the ghetto to rise to become the next Einstein. I agree that we must build a society where opportunity is maximized—and that means, necessarily, redistributing wealth from its current state of absurd and immoral inequality.

I do however agree with another core point, which is that most discussions of inequality rely upon a tacit assumption which is false: They call it the “fixed-pie assumption”.

When you talk about the share of income going to different groups in a population, you have to be careful about the fact that there is not a fixed amount of wealth in a society to be distributed—not a “fixed pie” that we are cutting up and giving around. If it were really true that the rising income share of the top 1% were necessary to maximize the absolute benefits of the bottom 99%, we probably should tolerate that, because the alternative means harming everyone. (In arguing this they quote John Rawls several times with disapprobation, which is baffling because that is exactly what Rawls says.)

Even if that’s true, there is still a case to be made against inequality, because too much wealth in the hands of a few people will give them more power—and unequal power can be dangerous even if wealth is earned, exchanges are uncoerced, and the distribution is optimally efficient. (Watkins and Brook dismiss this contention out of hand, essentially defining beneficent exploitation out of existence.)

Of course, in the real world, there’s no reason to think that the ballooning income share of the top 0.01% in the US is actually associated with improved standard of living for everyone else.

I’ve shown these graphs before, but they bear repeating:

Income shares for the top 1% and especially the top 0.1% and 0.01% have risen dramatically in the last 30 years.

[Figure: top_income_shares_adjusted]

But real median income has only slightly increased during the same period.

[Figure: US_median_household_income]

Thus, mean income has risen much faster than median income.

[Figure: median_mean]

While theoretically it could be that the nature of our productivity technology has shifted in such a way that it suddenly became necessary to heap more and more wealth on the top 1% in order to continue increasing national output, there is actually very little evidence of this. On the contrary, as Joseph Stiglitz (Nobel Laureate, you may recall) has documented, the leading cause of our rising inequality appears to be a dramatic increase in rent-seeking, which is to say corruption, exploitation, and monopoly power. (This probably has something to do with why I found in my master’s thesis that rising top income shares correlate quite strongly with rising levels of corruption.)

Now to be fair, the authors of Equal is Unfair do say that they are opposed to rent-seeking, and would like to see it removed. But they have a very odd concept of what rent-seeking entails, and it basically seems to amount to saying that whatever the government does is rent-seeking, while whatever corporations do is fair free-market competition. On page 38 they warn us not to assume that government is good and corporations are bad—but actually it’s much more that they assume that government is bad and corporations are good. (The mainstream opinion actually appears to be that both are bad, and we should replace them both with… er… something.)

They do make some other good points I wish more leftists would appreciate, such as the point that while colonialism and imperialism can damage countries that suffer them and make them poorer, they generally do not benefit the countries that commit them and make them richer. The notion that Europe is rich because of imperialism is simply wrong; Europe is rich because of education, technology, and good governance. Indeed, the greatest surge in Europe’s economic growth occurred as the period of imperialism was winding down—when Europeans realized that they would be better off trying to actually invent and produce things rather than stealing them from others.

Likewise, they rightfully demolish notions of primitivism and anti-globalization that I often see bouncing around from folks like Naomi Klein. But these are book 1 messages; any economist would agree that primitivism is a terrible idea, and very few are opposed to globalization per se.

The end of Equal is Unfair gives a five-part plan for unleashing opportunity in America:

1. Abolish all forms of corporate welfare so that no business can gain unfair advantage.

2. Abolish government barriers to work so that every individual can enjoy the dignity of earned success.

3. Phase out the welfare state so that America can once again become the land of self-reliance.

4. Unleash the power of innovation in education by ending the government monopoly on schooling.

5. Liberate innovators from the regulatory shackles that are strangling them.

Number 1 is hard to disagree with, except that they include literally everything the government does that benefits a corporation as corporate welfare, including things like subsidies for solar power that the world desperately needs (or millions of people will die).

Number 2 sounds really great until you realize that they are including all labor standards, environmental standards, and safety regulations as “barriers to work”; because it’s such a barrier for children not to be able to work in a factory where their arms can get cut off, and such a barrier that we’ve eliminated lead from gasoline emissions and thereby cut crime in half.

Number 3 could mean a lot of things; if it means replacing the existing system with a basic income I’m all for it. But in fact it seems to mean removing all social insurance whatsoever. Indeed, Watkins and Brook do not appear to believe in social insurance at all. The whole concept of “less fortunate”, “there but for the grace of God go I” seems to elude them. They have no sense that being fortunate in their own lives gives them some duty to help others who were not; they feel no pang of moral obligation whatsoever to help anyone else who needs help. Indeed, they literally mock the idea that human beings are “all in this together”.

They also don’t even seem to believe in public goods, or somehow imagine that rational self-interest could lead people to pay for public goods without any enforcement whatsoever despite the overwhelming incentives to free-ride. (What if you allow people to freely enter a contract that provides such enforcement mechanisms? Oh, you mean like social democracy?)
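The free-rider problem they’re waving away is worth spelling out. Here is a minimal sketch of the textbook linear public-goods game—a standard model, not something from the book, with purely illustrative numbers—showing why rational self-interest alone fails to fund public goods:

```python
# Minimal linear public-goods game: N players each hold an endowment; money
# contributed to the common pot is multiplied by a factor m (1 < m < N)
# and split equally among ALL players -- including non-contributors.
# All numbers here are illustrative assumptions, not from the book.

def payoff(my_contribution, others_total, endowment=10.0, n=10, m=1.5):
    pot = my_contribution + others_total
    return (endowment - my_contribution) + m * pot / n

# Whatever the others do, each dollar I contribute returns only m/n = $0.15
# to me personally, so keeping my money is the dominant strategy...
others = 9 * 10.0  # suppose everyone else contributes fully
assert payoff(0, others) > payoff(10.0, others)

# ...even though universal contribution beats universal free-riding:
assert payoff(10.0, others) > payoff(0, 0)
```

This is exactly what an enforcement mechanism (i.e., taxation) fixes: it makes the everyone-contributes outcome binding instead of leaving it to a dominant-strategy collapse.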

Regarding number 4, I’d first like to point out that private schools exist. Moreover, so do charter schools in most states, and in states without charter schools there are usually vouchers parents can use to offset the cost of private schools. So while the government has a monopoly in the market share sense—the vast majority of education in the US is public—it does not actually appear to be enforcing a monopoly in the anti-competitive sense—you can go to private school, it’s just too expensive or not as good. Why, it’s almost as if education is a public good or a natural monopoly.

Number 5 also sounds all right, until you see that they actually seem most opposed to antitrust laws of all things. Why would antitrust laws be the ones that bother you? They are designed to increase competition and lower barriers, and largely succeed in doing so (when they are actually enforced, which is rare of late). If you really want to end barriers to innovation and government-granted monopolies, why is it not patents that draw your ire?

They also seem to have trouble with the difference between handicapping and redistribution—they seem to think that the only way to make outcomes more equal is to bring the top down and leave the bottom where it is, and they often use ridiculous examples like “Should we ban reading to your children, because some people don’t?” But of course no serious egalitarian would suggest such a thing. Education isn’t fungible, so it can’t be redistributed. You can take it away (and sometimes you can add it, e.g. public education, which Watkins and Brook adamantly oppose); but you can’t simply transfer it from one person to another. Money, on the other hand, is by definition fungible—that’s kind of what makes it money, really. So when we take a dollar from a rich person and give it to a poor person, the poor person now has an extra dollar. We haven’t simply brought the top down; we’ve also raised the bottom up. (In practice it’s a bit more complicated than that, as redistribution can introduce inefficiencies. So realistically maybe we take $1.00 and give $0.90; that’s still worth doing in a lot of cases.)
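That last parenthetical can be made concrete. Assuming logarithmic utility of wealth—a standard but contestable way to model diminishing marginal utility, and my assumption, not the book’s—even a “leaky” transfer that loses 10% in transit still raises total utility:

```python
import math

# Assume logarithmic utility of wealth (a common, though contestable, model
# of diminishing marginal utility). The wealth figures are illustrative.
def utility(wealth):
    return math.log(wealth)

rich, poor = 1_000_000.0, 10_000.0
taken, delivered = 1.00, 0.90  # 10% lost to administrative inefficiency

loss_to_rich = utility(rich) - utility(rich - taken)
gain_to_poor = utility(poor + delivered) - utility(poor)

# The poor person's gain (~9e-5 utils) dwarfs the rich person's loss
# (~1e-6 utils), so the leaky transfer still raises total utility.
assert gain_to_poor > loss_to_rich
```

This is Okun’s classic “leaky bucket” argument: with sufficiently unequal starting points, a transfer can lose a substantial fraction in transit and still be worth making.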

If attributes like intelligence were fungible, I think we’d have a very serious moral question on our hands! It is not obvious to me that the world is better off with its current range of intelligence, compared to a world where geniuses had their excess IQ somehow sucked out and transferred to mentally disabled people. Or if you think that the marginal utility of intelligence is increasing, then maybe we should redistribute IQ upward—take it from some mentally disabled children who aren’t really using it for much and add it onto some geniuses to make them super-geniuses. Of course, the whole notion is ridiculous; you can’t do that. But whereas Watkins and Brook seem to think it’s obvious that we shouldn’t even if we could, I don’t find that obvious at all. You didn’t earn your IQ (for the most part); you don’t seem to deserve it in any deep sense; so why should you get to keep it, if the world would be much better off if you didn’t? Why should other people barely be able to feed themselves so I can be good at calculus? At best, maybe I’m free to keep it—but given the stakes, I’m not even sure that would be justifiable. Peter Singer is right about one thing: You’re not free to let a child drown in a lake just to keep your suit from getting wet.

Ultimately, if you really want to understand what’s going on with Equal is Unfair, consider the following sentence, which I find deeply revealing as to the true objectives of these Objectivists:

“Today, meanwhile, although we have far more liberty than our feudal ancestors, there are countless ways in which the government restricts our freedom to produce and trade including minimum wage laws, rent control, occupational licensing laws, tariffs, union shop laws, antitrust laws, government monopolies such as those granted to the post office and education system, subsidies for industries such as agriculture or wind and solar power, eminent domain laws, wealth redistribution via the welfare state, and the progressive income tax.” (p. 114)

Some of these are things no serious economist would disagree with: We should stop subsidizing agriculture and tariffs should be reduced or removed. Many occupational licenses are clearly unnecessary (though this has a very small impact on inequality in real terms—licensing may stop you from becoming a barber, but it’s not what stops you from becoming a CEO). Others are legitimately controversial: Economists are currently quite divided over whether minimum wage is beneficial or harmful (I lean toward beneficial, but I’d prefer a better solution), as well as how to properly regulate unions so that they give workers much-needed bargaining power without giving unions too much power. But a couple of these are totally backward, exactly contrary to what any mainstream economist would say: Antitrust laws need to be enforced more, not eliminated (don’t take it from me; take it from that well-known Marxist rag The Economist). Subsidies for wind and solar power make the economy more efficient, not less—and suspiciously Watkins and Brook omitted the competing subsidies that actually are harmful, namely those to coal and oil.

Moreover, I think it’s very revealing that they included the word progressive when talking about taxation. In what sense does making a tax progressive undermine our freedom? None, so far as I can tell. The presence of a tax undermines freedom—your freedom to spend that money some other way. Making the tax higher undermines freedom—it’s more money you lose control over. But making the tax progressive increases freedom for some and decreases it for others—and since rich people have lower marginal utility of wealth and are generally more free in substantive terms, it really makes the most sense that, holding revenue constant, making a tax progressive makes people on the whole more free.

But there’s one thing that making taxes progressive does do: It benefits poor people and hurts rich people. And thus the true agenda of Equal is Unfair becomes clear: They aren’t actually interested in maximizing freedom—if they were, they wouldn’t be complaining about occupational licensing and progressive taxation, they’d be outraged by forced labor, mass incarceration, indefinite detention, and the very real loss of substantive freedom that comes from being born into poverty. They wouldn’t want less redistribution, they’d want more efficient and transparent redistribution—a shift from the current hodgepodge welfare state to a basic income system. They would be less concerned about the “freedom” to pollute the air and water with impunity, and more concerned about the freedom to breathe clean air and drink clean water.

No, what they really believe is that rich people are better. They believe that billionaires attained their status not by luck or circumstance, not by corruption or ruthlessness, but by the sheer force of their genius. (This is essentially the entire subject of chapter 6, “The Money-Makers and the Money-Appropriators”, and it’s nauseating.) They describe our financial industry as “fundamentally moral and productive” (p. 156)—the industry that you may recall stole millions of homes and laundered money for terrorists. They assert that no sane person could believe that Steve Wozniak got lucky—I maintain no sane person could think otherwise. Yes, he was brilliant; yes, he invented good things. But he had to be at the right place at the right time, in a society that supported and educated him and provided him with customers and employees. You didn’t build that.

Indeed, perhaps most baffling is that they themselves seem to admit that the really great innovators, such as Newton, Einstein, and Darwin, were scientists—but scientists are almost never billionaires. Even the common counterexample, Thomas Edison, is largely false; he mainly plagiarized from Nikola Tesla and appropriated the ideas of his employees. Newton, Einstein and Darwin were all at least upper-middle class (as was Tesla, by the way—he did not die poor as is sometimes portrayed), but they weren’t spectacularly mind-bogglingly rich the way that Steve Jobs and Andrew Carnegie were and Bill Gates and Jeff Bezos are.

Some people clearly have more talent than others, and some people clearly work harder than others, and some people clearly produce more than others. But I just can’t wrap my head around the idea that a single man can work so hard, be so talented, produce so much that he can deserve to have as much wealth as a nation of millions of people produces in a year. Yet, Mark Zuckerberg has that much wealth. Remind me again what he did? Did he cure a disease that was killing millions? Did he colonize another planet? Did he discover a fundamental law of nature? Oh yes, he made a piece of software that’s particularly convenient for talking to your friends. Clearly that is worth the GDP of Latvia. Not that silly Darwin fellow, who only uncovered the fundamental laws of life itself.

In the grand tradition of reducing complex systems to simple numerical values, I give book 1 a 7/10, book 2 a 5/10, and book 3 a 2/10. Equal is Unfair is about 25% book 1, 25% book 2, and 50% book 3, so altogether their final score is, drumroll please: 4/10. Maybe read the first half, I guess? That’s where most of the good stuff is.

We all know lobbying is corrupt. What can we do about it?

JDN 2457439

It’s so well-known as to almost seem cliche: Our political lobbying system is clearly corrupt.

Juan Cole, a historian and public intellectual from the University of Michigan, even went so far as to say that the United States is the most corrupt country in the world. He clearly went too far, or else left out a word; the US may well be the most corrupt country in the First World, though most rankings say Italy. In any case, the US is definitely not the most corrupt country in the whole world; no, that title goes to Somalia and/or North Korea.

Still, lobbying in the US is clearly a major source of corruption. Indeed, economists who study corruption often have trouble coming up with a sound definition of “corruption” that doesn’t end up including lobbying, despite the fact that lobbying is quite legal. Bribery means giving politicians money to get them to do things for you. Lobbying means giving politicians money and asking them to do things. In the letter of the law, that makes all the difference.

One thing that does make a difference is that lobbyists are required to register who they are and record their campaign contributions (unless, of course, they launder—I mean reallocate—them through a Super PAC). Many corporate lobbyists claim that it’s not that they go around trying to find politicians to influence, but rather politicians who call them up demanding money.

One of the biggest problems with lobbying is what’s called the revolving door: politicians are often re-hired as lobbyists, or lobbyists as politicians, based on the personal connections formed in the lobbying process—or possibly actual deals between lobbying companies over legislation, though if done explicitly that would be illegal. Almost 400 lobbyists working right now used to be legislators; almost 3,000 more worked as Congressional staff. Many lobbyists will do a tour as a Congressional staffer as a resume-builder, like an internship.

Studies have shown that lobbying does have an impact on policy—in terms of carving out tax loopholes it offers a huge return on investment.

Our current systems to disincentivize the revolving door are not working. While there is reason to think that establishing a “cooling-off period” of a few years could make a difference, under current policy we already have some cooling-off periods and they are clearly not enough.

So, now that we know the problem, let’s start talking about solutions.

Option 1: Ban campaign contributions

One possibility would be to eliminate campaign contributions entirely, which we could do by establishing a law that nobody can ever give money or in-kind favors to politicians ever under any circumstances. It would still be legal to meet with politicians and talk to them about issues, but if you take a Senator out for dinner we’d have to require that the Senator pay for their own food and transportation, lest wining-and-dining still be an effective means of manipulation. Then all elections would have to be completely publicly financed. This is a radical solution, but it would almost certainly work. MoveOn has a petition you can sign if you like this solution, and there’s a site called public-campaign-financing.org that will tell you how it could realistically be implemented (beware, their webmaster appears to be a time traveler from the 1990s who thinks that automatic music and tiled backgrounds constitute good web design).

There are a couple of problems with this solution, however:

First, it would be declared Unconstitutional by the Supreme Court. Under the (literally Orwellian) dicta that “corporations are people” and “money is speech” established in Citizens United vs. FEC, any restrictions on donating money to politicians constitute restrictions on free speech, and are therefore subject to strict scrutiny.

Second, there is actually a real restriction on freedom here, not because money is speech, but because money facilitates speech. Since eliminating all campaign donations would require total public financing of elections, we would need some way of deciding which candidates to finance publicly, because obviously you can’t give the same amount of money to everyone in the country or even everyone who decides to run. It simply doesn’t make sense to provide the same campaign financing for Hillary Clinton that you would for Vermin Supreme. But then, however this mechanism works, it could readily be manipulated to give even more advantages to the two major parties (not that they appear to need any more). If you’re fine with having exactly two parties to choose from, then providing funding for their, say, top 5 candidates in each primary, and then for their nominee in the general election, would work. But I for one would like to have more options than that, and that means devising some mechanism for funding third parties that have a realistic shot (like Ralph Nader or Ross Perot) but not those who don’t (like the aforementioned Vermin Supreme)—but at the same time we need to make sure that it’s not biased or self-fulfilling.

So let’s suppose we don’t eliminate campaign contributions completely. What else could we do that would curb corruption?

Option 2: Donation caps and “Democracy Credits”

I particularly like this proposal, self-titled the American Anti-Corruption Act (beware self-titled laws: USA PATRIOT ACT, anyone?), which would require full transparency—yes, even you, Super PACs—and place reasonable caps on donations so that large amounts of funds must be raised from large numbers of people rather than from a handful of people with a huge amount of money. It also includes an interesting proposal called “Democracy Credits” (again, the titles are a bit heavy-handed), which are basically an independent monetary system, used only to finance elections, and doled out exactly equally to all US citizens to spend on the candidates they like. The credits would then be exchangeable for real money, but only by the candidates themselves. This is a great idea, but sadly I doubt anyone in our political system is likely to go for it.

Actually, I would like to see these “Democracy Credits” used as votes—whoever gets the most credits wins the election, automatically. This is not quite as good as range voting, because it is not cloneproof or independent of irrelevant alternatives (briefly put, if you run two candidates that are exactly alike, their votes get split and they both lose, even if everyone likes them; and similarly, if you add a new candidate that doesn’t win you can still affect who does end up winning. Range voting is basically the only system that doesn’t have these problems, aside from a few really weird “voting” systems like “random ballot”). But still, it would be much better than our current plurality “first past the post” system, and would give third-party candidates a much fairer shot at winning elections. Indeed, it is very similar to CTT monetary voting, which is provably optimal in certain (idealized) circumstances. Of course, that’s even more of a pipe dream.
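The vote-splitting problem is easy to see in a toy example (the candidate names and score values here are hypothetical, chosen only for illustration): two nearly identical “clone” candidates preferred by 60% of voters lose under plurality, but win under range voting:

```python
# Toy election illustrating vote-splitting: 60% of voters prefer two
# near-identical "clone" candidates A1 and A2; 40% prefer B.
# Candidate names and all vote/score numbers are hypothetical.

# Plurality: each voter names exactly one candidate; the A-voters split evenly.
plurality = {"A1": 30, "A2": 30, "B": 40}
assert max(plurality, key=plurality.get) == "B"  # B wins despite 60% opposition

# Range voting: each voter scores EVERY candidate (0-10), so clones don't split.
# 60 A-voters score: A1=10, A2=10, B=0.  40 B-voters score: A1=2, A2=2, B=10.
scores = {
    "A1": 60 * 10 + 40 * 2,   # 680
    "A2": 60 * 10 + 40 * 2,   # 680
    "B":  60 * 0 + 40 * 10,   # 400
}
assert max(scores, key=scores.get) in ("A1", "A2")  # an A-clone wins
```

Because adding a clone can never lower the scores voters give to the original candidate, range voting is cloneproof in exactly the way plurality is not.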

The donation caps are realistic, however; we used to have them, in fact, before Citizens United vs. FEC. Perhaps future Supreme Court decisions can overturn it and restore some semblance of balance in our campaign finance system.

Option 3: Treat campaign contributions as a conflict of interest

Jack Abramoff, a former lobbyist who was actually so corrupt he got convicted for it, has somewhat ironically made another proposal for how to reduce corruption in the lobbying system. I suppose he would know, though I must wonder what incentives he has to actually do this properly (and corrupt people are precisely the sort of people whose monetary incentives you should definitely be examining).

Abramoff would essentially use Option 1, but applied only to individuals and corporations with direct interests in the laws being made. As Gawker put it, “If you get money or perks from elected officials, […] you shouldn’t be permitted to give them so much as one dollar.” The way it avoids requiring total public financing is by saying that if you don’t get perks, you can still donate.

His plan would also extend the “cooling off” idea to its logical limit—once you work for Congress, you can never work for a lobbying organization for the rest of your life, and vice versa. That seems like a lot of commitment to ask of twentysomething Congressional interns (“If you take this job, unemployed graduate, you can never ever take that other job!”), but I suppose if it works it might be worth it.

He also wants to establish term limits for Congress, which seems pretty reasonable to me. If we’re going to have term limits for the Executive branch, why not the other branches as well? They could be longer, but if term limits are necessary at all we should use them consistently.

Abramoff also says we should repeal the 17th Amendment, because apparently making our Senators less representative of the population will somehow advance democracy. Best I can figure, he’s coming from an aristocratic attitude here, this notion that we should let “better people” make the important decisions if we want better decisions. And this sounds seductive, given how many really bad decisions people make in this world. But of course which people were the better people was precisely the question representative democracy was intended to answer. At least if Senators are chosen by state legislatures there’s a sort of meta-representation going on, which is obviously better than no representation at all; but still, adding layers of non-democracy by definition cannot make a system more democratic.

But Abramoff really goes off the rails when he proposes making it a conflict of interest to legislate about your own state. “Pork-barrel spending”, as it is known, or earmarks as they are formally called, are actually a tiny portion of our budget (about 0.1% of our GDP) and really not worth worrying about. Sure, sometimes a Senator gets a bridge built that only three people will ever use, but it’s not that much money in the scheme of things, and there’s no harm in keeping our construction workers at work. The much bigger problem would be if legislators could no longer represent their own constituents in any way, thus defeating the basic purpose of having a representative legislature. (There is a thorny question about how much a Senator is responsible for their own state versus the country as a whole; but clearly their responsibility to their own state is not zero.)

Even aside from that ridiculous last part, there’s a serious problem with this idea of “no contributions from anyone who gets perks”: What constitutes a “perk”? Is a subsidy for solar power a perk for solar companies, or a smart environmental policy (can it be both?)? Does paying for road construction “affect” auto manufacturers in the relevant sense? What about policies that harm particular corporations? Since a carbon tax would hurt oil company profits, are oil companies allowed to lobby against it on the ground that it is the opposite of a “perk”?

Voting for representatives who will do things you want is kind of the point of representative democracy. (No, New York Post, it is not “pandering” to support women’s rights and interests—women are the majority of our population. If there is one group of people that our government should represent, it is women.) Taken to its logical extreme, this policy would mean that once the government ever truly acts in the public interest, all campaign contributions are henceforth forever banned. I presume that’s not actually what Abramoff intends, but he offers no clear guidelines on how we would distinguish a special interest to be excluded from donations as opposed to a legitimate public interest that creates no such exclusion. Could we flesh this out in the actual legislative process? Is this something courts would decide?

In all, I think the best reform right now is to put the cap back on campaign contributions. It’s simple to do, and we had it before and it seemed to work (mostly). We could also combine that with longer cooling-off periods, perhaps three or five years instead of only one, and potentially even term limits for Congress. These reforms would certainly not eliminate corruption in the lobbying system, but they would most likely reduce it substantially, without stepping on fundamental freedoms.

Of course I’d really like to see those “Democracy Credits”; but that’s clearly not going to happen.

Medicaid expansion and the human cost of political polarization

JDN 2457422

As of this writing, there are still 22 of our 50 US states that have refused to expand Medicaid under the Affordable Care Act. Several other states (including Michigan) expanded Medicaid, but on an intentionally slowed timetable. The way the law was written, people below the Medicaid cutoff are not eligible for subsidized private insurance (because it was assumed they’d be on Medicaid!), so the refused expansions have left almost 3 million people without health insurance.

Why? Would expanding Medicaid on the original timetable be too arduous to accomplish? If so, explain why 13 states managed to do it on time.

Would expanding Medicaid be expensive, and put a strain on state budgets? No, the federal government will pay 90% of the cost until 2020. Some states claim that even the 10% is unbearable, but when you figure in the reduced strain on emergency rooms and public health, expanding Medicaid would most likely save state money, especially with the 90% federal funding.

To really understand why so many states are digging in their heels, I’ve made you a little table. It includes three pieces of information about each state: The first column is whether it accepted Medicaid immediately (“Yes”), accepted it with delays or conditions, or hasn’t officially accepted it yet but is negotiating to do so (“Maybe”), or refused it completely (“No”). The second column is the political party of the state governor. The third column is the majority political party of the state legislatures (“D” for Democrat, “R” for Republican, “I” for Independent, or “M” for mixed if one house has one majority and the other house has the other).

State Medicaid? Governor Legislature
Alabama No R R
Alaska Maybe I R
Arizona Yes R R
Arkansas Maybe R R
California Yes D D
Colorado Yes D M
Connecticut Yes D D
Delaware Yes D D
Florida No R R
Georgia No R R
Hawaii Yes D D
Idaho No R R
Illinois Yes R D
Indiana Maybe R R
Iowa Maybe R M
Kansas No R R
Kentucky Yes R M
Louisiana Maybe D R
Maine No R M
Maryland Yes R D
Massachusetts Yes R D
Michigan Maybe R R
Minnesota No D M
Mississippi No R R
Missouri No D M
Montana Maybe D M
Nebraska No R R
Nevada Yes R R
New Hampshire Maybe D R
New Jersey Yes R D
New Mexico Yes R M
New York Yes D D
North Carolina No R R
North Dakota Yes R R
Ohio Yes R R
Oklahoma No R R
Oregon Yes D D
Pennsylvania Maybe D R
Rhode Island Yes D D
South Carolina No R R
South Dakota Maybe R R
Tennessee No R R
Texas No R R
Utah No R R
Vermont Yes D D
Virginia Maybe D R
Washington Yes D D
West Virginia Yes D R
Wisconsin No R R
Wyoming Maybe R R

I have taken the liberty of some color-coding.

The states highlighted in red are states that refused the Medicaid expansion which have Republican governors and Republican majorities in both legislatures; that’s Alabama, Florida, Georgia, Idaho, Kansas, Mississippi, Nebraska, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, Utah, and Wisconsin.

The states highlighted in purple are states that refused the Medicaid expansion which have mixed party representation between Democrats and Republicans; that’s Maine, Minnesota, and Missouri.

And I would have highlighted in blue the states that refused the Medicaid expansion which have Democrat governors and Democrat majorities in both legislatures—but there aren’t any.

There were Republican-led states which said “Yes” (Arizona, Nevada, North Dakota, and Ohio). There were Republican-led states which said “Maybe” (Arkansas, Indiana, Michigan, South Dakota, and Wyoming).

Mixed states were across the board, some saying “Yes” (Colorado, Illinois, Kentucky, Maryland, Massachusetts, New Jersey, New Mexico, and West Virginia), some saying “Maybe” (Alaska, Iowa, Louisiana, Montana, New Hampshire, Pennsylvania, and Virginia), and a few saying “No” (Maine, Minnesota, and Missouri).

But every single Democrat-led state said “Yes”: California, Connecticut, Delaware, Hawaii, New York, Oregon, Rhode Island, Vermont, and Washington. There aren’t even any Democrat-led states that said “Maybe”.

Perhaps it is simplest to summarize this in another table. Each row is a party configuration (“Democrat”, “Republican”, or “mixed”); each column is a Medicaid decision (“Yes”, “Maybe”, or “No”); each cell counts the states that fit that description:

Yes Maybe No
Democrat 9 0 0
Republican 4 5 14
Mixed 8 7 3

Shall I do a chi-square test? Sure, why not? A chi-square test of independence produces a p-value of about 0.0001. This is not a coincidence. Being a Republican-led state is strongly correlated with rejecting the Medicaid expansion.
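For the curious, here is a minimal stdlib-only sketch of that test (a Pearson chi-square test of independence; exact p-values will vary slightly depending on the software and any corrections applied):

```python
import math

# Pearson chi-square test of independence on the table above.
# Rows: Democrat, Republican, Mixed; columns: Yes, Maybe, No.
observed = [[9, 0, 0],
            [4, 5, 14],
            [8, 7, 3]]

n = sum(map(sum, observed))
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]

# Sum of (observed - expected)^2 / expected over all nine cells,
# where expected[i][j] = row_total[i] * col_total[j] / n.
chi2 = sum((observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
           / (row_totals[i] * col_totals[j] / n)
           for i in range(3) for j in range(3))

# With df = (3-1)*(3-1) = 4, the chi-square survival function has a
# closed form: P(X > x) = exp(-x/2) * (1 + x/2).
p_value = math.exp(-chi2 / 2) * (1 + chi2 / 2)

print(round(chi2, 2), p_value)  # chi2 comes out near 24; p on the order of 1e-4
```

With four degrees of freedom, a chi-square statistic this large is far beyond any conventional significance threshold.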

Indeed, because the elected officials were there first, I can say that there is Granger causality from being a Republican-led state to rejecting the Medicaid expansion. Based on the fact that mixed states were much less likely to reject Medicaid than Republican states, I could even estimate a dose-response curve on how having more Republicans makes you more likely to reject Medicaid.

Republicans did this, is basically what I’m getting at here.

Obamacare itself was legitimately controversial (though the Republicans never quite seemed to grasp that they needed a counterproposal for their argument to make sense), but once it was passed, accepting the Medicaid expansion should have been a no-brainer. The federal government is giving you money in order to give healthcare to poor people. It will not be expensive for your state budget; in fact it will probably save you money in the long run. It will help thousands or millions of your constituents. Its impact on the federal budget is negligible.

But no, 14 Republican-led states couldn’t let themselves get caught implementing a Democrat’s policy, especially if it would actually work. If it failed catastrophically, they could say “See? We told you so.” But if it succeeded, they’d have to admit that their opponents sometimes have good ideas. (You know, just like the Democrats did, when they copied most of Mitt Romney’s healthcare system.)

As a result of their stubbornness, almost 3 million Americans don’t have healthcare. Some of those people will die as a result—economists estimate about 7,000 people. Hundreds of thousands more will suffer. All needlessly.

When 3,000 people are killed in a terrorist attack, Republicans clamor to kill millions in response with carpet bombing and nuclear weapons.

But when 7,000 people will die without healthcare, Republicans say we can’t afford it.

How Reagan ruined America

JDN 2457408

Or maybe it’s Ford?

The title is intentionally hyperbolic; despite the best efforts of Reagan and his ilk, America does yet survive. Indeed, as Obama aptly pointed out in his recent State of the Union, we appear to be on an upward trajectory once more. And as you’ll see in a moment, many of the turning points actually seem to be Gerald Ford, though it was under Reagan that the trends really gained steam.

But I think it’s quite remarkable just how much damage Reaganomics did to the economy and society of the United States. It’s actually a turning point in all sorts of different economic policy measures; things were going well from the 1940s to the 1970s, and then suddenly in the 1980s they take a turn for the worse.

The clearest example is inequality. From the World Top Incomes Database, here’s the graph I featured on my Patreon page of income shares in the United States:

top_income_shares_pretty.png

Inequality was really bad during the Roaring Twenties (no surprise to anyone who has read The Great Gatsby), then after the turmoil of the Great Depression, the New Deal, and World War 2, inequality was reduced to a much lower level.

During this period, what I like to call the Golden Age of American Capitalism:

Instead of almost 50% in the 1920s, the top 10% now received about 33%.

Instead of over 20% in the 1920s, the top 1% now received about 10%.

Instead of almost 5% in the 1920s, the top 0.01% now received about 1%.

This pattern continued to hold, remarkably stable, until 1980. Then, it completely unraveled. Income shares of the top brackets rose, and continued to rise, ever since (fluctuating with the stock market of course). Now, we’re basically back right where we were in the 1920s; the top 10% gets 50%, the top 1% gets 20%, and the top 0.01% gets 4%.

Not coincidentally, we see the same pattern if we look at the ratio of CEO pay to average worker pay, as shown here in a graph from the Economic Policy Institute:

Snapshot_CEO_pay_main

Up until 1980, the ratio in pay between CEOs and their average workers was steady around 20 to 1. From that point forward, it began to rise—and rise, and rise. It continued to rise under every Presidential administration, and actually hit its peak in 2000, under Bill Clinton, at an astonishing 411 to 1 ratio. In the 2000s it fell to about 250 to 1 (hurray?), and has slightly declined since then to about 230 to 1.

By either measure, we can see a clear turning point in US inequality—it was low and stable, until Reagan came along, when it began to explode.

Part of this no doubt is the sudden shift in tax rates. The top marginal tax rates on income were over 90% from WW2 to the 1960s; then JFK reduced them to 70%, which is probably close to the revenue-maximizing rate. There they stayed, until—you know the refrain—along came Reagan, and by the end of his administration he had dropped the top marginal rate to 28%. It then was brought back up to about 35%, where it has basically remained, sometimes getting as high as 40%.
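The claim that 70% is close to the revenue-maximizing rate can be sanity-checked with the standard Diamond–Saez formula, τ* = 1/(1 + a·e), where a is the Pareto parameter of the top of the income distribution and e is the elasticity of taxable income. The parameter values below are commonly cited estimates, my assumption rather than figures from this post:

```python
# Revenue-maximizing top marginal tax rate via the Diamond-Saez formula:
#   tau* = 1 / (1 + a * e)
# a: Pareto parameter of the top income distribution (~1.5 for the US)
# e: elasticity of taxable income (~0.25 in mainstream estimates)
# Both parameter values are rough, commonly cited estimates.

def revenue_maximizing_rate(pareto_a, elasticity):
    return 1.0 / (1.0 + pareto_a * elasticity)

tau = revenue_maximizing_rate(pareto_a=1.5, elasticity=0.25)
print(f"{tau:.0%}")  # roughly 73%, close to the old 70% top rate
```

Under these (debatable) parameter choices, the formula lands remarkably close to the pre-Reagan 70% rate, and nowhere near Reagan’s 28%.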

US_income_tax_rates

Another striking example is the ratio between worker productivity and wages. The Economic Policy Institute has a very detailed analysis of this, but I think their graph by itself is quite striking:

[Figure: US productivity vs. wages (Economic Policy Institute)]

Starting around the 1970s, and then rapidly accelerating from the 1980s onward, we see a decoupling of productivity from wages. Productivity has continued to rise at more or less the same rate, but wages flatten out completely, even falling for part of the period.

For those who still somehow think Republicans are fiscally conservative, take a look at this graph of the US national debt:

[Figure: US federal debt as a percentage of GDP]

We were in a comfortable 30-40% of GDP range, actually slowly decreasing—until Reagan. We got back on track to reduce the debt during the mid-1990s—under Bill Clinton—and then went back to raising it again once George W. Bush got in office. It ballooned as a result of the Great Recession, and for the past few years Obama has been trying to bring it back under control.

Of course, national debt is not nearly as bad as most people imagine it to be. If Reagan had only raised the national debt in order to stop unemployment, that would have been fine—but he did not.

Unemployment had never risen above 10% since World War 2 (and in fact fell below 4% in the 1960s!), and yet it hit almost 11% shortly after Reagan took office:

[Figure: US unemployment rate]

Let’s look at that graph a little closer. Right now the Federal Reserve uses 5% as their target unemployment rate, the supposed “natural rate of unemployment” (a notion many economists rely on despite there being almost no empirical support for it whatsoever). If I draw red lines at 5% unemployment and at 1981, the year Reagan took office, look at what happens.

[Figure: US unemployment rate, annotated at 5% and at 1981]

For most of the period before 1981, we spent most of our time below the 5% line, jumping above it during recessions and then coming back down; for most of the period after 1981, we spent most of our time above the 5% line, even during economic booms.

I’ve drawn another line (green) where the most natural break appears, and it actually seems to be the Ford administration; so maybe I can’t just blame Reagan. But something happened in the last quarter of the 20th century that dramatically changed the shape of unemployment in America.

Inflation is at least ambiguous; it was pretty bad in the 1940s and 1950s, then settled down in the 1960s for a while before picking up in the 1970s, and actually hit its worst just before Reagan took office:

[Figure: US inflation rate]

Then there’s GDP growth.

[Figure: US real GDP growth rate]

After World War 2, our growth rate was quite volatile, rising as high as 8% (!) in some years, but sometimes falling to zero or slightly negative. Rates over 6% were common during booms. On average GDP growth was quite good, around 4% per year.

In 1981—the year Reagan took office—we had the worst growth rate in postwar history, an awful -1.9%. Coming out of that recession we had very high growth of about 7%, but then settled into the new normal: more stable growth rates, yes, but also much lower. Never again did our growth rate exceed 4%, and on average it was more like 2%. In 2009, Reagan’s record recession was finally broken by the Great Recession, a drop of almost 3% in a single year.
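The difference between 4% and 2% average growth may sound small, but compounding makes it enormous. As a rough illustration using the round averages above (not actual GDP data), here is what each rate implies over 35 years:

```python
# Total growth factor produced by compounding at a given average annual rate.
# The 4% and 2% figures are the rough pre- and post-1980 averages cited above.
def total_growth(rate, years):
    return (1 + rate) ** years

fast = total_growth(0.04, 35)
slow = total_growth(0.02, 35)
print(f"4% for 35 years: {fast:.2f}x")  # ~3.95x
print(f"2% for 35 years: {slow:.2f}x")  # ~2.00x
print(f"Ratio: {fast / slow:.2f}")      # ~1.97
```

On this back-of-the-envelope calculation, an economy that kept growing at the old 4% average would today be nearly twice the size of one that grew at 2%.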

GDP per capita tells a similar story, of volatile but fast growth before Reagan followed by stable but slow growth thereafter:

[Figure: US GDP per capita]

Of course, it wouldn’t be fair to blame Reagan for all of this. A lot of things have happened in the late 20th century, after all. In particular, the OPEC oil crisis is probably responsible for many of these 1970s shocks, and when Nixon moved us at last off the Bretton Woods gold standard, it was probably the right decision, but done at a moment of crisis instead of as the result of careful planning.

Also, while the classical gold standard was terrible, the Bretton Woods system actually had some things to recommend it. It required strict capital controls and currency exchange regulations, but the period of highest economic growth and lowest inequality in the United States—the period I’m calling the Golden Age of American Capitalism—was in fact the same period as the Bretton Woods system.

Some of these trends started before Reagan, and all of them continued in his absence—many of them worsening as much or more under Clinton. Reagan took office during a terrible recession, and either contributed to the recovery or at least did not prevent it.

The President only has very limited control over the economy in any case; he can set a policy agenda, but Congress must actually implement it, and policy can take years to show its true effects. Yet given Reagan’s agenda of cutting top tax rates, crushing unions, and generally giving large corporations whatever they want, I think he bears at least some responsibility for turning our economy in this very bad direction.

What are we celebrating today?

JDN 2457208 EDT 13:35 (July 4, 2015)

As all my American readers will know (and unsurprisingly 79% of my reader trackbacks come from the United States), today is Independence Day. I’m curious how my British readers feel about this day (and the United Kingdom is my second-largest source of reader trackbacks); we are in a sense celebrating the fact that we’re no longer ruled by you.

Every nation has some notion of patriotism; in the simplest sense, patriotism is just nationalism, yet another reflection of our innate tribal nature. As Obama said when asked about American exceptionalism, the British also believe in British exceptionalism. If that is all we are dealing with, then there is no particular reason to celebrate; Saudi Arabia or China could celebrate just as well (and very likely do). Independence Day then becomes something parochial, something that is at best a reflection of local community and culture, and at worst a reaffirmation of nationalistic divisiveness.

But in fact I think we are celebrating something more than that. The United States of America is not just any country. It is not just a richer Brazil or a more militaristic United Kingdom. There really is something exceptional about the United States, and it really did begin on July 4, 1776.

In fact we should probably celebrate June 21, 1788 and December 15, 1791, the ratification of the Constitution and the Bill of Rights respectively. But neither of these would have been possible without that Declaration of Independence on July 4, 1776. (In fact, even that date isn’t as clear-cut as commonly imagined.)

What makes the United States unique?

From the dawn of civilization around 5000 BC up to the mid-18th century AD, there were basically two ways to found a nation. The most common was to grow the nation organically, forming an ethnic identity over untold generations and then making up an appealing backstory later. The second way, not entirely exclusive of the first, was for a particular leader, usually a psychopathic king, to gather a superior army, conquer territory, and annex the people there, making them part of his nation whether they wanted it or not. Variations on these two themes were what happened in Rome, in Greece, in India, in China; they were done by the Sumerians, by the Egyptians, by the Aztecs, by the Maya. All the ancient civilizations have founding myths that are distorted so far from the real history that the real history has become basically unknowable. All the more recent powers were formed by warlords and usually ruled with iron fists.

The United States of America started with a war, make no mistake; and George Washington really was more a charismatic warlord than he ever was a competent statesman. But Washington was not a psychopath, and refused to rule with an iron fist. Instead he was instrumental in establishing a fundamentally new approach to the building of nations.

This is literally what happened: myths have grown around it, but it is itself documented history. Washington and his compatriots gathered a group of some of the most intelligent and wise individuals they could find, sat them down in a room, and tasked them with answering the basic question: “What is the best possible country?” They argued and debated, considering the most cutting-edge economics (The Wealth of Nations was published in 1776) and political philosophy (Thomas Paine’s Common Sense also came out in 1776). And then, when they had reached some kind of consensus on what the best sort of country would be, they created that country. They were conscious of building a new tradition, of being the founders of the first nation built as part of the Enlightenment. Previously nations were built from immemorial tradition or the whims of warlords; the United States of America was the first nation in the world that was built on principle.

It would not be the last; in fact, with a terrible interlude that we call Napoleon, France would soon become the second nation of the Enlightenment. A slower process of reform would eventually bring the United Kingdom itself to a similar state (though the UK is still a monarchy and has no formal constitution, only an ever-growing mountain of common law). As the centuries passed and the United States became more and more powerful, its system of government attained global influence, with now almost every nation in the world nominally a “democracy” and about half actually recognizable as such. We now see it as unexceptional to have a democratically-elected government bound by a constitution, and even think of the United States as a relatively poor example compared to, say, Sweden or Norway (because #Scandinaviaisbetter), and this assessment is not entirely wrong; but it’s important to keep in mind that this was not always the case, and on July 4, 1776 the Founding Fathers truly were building something fundamentally new.

Of course, the Founding Fathers were not the demigods they are often imagined to be; Washington himself was a slaveholder, and not just any slaveholder, but in fact almost a billionaire in today’s terms—the wealthiest man in America by far and actually a rival to the King of England. Thomas Jefferson somehow managed to read Thomas Paine and write “all men are created equal” without thinking that this obligated him to release his own slaves. Benjamin Franklin was a misogynist and womanizer. James Madison’s concept of formalizing armed rebellion bordered on insanity (and ultimately resulted in our worst amendment, the Second). The system that they built disenfranchised women, enshrined the slavery of Black people into law, and consisted of dozens of awkward compromises (like the Senate) that would prove disastrous in the future. The Founding Fathers were human beings with human flaws and human hypocrisy, and they did many things wrong.

But they also did one thing very, very right: They created a new model for how nations should be built. In a very real sense they redefined what it means to be a nation. That is what we celebrate on Independence Day.

[Figure: Flag of the United States]