When are we going to get serious about climate change?

Oct 8, JDN 2458035

Those two storms weren’t simply natural phenomena. We had a hand in creating them.

The EPA doesn’t want to talk about the connection, and we don’t have enough statistical power to really be certain, but there is by now an overwhelming scientific consensus that global climate change will increase hurricane intensity. The only real question left is whether it is already doing so.

The good news is that global carbon emissions are no longer rising. They have been essentially static for the last few years. The bad news is that this is almost certainly too little, too late.

The US is not on track to hit our 2025 emission target; we will probably exceed it by at least 20%.

But the real problem is that the targets themselves are much too high. Most countries have pledged to drop emissions only about 8-10% below their 1990 levels.

Even with the progress we have made, we are on track to exceed the global carbon budget needed to keep warming below 2°C by the year 2040. We have been reducing emission intensity by about 0.8% per year—we need to be reducing it by at least 3% per year, and preferably faster. Highly-developed nations should be switching to nuclear energy as quickly as possible; an equitable global emission target requires us to reduce our emissions by 80% by 2050.

At the current rate of improvement, we will overshoot the 2°C warming target and very likely the 3°C target as well.
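
To see why the current pace falls short, here is a minimal back-of-the-envelope sketch in Python. The starting emissions level, the global GDP growth rate, and the rough carbon budget in the comments are illustrative assumptions, not official projections:

```python
# Back-of-the-envelope sketch: cumulative CO2 emissions when annual
# emissions track GDP growth times emission intensity. All inputs are
# illustrative assumptions: ~36 Gt CO2/year today, 3% annual global GDP
# growth, and a remaining 2°C budget of very roughly 1,000 Gt.

def cumulative_emissions(start_gt, intensity_decline, gdp_growth, years):
    """Sum annual emissions over `years`, compounding growth and decline."""
    total, annual = 0.0, start_gt
    for _ in range(years):
        annual *= (1 + gdp_growth) * (1 - intensity_decline)
        total += annual
    return total

for decline in (0.008, 0.03):  # current ~0.8%/yr vs. needed ~3%/yr
    gt = cumulative_emissions(36, decline, gdp_growth=0.03, years=25)
    print(f"{decline:.1%}/yr intensity decline: ~{gt:,.0f} Gt CO2 over 25 years")
```

Under these assumptions, the current pace blows through the budget (roughly 1,200 Gt emitted), while a 3% annual decline in intensity at least holds emissions flat.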

Why aren’t we doing better? There is of course the Tragedy of the Commons to consider: Each individual country acting in its own self-interest will continue to pollute more, as this is the cheapest and easiest way to maintain industrial development. But then if all countries do so, the result is a disaster for us all.
But this explanation is too simple. We have managed to achieve some international cooperation on this issue. The Kyoto Protocol has worked: emissions among Kyoto member nations have been reduced by more than 20% below 1990 levels, far more than originally promised. The EU in particular has taken a leadership role in reducing emissions, and has a serious shot at hitting its target of a 40% reduction by 2030.

That is a truly astonishing scale of cooperation; the EU has a population of over 500 million people and spans 28 nations. It would seem like doing that should get us halfway to cooperating across all nations and all the world’s people.

But there is a vital difference between the EU and the world as a whole: The tribal paradigm. Europeans certainly have their differences: The UK and France still don’t really get along, everyone’s bitter with Germany about that whole Hitler business, and as the acronym PIIGS emphasizes, the peripheral countries have never quite felt as European as the core Schengen members. But despite all this, there has been a basic sense of trans-national (meta-national?) unity among Europeans for a long time.
For one thing, today Europeans see each other as the same race. That wasn’t always the case. In Medieval times, ethnic categories were as fine as “Cornish” and “Liverpudlian”. (To be fair, there do still exist a handful of Cornish nationalists.) Starting around the 18th century, Europeans began to unite under the heading of “White people”, a classification that took on particular significance during the trans-Atlantic slave trade. But even in the 19th century, “Irish” and “Sicilian” were seen as racial categories. It wasn’t until the 20th century that Europeans really began to think of themselves as one “kind of people”, and not coincidentally it was at the end of the 20th century that the European Union finally took hold.

There is another region that has had a similar sense of unification: Latin America. Again, there are conflicts: There are a lot of nasty stereotypes about Puerto Ricans among Cubans and vice-versa. But Latinos, by and large, think of each other as the same “kind of people”, distinct from both Europeans and the indigenous population of the Americas.

I don’t think it is coincidental that the lowest carbon emission intensity (carbon emissions / GDP PPP) in the world is in Latin America, followed closely by Europe.
And if you had to name right now the most ethnically divided region in the world, what would you say? The Middle East, of course. And sure enough, they have the worst carbon emission intensity. (Of course, oil is an obvious confounding variable here, likely contributing to both.)
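
For concreteness, here is that intensity metric as a minimal sketch; the input figures are hypothetical placeholders, not data for any real country:

```python
# Carbon emission intensity as defined above: emissions divided by GDP
# at purchasing power parity. The example values are hypothetical.

def carbon_intensity(emissions_mt_co2, gdp_ppp_billion_usd):
    """Return kg of CO2 emitted per PPP dollar of output."""
    kg_co2 = emissions_mt_co2 * 1e9       # megatonnes -> kg
    dollars = gdp_ppp_billion_usd * 1e9   # billions -> dollars
    return kg_co2 / dollars

print(carbon_intensity(500, 2000))  # 0.25 kg CO2 per PPP dollar
```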

Indeed, the countries with the lowest ethnic fractionalization ratings tend to be in Europe and Latin America, and the highest tend to be in the Middle East and Africa.

Even within the United States, political polarization seems to come with higher carbon emissions. When we think of Democrats and Republicans as different “kinds of people”, we become less willing to cooperate on finding climate policy solutions.

This is not a complete explanation, of course. China has a low fractionalization rating but a high carbon intensity, and extremely high overall carbon emissions due to their enormous population. Africa’s carbon intensity isn’t as high as you’d think just from their terrible fractionalization, especially if you exclude Nigeria which is a major oil producer.

But I think there is nonetheless a vital truth here: One of the central barriers to serious long-term solutions to climate change is the entrenchment of racial and national identity. Solving the Tragedy of the Commons requires cooperation, we will only cooperate with those we trust, and we will only trust those we consider to be the same “kind of people”.

You can even hear it in the rhetoric: If “we” (Americans) give up our carbon emissions, then “they” (China) will take advantage of us. No one seems to worry about Alabama exploiting California—certainly no Republican would—despite the fact that in real economic terms they basically do. But people in Alabama are Americans; in other words, they count as actual people. People in China don’t count. If anything, people in California are supposed to be considered less American than people in Alabama, despite the fact that vastly more Americans live in California than Alabama. This mirrors the same pattern where we urban residents are somehow “less authentic” even though we outnumber the rural by four to one.
I don’t know how to mend this tribal division; I very much wish I did. But I do know that simply ignoring it isn’t going to work. We can talk all we want about carbon taxes and cap-and-trade, but as long as most of the world’s people are divided into racial, ethnic, and national identities that they consider to be in zero-sum conflict with one another, we are never going to achieve the level of cooperation necessary for a real permanent solution to climate change.

The temperatures and the oceans rise. United we must stand, or divided we shall fall.

Social construction is not fact—and it is not fiction

July 30, JDN 2457965

With the possible exception of politically-charged issues (especially lately in the US), most people are fairly good at distinguishing between true and false, fact and fiction. But there are certain types of ideas that can’t be neatly categorized into fact versus fiction.

First, there are subjective feelings. You can feel angry, or afraid, or sad—and really, truly feel that way—despite having no objective basis for the emotion coming from the external world. Such emotions are usually irrational, but even knowing that doesn’t make them automatically disappear. Distinguishing subjective feelings from objective facts is simple in principle, but often difficult in practice: A great many things simply “feel true” despite being utterly false. (Ask an average American which is more likely to kill them, a terrorist or the car in their garage; I bet quite a few will get the wrong answer. Indeed, if you ask them whether they’re more likely to be shot by someone else or to shoot themselves, almost literally every gun owner is going to get that answer wrong—or they wouldn’t be gun owners.)

The one I really want to focus on today is social constructions. This is a term that has been so thoroughly overused and abused by postmodernist academics (“science is a social construction”, “love is a social construction”, “math is a social construction”, “sex is a social construction”, etc.) that it has almost lost its meaning. Indeed, many people now react with automatic aversion to the term; upon hearing it, they immediately assume—understandably—that whatever is about to follow is nonsense.

But there is actually a very important core meaning to the term “social construction” that we stand to lose if we throw it away entirely. A social construction is something that exists only because we all believe in it.

Every part of that definition is important:

First, a social construction is something that exists: It’s really there, objectively. If you think it doesn’t exist, you’re wrong. It even has objective properties; you can be right or wrong in your beliefs about it, even once you agree that it exists.

Second, a social construction only exists because we all believe in it: If everyone in the world suddenly stopped believing in it, like Tinker Bell it would wink out of existence. The “we all” is important as well; a social construction doesn’t exist simply because one person, or a few people, believe in it—it requires a certain critical mass of society to believe in it. Of course, almost nothing is literally believed by everyone, so it’s more that a social construction exists insofar as people believe in it—and thus can attain a weaker or stronger kind of existence as beliefs change.

The combination of these two features makes social constructions a very weird sort of entity. They aren’t merely subjective beliefs; you can’t be wrong about what you are feeling right now (though you can certainly lie about it), but you can definitely be wrong about the social constructions of your society. But we can’t all be wrong about the social constructions of our society; once enough of our society stops believing in them, they will no longer exist. And when we have conflict over a social construction, its existence can become weaker or stronger—indeed, it can exist to some of us but not to others.

If all this sounds very bizarre and reminds you of postmodernist nonsense that might come from the Wisdom of Chopra randomizer, allow me to provide a concrete and indisputable example of a social construction that is vitally important to economics: Money.

The US dollar is a social construction. It has all sorts of well-defined objective properties, from its purchasing power in the market to its exchange rate with other currencies (also all social constructions). The markets in which it is spent are social constructions. The laws which regulate those markets are social constructions. The government which makes those laws is a social construction.

But it is not social constructions all the way down. The paper upon which the dollar was printed is a physical object with objective factual existence. It is an artifact—it was made by humans, and wouldn’t exist if we didn’t—but now that we’ve made it, it exists and would continue to exist regardless of whether we believe in it or even whether we continue to exist. The cotton from which it was made is also partly artificial, bred over centuries from a lifeform that evolved over millions of years. But the carbon atoms inside that cotton were made in a star, and that star existed and fused its carbon billions of years before any life on Earth existed, much less humans in particular. This is why the statements “math is a social construction” and “science is a social construction” are so ridiculous. Okay, sure, the institutions of science and mathematics are social constructions, but that’s trivial; nobody would dispute that, and it’s not terribly interesting. (What, you mean if everyone stopped going to MIT, there would be no MIT!?) The truths of science and mathematics were true long before we were even here—indeed, the fundamental truths of mathematics could not have failed to be true in any possible universe.

But the US dollar did not exist before human beings created it, and unlike the physical paper, the purchasing power of that dollar (which is, after all, mainly what we care about) is entirely socially constructed. If everyone in the world suddenly stopped accepting US dollars as money, the US dollar would cease to be money. If even a few million people in the US suddenly stopped accepting dollars, its value would become much more precarious, and inflation would be sure to follow.

Nor is this simply because the US dollar is a fiat currency. That makes it more obvious, to be sure; a fiat currency attains its value solely through social construction, as the physical object itself has negligible value. But even when we were on the gold standard, our currency was representative; the paper itself was still equally worthless. If you wanted gold, you’d have to exchange the paper for it; and that process of exchange is entirely social construction.

And what about gold coins, one of the oldest forms of money? There the physical object might actually be useful for something, but not all that much. It’s shiny, you can make jewelry out of it, it doesn’t corrode, it can be used to replace lost teeth, it has anti-inflammatory properties—and millennia later we found out that its dense nucleus is useful for particle accelerator experiments and it is a very reliable electrical conductor useful for making microchips. But all in all, gold is really not that useful. If gold were priced based on its true usefulness, it would be extraordinarily cheap; cheaper than water, for sure, as it’s much less useful than water. Yet very few cultures have ever used water as currency (though some have used salt). Thus, most of the value of gold is itself socially constructed; you value gold not to use it, but to impress other people with the fact that you own it (or indeed to sell it to them). Stranded alone on a desert island, you’d do anything for fresh water, but gold means nothing to you. And a gold coin actually takes on additional socially-constructed value: gold coins almost always had seignorage, additional value the government received from minting them over and above the market price of the gold itself.

Economics, in fact, is largely about social constructions; or rather I should say it’s about the process of producing and distributing artifacts by means of social constructions. Artifacts like houses, cars, computers, and toasters; social constructions like money, bonds, deeds, policies, rights, corporations, and governments. Of course, there are also services, which are not quite artifacts since they stop existing when we stop doing them—though, crucially, not when we stop believing in them; your waiter still delivered your lunch even if you persist in the delusion that the lunch is not there. And there are natural resources, which existed before us (and may or may not exist after us). But these are corner cases; mostly economics is about using laws and money to distribute goods, which means using social constructions to distribute artifacts.

Other very important social constructions include race and gender. Not melanin and sex, mind you; human beings have real, biological variation in skin tone and body shape. But the concept of a race—especially the race categories we ordinarily use—is socially constructed. Nothing biological forced us to regard Kenyan and Burkinabe as the same “race” while Ainu and Navajo are different “races”; indeed, the genetic data is screaming at us in the opposite direction. Humans are sexually dimorphic, with some rare exceptions (only about 0.02% of people are intersex; about 0.3% are transgender; and no more than 5% have sex chromosome abnormalities). But the much thicker concept of gender that comes with a whole system of norms and attitudes is all socially constructed.

It’s one thing to say that perhaps males are, on average, more genetically predisposed to be systematizers than females, and thus men are more attracted to engineering and women to nursing. That could, in fact, be true, though the evidence remains quite weak. It’s quite another to say that women must not be engineers, even if they want to be, and men must not be nurses—yet the latter was, until very recently, the quite explicit and enforced norm. Standards of clothing are even more obviously socially-constructed; in Western cultures (except the Celts, for some reason), flared garments are “dresses” and hence “feminine”; in East Asian cultures, flared garments such as kimono are gender-neutral, and gender is instead expressed through clothing by subtler aspects such as being fastened on the left instead of the right. In a thousand different ways, we mark our gender by what we wear, how we speak, even how we walk—and what’s more, we enforce those gender markings. It’s not simply that males typically speak in lower pitches (which does actually have a biological basis); it’s that males who speak in higher pitches are seen as less of a man, and that is a bad thing. We have a very strict hierarchy, which is imposed in almost every culture: It is best to be a man, worse to be a woman who acts like a woman, worse still to be a woman who acts like a man, and worst of all to be a man who acts like a woman. What it means to “act like a man” or “act like a woman” varies substantially; but the core hierarchy persists.

Social constructions like these are in fact some of the most important things in our lives. Human beings are uniquely social animals, and we define our meaning and purpose in life largely through social constructions.

It can be tempting, therefore, to be cynical about this, and say that our lives are built around what is not real—that is, fiction. But while this may be true for religious fanatics who honestly believe that some supernatural being will reward them for their acts of devotion, it is not a fair or accurate description of someone who makes comparable sacrifices for “the United States” or “free speech” or “liberty”. These are social constructions, not fictions. They really do exist. Indeed, it is only because we are willing to make sacrifices to maintain them that they continue to exist. Free speech isn’t maintained by us saying things we want to say; it is maintained by us allowing other people to say things we don’t want to hear. Liberty is not protected by us doing whatever we feel like, but by not doing things we would be tempted to do that impose upon other people’s freedom. If in our cynicism we act as though these things are fictions, they may soon become so.

But it would be a lot easier to get this across to people, I think, if folks would stop saying idiotic things like “science is a social construction”.

Sometimes people have to lose their jobs. This isn’t a bad thing.

Oct 8, JDN 2457670

Eliezer Yudkowsky (founder of the excellent blog forum Less Wrong) has a term he likes to use to distinguish his economic policy views from liberal, conservative, and even libertarian ones: “econoliterate”, meaning the sort of economic policy ideas one comes up with when one actually knows a good deal about economics.

In general I think Yudkowsky overestimates this effect; I’ve known some very knowledgeable economists who disagree quite strongly over economic policy, often along the conventional political lines of liberal versus conservative: Liberal economists want more progressive taxation and more Keynesian monetary and fiscal policy, while conservative economists want to reduce taxes on capital and remove regulations. Theoretically you can want all these things—as Miles Kimball does—but it’s rare. Conservative economists hate the minimum wage, and lean on the theory that says it should be harmful to employment; liberal economists are ambivalent about the minimum wage, and lean on the empirical data that shows it has almost no effect on employment. Which is more reliable? The empirical data, obviously—and until more economists start thinking that way, economics is never truly going to be the science it should be.

But there are a few issues where Yudkowsky’s “econoliterate” concept really does seem to make sense, where there is one view held by most people, and another held by economists, regardless of who is liberal or conservative. One such example is free trade, which almost all economists believe in. A recent poll of prominent economists by the University of Chicago found literally zero who agreed with protectionist tariffs.

Another example is my topic for today: People losing their jobs.

Not unemployment, which both economists and almost everyone else agree is bad, but people losing their jobs. The general consensus among the public seems to be that people losing jobs is always bad, while economists generally consider it a sign of an economy that is running smoothly and efficiently.

To be clear, of course losing your job is bad for you; I don’t mean to imply that if you lose your job you shouldn’t be sad or frustrated or anxious about that, particularly not in our current system. Rather, I mean to say that policy which tries to keep people in their jobs is almost always a bad idea.

I think the problem is that most people don’t quite grasp that losing your job and not having a job are not the same thing. People not having jobs who want to have jobs—unemployment—is a bad thing. But losing your job doesn’t mean you have to stay unemployed; it could simply mean you get a new job. And indeed, that is what it should mean, if the economy is running properly.

Check out this graph, from FRED:

[Figure: hires and separations rates, from FRED]

The red line shows hires—people getting jobs. The blue line shows separations—people losing jobs or leaving jobs. During a recession (the most recent two are shown on this graph), people don’t actually leave their jobs faster than usual; if anything, slightly slower. Instead what happens is that hiring rates drop dramatically. When the economy is doing well (as it is right now, more or less), both hires and separations occur at very high rates.
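
If you want to reproduce the graph yourself, here is a minimal sketch using pandas_datareader. I am assuming JTSHIR and JTSTSR are the FRED codes for the JOLTS hires and total separations rates; verify them on fred.stlouisfed.org before relying on this:

```python
# Minimal sketch for reproducing the hires-vs-separations graph.
# Assumed FRED series codes: JTSHIR (hires rate) and JTSTSR (total
# separations rate), both from the JOLTS survey.
import matplotlib.pyplot as plt
from pandas_datareader import data as web

jolts = web.DataReader(["JTSHIR", "JTSTSR"], "fred", start="2001-01-01")
jolts.columns = ["Hires", "Total separations"]
jolts.plot()
plt.ylabel("Percent of employment (monthly rate)")
plt.title("JOLTS hires and separations")
plt.show()
```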

Why is this? Well, think about what a job is, really: It’s something that needs done, that no one wants to do for free, so someone pays someone else to do it. Once that thing gets done, what should happen? The job should end. It’s done. The purpose of the job was not to provide for your standard of living; it was to achieve the task at hand. Once it doesn’t need done, why keep doing it?

We tend to lose sight of this, for a couple of reasons. First, we don’t have a basic income, and our social welfare system is very minimal; so a job usually is the only way people have to provide for their standard of living, and they come to think of this as the purpose of the job. Second, many jobs don’t really “get done” in any clear sense; individual tasks are completed, but new ones always arise. After every email sent is another received; after every patient treated is another who falls ill.

But even that is really only true in the short run. In the long run, almost all jobs do actually get done, in the sense that no one has to do them anymore. The job of cleaning up after horses is done (with rare exceptions). The job of manufacturing vacuum tubes for computers is done. Indeed, the job of being a computer—that used to be a profession, young women toiling away with slide rules—is very much done. There are no court jesters anymore, no town criers, and very few artisans (and even then, they’re really more like hobbyists). There are more writers now than ever, and occasional stenographers, but there are no scribes—no one powerful but illiterate pays others just to write things down, because no one powerful is illiterate (and few who are not powerful are, and fewer all the time).

When a job “gets done” in this long-run sense, we usually say that it is obsolete, and again think of this as somehow a bad thing, like we are somehow losing the ability to do something. No, we are gaining the ability to do something better. Jobs don’t become obsolete because we can’t do them anymore; they become obsolete because we don’t need to do them anymore. Instead of computers being a profession that toils with slide rules, they are thinking machines that fit in our pockets; and there are plenty of jobs now for software engineers, web developers, network administrators, hardware designers, and so on as a result.

Soon, there will be no coal miners, and very few oil drillers—or at least I hope so, for the sake of our planet’s climate. There will be far fewer auto workers (robots have already done most of that), but far more construction workers who install rail lines. There will be more nuclear engineers, more photovoltaic researchers, even more miners and roofers, because we need to mine uranium and install solar panels on rooftops.

Yet even by saying that I am falling into the trap: I am making it sound like the benefit of new technology is that it opens up more new jobs. Typically it does do that, but that isn’t what it’s for. The purpose of technology is to get things done.

Remember my parable of the dishwasher. The goal of our economy is not to make people work; it is to provide people with goods and services. If we could invent a machine today that would do the job of everyone in the world and thereby put us all out of work, most people think that would be terrible—but in fact it would be wonderful.

Or at least it could be, if we did it right. See, the problem right now is that while poor people think that the purpose of a job is to provide for their needs, rich people think that the purpose of poor people is to do jobs. If there are no jobs to be done, why bother with them? At that point, they’re just in the way! (Think I’m exaggerating? Why else would anyone put a work requirement on TANF and SNAP? To do that, you must literally think that poor people do not deserve to eat or have homes if they aren’t, right now, working for an employer. You can couch that in cold economic jargon as “maximizing work incentives”, but that’s what you’re doing—you’re threatening people with starvation if they can’t or won’t find jobs.)

What would happen if we tried to stop people from losing their jobs? Typically, inefficiency. When employers aren’t allowed to lay people off who are no longer doing useful work, we end up in a situation where a large segment of the population is being paid but isn’t doing useful work—and unlike the situation with a basic income, those people would lose their income, at least temporarily, if they quit and tried to do something more useful. There is still considerable uncertainty within the empirical literature on just how much “employment protection” (laws that make it hard to lay people off) actually creates inefficiency and reduces productivity and employment, so it could be that this effect is small—but by the same token, it does not seem to have the desired effect of reducing unemployment either. It may be like the minimum wage, where the effect just isn’t all that large. But it’s probably not saving people from being unemployed; it may simply be shifting the distribution of unemployment, so that people with protected jobs are almost never unemployed and people without them are unemployed much more frequently. (This doesn’t have to be based in law, either; though it is a matter of custom rather than law, it’s quite clear that tenure for university professors makes tenured professors vastly more secure, but at the cost of making employment tenuous and underpaid for adjuncts.)

There are other policies we could adopt that are better than employment protection—active labor market policies like those in Denmark, which make it easier to find a good job. Yet even then, we’re assuming that everyone needs a job—and increasingly, that just isn’t true.

So, when we invent a new technology that replaces workers, workers are laid off from their jobs—and that is as it should be. What happens next is what we do wrong, and it’s not even anybody in particular; this is something our whole society does wrong: All those displaced workers get nothing. The extra profit from the more efficient production goes entirely to the shareholders of the corporation—and those shareholders are almost entirely members of the top 0.01%. So the poor get poorer and the rich get richer.

The real problem here is not that people lose their jobs; it’s that capital ownership is distributed so unequally. And boy, is it ever! Here are some graphs I made of the distribution of net wealth in the US, using data from the US Census.

Here are the quintiles of the population as a whole:

[Figure: US net wealth by quintile]

And here are the medians by race:

[Figure: median US net wealth by race]

Medians by age:

[Figure: median US net wealth by age]

Medians by education:

[Figure: median US net wealth by education]

And, perhaps most instructively, here are the quintiles of people who own their homes versus those who rent. (The rent is too damn high!)

[Figure: US net wealth quintiles, homeowners versus renters]

All that is just within the US, and already the figures range from the mean net wealth of the lowest quintile of people under 35 (-$45,000, yes negative—student loans) to the mean net wealth of the highest quintile of people with graduate degrees ($3.8 million). All but the top quintile of renters are poorer than all but the bottom quintile of homeowners. And the median Black or Hispanic person has less than one-tenth the wealth of the median White or Asian person.

If we look worldwide, wealth inequality is even starker. Based on UN University figures, 40% of world wealth is owned by the top 1%; 70% by the top 5%; and 80% by the top 10%. There is less total wealth in the bottom 80% than in the 80-90% decile alone. According to Oxfam, the richest 85 individuals own as much net wealth as the poorest 3.7 billion. They are the 0.000,001%.
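
That last figure is easy to sanity-check; the world population used here is a rough assumed round number:

```python
# Rough check of the "0.000,001%" figure: 85 people as a share of an
# assumed world population of ~7.2 billion at the time of the report.
print(f"{85 / 7.2e9:.7%}")  # about 0.0000012% of the world's population
```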

If we had an equal distribution of capital ownership, people would be happy when their jobs became obsolete, because it would free them up to do other things (either new jobs, or simply leisure time), while not decreasing their income—because they would be the shareholders receiving those extra profits from higher efficiency. People would be excited to hear about new technologies that might displace their work, especially if those technologies would displace the tedious and difficult parts and leave the creative and fun parts. Losing your job could be the best thing that ever happened to you.

The business cycle would still be a problem; we have good reason not to let recessions happen. But stopping the churn of hiring and firing wouldn’t actually make our society better off; it would keep people in jobs where they don’t belong and prevent us from using our time and labor for its best use.

Perhaps the reason most people don’t even think of this solution is precisely because of the extreme inequality of capital distribution—and the fact that it has more or less always been this way since the dawn of civilization. It doesn’t seem to even occur to most people that capital income is a thing that exists, because they are so far removed from actually having any amount of capital sufficient to generate meaningful income. Perhaps when a robot takes their job, on some level they imagine that the robot is getting paid, when of course the money goes to the shareholders of the corporation that made the robot and the corporation that is using the robot in place of workers. Or perhaps they imagine that those shareholders did so much hard work that they deserve all that money for the hours they spent.

Because pay is for work, isn’t it? The reason you get money is because you’ve earned it by your hard work?

No. This is a lie, told to you by the rich and powerful in order to control you. They know full well that income doesn’t just come from wages—most of their income doesn’t come from wages! Yet this is even built into our language; we say “net worth” and “earnings” rather than “net wealth” and “income”. (Parade magazine has a regular segment called “What People Earn”; it should be called “What People Receive”.) Money is not your just reward for your hard work—at least, not always.

The reason you get money is that this is a useful means of allocating resources in our society. (Remember, money was created by governments for the purpose of facilitating economic transactions. It is not something that occurs in nature.) Wages are one way to do that, but they are far from the only way; they are not even the only way currently in use. As technology advances, we should expect a larger proportion of our income to go to capital—but what we’ve been doing wrong is setting it up so that only a handful of people actually own any capital.

Fix that, and maybe people will finally be able to see that losing your job isn’t such a bad thing; it could even be satisfying, the fulfillment of finally getting something done.

Moral responsibility does not inherit across generations

JDN 2457548

In last week’s post I made a sharp distinction between believing in human progress and believing that colonialism was justified. To make this argument, I relied upon a moral assumption that seems to me perfectly obvious, and probably would to most ethicists as well: Moral responsibility does not inherit across generations, and people are only responsible for their individual actions.

But in fact this principle is controversial in many circles. When I read utterly nonsensical arguments like this one from the aptly-named Race Baitr, saying that White people have no role to play in the liberation of Black people, apparently because our blood is somehow tainted by the crimes of our ancestors, it becomes apparent to me that this principle is not obvious to everyone, and is therefore worth defending. Indeed, many applications of the concept of “White Privilege” seem to ignore this principle, speaking as though racism is not something one does or participates in, but something that one simply is by being born with less melanin. Here’s a Salon interview specifically rejecting the proposition that racism is something one does:

For white people, their identities rest on the idea of racism as about good or bad people, about moral or immoral singular acts, and if we’re good, moral people we can’t be racist – we don’t engage in those acts. This is one of the most effective adaptations of racism over time—that we can think of racism as only something that individuals either are or are not “doing.”

If racism isn’t something one does, then what in the world is it? It’s all well and good to talk about systems and social institutions, but ultimately systems and social institutions are made of human behaviors. If you think most White people aren’t doing enough to combat racism (which sounds about right to me!), say that—don’t make some bizarre accusation that simply by existing we are inherently racist. (Also: We? I’m only 75% White, so am I only 75% inherently racist?) And please, stop redefining the word “racism” to mean something other than what everyone uses it to mean; “White people are snakes” is in fact a racist sentiment (and yes, one I’ve actually heard—indeed, here is the late Muhammad Ali comparing all White people to rattlesnakes, and Huffington Post fawning over him for it).

Racism is clearly more common and typically worse when performed by White people against Black people—but contrary to the claims of some social justice activists the White perpetrator and Black victim are not part of the definition of racism. Similarly, sexism is more common and more severe committed by men against women, but that doesn’t mean that “men are pigs” is not a sexist statement (and don’t tell me you haven’t heard that one). I don’t have a good word for bigotry by gay people against straight people (“heterophobia”?) but it clearly does happen on occasion, and similarly cannot be defined out of existence.

I wouldn’t care so much that you make this distinction between “racism” and “racial prejudice”, except that it’s not the normal usage of the word “racism” and therefore confuses people, and also this redefinition clearly is meant to serve a political purpose that is quite insidious, namely making excuses for the most extreme and hateful prejudice as long as it’s committed by people of the appropriate color. If “White people are snakes” is not racism, then the word has no meaning.

Not all discussions of “White Privilege” are like this, of course; this article from Occupy Wall Street actually does a fairly good job of making “White Privilege” into a sensible concept, albeit still not a terribly useful one in my opinion. I think the useful concept is oppression—the problem here is not how we are treating White people, but how we are treating everyone else. As the article itself puts it, “What privilege gives you is the freedom to be who you are.” Shouldn’t everyone have that?

Almost all the so-called “benefits” or “perks” associated with “privilege” are actually forgone harms—they are not good things done to you, but bad things not done to you. The article insists: “But benefitting from racist systems doesn’t mean that everything is magically easy for us. It just means that as hard as things are, they could always be worse.” No, that is not what the word “benefit” means. The word “benefit” means you would be worse off without it—and in most cases that simply isn’t true. Many White people obviously think that it is true—which is probably a big reason why so many White people fight so hard to defend racism: they have been convinced that it is in their self-interest. But, with rare exceptions, it is not; most racial discrimination has literally zero long-run benefit. It’s just bad. Maybe if we helped people appreciate that more, they would be less resistant to fighting racism!

The only features of “privilege” that really make sense as benefits are those that occur in a state of competition—like being more likely to be hired for a job or get a loan—but one of the most important insights of economics is that competition is nonzero-sum, and fairer competition ultimately means a more efficient economy and thus more prosperity for everyone.

But okay, let’s set that aside and talk about this core question of what sort of responsibility we bear for the acts of our ancestors. Many White people clearly do feel deep shame about what their ancestors (or people the same color as their ancestors!) did hundreds of years ago. The psychological reactance to that shame may actually be what makes so many White people deny that racism even exists (or exists anymore)—though a majority of Americans of all races do believe that racism is still widespread.

We also apply some sense of moral responsibility to whole races quite frequently. We speak of a policy “benefiting White people” or “harming Black people” and quickly elide the distinction between harming specific people who are Black, and somehow harming “Black people” as a group. The former happens all the time—the latter is utterly nonsensical. Similarly, we speak of a “debt owed by White people to Black people” (which might actually be coherent in the very narrow sense of economic reparations, because people do inherit money! They probably shouldn’t—that is literally feudalist—but in the existing system they in fact do), which makes about as much sense as a debt owed by tall people to short people. As Walter Benn Michaels pointed out in The Trouble with Diversity (which I highly recommend), because of this bizarre sense of responsibility we are often in the habit of “apologizing for something you didn’t do to people to whom you didn’t do it (indeed to whom it wasn’t done)”. It is my responsibility to condemn colonialism (which I indeed do), and to fight to ensure that it never happens again; it is not my responsibility to apologize for colonialism.

This makes some sense in evolutionary terms; it’s part of the all-encompassing tribal paradigm, wherein human beings come to identify themselves with groups and treat those groups as the meaningful moral agents. It’s much easier to maintain the cohesion of a tribe against the slings and arrows (sometimes quite literal) of outrageous fortune if everyone believes that the tribe is one moral agent worthy of ultimate concern.

This concept of racial responsibility is clearly deeply ingrained in human minds, for it appears in some of our oldest texts, including the Bible: “You shall not bow down to them or worship them; for I, the Lord your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me” (Exodus 20:5).

Why is inheritance of moral responsibility across generations nonsensical? Any number of reasons; take your pick. The economist in me leaps to “Ancestry cannot be incentivized.” There’s no point in holding people responsible for things they can’t control, because in doing so you will not in any way alter behavior. The Stanford Encyclopedia of Philosophy article on moral responsibility takes it as so obvious that people are only responsible for actions they themselves did that it doesn’t even bother to mention it as an assumption. (Its big question is how to reconcile moral responsibility with determinism, which turns out to be not all that difficult.)

An interesting counter-argument might be that descent can be incentivized: You could use rewards and punishments applied to future generations to motivate current actions. But this is actually one of the ways that incentives clearly depart from moral responsibilities; you could incentivize me to do something by threatening to murder 1,000 children in China if I don’t, but even if it was in fact something I ought to do, it wouldn’t be those children’s fault if I didn’t do it. They wouldn’t deserve punishment for my inaction—I might, and you certainly would for using such a cruel incentive.

Moreover, there’s a problem with dynamic consistency here: Once the action is already done, what’s the sense in carrying out the punishment? This is why a moral theory of punishment can’t merely be based on deterrence—the fact that you could deter a bad action by some other less-bad action doesn’t make the less-bad action necessarily a deserved punishment, particularly if it is applied to someone who wasn’t responsible for the action you sought to deter. In any case, people aren’t thinking that we should threaten to punish future generations if people are racist today; they are feeling guilty that their ancestors were racist generations ago. That doesn’t make any sense even on this deterrence theory.

There’s another problem with trying to inherit moral responsibility: People have lots of ancestors. Some of my ancestors were most likely rapists and murderers; most were ordinary folk; a few may have been great heroes—and this is true of just about anyone anywhere. We all have bad ancestors, great ancestors, and, mostly, pretty good ancestors. 75% of my ancestors are European, but 25% are Native American; so if I am to apologize for colonialism, should I be apologizing to myself? (Only 75%, perhaps?) If you go back enough generations, literally everyone is related—and you may only have to go back about 4,000 years. That’s historical time.
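
The arithmetic behind that claim is easy to check. Here is a minimal sketch; the generation length and the historical population figure are rough assumptions:

```python
# Naively, you have 2^g ancestors g generations back. That count
# exceeds any plausible historical world population within about 30
# generations, so family trees must overlap enormously. Both constants
# below are rough assumptions for illustration.
GENERATION_YEARS = 25
HISTORICAL_WORLD_POP = 300_000_000  # rough world population, ~1000 CE

g = 1
while 2 ** g <= HISTORICAL_WORLD_POP:
    g += 1
print(f"2^{g} ancestors just {g} generations (~{g * GENERATION_YEARS} years) back")
```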

Of course, we wouldn’t be different colors in the first place if there weren’t some differences in ancestry, but there is a huge amount of gene flow between different human populations. The US is a particularly mixed place; because most Black Americans are quite genetically mixed, it is about as likely that any randomly-selected Black person in the US is descended from a slaveowner as it is that any randomly-selected White person is. (Especially since there were a large number of Black slaveowners in Africa and even some in the United States.) What moral significance does this have? Basically none! That’s the whole point; your ancestors don’t define who you are.

If these facts do have any moral significance, it is to undermine the sense most people seem to have that there are well-defined groups called “races” that exist in reality, to which culture responds. No; races were created by culture. I’ve said this before, but it bears repeating: The “races” we hold most dear in the US, White and Black, are in fact the most nonsensical. “Asian” and “Native American” at least almost make sense as categories, though Chippewa are more closely related to Ainu than Ainu are to Papuans. “Latino” isn’t utterly incoherent, though it includes as much Aztec as it does Iberian. But “White” is a club one can join or be kicked out of, while “Black” encompasses the majority of human genetic diversity.

Sex is a real thing—while there are intermediate cases of course, broadly speaking humans, like most metazoa, are sexually dimorphic and come in “male” and “female” varieties. So sexism took a real phenomenon and applied cultural dynamics to it; but that’s not what happened with racism. Insofar as there was a real phenomenon, it was extremely superficial—quite literally skin deep. In that respect, race is more like class—a categorization that is itself the result of social institutions.

To be clear: Does the fact that we don’t inherit moral responsibility from our ancestors absolve us from doing anything to rectify the inequities of racism? Absolutely not. Not only is there plenty of present discrimination going on we should be fighting, there are also inherited inequities due to the way that assets and skills are passed on from one generation to the next. If my grandfather stole a painting from your grandfather and both our grandfathers are dead but I am now hanging that painting in my den, I don’t owe you an apology—but I damn well owe you a painting.

The further we get from the past discrimination, the harder it becomes to make reparations; but all hope is not lost. We still have the option of trying to reset everyone’s status to the same at birth and maintaining equality of opportunity from there. Of course we’ll never achieve total equality of opportunity—but we can get much closer than we presently are.

We could start by establishing an extremely high estate tax—on the order of 99%—because no one has a right to be born rich. Free public education is another good way of equalizing the distribution of “human capital” that would otherwise be concentrated in particular families, and expanding it to higher education would make it that much better. It even makes sense, at least in the short run, to establish some affirmative action policies that are race-conscious and sex-conscious, because there are so many biases in the opposite direction that sometimes you must fight bias with bias.

Actually what I think we should do in hiring, for example, is assemble a pool of applicants based on demographic quotas to ensure a representative sample, and then anonymize the applications and assess them on merit (see the sketch below). This way we do ensure representation and reduce bias, but don’t ever end up hiring anyone other than the most qualified candidate. But nowhere should we think that this is something that White men “owe” to women or Black people; it’s something that people should do in order to correct the biases that otherwise exist in our society. Similarly with regard to sexism: Women exhibit just as much unconscious bias against other women as men do. This is not “men” hurting “women”—this is a set of unconscious biases found almost everywhere, and social structures almost everywhere, that systematically discriminate against people because they are women.
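
Here is a minimal sketch of that two-stage procedure; the applicant fields, group labels, and merit score are all hypothetical placeholders, not a real hiring system:

```python
# Minimal sketch of the two-stage hiring procedure described above:
# (1) assemble a demographically representative applicant pool,
# (2) anonymize the applications and rank purely on merit.
# All field names and the merit score are hypothetical placeholders.
import random
from collections import defaultdict

def representative_pool(applicants, group_key, per_group):
    """Stage 1: sample up to per_group applicants from each group."""
    groups = defaultdict(list)
    for applicant in applicants:
        groups[applicant[group_key]].append(applicant)
    pool = []
    for members in groups.values():
        pool.extend(random.sample(members, min(per_group, len(members))))
    return pool

def rank_blind(pool, merit_fields):
    """Stage 2: strip identifying fields, then rank on merit alone."""
    blind = [{field: a[field] for field in merit_fields} for a in pool]
    return sorted(blind, key=lambda a: a["merit_score"], reverse=True)

applicants = [
    {"name": "A", "group": "x", "merit_score": 88},
    {"name": "B", "group": "y", "merit_score": 91},
    {"name": "C", "group": "x", "merit_score": 75},
]
pool = representative_pool(applicants, "group", per_group=2)
print(rank_blind(pool, merit_fields=["merit_score"]))
```

The design point is the ordering: representation is enforced only when building the pool, and the final ranking never sees any demographic information.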

Perhaps by understanding that this is not about which “team” you’re on (which tribe you’re in), but about what policy we should have, we can finally make these biases disappear, or at least fade to the point that they are negligible.