When are we going to get serious about climate change?

Oct 8, JDN 2458035

Those two storms weren’t simply natural phenomena. We had a hand in creating them.

The EPA doesn’t want to talk about the connection, and we don’t have enough statistical power to really be certain, but there is by now an overwhelming scientific consensus that global climate change will increase hurricane intensity. The only real question left is whether it is already doing so.

The good news is that global carbon emissions are no longer rising. They have been essentially static for the last few years. The bad news is that this is almost certainly too little, too late.

The US is not on track to hit our 2025 emission target; we will probably exceed it by at least 20%.

But the real problem is that the targets themselves are much too high. Most countries have pledged to drop emissions only about 8-10% below their 1990 levels.

Even with the progress we have made, we are on track to exhaust, by around 2040, the global carbon budget for keeping warming below 2°C. We have been reducing emission intensity by about 0.8% per year—we need to be reducing it by at least 3% per year and preferably faster. Highly developed nations should be switching to nuclear energy as quickly as possible; an equitable global emission target requires us to reduce our emissions by 80% by 2050.
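To make the arithmetic concrete, here is a toy carbon-budget calculation. It treats the decline as applying to absolute emissions rather than intensity, and the figures (36 GtCO2 per year of emissions, 800 GtCO2 of remaining budget) are round illustrative numbers of my own, not taken from the sources above:

```python
# Toy model: annual emissions decline at a fixed rate; count the
# years until cumulative emissions exhaust the remaining budget.
def years_until_budget_spent(annual_emissions, decline_rate, budget):
    years, total = 0, 0.0
    while total + annual_emissions <= budget:
        total += annual_emissions
        annual_emissions *= 1 - decline_rate
        years += 1
    return years

print(years_until_budget_spent(36, 0.008, 800))  # 24 years: spent around 2040
print(years_until_budget_spent(36, 0.03, 800))   # 36 years of breathing room
```

Even the faster rate only postpones the day of reckoning; to stay within the budget forever, the decline rate must be high enough that the infinite sum of future emissions (annual emissions divided by the decline rate) fits inside it, which with these illustrative numbers means at least 4.5% per year.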

At the current rate of improvement, we will overshoot the 2°C warming target and very likely the 3°C target as well.

Why aren’t we doing better? There is of course the Tragedy of the Commons to consider: Each individual country acting in its own self-interest will continue to pollute more, as this is the cheapest and easiest way to maintain industrial development. But then if all countries do so, the result is a disaster for us all.

But this explanation is too simple. We have managed to achieve some international cooperation on this issue. The Kyoto Protocol has worked; emissions among Kyoto member nations have been reduced by more than 20% below 1990 levels, far more than originally promised. The EU in particular has taken a leadership role in reducing emissions, and has a serious shot at hitting its target of 40% reduction by 2030.

That is a truly astonishing scale of cooperation; the EU has a population of over 500 million people and spans 28 nations. It would seem like doing that should get us halfway to cooperating across all nations and all the world’s people.

But there is a vital difference between the EU and the world as a whole: The tribal paradigm. Europeans certainly have their differences: The UK and France still don’t really get along, everyone’s bitter with Germany about that whole Hitler business, and as the acronym PIIGS emphasizes, the peripheral countries have never quite felt as European as the core Schengen members. But despite all this, there has been a basic sense of trans-national (meta-national?) unity among Europeans for a long time.

For one thing, today Europeans see each other as the same race. That wasn’t always the case. In Medieval times, ethnic categories were as fine as “Cornish” and “Liverpudlian”. (To be fair, there do still exist a handful of Cornish nationalists.) Starting around the 18th century, Europeans began to unite under the heading of “White people”, a classification that took on particular significance during the trans-Atlantic slave trade. But even in the 19th century, “Irish” and “Sicilian” were seen as racial categories. It wasn’t until the 20th century that Europeans really began to think of themselves as one “kind of people”, and not coincidentally it was at the end of the 20th century that the European Union finally took hold.

There is another region that has had a similar sense of unification: Latin America. Again, there are conflicts: There are a lot of nasty stereotypes about Puerto Ricans among Cubans and vice-versa. But Latinos, by and large, think of each other as the same “kind of people”, distinct from both Europeans and the indigenous population of the Americas.

I don’t think it is coincidental that the lowest carbon emission intensity (carbon emissions / GDP PPP) in the world is in Latin America, followed closely by Europe.

And if you had to name right now the most ethnically divided region in the world, what would you say? The Middle East, of course. And sure enough, they have the worst carbon emission intensity. (Of course, oil is an obvious confounding variable here, likely contributing to both.)

Indeed, the countries with the lowest ethnic fractionalization ratings tend to be in Europe and Latin America, and the highest tend to be in the Middle East and Africa.

Even within the United States, political polarization seems to come with higher carbon emissions. When we think of Democrats and Republicans as different “kinds of people”, we become less willing to cooperate on finding climate policy solutions.

This is not a complete explanation, of course. China has a low fractionalization rating but a high carbon intensity, and extremely high overall carbon emissions due to their enormous population. Africa’s carbon intensity isn’t as high as you’d think just from their terrible fractionalization, especially if you exclude Nigeria which is a major oil producer.

But I think there is nonetheless a vital truth here: One of the central barriers to serious long-term solutions to climate change is the entrenchment of racial and national identity. Solving the Tragedy of the Commons requires cooperation, we will only cooperate with those we trust, and we will only trust those we consider to be the same “kind of people”.

You can even hear it in the rhetoric: If “we” (Americans) give up our carbon emissions, then “they” (China) will take advantage of us. No one seems to worry about Alabama exploiting California—certainly no Republican would—despite the fact that in real economic terms they basically do. But people in Alabama are Americans; in other words, they count as actual people. People in China don’t count. If anything, people in California are supposed to be considered less American than people in Alabama, despite the fact that vastly more Americans live in California than Alabama. This mirrors the same pattern where we urban residents are somehow “less authentic” even though we outnumber the rural by four to one.
I don’t know how to mend this tribal division; I very much wish I did. But I do know that simply ignoring it isn’t going to work. We can talk all we want about carbon taxes and cap-and-trade, but as long as most of the world’s people are divided into racial, ethnic, and national identities that they consider to be in zero-sum conflict with one another, we are never going to achieve the level of cooperation necessary for a real permanent solution to climate change.

The temperatures and the oceans rise. United we must stand, or divided we shall fall.

Bigotry is more powerful than the market

Nov 20, JDN 2457713

If there’s one message we can take from the election of Donald Trump, it is that bigotry remains a powerful force in our society. A lot of autoflagellating liberals have been trying to explain how this election result really reflects our failure to help people displaced by technology and globalization (despite the fact that personal income and local unemployment had negligible correlation with voting for Trump), or Hillary Clinton’s “bad campaign” that nonetheless managed the same proportion of Democratic turnout that re-elected her husband in 1996.

No, overwhelmingly, the strongest predictor of voting for Trump was being White, and living in an area where most people are White. (Well, actually, that’s if you exclude authoritarianism as an explanatory variable—but really I think that’s part of what we’re trying to explain.) Trump voters were actually concentrated in areas less affected by immigration and globalization. Indeed, there is evidence that these people aren’t racist because they have anxiety about the economy—they are anxious about the economy because they are racist. How does that work? Obama. They can’t believe that the economy is doing well when a Black man is in charge. So all the statistics and even personal experiences mean nothing to them. They know in their hearts that unemployment is rising, even as the BLS data clearly shows it’s falling.

The wide prevalence and enormous power of bigotry should be obvious. But economists rarely talk about it, and I think I know why: Their models say it shouldn’t exist. The free market is supposed to automatically eliminate all forms of bigotry, because they are inefficient.

The argument for why this is supposed to happen actually makes a great deal of sense: If a company has the choice of hiring a White man or a Black woman to do the same job, but they know that the market wage for Black women is lower than the market wage for White men (which it most certainly is), and they will do the same quality and quantity of work, why wouldn’t they hire the Black woman? And indeed, if human beings were rational profit-maximizers, this is probably how they would think.

More recently some neoclassical models have been developed to try to “explain” this behavior, but always without daring to give up the precious assumption of perfect rationality. So instead we get the two leading neoclassical theories of discrimination, which are statistical discrimination and taste-based discrimination.

Statistical discrimination is the idea that under asymmetric information (and we surely have that), features such as race and gender can act as signals of quality because they are correlated with actual quality for various reasons (usually left unspecified), so it is not irrational after all to choose based upon them, since they’re the best you have.

Taste-based discrimination is the idea that people are rationally maximizing preferences that simply aren’t oriented toward maximizing profit or well-being. Instead, they have this extra term in their utility function that says they should also treat White men better than women or Black people. It’s just this extra thing they have.

A small number of studies have been done trying to discern which of these is at work.

The correct answer, of course, is neither.

Statistical discrimination, at least, could be part of what’s going on. Knowing that Black people are less likely to be highly educated than Asians (as they definitely are) might actually be useful information in some circumstances… then again, you list your degree on your resume, don’t you? Knowing that women are more likely to drop out of the workforce after having a child could rationally (if coldly) affect your assessment of future productivity. But shouldn’t the fact that women CEOs outperform men CEOs be incentivizing shareholders to elect women CEOs? Yet that doesn’t seem to happen. Also, in general, people seem to be pretty bad at statistics.

The bigger problem with statistical discrimination as a theory is that it’s really only part of a theory. It can explain why some of the discrimination is rational, but it implies that some of it still isn’t: You need to explain why there are these huge disparities between groups in the first place, and statistical discrimination is unable to do that. In order for the statistics to differ this much, you need a past history of discrimination that wasn’t purely statistical.

Taste-based discrimination, on the other hand, is not a theory at all. It’s special pleading. Rather than admit that people are failing to rationally maximize their utility, we just redefine their utility so that whatever they happen to be doing now “maximizes” it.

This is really what makes the Axiom of Revealed Preference so insidious; if you really take it seriously, it says that whatever you do, must by definition be what you preferred. You can’t possibly be irrational, you can’t possibly be making mistakes of judgment, because by definition whatever you did must be what you wanted. Maybe you enjoy bashing your head into a wall, who am I to judge?

I mean, on some level taste-based discrimination is what’s happening; people think that the world is a better place if they put women and Black people in their place. So in that sense, they are trying to “maximize” some “utility function”. (By the way, most human beings behave in ways that are provably inconsistent with maximizing any well-defined utility function—the Allais Paradox is a classic example.) But the whole framework of calling it “taste-based” is a way of running away from the real explanation. If it’s just “taste”, well, it’s an unexplainable brute fact of the universe, and we just need to accept it. If people are happier being racist, what can you do, eh?
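For readers who have not seen it, here is the Allais Paradox spelled out, using the standard textbook payoffs and normalizing u($0) = 0; the algebra below is the classic argument, not anything original:

```latex
% Pair 1: A = \$1M for sure;
%         B = \$1M w.p. 0.89, \$5M w.p. 0.10, \$0 w.p. 0.01
% Pair 2: C = \$1M w.p. 0.11, else \$0;
%         D = \$5M w.p. 0.10, else \$0
A \succ B \iff u(1\text{M}) > 0.89\,u(1\text{M}) + 0.10\,u(5\text{M})
          \iff 0.11\,u(1\text{M}) > 0.10\,u(5\text{M})
C \succ D \iff 0.11\,u(1\text{M}) > 0.10\,u(5\text{M})
```

Both comparisons reduce to the same inequality, so the modal experimental response (choosing A in the first pair but D in the second) cannot be produced by maximizing any utility function over outcomes.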

So I think it’s high time to start calling it what it is. This is not a question of taste. This is a question of tribal instinct. This is the product of millions of years of evolution optimizing the human brain to act in the perceived interest of whatever it defines as its “tribe”. It could be yourself, your family, your village, your town, your religion, your nation, your race, your gender, or even the whole of humanity or beyond into all sentient beings. But whatever it is, the fundamental tribe is the one thing you care most about. It is what you would sacrifice anything else for.

And what we learned on November 9 this year is that an awful lot of Americans define their tribe in very narrow terms. Nationalistic and xenophobic at best, racist and misogynistic at worst.

But I suppose this really isn’t so surprising, if you look at the history of our nation and the world. Segregation was not outlawed in US schools until 1954, and there are women who voted in this election who were born before American women got the right to vote in 1920. The nationalistic backlash against sending jobs to China (which was one of the chief ways that we reduced global poverty to its lowest level ever, by the way) really shouldn’t seem so strange when we remember that over 100,000 Japanese-Americans were literally forcibly relocated into camps as recently as 1942. The fact that so many White Americans seem all right with the biases against Black people in our justice system may not seem so strange when we recall that systemic lynching of Black people in the US didn’t end until the 1960s.

The wonder, in fact, is that we have made as much progress as we have. Tribal instinct is not a strange aberration of human behavior; it is our evolutionary default setting.

Indeed, perhaps it is unreasonable of me to ask humanity to change its ways so fast! We had millions of years to learn how to live the wrong way, and I’m giving you only a few centuries to learn the right way?

The problem, of course, is that the pace of technological change leaves us with no choice. It might be better if we could wait a thousand years for people to gradually adjust to globalization and become cosmopolitan; but climate change won’t wait a hundred, and nuclear weapons won’t wait at all. We are thrust into a world that is changing very fast indeed, and I understand that it is hard to keep up; but there is no way to turn back that tide of change.

Yet “turn back the tide” does seem to be part of the core message of the Trump voter, once you get past the racial slurs and sexist slogans. People are afraid of what the world is becoming. They feel that it is leaving them behind. Coal miners fret that we are leaving them behind by cutting coal consumption. Factory workers fear that we are leaving them behind by moving the factory to China or inventing robots to do the work in half the time for half the price.

And truth be told, they are not wrong about this. We are leaving them behind. Because we have to. Because coal is polluting our air and destroying our climate, we must stop using it. Moving the factories to China has raised them out of the most dire poverty, and given us a fighting chance toward ending world hunger. Inventing the robots is only the next logical step in the process that has carried humanity forward from the squalor and suffering of primitive life to the security and prosperity of modern society—and it is a step we must take, for the progress of civilization is not yet complete.

They wouldn’t have to let themselves be left behind, if they were willing to accept our help and learn to adapt. That carbon tax that closes your coal mine could also pay for your basic income and your job-matching program. The increased efficiency from the automated factories could provide an abundance of wealth that we could redistribute and share with you.

But this would require them to rethink their view of the world. They would have to accept that climate change is a real threat, and not a hoax created by… uh… never was clear on that point actually… the Chinese maybe? But 45% of Trump supporters don’t believe in climate change (and that’s actually not as bad as I’d have thought). They would have to accept that what they call “socialism” (which really is more precisely described as social democracy, or tax-and-transfer redistribution of wealth) is actually something they themselves need, and will need even more in the future. But despite rising inequality, redistribution of wealth remains fairly unpopular in the US, especially among Republicans.

Above all, it would require them to redefine their tribe, and start listening to—and valuing the lives of—people that they currently do not.

Perhaps we need to redefine our tribe as well; many liberals have argued that we mistakenly—and dangerously—did not include people like Trump voters in our tribe. But to be honest, that rings a little hollow to me: We aren’t the ones threatening to deport people or ban them from entering our borders. We aren’t the ones who want to build a wall (though some have in fact joked about building a wall to separate the West Coast from the rest of the country, I don’t think many people really want to do that). Perhaps we live in a bubble of liberal media? But I make a point of reading outlets like The American Conservative and The National Review for other perspectives (I usually disagree, but I do at least read them); how many Trump voters do you think have ever read the New York Times, let alone Huffington Post? Cosmopolitans almost by definition have the more inclusive tribe, the more open perspective on the world (in fact, do I even need the “almost”?).

Nor do I think we are actually ignoring their interests. We want to help them. We offer to help them. In fact, I want to give these people free money—that’s what a basic income would do, it would take money from people like me and give it to people like them—and they won’t let us, because that’s “socialism”! Rather, we are simply refusing to accept their offered solutions, because those so-called “solutions” are beyond unworkable; they are absurd, immoral and insane. We can’t bring back the coal mining jobs, unless we want Florida underwater in 50 years. We can’t reinstate the trade tariffs, unless we want millions of people in China to starve. We can’t tear down all the robots and force factories to use manual labor, unless we want to trigger a national—and then global—economic collapse. We can’t do it their way. So we’re trying to offer them another way, a better way, and they’re refusing to take it. So who here is ignoring the concerns of whom?

Of course, the fact that it’s really their fault doesn’t solve the problem. We do need to take it upon ourselves to do whatever we can, because, regardless of whose fault it is, the world will still suffer if we fail. And that presents us with our most difficult task of all, a task that I fully expect to spend a career trying to do and yet still probably failing: We must understand the human tribal instinct well enough that we can finally begin to change it. We must know enough about how human beings form their mental tribes that we can actually begin to shift those parameters. We must, in other words, cure bigotry—and we must do it now, for we are running out of time.

Toward an economics of social norms

Sep 17, JDN 2457649

It is typical in economics to assume that prices are set by perfect competition in markets with perfect information. This is obviously ridiculous, so many economists do go further and start looking into possible distortions of the market, such as externalities and monopolies. But almost always the assumption is still that human beings are neoclassical rational agents, what I call “infinite identical psychopaths”, selfish profit-maximizers with endless intelligence and zero empathy.

What happens when we recognize that human beings are not like this, but in fact are empathetic, social creatures, who care about one another and work toward the interests of (what they perceive to be) their tribe? How are prices really set? What actually decides what is made and sold? What does economics become once you understand sociology? (The good news is that experiments are now being done to find out.)

Presumably some degree of market competition is involved, and no small amount of externalities and monopolies. But one of the very strongest forces involved in setting prices in the real world is almost completely ignored, and that is social norms.

Social norms are tremendously powerful. They will drive us to bear torture, fight and die on battlefields, even detonate ourselves as suicide bombs. When we talk about “religion” or “ideology” motivating people to do things, really what we are talking about is social norms. While some weaker norms can be overridden, no amount of economic incentive can ever override a social norm at its full power. Moreover, most of our behavior in daily life is driven by social norms: How to dress, what to eat, where to live. Even the fundamental structure of our lives is written by social norms: Go to school, get a job, get married, raise a family.

Even academic economists, who imagine themselves one part purveyor of ultimate wisdom and one part perfectly rational agent, are clearly strongly driven by social norms—what problems are “interesting”, which researchers are “renowned”, what approaches are “sensible”, what statistical methods are “appropriate”. If economists were perfectly rational, dynamic stochastic general equilibrium models would be in the dustbin of history (because, like string theory, they have yet to lead to a single useful empirical prediction), research journals would not be filled with endless streams of irrelevant but impressive equations (I recently read one that basically spent half a page of calculus re-deriving the concept of GDP—and computer-generated gibberish has been published, because its math looked so impressive), and instead of frequentist p-values (often misinterpreted at that), all the statistics would be written in the form of Bayesian log-odds.
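For concreteness, here is what reporting results in Bayesian log-odds amounts to, in a minimal two-hypothesis sketch; the prior and likelihood ratio are placeholder numbers, not from any study:

```python
import math

# Bayes' rule in log-odds form: posterior log-odds equal
# prior log-odds plus the log of the likelihood ratio.
def posterior_log_odds(prior_odds, likelihood_ratio):
    return math.log10(prior_odds) + math.log10(likelihood_ratio)

# A skeptical 1:99 prior plus data 20 times likelier under the
# hypothesis still leaves posterior odds of roughly 1:5 against.
print(posterior_log_odds(1 / 99, 20))  # about -0.69
```

Unlike a p-value, that number answers the question we actually care about: how strongly the evidence should shift our belief in the hypothesis.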

Indeed, in light of all this, I often like to say that to a first approximation, all human behavior is social norms.

How does this affect buying and selling? Well, first of all, there are some things we refuse to buy and sell, or at least that most of us refuse to buy and sell, and that we use social pressure, public humiliation, or even the force of law to prevent. You’re not supposed to sell children. You’re not supposed to sell your vote. You’re not even supposed to sell sexual favors (though every society has always had a large segment of people who do, and more recently people are becoming more open to the idea of at least decriminalizing it). If we were neoclassical rational agents, we would have no such qualms; if we want something and someone is willing to sell it to us, we’ll buy it. But as actual human beings with emotions and social norms, we recognize that there is something fundamentally different about selling your vote as opposed to selling a shirt or a television. It’s not always immediately obvious where to draw the line, which is why sex work can be such a complicated issue (You can’t get paid to have sex… unless someone is filming it?). Different societies may do it differently: Part of the challenge of fighting corruption in Third World countries is that much of what we call corruption—and which actually is harmful to long-run economic development—isn’t perceived as “corruption” by the people involved in it, just as social custom (“Of course I’d hire my cousin! What kind of cousin would I be if I didn’t?”). Yet despite all that, almost everyone agrees that there is a line to be drawn. So there are whole markets that theoretically could exist, but don’t, or only exist as tiny black markets most people never participate in, because we consider selling those things morally wrong. Recently a whole subfield of cognitive economics has emerged studying these repugnant markets.

Even if a transaction is not considered so repugnant as to be unacceptable, there are also other classes of goods that are in some sense unsavory; something you really shouldn’t buy, but you’re not a monster for doing so. These are often called sin goods, and they have always included drugs, alcohol, and gambling—and I do mean always, as every human civilization has had these things—they include prostitution where it is legal, and as social norms change they are now beginning to include oil and coal as well (which can only be good for the future of Earth’s climate). Sin goods are systematically more expensive than they should be for their marginal cost, because most people are unwilling to participate in selling them. As a result, the financial returns for producing sin goods are systematically higher. Actually, this could partially explain why Wall Street banks are so profitable; when the banking system is as corrupt as it is—and you’re not imagining that; banks have been caught laundering money for terrorists—then banking becomes a sin good, and good people don’t want to participate in it. Or perhaps the effect runs the other way around: Banking has been viewed as sinful for centuries (in Medieval times, usury was punished much the same way as witchcraft), and as a result only the sort of person who doesn’t care about social and moral norms becomes a banker—and so the banking system becomes horrifically corrupt. Is this a reason for good people to force ourselves to become bankers? Or is there another way—perhaps credit unions?

There are other ways that social norms drive prices as well. We have a concept of a “fair wage”, which is quite distinct from the economic concept of a “market-clearing wage”. When people ask whether someone’s wage is fair, they don’t look at supply and demand and try to determine whether there are too many or too few people offering that service. They ask themselves what the labor is worth—what value has it added—and how hard that person has worked to do it—what cost it bore. Now, these aren’t totally unrelated to supply and demand (people are less likely to supply harder work, people are more likely to demand higher value), so it’s conceivable that these heuristics could lead us to more or less achieve the market-clearing wage most of the time. But there are also some systematic distortions to consider.

Perhaps the most important way fairness matters in economics is necessities: Basic requirements for human life such as food, housing, and medicine. The structure of our society also makes transportation, education, and Internet access increasingly necessary for basic functioning. From the perspective of an economist, it is a bit paradoxical how angry people get when the price of something important (such as healthcare) is increased: If it’s extremely valuable, shouldn’t you be willing to pay more? Why does it bother you less when something like a Lamborghini or a Rolex rises in price, something that almost certainly wasn’t even worth its previous price? You’re going to buy the necessities anyway, right? Well, as far as most economists are concerned, that’s all that matters—what gets bought and sold. But of course as a human being I do understand why people get angry about these things, and it is because they have to buy them anyway. When someone like Martin Shkreli raises the prices on basic goods, we feel exploited. There’s even a way to make this economically formal: When demand is highly inelastic, we are rightly very sensitive to the possibility of a monopoly, because monopolies under inelastic demand can extract huge profits and cause similarly huge amounts of damage to the welfare of their customers. That isn’t quite how most people would put it, but I think that has something to do with the ultimate reason we evolved that heuristic: It’s dangerous to let someone else control your basic necessities, because that gives them enormous power to exploit you. If they control things that aren’t as important to you, that doesn’t matter so much, because you can always do without if you must. So a norm that keeps businesses from overcharging on necessities is very important—and probably not as strong anymore as it should be.
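There is a standard way to formalize that monopoly-under-inelastic-demand point, the Lerner index; spelling it out here is my addition rather than something from the paragraph above:

```latex
% A profit-maximizing monopolist facing demand elasticity \varepsilon
% sets price P relative to marginal cost MC so that
\frac{P - MC}{P} = \frac{1}{|\varepsilon|}
```

As demand approaches the inelastic region and |ε| falls toward 1, the markup approaches the entire price; at |ε| = 1.25 the monopolist already charges five times marginal cost. That is the formal shadow of the intuition that whoever controls your necessities has enormous power to exploit you.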

Another very important way that fairness and markets can be misaligned is talent: What if something is just easier for one person than another? If you achieve the same goal with half the work, should you be rewarded more for being more efficient, or less because you bore less cost? Neoclassical economics doesn’t concern itself with such questions, asking only if supply and demand reached equilibrium. But we as human beings do care about such things; we want to know what wage a person deserves, not just what wage they would receive in a competitive market.

Could we be wrong to do that? Might it be better if we just let the market do its work? In some cases I think that may actually be true. Part of why CEO pay is rising so fast despite being uncorrelated (or even negatively correlated) with corporate profitability is that CEOs have convinced us (or convinced their boards of directors) that this is fair, that they deserve more stock options. They even convince them that their pay is based on performance, by using highly distorted measures of performance. If boards thought more like economic rational agents, when a CEO asked for more pay they’d ask: “What other company gave you a higher offer?” and if the CEO didn’t have an answer, they’d laugh and refuse the raise. Because in purely economic terms, that is all a salary does: it keeps you from quitting to work somewhere else. The competitive mechanism of the market is then supposed to ensure that your wage aligns with your marginal cost and marginal productivity.

On the other hand, there are many groups of people who simply aren’t doing very well in the market: Women, racial minorities, people with disabilities. There are a lot of reasons for this, some of which might go away if markets were made more competitive—the classic argument that competitive markets reward companies that don’t discriminate—but many clearly wouldn’t. Indeed, that argument was never as strong as it at first appears; in a society where social norms are strongly in favor of bigotry, it can be completely economically rational to participate in bigotry to avoid being penalized. When Chick-Fil-A was revealed to have donated to anti-LGBT political groups, many people tried to boycott—but their sales actually increased from the publicity. Honestly it’s a bit baffling that they promised not to donate to such causes anymore; it was apparently a profitable business decision to be revealed as supporters of bigotry. And even when discrimination does hurt economic performance, companies are run by human beings, and they are still quite capable of discriminating regardless. Indeed, the best evidence we have that discrimination is inefficient comes from… businesses that persist in discriminating despite the fact that it is inefficient.

But okay, suppose we actually did manage to make everyone compensated according to their marginal productivity. (Or rather, what Rawls derided: “From each according to his marginal productivity, to each according to his threat advantage.”) The market would then clear and be highly efficient. Would that actually be a good thing? I’m not so sure.

A lot of people are highly unproductive through no fault of their own—particularly children and people with disabilities. Much of this is not discrimination; it’s just that they aren’t as good at providing services. Should we simply leave them to fend for themselves? Then there’s the key point about what marginal means in this case—it means “given what everyone else is doing”. But that means that you can be made obsolete by someone else’s actions, and in this era of rapid technological advancement, jobs become obsolete faster than ever. Unlike a lot of people, I recognize that it makes no sense to keep people working at jobs that can be automated—the machines are better. But still, what do we do with the people whose jobs have been eliminated? Do we treat them as worthless? When automated buses become affordable—and they will; I give it 20 years—do we throw the human bus drivers under them?

One way out is of course a basic income: Let the market wage be what it will, and then use the basic income to provide for what human beings deserve irrespective of their market productivity. I definitely support a basic income, of course, and this does solve the most serious problems like children and quadriplegics starving in the streets.

But as I read more of the arguments by people who favor a job guarantee instead of a basic income, I begin to understand better why they are uncomfortable with the idea: It doesn’t seem fair. A basic income breaks once and for all the link between “a fair day’s work” and “a fair day’s wage”. It runs counter to this very deep-seated intuition most people have that money is what you earn—and thereby deserve—by working, and only by working. That is an extremely powerful social norm, and breaking it will be very difficult; so it’s worth asking: Should we even try to break it? Is there a way to achieve a system where markets are both efficient and fair?

I’m honestly not sure; but I do know that we could make substantial progress from where we currently stand. Most billionaire wealth is pure rent in the economic sense: It’s received by corruption and market distortion, not by efficient market competition. Most poverty is due to failures of institutions, not lack of productivity of workers. As George Monbiot famously wrote, “If wealth was the inevitable result of hard work and enterprise, every woman in Africa would be a millionaire.” Most of the income disparity between White men and others is due to discrimination, not actual skill—and what skill differences there are are largely the result of differences in education and upbringing anyway. So if we do in fact correct these huge inefficiencies, we will also be moving toward fairness at the same time. But still that nagging thought remains: When all that is done, will there come a day where we must decide whether we would rather have an efficient economy or a just society? And if it does, will we decide the right way?

Lukewarm support is a lot better than opposition

July 23, JDN 2457593

Depending on your preconceptions, this statement may seem either eminently trivial or offensively wrong: Lukewarm support is a lot better than opposition.

I’ve always been in the “trivial” camp, so it has taken me a while to really understand where people are coming from when they say things like the following.

From a civil rights activist blogger (“POC” being “person of color” in case you didn’t know):

Many of my POC friends would actually prefer to hang out with an Archie Bunker-type who spits flagrantly offensive opinions, rather than a colorblind liberal whose insidious paternalism, dehumanizing tokenism, and cognitive indoctrination ooze out between superficially progressive words.

From the Daily Kos:

Right-wing racists are much more honest, and thus easier to deal with, than liberal racists.

From a Libertarian blogger:

I can deal with someone opposing me because of my politics. I can deal with someone who attacks me because of my religious beliefs. I can deal with open hostility. I know where I stand with people like that.

They hate me or my actions for (insert reason here). Fine, that is their choice. Let’s move onto the next bit. I’m willing to live and let live if they are.

But I don’t like someone buttering me up because they need my support, only to drop me the first chance they get. I don’t need sweet talk to distract me from the knife at my back. I don’t need someone promising the world just so they can get a boost up.

In each of these cases, people are expressing a preference for dealing with someone who actively opposes them, rather than someone who mostly supports them. That’s really weird.

The basic fact that lukewarm support is better than opposition is basically a mathematical theorem. In a democracy or anything resembling one, if you have the majority of population supporting you, even if they are all lukewarm, you win; if you have the majority of the population opposing you, even if the remaining minority is extremely committed to your cause, you lose.

Yes, okay, it does get slightly more complicated than that, as in most real-world democracies small but committed interest groups actually can pressure policy more than lukewarm majorities (the special interest effect); but even then, you are talking about the choice between no special interests and a special interest actively against you.
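A toy turnout model (my illustration, not something from the post) captures both the theorem and the caveat:

```python
def margin(supporters, s_commit, opponents, o_commit):
    """Expected vote margin for your side, when each person votes
    with probability equal to their commitment level."""
    return supporters * s_commit - opponents * o_commit

# Caveat: a committed minority can out-vote a lukewarm majority.
print(margin(60, 0.5, 40, 0.9))  # -6.0: you lose despite the majority

# Theorem: converting an opponent into a lukewarm supporter always
# helps, removing 0.9 expected votes against you and adding 0.5 for
# you, a +1.4 swing each time. Five conversions flip the outcome.
print(margin(65, 0.5, 35, 0.9))  # +1.0: now you win
```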

There is a valid question of whether it is more worthwhile to get a small, committed coalition, or a large, lukewarm coalition; but at the individual level, it is absolutely undeniable that supporting you is better for you than opposing you, full stop. I mean that in the same sense that the Pythagorean theorem is undeniable; it’s a theorem, it has to be true.

If you had the opportunity to immediately replace every single person who opposes you with someone who supports you but is lukewarm about it, you’d be insane not to take it. Indeed, this is basically how all social change actually happens: Committed supporters persuade committed opponents to become lukewarm supporters, until they get a majority and start winning policy votes.

If this is indeed so obvious and undeniable, why are there so many people… trying to deny it?

I came to realize that there is a deep psychological effect at work here. I could find very little in the literature describing this effect, which I’m going to call heretic effect (though the literature on betrayal aversion is at least somewhat related).

Heretic effect is the deeply-ingrained sense human beings tend to have (as part of the overall tribal paradigm) that one of the worst things you can possibly do is betray your tribe. It is worse than being in an enemy tribe, worse even than murdering someone. The one absolutely inviolable principle is that you must side with your tribe.

This is one of the biggest barriers to police reform, by the way: The Blue Wall of Silence is the result of police officers identifying themselves as a tight-knit tribe and refusing to betray one of their own for anything. I think the best option for convincing police officers to support reform is to reframe violations of police conduct as themselves betrayals—the betrayal is not the IA taking away your badge, the betrayal is you shooting an unarmed man because he was Black.

Heretic effect is a particular form of betrayal aversion, where we treat those who are similar to our tribe but not quite part of it as the very worst sort of people, worse than even our enemies, because at least our enemies are not betrayers. In fact it isn’t really betrayal, but it feels like betrayal.

I call it “heretic effect” because of the way that exclusivist religions (including all the Abrahamic religions, and especially Christianity and Islam) focus so much of their energy on rooting out “heretics”, people who almost believe the same as you do but not quite. The Spanish Inquisition wasn’t targeted at Buddhists or even Muslims; it was targeted at Christians who slightly disagreed with Catholicism. Why? Because while Buddhists might be the enemy, Protestants were betrayers. You can still see this in the way that Muslim societies treat “apostates”, those who once believed in Islam but don’t anymore. Indeed, the very fact that Christianity and Islam are at each other’s throats, rather than Hinduism and atheism, shows that it’s the people who almost agree with you that really draw your hatred, not the people whose worldview is radically distinct.

This is the effect that makes people dislike lukewarm supporters; like heresy, lukewarm support feels like betrayal. You can clearly hear that in the last quote: “I don’t need sweet talk to distract me from the knife at my back.” Believe it or not, Libertarians, my support for replacing the social welfare state with a basic income, decriminalizing drugs, and dramatically reducing our incarceration rate is not deception. Nor do I think I’ve been particularly secretive about my desire to make taxes more progressive and environmental regulations stronger, the things you absolutely don’t agree with. Agreeing with you on some things but not on other things is not in fact the same thing as lying to you about my beliefs or infiltrating and betraying your tribe.

That said, I do sort of understand why it feels that way. When I agree with you on one thing (decriminalizing cannabis, for instance), it sends you a signal: “This person thinks like me.” You may even subconsciously tag me as a fellow Libertarian. But then I go and disagree with you on something else that’s just as important (strengthening environmental regulations), and it feels to you like I have worn your Libertarian badge only to stab you in the back with my treasonous environmentalism. I thought you were one of us!

Similarly, if you are a social justice activist who knows all the proper lingo and is constantly aware of “checking your privilege”, and I start by saying, yes, racism is real and terrible, and we should definitely be working to fight it, but then I question something about your language and approach, that feels like a betrayal. At least if I’d come in wearing a Trump hat you could have known which side I was really on. (And indeed, I have had people unfriend me or launch into furious rants at me for questioning the orthodoxy in this way. And sure, it’s not as bad as actually being harassed on the street by bigots—a thing that has actually happened to me, by the way—but it’s still bad.)

But if you can resist this deep-seated impulse and really think carefully about what’s happening here, agreeing with you partially clearly is much better than not agreeing with you at all. Indeed, there’s a fairly smooth function there, wherein the more I agree with your goals the more our interests are aligned and the better we should get along. It’s not completely smooth, because certain things are sort of package deals: I wouldn’t want to eliminate the social welfare system without replacing it with a basic income, whereas many Libertarians would. I wouldn’t want to ban fracking unless we had established a strong nuclear infrastructure, but many environmentalists would. But on the whole, more agreement is better than less agreement—and really, even these examples are actually surface-level results of deeper disagreement.

Getting this reaction from social justice activists is particularly frustrating, because I am on your side. Bigotry corrupts our society at a deep level and holds back untold human potential, and I want to do my part to undermine and hopefully one day destroy it. When I say that maybe “privilege” isn’t the best word to use and warn you about not implicitly ascribing moral responsibility across generations, this is not me being a heretic against your tribe; this is a strategic policy critique. If you are writing a letter to the world, I’m telling you to leave out paragraph 2 and correcting your punctuation errors, not crumpling up the paper and throwing it into a fire. I’m doing this because I want you to win, and I think that your current approach isn’t working as well as it should. Maybe I’m wrong about that—maybe paragraph 2 really needs to be there, and you put that semicolon there on purpose—in which case, go ahead and say so. If you argue well enough, you may even convince me; if not, this is the sort of situation where we can respectfully agree to disagree. But please, for the love of all that is good in the world, stop saying that I’m worse than the guys in the KKK hoods. Resist that feeling of betrayal so that we can have a constructive critique of our strategy. Don’t do it for me; do it for the cause.

The powerful persistence of bigotry

JDN 2457527

Bigotry has been a part of human society since the beginning—people have been hating people they perceive as different since as long as there have been people, and maybe even before that. I wouldn’t be surprised to find that different tribes of chimpanzees or even elephants hold bigoted beliefs about each other.

Yet it may surprise you that neoclassical economics has basically no explanation for this. There is a long-standing famous argument that bigotry is inherently irrational: If you hire based on anything aside from actual qualifications, you are leaving money on the table for your company. Because women CEOs are paid less and perform better, simply ending discrimination against women in top executive positions could save any typical large multinational corporation tens of millions of dollars a year. And yet, they don’t! Fancy that.

More recently there has been work on the concept of statistical discrimination, under which it is rational (in the sense of narrowly-defined economic self-interest) to discriminate because categories like race and gender may provide some statistically valid stereotype information. For example, “Black people are poor” is obviously not true across the board, but race is strongly correlated with wealth in the US; “Asians are smart” is not a universal truth, but Asian-Americans do have very high educational attainment. In the absence of more reliable information that might be your best option for making good decisions. Of course, this creates a vicious cycle where people in the positive stereotype group are better off and have more incentive to improve their skills than people in the negative stereotype group, thus perpetuating the statistical validity of the stereotype.

But of course that assumes that the stereotypes are statistically valid, and that employers don’t have more reliable information. Yet many stereotypes aren’t even true statistically: If “women are bad drivers”, then why do men cause 75% of traffic fatalities? Furthermore, in most cases employers have more reliable information—resumes with education and employment records. Asian-Americans are indeed more likely to have bachelor’s degrees than Latino Americans, but when it says right on Mr. Lorenzo’s resume that he has a B.A. and on Mr. Suzuki’s resume that he doesn’t, that racial stereotype no longer provides you with any further information. Yet even if the resumes are identical, employers will be more likely to hire a White applicant than a Black applicant, and more likely to hire a male applicant than a female applicant—we have directly tested this in experiments. In an experiment where employers had direct performance figures in front of them, they were still more likely to choose the man when they had the same scores—and sometimes even when the woman had a higher score!
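In probability terms, the resume screens off the stereotype. Here is a minimal sketch with made-up base rates, assuming (as the argument does) that the credential itself carries the relevant information:

```python
# Made-up base rates: groups differ in P(degree), which is all the
# statistical-discrimination story has to work with.
p_degree = {"group_A": 0.54, "group_B": 0.16}

def p_qualified(has_degree, group=None):
    # Once the resume states the degree outright, the group label
    # adds nothing: P(qualified | degree, group) = P(qualified | degree).
    return 0.9 if has_degree else 0.3  # illustrative numbers

print(p_qualified(True, "group_A") == p_qualified(True, "group_B"))  # True
```

An employer who still weighs the group label after reading the resume is therefore not doing statistical inference, and the experiments just described show that this is exactly what employers do.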

Even our assessments of competence are often biased, probably subconsciously; given the same essay to review, most reviewers find more spelling errors and are more concerned about those errors if they are told that the author is Black. If they thought the author was White, they thought of the errors as “minor mistakes” by a student with “otherwise good potential”; but if they thought the author was Black, they “can’t believe he got into this school in the first place”. These reviewers were reading the same essay. The alleged author’s race was decided randomly. Most if not all of these reviewers were not consciously racist. Subconscious racial biases are all over the place; almost everyone exhibits some subconscious racial bias.

No, discrimination isn’t just rational inference based on valid (if unfortunate and self-reinforcing) statistical trends. There is a significant component of just outright irrational bigotry.

We’re seeing this play out in North Carolina; due to their arbitrary discrimination against lesbian, gay, bisexual and especially transgender people, they are now hemorrhaging jobs as employers pull out, and their federal funding for student loans is now in jeopardy due to the obvious Title IX violation. This is obviously not in the best interest of the people of North Carolina (even the ones who aren’t LGBT!); and it’s all being justified on the grounds of an epidemic of sexual assaults by people pretending to be trans that doesn’t even exist. It turns out that more Republican Senators have been arrested for sexual misconduct in bathrooms than transgender people—and while the number of transgender people in the US is surprisingly hard to measure, it’s clearly a lot larger than the number of Republican Senators!

In fact, discrimination is even more irrational than it may seem, because empirically the benefits of discrimination (such as they are—short-term narrow economic self-interest) fall almost entirely on the rich while the harms fall mainly on the poor, yet poor people are much more likely to be racist! Since income and education are highly correlated, education accounts for some of this effect. This is reason to be hopeful, for as educational attainment has soared, we have found that racism has decreased.

But education doesn’t seem to explain the full effect. One theory to account for this is what’s called last-place aversion, a highly pernicious heuristic where people are less concerned about their own absolute status than they are about not having the worst status. In economic experiments, people are usually more willing to give money to people worse off than them than to those better off than them—unless giving it to the worse-off would make those people better off than they themselves are. I think we actually need to do further study to see what happens if it would make those other people exactly as well-off as they are, because that turns out to be absolutely critical to whether people would be willing to support a basic income. In other words, do people count “tied for last” as last? Would they rather play a game where everyone gets $100, or one where they get $50 but everyone else only gets $10?

I would hope that humanity is better than that—that we would want to play the $100 game, which is analogous to a basic income. But when I look at the extreme and persistent inequality that has plagued human society for millennia, I begin to wonder if perhaps there really are a lot of people who think of the world in such zero-sum, purely relative terms, and care more about being better than others than they do about doing well themselves. Perhaps the horrific poverty of Sub-Saharan Africa and Southeast Asia is, for many First World people, not a bug but a feature; we feel richer when we know they are poorer. Scarcity seems to amplify this zero-sum thinking; racism gets worse whenever we have economic downturns. Precisely because discrimination is economically inefficient, this can create a vicious cycle where poverty causes bigotry which worsens poverty.

There is also something deeper going on, something evolutionary; bigotry is part of what I call the tribal paradigm, the core aspect of human psychology that defines identity in terms of in-groups which are good and out-groups which are bad. We will probably never fully escape the tribal paradigm, but this is not a reason to give up hope; we have made substantial progress in reducing bigotry in many places. What seems to happen is that people learn to expand their mental tribe, so that it encompasses larger and larger groups—not just White Americans but all Americans, or not just Americans but all human beings. Peter Singer calls this the Expanding Circle (also the title of his book on it). We may one day be able to make our tribe large enough to encompass all sentient beings in the universe; at that point, it’s just fine if we are only interested in advancing the interests of those in our tribe, because our tribe would include everyone. Yet I don’t think any of us are quite there yet, and some people have a really long way to go.

But with these expanding tribes in mind, perhaps I can leave you with a fact that is as counter-intuitive as it is encouraging, and even easier still to take out of context: Racism was better than what came before it. What I mean by this is not that racism is good—of course it’s terrible—but that in order to be racism, to define the whole world into a small number of “racial groups”, people already had to enormously expand their mental tribe from where it started. When we evolved on the African savannah millions of years ago, our tribe was 150 people; to this day, that’s about the number of people we actually feel close to and interact with on a personal level. We could have stopped there, and for millennia we did. But over time we managed to expand beyond that number, to a village of 1,000, a town of 10,000, a city of 100,000. More recently we attained mental tribes of whole nations, in some cases hundreds of millions of people. Racism is about that same scale, if not a bit larger; what most people (rather arbitrarily, and in a way that changes over time) call “White” constitutes about a billion people. “Asian” (including South Asian) is almost four billion. These are astonishingly huge figures, some seven orders of magnitude larger than what we originally evolved to handle. The ability to feel empathy for all “White” people is just a little bit smaller than the ability to feel empathy for all people period. Similarly, while today the gender in “all men are created equal” is jarring to us, the idea at the time really was an incredibly radical broadening of the moral horizon—Half the world? Are you mad?

Therefore I am confident that one day, not too far from now, the world will take that next step, that next order of magnitude, which many of us already have (or try to), and we will at last conquer bigotry, and if not eradicate it entirely then force it completely into the most distant shadows and deny it its power over our society.

Why is there a “corporate ladder”?

JDN 2457482

We take this concept for granted; there are “entry-level” jobs, and then you can get “promoted”, until perhaps you’re lucky enough or talented enough to rise to the “top”. Jobs that are “higher” on this “ladder” pay better, offer superior benefits, and also typically involve more pleasant work environments and more autonomy, though they also typically require greater skill and more responsibility.

But I contend that an alien lifeform encountering our planet for the first time, even one that somehow knew all about neoclassical economic theory (admittedly weird, but bear with me here), would be quite baffled by this arrangement.

The classic “rags to riches” story always involves starting work in some menial job like working in the mailroom, from which you then more or less magically rise to the position of CEO. (The intermediate steps are rarely told in the story, probably because they undermine the narrative; successful entrepreneurs usually make their first successful business using funds from their wealthy relatives, and if you haven’t got any wealthy relatives, that’s just too bad for you.)

Even despite its dubious accuracy, the story is bizarre in another way: There’s no reason to think that being really good at working in the mail room has anything at all to do with being good at managing a successful business. They’re totally orthogonal skills. They may even be contrary in personality terms; the kind of person who makes a good entrepreneur is innovative, decisive, and independent—and those are exactly the kind of personality traits that will make you miserable in a menial job where you’re constantly following orders.

Yet in almost every profession, we have this process where you must first “earn” your way to “higher” positions by doing menial and at best tangentially-related tasks.

This even happens in science, where we ought to know better! There’s really no reason to think that being good at taking multiple-choice tests strongly predicts your ability to do scientific research, nor that being good at grading multiple-choice tests does either; and yet to become a scientific researcher you must pass a great many multiple-choice tests (at bare minimum the SAT and GRE), and probably as a grad student you’ll end up grading some as well.

This process is frankly bizarre; worldwide, we are probably leaving tens of trillions of dollars of productivity on the table by instituting these arbitrary selection barriers that have nothing to do with actual skills. Simply optimizing our process of CEO selection alone would probably add a trillion dollars to US GDP.

If neoclassical economics were right, we should assign jobs solely based on marginal productivity; there should be some sort of assessment of your ability at each task you might perform, and whichever you’re best at (in the sense of comparative advantage) is what you end up doing, because that’s what you’ll be paid the most to do. Actually for this to really work the selection process would have to be extremely cheap, extremely reliable, and extremely fast, lest the friction of the selection system itself introduce enormous inefficiencies. (The fact that this never seems to work even in SF stories with superintelligent sorting AIs, let alone in real life, is just so much the worse for neoclassical economics. The last book I read in which it actually seemed to work was Harry Potter and the Sorcerer’s Stone—so it was literally just magic.)
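To make the idea concrete, here is a minimal sketch of what assignment by marginal productivity would even mean, assuming we somehow had reliable measurements. The productivity numbers are invented, and scipy’s Hungarian-algorithm solver stands in for the impossibly cheap, reliable, and fast selection process:

```python
# A toy version of the neoclassical ideal: assign each worker to the task
# that maximizes total output, with no notion of "higher" or "lower" jobs.
import numpy as np
from scipy.optimize import linear_sum_assignment

# productivity[i][j] = output of worker i if assigned to task j (invented)
productivity = np.array([
    [9.0, 4.0, 2.0],  # worker 0: strongest at task 0
    [8.0, 7.0, 1.0],  # worker 1: absolutely better at task 0, but...
    [1.0, 3.0, 6.0],  # worker 2: strongest at task 2
])

workers, tasks = linear_sum_assignment(productivity, maximize=True)
for w, t in zip(workers, tasks):
    print(f"worker {w} -> task {t} (marginal product {productivity[w, t]})")
print("total output:", productivity[workers, tasks].sum())
# Worker 1 ends up on task 1 despite being better at task 0, because
# worker 0 is better still at task 0: comparative advantage at work.
```

Note what’s absent from the solution: nothing in it is a “promotion”, and nobody had to spend years in the mailroom to prove anything.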

The hope seems to be that competition will somehow iron out this problem, but in order for that to work, we must all be competing on a level playing field, and furthermore the mode of competition must accurately assess our real ability. The reason Olympic sports do a pretty good job of selecting the best athletes in the world is that they obey these criteria; the reason corporations do a terrible job of selecting the best CEOs is that they do not.

I’m quite certain I could do better than the former CEO of the late Lehman Brothers (and, to be fair, there are others who could do better still than I), but I’ll likely never get the chance to own a major financial firm—and I’m a lot closer than most people. I get to tick most of the boxes you need to be in that kind of position: White, male, American, mostly able-bodied, intelligent, hard-working, with a graduate degree in economics. Alas, I was only born in the top 10% of the US income distribution, not the top 1% or 0.01%, so my odds are considerably reduced. (That and I’m pretty sure that working for a company as evil as the late Lehman Brothers would destroy my soul.) Somewhere in Sudan there is a little girl who would be the best CEO of an investment bank the world has ever seen, but she is dying of malaria. Somewhere in India there is a little boy who would have been a greater physicist than Einstein, but no one ever taught him to read.

Competition may help reduce the inefficiency of this hierarchical arrangement—but it cannot explain why we use a hierarchy in the first place. Some people may be especially good at leadership and coordination; but in an efficient system they wouldn’t be seen as “above” other people, but as useful coordinators and advisors that people consult to ensure they are allocating tasks efficiently. You wouldn’t do things because “your boss told you to”, but because those things were the most efficient use of your time, given what everyone else in the group was doing. You’d consult your coordinator often, and usually take their advice; but you wouldn’t treat that advice as orders you were required to follow.

Moreover, coordinators would probably not be paid much better than those they coordinate; what they were paid would depend on how much the success of the tasks depends upon efficient coordination, as well as how skilled other people are at coordination. It’s true that if having you there really does make a company with $1 billion in revenue 1% more efficient, that is in fact worth $10 million; but that isn’t how we set the pay of managers. It’s simply obvious to most people that managers should be paid more than their subordinates—that with a “promotion” comes more leadership and more pay. You’re “moving up the corporate ladder”; your pay reflects your higher status, not your marginal productivity.

This is not an optimal economic system by any means. And yet it seems perfectly natural to us to do this, and most people have trouble thinking any other way—which gives us a hint of where it’s probably coming from.

Perfectly natural. That is, instinctual. That is, evolutionary.

I believe that the corporate ladder, like most forms of hierarchy that humans use, is actually a recapitulation of our primate instincts to form a mating hierarchy with an alpha male.

First of all, the person in charge is indeed almost always male—over 90% of all high-level business executives are men. This is clearly discrimination, because women executives are paid less and yet show higher competence. Rare, underpaid, and highly competent is exactly the pattern we would expect in the presence of discrimination. If it were instead a lack of innate ability, we would expect that women executives would be much less competent on average, though they would still be rare and paid less. If there were no discrimination and no difference in ability, we would see equal pay, equal competence, and equal prevalence (this happens almost nowhere—the closest I think we get is in undergraduate admissions). Executives are also usually tall, healthy, and middle-aged—just like alpha males among chimpanzees and gorillas. (You can make excuses for why: Height is correlated with IQ, health makes you more productive, middle age is when you’re old enough to have experience but young enough to have vigor and stamina—but the fact remains, you’re matching the gorillas.)

Second, many otherwise-baffling economic decisions make sense in light of this hypothesis.

When a large company is floundering, why do we cut 20,000 laborers instead of simply reducing the CEO’s stock option package by half to save the same amount of money? Think back to the alpha male: Would he give himself less in a time of scarcity? Of course not. Nor would he remove his immediate subordinates, unless they had done something to offend him. If resources are scarce, the “obvious” answer is to take them from those at the bottom of the hierarchy—resource conservation is always accomplished at the expense of the lowest-status individuals.

Why are the very same poor people who would most stand to gain from redistribution of wealth often those who are most fiercely opposed to it? Because, deep down, they just instinctually “know” that alpha males are supposed to get the bananas, and if they are of low status it is their deserved lot in life. That is how people who depend on TANF and Medicaid to survive can nonetheless vote for Donald Trump. (As for how they can convince themselves that they “don’t get anything from the government”, that I’m not sure. “Keep your government hands off my Medicare!”)

Why is power an aphrodisiac, and for many an apparent excuse for bad behavior? I’ll let Cameron Anderson (a psychologist at UC Berkeley) give you the answer: “powerful people act with great daring and sometimes behave rather like gorillas”. With higher status comes a surge in testosterone (makes sense if you’re going to have more mates, and maybe even if you’re commanding an army—but running an investment bank?), which is directly linked to dominance behavior.

These attitudes may well have been adaptive for surviving in the African savannah 2 million years ago. In a world red in tooth and claw, having the biggest, strongest male be in charge of the tribe might have been the most efficient means of ensuring the success of the tribe—or rather I should say, the genes of the tribe, since the only reason we have a tribal instinct is that tribal instinct genes were highly successful at propagating themselves.

I’m actually sort of agnostic on the question of whether our evolutionary heuristics were optimal for ancient survival, or simply the best our brains could manage; but one thing is certain: They are not optimal today. The uninhibited dominance behavior associated with high status may work well enough for a tribal chieftain, but it could be literally apocalyptic when exhibited by the head of state of a nuclear superpower. Allocation of resources by status hierarchy may be fine for hunter-gatherers, but it is disastrously inefficient in an information technology economy.

From now on, whenever you hear “corporate ladder” and similar turns of phrase, I want you to substitute “primate status hierarchy”. You’ll quickly see how well it fits; and hopefully once enough people realize this, together we can all find a way to change to a better system.

Is America uniquely… mean?

JDN 2457454

I read this article yesterday which I found both very resonant and very disturbing: At least among First World countries, the United States really does seem uniquely, for lack of a better word, mean.

The formal psychological terminology is social dominance orientation; the political science term is authoritarianism. In economics, we notice the difference due to its effect on income inequality. But all of these concepts are capturing part of a deeper underlying reality that in the age of Trump I am finding increasingly hard to deny. The best predictor of support for Trump is authoritarianism.

Of course I’ve already talked about our enormous military budget; but then Tennessee had to make their official state rifle a .50-caliber weapon capable of destroying light tanks. There is something especially dominant, aggressive, and violent about American culture.

We are certainly not unique in the world as a whole—actually I think the amount of social dominance orientation, authoritarianism, and inequality in the US is fairly similar to the world average. We are unique in our gun ownership, but our military spending proportional to GDP is not particularly high by world standards—we’re just an extremely rich country. But in all these respects we are a unique outlier among First World countries; in many ways we resemble a rich authoritarian petrostate like Qatar rather than a European social democracy like France or the UK. (At least we’re not Saudi Arabia?)

More than other First World cultures, Americans believe in hierarchy; they believe that someone should be on top and other people should be on the bottom. More than that, they believe that people “like us” should be on top and people “not like us” should be on the bottom, however that is defined—often in terms of race or religion, but not necessarily.

Indeed, one of the things I find most baffling about this is that it is often more important to people that others be held down than that they themselves be lifted up. This is the only way I can make sense of the fact that people who have watched their wages be drained into the pockets of billionaires for a generation can think that the most important things to do right now are block out illegal immigrants and deport Muslims.

It seems that people become convinced that their own status, whatever it may be, is deserved: If they are rich, it is obviously because they are so brilliant and hard-working (something Trump clearly believes about himself, being a textbook example of Narcissistic Personality Disorder); if they are poor, it is obviously because they are so incompetent and lazy. Thus, being lifted up doesn’t make sense; why would you give me things I don’t deserve?

But then when they see people who are different from them, they know automatically that those people must be by definition inferior, as all who are Not of Our Tribe are by definition inferior. And therefore, any of them who are rich gained their position through corruption or injustice, and all of them who are poor deserve their fate for being so inferior. Thus, it is most vital to ensure that these Not of Our Tribe are held down from reaching high positions they so obviously do not deserve.

I’m fairly sure that most of this happens at a very deep unconscious level; it calls upon ancient evolutionary instincts to love our own tribe, to serve the alpha male, to fear and hate those of other tribes. These instincts may well have served us 200,000 years ago (then again, they may just have been the best our brains could manage at the time); but they are becoming a dangerous liability today.

As E.O. Wilson put it: “The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology.”

Yet this cannot be a complete explanation, for there is variation in these attitudes. A purely instinctual theory should say that all human cultures have this to an essentially equal degree; but I started this post by pointing out that the United States appears to have a particularly large amount relative to Europe.

So, there must be something in the cultures or institutions of different nations that makes them either enhance or suppress this instinctual tribalism. There must be something that Europe is doing right, the US is doing wrong, and Saudi Arabia is doing very, very wrong.

Well, the obvious one that sticks out at me is religion. It seems fairly obvious to me that Sweden is less religious than the US, which is less religious than Saudi Arabia.

Data does back me up on this. Religiosity isn’t easy to measure, but we have methods of doing so. If we ask people in various countries if religion is very important in their lives, the percentage of people who say yes gives us an indication of how religious that country is.

In Saudi Arabia, 93% say yes. In the United States, 65% say yes. In Sweden, only 17% say yes.

Religiosity tends to be highest in the poorest countries, but the US is an outlier, far too rich for our religion (or too religious for our wealth).

Religiosity also tends to be highest in countries with high inequality—this time, the US fits right in.

The link between religion and inequality is quite clear. It’s harder to say which way the causation runs. Perhaps high inequality makes people cling more to religion as a comfort, and getting rid of religion would only mean taking that comfort away. Or, perhaps religion actually makes people believe more in social dominance, and thus is part of what keeps that high inequality in place. It could also be a feedback loop, in which higher inequality leads to higher religiosity which leads to higher inequality.

That said, I think we actually have some evidence that causality runs from religion to inequality, rather than the other way around. The secularization of France took place around the same time as the French Revolution that overthrew the existing economic system and replaced it with one that had substantially less inequality. Iran’s government became substantially more based on religion in the latter half of the 20th century, and its inequality soared thereafter.

Above all, Donald Trump dominates the evangelical vote, which makes absolutely no sense if religion is a comfort against inequality—but perfect sense if religion solidifies the tendency of people to think in terms of hierarchy and authoritarianism.

This also makes sense in terms of the content of religion, especially Abrahamic religion; read the Bible and the Qur’an, and you will see that their primary goal seems to be to convince you that some people, namely people who believe in this book, are just better than other people, and we should be in charge because God says so. (And you wouldn’t try to argue with God, would you?) They really make no particular effort to convince you that God actually exists; they spend all their argumentative effort on what God wants you to do and who God wants you to put in charge—and for some strange reason it always seems to be the same guys who are writing down “God’s words” in the book! What a coincidence!

If religion is indeed the problem, or a large part of the problem, what can we do about it? That’s the most difficult part. We’ve been making absolutely conclusive rational arguments against religion since literally 300 years before Jesus was even born (there has never been a time in human history in which it was rational for an educated person to believe in Christianity or Islam, for the religions did not come into existence until well after the arguments to refute them were well-known!), and the empirical evidence against theism has only gotten stronger ever since; so that clearly isn’t enough.

I think what we really need to do at this point is confront the moral monopoly that religion has asserted for itself. The “Moral Majority” was neither, but its name still sort of makes sense to us because we so strongly associate being moral with being religious. We use terms like “Christian” and “generous” almost interchangeably. And whenever you get into a debate about religion, shortly after you have thoroughly demolished any shred of empirical credibility religion still had left, you can basically guarantee that the response will be: “But without God, how can you know right from wrong?”

What is perhaps most baffling about this concept of morality so commonplace in our culture is that not only is the command of a higher authority that rewards and punishes you not the highest level of moral development—it is literally the lowest. Of the six stages of moral thinking Kohlberg documented in children, the reward and punishment orientation exemplified by the Bible and the Qur’an is the very first. I think many of these people really truly haven’t gotten past level 1, which is why when you start trying to explain how you base your moral judgments on universal principles of justice and consequences (level 6) they don’t seem to have any idea what you’re talking about.

Perhaps this is a task for our education system (philosophy classes in middle school?), perhaps we need something more drastic than that, or perhaps it is enough that we keep speaking about it in public. But somehow we need to break up the monopoly that religion has on moral concepts, so that people no longer feel ashamed to say that something is morally wrong without being able to cite a particular passage from a particular book from the Iron Age. Perhaps once we can finally make people realize that morality does not depend on religion, we can finally free them from the grip of religion—and therefore from the grip of authoritarianism and social dominance.

If this is right, then the reason America is so mean is that we are so Christian—and people need to realize that this is not a paradoxical statement.

9/11, 14 years on—and where are our civil liberties?

JDN 2457278 (09/11/2015) EDT 20:53

Today is the 14th anniversary of the 9/11 attacks. A lot has changed since then—yet it’s quite remarkable what hasn’t. In particular, we still don’t have our civil liberties back.

In our immediate panicked response to the attacks, the United States passed the USA PATRIOT ACT almost unanimously, giving our government unprecedented powers of surveillance, search, and even arrest and detention. Most of those powers have been renewed repeatedly and remain in effect; the only major change has been a slight weakening of the NSA’s authority to use mass dragnet surveillance on Internet traffic and phone metadata. And this change in turn was almost certainly only made because of Edward Snowden, who is still forced to live in Russia for fear of being executed if he returns to the US. That is, the man most responsible for the only significant improvement in civil liberties in the United States in the last decade is living in Russia because he has been branded a traitor.

No, the traitors here are the over one hundred standing US Congress members who voted for an act that is in explicit and direct violation of the Constitution. At the very least every one of them should be removed from office, and we as voters have the power to do that—so why haven’t we? In particular, why are Dan Lipinski and Steny Hoyer, both Democrats from non-southern states who voted every single time to extend provisions of the PATRIOT ACT, still in office? At least Carl Levin had the courtesy to resign after sponsoring the act allowing indefinite detention—I hope we would have voted him out anyway, since I’d much rather have a Republican (and all the absurd economic policy that entails) than someone who apparently doesn’t believe the Fourth and Sixth Amendments have any meaning at all.

We have become inured to this loss of liberty; it feels natural or inevitable to us. But these are not minor inconveniences; they are not small compromises. Giving our government the power to surveil, search, arrest, imprison, torture, and execute anyone they want at any time without the system of due process—and make no mistake, that is what the PATRIOT ACT and the indefinite detention law do—means giving away everything that separates us from tyranny. Bypassing the justice system and the rule of law means bypassing everything that America stands for.

So far, these laws have actually mostly been used against people reasonably suspected of terrorism, that much is true; but it’s also irrelevant. Democracy doesn’t mean you give the government extreme power and they uphold your trust and use it benevolently. Democracy means you don’t give them that power in the first place.

If there’s really sufficient evidence to support an arrest for terrorism, get a warrant. If you don’t have enough evidence for a warrant, you don’t have enough evidence for an arrest. If there’s really sufficient evidence to justify imprisoning someone for terrorism, get a jury to convict. If you don’t have enough evidence to convince a jury, guess what? You don’t have enough evidence to imprison them. These are not negotiable. They are not “political opinions” in any ordinary sense. The protection of due process is so fundamental to democracy that without it political opinions lose all meaning.

People talk about “Big Government” when we suggest increasing taxes on capital gains or expanding Medicare. No, that isn’t Big Government. Searching without warrants is Big Government. Imprisoning people without trial is Big Government. From all the decades of crying wolf in which any policy someone doesn’t like is accused of being “tyranny”, we seem to have lost the ability to recognize actual tyranny. I hope you understand the full force of my meaning when I say that the PATRIOT ACT is literally fascist. Fascism has come to America, and as predicted it was wrapped in the flag and carrying a cross.

In this sort of situation, a lot of people like to quote (or misquote) Benjamin Franklin:

“Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”

With the qualifiers “essential” and “temporary”, this quote seems right; but a lot of people forget them and quote him as saying:

“Those who would give up liberty to purchase safety, deserve neither liberty nor safety.”

That’s clearly wrong. We do in fact give up liberty to purchase safety, and as well we should. We give up our liberty to purchase weapons-grade plutonium; we give up our liberty to drive at 220 mph. The question we need to be asking is: How much liberty are we giving up to gain how much safety?

Spoken like an economist, the question is not whether you will give up liberty to purchase safety—the question is at what price you’re willing to make the purchase. The price we’ve been paying in response to terrorism is far too high. Indeed, the price we are paying is tantamount to America itself.

As horrific as 9/11 was, it’s important to remember: It only killed 3,000 people.

This statement probably makes you uncomfortable; it may even offend you. How dare I say “only”?

I don’t mean to minimize the harm of those deaths. I don’t mean to minimize the suffering of people who lost friends, colleagues, parents, siblings, children. The death of any human being is the permanent destruction of something irreplaceable, a spark of life that can never be restored; it is always a tragedy and there is never any way to repay it.

But I think people are actually doing the opposite—they are ignoring or minimizing millions of other deaths because those deaths didn’t happen to be dramatic enough. A parent killed by a heart attack is just as lost as a parent who died in 9/11. A friend who died of brain cancer is just as gone as a friend who was killed in a terrorist attack. A child killed in a car accident is just as much a loss as a child killed by suicide bombers. If you really care about human suffering, I contend that you should care about all human suffering, not just the kind that makes the TV news.

Here is a list, from the CDC, of things that kill more Americans per month than terrorists have killed in the last three decades:

Heart disease: 50,900 per month

Cancer: 48,700 per month

Lung disease: 12,400 per month

Accidents: 10,800 per month

Stroke: 10,700 per month

Alzheimer’s: 7,000 per month

Diabetes: 6,300 per month

Influenza: 4,700 per month

Kidney failure: 3,900 per month

Terrorism deaths since 1985: 3,455

Yes, that’s right; influenza kills more Americans per month (on average; flu is seasonal, after all) than terrorism has killed in the last thirty years.

And for comparison, here are other violent deaths, which kill not quite but almost as many people per month as terrorism has killed in my entire life so far:

Suicide: 3,400 per month

Homicide: 1,300 per month
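If you want to check the comparison yourself, the arithmetic takes only a few lines. This sketch just uses the rounded figures quoted above and treats “the last three decades” as an even 30 years:

```python
# Back-of-the-envelope check of the figures above (all numbers are the
# rounded ones quoted in the text, not a rigorous mortality analysis).
terrorism_deaths_since_1985 = 3455
terrorism_per_month = terrorism_deaths_since_1985 / (30 * 12)  # ~9.6/month

per_month = {
    "Heart disease": 50_900,
    "Influenza": 4_700,
    "Homicide": 1_300,
}
for cause, rate in per_month.items():
    ratio = rate / terrorism_per_month
    print(f"{cause}: about {ratio:,.0f}x the monthly terrorism toll")
# The ratios run from roughly a hundred to several thousand,
# depending on which cause of death you pick.
```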

Now, with those figures in mind, I want you to ask yourself the following question: Would you be willing to give up basic, fundamental civil liberties in order to avoid any of these things?

Would you want the government to be able to arrest you and imprison you without trial for eating too many cheeseburgers, so as to reduce the risk of heart disease and stroke?

Would you want the government to monitor your phone calls and Internet traffic to make sure you don’t smoke, so as to avoid lung disease? Or to watch for signs of depression, to reduce the rate of suicide?

Would you want the government to be able to use targeted drone strikes, ordered directly by the President, pre-emptively against probable murderers (with a certain rate of collateral damage, of course), to reduce the rate of homicide?

I presume that the answer to all the above questions is “no”. So now I have to ask you: Why are you willing to give up those same civil liberties to prevent a risk that is three hundred times smaller?

And then of course there’s the Iraq War, which killed 4,400 Americans and at least 100,000 civilians, and the Afghanistan War, which killed 3,400 allied soldiers and over 90,000 civilians.

In response to the horrific murder of 3,000 people, we sacrificed another 7,800 soldiers and killed another 190,000 innocent civilians. What exactly did that accomplish? What benefit did we get for such an enormous cost?

The people who sold us these deadly wars and draconian policies did so based on the threat that terrorism could somehow become vastly worse, involving the release of some unstoppable bioweapon or the detonation of a full-scale nuclear weapon, killing millions of people—but that has never happened, has never gotten close to happening, and would be thousands of times worse than the worst terrorist attacks that have ever actually happened.

If we’re worried about millions of people dying, it is far more likely that there would be a repeat of the 1918 influenza pandemic, or an accidental detonation of a nuclear weapon, or a flashpoint event with Russia or China triggering World War III; it’s probably more likely that there would be an asteroid impact large enough to kill a million people than there would be a terrorist attack large enough to do the same.

As it is, heart disease is already killing millions of people—about a million every two years—and we aren’t so panicked about that as to give up civil liberties. Elsewhere in the world, malnutrition kills over 3 million children per year, essentially all of it due to extreme poverty, which we could eliminate by spending between a quarter ($150 billion) and a half ($300 billion) of our current military budget ($600 billion); but we haven’t even done that even though it would require no loss of civil liberties at all.

Why is terrorism different? In short, the tribal paradigm.

There are in fact downsides to not being infinite identical psychopaths, and this is one of them. An infinite identical psychopath would simply maximize their own probability of survival; but finite diverse tribalists such as we underreact to some threats (such as heart disease) and overreact to others (such as terrorism). We’ll do almost anything to stop the latter—and almost nothing to stop the former.

Terrorists are perceived as a threat not just to our individual survival like heart disease or stroke, but as a threat to our tribe from another tribe. This triggers a deep, instinctual sense of panic and hatred that makes us willing to ignore principles we would otherwise uphold and commit acts of violence we would otherwise find unimaginable.

Indeed, it’s precisely that instinct which motivates the terrorists in the first place. From their perspective, we are the other tribe that threatens their tribe, and they are therefore willing to stop at nothing until we are destroyed.

In a fundamental way, when we respond to terrorism in this way we do not defeat them—we become them.

If you ask people who support the PATRIOT ACT, it’s very clear that they don’t see themselves as imposing upon the civil liberties of Americans. Instead, they see themselves as protecting Americans (our tribe), and they think the impositions upon civil liberties will only harm those who don’t count as Americans (other tribes). This is a pretty bizarre notion if you think about it carefully—if you don’t need a warrant or probable cause to imprison people, then what stops you from imprisoning people who aren’t terrorists?—but people don’t think about it carefully. They act on emotion, on instinct.

The odds of terrorists actually destroying America by killing people are basically negligible. Even the most deadly terrorist attack in recorded history—9/11—killed fewer Americans than die every month from diabetes, or every week from heart disease. Even the most extreme attacks feared (which are extremely unlikely) wouldn’t be any worse than World War II, which of course we won.

But the odds of terrorists destroying America by making us give up the rights and freedoms that define us as a nation? That’s well underway.

What are we celebrating today?

JDN 2457208 EDT 13:35 (July 4, 2015)

As all my American readers will know (and unsurprisingly 79% of my reader trackbacks come from the United States), today is Independence Day. I’m curious how my British readers feel about this day (and the United Kingdom is my second-largest source of reader trackbacks); we are in a sense celebrating the fact that we’re no longer ruled by you.

Every nation has some notion of patriotism; in the simplest sense we could say that patriotism is simply nationalism, yet another reflection of our innate tribal nature. As Obama said when asked about American exceptionalism, the British also believe in British exceptionalism. If that is all we are dealing with, then there is no particular reason to celebrate; Saudi Arabia or China could celebrate just as well (and very likely does). Independence Day then becomes something parochial, something that is at best a reflection of local community and culture, and at worst a reaffirmation of nationalistic divisiveness.

But in fact I think we are celebrating something more than that. The United States of America is not just any country. It is not just a richer Brazil or a more militaristic United Kingdom. There really is something exceptional about the United States, and it really did begin on July 4, 1776.

In fact we should probably celebrate June 21, 1788 and December 15, 1791, the ratification of the Constitution and the Bill of Rights respectively. But neither of these would have been possible without that Declaration of Independence on July 4, 1776. (In fact, even that date isn’t as clear-cut as commonly imagined.)

What makes the United States unique?

From the dawn of civilization around 5000 BC up to the mid-18th century AD, there were basically two ways to found a nation. The most common was to grow the nation organically, formulating an ethnic identity over untold generations and then making up an appealing backstory later. The second way, not entirely mutually exclusive with the first, was for a particular leader, usually a psychopathic king, to gather a superior army, conquer territory, and annex the people there, making them part of his nation whether they wanted it or not. Variations on these two themes were what happened in Rome, in Greece, in India, in China; they were done by the Sumerians, by the Egyptians, by the Aztecs, by the Maya. All the ancient civilizations have founding myths that are distorted so far from the real history that the real history has become basically unknowable. All the more recent powers were formed by warlords and usually ruled with iron fists.

The United States of America started with a war, make no mistake; and George Washington really was more a charismatic warlord than he ever was a competent statesman. But Washington was not a psychopath, and refused to rule with an iron fist. Instead he was instrumental in establishing a fundamentally new approach to the building of nations.

This is literally what happened—myths have grown around it, but it is itself documented history. Washington and his compatriots gathered a group of some of the most intelligent and wise individuals they could find, sat them down in a room, and tasked them with answering the basic question: “What is the best possible country?” They argued and debated, considering absolutely the most cutting-edge economics (The Wealth of Nations was released in 1776) and political philosophy (Thomas Paine’s Common Sense also came out in 1776). And then, when they had reached some kind of consensus on what the best sort of country would be—they created that country. They were conscious of building a new tradition, of being the founders of the first nation built as part of the Enlightenment. Previously nations were built from immemorial tradition or the whims of warlords—the United States of America was the first nation in the world that was built on principle.

It would not be the last; in fact, with a terrible interlude that we call Napoleon, France would soon become the second nation of the Enlightenment. A slower process of reform would eventually bring the United Kingdom itself to a similar state (though the UK is still a monarchy and has no formal constitution, only an ever-growing mountain of common law). As the centuries passed and the United States became more and more powerful, its system of government attained global influence, with now almost every nation in the world nominally a “democracy” and about half actually recognizable as such. We now see it as unexceptional to have a democratically-elected government bound by a constitution, and even think of the United States as a relatively poor example compared to, say, Sweden or Norway (because #Scandinaviaisbetter), and this assessment is not entirely wrong; but it’s important to keep in mind that this was not always the case, and on July 4, 1776 the Founding Fathers truly were building something fundamentally new.

Of course, the Founding Fathers were not the demigods they are often imagined to be; Washington himself was a slaveholder, and not just any slaveholder, but in fact almost a billionaire in today’s terms—the wealthiest man in America by far and actually a rival to the King of England. Thomas Jefferson somehow managed to read Thomas Paine and write “all men are created equal” without thinking that this obligated him to release his own slaves. Benjamin Franklin was a misogynist and womanizer. James Madison’s concept of formalizing armed rebellion bordered on insanity (and ultimately resulted in our worst amendment, the Second). The system that they built disenfranchised women, enshrined the slavery of Black people into law, and consisted of dozens of awkward compromises (like the Senate) that would prove disastrous in the future. The Founding Fathers were human beings with human flaws and human hypocrisy, and they did many things wrong.

But they also did one thing very, very right: They created a new model for how nations should be built. In a very real sense they redefined what it means to be a nation. That is what we celebrate on Independence Day.

Happy Capybara Day! Or the power of culture

JDN 2457131 EDT 14:33.

Did you celebrate Capybara Day yesterday? You didn’t? Why not? We weren’t able to find any actual capybaras this year, but maybe next year we’ll be able to plan better and find a capybara at a zoo; unfortunately the nearest zoo with a capybara appears to be in Maryland. But where would we be without a capybara to consult annually on the stock market?

Right now you are probably rather confused, perhaps wondering if I’ve gone completely insane. This is because Capybara Day is a holiday of my own invention, one which only a handful of people have even heard about.

But if you think we’d never have a holiday so bizarre, think again: all I did was make some slight modifications to Groundhog Day. Instead of consulting a groundhog about the weather every February 2, I proposed that we consult a capybara about the stock market every April 17. And if you think you have some reason why groundhogs are better at predicting the weather (perhaps because they at least have some vague notion of what weather is) than capybaras are at predicting the stock market (since they have no concept of money or numbers), think about this: Capybara Day could produce extremely accurate predictions, provided only that people actually believed it. The prophecy of rising or falling stock prices could very easily become self-fulfilling. If it were a cultural habit of ours to consult capybaras about the stock market, capybaras would become good predictors of the stock market.
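In case the self-fulfilling part sounds like hand-waving, here is a toy simulation with invented parameters: the “capybara” predicts at random, traders trade on the prediction, and that alone is enough to make the prediction come true almost every time:

```python
# Toy model of a self-fulfilling forecast: buying pressure follows the
# prediction, so the prediction usually moves the price as predicted.
import random

random.seed(17)
price = 100.0
correct = 0
trials = 1000
for _ in range(trials):
    prediction = random.choice(["up", "down"])  # the capybara knows nothing
    demand_shift = 1.0 if prediction == "up" else -1.0  # believers trading
    noise = random.gauss(0.0, 0.5)              # everything else going on
    change = demand_shift + noise
    price += change
    if (change > 0) == (prediction == "up"):
        correct += 1
print(f"capybara accuracy: {correct / trials:.0%}")  # well above 50%
```

With these made-up numbers the capybara is “right” about 98% of the time, despite predicting by coin flip.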

That might seem a bit far-fetched, but think about this: Why is there a January Effect? (To be fair, some researchers argue that there isn’t, and the apparent correlation between higher stock prices and the month of January is simply an illusion, perhaps the result of data overfitting.)

But I think it probably is real, and moreover has some very obvious reasons behind it. In this I’m in agreement with Richard Thaler, a founder of cognitive economics who wrote about such anomalies in the 1980s. December is a time when two very culturally-important events occur: The end of the year, during which many contracts end, profits are assessed, and tax liabilities are determined; and Christmas, the greatest surge of consumer spending and consumer debt.

The first effect means that corporations are very likely to liquidate assets—particularly assets that are running at a loss—in order to minimize their tax liabilities for the year, which will drive down prices. The second effect means that consumers are in search of financing for extravagant gift purchases, and those who don’t run up credit cards may instead sell off stocks. This is if anything a more rational way of dealing with the credit constraint, since interest rates on credit cards are typically far in excess of stock returns. But this surge of selling due to credit constraints further depresses prices.

In January, things return to normal; assets are repurchased, debt is repaid. This brings prices back up to where they were, which results in a higher than normal return for January.
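Here is an equally toy model of that mechanism, with invented magnitudes; it simply hard-codes December selling pressure and the matching January repurchases, to show that this alone produces the January Effect pattern:

```python
# Toy January Effect: baseline monthly returns are random noise, plus
# selling pressure in December and matching repurchases in January.
import random

random.seed(42)
YEARS = 20_000
totals = {month: 0.0 for month in range(1, 13)}
for _ in range(YEARS):
    for month in range(1, 13):
        baseline = random.gauss(0.005, 0.02)  # ~0.5% in a typical month
        pressure = {12: -0.02, 1: 0.02}.get(month, 0.0)  # invented sizes
        totals[month] += baseline + pressure

for month in (1, 6, 12):
    print(f"month {month:2d}: average return {totals[month] / YEARS:+.2%}")
# January averages ~+2.5%, December ~-1.5%, ordinary months ~+0.5%.
```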

Neoclassical economists are loath to admit that such a seasonal effect could exist, because it violates their concept of how markets work—and to be fair, the January Effect is actually weak enough to be somewhat ambiguous. But actually it doesn’t take much deviation from neoclassical models to explain the effect: Tax policies and credit constraints are basically enough to do it, so you don’t even need to go that far into understanding human behavior. It’s perfectly rational to behave this way given the distortions that are created by taxes and credit limits, and the arbitrage opportunity is one that you can only take advantage of if you have large amounts of credit and aren’t worried about minimizing your tax liabilities.

It’s important to remember just how strong the assumptions of models like CAPM truly are; in addition to the usual infinite identical psychopaths, CAPM assumes there are no taxes, no transaction costs, and unlimited access to credit. I’d say it’s amazing that it works at all, but actually, it doesn’t—check out this graph of risk versus return and tell me if you think CAPM is actually giving us any information at all about how stock markets behave. It frankly looks like you could have drawn a random line through a scatter plot and gotten just as good a fit. Knowing how strong its assumptions are, we would not expect CAPM to work—and sure enough, it doesn’t.
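For reference, the prediction that scatter plot is testing fits in a few lines: CAPM says expected return is linear in beta, and nothing else matters. The risk-free and market-return numbers here are illustrative, not estimates:

```python
# The security market line, CAPM's central prediction:
# E[R_i] = R_f + beta_i * (E[R_m] - R_f)
def capm_expected_return(beta, risk_free=0.02, market_return=0.08):
    return risk_free + beta * (market_return - risk_free)

for beta in (0.0, 0.5, 1.0, 1.5):
    print(f"beta {beta:.1f}: predicted return {capm_expected_return(beta):.1%}")
```

If the theory were right, realized returns plotted against beta should cluster around that line; that is exactly what the graph fails to show.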

Of course, that leaves the question of why our tax policy would be structured in this way—why make the year end on December 31 instead of some other date? And for that, you need to go back through hundreds of years of history, the Gregorian calendar, which in turn was influenced by Christianity, and before that the Julian calendar—in other words, culture.

Culture is one of the most powerful forces that influences human behavior—and also one of the strangest and least-understood. Economic theory is basically silent on the matter of culture. Typically it is ignored entirely, assumed to be irrelevant against the economic incentives that are the true drivers of human action. (There’s a peculiar emotion many neoclassical economists express that I can best describe as self-righteous cynicism, the attitude that we alone—i.e., economists—understand that human beings are not the noble and altruistic creatures many imagine us to be, nor beings of art and culture, but simply cold, calculating machines whose true motives are reducible to profit incentives—and all who think otherwise are being foolish and naïve; true enlightenment is understanding that human beings are infinite identical psychopaths. This is the attitude epitomized by the economist who once sent me an email with “altruism” written in scare quotes.)

Occasionally culture will be invoked as an external (in jargon, exogenous) force, to explain some aspect of human behavior that is otherwise so totally irrational that even invoking nonsensical preferences won’t make it go away. When a suicide bomber blows himself up in a crowd of people, it’s really pretty hard to explain that in terms of rational profit incentives—though I have seen it tried. (It could be self-interest at a larger scale, like families or nations—but then, isn’t that just the tribal paradigm I’ve been arguing for all along?)

But culture doesn’t just motivate us to do extreme or wildly irrational things. It motivates us all the time, often in quite beneficial ways; we wait in line, hold doors for people walking behind us, tip waiters who serve us, and vote in elections, not because anyone pressures us directly to do so (unlike say Australia we do not have compulsory voting) but because it’s what we feel we ought to do. There is a sense of altruism—and altruism provides the ultimate justification for why it is right to do these things—but the primary motivator in most cases is culture—that’s what people do, and are expected to do, around here.

Indeed, even when there is a direct incentive against behaving a certain way—like criminal penalties against theft—the probability of actually suffering a direct penalty is generally so low that it really can’t be our primary motivation. Instead, the reason we don’t cheat and steal is that we think we shouldn’t, and a major part of why we think we shouldn’t is that we have cultural norms against it.

We can actually observe differences in cultural norms across countries in the laboratory. In this 2008 study by Massimo Castro (PDF), British and Italian participants played an economic game called the public goods game, in which you can pay a cost yourself to benefit the group as a whole; it was found not only that people were less willing to benefit groups of foreigners than groups of compatriots, but also that British people were overall more generous than Italians. This 2010 study by Gachter et al. (which Joshua Greene talked about last week) compared how people play the game in various cities, and found three basic patterns: In Western European and American cities such as Zurich, Copenhagen and Boston, cooperation started out high and remained high throughout; people were just cooperative in general. In Asian cities such as Chengdu and Seoul, cooperation started out low, but if people were punished for not cooperating, cooperation would improve over time, eventually reaching about the same place as in the highly cooperative cities. And in Mediterranean cities such as Istanbul, Athens, and Riyadh, cooperation started low and stayed low—even when people could be punished for not cooperating, nobody actually punished them. (These patterns are broadly consistent with the World Bank corruption ratings of these regions, by the way; Western Europe shows very low corruption, while Asia and the Mediterranean show high corruption. Of course this isn’t all that’s going on—Asia isn’t actually much less corrupt than the Middle East, though this experiment might make you think so.)
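For readers who haven’t seen it, the public goods game itself fits in a few lines. This sketch uses common textbook parameters (an endowment of 20 and a multiplier of 1.6 for four players), which are assumptions here, not necessarily the exact parameters of either study:

```python
# Public goods game: each player keeps endowment minus contribution;
# the pot of contributions is multiplied and split equally among all.
def payoffs(contributions, endowment=20.0, multiplier=1.6):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

print(payoffs([20, 20, 20, 20]))  # full cooperation: everyone gets 32
print(payoffs([0, 20, 20, 20]))   # the lone free-rider gets 44, the rest 24
```

Full cooperation beats universal free-riding (32 versus 20 each), but the individual incentive always favors contributing nothing; that tension is what makes the cross-cultural differences in play so revealing.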

Interestingly, these cultural patterns showed Melbourne as behaving more like an Asian city than a Western European one—perhaps being in the Pacific has worn off on Australia more than they realize.

This is very preliminary, cutting-edge research I’m talking about, so be careful about drawing too many conclusions. But in general we’ve begun to find some fairly clear cultural differences in economic behavior across different societies. While this would not be at all surprising to a sociologist or anthropologist, it’s the sort of thing that economists have insisted for years is impossible.

This is the frontier of cognitive economics, in my opinion. We know that culture is a very powerful motivator of our behavior, and it is time for us to understand how it works—and then, how it can be changed. We know that culture can be changed—cultural norms do change over time, sometimes remarkably rapidly; but we have only a faint notion of how or why they change. Changing culture has the power to do things that simply changing policy cannot, however; policy requires enforcement, and when the enforcement is removed the behavior will often disappear. But if a cultural norm can be imparted, it could sustain itself for a thousand years without any government action at all.