The evolution of human cooperation

Jun 17 JDN 2458287

If alien lifeforms were observing humans (assuming they didn’t turn out the same way—which they actually might, for reasons I’ll get to shortly), the thing that would probably baffle them the most about us is how we organize ourselves into groups. Each individual may be part of several groups at once, and some groups are closer-knit than others; but the most tightly-knit groups exhibit extremely high levels of cooperation, coordination, and self-sacrifice.

They might think at first that we are eusocial, like ants or bees; but upon closer study they would see that our groups are not very strongly correlated with genetic relatedness. We are somewhat more closely related to those in our groups than to those outside, usually; but it’s a remarkably weak effect, especially compared to the extremely high relatedness of worker bees in a hive. No, to a first approximation, these groups are of unrelated humans; yet their level of cooperation is equal to if not greater than that exhibited by the worker bees.

However, the alien anthropologists would find that it is not that humans are simply predisposed toward extremely high altruism and cooperation in general; when two human groups come into conflict, they are capable of the most extreme forms of violence imaginable. Human history is full of atrocities that combine the indifferent brutality of nature red in tooth and claw with the boundless ingenuity of a technologically advanced species. Yet except for a small proportion perpetrated by individual humans with some sort of mental pathology, these atrocities are invariably committed by one unified group against another. Even in genocide there is cooperation.

Humans are not entirely selfish. But nor are they paragons of universal altruism (though some of them aspire to be). Humans engage in a highly selective form of altruism—virtually boundless for the in-group, almost negligible for the out-group. Humans are tribal.

Being a human yourself, this probably doesn’t strike you as particularly strange. Indeed, I’ve mentioned it many times previously on this blog. But it is actually quite strange, from an evolutionary perspective; most organisms are not like this.

As I said earlier, there is actually reason to think that our alien anthropologist would come from a species with similar traits, simply because such cooperation may be necessary to achieve a full-scale technological civilization, let alone the capacity for interstellar travel. But there might be other possibilities; perhaps they come from a eusocial species, and their large-scale cooperation is within an extremely large hive.

It’s true that most organisms are not entirely selfish. There are various forms of cooperation within and even across species. But these usually involve only close kin, and otherwise involve highly stable arrangements of mutual benefit. There is nothing like the large-scale cooperation between anonymous unrelated individuals that is exhibited by all human societies.

How would such an unusual trait evolve? It must require a very particular set of circumstances, since it only seems to have evolved in a single species (or at most a handful of species, since other primates and cetaceans display some of the same characteristics).

Once evolved, this trait is clearly advantageous; indeed it turned a local apex predator into a species so successful that it can actually intentionally control the evolution of other species. Humans have become a hegemon over the entire global ecology, for better or for worse. Cooperation gave us a level of efficiency in producing the necessities of survival so great that at this point most of us spend our time working on completely different tasks. If you are not a farmer or a hunter or a carpenter (and frankly, even if you are a farmer with a tractor, a hunter with a rifle, or a carpenter with a table saw), you are doing work that would simply not have been possible without very large-scale human cooperation.

This extremely high fitness benefit only makes the matter more puzzling, however: If the benefits are so great, why don’t more species do this? There must be some other requirements that other species were unable to meet.

One clear requirement is high intelligence. As frustrating as it may be to be a human and watch other humans kill each other over foolish grievances, this is actually evidence of how smart humans are, biologically speaking. We might wish we were even smarter still—but most species don’t have the intelligence to make it even as far as we have.

But high intelligence is likely not sufficient. We can’t be sure of that, since we haven’t encountered any other species with equal intelligence; but what we do know is that even Homo sapiens didn’t coordinate on anything like our current scale for tens of thousands of years. We may have had tribal instincts, but if so they were largely confined to a very small scale. Something happened, about 50,000 years ago or so—not very long ago in evolutionary time—that allowed us to increase that scale dramatically.

Was this a genetic change? It’s difficult to say. There could have been some subtle genetic mutation, something that wouldn’t show up in the fossil record. But more recent expansions in human cooperation to the level of the nation-state and beyond clearly can’t be genetic; they were much too fast for that. They must be a form of cultural evolution: The replicators being spread are ideas and norms—memes—rather than genes.

So perhaps the very early shift toward tribal cooperation was also a cultural one. Perhaps it began not as a genetic mutation but as an idea—perhaps a metaphor of “universal brotherhood” of the sort we still hear today. The tribes that believed these ideas prospered; the tribes that didn’t were outcompeted or even directly destroyed.

This would explain why it had to be an intelligent species. We needed brains big enough to comprehend metaphors and generalize concepts. We needed enough social cognition to keep track of who was in the in-group and who was in the out-group.

If it was indeed a cultural shift, this should encourage us. (And since the most recent changes definitely were cultural, that is already quite encouraging.) We are not limited by our DNA to only care about a small group of close kin; we are capable of expanding our scale of unity and cooperation far beyond.

The real question is whether we can expand it to everyone. Unfortunately, there is some reason to think that this may not be possible. If our concept of tribal identity inherently requires both an in-group and an out-group, then we may never be able to include everyone. If we are only unified against an enemy, never simply for our own prosperity, world peace may forever remain a dream.

But I do have a work-around that I think is worth considering. Can we expand our concept of the out-group to include abstract concepts? With phrases like “The War on Poverty” and “The War on Terror”, it would seem in fact that we can. It feels awkward; it is somewhat imprecise—but then, so was the original metaphor of “universal brotherhood”. Our brains are flexible enough that they don’t actually seem to need the enemy to be a person; it can also be an idea. If this is right, then we can actually include everyone in our in-group, as long as we define the right abstract out-group. We can choose enemies like poverty, violence, cruelty, and despair instead of other nations or ethnic groups. If we must continue to fight a battle, let it be a battle against the pitiless indifference of the universe, rather than our fellow human beings.

Of course, the real challenge will be getting people to change their existing tribal identities. In the moment, these identities seem fundamentally intractable. But that can’t really be the case—for these identities have changed over historical time. Once-important categories have disappeared; new ones have arisen in their place. Someone in 4th-century Constantinople would find the conflict between Democrats and Republicans as baffling as we would find the conflict between Trinitarians and Arians. The ongoing oppression of Native American people by White people would be unfathomable to an 11th-century Onondaga, who could scarcely have imagined an enemy more alien than the Seneca to their west. Even the conflict between Russia and NATO would probably seem strange to someone living in France in 1943, for whom Germany was the enemy and Russia was at least the enemy of the enemy—and many of those people are still alive.

I don’t know exactly how these tribal identities change (I’m working on it). It clearly isn’t as simple as convincing people with rational arguments. In fact, part of how it seems to work is that someone will shift their identity slowly enough that they can’t perceive the shift themselves. People rarely seem to appreciate, much less admit, how much their own minds have changed over time. So don’t ever expect to change someone’s identity in one sitting. Don’t even expect to do it in one year. But never forget that identities do change, even within an individual’s lifetime.

Reasonableness and public goods games

Apr 1 JDN 2458210

There’s a very common economics experiment called a public goods game, often used to study cooperation and altruistic behavior. I’m actually planning on running a variant of such an experiment for my second-year paper.

The game is quite simple, which is part of why it is used so frequently: You are placed into a group of people (usually about four), and given a little bit of money (say $10). Then you are offered a choice: You can keep the money, or you can donate some of it to a group fund. Money in the group fund will be multiplied by some factor (usually about two) and then redistributed evenly to everyone in the group. So for example if you donate $5, that will become $10, split four ways, so you’ll get back $2.50.

Donating more to the group will benefit everyone else, but at a cost to yourself. The game is usually set up so that the best outcome for everyone is if everyone donates the maximum amount, but the best outcome for you, holding everyone else’s choices constant, is to donate nothing and keep it all.
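
To make this concrete, here is a minimal sketch of the payoff structure in Python (the code and function name are just my illustration; the $10 endowment, multiplier of 2, and group of 4 are the example values above):

    # Payoffs in a public goods game: you keep whatever you don't donate,
    # plus an equal share of the multiplied group fund.
    def payoffs(donations, multiplier=2.0, endowment=10.0):
        share = multiplier * sum(donations) / len(donations)
        return [endowment - d + share for d in donations]

    print(payoffs([5, 0, 0, 0]))      # the lone donor's $5 comes back as $2.50
    print(payoffs([10, 10, 10, 10]))  # everyone donates: everyone doubles up
    print(payoffs([0, 10, 10, 10]))   # the free rider does best of all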

Yet it is a very robust finding that most people do neither of those things. There’s still a good deal of uncertainty surrounding what motivates people to donate what they do, but certain patterns have emerged:

  1. Most people donate something, but hardly anyone donates everything.
  2. Increasing the multiplier tends to smoothly increase how much people donate.
  3. The number of people in the group isn’t very important, though very small groups (e.g. 2) behave differently from very large groups (e.g. 50).
  4. Letting people talk to each other tends to increase the rate of donations.
  5. Repetition of the game, or experience from previous games, tends to result in decreasing donation over time.
  6. Economists donate less than other people.

Number 6 is unfortunate, but easy to explain: Indoctrination into game theory and neoclassical economics has taught economists that selfish behavior is efficient and optimal, so they behave selfishly.

Number 3 is also fairly easy to explain: Very small groups allow opportunities for punishment and coordination that don’t exist in large groups. Think about how you would respond when faced with 2 defectors in a group of 4 as opposed to 10 defectors in a group of 50. You could punish the 2 by giving less next round; but punishing the 10 would end up punishing 40 others who had contributed like they were supposed to.

Number 4 is a very interesting finding. Game theory says that communication shouldn’t matter, because there is a unique Nash equilibrium: Donate nothing. All the promises in the world can’t change what is the optimal response in the game. But in fact, human beings don’t like to break their promises, and so when you get a bunch of people together and they all agree to donate, most of them will carry through on that agreement most of the time.

Number 5 is on the frontier of research right now. There are various theoretical accounts for why it might occur, but none of the models proposed so far have much predictive power.

But my focus today will be on findings 1 and 2.

If you’re not familiar with the underlying game theory, finding 2 may seem obvious to you: Well, of course if you increase the payoff for donating, people will donate more! It’s precisely that sense of obviousness which I am going to appeal to in a moment.

In fact, the game theory makes a very sharp prediction: For N players, if the multiplier is less than N, you should always contribute nothing. Only if the multiplier becomes larger than N should you donate—and at that point you should donate everything. The game theory prediction is not a smooth increase; it’s all-or-nothing. The only time game theory predicts intermediate amounts is on the knife-edge where the multiplier exactly equals N, at which point each player is indifferent between donating and not donating.
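
To spell that out (again, just an illustrative sketch): each donated dollar returns multiplier/N dollars to you no matter what anyone else does, so the self-interested best response ignores the other players entirely:

    # Game-theoretic best response: each donated dollar returns
    # multiplier/N dollars to the donor, regardless of others' choices.
    def best_response(multiplier, n, endowment=10.0):
        if multiplier > n:
            return endowment   # donate everything
        if multiplier < n:
            return 0.0         # donate nothing
        return None            # knife-edge: indifferent among all donations

    for m in [1.04, 2.0, 3.0, 3.99, 4.01]:
        print(m, best_response(m, n=4))  # 0.0 until the multiplier exceeds 4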

But it feels reasonable that increasing the multiplier should increase donation, doesn’t it? It’s a “safer bet” in some sense to donate $1 if the payoff to everyone is $3 and the payoff to yourself is $0.75 than if the payoff to everyone is $1.04 and the payoff to yourself is $0.26. The cost-benefit analysis comes out better: In the former case, you can gain up to $2 if everyone donates, but would only lose $0.25 if you donate alone; but in the latter case, you would only gain $0.04 if everyone donates, and would lose $0.74 if you donate alone.

I think this notion of “reasonableness” is a deep principle that underlies a great deal of human thought. This is something that is sorely lacking from artificial intelligence: The same AI that tells you the precise width of the English Channel to the nearest foot may also tell you that the Earth is 14 feet in diameter, because the former was in its database and the latter wasn’t. Yes, Watson may have won at Jeopardy!, but it (he?) also gave a nonsensical response to the Final Jeopardy clue.

Human beings like to “sanity-check” our results against prior knowledge, making sure that everything fits together. And, of particular note for public goods games, human beings like to “hedge our bets”; we don’t like to over-commit to a single belief in the face of uncertainty.

I think this is what best explains findings 1 and 2. We don’t donate everything, because that requires committing totally to the belief that contributing is always better. We also don’t donate nothing, because that requires committing totally to the belief that contributing is always worse.

And of course we donate more as the payoffs to donating more increase; that also just seems reasonable. If something is better, you do more of it!

These choices could be modeled formally by assigning some sort of probability distribution over others’ choices, but in a rather unconventional way. We can’t simply assume that other people will randomly choose some decision and then optimize accordingly—that just gives you back the game theory prediction. We have to assume that our behavior and the behavior of others is in some sense correlated; if we decide to donate, we reason that others are more likely to donate as well.
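
As a toy illustration of what I mean (this is my own sketch, not an established model): suppose that choosing to donate raises your subjective probability that each other player also donates. Then donating everything can have the higher expected payoff even though the multiplier is less than N:

    # Toy "correlated choices" model: my own decision shifts my belief about
    # how likely each other player is to donate their whole endowment.
    def expected_payoff(my_donation, p_others, multiplier=2.0, n=4, endowment=10.0):
        expected_fund = my_donation + (n - 1) * p_others * endowment
        return endowment - my_donation + multiplier * expected_fund / n

    # If donating makes me expect others to donate (p = 0.8 vs. 0.2),
    # donating everything beats free-riding despite multiplier < n:
    print(expected_payoff(10, p_others=0.8))  # 17.0
    print(expected_payoff(0, p_others=0.2))   # 13.0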

Stated like that, this sounds irrational; some economists have taken to calling it “magical thinking”. Yet, as I always like to point out to such economists: On average, people who do that make more money in the games. Economists playing other economists always make very little money in these games, because they turn on each other immediately. So who is “irrational” now?

Indeed, if you ask people to predict how others will behave in these games, they generally do better than the game theory prediction: They say, correctly, that some people will give nothing, most will give something, and hardly any will give everything. The same “reasonableness” that they use to motivate their own decisions, they also accurately apply to forecasting the decisions of others.

Of course, to say that something is “reasonable” may be ultimately to say that it conforms to our heuristics well. To really have a theory, I need to specify exactly what those heuristics are.

“Don’t put all your eggs in one basket” seems to be one, but it’s probably not the only one that matters; my guess is that there are circumstances in which people would actually choose all-or-nothing, like if we said that the multiplier was 0.5 (so everyone giving to the group would make everyone worse off) or 10 (so that giving to the group makes you and everyone else way better off).

“Higher payoffs are better” is probably one as well, but precisely formulating that is actually surprisingly difficult. Higher payoffs for you? For the group? Conditional on what? Do you hold others’ behavior constant, or assume it is somehow affected by your own choices?

And of course, the theory wouldn’t be much good if it only worked on public goods games (though even that would be a substantial advance at this point). We want a theory that explains a broad class of human behavior; we can start with simple economics experiments, but ultimately we want to extend it to real-world choices.

How we can best help refugees

JDN 2457376

Though the debate seems to have simmered down a little over the past few weeks, the fact remains that we are in the middle of a global refugee crisis. There are 4 million refugees from Syria alone, part of 10 million refugees worldwide from various conflicts.

The ongoing occupation of the terrorist group / totalitarian state Daesh (also known as Islamic State, ISIS and ISIL, but like John Kerry, I like to use Daesh precisely because they seem to hate it) has displaced almost 14 million people, 3.3 million of them refugees from Syria.

Most of these refugees have fled to Lebanon, Jordan, Turkey, and Iraq, for the obvious reason that these countries are both geographically closest and culturally best equipped to handle them.

There is another reason, however: Some of the other countries in the region, notably Saudi Arabia, have taken no refugees at all. In an upcoming post I intend to excoriate Saudi Arabia for a number of reasons, but this one is perhaps the most urgent. Their response? They simply deny it outright, claiming they’ve taken millions of refugees and somehow nobody noticed.

Turkey and Lebanon are stretched to capacity, however; they simply do not have the resources to take on more refugees. This gives the other nations of the world only two morally legitimate options:

1. We could take more refugees ourselves.

2. We could supply funding and support to Turkey and Lebanon for them to take on more refugees.

Most of the debate has centered around option (1), and in particular around Obama’s plan to take on about 10,000 refugees to the United States, which Ted Cruz calls “lunacy” (to be fair, if it takes one to know one…).

This debate has actually served more to indict the American population for paranoia and xenophobia than anything else. The fact that 17 US states—including some with Democratic governors—have unilaterally declared that they will not accept refugees (despite having absolutely no Constitutional authority to make such a declaration) is truly appalling.

Even if everything that the xenophobic bigots say were true—even if we really were opening ourselves to increased risk of terrorism and damaging our economy and subjecting ourselves to mass unemployment—we would still have a moral duty as human beings to help these people.

And of course almost all of it is false.

Only a tiny fraction of refugees are terrorists, indeed very likely smaller than the fraction of the native population or the fraction of those who arrive on legal visas, meaning that we would actually be diluting our risk of terrorism by accepting more refugees. And as you may recall from my post on 9/11, our risk of terrorism is already so small that the only thing we have to fear is fear itself.

There is a correlation between terrorism and refugees, but it’s almost entirely driven by the opposite effect: terrorism causes refugee crises.

The net aggregate economic effect of immigration is most likely positive. The effect on employment is more ambiguous; immigration does appear to create a small increase in unemployment in the short run as all those new people try to find jobs, and there is some evidence that it may reduce wages for local low-skill workers. But the employment effect is small and temporary, and there is a long-run boost in overall productivity. However, it may not have much effect on overall growth: the positive correlation between immigration and economic growth is primarily due to the fact that higher growth triggers more immigration.

And of course, it’s important to keep in mind that the reason wages are depressed at all is that people come from places where wages are even lower, so they improve their standard of living, but may also reduce the standard of living of some of the workers who were already here. The paradigmatic example is immigrants who leave a wage of $4 per hour in Mexico, arrive in California, and end up reducing wages in California from $10 to $8. While this certainly hurts some people who went from $10 to $8, it’s so narrow-sighted as to border on racism to ignore the fact that it also raised other people from $4 to $8. The overall effect is not simply to redistribute wealth from some to others, but actually to create more wealth. If there are things we can do to prevent low-skill wages from falling, perhaps we should; but systematically excluding people who need work is not the way to do that.

Accepting 10,000 more refugees would have a net positive effect on the American economy—though given our huge population and GDP, probably a negligible one. It has been pointed out that Germany’s relatively open policy advances the interests of Germany as much as it does those of the refugees; but so what? They are doing the right thing, even if it’s not for entirely altruistic reasons. One of the central insights of economics is that the universe is nonzero-sum; helping someone else need not mean sacrificing your own interests, and when it doesn’t, the right thing to do should be a no-brainer. Instead of castigating Germany for doing what needs to be done for partially selfish reasons, we should be castigating everyone else for not even doing what’s in their own self-interest because they are so bigoted and xenophobic they’d rather harm themselves than help someone else. (Also, it does not appear to be in Angela Merkel’s self-interest to take more refugees; she is spending a lot of political capital to make this happen.)

We could follow Germany’s example, and Obama’s plan would move us in that direction.

But the fact remains that we could go through with Obama’s plan, indeed double, triple, quadruple it—and still not make a significant dent in the actual population of refugees who need help. When 1,500,000 people need help and the most powerful nation in the world offers to help 10,000, that isn’t an act of great openness and generosity; it’s almost literally the least we could do. 10,000 is only 0.7% of 1.5 million; even if we simply accepted an amount of refugees proportional to our own population it would be more like 70,000. If we instead accepted an amount of refugees proportional to our GDP we should be taking on closer to 400,000.
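
Here is that arithmetic spelled out (a sketch; the world population of roughly 7.3 billion and world GDP of roughly $75 trillion are my own round figures for 2015):

    # What a proportional share of 1.5 million refugees would look like.
    refugees = 1.5e6
    us_pop, world_pop = 322e6, 7.3e9   # assumption: ~7.3 billion people worldwide
    us_gdp, world_gdp = 18e12, 75e12   # assumption: ~$75 trillion world GDP

    print(10_000 / refugees)              # 0.0067: the plan covers ~0.7%
    print(refugees * us_pop / world_pop)  # ~66,000: population-proportional share
    print(refugees * us_gdp / world_gdp)  # 360,000: GDP-proportional share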

This is why in fact I think option (2) may be the better choice.

There actually are real cultural and linguistic barriers to assimilation for Syrian people in the United States, barriers which are much lower in Turkey and Lebanon. Immigrant populations always inevitably assimilate eventually, but there is a period of transition which is painful for both immigrants and locals, often lasting a decade or more. On top of this there is the simple logistical cost of moving all those people that far; crossing the border into Lebanon is difficult enough without having to raft across the Mediterranean, let alone being airlifted or shipped all the way across the Atlantic afterward. The fact that many refugees are willing to bear such a cost serves to emphasize their desperation; but it also suggests that there may be alternatives that would work out better for everyone.

The United States has a large population at 322 million; but Turkey (78 million) has about a quarter of our population and Jordan (8 million) and Lebanon (6 million) are about the size of our largest cities.

Our GDP, on the other hand, is vastly larger. At $18 trillion, we have 12 times the GDP of Turkey ($1.5 T), and there are individual American billionaires with wealth larger than the GDPs of Lebanon ($50 B) and Jordan ($31 B).

This means that while we have an absolute advantage in population, we have a comparative advantage in wealth—and the benefits of trade depend on comparative advantage. It therefore makes sense for us to in a sense “trade” wealth for population; in exchange for taking on fewer refugees, we would offer to pay a larger share of the expenses involved in housing, feeding, and ultimately assimilating those refugees.

Another thing we could offer (and have a comparative as well as absolute advantage in) is technology. These surprisingly-nice portable shelters designed by IKEA are an example of how First World countries can contribute to helping refugees without necessarily accepting them into their own borders (as well as an example of why #Scandinaviaisbetter). We could be sending equipment and technicians to provide electricity, Internet access, or even plumbing to the refugee camps. We could ship them staple foods or even MREs. (On the other hand, I am not impressed by the tech entrepreneurs whose “solutions” apparently involve selling more smartphone apps.)

The idea of actually taking on 400,000 or even 70,000 additional people into the United States is daunting even for those of us who strongly believe in helping the refugees—in the former case we’re adding another Cleveland, and even in the latter we’d be almost doubling Dearborn. But if we estimate the cost of simply providing money to support the refugee camps, the figures come out a lot less demanding.

Charities are currently providing money on the order of millions—which is to say on the order of single dollars per person. GBP 887,000 sounds like a lot of money until you realize it’s less than $0.50 per Syrian refugee.

Suppose we were to grant $5,000 per refugee per year. That’s surely more than enough. The UN is currently asking for $6.5 billion, which is only about $1,500 per refugee.

Yet to supply that much for all 4 million refugees would cost us only $20 billion per year, a mere 0.1% of our GDP. (Or if you like, a mere 3% of our military budget, which is probably smaller than what the increase would be if we stepped up our military response to Daesh.)
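
Spelling out that arithmetic (the roughly $600 billion military budget is my own round figure for that year):

    # Cost of a $5,000-per-refugee-per-year grant for all 4 million refugees.
    cost = 5_000 * 4e6            # $20 billion per year
    us_gdp = 18e12
    military = 600e9              # assumption: ~$600 billion military budget

    print(cost / 1e9)             # 20.0 (billions of dollars per year)
    print(100 * cost / us_gdp)    # ~0.11: about 0.1% of GDP
    print(100 * cost / military)  # ~3.3: about 3% of the military budget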

I say we put it to a vote among the American people: Are you willing to accept a flat 0.1% increase in income tax in order to help the refugees? (Would you even notice?) This might create an incentive to become a refugee when you’d otherwise have tried to stay in Syria, but is that necessarily a bad thing? Daesh, like any state, depends upon its tax base to function, so encouraging emigration undermines Daesh taxpayer by taxpayer. We could make it temporary and tied to the relief efforts—or, more radically, we could not do that, and use it as a starting point to build an international coalition for a global basic income.

Right now a global $5,000 per person per year would not be feasible (that would be almost half of the world’s GDP); but something like $1,000 would be, and would eliminate world hunger immediately and dramatically reduce global poverty. The US alone could in fact provide a $1,000 global basic income, though it would cost $7.2 trillion, which is over 40% of our $18.1 trillion GDP—not beyond our means, but definitely stretching them to the limit. Yet simply by including Europe ($18.5 T), China ($12.9 T), Japan ($4.2 T), India ($2.2 T), and Brazil ($1.8 T), we’d reduce the burden among the whole $57.7 trillion coalition to 12.5% of GDP. That’s roughly what we already spend on Medicare and Social Security. Not a small amount, to be sure; but this would get us within arm’s reach of permanently ending global poverty.
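
Checking those coalition figures (a sketch; the world population of about 7.2 billion is the figure implicit in the $7.2 trillion cost):

    # A $1,000-per-person-per-year global basic income, funded by a coalition.
    world_pop = 7.2e9             # population figure implicit in the text
    cost = 1_000 * world_pop      # $7.2 trillion per year

    gdp = {"US": 18.1e12, "Europe": 18.5e12, "China": 12.9e12,
           "Japan": 4.2e12, "India": 2.2e12, "Brazil": 1.8e12}

    print(100 * cost / gdp["US"])          # ~40: percent of US GDP going it alone
    print(100 * cost / sum(gdp.values()))  # ~12.5: percent of the $57.7 T coalition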

Think of the goodwill we’d gain around the world; think of how much it would undermine Daesh’s efforts to recruit followers if everyone knew that just across the border is a guaranteed paycheck from that same United States that Daesh keeps calling the enemy. This isn’t necessarily contradictory to a policy of accepting more refugees, but it would be something we could implement immediately, with minimal cost to ourselves.

And I’m sure there’d be people complaining that we were only doing it to make ourselves look good and stabilize the region economically, and that it will all ultimately benefit us eventually—which is very likely true. But again, I say: So what? Would you rather we do the right thing and benefit from it, or do the wrong thing just so that we don’t also help ourselves?

Just give people money!

JDN 2457332 EDT 17:02.

Today is the Fifth of November, on which a bunch of people who liked a Hollywood movie start posting images in support of a fanatical religious terrorist in his plot to destroy democracy in the United Kingdom a few centuries ago. It’s really weird, but I’m not particularly interested in that.

Instead I’d like to talk about the solution to poverty, which we’ve known for a long time—in fact, it’s completely obvious—and yet have somehow failed to carry out. Many people doubt that it even works, not based on the empirical evidence, but because it just feels like it can’t be right, like it’s so obvious that surely it was tried and didn’t work and that’s why we moved on to other things. When you first tell a kindergartner that there are poor people in the world, that child will very likely ask: “Why don’t we just give them some money?”

Why not indeed?

Formally this is called a “direct cash transfer”, and it comes in many different variants, but basically they run along a continuum from unconditional—we just give it to everybody, no questions asked—to more and more conditional—you have to be below a certain income, or above a certain age, or have kids, or show up at our work program, or take a drug test, etc. The EU has a nice little fact sheet about the different types of cash transfer programs in use.

Actually, I’d argue that at the very far extreme is government salaries—the government will pay you $40,000 per year, provided that you teach high school every weekday. We don’t really think of that as a “conditional cash transfer” because it involves you providing a useful service (and is therefore more like an ordinary, private-sector salary), but many of the conditions imposed on cash transfers actually have this sort of character—we want people to do things that we think are useful to society, in order to justify us giving them the money. It really seems to be a continuum, from just giving money to everyone, to giving money to some people based on them doing certain things, to specifically hiring people to do something.

Social programs in different countries can be found at different places on this continuum. In the United States, our programs are extremely conditional, and also the total amount we give out is relatively small. In Europe, programs are not as conditional—though still conditional—and they give out more. And sure enough, after-tax poverty in Europe is considerably lower, even though before-tax poverty is about the same.

In fact, the most common way to make transfers conditional is to make them “in-kind”; instead of giving you money, we give you something—healthcare, housing, food. Sometimes this makes sense; actually I think for healthcare it makes the most sense, because price signals don’t work in a market as urgent and inelastic as healthcare (that is, you don’t shop around for an emergency room—in fact, people don’t even really shop around for a family doctor). But often it’s simply a condition we impose for political reasons; we don’t want those “lazy freeloaders” to do anything else with the money that we wouldn’t like, such as buying alcohol or gambling. Even poor people in India buy into this sort of reasoning. Never mind that they generally don’t do that, or that they could just shift away spending they would otherwise be making (warning: technical economics paper within) to do those things anyway—it’s the principle of the thing.

Direct cash transfers not only work—they work about as well as the best things we’ve tried. Spending on cash transfers is about as cost-effective as spending on medical aid and malaria nets.

Other than in experiments (the largest of which I’m aware of was a town in Canada, unless you count Alaska’s Permanent Fund Dividend, which is unconditional but quite small), we have never really tried implementing a fully unconditional cash transfer system. “Too expensive” is usually the complaint, and it would indeed be relatively expensive (probably greater than all of what we currently spend on Social Security and Medicare, which are two of our biggest government budget items). Implementing a program with a cost on the order of $2 trillion per year is surely not something to be done lightly. But it would have one quite substantial benefit: It would eliminate poverty in the United States immediately and forever.

This is why I really like the “abolish poverty” movement; we must recognize that at our current level of economic development, poverty is no longer a natural state or an unsolved technical problem. It is a policy decision that we are making. We are saying, as a society, that we would rather continue to have poverty than spend that $2 trillion per year, about 12% of our $17.4 trillion GDP. We are saying that we’d rather have people who are homeless and starving than lose 12 cents of every dollar we make. (To be fair, if we include the dynamic economic impact of this tax-and-transfer system it might actually turn out to be more than that; but it could in fact be less—the increased spending would boost the economy, just as the increased taxes would restrain it—and seems very unlikely to be more than 20% of GDP.)

For most of human history—and in most countries today—that is not the case. India could not abolish poverty immediately by a single tax policy; nor could China. Probably not Brazil either. Maybe Greece could do it, but then again maybe not. But Germany could; the United Kingdom could; France could; and we could in the United States. We have enough wealth now that with a moderate increase in government spending we could create an economic floor below which no person could fall. It is incumbent upon us at the very least to justify why we don’t.

I have heard it said that poverty is not a natural condition, but the result of human action. Even Nelson Mandela endorsed this view. This is false, actually. In general, poverty is the natural condition of all life forms on Earth (and probably all life forms in the universe). Natural selection evolves us toward fitting as many gene-packages into the environment as possible, not toward maximizing the happiness of the sentient beings those gene-packages may happen to be. To a first approximation, all life forms suffer in poverty.

We live at a unique time in human history; for no more than the last century—and perhaps not even that—we have actually had so much wealth that we could eliminate poverty by choice. For hundreds of thousands of years human beings toiled in poverty because there was no such choice. Perhaps good policy in Greece could end poverty today, but it couldn’t have during the reign of Pericles. Good policy in Italy could end poverty now, but not when Caesar was emperor. Good policy in the United Kingdom could easily end poverty immediately, but even under Queen Victoria that wasn’t feasible.

Maybe that’s why we aren’t doing it? Our cultural memory was forged in a time decades or centuries ago, before we had this much wealth to work with. We speak of “end world hunger” in the same breath as “cure cancer” or “conquer death”, a great dream that has always been impossible and perhaps still is—but in fact we should speak of it in the same breath as “split the atom” and “land on the Moon”, seminal achievements that our civilization is now capable of thanks to economic and technological revolution.

Capitalism also seems to have a certain momentum to it; once you implement a market economy that maximizes wealth by harnessing self-interest, people seem to forget that we are fundamentally altruistic beings. I may never forget that economist who sent me an email with “altruism” in scare quotes, as though it was foolish (or at best imprecise) to say that human beings care about one another. But in fact we are the most altruistic species on Earth, without question, in a sense so formal and scientific it can literally be measured quantitatively.

There are real advantages to harnessing self-interest—not least, I know my own interests considerably better than I know yours, no matter who you are—and that is part of how we have achieved this great level of wealth (though personally I think science, democracy, and the empowerment of women are the far greater causes of our prosperity). But we must not let it make us forget why we wanted to have wealth in the first place: Not to concentrate power in a handful of individuals who will pass it on to their heirs; not to “maximize work incentives”; not to give us the fanciest technological gadgets. The reason we wanted to have wealth was so that we could finally free ourselves from the endless toil that was our lot by birth and that of all other beings—to let us finally live, instead of merely survive. There is a peak to Maslow’s pyramid, and we could stand there now, together; but we must find the will to give up that 12 cents of every dollar.

Externalities

JDN 2457202 EDT 17:52.

The 1992 Bill Clinton campaign had a slogan: “It’s the economy, stupid.” A snowclone I’ve used on occasion is “It’s the externalities, stupid.” (Though I’m actually not all that fond of calling people ‘stupid’; though occasionally true, it is never polite and rarely useful.) Externalities are one of the most important concepts in economics, and yet one that all too many economists frequently neglect.

Fortunately for this one, I really don’t need much math; the concept isn’t even that complicated, which makes it all the more mysterious how frequently it is ignored. An externality is simply an effect that an action has upon those who were not involved in choosing to perform that action.

All sorts of actions have externalities; indeed, much rarer are actions that don’t. An obvious example is that punching someone in the face has the externality of injuring that person. Pollution is an important externality of many forms of production, because the people harmed by pollution are typically not the same people who were responsible for creating it. Traffic jams are created because every car on the road causes a congestion externality on all the other cars.

All the aforementioned are negative externalities, but there are also positive externalities. When one individual becomes educated, they tend to improve the overall economic viability of the place in which they live. Building infrastructure benefits whole communities. New scientific discoveries enhance the well-being of all humanity.

Externalities are a fundamental problem for the functioning of markets. In the absence of externalities—if each person’s actions only affected that one person and nobody else—then rational self-interest would be optimal and anything else would make no sense. In arguing that rationality is equivalent to self-interest, generations of economists have been, tacitly or explicitly, assuming that there are no such things as externalities.

This is a necessary assumption to show that self-interest would lead to something I discussed in an earlier post: Pareto-efficiency, in which the only way to make one person better off is to make someone else worse off. As I already talked about in that other post, Pareto-efficiency is wildly overrated; a wide variety of Pareto-efficient systems would be intolerable to actually live in. But in the presence of externalities, markets can’t even guarantee Pareto-efficiency, because it’s possible to have everyone acting in their rational self-interest cause harm to everyone at once.

This is called a tragedy of the commons; the basic idea is really quite simple. Suppose that when I burn a gallon of gasoline, that makes me gain 5 milliQALY by driving my car, but then makes everyone lose 1 milliQALY in increased pollution. On net, I gain 4 milliQALY, so if I am rational and self-interested I would do that. But now suppose that there are 10 people all given the same choice. If we all make that same choice, each of us will gain 5 milliQALY from our own driving—and then lose 10 milliQALY from everyone’s pollution, for a net loss of 5 milliQALY. We would all have been better off if none of us had done it, even though it made sense to each of us at the time. Burning a gallon of gasoline to drive my car is beneficial to me, more so than the release of carbon dioxide into the atmosphere is harmful; but as a result of millions of people burning gasoline, the carbon dioxide in the atmosphere is destabilizing our planet’s climate. We’d all be better off if we could find some way to burn less gasoline.
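
Here is that example as a minimal simulation (my own sketch of the numbers above):

    # Tragedy of the commons: each burner gains 5 mQALY from driving,
    # and each gallon burned costs all 10 people 1 mQALY in pollution.
    def net_payoffs(n_burners, n=10, benefit=5, harm=1):
        return [(benefit if i < n_burners else 0) - harm * n_burners
                for i in range(n)]

    print(net_payoffs(1))   # lone burner nets +4; the other nine lose 1 each
    print(net_payoffs(10))  # everyone burns: each person nets 5 - 10 = -5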

In order for rational self-interest to be optimal, externalities have to somehow be removed from the system. Otherwise, there are actions we can take that benefit ourselves but harm other people—and thus, we would all be better off if we acted to some degree altruistically. (When I say things like this, most non-economists think I am saying something trivial and obvious, while most economists insist that I am making an assertion that is radical if not outright absurd.)

But of course a world without externalities is a world of complete isolation; it’s a world where everyone lives on their own deserted island and there is no way of communicating or interacting with any other human being in the world. The only reasonable question about this world is whether we would die first or go completely insane first; clearly those are the two things that would happen. Human beings are fundamentally social animals—I would argue that we are in fact more social even than eusocial animals like ants and bees. (Ants and bees are only altruistic toward their own kin; humans are altruistic to groups of millions of people we’ve never even met.) Humans without social interaction are like flowers without sunlight.

Indeed, externalities are so common that if markets only worked in their absence, markets would make no sense at all. Fortunately this isn’t true; there are some ways that markets can be adjusted to deal with at least some kinds of externalities.

One of the most well-known is the Coase theorem; this is odd because it is by far the worst solution. The Coase theorem basically says that if you can assign and enforce well-defined property rights and there is absolutely no cost in making any transaction, markets will automatically work out all externalities. The basic idea is that if someone is about to perform an action that would harm you, you can instead pay them not to do it. Then, the harm to you will be prevented and they will incur an additional benefit.

In the above example, we could all agree to pay $30 (which let’s say is worth 1 milliQALY) to each person who doesn’t burn a gallon of gasoline that would pollute our air. Then, if I were thinking about burning some gasoline, I wouldn’t want to do it, because I’d lose the $270 in payments from the other nine people, which costs me 9 milliQALY, while the benefits of burning the gasoline are only 5 milliQALY. We all reason the same way, and the result is that nobody burns gasoline and the money exchanged all balances out so we end up where we were before. The result is that we are all better off.
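
And here is the same example with the side payments added (again my own sketch, taking $30 to be worth 1 milliQALY as above):

    # Coasean side payments: everyone pays $30 (1 mQALY) to each *other*
    # person who doesn't burn, so for a non-burner the payments cancel out.
    def my_net_payoff(i_burn, others_burning=0, n=10, benefit=5, harm=1, pay=1):
        n_burners = others_burning + (1 if i_burn else 0)
        driving = benefit if i_burn else 0
        received = 0 if i_burn else pay * (n - 1)  # 9 mQALY if I abstain
        paid = pay * (n - 1 - others_burning)      # I pay each other non-burner
        return driving - harm * n_burners + received - paid

    print(my_net_payoff(i_burn=False))  # 0: payments received and made cancel
    print(my_net_payoff(i_burn=True))   # -5: losing the payments outweighs driving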

The first thought you probably have is: How do I pay everyone who doesn’t hurt me? How do I even find all those people? How do I ensure that they follow through and actually don’t hurt me? These are the problems of transaction costs and contract enforcement that are usually presented as the problem with the Coase theorem, and they certainly are very serious problems. You end up needing some sort of government simply to enforce all those contracts, and even then there’s the question of how we can possibly locate everyone who has ever polluted our air or our water.

But in fact there’s an even more fundamental problem: This is extortion. We are almost always in the condition of being able to harm other people, and a system in which the reason people don’t hurt each other is because they’re constantly paying each other not to is a system in which the most intimidating psychopath is the wealthiest person in the world. That system is in fact Pareto-efficient (the psychopath does quite well for himself indeed); but it’s exactly the sort of Pareto-efficient system that isn’t worth pursuing.

Another response to externalities is simply to accept them, which isn’t as awful as it sounds. There are many kinds of externalities that really aren’t that bad, and anything we might do to prevent them would likely be a cure worse than the disease. Think about the externality of people standing in front of you in line, or the externality of people buying the last cereal box off the shelf before you can get there. The externality of taking the job you applied for may hurt at the time, but in the long run that’s how we maintain a thriving and competitive labor market. In fact, even the externality of ‘gentrifying’ your neighborhood so you can no longer afford it is not nearly as bad as most people seem to think—indeed, the much larger problem seems to be the poor neighborhoods that don’t have rising incomes, remaining poor for generations. (It also makes no sense to call this “gentrifying”; the only landed gentry we have in America is the landowners who claim a ludicrous proportion of our wealth, not the middle-class people who buy cheap homes and move in. If you really want to talk about a gentry, you should be thinking Waltons and Kochs—or Bushes and Clintons.)

These sorts of minor externalities that are better left alone are sometimes characterized as pecuniary externalities because they usually are linked to prices, but I think that really misses the point; it’s quite possible for an externality to be entirely price-related and do enormous damage (read: the entire financial system) and to have little or nothing to do with prices and still be not that bad (like standing in line as I mentioned above).

But obviously we can’t leave all externalities alone in this way. We can’t just let people rob and murder one another arbitrarily, or ignore the destruction of the world’s climate that threatens hundreds of millions of lives. We can’t stand back and let forests burn and rivers run dry when we could easily have saved them.

The much more reasonable and realistic response to externalities is what we call government—there are rules you have to follow in society and punishments you face if you don’t. We can avoid most of the transaction problems involved in figuring out who polluted our water by simply making strict rules about polluting water in general. We can prevent people from stealing each other’s things or murdering each other by police who will investigate and punish such crimes.

This is why regulation—and a government strong enough to enforce that regulation—is necessary for the functioning of a society. This dichotomy we have been sold about “regulations versus the market” is totally nonsensical; the market depends upon regulations. This doesn’t justify any particular regulation—and indeed, an awful lot of regulations are astonishingly bad. But some sort of regulatory system is necessary for a market to function at all, and the question has never been whether we will have regulations but which regulations we will have. People who argue that all regulations must go and the market would somehow work on its own are either deeply ignorant of economics or operating from an ulterior motive; some truly horrendous policies have been made by arguing that “less government is always better” when the truth is nothing of the sort.

In fact, there is one real-world method I can think of that actually comes reasonably close to eliminating all externalities—and it is called social democracy. By involving everyone—democracy—in a system that regulates the economy—socialism—we can, in a sense, involve everyone in every transaction, and thus make it impossible to have externalities. In practice it’s never that simple, of course; but the basic concept of involving our whole society in making the rules that our society will follow is sound—and in fact I can think of no reasonable alternative.

We have to institute some sort of regulatory system, but then we need to decide what the regulations will be and who will control them. If we want instead to vest power in a technocratic elite, how do we decide whom to include in that elite? How do we ensure that the technocrats are actually better for the general population if there is no way for that general population to have a say in their selection? By involving as many people as we can in the decision-making process, we make it much less likely that one person’s selfish action will harm many others. Indeed, this is probably why democracy prevents famine and genocide—which are, after all, rather extreme examples of negative externalities.

How to change the world

JDN 2457166 EDT 17:53.

I just got back from watching Tomorrowland, which is oddly appropriate since I had already planned this topic in advance. How do we, as they say in the film, “fix the world”?

I can’t find it at the moment, but I vaguely remember some radio segment on which a couple of neoclassical economists were interviewed and asked what sort of career can change the world, and they answered something like, “Go into finance, make a lot of money, and then donate it to charity.”

In a slightly more nuanced form this strategy is called earning to give, and frankly I think it’s pretty awful. Most of the damage that is done to the world is done in the name of maximizing profits, and basically what you end up doing is stealing people’s money and then claiming you are a great altruist for giving some of it back. I guess if you can make enormous amounts of money doing something that isn’t inherently bad and then donate that—like what Bill Gates did—it seems better. But realistically your potential income is probably not actually raised that much by working in finance, sales, or oil production; you could have made the same income as a college professor or a software engineer without actively stripping the world of its prosperity. If we actually had the sort of ideal policies that would internalize all externalities, this dilemma wouldn’t arise; but we’re nowhere near that, and if we did have that system, the only billionaires would be Nobel laureate scientists. Albert Einstein was a million times more productive than the average person. Steve Jobs was just a million times luckier. Even then, there is the very serious question of whether it makes sense to give all the fruits of genius to the geniuses themselves, who very quickly find they have all they need while others starve. It was certainly Jonas Salk’s view that his work should only profit him modestly and its benefits should be shared with as many people as possible. So really, in an ideal world there might be no billionaires at all.

Here I would like to present an alternative. If you are an intelligent, hard-working person with a lot of talent and the dream of changing the world, what should you be doing with your time? I’ve given this a great deal of thought in planning my own life, and here are the criteria I came up with:

  1. You must be willing and able to commit to doing it despite great obstacles. This is another reason why earning to give doesn’t actually make sense; your heart (or rather, limbic system) won’t be in it. You’ll be miserable, you’ll become discouraged and demoralized by obstacles, and others will surpass you. In principle Wall Street quantitative analysts who make $10 million a year could donate 90% to UNICEF, but they don’t, and you know why? Because the kind of person who is willing and able to exploit and backstab their way to that position is the kind of person who doesn’t give money to UNICEF.
  2. There must be important tasks to be achieved in that discipline. This one is relatively easy to satisfy; I’ll give you a list in a moment of things that could be contributed by a wide variety of fields. Still, it does place some limitations: For one, it rules out the simplest form of earning to give (a more nuanced form might cause you to choose quantum physics over social work because it pays better and is just as productive—but you’re not simply maximizing income to donate). For another, it rules out routine, ordinary jobs that the world needs but don’t make significant breakthroughs. The world needs truck drivers (until robot trucks take off), but there will never be a great world-changing truck driver, because even the world’s greatest truck driver can only carry so much stuff so fast. There are no world-famous secretaries or plumbers. People like to say that these sorts of jobs “change the world in their own way”, which is a nice sentiment, but ultimately it just doesn’t get things done. We didn’t lift ourselves into the Industrial Age by people being really fantastic blacksmiths; we did it by inventing machines that make blacksmiths obsolete. We didn’t rise to the Information Age by people being really good slide-rule calculators; we did it by inventing computers that work a million times as fast as any slide-rule. Maybe not everyone can have this kind of grand world-changing impact; and I certainly agree that you shouldn’t have to in order to live a good life in peace and happiness. But if that’s what you’re hoping to do with your life, there are certain professions that give you a chance of doing so—and certain professions that don’t.
  3. The important tasks must be currently underinvested. There are a lot of very big problems that many people are already working on. If you work on the problems that are trendy, the ones everyone is talking about, your marginal contribution may be very small. On the other hand, you can’t just pick problems at random; many problems are not invested in precisely because they aren’t that important. You need to find problems people aren’t working on but should be—problems that should be the focus of our attention but for one reason or another get ignored. A good example here is to work on pancreatic cancer instead of breast cancer; breast cancer research is drowning in money and really doesn’t need any more; pancreatic cancer kills 2/3 as many people but receives less than 1/6 as much funding. If you want to do cancer research, you should probably be doing pancreatic cancer.
  4. You must have something about you that gives you a comparative—and preferably, absolute—advantage in that field. This is the hardest one to achieve, and it is in fact the reason why most people can’t make world-changing breakthroughs. It is in fact so hard to achieve that it’s difficult to even say you have until you’ve already done something world-changing. You must have something special about you that lets you achieve what others have failed. You must be one of the best in the world. Even as you stand on the shoulders of giants, you must see further—for millions of others stand on those same shoulders and see nothing. If you believe that you have what it takes, you will be called arrogant and naïve; and in many cases you will be. But in a few cases—maybe 1 in 100, maybe even 1 in 1000, you’ll actually be right. Not everyone who believes they can change the world does so, but everyone who changes the world believed they could.

Now, what sort of careers might satisfy all these requirements?

Well, basically any kind of scientific research:

Mathematicians could work on network theory, or nonlinear dynamics (the first step: separating “nonlinear dynamics” into the dozen or so subfields it should actually comprise—as has been remarked, “nonlinear” is a bit like “non-elephant”), or data processing algorithms for our ever-growing morasses of unprocessed computer data.

Physicists could be working on fusion power, or ways to neutralize radioactive waste, or fundamental physics that could one day unlock technologies as exotic as teleportation and faster-than-light travel. They could work on quantum encryption and quantum computing. Or if those are still too applied for your taste, you could work in cosmology and seek to answer some of the deepest, most fundamental questions in human existence.

Chemists could be working on stronger or cheaper materials for infrastructure—the extreme example being space elevators—or technologies to clean up landfills and oceanic pollution. They could work on improved batteries for solar and wind power, or nanotechnology to revolutionize manufacturing.

Biologists could work on any number of diseases, from cancer and diabetes to malaria and antibiotic-resistant tuberculosis. They could work on stem-cell research and regenerative medicine, or genetic engineering and body enhancement, or on gerontology and age reversal. Biology is a field with so many important unsolved problems that if you have the stomach for it and the interest in some biological problem, you can’t really go wrong.

Electrical engineers can obviously work on improving the power and performance of computer systems, though I think over the last 20 years or so the marginal benefits of that kind of research have begun to wane. Efforts might be better spent in cybernetics, control systems, or network theory, where considerably more is left uncharted; or in artificial intelligence, where computing power is only the first step.

Mechanical engineers could work on making vehicles safer and cheaper, or building reusable spacecraft, or designing self-constructing or self-repairing infrastructure. They could work on 3D printing and just-in-time manufacturing, scaling it up for whole factories and down for home appliances.

Aerospace engineers could link the world with hypersonic travel, build satellites to provide Internet service to the farthest reaches of the globe, or create interplanetary rockets to colonize Mars and the moons of Jupiter and Saturn. They could mine asteroids and make previously rare metals ubiquitous. They could build aerial drones for delivery of goods and revolutionize logistics.

Agronomists could work on sustainable farming methods (hint: stop farming meat), or invent new strains of crops that are hardier against pests, more nutritious, or higher-yielding. On the other hand, a lot of this is already being done, so maybe it’s time to think outside the box and consider what we might do to make our food system more robust against climate change or other catastrophes.

Ecologists will obviously be working on predicting and mitigating the effects of global climate change, but there are a wide variety of ways of doing so. You could focus on ocean acidification, or on desertification, or on fishery depletion, or on carbon emissions. You could work on getting the climate models so precise that they become completely undeniable to anyone but the most dogmatically opposed. You could focus on endangered species and habitat disruption. Ecology is in general so underfunded and undersupported that basically anything you could do in ecology would be beneficial.

Neuroscientists have plenty of things to do as well: Understanding vision, memory, motor control, facial recognition, emotion, decision-making and so on. But one topic in particular is lacking in researchers, and that is the fundamental Hard Problem of consciousness. This one is going to be an uphill battle, and will require a special level of tenacity and perseverance. The problem is so poorly understood it’s difficult to even state clearly, let alone solve. But if you could do it—if you could even make a significant step toward it—it could literally be the greatest achievement in the history of humanity. It is one of the fundamental questions of our existence, the very thing that separates us from inanimate matter, the very thing that makes questions possible in the first place. Understand consciousness and you understand the very thing that makes us human. That achievement is so enormous that it seems almost petty to point out that the revolutionary effects of artificial intelligence would also fall into your lap.

The arts and humanities also have a great deal to contribute, and are woefully underappreciated.

Artists, authors, and musicians all have the potential to make us rethink our place in the world, to reconsider and reimagine what we believe and strive for. If physics and engineering can make us better at winning wars, art and literature can remind us why we should never fight them in the first place. The greatest works of art can remind us of our shared humanity and link us all together in a grander civilization that transcends the petty boundaries of culture, geography, or religion. Art can also be timeless in a way nothing else can; most of Aristotle’s science is long-since refuted, but even the Great Pyramid, built thousands of years before him, continues to awe us. (Aristotle is about equidistant chronologically between us and the Great Pyramid.)

Philosophers may not seem like they have much to add—and to be fair, a great deal of what goes on today in metaethics and epistemology doesn’t add much to civilization—but in fact it was Enlightenment philosophy that brought us democracy, the scientific method, and market economics. Today there are still major unsolved problems in ethics—particularly bioethics—that are in need of philosophical research. Technologies like nanotechnology and genetic engineering offer us the promise of enormous benefits, but also the risk of enormous harms; we need philosophers to help us decide how to use these technologies to make our lives better instead of worse. We need to know where to draw the lines between life and death, between justice and cruelty. Literally nothing could be more important than knowing right from wrong.

Now that I have sung the praises of the natural sciences and the humanities, let me explain why I am a social scientist, and why you probably should be as well.

Psychologists and cognitive scientists obviously have a great deal to give us in the study of mental illness, but they may actually have more to contribute in the study of mental health—in understanding not just what makes us depressed or schizophrenic, but what makes us happy or intelligent. The 21st century may see not only the end of mental illness, but also the rise of a new level of mental prosperity, where being happy, focused, and motivated are matters of course. The revolution that biology has brought to our lives may pale in comparison to the revolution that psychology will bring. On the more social side of things, psychology may allow us to understand nationalism, sectarianism, and the tribal instinct in general, and allow us to finally learn to undermine fanaticism, encourage critical thought, and make people more rational. The benefits of this are almost impossible to overstate: It is our own limited, broken, 90%-or-so heuristic rationality that has brought us from simians to Shakespeare, from gorillas to Gödel. To raise that figure to 95% or 99% or 99.9% could be as revolutionary as whatever evolutionary change first brought us out of the savannah as Australopithecus africanus.

Sociologists and anthropologists will also have a great deal to contribute to this process, as they approach the tribal instinct from the top down. They may be able to tell us how nations are formed and undermined, why some cultures assimilate and others collide. They can work to understand and combat bigotry in all its forms: racism, sexism, ethnocentrism. These could be the fields that finally end war, by understanding and correcting the imbalances in human societies that give rise to violent conflict.

Political scientists and public policy researchers can allow us to understand and restructure governments, undermining corruption, reducing inequality, making voting systems more expressive and more transparent. They can search for the keystones of different political systems, finding the weaknesses in democracy to shore up and the weaknesses in autocracy to exploit. They can work toward a true international government, representative of all the world’s people and with the authority and capability to enforce global peace. If the sociologists don’t end war and genocide, perhaps the political scientists can—or more likely they can do it together.

And then, at last, we come to economists. While I certainly work with a lot of ideas from psychology, sociology, and political science, I primarily consider myself an economist. Why is that? Why do I think the most important problems for me—and perhaps everyone—to be working on are fundamentally economic?

Because, above all, economics is broken. The other social sciences are basically on the right track; their theories are still very limited, their models are not very precise, and there are decades of work left to be done, but the core principles upon which they operate are correct. Economics is the field to work in because of criterion 3: Almost all the important problems in economics are underinvested.

Macroeconomics is where we are doing relatively well, yet the Keynesian models that allowed us to reduce the damage of the Second Depression still had no power to predict its arrival. While inflation has been at least somewhat tamed, the far worse problem of unemployment has not been resolved or even really understood.

When we get to microeconomics, the neoclassical models are totally defective. Their core assumptions of total rationality and total selfishness are embarrassingly wrong. We have no idea what controls asset prices, or decides credit constraints, or motivates investment decisions. Our models of how people respond to risk are all wrong. We have no formal account of altruism or its limitations. As manufacturing is increasingly automated and work shifts into services, most economic models make no distinction between the two sectors. While finance takes over more and more of our society’s wealth, most formal models of the economy don’t even include a financial sector.

Economic forecasting is no better than chance. The most widely-used asset-pricing model, CAPM, fails completely in empirical tests; its defenders concede this and then have the audacity to declare that it doesn’t matter because the mathematics works. The Black-Scholes derivative-pricing model that caused the Second Depression could easily have been predicted to do so, because it assumes that returns are normally distributed when we know for a fact that financial markets are fat-tailed; simply put, it claims that certain events will never happen which actually occur several times a year.
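Just to show the scale of that mistake, here is a quick back-of-the-envelope calculation (my own illustration, not drawn from any particular dataset; the 1% daily volatility is a rough stylized figure for a broad stock index):

```python
# How often should a 5-sigma trading day occur if returns really were
# normally distributed, as Black-Scholes assumes? (Illustrative numbers:
# ~1% daily volatility is a rough stylized figure, not fitted data.)
from scipy.stats import norm

daily_sigma = 0.01            # assumed daily standard deviation (~1%)
move = 0.05                   # a 5% one-day move: a 5-sigma event
trading_days_per_year = 252

p = 2 * norm.sf(move / daily_sigma)   # two-sided tail probability per day
years_between = 1 / (p * trading_days_per_year)

print(f"P(|move| >= 5 sigma) on a given day: {p:.1e}")
print(f"Expected about once every {years_between:,.0f} years")
# The normal model says roughly once in seven thousand years; real
# markets have produced moves this size many times in living memory.
```

And a 5% day is the mild case: the normal distribution says a day like October 19, 1987—a drop of over 20%, more than twenty sigma—should essentially never happen in the lifetime of the universe. It happened.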

Worst of all, economics is the field that people listen to. When a psychologist or sociologist says something on television, people say that it sounds interesting and basically ignore it. When an economist says something on television, national policies are shifted accordingly. Austerity exists as national policy in part due to a spreadsheet error by two famous economists.

Keynes already knew this in 1936: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Meanwhile, the problems that economics deals with have a direct influence on the lives of millions of people. Bad economics gives us recessions and depressions; it cripples our industries and siphons off wealth to an increasingly corrupt elite. Bad economics literally starves people: It is because of bad economics that there is still such a thing as world hunger. We have enough food, we have the technology to distribute it—but we don’t have the economic policy to lift people out of poverty so that they can afford to buy it. Bad economics is why we don’t have the funding to cure diabetes or colonize Mars (but we have the funding for oil fracking and aircraft carriers, don’t we?). All of that other scientific research that needs to be done probably could be done, if the resources of our society were properly distributed and utilized.

This combination of overwhelming influence, overwhelming importance, and overwhelming error makes economics the low-hanging fruit; you don’t even have to be particularly brilliant to have better ideas than most economists (though no doubt it helps if you are). Economics is where we have a whole bunch of important questions that are unanswered—or the answers we have are wrong. (As Will Rogers said, “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)

Thus, rather than tell you to go into finance and earn to give, those economists could simply have said: “You should become an economist. You could hardly do worse than we have.”

Happy Capybara Day! Or the power of culture

JDN 2457131 EDT 14:33.

Did you celebrate Capybara Day yesterday? You didn’t? Why not? We weren’t able to find any actual capybaras this year, but maybe next year we’ll be able to plan better and find a capybara at a zoo; unfortunately the nearest zoo with a capybara appears to be in Maryland. But where would we be without a capybara to consult annually on the stock market?

Right now you are probably rather confused, perhaps wondering if I’ve gone completely insane. This is because Capybara Day is a holiday of my own invention, one which only a handful of people have even heard about.

But if you think we’d never have a holiday so bizarre, think again: For all I did was make some slight modifications to Groundhog Day. Instead of consulting a groundhog about the weather every February 2, I proposed that we consult a capybara about the stock market every April 17. And if you think you have some reason why groundhogs are better at predicting the weather (perhaps because they at least have some vague notion of what weather is) than capybaras are at predicting the stock market (since they have no concept of money or numbers), think about this: Capybara Day could produce extremely accurate predictions, provided only that people actually believed it. The prophecy of rising or falling stock prices could very easily become self-fulfilling. If it were a cultural habit of ours to consult capybaras about the stock market, capybaras would become good predictors of the stock market.
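If that sounds like cheating, here is a toy simulation of the mechanism (all of the numbers are invented: believers is the fraction of net trading demand that follows the capybara, and the noise term stands in for everything else that moves prices):

```python
# Toy model of a self-fulfilling prophecy. Each Capybara Day the
# capybara "predicts" up or down at random. If enough traders act on
# the prediction, their net demand moves the price in the predicted
# direction -- so the prediction comes true with no forecasting skill.
import random

random.seed(17)  # April 17, naturally

def simulate(years=10000, believers=0.3, noise=1.0):
    correct = 0
    for _ in range(years):
        prediction = random.choice([+1, -1])
        # Price change = believers trading on the prophecy + everything else
        price_change = believers * prediction + random.gauss(0, noise)
        if (price_change > 0) == (prediction > 0):
            correct += 1
    return correct / years

for share in (0.0, 0.3, 1.0):
    print(f"believers={share:.0%}: capybara correct {simulate(believers=share):.0%} of the time")
```

With no believers the capybara is right about half the time, as you’d expect from a rodent; with even a modest following it starts to look like an oracle.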

That might seem a bit far-fetched, but think about this: Why is there a January Effect? (To be fair, some researchers argue that there isn’t, and the apparent correlation between higher stock prices and the month of January is simply an illusion, perhaps the result of data overfitting.)

But I think it probably is real, and moreover has some very obvious reasons behind it. In this I’m in agreement with Richard Thaler, a founder of cognitive economics who wrote about such anomalies in the 1980s. December is a time when two very culturally-important events occur: The end of the year, during which many contracts end, profits are assessed, and tax liabilities are determined; and Christmas, the greatest surge of consumer spending and consumer debt.

The first effect means that corporations are very likely to liquidate assets—particularly assets that are running at a loss—in order to minimize their tax liabilities for the year, which will drive down prices. The second effect means that consumers are in search of financing for extravagant gift purchases, and those who don’t run up credit cards may instead sell off stocks. This is if anything a more rational way of dealing with the credit constraint, since interest rates on credit cards are typically far in excess of stock returns. But this surge of selling due to credit constraints further depresses prices.

In January, things return to normal; assets are repurchased, debt is repaid. This brings prices back up to where they were, which results in a higher than normal return for January.
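Here is that mechanism in miniature (a toy calculation with invented numbers: a 0.7% baseline monthly return and a 2% December selling discount):

```python
# Toy January Effect: December forced selling pushes the price below
# fundamental value; January repurchases snap it back, producing an
# above-baseline January return. All numbers are illustrative.
fundamental_growth = 1.007   # assumed ~0.7% per month baseline return
selling_discount = 0.98      # assumed 2% price depression from December selling

value = 100.0                                              # fundamental value, end of November
dec_price = value * fundamental_growth * selling_discount  # depressed December close
jan_price = value * fundamental_growth ** 2                # buyers return; price rejoins fundamentals

print(f"December return: {dec_price / value - 1:+.2%}")
print(f"January return:  {jan_price / dec_price - 1:+.2%} (baseline {fundamental_growth - 1:.2%})")
```

A 2% December dip doesn’t just cancel out; it shows up as roughly two extra points of January return, which is on the order of the anomaly people argue about.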

Neoclassical economists are loath to admit that such a seasonal effect could exist, because it violates their concept of how markets work—and to be fair, the January Effect is actually weak enough to be somewhat ambiguous. But actually it doesn’t take much deviation from neoclassical models to explain the effect: Tax policies and credit constraints are basically enough to do it, so you don’t even need to go that far into understanding human behavior. It’s perfectly rational to behave this way given the distortions that are created by taxes and credit limits, and the arbitrage opportunity is one that you can only take advantage of if you have large amounts of credit and aren’t worried about minimizing your tax liabilities. It’s important to remember just how strong the assumptions of models like CAPM truly are; in addition to the usual infinite identical psychopaths, CAPM assumes there are no taxes, no transaction costs, and unlimited access to credit. I’d say it’s amazing that it works at all, but actually, it doesn’t—check out this graph of risk versus return and tell me if you think CAPM is actually giving us any information at all about how stock markets behave. It frankly looks like you could have drawn a random line through a scatter plot and gotten just as good a fit. Knowing how strong its assumptions are, we would not expect CAPM to work—and sure enough, it doesn’t.
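For reference, the prediction that graph is testing is the standard security market line (this is just the textbook statement of CAPM, nothing specific to that dataset):

$$\mathbb{E}[R_i] - R_f = \beta_i \left( \mathbb{E}[R_m] - R_f \right), \qquad \beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}$$

Every asset’s average excess return is supposed to line up on a single straight line in beta, with nothing else mattering; the scatter in that graph is what the data actually do.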

Of course, that leaves the question of why our tax policy would be structured in this way—why make the year end on December 31 instead of some other date? And for that, you need to go back through hundreds of years of history: to the Gregorian calendar, which was shaped by Christianity, and before it the Julian calendar—in other words, to culture.

Culture is one of the most powerful forces that influences human behavior—and also one of the strangest and least-understood. Economic theory is basically silent on the matter of culture. Typically it is ignored entirely, assumed to be irrelevant against the economic incentives that are the true drivers of human action. (There’s a peculiar emotion many neoclassical economists express that I can best describe as self-righteous cynicism, the attitude that we alone—i.e., economists—understand that human beings are not the noble and altruistic creatures many imagine us to be, nor beings of art and culture, but simply cold, calculating machines whose true motives are reducible to profit incentives—and all who think otherwise are being foolish and naïve; true enlightenment is understanding that human beings are infinite identical psychopaths. This is the attitude epitomized by the economist who once sent me an email with “altruism” written in scare quotes.)

Occasionally culture will be invoked as an external (in jargon, exogenous) force, to explain some aspect of human behavior that is otherwise so totally irrational that even invoking nonsensical preferences won’t make it go away. When a suicide bomber blows himself up in a crowd of people, it’s really pretty hard to explain that in terms of rational profit incentives—though I have seen it tried. (It could be self-interest at a larger scale, like families or nations—but then, isn’t that just the tribal paradigm I’ve been arguing for all along?)

But culture doesn’t just motivate us to do extreme or wildly irrational things. It motivates us all the time, often in quite beneficial ways; we wait in line, hold doors for people walking behind us, tip waiters who serve us, and vote in elections, not because anyone pressures us directly to do so (unlike, say, Australia, we do not have compulsory voting) but because it’s what we feel we ought to do. There is a sense of altruism—and altruism provides the ultimate justification for why it is right to do these things—but the primary motivator in most cases is culture: that’s what people do, and are expected to do, around here.

Indeed, even when there is a direct incentive against behaving a certain way—like criminal penalties against theft—the probability of actually suffering a direct penalty is generally so low that it really can’t be our primary motivation. Instead, the reason we don’t cheat and steal is that we think we shouldn’t, and a major part of why we think we shouldn’t is that we have cultural norms against it.

We can actually observe differences in cultural norms across countries in the laboratory. In this 2008 study by Massimo Castro (PDF), which compared British and Italian people playing an economic game called the public goods game, in which you can pay a cost yourself to benefit the group as a whole, it was found not only that people were less willing to benefit groups of foreigners than groups of compatriots, but also that British people were overall more generous than Italians. This 2010 study by Gächter et al. (actually Joshua Greene talked about it last week) compared how people play the game in various cities and found three basic patterns: In Western European and American cities such as Zurich, Copenhagen, and Boston, cooperation started out high and remained high throughout; people were just cooperative in general. In Asian cities such as Chengdu and Seoul, cooperation started out low, but if people were punished for not cooperating, cooperation would improve over time, eventually reaching about the same place as in the highly cooperative cities. And in Mediterranean cities such as Istanbul, Athens, and Riyadh, cooperation started low and stayed low—even when people could be punished for not cooperating, nobody actually punished them. (These patterns are broadly consistent with the World Bank corruption ratings of these regions, by the way; Western Europe shows very low corruption, while Asia and the Mediterranean show high corruption. Of course this isn’t all that’s going on—and Asia isn’t actually much less corrupt than the Middle East, though this experiment might make you think so.)
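For those who haven’t seen it, the structure of the public goods game fits in a few lines of code (a minimal sketch of the standard setup; the endowment and multiplier here are illustrative, not the parameters of either study):

```python
# Public goods game: each player keeps whatever they don't contribute;
# contributions are pooled, multiplied, and split equally. Contributing
# helps the group but costs the contributor, so free-riding pays.
def payoffs(contributions, endowment=20.0, multiplier=1.6):
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

print(payoffs([20, 20, 20, 0]))    # [24.0, 24.0, 24.0, 44.0]: the free-rider wins
print(payoffs([20, 20, 20, 20]))   # [32.0, 32.0, 32.0, 32.0]: but cooperation beats...
print(payoffs([0, 0, 0, 0]))       # [20.0, 20.0, 20.0, 20.0]: ...universal defection
```

The free-rider always out-earns the cooperators in his own group, yet everyone does better under full cooperation than under universal defection; how people resolve that tension, and whether they punish those who don’t contribute, is exactly what varies across cultures in these studies.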

Interestingly, these cultural patterns showed Melbourne as behaving more like an Asian city than a Western European one—perhaps being in the Pacific has rubbed off on Australia more than they realize.

This is very preliminary, cutting-edge research I’m talking about, so be careful about drawing too many conclusions. But in general we’ve begun to find some fairly clear cultural differences in economic behavior across different societies. While this would not be at all surprising to a sociologist or anthropologist, it’s the sort of thing that economists have insisted for years is impossible.

This is the frontier of cognitive economics, in my opinion. We know that culture is a very powerful motivator of our behavior, and it is time for us to understand how it works—and then, how it can be changed. We know that culture can be changed—cultural norms do change over time, sometimes remarkably rapidly; but we have only a faint notion of how or why they change. Changing culture has the power to do things that simply changing policy cannot, however; policy requires enforcement, and when the enforcement is removed the behavior will often disappear. But if a cultural norm can be imparted, it could sustain itself for a thousand years without any government action at all.