The evolution of human cooperation

Jun 17 JDN 2458287

If alien lifeforms were observing humans (assuming they didn’t turn out the same way—which they actually might, for reasons I’ll get to shortly), the thing that would probably baffle them the most about us is how we organize ourselves into groups. Each individual may be part of several groups at once, and some groups are closer-knit than others; but the most tightly-knit groups exhibit extremely high levels of cooperation, coordination, and self-sacrifice.

They might think at first that we are eusocial, like ants or bees; but upon closer study they would see that our groups are not very strongly correlated with genetic relatedness. We are somewhat more closely related to those in our groups than to those outside, usually; but it’s a remarkably weak effect, especially compared to the extremely high relatedness of worker bees in a hive. No, to a first approximation, these groups are of unrelated humans; yet their level of cooperation is equal to if not greater than that exhibited by the worker bees.

However, the alien anthropologists would find that it is not that humans are simply predisposed toward extremely high altruism and cooperation in general; when two human groups come into conflict, they are capable of the most extreme forms of violence imaginable. Human history is full of atrocities that combine the indifferent brutality of nature red in tooth and claw with the boundless ingenuity of a technologically advanced species. Yet except for a small proportion perpetrated by individual humans with some sort of mental pathology, these atrocities are invariably committed by one unified group against another. Even in genocide there is cooperation.

Humans are not entirely selfish. But nor are they paragons of universal altruism (though some of them aspire to be). Humans engage in a highly selective form of altruism—virtually boundless for the in-group, almost negligible for the out-group. Humans are tribal.

Being a human yourself, you probably don’t find this particularly strange. Indeed, I’ve mentioned it many times previously on this blog. But it is actually quite strange, from an evolutionary perspective; most organisms are not like this.

As I said earlier, there is actually reason to think that our alien anthropologist would come from a species with similar traits, simply because such cooperation may be necessary to achieve a full-scale technological civilization, let alone the capacity for interstellar travel. But there might be other possibilities; perhaps they come from a eusocial species, and their large-scale cooperation is within an extremely large hive.

It’s true that most organisms are not entirely selfish. There are various forms of cooperation within and even across species. But these usually involve only close kin, and otherwise involve highly stable arrangements of mutual benefit. There is nothing like the large-scale cooperation between anonymous unrelated individuals that is exhibited by all human societies.

How would such an unusual trait evolve? It must require a very particular set of circumstances, since it only seems to have evolved in a single species (or at most a handful of species, since other primates and cetaceans display some of the same characteristics).

Once evolved, this trait is clearly advantageous; indeed it turned a local apex predator into a species so successful that it can actually intentionally control the evolution of other species. Humans have become a hegemon over the entire global ecology, for better or for worse. Cooperation gave us a level of efficiency in producing the necessities of survival so great that at this point most of us spend our time working on completely different tasks. If you are not a farmer or a hunter or a carpenter (and frankly, even if you are a farmer with a tractor, a hunter with a rifle, or a carpenter with a table saw), you are doing work that would simply not have been possible without very large-scale human cooperation.

This extremely high fitness benefit only makes the matter more puzzling, however: If the benefits are so great, why don’t more species do this? There must be some other requirements that other species were unable to meet.

One clear requirement is high intelligence. As frustrating as it may be to be a human and watch other humans kill each other over foolish grievances, this is actually evidence of how smart humans are, biologically speaking. We might wish we were even smarter still—but most species don’t have the intelligence to make it even as far as we have.

But high intelligence is likely not sufficient. We can’t be sure of that, since we haven’t encountered any other species with equal intelligence; but what we do know is that even Homo sapiens didn’t coordinate on anything like our current scale for tens of thousands of years. We may have had tribal instincts, but if so they were largely confined to a very small scale. Something happened, about 50,000 years ago or so—not very long ago in evolutionary time—that allowed us to increase that scale dramatically.

Was this a genetic change? It’s difficult to say. There could have been some subtle genetic mutation, something that wouldn’t show up in the fossil record. But more recent expansions in human cooperation to the level of the nation-state and beyond clearly can’t be genetic; they were much too fast for that. They must be a form of cultural evolution: The replicators being spread are ideas and norms—memes—rather than genes.

So perhaps the very early shift toward tribal cooperation was also a cultural one. Perhaps it began not as a genetic mutation but as an idea—perhaps a metaphor of “universal brotherhood” as we often still hear today. The tribes that believed this idea prospered; the tribes that didn’t were outcompeted or even directly destroyed.

This would explain why it had to be an intelligent species. We needed brains big enough to comprehend metaphors and generalize concepts. We needed enough social cognition to keep track of who was in the in-group and who was in the out-group.

If it was indeed a cultural shift, this should encourage us. (And since the most recent changes definitely were cultural, that is already quite encouraging.) We are not limited by our DNA to only care about a small group of close kin; we are capable of expanding our scale of unity and cooperation far beyond.

The real question is whether we can expand it to everyone. Unfortunately, there is some reason to think that this may not be possible. If our concept of tribal identity inherently requires both an in-group and an out-group, then we may never be able to include everyone. If we are only unified against an enemy, never simply for our own prosperity, world peace may forever remain a dream.

But I do have a work-around that I think is worth considering. Can we expand our concept of the out-group to include abstract concepts? With phrases like “The War on Poverty” and “The War on Terror”, it would seem in fact that we can. It feels awkward; it is somewhat imprecise—but then, so was the original metaphor of “universal brotherhood”. Our brains are flexible enough that they don’t actually seem to need the enemy to be a person; it can also be an idea. If this is right, then we can actually include everyone in our in-group, as long as we define the right abstract out-group. We can choose enemies like poverty, violence, cruelty, and despair instead of other nations or ethnic groups. If we must continue to fight a battle, let it be a battle against the pitiless indifference of the universe, rather than our fellow human beings.

Of course, the real challenge will be getting people to change their existing tribal identities. In the moment, these identities seem fundamentally intractable. But that can’t really be the case—for these identities have changed over historical time. Once-important categories have disappeared; new ones have arisen in their place. Someone in 4th-century Constantinople would find the conflict between Democrats and Republicans as baffling as we would find the conflict between Trinitarians and Arians. The ongoing oppression of Native American people by White people would be unfathomable to an 11th-century Onondaga, who could scarcely imagine an enemy more foreign than the Seneca just to their west. Even the conflict between Russia and NATO would probably seem strange to someone living in France in 1943, for whom Germany was the enemy and Russia was at least the enemy of the enemy—and many of those people are still alive.

I don’t know exactly how these tribal identities change (I’m working on it). It clearly isn’t as simple as convincing people with rational arguments. In fact, part of how it seems to work is that someone will shift their identity slowly enough that they can’t perceive the shift themselves. People rarely seem to appreciate, much less admit, how much their own minds have changed over time. So don’t ever expect to change someone’s identity in one sitting. Don’t even expect to do it in one year. But never forget that identities do change, even within an individual’s lifetime.

What is progress? How far have we really come?

JDN 2457534

It is a controversy that has lasted throughout the ages: Is the world getting better? Is it getting worse? Or is it more or less staying the same, changing in ways that don’t really constitute improvements or detriments?

The most obvious and indisputable change in human society over the course of history has been the advancement of technology. At one extreme there are techno-utopians, who believe that technology will solve all the world’s problems and bring about a glorious future; at the other extreme are anarcho-primitivists, who maintain that civilization, technology, and industrialization were all grave mistakes, removing us from our natural state of peace and harmony.

I am not a techno-utopian—I do not believe that technology will solve all our problems—but I am much closer to that end of the scale. Technology has solved a lot of our problems, and will continue to solve a lot more. My aim in this post is to convince you that progress is real, that things really are, on the whole, getting better.

One of the more baffling arguments against progress comes from none other than Jared Diamond, the social scientist most famous for Guns, Germs and Steel (which oddly enough is mainly about horses and goats). About seven months before I was born, Diamond wrote an essay for Discover magazine arguing quite literally that agriculture—and by extension, civilization—was a mistake.

Diamond fortunately avoids the usual argument based solely on modern hunter-gatherers, which is a selection bias if ever I heard one. Instead his main argument seems to be that paleontological evidence shows an overall decrease in health around the same time as agriculture emerged. But that’s still an endogeneity problem, albeit a subtler one. Maybe agriculture emerged as a response to famine and disease. Or maybe they were both triggered by rising populations; higher populations increase disease risk, and are also basically impossible to sustain without agriculture.

I am similarly dubious of the claim that hunter-gatherers are always peaceful and egalitarian. It does seem to be the case that herders are more violent than other cultures, as they tend to form honor cultures that punish all slights with overwhelming violence. Even after the Industrial Revolution there were herder honor cultures—the Wild West. Yet as Steven Pinker keeps trying to tell people, the death rates due to homicide in all human cultures appear to have steadily declined for thousands of years.

I read an article just a few days ago on the Scientific American blog which included the following claim, so astonishingly nonsensical that it makes me wonder whether the authors can even do arithmetic or read statistical tables correctly:

I keep reminding readers (see Further Reading), the evidence is overwhelming that war is a relatively recent cultural invention. War emerged toward the end of the Paleolithic era, and then only sporadically. A new study by Japanese researchers published in the Royal Society journal Biology Letters corroborates this view.

Six Japanese scholars led by Hisashi Nakao examined the remains of 2,582 hunter-gatherers who lived 12,000 to 2,800 years ago, during Japan’s so-called Jomon Period. The researchers found bashed-in skulls and other marks consistent with violent death on 23 skeletons, for a mortality rate of 0.89 percent.

That is supposed to be evidence that ancient hunter-gatherers were peaceful? The global homicide rate today is 62 homicides per million people per year. Using the worldwide life expectancy of 71 years (which is biasing against modern civilization because our life expectancy is longer), that means that the worldwide lifetime homicide rate is 4,400 homicides per million people, or 0.44%—that’s less than half the homicide rate of these “peaceful” hunter-gatherers. If you compare just against First World countries, the difference is even starker; let’s use the US, which has the highest homicide rate in the First World. Our homicide rate is 38 homicides per million people per year, which at our life expectancy of 79 years is 3,000 homicides per million people, or an overall homicide rate of 0.3%, slightly more than a third of this “peaceful” ancient culture. The most peaceful societies today—notably Japan, where these remains were found—have homicide rates as low as 3 per million people per year, which is a lifetime homicide rate of 0.02%, forty times smaller than their supposedly utopian ancestors. (Yes, all of Japan has fewer total homicides than Chicago. I’m sure it has nothing to do with their extremely strict gun control laws.) Indeed, to get a modern homicide rate as high as these hunter-gatherers, you need to go to a country like Congo, Myanmar, or the Central African Republic. To get a substantially higher homicide rate, you essentially have to be in Latin America. Honduras, the murder capital of the world, has a lifetime homicide rate of about 6.7%.

Again, how did I figure these things out? By reading basic information from publicly-available statistical tables and then doing some simple arithmetic. Apparently these paleoanthropologists couldn’t be bothered to do that, or didn’t know how to do it correctly, before they started proclaiming that human nature is peaceful and civilization is the source of violence. After an oversight as egregious as that, it feels almost petty to note that a sample size of a few thousand people from one particular region and culture isn’t sufficient data to draw such sweeping judgments or speak of “overwhelming” evidence.
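
In case you want to check my work, here is that arithmetic as a short script (a minimal sketch; the annual homicide rates and life expectancies are the publicly available figures cited above):

```python
# Lifetime homicide rate ~= annual rate (per million) * life expectancy.
populations = {
    "World today": (62, 71),  # homicides per million per year, life expectancy
    "US today":    (38, 79),
    "Japan today": (3, 71),
}
jomon = 23 / 2582  # 23 violent deaths among 2,582 skeletons: ~0.89%

for name, (annual, life_exp) in populations.items():
    lifetime = annual * life_exp / 1_000_000
    print(f"{name}: {lifetime:.2%} lifetime (Jomon hunter-gatherers: {jomon:.2%})")
```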

Of course, in order to decide whether progress is a real phenomenon, we need a clearer idea of what we mean by progress. It would be presumptuous to use per-capita GDP, though there can be absolutely no doubt that technology and capitalism do in fact raise per-capita GDP. If we measure by inequality, modern society clearly fares much worse (our top 1% share and Gini coefficient may be higher than Classical Rome!), but that is clearly biased in the opposite direction, because the main way we have raised inequality is by raising the ceiling, not lowering the floor. Most of our really good measures (like the Human Development Index) only exist for the last few decades and can barely even be extrapolated back through the 20th century.

How about babies not dying? This is my preferred measure of a society’s value. It seems like something that should be totally uncontroversial: Babies dying is bad. All other things equal, a society is better if fewer babies die.

I suppose it doesn’t immediately follow that all things considered a society is better if fewer babies die; maybe the dying babies could be offset by some greater good. Perhaps a totalitarian society where no babies die is in fact worse than a free society in which a few babies die, or perhaps we should be prepared to accept some small amount of babies dying in order to save adults from poverty, or something like that. But without some really powerful overriding reason, babies not dying probably means your society is doing something right. (And since most ancient societies were in a state of universal poverty and quite frequently tyranny, these exceptions would only strengthen my case.)

Well, get ready for some high-yield truth bombs about infant mortality rates.

It’s hard to get good data for prehistoric cultures, but the best data we have says that infant mortality in ancient hunter-gatherer cultures was about 20-50%, with a best estimate around 30%. This is statistically indistinguishable from early agricultural societies.

Indeed, 30% seems to be the figure humanity had for most of history. Just shy of a third of all babies died for most of history.

In Medieval times, infant mortality was about 30%.

This same rate (fluctuating based on various plagues) persisted into the Enlightenment—Sweden has the best records, and their infant mortality rate in 1750 was about 30%.

The decline in infant mortality began slowly: During the Industrial Era, infant mortality was about 15% in isolated villages, but still as high as 40% in major cities due to high population densities with poor sanitation.

Even as recently as 1900, there were US cities with infant mortality rates as high as 30%, though the overall rate was more like 10%.

Most of the decline was recent and rapid: Just within the US since WW2, infant mortality fell from about 5.5% to 0.7%, though there remains a substantial disparity between White and Black people.

Globally, the infant mortality rate fell from 6.3% to 3.2% within my lifetime, and in Africa today, the region where it is worst, it is about 5.5%—or what it was in the US in the 1940s.
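
Put those global numbers in absolute terms and the scale becomes clear. A quick sketch, assuming roughly 130 million births per year worldwide (a ballpark outside figure, not from the sources above):

```python
# Babies saved per year by the decline from 6.3% to 3.2% infant mortality,
# assuming ~130 million births per year worldwide (my ballpark, not a cited figure).
BIRTHS_PER_YEAR = 130_000_000

saved_per_year = (0.063 - 0.032) * BIRTHS_PER_YEAR
print(f"~{saved_per_year / 1e6:.1f} million babies per year")  # ~4.0 million
```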

These extremely high rates of babies dying are the main reason ancient societies have such low life expectancies; once people reached adulthood, they lived to be about 70 years old, not much less than today. So my multiplying everything by 71 actually isn’t too far off even for ancient societies.

Let me make a graph for you here, of the approximate rate of babies dying over time from 10,000 BC to today:

[Figure: Infant mortality rate, 10,000 BC to present]

Let’s zoom in on the last 250 years, where the data is much more solid:

[Figure: Infant mortality rate, last 250 years]

I think you may notice something in these graphs. There is quite literally a turning point for humanity, a kink in the curve where we suddenly begin a rapid decline from an otherwise constant mortality rate.

That point occurs around or shortly before 1800—that is, it occurs at industrial capitalism. Adam Smith (not to mention Thomas Jefferson) was writing at just about the point in time when humanity made a sudden and unprecedented shift toward saving the lives of millions of babies.

So now, think about that the next time you are tempted to say that capitalism is an evil system that destroys the world; the evidence points to capitalism quite literally saving babies from dying.

How would it do so? Well, there’s that rising per-capita GDP we previously ignored, for one thing. But more important seems to be the way that industrialization and free markets support technological innovation, and in this case especially medical innovation—antibiotics and vaccines. Our higher rates of literacy and better communication, also a result of raised standard of living and improved technology, surely didn’t hurt. I’m not often in agreement with the Cato Institute, but they’re right about this one: Industrial capitalism is the chief source of human progress.

Billions of babies would have died but we saved them. So yes, I’m going to call that progress. Civilization, and in particular industrialization and free markets, have dramatically improved human life over the last few hundred years.

In a future post I’ll address one of the common retorts to this basically indisputable fact: “You’re making excuses for colonialism and imperialism!” No, I’m not. Saying that modern capitalism is a better system (not least because it saves babies) is not at all the same thing as saying that our ancestors were justified in using murder, slavery, and tyranny to force people into it.

How to change the world

JDN 2457166 EDT 17:53.

I just got back from watching Tomorrowland, which is oddly appropriate since I had already planned this topic in advance. How do we, as they say in the film, “fix the world”?

I can’t find it at the moment, but I vaguely remember some radio segment on which a couple of neoclassical economists were interviewed and asked what sort of career can change the world, and they answered something like, “Go into finance, make a lot of money, and then donate it to charity.”

In a slightly more nuanced form this strategy is called earning to give, and frankly I think it’s pretty awful. Most of the damage that is done to the world is done in the name of maximizing profits, and basically what you end up doing is stealing people’s money and then claiming you are a great altruist for giving some of it back. I guess if you can make enormous amounts of money doing something that isn’t inherently bad and then donate that—like what Bill Gates did—it seems better. But realistically your potential income is probably not actually raised that much by working in finance, sales, or oil production; you could have made the same income as a college professor or a software engineer without actively stripping the world of its prosperity. If we actually had the sort of ideal policies that would internalize all externalities, this dilemma wouldn’t arise; but we’re nowhere near that, and if we did have that system, the only billionaires would be Nobel laureate scientists. Albert Einstein was a million times more productive than the average person. Steve Jobs was just a million times luckier. Even then, there is the very serious question of whether it makes sense to give all the fruits of genius to the geniuses themselves, who very quickly find they have all they need while others starve. It was certainly Jonas Salk’s view that his work should only profit him modestly and its benefits should be shared with as many people as possible. So really, in an ideal world there might be no billionaires at all.

Here I would like to present an alternative. If you are an intelligent, hard-working person with a lot of talent and the dream of changing the world, what should you be doing with your time? I’ve given this a great deal of thought in planning my own life, and here are the criteria I came up with:

  1. You must be willing and able to commit to doing it despite great obstacles. This is another reason why earning to give doesn’t actually make sense; your heart (or rather, limbic system) won’t be in it. You’ll be miserable, you’ll become discouraged and demoralized by obstacles, and others will surpass you. In principle Wall Street quantitative analysts who make $10 million a year could donate 90% to UNICEF, but they don’t, and you know why? Because the kind of person who is willing and able to exploit and backstab their way to that position is the kind of person who doesn’t give money to UNICEF.
  2. There must be important tasks to be achieved in that discipline. This one is relatively easy to satisfy; I’ll give you a list in a moment of things that could be contributed by a wide variety of fields. Still, it does place some limitations: For one, it rules out the simplest form of earning to give (a more nuanced form might cause you to choose quantum physics over social work because it pays better and is just as productive—but you’re not simply maximizing income to donate). For another, it rules out routine, ordinary jobs that the world needs but don’t make significant breakthroughs. The world needs truck drivers (until robot trucks take off), but there will never be a great world-changing truck driver, because even the world’s greatest truck driver can only carry so much stuff so fast. There are no world-famous secretaries or plumbers. People like to say that these sorts of jobs “change the world in their own way”, which is a nice sentiment, but ultimately it just doesn’t get things done. We didn’t lift ourselves into the Industrial Age by people being really fantastic blacksmiths; we did it by inventing machines that make blacksmiths obsolete. We didn’t rise to the Information Age by people being really good slide-rule calculators; we did it by inventing computers that work a million times as fast as any slide-rule. Maybe not everyone can have this kind of grand world-changing impact; and I certainly agree that you shouldn’t have to in order to live a good life in peace and happiness. But if that’s what you’re hoping to do with your life, there are certain professions that give you a chance of doing so—and certain professions that don’t.
  3. The important tasks must be currently underinvested. There are a lot of very big problems that many people are already working on. If you work on the problems that are trendy, the ones everyone is talking about, your marginal contribution may be very small. On the other hand, you can’t just pick problems at random; many problems are not invested in precisely because they aren’t that important. You need to find problems people aren’t working on but should be—problems that should be the focus of our attention but for one reason or another get ignored. A good example here is to work on pancreatic cancer instead of breast cancer; breast cancer research is drowning in money and really doesn’t need any more; pancreatic cancer kills 2/3 as many people but receives less than 1/6 as much funding. If you want to do cancer research, you should probably be doing pancreatic cancer (see the arithmetic sketch just after this list).
  4. You must have something about you that gives you a comparative—and preferably, absolute—advantage in that field. This is the hardest one to achieve, and it is in fact the reason why most people can’t make world-changing breakthroughs. It is in fact so hard to achieve that it’s difficult to even say you have until you’ve already done something world-changing. You must have something special about you that lets you achieve what others have failed. You must be one of the best in the world. Even as you stand on the shoulders of giants, you must see further—for millions of others stand on those same shoulders and see nothing. If you believe that you have what it takes, you will be called arrogant and naïve; and in many cases you will be. But in a few cases—maybe 1 in 100, maybe even 1 in 1000, you’ll actually be right. Not everyone who believes they can change the world does so, but everyone who changes the world believed they could.
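
As promised in item 3, here is the arithmetic behind the pancreatic cancer example (a minimal sketch, treating the quoted ratios as exact):

```python
# Funding per death implied by the ratios quoted in item 3.
deaths_ratio = 2 / 3    # pancreatic cancer deaths relative to breast cancer
funding_ratio = 1 / 6   # pancreatic funding relative to breast cancer (upper bound)

relative_funding_per_death = funding_ratio / deaths_ratio
print(f"{relative_funding_per_death:.0%}")  # 25%: at most a quarter the funding per death
```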

Now, what sort of careers might satisfy all these requirements?

Well, basically any kind of scientific research:

Mathematicians could work on network theory, or nonlinear dynamics (the first step: separating “nonlinear dynamics” into the dozen or so subfields it should actually comprise—as has been remarked, “nonlinear” is a bit like “non-elephant”), or data processing algorithms for our ever-growing morasses of unprocessed computer data.

Physicists could be working on fusion power, or ways to neutralize radioactive waste, or fundamental physics that could one day unlock technologies as exotic as teleportation and faster-than-light travel. They could work on quantum encryption and quantum computing. Or if those are still too applied for your taste, you could work in cosmology and seek to answer some of the deepest, most fundamental questions in human existence.

Chemists could be working on stronger or cheaper materials for infrastructure—the extreme example being space elevators—or technologies to clean up landfills and oceanic pollution. They could work on improved batteries for solar and wind power, or nanotechnology to revolutionize manufacturing.

Biologists could work on any number of diseases, from cancer and diabetes to malaria and antibiotic-resistant tuberculosis. They could work on stem-cell research and regenerative medicine, or genetic engineering and body enhancement, or on gerontology and age reversal. Biology is a field with so many important unsolved problems that if you have the stomach for it and the interest in some biological problem, you can’t really go wrong.

Electrical engineers can obviously work on improving the power and performance of computer systems, though I think over the last 20 years or so the marginal benefits of that kind of research have begun to wane. Efforts might be better spent in cybernetics, control systems, or network theory, where considerably more is left uncharted; or in artificial intelligence, where computing power is only the first step.

Mechanical engineers could work on making vehicles safer and cheaper, or building reusable spacecraft, or designing self-constructing or self-repairing infrastructure. They could work on 3D printing and just-in-time manufacturing, scaling it up for whole factories and down for home appliances.

Aerospace engineers could link the world with hypersonic travel, build satellites to provide Internet service to the farthest reaches of the globe, or create interplanetary rockets to colonize Mars and the moons of Jupiter and Saturn. They could mine asteroids and make previously rare metals ubiquitous. They could build aerial drones for delivery of goods and revolutionize logistics.

Agronomists could work on sustainable farming methods (hint: stop farming meat), invent new strains of crops that are hardier against pests, more nutritious, or higher-yielding; on the other hand a lot of this is already being done, so maybe it’s time to think outside the box and consider what we might do to make our food system more robust against climate change or other catastrophes.

Ecologists will obviously be working on predicting and mitigating the effects of global climate change, but there are a wide variety of ways of doing so. You could focus on ocean acidification, or on desertification, or on fishery depletion, or on carbon emissions. You could work on getting the climate models so precise that they become completely undeniable to anyone but the most dogmatically opposed. You could focus on endangered species and habitat disruption. Ecology is in general so underfunded and undersupported that basically anything you could do in ecology would be beneficial.

Neuroscientists have plenty of things to do as well: Understanding vision, memory, motor control, facial recognition, emotion, decision-making and so on. But one topic in particular is lacking in researchers, and that is the fundamental Hard Problem of consciousness. This one is going to be an uphill battle, and will require a special level of tenacity and perseverance. The problem is so poorly understood it’s difficult to even state clearly, let alone solve. But if you could do it—if you could even make a significant step toward it—it could literally be the greatest achievement in the history of humanity. It is one of the fundamental questions of our existence, the very thing that separates us from inanimate matter, the very thing that makes questions possible in the first place. Understand consciousness and you understand the very thing that makes us human. That achievement is so enormous that it seems almost petty to point out that the revolutionary effects of artificial intelligence would also fall into your lap.

The arts and humanities also have a great deal to contribute, and are woefully underappreciated.

Artists, authors, and musicians all have the potential to make us rethink our place in the world, reconsider and reimagine what we believe and strive for. If physics and engineering can make us better at winning wars, art and literature can remind us why we should never fight them in the first place. The greatest works of art can remind us of our shared humanity, link us all together in a grander civilization that transcends the petty boundaries of culture, geography, or religion. Art can also be timeless in a way nothing else can; most of Aristotle’s science is long-since refuted, but even the Great Pyramid thousands of years before him continues to awe us. (Aristotle is about equidistant chronologically between us and the Great Pyramid.)

Philosophers may not seem like they have much to add—and to be fair, a great deal of what goes on today in metaethics and epistemology doesn’t add much to civilization—but in fact it was Enlightenment philosophy that brought us democracy, the scientific method, and market economics. Today there are still major unsolved problems in ethics—particularly bioethics—that are in need of philosophical research. Technologies like nanotechnology and genetic engineering offer us the promise of enormous benefits, but also the risk of enormous harms; we need philosophers to help us decide how to use these technologies to make our lives better instead of worse. We need to know where to draw the lines between life and death, between justice and cruelty. Literally nothing could be more important than knowing right from wrong.

Now that I have sung the praises of the natural sciences and the humanities, let me now explain why I am a social scientist, and why you probably should be as well.

Psychologists and cognitive scientists obviously have a great deal to give us in the study of mental illness, but they may actually have more to contribute in the study of mental health—in understanding not just what makes us depressed or schizophrenic, but what makes us happy or intelligent. The 21st century may not simply see the end of mental illness, but the rise of a new level of mental prosperity, where being happy, focused, and motivated are matters of course. The revolution that biology has brought to our lives may pale in comparison to the revolution that psychology will bring. On the more social side of things, psychology may allow us to understand nationalism, sectarianism, and the tribal instinct in general, and allow us to finally learn to undermine fanaticism, encourage critical thought, and make people more rational. The benefits of this are almost impossible to overstate: It is our own limited, broken, 90%-or-so heuristic rationality that has brought us from simians to Shakespeare, from gorillas to Gödel. To raise that figure to 95% or 99% or 99.9% could be as revolutionary as was whatever evolutionary change first brought us onto the savannah as Australopithecus africanus.

Sociologists and anthropologists will also have a great deal to contribute to this process, as they approach the tribal instinct from the top down. They may be able to tell us how nations are formed and undermined, why some cultures assimilate and others collide. They can work to understand and combat bigotry in all its forms—racism, sexism, ethnocentrism. These could be the fields that finally end war, by understanding and correcting the imbalances in human societies that give rise to violent conflict.

Political scientists and public policy researchers can allow us to understand and restructure governments, undermining corruption, reducing inequality, making voting systems more expressive and more transparent. They can search for the keystones of different political systems, finding the weaknesses in democracy to shore up and the weaknesses in autocracy to exploit. They can work toward a true international government, representative of all the world’s people and with the authority and capability to enforce global peace. If the sociologists don’t end war and genocide, perhaps the political scientists can—or more likely they can do it together.

And then, at last, we come to economists. While I certainly work with a lot of ideas from psychology, sociology, and political science, I primarily consider myself an economist. Why is that? Why do I think the most important problems for me—and perhaps everyone—to be working on are fundamentally economic?

Because, above all, economics is broken. The other social sciences are basically on the right track; their theories are still very limited, their models are not very precise, and there are decades of work left to be done, but the core principles upon which they operate are correct. Economics is the field to work in because of criterion 3: Almost all the important problems in economics are underinvested.

Macroeconomics is where we are doing relatively well, and yet the Keynesian models that allowed us to reduce the damage of the Second Depression nonetheless had no power to predict its arrival. While inflation has been at least somewhat tamed, the far worse problem of unemployment has not been resolved or even really understood.

When we get to microeconomics, the neoclassical models are totally defective. Their core assumptions of total rationality and total selfishness are embarrassingly wrong. We have no idea what controls asset prices, decides credit constraints, or motivates investment decisions. Our models of how people respond to risk are all wrong. We have no formal account of altruism or its limitations. As manufacturing is increasingly automated and work shifts into services, most economic models make no distinction between the two sectors. While finance takes over more and more of our society’s wealth, most formal models of the economy don’t even include a financial sector.

Economic forecasting is no better than chance. The most widely-used asset-pricing model, CAPM, fails completely in empirical tests; its defenders concede this and then have the audacity to declare that it doesn’t matter because the mathematics works. The Black-Scholes derivative-pricing model that caused the Second Depression could easily have been predicted to do so, because it contains a term that assumes normal distributions when we know for a fact that financial markets are fat-tailed; simply put, it claims certain events will never happen that actually occur several times a year.
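
To see just how badly the normality assumption fails, here is a minimal sketch (not the actual Black-Scholes machinery; it simply compares tail probabilities under a normal distribution against a fat-tailed Student-t, a common stand-in for real return distributions):

```python
# Tail risk under a normal vs. a fat-tailed (Student-t, df=3) distribution.
# Both are scaled to unit variance so that "k-sigma" means the same thing.
from math import sqrt

from scipy import stats

TRADING_DAYS = 252
DF = 3
t_std = sqrt(DF / (DF - 2))  # standard deviation of an unscaled Student-t, df=3

for k in (4, 5, 6):
    p_normal = 2 * stats.norm.sf(k)            # two-sided tail probability
    p_fat = 2 * stats.t.sf(k * t_std, df=DF)   # standardized Student-t tail
    print(f"{k}-sigma day: normal ~once per {1 / (p_normal * TRADING_DAYS):,.0f} years; "
          f"fat-tailed ~once per {1 / (p_fat * TRADING_DAYS):,.2f} years")
```

On the normal assumption a 6-sigma day should come along once in a couple million years; with even modestly fat tails it comes along every couple of years, which is much closer to what markets actually deliver.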

Worst of all, economics is the field that people listen to. When a psychologist or sociologist says something on television, people say that it sounds interesting and basically ignore it. When an economist says something on television, national policies are shifted accordingly. Austerity exists as national policy in part due to a spreadsheet error by two famous economists.

Keynes already knew this in 1936: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Meanwhile, the problems that economics deals with have a direct influence on the lives of millions of people. Bad economics gives us recessions and depressions; it cripples our industries and siphons off wealth to an increasingly corrupt elite. Bad economics literally starves people: It is because of bad economics that there is still such a thing as world hunger. We have enough food, we have the technology to distribute it—but we don’t have the economic policy to lift people out of poverty so that they can afford to buy it. Bad economics is why we don’t have the funding to cure diabetes or colonize Mars (but we have the funding for oil fracking and aircraft carriers, don’t we?). All of that other scientific research that needs to be done probably could be done, if the resources of our society were properly distributed and utilized.

This combination of overwhelming influence, overwhelming importance, and overwhelming error makes economics the low-hanging fruit; you don’t even have to be particularly brilliant to have better ideas than most economists (though no doubt it helps if you are). Economics is where we have a whole bunch of important questions that are unanswered—or the answers we have are wrong. (As Will Rogers said, “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)

Thus, rather than tell you to go into finance and earn to give, those economists could simply have said: “You should become an economist. You could hardly do worse than we have.”

Yes, but what about the next 5000 years?

JDN 2456991 PST 1:34.

This week’s post will be a bit different: I have a book to review. It’s called Debt: The First 5000 Years, by David Graeber. The book is long (about 400 pages plus endnotes), but such a compelling read that the hours melt away. “The First 5000 Years” is an incredibly ambitious subtitle, but Graeber actually manages to live up to it quite well; he really does tell us a story that is more or less continuous from 3000 BC to the present.

So who is this David Graeber fellow, anyway? None will be surprised that he is a founding member of Occupy Wall Street—he was in fact the man who coined “We are the 99%”. (As I’ve studied inequality more, I’ve learned he made a mistake; it really should be “We are the 99.99%”.) I had expected him to be a historian, or an economist; but in fact he is an anthropologist. He is looking at debt and its surrounding institutions in terms of a cultural ethnography—he takes a step outside our own cultural assumptions and tries to see them as he might if he were encountering them in a foreign society. This is what gives the book its freshest parts; Graeber recognizes, as few others seem willing to, that our institutions are not the inevitable product of impersonal deterministic forces, but decisions made by human beings.

(On a related note, I was pleasantly surprised to see in one of my economics textbooks yesterday a neoclassical economist acknowledging that the best explanation we have for why Botswana is doing so well—low corruption, low poverty by African standards, high growth—really has to come down to good leadership and good policy. For once they couldn’t remove all human agency and mark it down to grand impersonal ‘market forces’. It’s odd how strong the pressure is to do that, though; I even feel it in myself: Saying that civil rights progressed so much because Martin Luther King was a great leader isn’t very scientific, is it? Well, if that’s what the evidence points to… why not? At what point did ‘scientific’ come to mean ‘human beings are helplessly at the mercy of grand impersonal forces’? Honestly, doesn’t the link between science and technology make matters quite the opposite?)

Graeber provides a new perspective on many things we take for granted: in the introduction there is one particularly compelling passage where he starts talking—with a fellow left-wing activist—about the damage that has been done to the Third World by IMF policy, and she immediately interjects: “But surely one has to pay one’s debts.” The rest of the book is essentially an elaboration on why we say that—and why it is absolutely untrue.

Graeber has also made me think quite a bit differently about Medieval society and in particular Medieval Islam; this was certainly the society in which the writings of Plato and Aristotle were preserved and algebra was invented, so it couldn’t have been all bad. But in fact, assuming that Graeber’s account is accurate, Muslim societies in the 14th century actually had something approaching the idyllic fair and free market to which all neoclassicists aspire. They did so, however, by rejecting one of the core assumptions of neoclassical economics, and you can probably guess which one: the assumption that human beings are infinite identical psychopaths. Instead, merchants in Medieval Muslim society were held to high moral standards, and their livelihood was largely based upon the reputation they could maintain as upstanding good citizens. Theoretically they couldn’t even lend at interest, though in practice they had workarounds (like payment in installments that total slightly higher than the original price) that amounted to low rates of interest. They did not, however, have anything approaching the levels of interest that we have today in credit cards at 29% or (it still makes me shudder every time I think about it) payday loans at 400%. Paying on installments to a Muslim merchant would make you end up paying about a 2% to 4% rate of interest—which sounds to me like almost exactly what it should be, maybe even a bit low because we’re not taking inflation into account. In any case, the moral standards of society kept people from getting too poor or too greedy, and as a result there was little need for enforcement by the state. In spite of myself I have to admit that may not have been possible without the theological enforcement provided by Islam.
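
To check that the installment workaround really does come out in that range, here is a minimal sketch of the arithmetic (the 2% markup and 12-month term are hypothetical numbers of mine, not figures from Graeber):

```python
# Implied annual interest rate when a price is paid in equal monthly
# installments totaling slightly more than the cash price.
def implied_annual_rate(price, markup, months=12):
    payment = price * (1 + markup) / months
    lo, hi = 0.0, 1.0  # bisect on the monthly discount rate
    for _ in range(100):
        r = (lo + hi) / 2
        pv = sum(payment / (1 + r) ** k for k in range(1, months + 1))
        if pv > price:
            lo = r  # present value too high: the implied rate must be higher
        else:
            hi = r
    return (1 + r) ** months - 1  # annualized

print(f"{implied_annual_rate(1000, 0.02):.1%}")  # ~3.7%
```

A 2% markup paid off over a year works out to about 3.7% annualized, right in the band quoted above.
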
Graeber also avoids one of the most common failings of anthropologists, the cultural relativism that makes them unwilling to criticize any cultural practice as immoral even when it obviously is (though they usually make an exception for modern Western capitalist imperialism). While at times I can see he was tempted to go that way, he generally avoids it; several times he goes out of his way to point out how women were sold into slavery in hunter-gatherer tribes and how that contributed to the institutions of chattel slavery that developed once Western powers invaded.

Anthropologists have another common failing that I don’t think he avoids as well, which is a primitivist bent in which anthropologists speak of ancient societies as idyllic and modern societies as horrific. That’s part of why I said ‘if Graeber’s account is accurate,’ because I’m honestly not sure it is. I’ll need to look more into the history of Medieval Islam to be sure. Graeber spends a great deal of time talking about how our current monetary system is fundamentally based on threats of violence—but I can tell you that I have honestly never been threatened with violence over money in my entire life. Not by the state, not by individuals, not by corporations. I haven’t even been mugged—and that’s the sort of thing the state exists to prevent. (Not that I’ve never been threatened with violence—but so far it’s always been either something personal, or, more often, bigotry against LGBT people.) If violence is the foundation of our monetary system, then it’s hiding itself extraordinarily well. Granted, the violence probably pops up more if you’re near the very bottom, but I think I speak for most of the American middle class when I say that I’ve been through a lot of financial troubles, but none of them have involved any guns pointed at my head. And you can’t counter this by saying that we theoretically have laws on the books that allow you to be arrested for financial insolvency—because that’s always been true, in fact it’s less true now than any other point in history, and Graeber himself freely admits this. The important question is how many people actually get violence enforced upon them, and at least within the United States that number seems to be quite small.

Graeber describes the true story of the emergence of money historically, as the result of military conquest—a way to pay soldiers and buy supplies when in an occupied territory where nobody trusts you. He demolishes the (always fishy) argument that money emerged as a way of mediating a barter system: If I catch fish and he makes shoes and I want some shoes but he doesn’t want fish right now, why not just make a deal to pay later? This is of course exactly what they did. Indeed Graeber uses the intentionally provocative word communism to describe the way that resources are typically distributed within families and small villages—because it basically is “from each according to his ability, to each according to his need”. (I would probably use the less-charged word “community”, but I have to admit that those come from the same Latin root.) He also describes something I’ve tried to explain many times to neoclassical economists to no avail: There is equally a communism of the rich, a solidarity of deal-making and collusion that undermines the competitive market that is supposed to keep the rich in check. Graeber points out that wine, women and feasting have been common parts of deals between villages throughout history—and yet are still common parts of top-level business deals in modern capitalism. Even as we claim to be atomistic rational agents we still fall back on the community norms that guided our ancestors.

Another one of my favorite lines in the book is on this very subject: “Why, if I took a free-market economic theorist out to an expensive dinner, would that economist feel somewhat diminished—uncomfortably in my debt—until he had been able to return the favor? Why, if he were feeling competitive with me, would he be inclined to take me someplace even more expensive?” That doesn’t make any sense at all under the theory of neoclassical rational agents (an infinite identical psychopath would just enjoy the dinner—free dinner!—and might never speak to you again), but it makes perfect sense under the cultural norms of community in which gifts form bonds and generosity is a measure of moral character. I also got thinking about how introducing money directly into such exchanges can change them dramatically: For instance, suppose I took my professor out to a nice dinner with drinks in order to thank him for writing me recommendation letters. This seems entirely appropriate, right? But now suppose I just paid him $30 for writing the letters. All of a sudden it seems downright corrupt. But the dinner check said $30 on it! My bank account debit is the same! He might go out and buy a dinner with it! What’s the difference? I think the difference is that the dinner forms a relationship that ties the two of us together as individuals, while the cash creates a market transaction between two interchangeable economic agents. By giving my professor cash I would effectively be saying that we are infinite identical psychopaths.

While Graeber doesn’t get into it, a similar argument also applies to gift-giving on holidays and birthdays. There seriously is—I kid you not—a neoclassical economist who argues that Christmas is economically inefficient and should be abolished in favor of cash transfers. He wrote a book about it. He literally does not understand the concept of gift-giving as a way of sharing experiences and solidifying relationships. This man must be such a joy to have around! I can imagine it now: “Will you play catch with me, Daddy?” “Daddy has to work, but don’t worry dear, I hired a minor league catcher to play with you. Won’t that be much more efficient?”

This sort of thing is what makes Debt such a compelling read, and Graeber does make some good points and presents a wealth of historical information. So now it’s time to talk about what’s wrong with the book, the things Graeber gets wrong.

First of all, he’s clearly quite ignorant about the state-of-the-art in economics, and I’m not even talking about the sort of cutting-edge cognitive economics experiments I want to be doing. (When I read what Molly Crockett has been working on lately in the neuroscience of moral judgments, I began to wonder if I should apply to University College London after all.)

No, I mean Graeber is ignorant of really basic stuff, like the nature of government debt—almost nothing of what I said in that post is controversial among serious economists; the equations certainly aren’t, though some of the interpretation and application might be. (One particularly likely sticking point called “Ricardian equivalence” is something I hope to get into in a future post. You already know the refrain: Ricardian equivalence only happens if you live in a world of infinite identical psychopaths.) Graeber has internalized the Republican talking points about how this is money our grandchildren will owe to China; it’s nothing of the sort, and most of it we “owe” to ourselves. In a particularly baffling passage Graeber talks about how there are no protections for creditors of the US government, when creditors of the US government have literally never suffered a single late payment in the last 200 years. There are literally no creditors in the world who are more protected from default—and only a few others that reach the same level, such as creditors to the Bank of England.

In an equally-bizarre aside he also says in one endnote that “mainstream economists” favor the use of the gold standard and are suspicious of fiat money; exactly the opposite is the case. Mainstream economists—even the neoclassicists with whom I have my quarrels—are in almost total agreement that a fiat monetary system managed by a central bank is the only way to have a stable money supply. The gold standard is the pet project of a bunch of cranks and quacks like Peter Schiff. Like most quacks, they are quite vocal; but they are by no means supported by academic research or respected by top policymakers. (I suppose the latter could change if enough Tea Party Republicans get into office, but so far even that hasn’t happened and Janet Yellen continues to manage our fiat money supply.) In fact, it’s basically a consensus among economists that the gold standard caused the Great Depression—that in addition to some triggering event (my money is on Minsky-style debt deflation—and so is Krugman’s), the inability of the money supply to adjust was the reason why the world economy remained in such terrible shape for such a long period. The gold standard has not been a mainstream position among economists since roughly the mid-1980s—before I was born.

He makes a really bizarre argument about Korea, Japan, Taiwan, and West Germany: because they are major holders of US Treasury bonds and became so under US occupation—which is indisputably true—their development must really have been some kind of smokescreen to sell more Treasury bonds. First of all, we’ve never had trouble selling Treasury bonds; people are literally accepting negative interest rates in order to have them right now. More importantly, Korea, Japan, Taiwan, and West Germany—those exact four countries, in that order—are the greatest economic success stories in the history of the human race. West Germany was rebuilt literally from rubble to become once again a world power. The Asian Tigers were even more impressive, raised from the most abject Third World poverty to full First World high-tech economy status in a few generations. If this is what happens when you buy Treasury bonds, we should all buy as many Treasury bonds as we possibly can. And while that seems intuitively ridiculous, I have to admit, China’s meteoric rise also came with an enormous investment in Treasury bonds. Maybe the secret to economic development isn’t physical capital or exports or institutions; nope, it’s buying Treasury bonds. (I don’t actually believe this, but the correlation is there, and it totally undermines Graeber’s argument that buying Treasury bonds makes you some kind of debt peon.)

Speaking of correlations, Graeber is absolutely terrible at econometrics; he doesn’t even seem to grasp the most basic concepts. On page 366 he shows this graph of the US defense budget and the US federal debt side by side in order to argue that the military is the primary reason for our national debt. First of all, he doesn’t even correct for inflation—so most of the exponential rise in the two curves is simply the purchasing power of the dollar declining over time. Second, he doesn’t account for GDP growth, which is most of what’s left after you account for inflation. He has two nonstationary time-series with obvious exponential trends and doesn’t even formally correlate them, let alone actually perform the proper econometrics to show that they are cointegrated. I actually think they probably are cointegrated, and that a large portion of national debt is driven by military spending, but Graeber’s graph doesn’t even begin to make that argument. You could just as well graph the number of murders and the number of cheesecakes sold, each on an annual basis; both of them would rise exponentially with population, thus proving that cheesecakes cause murder (or murders cause cheesecakes?).
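
The cheesecakes-and-murders point is easy to demonstrate: generate two completely independent trending series and they will correlate almost perfectly anyway. A minimal simulation (my own illustration, not Graeber’s data):

```python
# Spurious correlation: two independent trending series correlate strongly.
import numpy as np

rng = np.random.default_rng(0)
years = 75

# Two unrelated series, each growing ~3% a year with noise (think nominal
# defense spending and nominal cheesecake sales, neither deflated).
a = 100 * np.cumprod(1 + rng.normal(0.03, 0.02, years))
b = 100 * np.cumprod(1 + rng.normal(0.03, 0.02, years))

print(f"correlation: {np.corrcoef(a, b)[0, 1]:.2f}")  # typically > 0.9
```

That near-perfect correlation is exactly why you deflate, detrend, and test for cointegration before claiming that one trending series drives another.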

And then where Graeber really loses me is when he develops his theory of how modern capitalism and the monetary and debt system that go with it are fundamentally corrupt to the core and must be abolished and replaced with something totally new. First of all, he never tells us what that new thing is supposed to be. You’d think in 400 pages he could at least give us some idea, but no; nothing. He apparently wants us to do “not capitalism”, which is an infinite space of possible systems, some of which might well be better, but none of which can actually be implemented without more specific ideas. Many have declared that Occupy has failed—I am convinced that those who say this appreciate neither how long it takes social movements to make change, nor how effective Occupy has already been at changing our discourse, so that Capital in the Twenty-First Century can be a bestseller and the President of the United States can mention income inequality and economic mobility in his speeches—but insofar as Occupy has failed to achieve its goals, it seems to me that this is because it was never clear just what Occupy’s goals were to begin with. Now that I’ve read Graeber’s work, I understand why: He wanted it that way. He didn’t want to go through the hard work (which is also risky: you could be wrong) of actually specifying what this new economic system would look like; instead he’d prefer to find flaws in the current system and then wait for someone else to figure out how to fix them. That has always been the easy part; any human system comes with flaws. The hard part is actually coming up with a better system—and Graeber doesn’t seem willing to even try.

I don’t know exactly how accurate Graeber’s historical account is, but it seems to check out, and even make sense of some things that were otherwise baffling about the sketchy account of the past I had previously learned. Why were African tribes so willing to sell their people into slavery? Well, because they didn’t think of it as their people—they were selling captives from other tribes taken in war, which is something they had done since time immemorial in the form of slaves for slaves rather than slaves for goods. Indeed, it appears that trade itself emerged originally as what Graeber calls a “human economy”, in which human beings are literally traded as a fungible commodity—but always humans for humans. When money was introduced, people continued selling other people, but now it was for goods—and apparently most of the people sold were young women. So much of the Bible makes more sense that way: Why would Job be all right with getting new kids after losing his old ones? Kids are fungible! Why would people sell their daughters for goats? We always sell women! How quickly do we flirt with the unconscionable, when first we say that all is fungible.

One of Graeber’s central points is that debt came long before money—you owed people apples or hours of labor long before you ever paid anybody in gold. Money only emerged when debt became impossible to enforce, usually because trade was occurring between soldiers and the villages they had just conquered, so nobody was going to trust anyone to pay anyone back. Immediate spot trades were the only way to ensure that trades were fair in the absence of trust or community. In other words, the first use of gold as money was really using it as collateral. All of this makes a good deal of sense, and I’m willing to believe that’s where money originally came from.

But then Graeber tries to use this horrific and violent origin of money—in war, rape, and slavery, literally some of the worst things human beings have ever done to one another—as an argument for why money itself is somehow corrupt and capitalism with it. This is nothing short of a genetic fallacy: I could agree completely that money had this terrible origin, and yet still say that money is a good thing and worth preserving. (Indeed, I’m rather strongly inclined to say exactly that.) The fact that it was born of violence does not mean that it is violence; we too were born of violence, literally millions of years of rape and murder. It is astronomically unlikely that any one of us does not have a murderer somewhere in our ancestry. (Supposedly I’m descended from Julius Caesar, hence my last name Julius—not sure I really believe that—but if so, there you go, a murderer and tyrant.) Are we therefore all irredeemably corrupt? No. Where you come from does not decide what you are or where you are going.

In fact, I could even turn the argument around: Perhaps money was born of violence because it is the only alternative to violence; without money we’d still be trading our daughters away because we had no other way of trading. I don’t think I believe that either; but it should show you how fragile an argument from origin really is.

This is why the whole book gives this strange feeling of non sequitur; all this history is very interesting and enlightening, but what does it have to do with our modern problems? Oh. Nothing, that’s what. The connection you saw doesn’t make any sense, so maybe there’s just no connection at all. Well all right then. This was an interesting little experience.

This is a shame, because I do think there are important things to be said about the nature of money culturally, philosophically, morally—but Graeber never gets around to saying them, seeming to think that merely pointing out money’s violent origins is a sufficient indictment. It’s worth talking about the fact that money is something we made, something we can redistribute or unmake if we choose. I had such high expectations after I read that little interchange about the IMF: Yes! Finally, someone gets it! No, you don’t have to repay debts if that means millions of people will suffer! But then he never really goes back to that. The closest he veers toward an actual policy recommendation is at the very end of the book, a short section entitled “Perhaps the world really does owe you a living” in which he very briefly suggests—doesn’t even argue for, just suggests—that perhaps people do deserve a certain basic standard of living even if they aren’t working. He could have filled 50 pages arguing the ins and outs of a basic income with graphs and charts and citations of experimental data—but no, he just spends a few paragraphs proposing the idea and then ends the book. (I guess I’ll have to write that chapter myself; I think it would go well in The End of Economics, which I hope to get back to writing in a few months—while I also hope to finally publish my already-written book The Mathematics of Tears and Joy.)

If you want to learn about the history of money and debt over the last 5000 years, this is a good book to do so—and that is, after all, what the title said it would be. But if you’re looking for advice on how to improve our current economic system for the benefit of all humanity, you’ll need to look elsewhere.

And so in the grand economic tradition of reducing complex systems into a single numeric utility value, I rate Debt: The First 5000 Years a 3 out of 5.