Externalities

JDN 2457202 EDT 17:52.

The 1992 Bill Clinton campaign had a slogan: “It’s the economy, stupid.” A snowclone I’ve used on occasion is “It’s the externalities, stupid.” (Though I’m actually not all that fond of calling people ‘stupid’; even when occasionally true, it is never polite and rarely useful.) Externalities are one of the most important concepts in economics, and yet one that all too many economists frequently neglect.

Fortunately for this one, I really don’t need much math; the concept isn’t even that complicated, which makes it all the more mysterious how frequently it is ignored. An externality is simply an effect that an action has upon those who were not involved in choosing to perform that action.

All sorts of actions have externalities; indeed, much rarer are actions that don’t. An obvious example is that punching someone in the face has the externality of injuring that person. Pollution is an important externality of many forms of production, because the people harmed by pollution are typically not the same people who were responsible for creating it. Traffic jams are created because every car on the road causes a congestion externality on all the other cars.

All the aforementioned are negative externalities, but there are also positive externalities. When one individual becomes educated, they tend to improve the overall economic viability of the place in which they live. Building infrastructure benefits whole communities. New scientific discoveries enhance the well-being of all humanity.

Externalities are a fundamental problem for the functioning of markets. If there were no externalities—if each person’s actions affected only that person and nobody else—then rational self-interest would be optimal and anything else would make no sense. In arguing that rationality is equivalent to self-interest, generations of economists have been, tacitly or explicitly, assuming that there are no such things as externalities.

This is a necessary assumption to show that self-interest would lead to something I discussed in an earlier post: Pareto-efficiency, in which the only way to make one person better off is to make someone else worse off. As I already talked about in that other post, Pareto-efficiency is wildly overrated; a wide variety of Pareto-efficient systems would be intolerable to actually live in. But in the presence of externalities, markets can’t even guarantee Pareto-efficiency, because it’s possible for everyone acting in their rational self-interest to cause harm to everyone at once.

This is called a tragedy of the commons; the basic idea is really quite simple. Suppose that when I burn a gallon of gasoline, I gain 5 milliQALY by driving my car, but everyone (including me) loses 1 milliQALY from the increased pollution. On net, I gain 4 milliQALY, so if I am rational and self-interested I will do it. But now suppose that there are 10 people all facing the same choice. If we all make that same choice, each of us gains 5 milliQALY from driving—and then loses 10 milliQALY from the pollution, for a net loss of 5 milliQALY. We would all have been better off if none of us had done it, even though it made sense to each of us at the time. Burning a gallon of gasoline to drive my car is beneficial to me, more so than the release of carbon dioxide into the atmosphere is harmful; but as a result of millions of people burning gasoline, the carbon dioxide in the atmosphere is destabilizing our planet’s climate. We’d all be better off if we could find some way to burn less gasoline.
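
For the quantitatively inclined, here is a minimal sketch of that payoff structure, using the same illustrative numbers as above (5 milliQALY of benefit to the driver, 1 milliQALY of pollution cost to each person); nothing here is data, it just lays the arithmetic out.

```python
# Toy payoff arithmetic for the gasoline example above (numbers are illustrative).
BENEFIT_TO_DRIVER = 5   # milliQALY gained by the person who burns the gallon
POLLUTION_COST = 1      # milliQALY lost by *each* person per gallon burned
PEOPLE = 10

def net_payoff(i_burn: bool, others_burning: int) -> float:
    """Net milliQALY for one person, given their choice and how many others burn."""
    gain = BENEFIT_TO_DRIVER if i_burn else 0
    total_gallons = others_burning + (1 if i_burn else 0)
    return gain - POLLUTION_COST * total_gallons

# Individually, burning looks good no matter what the others do...
print(net_payoff(True, 0) - net_payoff(False, 0))                    # +4: others abstain
print(net_payoff(True, PEOPLE - 1) - net_payoff(False, PEOPLE - 1))  # +4: others all burn
# ...but if all ten burn, everyone ends up worse off than if nobody had:
print(net_payoff(True, PEOPLE - 1))   # -5 each when everyone burns
print(net_payoff(False, 0))           #  0 each when nobody burns
```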

In order for rational self-interest to be optimal, externalities have to somehow be removed from the system. Otherwise, there are actions we can take that benefit ourselves but harm other people—and thus, we would all be better off if we acted to some degree altruistically. (When I say things like this, most non-economists think I am saying something trivial and obvious, while most economists insist that I am making an assertion that is radical if not outright absurd.)

But of course a world without externalities is a world of complete isolation; it’s a world where everyone lives on their own deserted island and there is no way of communicating or interacting with any other human being in the world. The only reasonable question about this world is whether we would die first or go completely insane first; clearly those are the two things that would happen. Human beings are fundamentally social animals—I would argue that we are in fact more social even than eusocial animals like ants and bees. (Ants and bees are only altruistic toward their own kin; humans are altruistic to groups of millions of people we’ve never even met.) Humans without social interaction are like flowers without sunlight.

Indeed, externalities are so common that if markets only worked in their absence, markets would make no sense at all. Fortunately this isn’t true; there are some ways that markets can be adjusted to deal with at least some kinds of externalities.

One of the best-known is the Coase theorem, which is odd, because it is by far the worst solution. The Coase theorem basically says that if you can assign and enforce well-defined property rights and there is absolutely no cost in making any transaction, markets will automatically work out all externalities. The basic idea is that if someone is about to perform an action that would harm you, you can instead pay them not to do it. Then the harm to you is prevented, and they receive the payment as compensation.

In the above example, we could all agree to pay $30 (which let’s say is worth 1 milliQALY) to each person who doesn’t burn a gallon of gasoline that would pollute our air. Then, if I were thinking about burning some gasoline, I wouldn’t want to do it: I’d forgo the $270 in payments from the other nine people, which costs me 9 milliQALY, while the benefit of burning the gasoline is only 5 milliQALY. We all reason the same way, so nobody burns gasoline, and the money exchanged all balances out so that we end up where we started. The result is that we are all better off.
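
And here is a toy version of that Coasean side-payment arithmetic, on my reading of the arrangement (each person pays $30 to each of the others who abstains), with the same illustrative numbers as before:

```python
# Toy Coasean side-payment arithmetic (illustrative numbers from the example above).
PEOPLE = 10
PAYMENT_PER_PERSON = 30     # dollars each person pays to each non-burner
DOLLARS_PER_MILLIQALY = 30  # $30 is assumed to be worth 1 milliQALY
BENEFIT_OF_BURNING = 5      # milliQALY from driving
POLLUTION_COST = 1          # milliQALY imposed on each person per gallon burned

# If I burn, I forgo the payments from the other nine people:
forgone_payments = (PEOPLE - 1) * PAYMENT_PER_PERSON / DOLLARS_PER_MILLIQALY  # 9 milliQALY
net_gain_from_burning = BENEFIT_OF_BURNING - POLLUTION_COST - forgone_payments
print(net_gain_from_burning)  # -5: with the side payments in place, burning no longer pays
# If nobody burns, everyone both pays and receives 9 * $30, so the money cancels out.
```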

The first thought you probably have is: How do I pay everyone who doesn’t hurt me? How do I even find all those people? How do I ensure that they follow through and actually don’t hurt me? These are the problems of transaction costs and contract enforcement that are usually presented as the problem with the Coase theorem, and they certainly are very serious problems. You end up needing some sort of government simply to enforce all those contracts, and even then there’s the question of how we can possibly locate everyone who has ever polluted our air or our water.

But in fact there’s an even more fundamental problem: This is extortion. We are almost always in the condition of being able to harm other people, and a system in which the reason people don’t hurt each other is because they’re constantly paying each other not to is a system in which the most intimidating psychopath is the wealthiest person in the world. That system is in fact Pareto-efficient (the psychopath does quite well for himself indeed); but it’s exactly the sort of Pareto-efficient system that isn’t worth pursuing.

Another response to externalities is simply to accept them, which isn’t as awful as it sounds. There are many kinds of externalities that really aren’t that bad, and anything we might do to prevent them is likely to be a cure worse than the disease. Think about the externality of people standing in front of you in line, or the externality of people buying the last cereal box off the shelf before you can get there. The externality of taking the job you applied for may hurt at the time, but in the long run that’s how we maintain a thriving and competitive labor market. In fact, even the externality of ‘gentrifying’ your neighborhood so you can no longer afford it is not nearly as bad as most people seem to think—indeed, the much larger problem seems to be the poor neighborhoods that don’t have rising incomes, remaining poor for generations. (It also makes no sense to call this “gentrifying”; the only landed gentry we have in America is the landowners who claim a ludicrous proportion of our wealth, not the middle-class people who buy cheap homes and move in. If you really want to talk about a gentry, you should be thinking Waltons and Kochs—or Bushes and Clintons.) These sorts of minor externalities that are better left alone are sometimes characterized as pecuniary externalities because they usually are linked to prices, but I think that really misses the point; it’s quite possible for an externality to be entirely price-related and do enormous damage (read: the entire financial system), or to have little or nothing to do with prices and still be not that bad (like standing in line, as I mentioned above).

But obviously we can’t leave all externalities alone in this way. We can’t just let people rob and murder one another arbitrarily, or ignore the destruction of the world’s climate that threatens hundreds of millions of lives. We can’t stand back and let forests burn and rivers run dry when we could easily have saved them.

The much more reasonable and realistic response to externalities is what we call government—there are rules you have to follow in society and punishments you face if you don’t. We can avoid most of the transaction problems involved in figuring out who polluted our water by simply making strict rules about polluting water in general. We can prevent people from stealing each other’s things or murdering each other by having police who investigate and punish such crimes.

This is why regulation—and a government strong enough to enforce that regulation—is necessary for the functioning of a society. This dichotomy we have been sold about “regulations versus the market” is totally nonsensical; the market depends upon regulations. This doesn’t justify any particular regulation—and indeed, an awful lot of regulations are astonishingly bad. But some sort of regulatory system is necessary for a market to function at all, and the question has never been whether we will have regulations but which regulations we will have. People who argue that all regulations must go and the market would somehow work on its own are either deeply ignorant of economics or operating from an ulterior motive; some truly horrendous policies have been made by arguing that “less government is always better” when the truth is nothing of the sort.

In fact, there is one real-world method I can think of that actually comes reasonably close to eliminating all externalities—and it is called social democracy. By involving everyone—democracy—in a system that regulates the economy—socialism—we can, in a sense, involve everyone in every transaction, and thus make it impossible to have externalities. In practice it’s never that simple, of course; but the basic concept of involving our whole society in making the rules that our society will follow is sound—and in fact I can think of no reasonable alternative.

We have to institute some sort of regulatory system, but then we need to decide what the regulations will be and who will control them. If we instead want to vest power in a technocratic elite, how do we decide whom to include in that elite? How do we ensure that the technocrats actually serve the general population if that population has no say in choosing them? By involving as many people as we can in the decision-making process, we make it much less likely that one person’s selfish action will harm many others. Indeed, this is probably why democracy prevents famine and genocide—which are, after all, rather extreme examples of negative externalities.

How to change the world

JDN 2457166 EDT 17:53.

I just got back from watching Tomorrowland, which is oddly appropriate since I had already planned this topic in advance. How do we, as they say in the film, “fix the world”?

I can’t find it at the moment, but I vaguely remember some radio segment on which a couple of neoclassical economists were interviewed and asked what sort of career can change the world, and they answered something like, “Go into finance, make a lot of money, and then donate it to charity.”

In a slightly more nuanced form this strategy is called earning to give, and frankly I think it’s pretty awful. Most of the damage that is done to the world is done in the name of maximizing profits, and basically what you end up doing is stealing people’s money and then claiming you are a great altruist for giving some of it back. I guess if you can make enormous amounts of money doing something that isn’t inherently bad and then donate that—like what Bill Gates did—it seems better. But realistically your potential income is probably not actually raised that much by working in finance, sales, or oil production; you could have made the same income as a college professor or a software engineer without actively stripping the world of its prosperity. If we actually had the sort of ideal policies that would internalize all externalities, this dilemma wouldn’t arise; but we’re nowhere near that, and if we did have that system, the only billionaires would be Nobel laureate scientists. Albert Einstein was a million times more productive than the average person. Steve Jobs was just a million times luckier. Even then, there is the very serious question of whether it makes sense to give all the fruits of genius to the geniuses themselves, who very quickly find they have all they need while others starve. It was certainly Jonas Salk’s view that his work should only profit him modestly and its benefits should be shared with as many people as possible. So really, in an ideal world there might be no billionaires at all.

Here I would like to present an alternative. If you are an intelligent, hard-working person with a lot of talent and the dream of changing the world, what should you be doing with your time? I’ve given this a great deal of thought in planning my own life, and here are the criteria I came up with:

  1. You must be willing and able to commit to doing it despite great obstacles. This is another reason why earning to give doesn’t actually make sense; your heart (or rather, limbic system) won’t be in it. You’ll be miserable, you’ll become discouraged and demoralized by obstacles, and others will surpass you. In principle Wall Street quantitative analysts who make $10 million a year could donate 90% to UNICEF, but they don’t, and you know why? Because the kind of person who is willing and able to exploit and backstab their way to that position is the kind of person who doesn’t give money to UNICEF.
  2. There must be important tasks to be achieved in that discipline. This one is relatively easy to satisfy; I’ll give you a list in a moment of things that could be contributed by a wide variety of fields. Still, it does place some limitations: For one, it rules out the simplest form of earning to give (a more nuanced form might cause you to choose quantum physics over social work because it pays better and is just as productive—but you’re not simply maximizing income to donate). For another, it rules out routine, ordinary jobs that the world needs but don’t make significant breakthroughs. The world needs truck drivers (until robot trucks take off), but there will never be a great world-changing truck driver, because even the world’s greatest truck driver can only carry so much stuff so fast. There are no world-famous secretaries or plumbers. People like to say that these sorts of jobs “change the world in their own way”, which is a nice sentiment, but ultimately it just doesn’t get things done. We didn’t lift ourselves into the Industrial Age by people being really fantastic blacksmiths; we did it by inventing machines that make blacksmiths obsolete. We didn’t rise to the Information Age by people being really good slide-rule calculators; we did it by inventing computers that work a million times as fast as any slide-rule. Maybe not everyone can have this kind of grand world-changing impact; and I certainly agree that you shouldn’t have to in order to live a good life in peace and happiness. But if that’s what you’re hoping to do with your life, there are certain professions that give you a chance of doing so—and certain professions that don’t.
  3. The important tasks must be currently underinvested. There are a lot of very big problems that many people are already working on. If you work on the problems that are trendy, the ones everyone is talking about, your marginal contribution may be very small. On the other hand, you can’t just pick problems at random; many problems are not invested in precisely because they aren’t that important. You need to find problems people aren’t working on but should be—problems that should be the focus of our attention but for one reason or another get ignored. A good example here is to work on pancreatic cancer instead of breast cancer; breast cancer research is drowning in money and really doesn’t need any more; pancreatic cancer kills 2/3 as many people but receives less than 1/6 as much funding. If you want to do cancer research, you should probably be doing pancreatic cancer.
  4. You must have something about you that gives you a comparative—and preferably, absolute—advantage in that field. This is the hardest one to achieve, and it is in fact the reason why most people can’t make world-changing breakthroughs. It is in fact so hard to achieve that it’s difficult to even say you have until you’ve already done something world-changing. You must have something special about you that lets you achieve what others have failed. You must be one of the best in the world. Even as you stand on the shoulders of giants, you must see further—for millions of others stand on those same shoulders and see nothing. If you believe that you have what it takes, you will be called arrogant and naïve; and in many cases you will be. But in a few cases—maybe 1 in 100, maybe even 1 in 1000, you’ll actually be right. Not everyone who believes they can change the world does so, but everyone who changes the world believed they could.

Now, what sort of careers might satisfy all these requirements?

Well, basically any kind of scientific research:

Mathematicians could work on network theory, or nonlinear dynamics (the first step: separating “nonlinear dynamics” into the dozen or so subfields it should actually comprise—as has been remarked, “nonlinear” is a bit like “non-elephant”), or data processing algorithms for our ever-growing morasses of unprocessed computer data.

Physicists could be working on fusion power, or ways to neutralize radioactive waste, or fundamental physics that could one day unlock technologies as exotic as teleportation and faster-than-light travel. They could work on quantum encryption and quantum computing. Or if those are still too applied for your taste, you could work in cosmology and seek to answer some of the deepest, most fundamental questions in human existence.

Chemists could be working on stronger or cheaper materials for infrastructure—the extreme example being space elevators—or technologies to clean up landfills and oceanic pollution. They could work on improved batteries for solar and wind power, or nanotechnology to revolutionize manufacturing.

Biologists could work on any number of diseases, from cancer and diabetes to malaria and antibiotic-resistant tuberculosis. They could work on stem-cell research and regenerative medicine, or genetic engineering and body enhancement, or on gerontology and age reversal. Biology is a field with so many important unsolved problems that if you have the stomach for it and the interest in some biological problem, you can’t really go wrong.

Electrical engineers can obviously work on improving the power and performance of computer systems, though I think over the last 20 years or so the marginal benefits of that kind of research have begun to wane. Efforts might be better spent in cybernetics, control systems, or network theory, where considerably more is left uncharted; or in artificial intelligence, where computing power is only the first step.

Mechanical engineers could work on making vehicles safer and cheaper, or building reusable spacecraft, or designing self-constructing or self-repairing infrastructure. They could work on 3D printing and just-in-time manufacturing, scaling it up for whole factories and down for home appliances.

Aerospace engineers could link the world with hypersonic travel, build satellites to provide Internet service to the farthest reaches of the globe, or create interplanetary rockets to colonize Mars and the moons of Jupiter and Saturn. They could mine asteroids and make previously rare metals ubiquitous. They could build aerial drones for delivery of goods and revolutionize logistics.

Agronomists could work on sustainable farming methods (hint: stop farming meat), invent new strains of crops that are hardier against pests, more nutritious, or higher-yielding; on the other hand a lot of this is already being done, so maybe it’s time to think outside the box and consider what we might do to make our food system more robust against climate change or other catastrophes.

Ecologists will obviously be working on predicting and mitigating the effects of global climate change, but there are a wide variety of ways of doing so. You could focus on ocean acidification, or on desertification, or on fishery depletion, or on carbon emissions. You could work on getting the climate models so precise that they become completely undeniable to anyone but the most dogmatically opposed. You could focus on endangered species and habitat disruption. Ecology is in general so underfunded and undersupported that basically anything you could do in ecology would be beneficial.

Neuroscientists have plenty of things to do as well: Understanding vision, memory, motor control, facial recognition, emotion, decision-making and so on. But one topic in particular is lacking in researchers, and that is the fundamental Hard Problem of consciousness. This one is going to be an uphill battle, and will require a special level of tenacity and perseverance. The problem is so poorly understood it’s difficult to even state clearly, let alone solve. But if you could do it—if you could even make a significant step toward it—it could literally be the greatest achievement in the history of humanity. It is one of the fundamental questions of our existence, the very thing that separates us from inanimate matter, the very thing that makes questions possible in the first place. Understand consciousness and you understand the very thing that makes us human. That achievement is so enormous that it seems almost petty to point out that the revolutionary effects of artificial intelligence would also fall into your lap.

The arts and humanities also have a great deal to contribute, and are woefully underappreciated.

Artists, authors, and musicians all have the potential to make us rethink our place in the world, reconsider and reimagine what we believe and strive for. If physics and engineering can make us better at winning wars, art and literature can remind us why we should never fight them in the first place. The greatest works of art can remind us of our shared humanity, link us all together in a grander civilization that transcends the petty boundaries of culture, geography, or religion. Art can also be timeless in a way nothing else can; most of Aristotle’s science is long-since refuted, but even the Great Pyramid thousands of years before him continues to awe us. (Aristotle is about equidistant chronologically between us and the Great Pyramid.)

Philosophers may not seem like they have much to add—and to be fair, a great deal of what goes on today in metaethics and epistemology doesn’t add much to civilization—but in fact it was Enlightenment philosophy that brought us democracy, the scientific method, and market economics. Today there are still major unsolved problems in ethics—particularly bioethics—that are in need of philosophical research. Technologies like nanotechnology and genetic engineering offer us the promise of enormous benefits, but also the risk of enormous harms; we need philosophers to help us decide how to use these technologies to make our lives better instead of worse. We need to know where to draw the lines between life and death, between justice and cruelty. Literally nothing could be more important than knowing right from wrong.

Now that I have sung the praises of the natural sciences and the humanities, let me now explain why I am a social scientist, and why you probably should be as well.

Psychologists and cognitive scientists obviously have a great deal to give us in the study of mental illness, but they may actually have more to contribute in the study of mental health—in understanding not just what makes us depressed or schizophrenic, but what makes us happy or intelligent. The 21st century may see not simply the end of mental illness, but the rise of a new level of mental prosperity, where being happy, focused, and motivated are matters of course. The revolution that biology has brought to our lives may pale in comparison to the revolution that psychology will bring. On the more social side of things, psychology may allow us to understand nationalism, sectarianism, and the tribal instinct in general, and allow us to finally learn to undermine fanaticism, encourage critical thought, and make people more rational. The benefits of this are almost impossible to overstate: It is our own limited, broken, 90%-or-so heuristic rationality that has brought us from simians to Shakespeare, from gorillas to Gödel. To raise that figure to 95% or 99% or 99.9% could be as revolutionary as was whatever evolutionary change first brought us out of the savannah as Australopithecus africanus.

Sociologists and anthropologists will also have a great deal to contribute to this process, as they approach the tribal instinct from the top down. They may be able to tell us how nations are formed and undermined, why some cultures assimilate and others collide. They can work to understand and combat bigotry in all its forms—racism, sexism, ethnocentrism. These could be the fields that finally end war, by understanding and correcting the imbalances in human societies that give rise to violent conflict.

Political scientists and public policy researchers can allow us to understand and restructure governments, undermining corruption, reducing inequality, making voting systems more expressive and more transparent. They can search for the keystones of different political systems, finding the weaknesses in democracy to shore up and the weaknesses in autocracy to exploit. They can work toward a true international government, representative of all the world’s people and with the authority and capability to enforce global peace. If the sociologists don’t end war and genocide, perhaps the political scientists can—or more likely they can do it together.

And then, at last, we come to economists. While I certainly work with a lot of ideas from psychology, sociology, and political science, I primarily consider myself an economist. Why is that? Why do I think the most important problems for me—and perhaps everyone—to be working on are fundamentally economic?

Because, above all, economics is broken. The other social sciences are basically on the right track; their theories are still very limited, their models are not very precise, and there are decades of work left to be done, but the core principles upon which they operate are correct. Economics is the field to work in because of criterion 3: Almost all the important problems in economics are underinvested.

Macroeconomics is where we are doing relatively well, and yet the Keynesian models that allowed us to reduce the damage of the Second Depression nonetheless had no power to predict its arrival. While inflation has been at least somewhat tamed, the far worse problem of unemployment has not been resolved or even really understood.

When we get to microeconomics, the neoclassical models are totally defective. Their core assumptions of total rationality and total selfishness are embarrassingly wrong. We have no idea what controls asset prices, or decides credit constraints, or motivates investment decisions. Our models of how people respond to risk are all wrong. We have no formal account of altruism or its limitations. As manufacturing is increasingly automated and work shifts into services, most economic models make no distinction between the two sectors. While finance takes over more and more of our society’s wealth, most formal models of the economy don’t even include a financial sector.

Economic forecasting is no better than chance. The most widely-used asset-pricing model, CAPM, fails completely in empirical tests; its defenders concede this and then have the audacity to declare that it doesn’t matter because the mathematics works. The Black-Scholes derivative-pricing model that caused the Second Depression could easily have been predicted to do so, because it contains a term that assumes normal distributions when we know for a fact that financial markets are fat-tailed; simply put, it claims certain events will never happen that actually occur several times a year.
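
This isn’t the Black-Scholes formula itself, but here is a back-of-the-envelope sketch of how much a normality assumption understates tail risk. The Student’s t stand-in for “fat-tailed returns” and its parameters are my own assumptions, chosen purely for illustration, not a calibration to any real market.

```python
# Rough illustration of the fat-tails point: a normal distribution treats a
# 5-sigma daily move as a once-in-thousands-of-years event, while a heavier-
# tailed distribution (Student's t with few degrees of freedom, a common
# stand-in for real return distributions) produces such moves routinely.
# All parameters here are hypothetical.
import numpy as np
from scipy import stats

SIGMA_MOVE = 5   # size of the daily move, in standard deviations
DF = 3           # degrees of freedom for the fat-tailed alternative

p_normal = 2 * stats.norm.sf(SIGMA_MOVE)                  # two-sided tail probability
t_scale = np.sqrt((DF - 2) / DF)                          # rescale the t to unit variance
p_fat = 2 * stats.t.sf(SIGMA_MOVE / t_scale, df=DF)

TRADING_DAYS = 252
print(f"Normal: one such day every {1 / (p_normal * TRADING_DAYS):,.0f} years")
print(f"Fat-tailed: one such day every {1 / (p_fat * TRADING_DAYS):.1f} years")
```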

Worst of all, economics is the field that people listen to. When a psychologist or sociologist says something on television, people say that it sounds interesting and basically ignore it. When an economist says something on television, national policies are shifted accordingly. Austerity exists as national policy in part due to a spreadsheet error by two famous economists.

Keynes already knew this in 1936: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Meanwhile, the problems that economics deals with have a direct influence on the lives of millions of people. Bad economics gives us recessions and depressions; it cripples our industries and siphons off wealth to an increasingly corrupt elite. Bad economics literally starves people: It is because of bad economics that there is still such a thing as world hunger. We have enough food, we have the technology to distribute it—but we don’t have the economic policy to lift people out of poverty so that they can afford to buy it. Bad economics is why we don’t have the funding to cure diabetes or colonize Mars (but we have the funding for oil fracking and aircraft carriers, don’t we?). All of that other scientific research that needs to be done probably could be done, if the resources of our society were properly distributed and utilized.

This combination of overwhelming influence, overwhelming importance, and overwhelming error makes economics the low-hanging fruit; you don’t even have to be particularly brilliant to have better ideas than most economists (though no doubt it helps if you are). Economics is where we have a whole bunch of important questions that are unanswered—or for which the answers we have are wrong. (As Will Rogers said, “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)

Thus, rather than tell you to go into finance and earn to give, those economists could simply have said: “You should become an economist. You could hardly do worse than we have.”

Happy Capybara Day! Or the power of culture

JDN 2457131 EDT 14:33.

Did you celebrate Capybara Day yesterday? You didn’t? Why not? We weren’t able to find any actual capybaras this year, but maybe next year we’ll be able to plan better and find a capybara at a zoo; unfortunately the nearest zoo with a capybara appears to be in Maryland. But where would we be without a capybara to consult annually on the stock market?

Right now you are probably rather confused, perhaps wondering if I’ve gone completely insane. This is because Capybara Day is a holiday of my own invention, one which only a handful of people have even heard about.

But if you think we’d never have a holiday so bizarre, think again: For all I did was make some slight modifications to Groundhog Day. Instead of consulting a groundhog about the weather every February 2, I proposed that we consult a capybara about the stock market every April 17. And if you think you have some reason why groundhogs are better at predicting the weather (perhaps because they at least have some vague notion of what weather is) than capybaras are at predicting the stock market (since they have no concept of money or numbers), think about this: Capybara Day could produce extremely accurate predictions, provided only that people actually believed it. The prophecy of rising or falling stock prices could very easily become self-fulfilling. If it were a cultural habit of ours to consult capybaras about the stock market, capybaras would become good predictors of the stock market.
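
Here is a deliberately silly toy simulation of that self-fulfilling dynamic; the price process—ordinary noise plus a push from however many traders believe the prophecy—is entirely made up for illustration, not a model of any actual market.

```python
# Toy self-fulfilling-prophecy simulation: if some fraction of traders believe
# the capybara's prediction and trade in its direction, the price tends to move
# that way, so the prediction "comes true." All dynamics here are invented.
import random

random.seed(0)

def hit_rate(believers: float, periods: int = 250) -> float:
    """Fraction of periods in which the price moves the way the prophecy said."""
    hits = 0
    for _ in range(periods):
        prophecy = random.choice([+1, -1])               # capybara says up or down
        noise = random.gauss(0, 1.0)                     # ordinary market noise
        price_move = noise + 2.0 * believers * prophecy  # believers push the price
        if price_move * prophecy > 0:
            hits += 1
    return hits / periods

print(hit_rate(believers=0.0))  # ~0.5: nobody believes, the capybara is pure chance
print(hit_rate(believers=0.5))  # well above 0.5: belief makes the prophecy accurate
```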

That might seem a bit far-fetched, but think about this: Why is there a January Effect? (To be fair, some researchers argue that there isn’t, and the apparent correlation between higher stock prices and the month of January is simply an illusion, perhaps the result of data overfitting.)

But I think it probably is real, and moreover has some very obvious reasons behind it. In this I’m in agreement with Richard Thaler, a founder of cognitive economics who wrote about such anomalies in the 1980s. December is a time when two very culturally-important events occur: The end of the year, during which many contracts end, profits are assessed, and tax liabilities are determined; and Christmas, the greatest surge of consumer spending and consumer debt.

The first effect means that corporations are very likely to liquidate assets—particularly assets that are running at a loss—in order to minimize their tax liabilities for the year, which will drive down prices. The second effect means that consumers are in search of financing for extravagant gift purchases, and those who don’t run up credit cards may instead sell off stocks. This is if anything a more rational way of dealing with the credit constraint, since interest rates on credit cards are typically far in excess of stock returns. But this surge of selling due to credit constraints further depresses prices.
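
To put rough numbers on that comparison—the rates below are ballpark figures I’m assuming, not data from any particular source—consider the monthly cost of each way of financing a $1,000 gift purchase:

```python
# Ballpark monthly cost comparison behind the claim above (assumed rates, not data):
# carrying a credit card balance at ~20% APR costs far more per month than the
# expected return you forgo by selling ~7%/year stock instead.
balance = 1_000
card_apr = 0.20
stock_annual_return = 0.07

print(balance * card_apr / 12)             # ~16.7 dollars/month in card interest
print(balance * stock_annual_return / 12)  # ~5.8 dollars/month in forgone stock returns
```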

In January, things return to normal; assets are repurchased, debt is repaid. This brings prices back up to where they were, which results in a higher than normal return for January.

Neoclassical economists are loath to admit that such a seasonal effect could exist, because it violates their concept of how markets work—and to be fair, the January Effect is actually weak enough to be somewhat ambiguous. But actually it doesn’t take much deviation from neoclassical models to explain the effect: Tax policies and credit constraints are basically enough to do it, so you don’t even need to go that far into understanding human behavior. It’s perfectly rational to behave this way given the distortions that are created by taxes and credit limits, and the arbitrage opportunity is one that you can only take advantage of if you have large amounts of credit and aren’t worried about minimizing your tax liabilities. It’s important to remember just how strong the assumptions of models like CAPM truly are; in addition to the usual infinite identical psychopaths, CAPM assumes there are no taxes, no transaction costs, and unlimited access to credit. I’d say it’s amazing that it works at all, but actually, it doesn’t—check out this graph of risk versus return and tell me if you think CAPM is actually giving us any information at all about how stock markets behave. It frankly looks like you could have drawn a random line through a scatter plot and gotten just as good a fit. Knowing how strong its assumptions are, we would not expect CAPM to work—and sure enough, it doesn’t.
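
For reference, CAPM’s central prediction is E[R_i] = R_f + β_i (E[R_m] − R_f), where β_i is the covariance of the asset’s returns with the market divided by the market’s variance. Here is a minimal sketch of that calculation on made-up return series—every number below is hypothetical, and nothing here is an empirical test of the model:

```python
# Minimal CAPM calculation sketch on hypothetical return series.
# CAPM predicts E[R_i] = R_f + beta_i * (E[R_m] - R_f), with beta_i equal to
# cov(asset, market) / var(market). The inputs are made up for illustration.
import numpy as np

np.random.seed(0)
market = np.random.normal(0.006, 0.04, size=120)            # hypothetical monthly market returns
asset = 0.8 * market + np.random.normal(0.002, 0.03, 120)   # hypothetical asset returns
risk_free = 0.002                                           # hypothetical monthly risk-free rate

beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)
capm_prediction = risk_free + beta * (market.mean() - risk_free)

print(f"beta = {beta:.2f}")
print(f"CAPM-predicted mean return = {capm_prediction:.4f}")
print(f"Actual mean return         = {asset.mean():.4f}")
```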

Of course, that leaves the question of why our tax policy would be structured in this way—why make the year end on December 31 instead of some other date? And for that, you need to go back through hundreds of years of history: to the Gregorian calendar, which in turn was shaped by Christianity, and before that to the Julian calendar—in other words, to culture.

Culture is one of the most powerful forces that influences human behavior—and also one of the strangest and least-understood. Economic theory is basically silent on the matter of culture. Typically it is ignored entirely, assumed to be irrelevant against the economic incentives that are the true drivers of human action. (There’s a peculiar emotion many neoclassical economists express that I can best describe as self-righteous cynicism, the attitude that we alone—i.e., economists—understand that human beings are not the noble and altruistic creatures many imagine us to be, nor beings of art and culture, but simply cold, calculating machines whose true motives are reducible to profit incentives—and all who think otherwise are being foolish and naïve; true enlightenment is understanding that human beings are infinite identical psychopaths. This is the attitude epitomized by the economist who once sent me an email with “altruism” written in scare quotes.)

Occasionally culture will be invoked as an external (in jargon, exogenous) force, to explain some aspect of human behavior that is otherwise so totally irrational that even invoking nonsensical preferences won’t make it go away. When a suicide bomber blows himself up in a crowd of people, it’s really pretty hard to explain that in terms of rational profit incentives—though I have seen it tried. (It could be self-interest at a larger scale, like families or nations—but then, isn’t that just the tribal paradigm I’ve been arguing for all along?)

But culture doesn’t just motivate us to do extreme or wildly irrational things. It motivates us all the time, often in quite beneficial ways; we wait in line, hold doors for people walking behind us, tip waiters who serve us, and vote in elections, not because anyone pressures us directly to do so (unlike, say, Australia, we do not have compulsory voting) but because it’s what we feel we ought to do. There is a sense of altruism—and altruism provides the ultimate justification for why it is right to do these things—but the primary motivator in most cases is culture—that’s what people do, and are expected to do, around here.

Indeed, even when there is a direct incentive against behaving a certain way—like criminal penalties against theft—the probability of actually suffering a direct penalty is generally so low that it really can’t be our primary motivation. Instead, the reason we don’t cheat and steal is that we think we shouldn’t, and a major part of why we think we shouldn’t is that we have cultural norms against it.

We can actually observe differences in cultural norms across countries in the laboratory. In this 2008 study by Massimo Castro (PDF) comparing British and Italian people playing an economic game called the public goods game—in which you can pay a cost yourself to benefit the group as a whole—it was found not only that people were less willing to benefit groups of foreigners than groups of compatriots, but also that British people were overall more generous than Italian people. This 2010 study by Gachter et al. (actually Joshua Greene talked about it last week) compared how people play the game in various cities and found three basic patterns: In Western European and American cities such as Zurich, Copenhagen, and Boston, cooperation started out high and remained high throughout; people were just cooperative in general. In Asian cities such as Chengdu and Seoul, cooperation started out low, but if people were punished for not cooperating, cooperation would improve over time, eventually reaching about the same level as in the highly cooperative cities. And in Mediterranean cities such as Istanbul, Athens, and Riyadh, cooperation started low and stayed low—even when people could be punished for not cooperating, nobody actually punished them. (These patterns are broadly consistent with the World Bank corruption ratings of these regions, by the way; Western Europe shows very low corruption, while Asia and the Mediterranean show high corruption. Of course that isn’t all that’s going on—Asia isn’t much less corrupt than the Middle East, though this experiment might make you think so.)
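
For readers who haven’t seen it, here is a minimal sketch of the payoff structure of a standard linear public goods game; the endowment and multiplier are generic textbook-style values I’ve assumed, not the parameters used in the studies above.

```python
# Minimal payoff arithmetic for a standard linear public goods game.
# Each player gets an endowment, contributes some of it to a common pot,
# and the pot is multiplied and split equally among all players.
# The endowment and multiplier are generic illustrative values.
ENDOWMENT = 20
MULTIPLIER = 1.6   # the pot is multiplied by this before being split

def payoffs(contributions: list[float]) -> list[float]:
    n = len(contributions)
    share = MULTIPLIER * sum(contributions) / n
    return [ENDOWMENT - c + share for c in contributions]

print(payoffs([20, 20, 20, 20]))   # everyone contributes: each ends with 32
print(payoffs([0, 20, 20, 20]))    # one free-rider: they end with 44, the others with 24
print(payoffs([0, 0, 0, 0]))       # nobody contributes: each keeps only the original 20
```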

Interestingly, these cultural patterns showed Melbourne as behaving more like an Asian city than a Western European one—perhaps being in the Pacific has worn off on Australia more than they realize.

This is very preliminary, cutting-edge research I’m talking about, so be careful about drawing too many conclusions. But in general we’ve begun to find some fairly clear cultural differences in economic behavior across different societies. While this would not be at all surprising to a sociologist or anthropologist, it’s the sort of thing that economists have insisted for years is impossible.

This is the frontier of cognitive economics, in my opinion. We know that culture is a very powerful motivator of our behavior, and it is time for us to understand how it works—and then, how it can be changed. We know that culture can be changed—cultural norms do change over time, sometimes remarkably rapidly; but we have only a faint notion of how or why they change. Changing culture has the power to do things that simply changing policy cannot, however; policy requires enforcement, and when the enforcement is removed the behavior will often disappear. But if a cultural norm can be imparted, it could sustain itself for a thousand years without any government action at all.

Why did we ever privatize prisons?

JDN 2457103 EDT 10:24.

Since the Reagan administration (it’s always Reagan), the United States has undergone a spree of privatization of public services, in which services that are ordinarily performed by government agencies are instead contracted out to private companies. Enormous damage to our society has been done by this sort of privatization, from healthcare to parking meters.

This process can vary in magnitude.

The weakest form, which is relatively benign, is for the government to buy specific services like food service or equipment manufacturing from companies that already provide them to consumers. There’s no particular reason for the government to make its own toothpaste or wrenches rather than buy them from corporations like Procter & Gamble and Sears. Toothpaste is toothpaste and wrenches are wrenches.

The moderate form is for the government to contract services to specific companies that may involve government-specific features like security clearances or powerful military weapons. This is already raising a lot of problems: When Northrop-Grumman makes our stealth bombers, and Boeing builds our nuclear ICBMs, these are publicly-traded, for-profit corporations manufacturing some of the deadliest weapons ever created—weapons that could literally destroy human civilization in a matter of minutes. Markets don’t work well in the presence of externalities, and weapons by definition are almost nothing but externalities; their entire function is to cause harm—typically, death—to people without their consent. While this violence may sometimes be justified, it must never be taken lightly; and we are right to be uncomfortable with the military-industrial complex whose shareholders profit from death and destruction. (Eisenhower tried to warn us!) Still, there are some good arguments to be made for this sort of privatization, since many of these corporations already have high-tech factories and skilled engineers that they can easily repurpose, and competitive bids between different corporations can keep the price down. (Of course, with no-bid contracts that no longer applies; and it certainly hasn’t stopped us from spending nearly as much on the military as the rest of the world combined.)

What I’d really like to focus on today is the strongest form of privatization, in which basic government services are contracted out to private companies. This is what happens when you attempt to privatize soldiers, SWAT teams, and prisons—all of which the United States has done since Reagan.

I say “attempt” to privatize because in a very real sense the privatization of these services is incoherent—they are functions so basic to government that simply to do them makes you, de facto, part of the government. (Or, if done without government orders, it would be organized crime.) All you’ve really done by “privatizing” these services is reduced their transparency and accountability, as well as siphoning off a portion of the taxpayer money in the form of profits for shareholders.

The benefits of privatization, when they exist, are due to competition and consumer freedom. The foundation of a capitalist economy is the ability to say “I’ll take my business elsewhere.” (This is why the notion that a bank can sell your loan to someone else is the opposite of a free market; forcing you to write a check to someone you never made a contract with is antithetical to everything the free market stands for.) Actually the closest thing to a successful example of privatized government services is the United States Postal Service, which collects absolutely no tax income. They do borrow from the government and receive subsidies for some of their services—but so does General Motors. Frankly I think the Postal Service has a better claim to privatization than GM, which you may recall only exists today because of a massive government bailout with a net cost to the US government of $11 billion. All the Postal Service does differently is act as a tightly-regulated monopoly that provides high-quality service to everyone at low prices and pays good wages and pensions, all without siphoning profits to shareholders. (They really screwed up my mail forwarding lately, but they are still one of the best postal systems in the world.) It is in many ways the best of both worlds, the efficiency of capitalism with the humanity of socialism.

The Corrections Corporation of America, on the other hand, is the exact opposite, the worst of both worlds, the inefficiency of socialism with the inhumanity of capitalism. It is not simply corrupt but frankly inherently corrupt—there is simply no way you can have a for-profit prison system that isn’t corrupt. Maybe it can be made less corrupt or more corrupt, but the mere fact that shareholders are earning profits from incarcerating prisoners is fundamentally antithetical to a free and just society.

I really can’t stress this enough: Privatizing soldiers and prisons makes no sense at all. It doesn’t even make sense in a world of infinite identical psychopaths; nothing in neoclassical economic theory in any way supports these privatizations. Neoclassical theory is based upon the presumption of a stable government that enforces property rights, a government that provides as much service as necessary exactly at cost and is not attempting to maximize any notion of its own “profit”.

That’s ridiculous, of course—much like the neoclassical rational agent—and more recent work has been done in public choice theory about the various interest groups that act against each other in government, including lobbyists for private corporations—but public choice theory is above all a theory of government failure. It is a theory of why governments don’t work as well as we would like them to—the main question is how we can suppress the influence of special interest groups to advance the public good. Privatization of prisons means creating special interest groups where none existed, making the government less directed at the public good.

Privatizing government services is often described as “reducing the size of government”, usually interpreted in the most narrow sense to mean the tax burden. But Big Government doesn’t mean you pay 22% of GDP instead of 18% of GDP; Big Government means you can be arrested and imprisoned without trial. Even using the Heritage Foundation’s metrics, the correlation between tax burden and overall freedom is positive. Tyrannical societies don’t bother with taxes; they own the oil refineries directly (Venezuela), or print money whenever they want (Zimbabwe), or build the whole society around doing what they want (North Korea).

The incarceration rate is a much better measure of a society’s freedom than the tax rate will ever be—and the US isn’t doing so well in that regard; indeed we have by some measures the highest incarceration rate in the world. Fortunately we do considerably better when it comes to things like free speech and freedom of religion—indeed we are still above average in overall freedom. Though we do imprison more of our people than China, I’m not suggesting that China has a freer society. But why do we imprison so many people?

Well, it seems to have something to do with privatization of prisons. Indeed, there is a strong correlation between the privatization of US prisons and the enormous explosion of incarceration in the United States. In fact privatized prisons don’t even reduce the tax burden, because privatization does not decrease demand and “privatized” prisons must still be funded by taxes. Prisons do not have customers who choose between different competing companies and shop for the highest quality and lowest price—prisoners go to the prison they are assigned to and they can’t leave (which is really the whole point). Even competition at the purchase end doesn’t make much sense, since the government can’t easily transfer all the prisoners to a new company. Maybe they could transfer ownership of the prison to a different company, but even then the transition costs would be substantial, and besides, there are only a handful of prison corporations that corner most of the (so-called) market.

There is simply no economic basis for privatization of prisons. Nothing in either neoclassical theory or more modern cognitive science in any way supports the idea. So the real question is: Why did we ever privatize prisons?

Basically there is only one reason: Ideology. The post-Reagan privatization spree was not actually based on economics—it was based on economic ideology. Either because they actually believed it, or by the Upton Sinclair Principle, a large number of economists adopted a radical far-right ideology that government basically should not exist—that the more power we give to corporations and the less to elected officials, the better off we will be.

They defended this ideology on vaguely neoclassical grounds, mumbling something about markets being more efficient; but this isn’t even like cutting off the wings of the airplane because we’re assuming frictionless vacuum—it’s like cutting off the engines of the airplane because we simply hate engines and are looking for any excuse to get rid of them. There is absolutely nothing in neoclassical economic theory that says it would be efficient or really beneficial in any way to privatize prisons. It was all about taking power away from the elected government and handing it over to for-profit corporations.

This is a bit of consciousness-raising I’m trying to do: Any time you hear someone say that something should be apolitical, I want you to substitute the word undemocratic. When they say that judges shouldn’t be elected so that they can be apolitical—they mean undemocratic. When they say that the Federal Reserve should be independent of politics—they mean independent of voting. They want to take decision power away from the public at large and concentrate it more in the hands of an elite. People who say this sort of thing literally do not believe in democracy.

To be fair, there may actually be good reasons to not believe in democracy, or at least to believe that democracy should be constrained by a constitution and a system of representation. Certain rights are inalienable, regardless of what the voting public may say, which is why we need a constitution that protects those rights above all else. (In theory… there’s always the PATRIOT ACT, speaking of imprisoning people without trial.) Moreover, most people are simply not interested enough—or informed enough—to vote on every single important decision the government makes. It makes sense for us to place this daily decision-making power in the hands of an elite—but it must be an elite we choose.

And yes, people often vote irrationally. One of the central problems in the United States today is that almost half the population consistently votes against rational government and their own self-interest on the basis of a misguided obsession with banning abortion, combined with a totally nonsensical folk theory of economics in which poor people are poor because they are lazy, the government inherently destroys whatever wealth it touches, and private-sector “job creators” simply hand out jobs to other people because they have extra money lying around. Then of course there’s—let’s face it—deep-seated bigotry toward women, racial minorities, and LGBT people. (The extreme hatred toward Obama and suspicion that he isn’t really born in the US really can’t be explained any other way.) In such circumstances it may be tempting to say that we should give up on democracy and let expert technocrats take charge; but in the absence of democratic safeguards, technocracy is little more than another name for oligarchy. Maybe it’s enough that the President appoints the Federal Reserve chair and the Supreme Court? I’m not so sure. Ben Bernanke definitely handled the Second Depression better than Congress did, I’ll admit; but I’m not sure Alan Greenspan would have in his place, and given his babbling lately about returning to Bretton Woods I’m pretty sure Paul Volcker wouldn’t have. (If you don’t see what’s wrong with going back to Bretton Woods, which was basically a variant of the gold standard, you should read what Krugman has to say about the gold standard.) So basically we got lucky and our monetary quasi-tyrant was relatively benevolent and wise. (Or maybe Bernanke was better because Obama appointed him, while Reagan appointed Greenspan. Carter appointed Volcker, oddly enough; but Reagan reappointed him. It’s always Reagan.) And if you could indeed ensure that tyrants would always be benevolent and wise, tyranny would be a great system—but you can’t.

Democracy doesn’t always lead to the best outcomes, but that’s really not what it’s for. Rather, democracy is for preventing the worst outcomes—no large-scale famine has ever occurred under a mature democracy, nor has any full-scale genocide. Democracies do sometimes forcibly “relocate” populations (particularly indigenous populations, as the US did under Andrew Jackson), and we should not sugar-coat that; people are forced out of their homes and many die. It could even be considered something close to genocide. But no direct and explicit mass murder of millions has ever occurred under a democratic government—no, the Nazis were not democratically elected—and that by itself is a fully sufficient argument for democracy. It could be true that democracies are economically inefficient (they are economically efficient), unbearably corrupt (they are less corrupt), and full of ignorant idiotic hicks (they have higher average educational attainment), and democracy would still be better simply because it prevents famine and genocide. As Churchill said, “Democracy is the worst system, except for all the others.”

Indeed, I think the central reason why American democracy isn’t working well right now is that it’s not very democratic; a two-party system with a plurality “first-past-the-post” vote is literally the worst possible voting system that can still technically be considered democracy. Any worse than that and you only have one party. If we had a range voting system (which is mathematically optimal) and say a dozen parties (they have about a dozen parties in France), people would be able to express their opinions more clearly and in more detail, with less incentive for strategic voting. We probably wouldn’t have such awful turnout at that point, and after realizing that they actually had such a strong voice, maybe people would even start educating themselves about politics in order to make better decisions.
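
For concreteness, here is a minimal sketch of how a range (score) voting tally works—each voter scores every candidate on a fixed scale and the highest total wins. The candidates and ballots below are, of course, hypothetical.

```python
# Minimal range (score) voting tally: each voter scores every candidate on a
# fixed scale (0-10 here), and the candidate with the highest total score wins.
# Candidates and ballots are hypothetical, purely for illustration.
ballots = [
    {"A": 10, "B": 6, "C": 0},
    {"A": 3,  "B": 9, "C": 5},
    {"A": 0,  "B": 7, "C": 10},
    {"A": 8,  "B": 8, "C": 2},
]

totals: dict[str, int] = {}
for ballot in ballots:
    for candidate, score in ballot.items():
        totals[candidate] = totals.get(candidate, 0) + score

winner = max(totals, key=totals.get)
print(totals)   # {'A': 21, 'B': 30, 'C': 17}
print(winner)   # 'B': a broadly acceptable candidate can win without being most voters' favorite
```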

Privatizing prisons and soldiers takes us in exactly the opposite direction: It makes our government deeply less democratic, fundamentally less accountable to voters. It hands off the power of life and death to institutions whose sole purpose for existence is their own monetary gain. We should never have done it—and we must undo it as soon as we possibly can.

Scope neglect and the question of optimal altruism

JDN 2457090 EDT 16:15.

We’re now on Eastern Daylight Time because of this bizarre tradition of shifting our time zone forward for half of the year. It’s supposed to save energy, but a natural experiment in Indiana suggests it actually increases energy demand. So why do we do it? Like every ridiculous tradition (have you ever tried to explain Groundhog Day to someone from another country?), we do it because we’ve always done it.

This week’s topic is scope neglect, one of the most pervasive—and pernicious—cognitive biases human beings face. It raises a great many challenges, both practical and theoretical, among them what I call the question of optimal altruism.

The question is simple to ask yet remarkably challenging to answer: How much should we be willing to sacrifice in order to benefit others? If we think of this as a number, your solidarity coefficient (s), it is the most you are willing to pay per unit of benefit to someone else: you take an action whenever s B > C, where C is the cost to you and B is the benefit to the other person.

This is analogous to the biological concept of relatedness (r), to which Hamilton’s Rule applies: act whenever r B > C. Solidarity is the psychological analogue; instead of valuing people based on their genetic similarity to you, you value them based on… well, that’s the problem.
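
To make the analogy concrete, here is a minimal sketch of the decision rule in Python. The function name and the example numbers are mine, chosen purely for illustration; they aren’t drawn from any formal model.

```python
def worth_doing(s, benefit, cost):
    """Decide whether to take an altruistic action.

    s: solidarity coefficient, how much you weight the other person's welfare
    benefit: the benefit the action gives to someone else
    cost: the cost the action imposes on you

    The rule is the psychological analogue of Hamilton's Rule (r B > C):
    act whenever s * B > C.
    """
    return s * benefit > cost

# Illustrative numbers only: a $100 cost that produces $1,000 of benefit
# for a stranger is worth taking for anyone whose s exceeds 0.1.
print(worth_doing(s=0.05, benefit=1000, cost=100))  # False
print(worth_doing(s=0.20, benefit=1000, cost=100))  # True
```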

I can easily place upper and lower bounds. The lower bound is zero: you should definitely be willing to sacrifice something to help other people—otherwise you are a psychopath. The upper bound is one: there’s no point in paying more in cost than you produce in benefit, and in fact even paying exactly as much cost to yourself as you yield in benefit for other people doesn’t make a lot of sense, because it implies that your own self-interest is meaningless and that your better understanding of your own needs than of the needs of others is also irrelevant.

But beyond that, it gets a lot harder. Should it be 90%? 50%? 10%? 1%? How should it vary between friends, family, and strangers? It’s really hard to say—and this inability to decide precisely how much other people should be worth to us may be part of why we suffer scope neglect in the first place.

Scope neglect is the fact that we are not willing to expend effort or money in direct proportion to the benefit it would have. When different groups were asked how much they would be willing to donate in order to save the lives of 2,000 birds, 20,000 birds, or 200,000 birds, the answers they gave were statistically indistinguishable—always about $80. But however much a bird’s life is worth to you, shouldn’t 200,000 birds be worth, well, 200,000 times as much? In fact, more than that, because the marginal utility of wealth is decreasing, but I see no reason to think that the marginal utility of birds decreases nearly as fast.

But therein lies the problem: Usually we can’t pay 200,000 times as much. I’d feel like a horrible person if I weren’t willing to expend at least $10 or an equivalent amount of effort in order to save a bird. To save 200,000 birds that means I’d owe $2 million—and I simply don’t have $2 million.

You can get similar results to the bird experiment if you use children—though, as one might hope, the absolute numbers are a bit bigger, usually more like $500 to $1000. (And this, it turns out, is about how much it actually costs to save a child’s life by a particularly efficient means, such as anti-malaria nets, de-worming, or direct cash transfer. So please, by all means, give $1000 to UNICEF or the Against Malaria Foundation. If you can’t give $1000, give $100; if you can’t give $100, give $10.) It doesn’t much matter whether you say that the project will save 500 children, 5,000 children, or 50,000 children—people will still give about $500 to $1000. But once again, if I’m willing to spend $1000 to save a child—and I definitely am—how much should I be willing to spend to end malaria, which kills 500,000 children a year? Apparently $500 million, which I not only don’t have; I will almost certainly never make that much money cumulatively in my entire life. ($2 million, on the other hand, I almost certainly will make cumulatively—the median income of an economist is about $90,000 per year, so if I work for at least 22 years with that as my average income, I’ll have cumulatively made $2 million. My net wealth may never be that high—though if I get better positions, or I’m lucky enough or clever enough with the stock market, it might—but my cumulative income almost certainly will be. Indeed, the average gain in cumulative income from a college degree is about $1 million. Because it takes time—time is money—and loans carry interest, this gives it a net present value of about $300,000.)
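
Just to lay out that back-of-the-envelope arithmetic explicitly (these are the same rough figures quoted above, nothing new):

```python
# Rough figures from the text above; all of them are approximations.
per_bird = 10              # dollars I'd feel obligated to spend to save one bird
birds = 200_000
print(per_bird * birds)    # 2,000,000 -- the $2 million I don't have

per_child = 1_000          # approximate cost to save one child's life
malaria_deaths_per_year = 500_000
print(per_child * malaria_deaths_per_year)  # 500,000,000 -- the $500 million

median_economist_income = 90_000
years_worked = 22
print(median_economist_income * years_worked)  # 1,980,000 -- roughly $2 million cumulative
```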

But maybe scope neglect isn’t such a bad thing after all. There is a very serious problem with this sort of moral dilemma: The question didn’t say I would single-handedly save 200,000 birds—and indeed, that notion seems quite ridiculous. If I knew that I could actually save 200,000 birds and I were the only one who could do it, dammit, I would try to come up with that $2 million. I might not succeed, but I really would try as hard as I could.

And if I could single-handedly end malaria, I hereby vow that I would do anything it took to achieve that. Short of mass murder, nothing I could do would cost the world more than malaria itself does. I have no idea how I’d come up with $500 million, but I’d certainly try. Bill Gates could easily come up with that $500 million—so he did. In fact he endowed the Gates Foundation with $28 billion, and they’ve spent $1.3 billion of that on fighting malaria, saving hundreds of thousands of lives.

With this in mind, what is scope neglect really about? I think it’s about coordination. It’s not that people don’t care more about 200,000 birds than they do about 2,000; and it’s certainly not that they don’t care more about 50,000 children than they do about 500. Rather, the problem is that people don’t know how many other people are likely to donate, or how expensive the total project is likely to be; and we don’t know how much we should be willing to pay to save the life of a bird or a child.

Hence, what we basically do is give up; since we can’t actually assess the marginal utility of our donation dollars, we fall back on our automatic emotional response. Our mind focuses itself on visualizing that single bird covered in oil, or that single child suffering from malaria. We then hope that the representativeness heuristic will guide us in how much to give. Or we follow social norms, and give as much as we think others would expect us to give.

While many in the effective altruism community take this to be a failing, they never actually say what we should do—they never give us a figure for how much money we should be willing to donate to save the life of a child. Instead they retreat to abstraction, saying that whatever it is we’re willing to give to save a child, we should be willing to give 50,000 times as much to save 50,000 children.

But it’s not that simple. A bigger project may attract more supporters; if the number of donors grows in direct proportion to the scale of the project, then giving a constant amount is the optimal response. Since the two probably aren’t actually proportional, you likely should give somewhat more to causes that affect more people; but exactly how much more is an astonishingly difficult question. I really don’t blame people—or myself—for only giving a little bit more to causes with larger impact, because actually getting the right answer is so incredibly hard. This is why it’s so important that we have institutions like GiveWell and Charity Navigator, which do the hard work of researching the effectiveness of charities and telling us which ones we should give to.

Yet even if we can properly prioritize which charities to give to first, that still leaves the question of how much each of us should give. 1% of our income? 5%? 10%? 20%? 50%? Should we give so much that we throw ourselves into the same poverty we are trying to save others from?

In his earlier work Peter Singer seemed to think we should give so much that it throws us into poverty ourselves; he asked us to literally compare every single purchase and ask ourselves whether a year of lattes or a nicer car is worth a child’s life. Of course even he doesn’t live that way, and in his later books Singer seems to have realized this; he now recommends the far more modest standard that everyone give at least 1% of their income. (He himself gives about 33%, but he’s also very rich, so he doesn’t feel it nearly as much.) I think he may have overcompensated: if literally everyone gave at least 1%, that would be more than enough to end world hunger and solve many other problems—world nominal GDP is over $70 trillion, so 1% of that is $700 billion a year—but we know that this won’t happen. Some will give more, others less; most will give nothing at all. Hence I think those of us who do give should give more than our share, and I lean toward figures more like 5% or 10%.

But then, why not 50% or 90%? It is very difficult for me to argue on principle why we shouldn’t be expected to give that much. Because my income is such a small proportion of the total donations, the marginal utility of each dollar I give is basically constant—and quite high; if it takes about $1000 to save a child’s life on average, and each of these children will then live about 60 more years at about half the world average happiness, that’s about 30 QALY per $1000, or about 30 milliQALY per dollar. Even at my current level of income (incidentally about as much as I think the US basic income should be), I’m benefiting myself only about 150 microQALY per dollar—so my money is worth about 200 times as much to those children as it is to me.
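
Spelling out that calculation with the same rough numbers (the 150 microQALY figure for my own marginal benefit is the estimate given above, not anything precise):

```python
# Marginal utility of a donated dollar, using the rough figures above.
cost_per_life = 1_000        # dollars to save one child's life
years_gained = 60            # additional years that child lives
quality = 0.5                # roughly half the world-average happiness
qaly_per_life = years_gained * quality                   # 30 QALY
qaly_per_dollar_donated = qaly_per_life / cost_per_life  # 0.03 QALY = 30 milliQALY

qaly_per_dollar_to_me = 150e-6   # 150 microQALY, the estimate from the text

print(qaly_per_dollar_donated / qaly_per_dollar_to_me)   # 200.0
```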

So now we have to ask ourselves the really uncomfortable question: How much do I value those children, relative to myself? If I am at all honest, the value is not 1; I’m not prepared to die for someone I’ve never met 10,000 kilometers away in a nation I’ve never even visited, nor am I prepared to give away all my possessions and throw myself into the same starvation I am hoping to save them from. I value my closest friends and family approximately the same as myself, but I have to admit that I value random strangers considerably less.

Do I really value them at less than 1%, as these figures would seem to imply? I feel like a monster saying that, but maybe it really isn’t so terrible—after all, most economists seem to think that the optimal solidarity coefficient is in fact zero. Maybe we need to become more comfortable admitting that random strangers aren’t worth that much to us, simply so that we can coherently acknowledge that they aren’t worth nothing. Very few of us actually give away all our possessions, after all.

Then again, what do we mean by worth? I can say from direct experience that a single migraine causes me vastly more pain than learning about the death of 200,000 people in an earthquake in Southeast Asia. And while I gave about $100 to the relief efforts involved in that earthquake, I’ve spent considerably more on migraine treatments—thousands, once you include health insurance. But given the chance, would I be willing to suffer a migraine to prevent such an earthquake? Without hesitation. So the amount of pain we feel is not the same as the amount of money we pay, which is not the same as what we would be willing to sacrifice. I think the latter is more indicative of how much people’s lives are really worth to us—but then, what we pay is what has the most direct effect on the world.

It’s actually possible to justify not dying or selling all my possessions even if my solidarity coefficient is much higher—it just leads to some really questionable conclusions. Essentially the argument is this: I am an asset. I have what economists call “human capital”—my health, my intelligence, my education—that gives me the opportunity to affect the world in ways those children cannot. In my ideal (albeit improbable) imagined future in which I actually become President of the World Bank and have the authority to set global development policy, I myself could have a marginal impact measured in megaQALY—millions of person-years of better life. In the far more likely scenario in which I attain some mid-level research or advisory position, I could be one of thousands of people who together have that sort of impact—which still means my own marginal effect is on the order of kiloQALY. And clearly it’s true that if I died, or even if I sold all my possessions, these futures would no longer be possible.

The problem with that reasoning is that it’s wildly implausible to say that everyone in the First World is in this same sort of position—Peter Singer can say that, and maybe I can say that, and indeed hundreds of development economists can say that—but at least 99.9% of the First World population are not development economists, nor are they physicists likely to invent cold fusion, nor biomedical engineers likely to cure HIV, nor aid workers who distribute anti-malaria nets and polio vaccines, nor politicians who set national policy, nor diplomats who influence international relations, nor authors whose bestselling books raise worldwide consciousness. Yet I am not comfortable saying that all the world’s teachers, secretaries, airline pilots and truck drivers should give away their possessions either. (Maybe all the world’s bankers and CEOs should—or at least most of them.)

Is it enough that our economy would collapse without teachers, secretaries, airline pilots and truck drivers? But this seems rather like the fact that if everyone in the world visited the same restaurant there wouldn’t be enough room. Surely we could do without any individual teacher, any individual truck driver? If everyone gave the same proportion of their income, 1% would be more than enough to end malaria and world hunger. But we know that not everyone will give, and the job won’t get done if those of us who do give only 1%.

Moreover, it’s also clearly not the case that everything I spend money on makes me more likely to become a successful and influential development economist. Buying a suit and a car clearly does—it’s much easier to get good jobs that way. Even leisure can be justified to some extent, since human beings need leisure and there’s no sense burning myself out before I get anything done. But do I need both of my video game systems? Couldn’t I buy a bit less Coke Zero? What if I watched a 20-inch TV instead of a 40-inch one? I still have free time; could I get another job and donate that money? This is the sort of question Peter Singer tells us to ask ourselves, and it quickly leads to a painfully spartan existence in which most of our time is spent thinking about whether what we’re doing is advancing or damaging the cause of ending world hunger. But then the cost of that stress and cognitive effort must be included as well; and how do you optimize your own cognitive effort? You need to think about the cost of thinking about the cost of thinking… and on and on. This is why bounded rationality modeling is hard, even though it’s plainly essential to both cognitive science and computer science. (John Stuart Mill wrote an essay that resonates deeply with me about how the pressure to change the world drove him into depression, and how he learned to accept that he could still change the world even if he weren’t constantly pressuring himself to do so—and indeed he did. James Mill set out to create in his son, John Stuart Mill, the greatest philosopher in the history of the world—and I believe that he succeeded.)

Perhaps we should figure out what proportion of the world’s people are likely to give, and how much we need altogether, and then assign the amount we expect from each of them based on that? The more money you ask from each, the fewer people are likely to give. This creates an optimization problem akin to setting the price of a product under monopoly—monopolies maximize profits by carefully balancing the quantity sold with the price at which they sell, and perhaps a similar balance would allow us to maximize development aid. But wouldn’t it be better if we could simply increase the number of people who give, so that we don’t have to ask so much of those who are generous? That means tax-funded foreign aid is the way to go, because it ensures coordination. And indeed I do favor increasing foreign aid to about 1% of GDP—in the US it is currently about $50 billion, 0.3% of GDP, a little more than 1% of the Federal budget. (Most people who say we should “cut” foreign aid don’t realize how small it already is.) But foreign aid is coercive; wouldn’t it be better if people would give voluntarily?
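
As a toy illustration of that optimization problem, here is a small sketch in Python. The donor-response curve, its parameters, and the population figure are all invented for the example; the point is only the shape of the trade-off between how much you ask of each person and how many people actually give.

```python
import numpy as np

population = 1_000_000_000   # hypothetical pool of potential donors

def fraction_who_give(ask):
    # Invented response curve: the more you ask, the fewer give;
    # half of people give when asked for $50 (an assumption, not data).
    return 1.0 / (1.0 + (ask / 50.0) ** 1.5)

asks = np.linspace(1, 1000, 1000)   # candidate per-person asks, in dollars
total_raised = asks * fraction_who_give(asks) * population

best = np.argmax(total_raised)
print(f"Ask about ${asks[best]:.0f} per person, "
      f"raising about ${total_raised[best] / 1e9:.1f} billion")
# Like a monopolist setting a price, raising the ask trades off against
# the number of donors; some intermediate ask maximizes the total raised.
```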

I don’t have a simple answer. I don’t know how much other people’s lives ought to be worth to us, or what it means for our decisions once we assign that value. But I hope I’ve convinced you that this problem is an important one—and made you think a little more about scope neglect and why we have it.

Oppression is quantitative.

JDN 2457082 EDT 11:15.

Economists are often accused of assigning dollar values to everything, of being Oscar Wilde’s definition of a cynic, someone who knows the price of everything and the value of nothing. And there is more than a little truth to this, particularly among neoclassical economists; I was alarmed a few days ago to receive an email response from an economist that included the word ‘altruism’ in scare quotes as though this were somehow a problematic or unrealistic concept. (Actually, altruism is already formally modeled by biologists, and my claim that human beings are altruistic would be so uncontroversial among evolutionary biologists as to be considered trivial.)

But sometimes this accusation is based upon something economists do that is actually tremendously useful, even necessary to good policymaking: We make everything quantitative. Nothing is ever “yes” or “no” to an economist (sometimes even when it probably should be; the debate among economists in the 1960s over whether slavery was economically efficient does seem rather beside the point), but always more or less; never good or bad, but always better or worse. For example, as I discussed in my post on the minimum wage, the mainstream position among economists is not that the minimum wage is always harmful nor that it is always beneficial, but that it is a policy with both costs and benefits, one that on average neither increases nor decreases unemployment. The mainstream position among economists about climate policy is that we should institute either a high carbon tax or a system of cap-and-trade permits; no economist I know wants us to either do nothing and let the market decide (a position most Republicans currently seem to take) or suddenly ban coal and oil (the latter is a strawman position I’ve heard environmentalists accused of holding, but never actually heard advocated; even Greenpeace wants to ban offshore drilling, not oil in general).

This makes people uncomfortable, I think, because they want moral issues to be simple. They want “good guys” who are always right and “bad guys” who are always wrong. (Speaking of strawman environmentalism, a good example of this is Captain Planet, in which no one ever seems to pollute the environment in order to help people or even in order to make money; no, they simply do it because they hate clean water and baby animals.) They don’t want to talk about options that are more good or less bad; they want one option that is good and all other options that are bad.

This attitude tends to become infused with righteousness, such that anyone who disagrees is an agent of the enemy. Politics is the mind-killer, after all. If you acknowledge that there might be some downside to a policy you agree with, that’s like betraying your team.

But in reality, the failure to acknowledge downsides can lead to disaster. Problems that could have been prevented are instead ignored and denied. Getting the other side to recognize the downsides of their own policies might actually help you persuade them to your way of thinking. And appreciating that there is a continuum of possibilities that are better and worse in various ways to various degrees is what allows us to make the world a better place even as we know that it will never be perfect.

There is a common refrain you’ll hear from a lot of social justice activists which sounds really nice and egalitarian, but actually has the potential to completely undermine the entire project of social justice.

This is the idea that oppression can’t be measured quantitatively, and that we shouldn’t try to compare different levels of oppression. The notion that some people are more oppressed than others is often derided as the Oppression Olympics. (Some use this term more narrowly, to refer to cases where a discussion gets derailed by debate over who has it worse—but then the problem is really that discussions get derailed, isn’t it?)

This sounds nice, because it means we don’t have to ask hard questions like, “Which is worse, sexism or racism?” or “Who is worse off, people with cancer or people with diabetes?” These are very difficult questions, and maybe they aren’t the right ones to ask—after all, there’s no reason to think that fighting racism and fighting sexism are mutually exclusive; they can in fact be complementary. Research into cancer only prevents us from doing research into diabetes if our total research budget is fixed—this is more than anything else an argument for increasing research budgets.

But we must not throw out the baby with the bathwater. Oppression is quantitative. Some kinds of oppression are clearly worse than others.

Why is this important? Because otherwise you can’t measure progress. If you have a strictly qualitative notion of oppression where it’s black-and-white, on-or-off, oppressed-or-not, then we haven’t made any progress on just about any kind of oppression. There is still racism, there is still sexism, there is still homophobia, there is still religious discrimination. Maybe these things will always exist to some extent. This makes the fight for social justice a hopeless Sisyphean task.

But in fact, that’s not true at all. We’ve made enormous progress. Unbelievably fast progress. Mind-boggling progress. For hundreds of millennia humanity made almost no progress at all, and then in the last few centuries we have suddenly leapt toward justice.

Sexism used to mean that women couldn’t own property, they couldn’t vote, and they could be abused and raped with impunity—or even beaten or killed for being raped (which Saudi Arabia still does, by the way). Now sexism just means that women aren’t paid as well, are underrepresented in positions of power such as Congress and Fortune 500 boardrooms, and are still sometimes sexually harassed or raped—but when men are caught doing this they go to prison for years. This change happened in only about 100 years. That’s fantastic.

Racism used to mean that Black people were literally property to be bought and sold. They were slaves. They had no rights at all, they were treated like animals. They were frequently beaten to death. Now they can vote, hold office—one is President!—and racism means that our culture systematically discriminates against them, particularly in the legal system. Racism used to mean you could be lynched; now it just means that it’s a bit harder to get a job and the cops will sometimes harass you. This took only about 200 years. That’s amazing.

Homophobia used to mean that gay people were criminals. We could be sent to prison or even executed for the crime of making love in the wrong way. If we were beaten or murdered, it was our fault for being faggots. Now, homophobia means that we can’t get married in some states (and fewer all the time!), we’re depicted on TV in embarrassing stereotypes, and a lot of people say bigoted things about us. This has only taken about 50 years! That’s astonishing.

And above all, the most extreme example: Religious discrimination used to mean you could be burned at the stake for not being Catholic. It used to mean—and in some countries still does mean—that it’s illegal to believe in certain religions. Now, it means that Muslims are stereotyped because, well, to be frank, there are some really scary things about Muslim culture and some really scary people who are Muslim leaders. (Personally, I think Muslims should be more upset about Ahmadinejad and Al Qaeda than they are about being profiled in airports.) It means that we atheists are annoyed by “In God We Trust”, but we’re no longer burned at the stake. This has taken longer, more like 500 years. But even though it took a long time, I’m going to go out on a limb and say that this progress is wonderful.

Obviously, there’s a lot more progress remaining to be made on all these issues, and others—like economic inequality, ableism, nationalism, and animal rights—but the point is that we have made a lot of progress already. Things are better than they used to be—a lot better—and keeping this in mind will help us preserve the hope and dedication necessary to make things even better still.

If you think that oppression is either-or, on-or-off, you can’t celebrate this progress, and as a result the whole fight seems hopeless. Why bother, when it’s always been on, and will probably never be off? But we started with oppression that was absolutely horrific, and now it’s considerably milder. That’s real progress. At least within the First World we have gone from 90% oppressed to 25% oppressed, and we can bring it down to 10% or 1% or 0.1% or even 0.01%. Those aren’t just numbers, those are the lives of millions of people. As democracy spreads worldwide and poverty is eradicated, oppression declines. Step by step, social changes are made, whether by protest marches or forward-thinking politicians or even by lawyers and lobbyists (they aren’t all corrupt).

And indeed, a four-year-old Black girl with a mental disability living in Ghana whose entire family’s income is $3 a day is more oppressed than I am; not only do I have no qualms about saying that, it would feel deeply unseemly to deny it. I am not totally unoppressed—I am a bisexual atheist with chronic migraines and depression in a country that is suspicious of atheists, systematically discriminates against LGBT people, and does not make proper accommodations for chronic disorders, particularly mental ones. But I am far less oppressed, and that little girl (she does exist, though I know not her name) could be made much less oppressed than she is even by relatively simple interventions (like a basic income). In order to make her fully and totally unoppressed, we would need such a radical restructuring of human society that I honestly can’t really imagine what it would look like. Maybe something like The Culture? Even then, as Iain Banks imagines it, there is inequality between those within The Culture and those outside it, and there have been wars like the Idiran-Culture War, which killed billions; and among those trillions of people on thousands of vast orbital habitats someone, somewhere is probably making a speciesist remark. Yet I can state unequivocally that life in The Culture would be better than my life here and now, which is better than the life of that poor disabled girl in Ghana.

To be fair, we can’t actually put a precise number on it—though many economists try, and one of my goals is to convince them to improve their methods so that they stop using willingness-to-pay and instead try to actually measure utility by something like QALY. A precise number would help, actually—it would allow us to do cost-benefit analyses to decide where to focus our efforts. But while we don’t need a precise number to tell when we are making progress, we do need to acknowledge that there are degrees of oppression, some worse than others.

Oppression is quantitative. And our goal should be minimizing that quantity.