What makes a nation wealthy?

JDN 2457251 EDT 10:17

One of the central questions of economics—perhaps the central question, the primary reason why economics is necessary and worthwhile—is development: How do we raise a nation from poverty to prosperity?

We have done it before: France and Germany rose from the quite literal ashes of World War II to become two of the most prosperous societies in the world. Their per-capita GDP over the 20th century rose like this (all of these figures are from the World Bank World Development Indicators; France is green, Germany is blue):

[Figure: GDP per capita, France (green) and Germany (blue), at market exchange rates]

[Figure: GDP per capita, France (green) and Germany (blue), at purchasing power parity]

The top graph is at market exchange rates, the bottom is correcting for purchasing power parity (PPP). The PPP figures are more meaningful, but unfortunately they only began collecting good data on purchasing power around 1990.

Around the same time, but even more spectacularly, Japan and South Korea rose from poverty-stricken Third World backwaters to high-tech First World powers in only a couple of generations. Check out their per-capita GDP over the 20th century (Japan is green, South Korea is blue):

[Figure: GDP per capita, Japan (green) and South Korea (blue), at market exchange rates]

[Figure: GDP per capita, Japan (green) and South Korea (blue), at purchasing power parity]


This is why I am only half-joking when I define development economics as “the ongoing project to figure out what happened in South Korea and make it happen everywhere in the world”.

More recently China has been on a similar upward trajectory, which is particularly important since China comprises such a huge portion of the world’s population—but they are far from finished:

[Figure: GDP per capita of China, at market exchange rates]

[Figure: GDP per capita of China, at purchasing power parity]

Compare these to societies that have not achieved economic development, such as Zimbabwe (green), India (black), Ghana (red), and Haiti (blue):

[Figure: GDP per capita of Zimbabwe, India, Ghana, and Haiti, at market exchange rates]

[Figure: GDP per capita of Zimbabwe, India, Ghana, and Haiti, at purchasing power parity]

They’re so poor that you can barely see them on the same scale, so I’ve rescaled so that the top is $5,000 per person per year instead of $50,000:

[Figure: the same four countries, rescaled to a $5,000 maximum, at market exchange rates]

[Figure: the same four countries, rescaled to a $5,000 maximum, at purchasing power parity]

Only India actually manages to get above $5,000 per person per year at purchasing power parity, and then not by much, reaching $5,243 per person per year in 2013, the most recent data.

I had wanted to compare North Korea and South Korea, because the two countries were united as recently as 1945 and were not all that different to begin with, yet have taken completely different development trajectories. Unfortunately, North Korea is so impoverished, corrupt, and authoritarian that the World Bank doesn’t even report data on its per-capita GDP. Perhaps that is contrast enough?

And then of course there are the countries in between, which have made some gains but still have a long way to go, such as Uruguay (green) and Botswana (blue):

[Figure: GDP per capita, Uruguay (green) and Botswana (blue), at market exchange rates]

[Figure: GDP per capita, Uruguay (green) and Botswana (blue), at purchasing power parity]

But despite the fact that we have observed successful economic development, we still don’t really understand how it works. A number of theories have been proposed, involving a wide range of factors including exports, corruption, disease, institutions of government, liberalized financial markets, and natural resources (counter-intuitively, more natural resources make your development worse).

I’m not going to resolve that whole debate in a single blog post. (I may not be able to resolve that whole debate in a single career, though I am definitely trying.) We may ultimately find that economic development is best conceived as like “health”; what factors determine your health? Well, a lot of things, and if any one thing goes badly enough wrong the whole system can break down. Economists may need to start thinking of ourselves as akin to doctors (or as Keynes famously said, dentists), diagnosing particular disorders in particular patients rather than seeking one unifying theory. On the other hand, doctors depend upon biologists, and it’s not clear that we yet understand development even at that level.

Instead I want to take a step back, and ask a more fundamental question: What do we mean by prosperity?

My hope is that if we can better understand what it is we are trying to achieve, we can also better understand the steps we need to take in order to get there.

Thus far it has sort of been “I know it when I see it”; we take it as more or less given that the United States and the United Kingdom are prosperous while Ghana and Haiti are not. I certainly don’t disagree with that particular conclusion; I’m just asking what we’re basing it on, so that we can hopefully better apply it to more marginal cases.


For example: Is France more or less prosperous than Saudi Arabia? If we go solely by GDP per capita at PPP, clearly Saudi Arabia is more prosperous at $53,100 per person per year than France is at $37,200 per person per year.

But people actually live longer in France, on average, than they do in Saudi Arabia. Overall reported happiness is higher in France than Saudi Arabia. I think France is actually more prosperous.


In fact, I think the United States is not as prosperous as we pretend ourselves to be. We are certainly more prosperous than most other countries; we are definitely still well within First World status. But we are not the most prosperous nation in the world.

Our total GDP is astonishingly high (highest in the world nominally, second only to China PPP). Our GDP per-capita is higher than any other country of comparable size; no nation with higher GDP PPP than the US has a population larger than the Chicago metropolitan area. (You may be surprised to find that in order from largest to smallest population the countries with higher GDP per capita PPP are the United Arab Emirates, Switzerland, Hong Kong, Singapore, and then Norway, followed by Kuwait, Qatar, Luxembourg, Brunei, and finally San Marino—which is smaller than Ann Arbor.) Our per-capita GDP PPP of $51,300 is markedly higher than that of France ($37,200), Germany ($42,900), or Sweden ($43,500).

But at the same time, if you compare the US to other First World countries, we have nearly the highest rate of child poverty and among the highest infant mortality. We have shorter life expectancy and dramatically higher homicide rates. Our inequality is the highest in the First World. In France and Sweden, the top 0.01% receive about 1% of the income (i.e., 100 times as much as the average person), while in the United States they receive almost 4%, making someone in the top 0.01% nearly 400 times as rich as the average person.
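The arithmetic behind those multiples is simple enough to check directly: a group that comprises fraction p of the population while receiving share s of total income earns s/p times the average person. A quick sketch (the function name is mine):

```python
def income_multiple(population_fraction: float, income_share: float) -> float:
    """Average income of a group, as a multiple of the overall average income.

    If a group is fraction p of the population and receives share s of total
    income, its members earn s/p times the average person.
    """
    return income_share / population_fraction

# Top 0.01% receiving about 1% of income (France, Sweden):
print(income_multiple(0.0001, 0.01))   # about 100x the average
# Top 0.01% receiving almost 4% of income (United States):
print(income_multiple(0.0001, 0.04))   # about 400x the average
```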

By ranking solely on GDP per capita, we are effectively rigging the game in our own favor. Or rather, the rich in the United States are rigging the game in their own favor (what else is new?), by convincing all the world’s economists to rank countries based on a measure that favors them.

Amartya Sen, one of the greats of development economics, helped develop a scale called the Human Development Index (HDI) that attempts to take broader factors into account. It’s far from perfect, but it’s definitely a step in the right direction.

In particular, France’s HDI is higher than that of Saudi Arabia, fitting my intuition about which country is truly more prosperous. However, the US still does extremely well, with only Norway, Australia, Switzerland, and the Netherlands above us. I think the index might still be biased toward high average incomes rather than overall happiness.

In practice, we still use GDP an awful lot, probably because it’s much easier to measure. It’s sort of like IQ tests and SAT scores; we know damn well it’s not measuring what we really care about, but because it’s so much easier to work with we keep using it anyway.

This is a problem, because the better you get at optimizing toward the wrong goal, the worse your overall outcomes are going to be. If you are just sort of vaguely pointed at several reasonable goals, you will probably be improving your situation overall. But when you start precisely optimizing to a specific wrong goal, it can drag you wildly off course.

This is what we mean when we talk about “gaming the system”. Consider test scores, for example. If you do things that will probably increase your test scores among other things, you are likely to engage in generally good behaviors like getting enough sleep, going to class, studying the content. But if your single goal is to maximize your test score at all costs, what will you do? Cheat, of course.

This is also related to the Friendly AI Problem: It is vitally important to know precisely what goals we want our artificial intelligences to have, because whatever goals we set, they will probably be very good at achieving them. Already computers can do many things that were previously impossible, and as they improve over time we will reach the point where in a meaningful sense our AIs are even smarter than we are. When that day comes, we will want to make very, very sure that we have designed them to want the same things that we do—because if our desires ever come into conflict, theirs are likely to win. The really scary part is that right now most of our AI research is done by for-profit corporations or the military, and “maximize my profit” and “kill that target” are most definitely not the ultimate goals we want in a superintelligent AI. It’s trivially easy to see what’s wrong with these goals: For the former, hack into the world banking system and transfer trillions of dollars to the company accounts. For the latter, hack into the nuclear launch system and launch a few ICBMs in the general vicinity of the target. Yet these are the goals we’ve been programming into the actual AIs we build!

If we set GDP per capita as our ultimate goal to the exclusion of all other goals, there are all sorts of bad policies we would implement: We’d ignore inequality until it reached staggering heights, ignore work stress even as it began to kill us, constantly try to maximize the pressure for everyone to work constantly, use poverty as a stick to force people to work even if people starve, inundate everyone with ads to get them to spend as much as possible, repeal regulations that protect the environment, workers, and public health… wait. This isn’t actually hypothetical, is it? We are doing those things.

At least we’re not trying to maximize nominal GDP, or we’d have long since ended up like Zimbabwe. No, our economists are at least smart enough to adjust for purchasing power. But they’re still designing an economic system that works us all to death to maximize the number of gadgets that come off assembly lines. The purchasing-power adjustment doesn’t include the value of our health or free time.

This is why the Human Development Index is a major step in the right direction; it reminds us that society has other goals besides maximizing the total amount of money that changes hands (because that’s actually all that GDP is measuring; if you get something for free, it isn’t counted in GDP). More recent refinements include things like “natural resource services” that incorporate environmental degradation into estimates of investment. Unfortunately there is no accepted way of doing this, and surprisingly little research on how to improve our accounting methods. Many nations seem resistant to doing so precisely because they know it would make their economic policy look bad—this is almost certainly why China canceled its “green GDP” initiative. This is in fact all the more reason to do it; if it shows that our policy is bad, that means our policy is bad and should be fixed. But people have allowed themselves to value image over substance.

We can do better still, and in fact I think something like the QALY (quality-adjusted life year) is probably the way to go. Rather than some weird arbitrary scaling of GDP with lifespan and Gini index (which is what the HDI is), we need to put everything in the same units, and those units must be directly linked to human happiness. At the very least, we should make some sort of adjustment to our GDP calculation that includes the distribution of wealth and its marginal utility; adding $1,000 to the economy and handing it to someone in poverty should count for a great deal, but adding $1,000,000 and handing it to a billionaire should count for basically nothing. (It’s not bad to give a billionaire another million; but it’s hardly good either, as no one’s real standard of living will change.) Calculating that could be as simple as dividing the amount by the recipient’s current income; if your annual income is $10,000 and you receive $1,000, you’ve added about 0.1 QALY. If your annual income is $1 billion and you receive $1 million, you’ve added only 0.001 QALY. Maybe we should simply separate out all individual (or household, to be simpler?) incomes, take their logarithms, and then use that sum as our “utility-adjusted GDP”. The results would no doubt be quite different.
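As a sketch of what that log-income proposal might look like in practice (the function names and the toy income lists are my own inventions; this is an illustration of the idea, not an official statistic):

```python
import math

def utility_adjusted_gdp(incomes):
    """Sum of log incomes: a toy 'utility-adjusted GDP' in which an extra
    dollar counts in inverse proportion to the income of whoever gets it."""
    return sum(math.log(y) for y in incomes if y > 0)

def marginal_qalys(income: float, windfall: float) -> float:
    """Approximate utility gain from a windfall, dividing by current income
    (the first-order term of the log: log(y + w) - log(y) is roughly w/y)."""
    return windfall / income

# The two examples from the text:
print(marginal_qalys(10_000, 1_000))             # about 0.1 QALY
print(marginal_qalys(1_000_000_000, 1_000_000))  # about 0.001 QALY

# Two toy economies with identical total income, distributed differently;
# the more equal one scores higher on the log measure:
equal   = [50_000] * 4
unequal = [5_000, 5_000, 5_000, 185_000]
print(utility_adjusted_gdp(equal) > utility_adjusted_gdp(unequal))
```

The design choice doing the work is concavity: the logarithm makes a dollar to the poor count for more than a dollar to the rich, which is exactly the adjustment the paragraph above asks for.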

This would create a strong pressure for policy to be directed at reducing inequality even at the expense of some economic output—which is exactly what we should be willing to do. If it’s really true that a redistribution policy would hurt the overall economy so much that the harms would outweigh the benefits, then we shouldn’t do that policy; but that is what you need to show. Reducing total GDP is not a sufficient reason to reject a redistribution policy, because it’s quite possible—easy, in fact—to improve the overall prosperity of a society while still reducing its GDP. There are in fact redistribution policies so disastrous they make things worse: The Soviet Union had them. But a 90% tax on million-dollar incomes would not be such a policy—because we had that in 1960 with little or no ill effect.

Of course, even this has problems; one way to minimize poverty would be to exclude, relocate, or even murder all your poor people. (The Black Death increased per-capita GDP.) Open immigration generally increases poverty rates in the short term, because most of the immigrants are poor. Somehow we’d need to correct for that, only raising the score if you actually improve people’s lives, and not if you simply exclude them from the calculation.

In any case it’s not enough to have the alternative measures; we must actually use them. We must get policymakers to stop talking about “economic growth” and start talking about “human development”; a policy that raises GDP but reduces lifespan should be immediately rejected, as should one that further enriches a few at the expense of many others. We must shift the discussion away from “creating jobs”—jobs are only a means—to “creating prosperity”.

Scope neglect and the question of optimal altruism

JDN 2457090 EDT 16:15.

We’re now on Eastern Daylight Time because of this bizarre tradition of shifting our time zone forward for half of the year. It’s supposed to save energy, but a natural experiment in Indiana suggests it actually increases energy demand. So why do we do it? Like every ridiculous tradition (have you ever tried to explain Groundhog Day to someone from another country?), we do it because we’ve always done it.
This week’s topic is scope neglect, one of the most pervasive—and pernicious—cognitive heuristics human beings face. Scope neglect raises a great many challenges not only practically but also theoretically—it raises what I call the question of optimal altruism.

The question is simple to ask yet remarkably challenging to answer: How much should we be willing to sacrifice in order to benefit others? If we think of this as a number, your solidarity coefficient (s), it is equal to the maximum cost you are willing to pay divided by the benefit your action has for someone else: you take an altruistic action whenever s B > C.

This is analogous to the biological concept of relatedness (r), to which Hamilton’s Rule applies: r B > C. Solidarity is the psychological analogue; instead of valuing people based on their genetic similarity to you, you value them based on… well, that’s the problem.
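The decision rule itself is trivial to write down; here is a minimal sketch (the function name and the sample numbers are mine):

```python
def should_act(s: float, benefit: float, cost: float) -> bool:
    """Hamilton-style rule with a solidarity coefficient s: take an
    altruistic action when the discounted benefit to someone else
    exceeds your own cost, i.e. when s*B > C."""
    return s * benefit > cost

# With s = 0 (the psychopath's lower bound), no sacrifice is ever worthwhile:
print(should_act(0.0, benefit=100.0, cost=1.0))   # False
# With s = 1 (the upper bound), you weigh others' benefit like your own cost:
print(should_act(1.0, benefit=100.0, cost=50.0))  # True
# An intermediate value, say s = 0.1, only acts when benefits are 10x costs:
print(should_act(0.1, benefit=100.0, cost=50.0))  # False
```

The entire difficulty of the question, of course, lives in choosing s; the rule is easy, the coefficient is not.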

I can easily place upper and lower bounds. The lower bound is zero: you should definitely be willing to sacrifice something to help other people; otherwise you are a psychopath. The upper bound is one: there’s no point in paying more in cost than you produce in benefit, and in fact even paying the same cost to yourself as you yield in benefit for other people doesn’t make much sense, because it would mean that your own self-interest is meaningless and that your superior understanding of your own needs is irrelevant.

But beyond that, it gets a lot harder. Should it be 90%? 50%? 10%? 1%? How should it vary between friends versus family versus strangers? It’s really hard to say; and this inability to decide precisely how much other people should be worth to us may be part of why we suffer scope neglect in the first place.

Scope neglect is the fact that we are not willing to expend effort or money in direct proportion to the benefit it would have. When different groups were asked how much they would be willing to donate in order to save the lives of 2,000 birds, 20,000 birds, or 200,000 birds, the answers they gave were statistically indistinguishable—always about $80. But however much a bird’s life is worth to you, shouldn’t 200,000 birds be worth, well, 200,000 times as much? In fact, more than that, because the marginal utility of wealth is decreasing, but I see no reason to think that the marginal utility of birds decreases nearly as fast.
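One way to see the inconsistency is to back out the per-bird valuation each answer implies; a flat $80 response values a bird at anywhere from four cents down to four hundredths of a cent (a quick sketch, function name mine):

```python
def implied_value_per_bird(donation: float, birds: int) -> float:
    """Dollar value per bird implied by a fixed total donation."""
    return donation / birds

# The same ~$80 answer, across the three group sizes in the experiment:
for birds in (2_000, 20_000, 200_000):
    print(birds, implied_value_per_bird(80, birds))
# 2,000 birds implies $0.04 per bird; 200,000 birds implies $0.0004 per bird,
# a hundredfold difference in how much a bird is apparently worth.
```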

But therein lies the problem: Usually we can’t pay 200,000 times as much. I’d feel like a horrible person if I weren’t willing to expend at least $10 or an equivalent amount of effort in order to save a bird. To save 200,000 birds that means I’d owe $2 million—and I simply don’t have $2 million.

You can get similar results to the bird experiment if you use children—though, as one might hope, the absolute numbers are a bit bigger, usually more like $500 to $1000. (And this, it turns out, is actually about how much it costs to save a child’s life by a particularly efficient means, such as anti-malaria nets, de-worming, or direct cash transfer. So please, by all means, give $1000 to UNICEF or the Against Malaria Foundation. If you can’t give $1000, give $100; if you can’t give $100, give $10.) It doesn’t much matter whether you say that the project will save 500 children, 5,000 children, or 50,000 children—people will still give about $500 to $1000. But once again, if I’m willing to spend $1000 to save a child—and I definitely am—how much should I be willing to spend to end malaria, which kills 500,000 children a year? Apparently $500 million, which I not only don’t have, but almost certainly will never make cumulatively through my entire life. ($2 million, on the other hand, I almost certainly will make cumulatively—the median income of an economist is $90,000 per year, so if I work for at least 22 years with that as my average income I’ll have cumulatively made $2 million. My net wealth may never be that high—though if I get better positions, or I’m lucky or clever enough with the stock market, it might—but my cumulative income almost certainly will. Indeed, the average gain in cumulative income from a college degree is about $1 million. Because it takes time—time is money—and loans carry interest, this gives it a net present value of about $300,000.)
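The net-present-value step in that parenthetical can be sketched as an ordinary annuity calculation. The 40-year career span and 7% discount rate below are my own assumptions chosen to land near the $300,000 figure, not numbers from any underlying study:

```python
def present_value(annual_amount: float, years: int, rate: float) -> float:
    """Discounted present value of a level annual income stream
    (the ordinary annuity formula: A * (1 - (1+r)^-n) / r)."""
    return annual_amount * (1 - (1 + rate) ** -years) / rate

# $1 million of extra lifetime income, spread evenly over a 40-year career,
# discounted at 7% per year:
premium_per_year = 1_000_000 / 40
pv = present_value(premium_per_year, 40, 0.07)
print(round(pv))  # lands in the neighborhood of $300,000-$350,000
```

The point of the exercise is just that discounting shrinks far-future income a lot: a nominal $1 million premium is worth roughly a third of that today.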

But maybe scope neglect isn’t such a bad thing after all. There is a very serious problem with this sort of moral dilemma: The question didn’t say I would single-handedly save 200,000 birds—and indeed, that notion seems quite ridiculous. If I knew that I could actually save 200,000 birds and I were the only one who could do it, dammit, I would try to come up with that $2 million. I might not succeed, but I really would try as hard as I could.

And if I could single-handedly end malaria, I hereby vow that I would do anything it took to achieve that. Short of mass murder, anything I could do couldn’t be a higher cost to the world than malaria itself. I have no idea how I’d come up with $500 million, but I’d certainly try. Bill Gates could easily come up with that $500 million—so he did. In fact he endowed the Gates Foundation with $28 billion, and they’ve spent $1.3 billion of that on fighting malaria, saving hundreds of thousands of lives.

With this in mind, what is scope neglect really about? I think it’s about coordination. It’s not that people don’t care more about 200,000 birds than they do about 2,000; and it’s certainly not that they don’t care more about 50,000 children than they do about 500. Rather, the problem is that people don’t know how many other people are likely to donate, or how expensive the total project is likely to be; and we don’t know how much we should be willing to pay to save the life of a bird or a child.

Hence, what we basically do is give up; since we can’t actually assess the marginal utility of our donation dollars, we fall back on our automatic emotional response. Our mind focuses itself on visualizing that single bird covered in oil, or that single child suffering from malaria. We then hope that the representativeness heuristic will guide us in how much to give. Or we follow social norms, and give as much as we think others would expect us to give.

While many in the effective altruism community take this to be a failing, they never actually say what we should do—they never give us a figure for how much money we should be willing to donate to save the life of a child. Instead they retreat to abstraction, saying that whatever it is we’re willing to give to save a child, we should be willing to give 50,000 times as much to save 50,000 children.

But it’s not that simple. A bigger project may attract more supporters; if the two occur in direct proportion, then constant donation is the optimal response. Since it’s probably not actually proportional, you likely should give somewhat more to causes that affect more people; but exactly how much more is an astonishingly difficult question. I really don’t blame people—or myself—for only giving a little bit more to causes with larger impact, because actually getting the right answer is so incredibly hard. This is why it’s so important that we have institutions like GiveWell and Charity Navigator which do the hard work to research the effectiveness of charities and tell us which ones we should give to.

Yet even if we can properly prioritize which charities to give to first, that still leaves the question of how much each of us should give. 1% of our income? 5%? 10%? 20%? 50%? Should we give so much that we throw ourselves into the same poverty we are trying to save others from?

In his earlier work Peter Singer seemed to think we should give so much that it throws us into poverty ourselves; he asked us to literally compare every single purchase and ask ourselves whether a year of lattes or a nicer car is worth a child’s life. Of course even he doesn’t live that way, and in his later books Singer seems to have realized this, and now recommends the far more modest standard that everyone give at least 1% of their income. (He himself gives about 33%, but he’s also very rich so he doesn’t feel it nearly as much.) I think he may have overcompensated; while if literally everyone gave at least 1% that would be more than enough to end world hunger and solve many other problems—world nominal GDP is over $70 trillion, so 1% of that is $700 billion a year—we know that this won’t happen. Some will give more, others less; most will give nothing at all. Hence I think those of us who give should give more than our share; hence I lean toward figures more like 5% or 10%.

But then, why not 50% or 90%? It is very difficult for me to argue on principle why we shouldn’t be expected to give that much. Because my income is such a small proportion of the total donations, the marginal utility of each dollar I give is basically constant—and quite high; if it takes about $1000 to save a child’s life on average, and each of these children will then live about 60 more years at about half the world average happiness, that’s about 30 QALY per $1000, or about 30 milliQALY per dollar. Even at my current level of income (incidentally about as much as I think the US basic income should be), I’m benefiting myself only about 150 microQALY per dollar—so my money is worth about 200 times as much to those children as it is to me.
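The QALY-per-dollar comparison in this paragraph can be reproduced directly (the function name is mine; the $1000 cost, 60 extra years, half-average happiness, and 150 microQALY figures are the ones in the text):

```python
def qalys_per_dollar(cost_to_save: float, extra_years: float,
                     relative_happiness: float) -> float:
    """QALYs bought per dollar when cost_to_save dollars save one life that
    then runs for extra_years at relative_happiness (1.0 = world average)."""
    return extra_years * relative_happiness / cost_to_save

# Saving a child: 60 more years at half world-average happiness, for $1000.
recipient = qalys_per_dollar(1_000, extra_years=60, relative_happiness=0.5)
print(recipient)  # 0.03 QALY per dollar, i.e. 30 milliQALY

# Versus roughly 150 microQALY per marginal dollar spent on myself:
self_benefit = 150e-6
print(recipient / self_benefit)  # the dollar is worth about 200x more to them
```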

So now we have to ask ourselves the really uncomfortable question: How much do I value those children, relative to myself? If I am at all honest, the value is not 1; I’m not prepared to die for someone I’ve never met 10,000 kilometers away in a nation I’ve never even visited, nor am I prepared to give away all my possessions and throw myself into the same starvation I am hoping to save them from. I value my closest friends and family approximately the same as myself, but I have to admit that I value random strangers considerably less.

Do I really value them at less than 1%, as these figures would seem to imply? I feel like a monster saying that, but maybe it really isn’t so terrible—after all, most economists seem to think that the optimal solidarity coefficient is in fact zero. Maybe we need to become more comfortable admitting that random strangers aren’t worth that much to us, simply so that we can coherently acknowledge that they aren’t worth nothing. Very few of us actually give away all our possessions, after all.

Then again, what do we mean by worth? I can say from direct experience that a single migraine causes me vastly more pain than learning about the death of 200,000 people in an earthquake in Southeast Asia. And while I gave about $100 to the relief efforts involved in that earthquake, I’ve spent considerably more on migraine treatments—thousands, once you include health insurance. But given the chance, would I be willing to suffer a migraine to prevent such an earthquake? Without hesitation. So the amount of pain we feel is not the same as the amount of money we pay, which is not the same as what we would be willing to sacrifice. I think the latter is more indicative of how much people’s lives are really worth to us—but then, what we pay is what has the most direct effect on the world.

It’s actually possible to justify not dying or selling all my possessions even if my solidarity coefficient is much higher—it just leads to some really questionable conclusions. Essentially the argument is this: I am an asset. I have what economists call “human capital”—my health, my intelligence, my education—that gives me the opportunity to affect the world in ways those children cannot. In my ideal imagined future (albeit improbable) in which I actually become President of the World Bank and have the authority to set global development policy, I myself could actually have a marginal impact of megaQALY—millions of person-years of better life. In the far more likely scenario in which I attain some mid-level research or advisory position, I could be one of thousands of people who together have that sort of impact—which still means my own marginal effect is on the order of kiloQALY. And clearly it’s true that if I died, or even if I sold all my possessions, these events would no longer be possible.

The problem with that reasoning is that it’s wildly implausible to say that everyone in the First World is in this same sort of position—Peter Singer can say that, and maybe I can say that, and indeed hundreds of development economists can say that—but at least 99.9% of the First World population are not development economists, nor are they physicists likely to invent cold fusion, nor biomedical engineers likely to cure HIV, nor aid workers who distribute anti-malaria nets and polio vaccines, nor politicians who set national policy, nor diplomats who influence international relations, nor authors whose bestselling books raise worldwide consciousness. Yet I am not comfortable saying that all the world’s teachers, secretaries, airline pilots and truck drivers should give away their possessions either. (Maybe all the world’s bankers and CEOs should—or at least most of them.)

Is it enough that our economy would collapse without teachers, secretaries, airline pilots and truck drivers? But this seems rather like the fact that if everyone in the world visited the same restaurant there wouldn’t be enough room. Surely we could do without any individual teacher, any individual truck driver? If everyone gave the same proportion of their income, 1% would be more than enough to end malaria and world hunger. But we know that everyone won’t give, and the job won’t get done if those of us who do give contribute only 1%.

Moreover, it’s also clearly not the case that everything I spend money on makes me more likely to become a successful and influential development economist. Buying a suit and a car actually clearly does—it’s much easier to get good jobs that way. Even leisure can be justified to some extent, since human beings need leisure and there’s no sense burning myself out before I get anything done. But do I need both of my video game systems? Couldn’t I buy a bit less Coke Zero? What if I watched a 20-inch TV instead of a 40-inch one? I still have free time; could I get another job and donate that money? This is the sort of question Peter Singer tells us to ask ourselves, and it quickly leads to a painfully spartan existence in which most of our time is spent thinking about whether what we’re doing is advancing or damaging the cause of ending world hunger. But then the cost of that stress and cognitive effort must be included; but how do you optimize your own cognitive effort? You need to think about the cost of thinking about the cost of thinking… and on and on. This is why bounded rationality modeling is hard, even though it’s plainly essential to both cognitive science and computer science. (John Stuart Mill wrote an essay that resonates deeply with me about how the pressure to change the world drove him into depression, and how he learned to accept that he could still change the world even if he weren’t constantly pressuring himself to do so—and indeed he did. James Mill set out to create in his son, John Stuart Mill, the greatest philosopher in the history of the world—and I believe that he succeeded.)

Perhaps we should figure out what proportion of the world’s people are likely to give, and how much we need altogether, and then assign the amount we expect from each of them based on that? The more money you ask from each, the fewer people are likely to give. This creates an optimization problem akin to setting the price of a product under monopoly—monopolies maximize profits by carefully balancing the quantity sold with the price at which they sell, and perhaps a similar balance would allow us to maximize development aid. But wouldn’t it be better if we could simply increase the number of people who give, so that we don’t have to ask so much of those who are generous? That means tax-funded foreign aid is the way to go, because it ensures coordination. And indeed I do favor increasing foreign aid to about 1% of GDP—in the US it is currently about $50 billion, 0.3% of GDP, a little more than 1% of the Federal budget. (Most people who say we should “cut” foreign aid don’t realize how small it already is.) But foreign aid is coercive; wouldn’t it be better if people would give voluntarily?
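The monopoly-pricing analogy can be made concrete with a toy model. Everything below is invented for illustration: the exponential participation curve, its $500 scale parameter, and the million-donor population are assumptions, not estimates of real giving behavior:

```python
import math

def fraction_giving(ask: float, scale: float = 500.0) -> float:
    """Toy participation curve: the fraction of people who donate falls off
    exponentially as the amount asked of each person rises."""
    return math.exp(-ask / scale)

def total_raised(ask: float, population: int = 1_000_000) -> float:
    """Total donations if we ask `ask` dollars of everyone in the population."""
    return ask * population * fraction_giving(ask)

# Scan for the revenue-maximizing ask, just as a monopolist scans for price:
best_ask = max(range(10, 2001, 10), key=total_raised)
print(best_ask)  # for an exponential curve, the optimum sits at the scale, $500
```

As with monopoly pricing, asking more of each donor raises less once participation falls off faster than the ask rises; the point of the sketch is the trade-off, not the particular $500 answer.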

I don’t have a simple answer. I don’t know how much other people’s lives ought to be worth to us, or what it means for our decisions once we assign that value. But I hope I’ve convinced you that this problem is an important one—and made you think a little more about scope neglect and why we have it.

Oppression is quantitative.

JDN 2457082 EDT 11:15.

Economists are often accused of assigning dollar values to everything, of fitting Oscar Wilde’s definition of a cynic: someone who knows the price of everything and the value of nothing. And there is more than a little truth to this, particularly among neoclassical economists; I was alarmed a few days ago to receive an email response from an economist that included the word ‘altruism’ in scare quotes, as though this were somehow a problematic or unrealistic concept. (Actually, altruism is already formally modeled by biologists, and my claim that human beings are altruistic would be so uncontroversial among evolutionary biologists as to be considered trivial.)

But sometimes this accusation is based upon things economists do that are actually tremendously useful, even necessary to good policymaking: We make everything quantitative. Nothing is ever “yes” or “no” to an economist (sometimes even when it probably should be; the debate among economists in the 1960s over whether slavery is economically efficient does seem rather beside the point), but always more or less; never good or bad but always better or worse. For example, as I discussed in my post on minimum wage, the mainstream position among economists is not that minimum wage is always harmful nor that minimum wage is always beneficial, but that minimum wage is a policy with costs and benefits, one that on average neither increases nor decreases unemployment. The mainstream position among economists about climate policy is that we should institute either a high carbon tax or a system of cap-and-trade permits; no economist I know wants us to either do nothing and let the market decide (a position most Republicans currently seem to take) or suddenly ban coal and oil (the latter is a strawman position I’ve heard environmentalists accused of, but I’ve never actually heard advocated; even Greenpeace wants to ban offshore drilling, not oil in general).

This makes people uncomfortable, I think, because they want moral issues to be simple. They want “good guys” who are always right and “bad guys” who are always wrong. (Speaking of strawman environmentalism, a good example of this is Captain Planet, in which no one ever seems to pollute the environment in order to help people or even in order to make money; no, they simply do it because they hate clean water and baby animals.) They don’t want to talk about options that are more good or less bad; they want one option that is good and all other options that are bad.

This attitude tends to become infused with righteousness, such that anyone who disagrees is an agent of the enemy. Politics is the mind-killer, after all. If you acknowledge that there might be some downside to a policy you agree with, that’s like betraying your team.

But in reality, the failure to acknowledge downsides can lead to disaster. Problems that could have been prevented are instead ignored and denied. Getting the other side to recognize the downsides of their own policies might actually help you persuade them to your way of thinking. And appreciating that there is a continuum of possibilities that are better and worse in various ways to various degrees is what allows us to make the world a better place even as we know that it will never be perfect.

There is a common refrain you’ll hear from a lot of social justice activists which sounds really nice and egalitarian, but actually has the potential to completely undermine the entire project of social justice.

This is the idea that oppression can’t be measured quantitatively, and we shouldn’t try to compare different levels of oppression. The notion that some people are more oppressed than others is often derided as the Oppression Olympics. (Some use this term more narrowly for cases where a discussion is derailed by debate over who has it worse—but then the real problem is the derailing, isn’t it?)

This sounds nice, because it means we don’t have to ask hard questions like, “Which is worse, sexism or racism?” or “Who is worse off, people with cancer or people with diabetes?” These are very difficult questions, and maybe they aren’t the right ones to ask—after all, there’s no reason to think that fighting racism and fighting sexism are mutually exclusive; they can in fact be complementary. Research into cancer only prevents us from doing research into diabetes if our total research budget is fixed—this is more than anything else an argument for increasing research budgets.

But we must not throw out the baby with the bathwater. Oppression is quantitative. Some kinds of oppression are clearly worse than others.

Why is this important? Because otherwise you can’t measure progress. If you have a strictly qualitative notion of oppression where it’s black-and-white, on-or-off, oppressed-or-not, then we haven’t made any progress on just about any kind of oppression. There is still racism, there is still sexism, there is still homophobia, there is still religious discrimination. Maybe these things will always exist to some extent. This makes the fight for social justice a hopeless Sisyphean task.

But in fact, that’s not true at all. We’ve made enormous progress. Unbelievably fast progress. Mind-boggling progress. For hundreds of millennia humanity made almost no progress at all, and then in the last few centuries we have suddenly leapt toward justice.

Sexism used to mean that women couldn’t own property, they couldn’t vote, they could be abused and raped with impunity—or even beaten or killed for being raped (which Saudi Arabia still does by the way). Now sexism just means that women aren’t paid as well, are underrepresented in positions of power like Congress and Fortune 500 CEOs, and they are still sometimes sexually harassed or raped—but when men are caught doing this they go to prison for years. This change happened in only about 100 years. That’s fantastic.

Racism used to mean that Black people were literally property to be bought and sold. They were slaves. They had no rights at all, they were treated like animals. They were frequently beaten to death. Now they can vote, hold office—one is President!—and racism means that our culture systematically discriminates against them, particularly in the legal system. Racism used to mean you could be lynched; now it just means that it’s a bit harder to get a job and the cops will sometimes harass you. This took only about 200 years. That’s amazing.

Homophobia used to mean that gay people were criminals. We could be sent to prison or even executed for the crime of making love in the wrong way. If we were beaten or murdered, it was our fault for being faggots. Now, homophobia means that we can’t get married in some states (and fewer all the time!), we’re depicted on TV in embarrassing stereotypes, and a lot of people say bigoted things about us. This has only taken about 50 years! That’s astonishing.

And above all, the most extreme example: Religious discrimination used to mean you could be burned at the stake for not being Catholic. It used to mean—and in some countries still does mean—that it’s illegal to believe in certain religions. Now, it means that Muslims are stereotyped because, well, to be frank, there are some really scary things about Muslim culture and some really scary people who are Muslim leaders. (Personally, I think Muslims should be more upset about Ahmadinejad and Al Qaeda than they are about being profiled in airports.) It means that we atheists are annoyed by “In God We Trust”, but we’re no longer burned at the stake. This has taken longer, more like 500 years. But even though it took a long time, I’m going to go out on a limb and say that this progress is wonderful.

Obviously, there’s a lot more progress remaining to be made on all these issues, and others—like economic inequality, ableism, nationalism, and animal rights—but the point is that we have made a lot of progress already. Things are better than they used to be—a lot better—and keeping this in mind will help us preserve the hope and dedication necessary to make things even better still.

If you think that oppression is either-or, on-or-off, you can’t celebrate this progress, and as a result the whole fight seems hopeless. Why bother, when it’s always been on, and will probably never be off? But we started with oppression that was absolutely horrific, and now it’s considerably milder. That’s real progress. At least within the First World we have gone from 90% oppressed to 25% oppressed, and we can bring it down to 10% or 1% or 0.1% or even 0.01%. Those aren’t just numbers, those are the lives of millions of people. As democracy spreads worldwide and poverty is eradicated, oppression declines. Step by step, social changes are made, whether by protest marches or forward-thinking politicians or even by lawyers and lobbyists (they aren’t all corrupt).

And indeed, a four-year-old Black girl with a mental disability living in Ghana whose entire family’s income is $3 a day is more oppressed than I am, and not only do I have no qualms about saying that, it would feel deeply unseemly to deny it. I am not totally unoppressed—I am a bisexual atheist with chronic migraines and depression in a country that is suspicious of atheists, systematically discriminates against LGBT people, and does not make proper accommodations for chronic disorders, particularly mental ones. But I am far less oppressed, and that little girl (she does exist, though I know not her name) could be made much less oppressed than she is even by relatively simple interventions (like a basic income). In order to make her fully and totally unoppressed, we would need such a radical restructuring of human society that I honestly can’t really imagine what it would look like. Maybe something like The Culture? Even then as Iain Banks imagines it, there is inequality between those within The Culture and those outside it, and there have been wars like the Idiran-Culture War which killed billions, and among those trillions of people on thousands of vast orbital habitats someone, somewhere is probably making a speciesist remark. Yet I can state unequivocally that life in The Culture would be better than my life here now, which is better than the life of that poor disabled girl in Ghana.

To be fair, we can’t actually put a precise number on it—though many economists try, and one of my goals is to convince them to improve their methods so that they stop using willingness-to-pay and instead try to actually measure utility by something like QALY. A precise number would help, actually—it would allow us to do cost-benefit analyses to decide where to focus our efforts. But while we don’t need a precise number to tell when we are making progress, we do need to acknowledge that there are degrees of oppression, some worse than others.

Oppression is quantitative. And our goal should be minimizing that quantity.

How do we measure happiness?

JDN 2457028 EST 20:33.

No, really, I’m asking. I strongly encourage my readers to offer in the comments any ideas they have about the measurement of happiness in the real world; this has been a stumbling block in one of my ongoing research projects.

In one sense the measurement of happiness—or more formally utility—is absolutely fundamental to economics; in another it’s something most economists are astonishingly afraid of even trying to do.

The basic question of economics has nothing to do with money, and is really only incidentally related to “scarce resources” or “the production of goods” (though many textbooks will define economics in this way—apparently implying that a post-scarcity economy is not an economy). The basic question of economics is really this: How do we make people happy?

This must always be the goal in any economic decision, and if we lose sight of that fact we can make some truly awful decisions. Other goals may work sometimes, but they inevitably fail: If you conceive of the goal as “maximize GDP”, then you’ll try to do any policy that will increase the amount of production, even if that production comes at the expense of stress, injury, disease, or pollution. (And doesn’t that sound awfully familiar, particularly here in the US? 40% of Americans report their jobs as “very stressful” or “extremely stressful”.) If you were to conceive of the goal as “maximize the amount of money”, you’d print money as fast as possible and end up with hyperinflation and total economic collapse à la Zimbabwe. If you were to conceive of the goal as “maximize human life”, you’d support methods of increasing population to the point where we had a hundred billion people whose lives were barely worth living. Even if you were to conceive of the goal as “save as many lives as possible”, you’d find yourself investing in whatever would extend lifespan even if it meant enormous pain and suffering—which is a major problem in end-of-life care around the world. No, there is one goal and one goal only: Maximize happiness.

I suppose technically it should be “maximize utility”, but those are in fact basically the same thing as long as “happiness” is broadly conceived as eudaimonia—the joy of a life well-lived—and not a narrow concept of just adding up pleasure and subtracting out pain. The goal is not to maximize the quantity of dopamine and endorphins in your brain; the goal is to achieve a world where people are safe from danger, free to express themselves, with friends and family who love them, who participate in a world that is just and peaceful. We do not want merely the illusion of these things—we want to actually have them. So let me be clear that this is what I mean when I say “maximize happiness”.

The challenge, therefore, is how we figure out if we are doing that. Things like money and GDP are easy to measure; but how do you measure happiness?

Early economists like Adam Smith and John Stuart Mill tried to deal with this question, and while they were not very successful I think they deserve credit for recognizing its importance and trying to resolve it. But sometime around the rise of modern neoclassical economics, economists gave up on the project and instead sought a narrower task, to measure preferences.

This is often called technically ordinal utility, as opposed to cardinal utility; but this terminology obscures the fundamental distinction. Cardinal utility is actual utility; ordinal utility is just preferences.

(The notion that cardinal utility is defined “up to a linear transformation” is really an eminently trivial observation, and it shows just how little physics the physics-envious economists really understand. All we’re talking about here is units of measurement—the same distance is 10.0 inches or 25.4 centimeters, so is distance only defined “up to a linear transformation”? It’s sometimes argued that there is no clear zero—like Fahrenheit and Celsius—but actually it’s pretty clear to me that there is: Zero utility is not existing. So there you go, now you have Kelvin.)

Preferences are a bit easier to measure than happiness, but not by as much as most economists seem to think. If you imagine a small number of options, you can just put them in order from most to least preferred and there you go; and we could imagine asking someone to do that, or—the technique of revealed preference—use the choices they make to infer their preferences by assuming that when given the choice of X and Y, choosing X means you prefer X to Y.

Like much of neoclassical theory, this sounds good in principle and utterly collapses when applied to the real world. Above all: How many options do you have? It’s not easy to say, but the number is definitely huge—and both of those facts pose serious problems for a theory of preferences.

The fact that it’s not easy to say means that we don’t have a well-defined set of choices; even if Y is theoretically on the table, people might not realize it, or they might not see that it’s better even though it actually is. Much of our cognitive effort in any decision is actually spent narrowing the decision space—when deciding who to date or where to go to college or even what groceries to buy, simply generating a list of viable options involves a great deal of effort and extremely complex computation. If you have a true utility function, you can satisfice—choosing the first option that is above a certain threshold—or engage in constrained optimization—choosing whether to continue searching or accept your current choice based on how good it is. Under preference theory, there is no such “how good it is” and no such thresholds. You either search forever or choose a cutoff arbitrarily.
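The distinction can be shown in a few lines. A satisficer needs only a cardinal score and a threshold; the option names and utility numbers below are purely hypothetical illustrations:

```python
def satisfice(options, utility, threshold):
    """Take the first option whose utility clears the threshold.

    With only an ordinal ranking there is no "threshold" to clear,
    so this stopping rule cannot even be stated.
    """
    for opt in options:
        if utility(opt) >= threshold:
            return opt
    return None  # searched everything without meeting the threshold

# Hypothetical cardinal utilities for an apartment search:
utilities = {"okay apartment": 6.0, "good apartment": 8.5, "great apartment": 9.5}
choice = satisfice(["okay apartment", "good apartment", "great apartment"],
                   utilities.get, threshold=8.0)
print(choice)  # "good apartment": first option above the bar, search stops there
```

Note that the searcher stops at “good apartment” without ever evaluating “great apartment”: a cardinal threshold lets you trade search effort against quality, which a bare preference ordering cannot do.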

Even if we could decide how many options there are in any given choice, in order for this to form a complete guide for human behavior we would need an enormous amount of information. Suppose there are 10 different items I could have or not have; then there are 10! = 3.6 million possible preference orderings. If there were 100 items, there would be 100! = 9e157 possible orderings. It won’t do simply to decide on each item whether I’d like to have it or not. Some things are complements: I prefer to have shoes, but I probably prefer to have $100 and no shoes at all rather than $50 and just a left shoe. Other things are substitutes: I generally prefer eating either a bowl of spaghetti or a pizza, rather than both at the same time. No, the combinations matter, and that means that we have an exponentially increasing decision space every time we add a new option. If there really is no more structure to preferences than this, we have an absurd computational task to make even the most basic decisions.
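The counts above are easy to verify, and the bundle count shows the further explosion once combinations matter:

```python
import math

# Orderings of individual items, as in the text:
print(math.factorial(10))            # 3628800 (~3.6 million)
print(f"{math.factorial(100):.0e}")  # ~9e+157

# Once bundles (combinations of items) matter, 10 items yield 2**10
# possible bundles, and a pure preference ordering must rank ALL of them:
bundles = 2 ** 10
print(bundles)  # 1024 bundles, hence 1024! possible orderings

# A cardinal utility function instead needs one number per item, plus
# interaction terms only where goods actually complement or substitute.
```

This is the computational point: the preference-ordering representation grows factorially in the number of bundles, while a utility representation grows roughly linearly in the number of items.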

This is in fact most likely why we have happiness in the first place. Happiness did not emerge from a vacuum; it evolved by natural selection. Why make an organism have feelings? Why make it care about things? Wouldn’t it be easier to just hard-code a list of decisions it should make? No, on the contrary, it would be exponentially more complex. Utility exists precisely because it is more efficient for an organism to like or dislike things by certain amounts rather than trying to define arbitrary preference orderings. Adding a new item means assigning it an emotional value and then slotting it in, instead of comparing it to every single other possibility.

To illustrate this: I like Coke more than I like Pepsi. (Let the flame wars begin?) I also like getting massages more than I like being stabbed. (I imagine less controversy on this point.) But the difference in my mind between massages and stabbings is an awful lot larger than the difference between Coke and Pepsi. Yet according to preference theory (“ordinal utility”), that difference is not meaningful; instead I have to say that I prefer the pair “drink Pepsi and get a massage” to the pair “drink Coke and get stabbed”. There’s no such thing as “a little better” or “a lot worse”; there is only what I prefer over what I do not prefer, and since these can be assigned arbitrarily there is an impossible computational task before me to make even the most basic decisions.

Real utility also allows you to make decisions under risk, to decide when it’s worth taking a chance. Is a 50% chance of $100 worth giving up a guaranteed $50? Probably. Is a 50% chance of $10 million worth giving up a guaranteed $5 million? Not for me. Maybe for Bill Gates. How do I make that decision? It’s not about what I prefer—I do in fact prefer $10 million to $5 million. It’s about how much difference there is in terms of my real happiness—$5 million is almost as good as $10 million, but $100 is a lot better than $50. My marginal utility of wealth—as I discussed in my post on progressive taxation—is a lot steeper at $50 than it is at $5 million. There’s actually a way to use revealed preferences under risk to estimate true (“cardinal”) utility, developed by Von Neumann and Morgenstern. In fact they proved a remarkably strong theorem: If you don’t have a cardinal utility function that you’re maximizing, you can’t make rational decisions under risk. (In fact many of our risk decisions clearly aren’t rational, because we aren’t actually maximizing an expected utility; what we’re actually doing is something more like cumulative prospect theory, the leading cognitive economic theory of risk decisions. We overrespond to extreme but improbable events—like lightning strikes and terrorist attacks—and underrespond to moderate but probable events—like heart attacks and car crashes. We play the lottery but still buy health insurance. We fear Ebola—which has never killed a single American—but not influenza—which kills 10,000 Americans every year.)
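The $50-versus-$5-million asymmetry falls out of any concave utility function. As a sketch, I use logarithmic utility and a starting wealth of $10,000—both hypothetical choices, not anything the text commits to—and compute the certainty equivalent of each gamble:

```python
import math

def certainty_equivalent(prize, p=0.5, wealth=10_000.0):
    """Sure amount with the same expected log-utility as a p-chance at `prize`.

    Log utility is a standard stand-in for diminishing marginal utility
    of wealth; the $10,000 baseline wealth is an illustrative assumption.
    """
    eu = p * math.log(wealth + prize) + (1 - p) * math.log(wealth)
    return math.exp(eu) - wealth

ce_small = certainty_equivalent(100)         # 50% chance of $100
ce_big = certainty_equivalent(10_000_000)    # 50% chance of $10 million
print(round(ce_small))  # ~50: the small gamble is worth nearly its expected value
print(round(ce_big))    # ~306386: far below the guaranteed $5 million
```

For small stakes the certainty equivalent is almost the expected value, so taking the chance is nearly costless; for stakes that dwarf your wealth, the certainty equivalent collapses, so the guaranteed $5 million wins easily. A much wealthier person (raise `wealth` to billions) sees both gambles as small stakes, which is why Bill Gates might take the coin flip.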

A lot of economists would argue that it’s “unscientific”—Kenneth Arrow said “impossible”—to assign this sort of cardinal distance between our choices. But assigning distances between preferences is something we do all the time. Amazon.com lets us rate products on a 5-star scale, and very few people send in error reports saying that cardinal utility is meaningless and only preference orderings exist. In 2000 I would have said “I like Gore best, Nader is almost as good, and Bush is pretty awful; but of course they’re all a lot better than the Fascist Party.” If we had simply been able to express those feelings on the 2000 ballot according to a range vote, either Nader would have won and the United States would now have a three-party system (and possibly a nationalized banking system!), or Gore would have won and we would be a decade ahead of where we currently are in preventing and mitigating global warming. Either one of these things would benefit millions of people.

This is extremely important because of another thing that Arrow said was “impossible”—namely, “Arrow’s Impossibility Theorem”. It should be called Arrow’s Range Voting Theorem, because simply by restricting preferences to a well-defined utility and allowing people to make range votes according to that utility, we can fulfill all the requirements that are supposedly “impossible”. The theorem doesn’t say—as it is commonly paraphrased—that there is no fair voting system; it says that range voting is the only fair voting system. A better claim is that there is no perfect voting system, which is true if you mean that no system can guarantee that your strategically best vote accurately reflects your true beliefs. The Myerson-Satterthwaite Theorem is then the proper theorem to use; if you could design a voting system that would force you to reveal your beliefs, you could design a market auction that would force you to reveal your optimal price. But the least expressive way to vote in a range vote is to pick your favorite and give them 100% while giving everyone else 0%—which is identical to our current plurality vote system. The worst-case scenario in range voting is our current system.
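A toy electorate makes the contrast concrete. The bloc sizes and 0–100 scores below are made up, loosely modeled on the 2000 example; they are not real polling data:

```python
# Hypothetical electorate: 48 Gore-first voters who also like Nader,
# 5 Nader-first voters, and 47 Bush-first voters.
voters = (
    [{"Gore": 100, "Nader": 90, "Bush": 20}] * 48
    + [{"Nader": 100, "Gore": 85, "Bush": 10}] * 5
    + [{"Bush": 100, "Gore": 15, "Nader": 25}] * 47
)

def plurality_winner(voters):
    """Each voter names only a favorite; most favorites wins."""
    tally = {}
    for v in voters:
        favorite = max(v, key=v.get)
        tally[favorite] = tally.get(favorite, 0) + 1
    return max(tally, key=tally.get)

def range_winner(voters):
    """Each voter scores every candidate; highest total score wins."""
    totals = {}
    for v in voters:
        for candidate, score in v.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get)

print(plurality_winner(voters), range_winner(voters))  # Gore Nader
```

Under plurality only favorites count, so the broad near-consensus behind Nader is invisible; under range voting the full intensity of preferences is counted and the consensus candidate wins. And if every voter strategically scores their favorite 100 and everyone else 0, the range vote degenerates into exactly the plurality outcome, as the text says.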

But the fact that utility exists and matters, unfortunately doesn’t tell us how to measure it. The current state-of-the-art in economics is what’s called “willingness-to-pay”, where we arrange (or observe) decisions people make involving money and try to assign dollar values to each of their choices. This is how you get disturbing calculations like “the lives lost due to air pollution are worth $10.2 billion.”

Why are these calculations disturbing? Because they have the whole thing backwards—people aren’t valuable because they are worth money; money is valuable because it helps people. It’s also really bizarre because it has to be adjusted for inflation. Finally—and this is the point that far too few people appreciate—the value of a dollar is not constant across people. Because different people have different marginal utilities of wealth, something that I would only be willing to pay $1000 for, Bill Gates might be willing to pay $1 million for—and a child in Africa might only be willing to pay $10, because that is all he has to spend. This makes “willingness-to-pay” a basically meaningless concept unless we specify whose wealth is being spent.

Utility, on the other hand, might differ between people—but, at least in principle, it can still be added up between them on the same scale. The problem is that “in principle” part: How do we actually measure it?

So far, the best I’ve come up with is to borrow from public health policy and use the QALY, or quality-adjusted life year. By asking people macabre questions like “What is the maximum number of years of your life you would give up to not have a severe migraine every day?” (I’d say about 20—that’s where I feel ambivalent. At 10 I definitely would; at 30 I definitely wouldn’t.) or “What chance of total paralysis would you take in order to avoid being paralyzed from the waist down?” (I’d say about 20%.) we assign utility values: 80 years of migraines is worth giving up 20 years to avoid, so chronic migraine is a quality of life factor of 0.75. Total paralysis is 5 times as bad as paralysis from the waist down, so if waist-down paralysis is a quality of life factor of 0.90 then total paralysis is 0.50.
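The arithmetic behind those two weights is worth spelling out. The 0.90 weight for waist-down paralysis is an assumed figure (as in the text), and the elicitation answers are my own:

```python
# Time trade-off: giving up 20 of 80 years to escape daily migraines means
# a year with migraines is worth (80 - 20) / 80 of a healthy year.
migraine_weight = (80 - 20) / 80
print(migraine_weight)  # 0.75

# Standard gamble: indifferent between certain waist-down paralysis and a
# 20% risk of total paralysis (with an 80% chance of full recovery).
# Given an assumed waist-down weight of 0.90:
#   0.90 = p * q_total + (1 - p) * 1.0   =>   q_total = (0.90 - 0.80) / 0.20
waist_down = 0.90
p = 0.20
total_paralysis = (waist_down - (1 - p)) / p
print(round(total_paralysis, 2))  # 0.5
```

The first method converts years traded into a quality weight; the second converts a risk you would accept into one. Both recover the figures in the text: chronic migraine at 0.75, total paralysis at 0.50 (five times the 0.10 utility loss of waist-down paralysis).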

You can probably already see that there are lots of problems: What if people don’t agree? What if due to framing effects the same person gives different answers to slightly different phrasing? Some conditions will directly bias our judgments—depression being the obvious example. How many years of your life would you give up to not be depressed? Suicide means some people say all of them. How well do we really know our preferences on these sorts of decisions, given that most of them are decisions we will never have to make? It’s difficult enough to make the actual decisions in our lives, let alone hypothetical decisions we’ve never encountered.

Another problem is often suggested as well: How do we apply this methodology outside questions of health? Does it really make sense to ask you how many years of your life drinking Coke or driving your car is worth?

Well, actually… it better, because you make that sort of decision all the time. You drive instead of staying home, because you value where you’re going more than the risk of dying in a car accident. You drive instead of walking because getting there on time is worth that additional risk as well. You eat foods you know aren’t good for you because you think the taste is worth the cost. Indeed, most of us aren’t making most of these decisions very well—maybe you shouldn’t actually drive or drink that Coke. But in order to know that, we need to know how many years of your life a Coke is worth.

As a very rough estimate, I figure you can convert from willingness-to-pay to QALY by dividing by your annual consumption spending. Say you spend annually about $20,000—pretty typical for a First World individual. Then $1 is worth about 50 microQALY, or about 26 quality-adjusted life-minutes. Now suppose you are in Third World poverty; your consumption might be only $200 a year, so $1 becomes worth 5 milliQALY, or 1.8 quality-adjusted life-days. The very richest individuals might spend as much as $10 million on consumption, so $1 to them is only worth 100 nanoQALY, or 3 quality-adjusted life-seconds.
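The conversion is simple enough to check directly—one dollar is worth 1/(annual consumption) of a quality-adjusted life year:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60
DAYS_PER_YEAR = 365.25
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def qaly_per_dollar(annual_consumption):
    """QALY value of one marginal dollar at a given consumption level."""
    return 1.0 / annual_consumption

# $20,000/year: 50 microQALY per dollar, about 26 quality-adjusted minutes
print(round(qaly_per_dollar(20_000) * MINUTES_PER_YEAR, 1))      # 26.3
# $200/year: 5 milliQALY per dollar, about 1.8 quality-adjusted days
print(round(qaly_per_dollar(200) * DAYS_PER_YEAR, 1))            # 1.8
# $10,000,000/year: 100 nanoQALY per dollar, about 3 quality-adjusted seconds
print(round(qaly_per_dollar(10_000_000) * SECONDS_PER_YEAR, 1))  # 3.2
```

The three figures in the text all check out, which also makes the marginal-utility point vivid: the same dollar buys five thousand times as much quality-adjusted life for someone living on $200 a year as for someone living on $20,000.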

That’s an extremely rough estimate, of course; it assumes you are in perfect health, that all your time is equally valuable, and that all your purchasing decisions are optimized so that what you pay matches your marginal utility. Don’t take it too literally; based on the above estimate, an hour to you is worth about $2.30, so it would be worth your while to work for even $3 an hour. Here’s a simple correction we should probably make: if only a third of your time is really usable for work, you should expect at least $6.90 an hour—and hey, that’s a little less than the US minimum wage. So I think we’re in the right order of magnitude, but the details have a long way to go.

So let’s hear it, readers: How do you think we can best measure happiness?