What if you couldn’t own land?

JDN 2457145 EDT 20:49.

In today’s post we’re somewhere on the socialism scale near The Guess Who, but not quite all the way to John Lennon. I’d like to question one of the fundamental tenets of modern capitalism, though not the basic concept of private ownership itself:

What if you couldn’t own land?

Many things that you can own were more-or-less straightforwardly created by someone. A car, a computer, a television, a pair of shoes; for today let’s even take for granted intellectual property like books, movies, and songs; at least those things (“things”) were actually made by someone.

But land? We’re talking about chunks of the Earth here. They were here billions of years before us, and in all probability will be here billions of years after we’re gone. There’s no need to incentivize its creation; the vast majority of land was already here and did not need to be created. (I do have to say “the vast majority”, because in places like Japan, Hong Kong, and the Netherlands real estate has become so scarce that people do literally build land out into the sea. But this is something like 0.0001% of the world’s land.)

What we want to incentivize is land development; we want it to be profitable to build buildings and irrigate deserts, and yes, even cut down forests sometimes (though then there should be a carbon tax with credits for forested land to ensure that there isn’t too much incentive). Yet our current property tax system doesn’t do this very well; if you build bigger buildings, you end up paying more property taxes. Yes, you may also make some profit on the buildings—but it’s risky, and you may not get enough benefit to justify the added property taxes.

Moreover, we want to allocate land—we want some way of deciding who is allowed to use what land where and when (and perhaps why). Allowing land to be bought and sold is one way to do that, but it is not the only way.

Indeed, land ownership suffers from a couple of truly glaring flaws as an allocation system:

1. It creates self-perpetuating inequality. Because land grows in value over time (due to population growth and urbanization, among other things), those who currently own land end up getting an ever-growing quantity of wealth while those who do not own land do not, and very likely end up having to pay ever-growing rents to the landlords. (I like calling them “landlords”; it really drives home the fact that our landholding system is still basically the same as it was under feudalism.) In fact, the recent rise in the share of income that goes to owners of capital rather than workers is almost entirely attributable to the rise in the price of real estate. As that post rightly recognizes, this does nothing to undermine Piketty’s central message of rising inequality due to capital income (pace The Washington Post); it merely tells us to focus on real estate instead of other forms of capital.

2. It has no non-arbitrary allocation. If we want to decide who owns a car, we can ask questions like, “Who built it? Did someone buy it from them? Did they pay a fair price?”; if we want to decide who owns a book, we can ask questions like, “Who wrote it? Did they sell it to a publisher? What was the royalty rate?” That is, there is a clear original owner, and there is a sense of whether the transfer of ownership can be considered fair. But if we want to decide who owns a chunk of land, basically all we can ask is, “What does the deed say?” The owner is the owner because they are the owner; there’s no sense in which that ownership is fair. We certainly can’t go back to the original creation of the land, because that was due to natural forces gigayears ago. If we keep tracing the ownership backward, we will eventually end up with some guy (almost certainly a man, a White man in fact) with a gun who pointed that gun at other people and said, “This is mine.” This is true of basically all the land in the world (aside from those little bits of Japan and such); it was already there, and the only reason someone got to own it was because they said so and had a bigger gun. And a flag, perhaps: “Do you have a flag?”

I suppose, in theory at least, there are a few ways of allocating land which seem less arbitrary: One would be to give everyone an equal amount. But this is practically very difficult: What do you do when the population changes? If you have 2% annual population growth, do you carve off 2% of everybody’s lot each year? Another would be to let people squat land, and automatically own the land that they live on—but again practical difficulties quickly become enormous. In any case, these two methods bear about as much resemblance to our actual allocation of land as a squirrel does to a Tyrannosaurus.

So, what else might we use? The system that makes the most sense to me is that we would own all land as a society. In practical terms this would mean that all land is Federal land, and if you want to use it for something, you need to pay rent to the government. There are many different ways the government could set the rent, but the most sensible might be to charge a flat rate per hectare regardless of where the land is or what it’s being used for, because that would maximize the incentive to develop the land. It would also make the burden fall entirely on whoever holds the land, because the supply of land is perfectly inelastic, meaning that you can’t change the quantity you supply in response to the price, because you aren’t making it; it’s just already sitting there.

Of course, this idea is obviously politically impossible in our current environment—or indeed any foreseeable political environment. I’m just fantasizing here, right?

Well, not quite. There is one thing we could do that would be economically quite similar to government-only land ownership; it’s called a land tax. The idea is incredibly simple: you just collect a flat tax per hectare of land. Economists have known that a land tax is efficient at providing revenue and reducing inequality since at least Adam Smith. So maybe ownership of land isn’t actually foundational to capitalism, after all; maybe we’ve just never fully gotten over feudalism. (I basically agree with Adam Smith, and for doing so I am often called a socialist.) The beautiful thing about a land tax is that it has a tax incidence in which the owners of the land end up bearing the full brunt of the tax.

Tax incidence is one of the most important things to understand in economics; it would be on my list of the top ten economic principles that people should learn. We often have fierce political debates over who will actually write the check: Should employers pay the health insurance premium, or should employees? Will buyers pay sales tax, or sellers? Should we tax corporate profits or personal capital gains?

Please understand that I am not exaggerating when I say that these sorts of questions are totally irrelevant. It simply does not matter who actually writes the check; what matters is who bears the cost. Making the employer pay the health insurance premium doesn’t make the slightest difference if all they’re going to do is cut wages by the exact same amount. You can see the irrelevance of the fact that sellers pay sales tax every time you walk into a store—you always end up paying the price plus the tax, don’t you? (I found that the base price of most items was the same between Long Beach and Ann Arbor, but my total expenditure was always 3% more because of the 9% sales tax versus the 6%.) How do we determine who actually pays the tax? It depends on the elasticity—how easily can you change your behavior in order to avoid the tax? Can you find a different job because the health insurance premiums are too high? No? Then you’re probably paying that premium, even if your employer writes the check. If you can find a new job whenever you want, your employer might have to pay it for you even if you write the check.

The incidence of corporate taxes and taxes on capital gains is even more complicated, because they could affect the behavior of corporations in many different ways; indeed, many economists argue that the corporate tax simply results in higher unemployment or lower wages for workers. I don’t think that’s actually true, but I honestly can’t rule it out completely, precisely because corporate taxes are so complicated. You need to know all sorts of things about the structure of stock markets, and the freedom of trade, and the mobility of labor… it’s a complete and total mess.

It’s because of tax incidence that a land tax makes so much sense; there’s no way for the landowner to escape it, other than giving up the land entirely. In particular, they can’t charge more for rent without being out-competed (unless landowners are really good at colluding—which might be true for large developers, but not individual landlords). Their elasticity is so low that they’re forced to bear the full cost of the tax.
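To make that rule of thumb concrete, here is a tiny sketch (my own illustration, not from the original post) of the standard first-order approximation: the share of a small per-unit tax that falls on buyers is roughly the supply elasticity divided by the sum of the two elasticities, taken in absolute value.

```python
# Standard first-pass tax incidence rule: buyers' share ~ E_supply / (E_supply + E_demand).
# Elasticities are in absolute value; this is the textbook linear approximation for a small tax.

def tax_incidence(elasticity_supply: float, elasticity_demand: float) -> tuple:
    """Return (buyer_share, seller_share) of a small per-unit tax, rounded for readability."""
    total = elasticity_supply + elasticity_demand
    buyer_share = elasticity_supply / total
    return round(buyer_share, 3), round(1.0 - buyer_share, 3)

# Land: supply is essentially perfectly inelastic, so the landowner bears everything.
print(tax_incidence(elasticity_supply=0.0, elasticity_demand=1.0))   # (0.0, 1.0)

# A typical consumer good: supply more elastic than demand, so buyers bear most of the tax.
print(tax_incidence(elasticity_supply=2.0, elasticity_demand=0.5))   # (0.8, 0.2)
```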

If the land tax were high enough, it could eliminate the automatic growth in wealth that comes from holding land, thereby reducing long-run inequality dramatically. The revenue could be used for my other favorite fiscal policy, the basic income—and real estate is a big enough part of our nation’s wealth that it’s actually entirely realistic to fund an $8,000 per person per year basic income entirely on land tax revenue. The total value of US land is about $14 trillion, and an $8,000 basic income for 320 million people would cost about $2.6 trillion; that’s only about 19%. You’d actually want to make it a flat tax per hectare, so how much would that be? Total US land area is about 9 million square kilometers, so spreading $2.6 trillion across all of it comes to about $289,000 per square kilometer, or $2,890 per hectare. (Only about 60% of US land is privately owned at present, and there’s no sense taxing the land the government already owns, so strictly you’d need a somewhat higher rate, more like $4,800 per hectare; but let’s stick with the round figure for illustration.) If you own a hectare—which is bigger than most single-family lots—you’d only pay $2,890 per year in land tax, well within what most middle-class families could handle. But if you own 290,000 acres (that’s about 117,000 hectares) like Jeff Bezos, you’re paying $338 million per year. Since Jeff Bezos has about $38 billion in net wealth, he can actually afford to pay that ($338 million per year is less than one percent of his net wealth), though he might consider selling off some of the land to avoid the taxes, which is exactly the sort of incentive we wanted to create.
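For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope script (mine, not part of the original post); it starts from the exact $2.56 trillion rather than the rounded $2.6 trillion, so its outputs come out a touch lower than the figures quoted above.

```python
# Back-of-the-envelope check of the land-tax-funded basic income, using the rounded
# figures from the text. Small discrepancies versus the quoted numbers are just rounding.
population        = 320e6        # people
basic_income      = 8_000        # dollars per person per year
us_land_km2       = 9e6          # total US land area, in square kilometers
hectares_per_km2  = 100
acres_per_hectare = 2.471

revenue_needed  = population * basic_income             # ~$2.56 trillion
tax_per_km2     = revenue_needed / us_land_km2          # ~$285,000 per square kilometer
tax_per_hectare = tax_per_km2 / hectares_per_km2        # ~$2,850 per hectare

bezos_hectares = 290_000 / acres_per_hectare            # ~117,000 hectares
bezos_tax      = bezos_hectares * tax_per_hectare       # ~$334 million per year

print(f"revenue needed:       ${revenue_needed / 1e12:.2f} trillion")
print(f"tax per hectare:      ${tax_per_hectare:,.0f}")
print(f"tax on 290,000 acres: ${bezos_tax / 1e6:,.0f} million per year")
```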

Indeed, when I contemplate this policy I’m struck by the fact that it has basically no downside—usually in public policy you’re forced to make hard compromises and tradeoffs, but a land tax plus basic income is a system that carries almost no downsides at all. It won’t disincentivize investment, it won’t disincentivize working, it will dramatically reduce inequality, it will save the government a great deal of money on social welfare spending, and best of all it will eliminate poverty immediately and forever. The only people it would hurt at all are the extremely rich, and they wouldn’t even be hurt very much, while it would benefit millions of people, including some of the most needy.

Why aren’t we doing this already!?

The Cognitive Science of Morality Part II: Molly Crockett

JDN 2457140 EDT 20:16.

This weekend has been very busy for me, so this post is going to be shorter than most—which is probably a good thing anyway, since my posts tend to run a bit long.

In an earlier post I discussed the Weinberg Cognitive Science Conference and my favorite speaker in the lineup, Joshua Greene. After a brief interlude from Capybara Day, it’s now time to talk about my second-favorite speaker, Molly Crockett. (Is it just me, or does the name “Molly” somehow seem incongruous with a person of such prestige?)

Molly Crockett is a neuroeconomist, though you’d never hear her say that. She doesn’t think of herself as an economist at all, but purely as a neuroscientist. I suspect this is because when she hears the word “economist” she thinks of only mainstream neoclassical economists, and she doesn’t want to be associated with such things.

Still, what she studies is clearly neuroeconomics—I in fact first learned of her work by reading the textbook Neuroeconomics, though I really got interested in her work after watching her TED Talk. It’s one of the better TED talks (they put out so many of them now that the quality is mixed at best); she talks about news reporting on neuroscience, how it is invariably ridiculous and sensationalist. This is particularly frustrating because of how amazing and important neuroscience actually is.

I could almost forgive the sensationalism if they were talking about something that’s actually fantastically boring, like, say, tax codes, or financial regulations. Of course, even then there is the Oliver Effect: You can hide a lot of evil by putting it in something boring. But Dodd-Frank is 2300 pages long; I read an earlier draft that was only (“only”) 600 pages, and it literally contained a three-page section explaining how to define the word “bank”. (Assuming direct proportionality, I would infer that there is now a twelve-page section defining the word “bank”. Hopefully not?) It doesn’t get a whole lot more snoozeworthy than that. So if you must be a bit sensationalist in order to get people to see why eliminating margin requirements and the swaps pushout rule are terrible, terrible ideas, so be it.

But neuroscience is not boring, and so sensationalism only means that news outlets are making up exciting things that aren’t true instead of saying the actually true things that are incredibly exciting.

Here, let me express without sensationalism what Molly Crockett does for a living: Molly Crockett experimentally determines how psychoactive drugs modulate moral judgments. The effects she observes are small, but they are real; and since these experiments are done using small doses for a short period of time, if these effects scale up they could be profound. This is the basic research component—when it comes to technological fruition it will be literally A Clockwork Orange. But it may be A Clockwork Orange in the best possible way: It could be, at last, a medical cure for psychopathy, a pill to make us not just happier or healthier, but better. We are not there yet by any means, but this is clearly the first step: Molly Crockett is to A Clockwork Orange roughly as Michael Faraday is to the Internet.

In one of the experiments she talked about at the conference, Crockett found that serotonin reuptake inhibitors enhance harm aversion. Serotonin reuptake inhibitors are very commonly used drugs—you are likely familiar with one called Prozac. So basically what this study means is that Prozac makes people more averse to causing pain in themselves or others. It doesn’t necessarily make them more altruistic, let alone more ethical; but it does make them more averse to causing pain. (To see the difference, imagine a 19th-century field surgeon dealing with a wounded soldier; there is no anesthetic, but an amputation must be made. Sometimes being ethical requires causing pain.)

The experiment is actually what Crockett calls “the honest Milgram Experiment”; under Milgram, the experimenters told their subjects they would be causing shocks, but no actual shocks were administered. Under Crockett, the shocks are absolutely 100% real (though they are restricted to a much lower voltage, of course). Subjects are given competing offers that pair an amount of money with a number of shocks, to be delivered either to themselves or to the other subject. They decide how much it’s worth to them to bear the shocks—or to make someone else bear them. It’s a classic willingness-to-pay paradigm, applied to the Milgram Experiment.

What Crockett found did not surprise me, nor do I expect it will surprise you if you imagine yourself in the same place; but it would totally knock the socks off of any neoclassical economist. People are much more willing to bear shocks for money than they are to give shocks for money. They are what Crockett terms hyper-altruistic; I would say that they are exhibiting an apparent solidarity coefficient greater than 1. They seem to be valuing others more than they value themselves.
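To make “solidarity coefficient greater than 1” concrete, here is a toy calculation; the dollar amounts are invented purely for illustration and are not Crockett’s data. The idea is to compare the minimum payment someone demands per shock to themselves with the minimum they demand per shock to a stranger.

```python
# Toy illustration of an "apparent solidarity coefficient" -- the dollar amounts here are
# invented, NOT Crockett's data. The coefficient is how much weight someone appears to put
# on a stranger's pain relative to their own.

price_per_shock_self  = 0.20   # hypothetical: minimum $ demanded per shock to oneself
price_per_shock_other = 0.40   # hypothetical: minimum $ demanded per shock to a stranger

solidarity_coefficient = price_per_shock_other / price_per_shock_self
print(solidarity_coefficient)  # 2.0 > 1: the stranger's pain is weighted more than one's own
```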

Normally I’d say that this makes no sense at all—why would you value some random stranger more than yourself? Valuing them equally, perhaps; and obviously only a psychopath would value them not at all; but valuing them more? And there’s no way you can actually live this way in your daily life; you’d give away all your possessions and perhaps even starve yourself to death. (I guess maybe Jesus lived that way.) But Crockett came up with a model that explains it pretty well: We are morally risk-averse. If we knew we were dealing with someone very strong who had no trouble dealing with shocks, we’d be willing to shock them a fairly large amount. But we might actually be dealing with someone very vulnerable who would suffer greatly; and we don’t want to take that chance.

I think there’s some truth to that. But her model leaves something else out that I think is quite important: We are also averse to unfairness. We don’t like the idea of raising one person while lowering another. (Obviously not so averse as to never do it—we do it all the time—but without a compelling reason we consider it morally unjustified.) So if the two subjects are in roughly the same condition (being two undergrads at Oxford, they probably are), then helping one while hurting the other is likely to create inequality where none previously existed. But if you hurt yourself in order to help yourself, no such inequality is created; all you do is raise yourself up, provided that you do believe that the money is good enough to be worth the shocks. It’s actually quite Rawlsian; lifting one person up while not affecting the other is exactly the sort of inequality you’re allowed to create according to the Difference Principle.

There’s also the fact that the subjects can’t communicate; I think if I could make a deal to share the money afterward, I’d feel better about shocking someone more in order to get us both more money. So perhaps with communication people would actually be willing to shock others more. (And the sensationalist headline would of course be: “Talking makes people hurt each other.”)

But all of these ideas are things that could be tested in future experiments! And maybe I’ll do those experiments someday, or Crockett, or one of her students. And with clever experimental paradigms we might find out all sorts of things about how the human mind works, how moral intuitions are structured, and ultimately how chemical interventions can actually change human moral behavior. The potential for both good and evil is so huge, it’s both wondrous and terrifying—but can you deny that it is exciting?

And that’s not even getting into the Basic Fact of Cognitive Science, which undermines all concepts of afterlife and theistic religion. I already talked about it before—as the sort of thing that I sort of wish I could say when I introduce myself as a cognitive scientist—but I think it bears repeating.

As Patricia Churchland said on the Colbert Report: Colbert asked, “Are you saying I have no soul?” and she answered, “Yes.” I actually prefer Daniel Dennett’s formulation: “Yes, we have a soul, but it’s made of lots of tiny robots.”

We don’t have a magical, supernatural soul (whatever that means); we don’t have an immortal soul that will rise into Heaven or be reincarnated in someone else. But we do have something worth preserving: We have minds that are capable of consciousness. We love and hate, exalt and suffer, remember and imagine, understand and wonder. And yes, we are born and we die. Once the unique electrochemical pattern that defines your consciousness is sufficiently degraded, you are gone. Nothing remains of what you were—except perhaps the memories of others, or things you have created. But even this legacy is unlikely to last forever. One day it is likely that all of us—and everything we know, and everything we have built, from the Great Pyramids to Hamlet to Beethoven’s Ninth to Principia Mathematica to the US Interstate Highway System—will be gone. I don’t have any consolation to offer you on that point; I can’t promise you that anything will survive a thousand years, much less a million. There is a chance—even a chance that at some point in the distant future, whatever humanity has become will find a way to reverse the entropic decay of the universe itself—but nothing remotely like a guarantee. In all probability you, and I, and all of this will be gone someday, and that is absolutely terrifying.

But it is also undeniably true. The fundamental link between the mind and the brain is one of the basic facts of cognitive science; indeed I like to call it The Basic Fact of Cognitive Science. We know specifically which kinds of brain damage will make you unable to form memories, comprehend language, speak language (a totally different area), see, hear, smell, feel anger, integrate emotions with logic… do I need to go on? Everything that you are is done by your brain—because you are your brain.

Now why can’t the science journalists write about that? Instead we get “The Simple Trick That Can Boost Your Confidence Immediately” and “When it Comes to Picking Art, Men & Women Just Don’t See Eye to Eye.” HuffPo is particularly awful of course; the New York Times is better, but still hardly as good as one might like. They keep trying to find ways to make it exciting—but so rarely seem to grasp how exciting it already is.

Happy Capybara Day! Or the power of culture

JDN 2457131 EDT 14:33.

Did you celebrate Capybara Day yesterday? You didn’t? Why not? We weren’t able to find any actual capybaras this year, but maybe next year we’ll be able to plan better and find a capybara at a zoo; unfortunately the nearest zoo with a capybara appears to be in Maryland. But where would we be without a capybara to consult annually on the stock market?

Right now you are probably rather confused, perhaps wondering if I’ve gone completely insane. This is because Capybara Day is a holiday of my own invention, one which only a handful of people have even heard about.

But if you think we’d never have a holiday so bizarre, think again: For all I did was make some slight modifications to Groundhog Day. Instead of consulting a groundhog about the weather every February 2, I proposed that we consult a capybara about the stock market every April 17. And if you think you have some reason why groundhogs are better at predicting the weather (perhaps because they at least have some vague notion of what weather is) than capybaras are at predicting the stock market (since they have no concept of money or numbers), think about this: Capybara Day could produce extremely accurate predictions, provided only that people actually believed it. The prophecy of rising or falling stock prices could very easily become self-fulfilling. If it were a cultural habit of ours to consult capybaras about the stock market, capybaras would become good predictors of the stock market.
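Here is a toy simulation of that self-fulfilling dynamic (my own sketch, with made-up parameters): the capybara’s “prediction” is just a coin flip, but once enough traders act on it, it starts being right far more often than chance.

```python
# Toy model of a self-fulfilling prediction: a random signal plus traders who act on it.
# All parameters are made up for illustration.
import random

def capybara_accuracy(years=10_000, believer_share=0.6, signal_impact=0.05, noise_sd=0.02):
    correct = 0
    for _ in range(years):
        signal = random.choice([+1, -1])                  # the capybara "predicts" up or down
        demand = believer_share * signal                  # believers trade on the prediction
        price_change = demand * signal_impact + random.gauss(0.0, noise_sd)
        if (price_change > 0) == (signal > 0):
            correct += 1
    return correct / years

print(capybara_accuracy(believer_share=0.0))   # ~0.5: nobody believes, so the capybara is useless
print(capybara_accuracy(believer_share=0.6))   # well above 0.5: enough believers make the prophecy come true
```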

That might seem a bit far-fetched, but think about this: Why is there a January Effect? (To be fair, some researchers argue that there isn’t, and the apparent correlation between higher stock prices and the month of January is simply an illusion, perhaps the result of data overfitting.)

But I think it probably is real, and moreover has some very obvious reasons behind it. In this I’m in agreement with Richard Thaler, a founder of cognitive economics who wrote about such anomalies in the 1980s. December is a time when two very culturally-important events occur: The end of the year, during which many contracts end, profits are assessed, and tax liabilities are determined; and Christmas, the greatest surge of consumer spending and consumer debt.

The first effect means that corporations are very likely to liquidate assets—particularly assets that are running at a loss—in order to minimize their tax liabilities for the year, which will drive down prices. The second effect means that consumers are in search of financing for extravagant gift purchases, and those who don’t run up credit cards may instead sell off stocks. This is if anything a more rational way of dealing with the credit constraint, since interest rates on credit cards are typically far in excess of stock returns. But this surge of selling due to credit constraints further depresses prices.

In January, things return to normal; assets are repurchased, debt is repaid. This brings prices back up to where they were, which results in a higher than normal return for January.

Neoclassical economists are loath to admit that such a seasonal effect could exist, because it violates their concept of how markets work—and to be fair, the January Effect is actually weak enough to be somewhat ambiguous. But actually it doesn’t take much deviation from neoclassical models to explain the effect: Tax policies and credit constraints are basically enough to do it, so you don’t even need to go that far into understanding human behavior. It’s perfectly rational to behave this way given the distortions that are created by taxes and credit limits, and the arbitrage opportunity is one that you can only take advantage of if you have large amounts of credit and aren’t worried about minimizing your tax liabilities. It’s important to remember just how strong the assumptions of models like CAPM truly are; in addition to the usual infinite identical psychopaths, CAPM assumes there are no taxes, no transaction costs, and unlimited access to credit. I’d say it’s amazing that it works at all, but actually, it doesn’t—check out this graph of risk versus return and tell me if you think CAPM is actually giving us any information at all about how stock markets behave. It frankly looks like you could have drawn a random line through a scatter plot and gotten just as good a fit. Knowing how strong its assumptions are, we would not expect CAPM to work—and sure enough, it doesn’t.
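For reference, the prediction CAPM actually makes is very simple; this snippet is just the textbook security market line, with a risk-free rate and market return I’ve chosen purely for illustration. Expected return should rise linearly with beta, which is precisely the straight line that real risk-return scatter plots refuse to follow.

```python
# The CAPM security market line: E[r] = r_f + beta * (E[r_m] - r_f).
# The 2% risk-free rate and 7% expected market return are illustrative assumptions.

def capm_expected_return(beta: float, risk_free: float = 0.02, market_return: float = 0.07) -> float:
    return risk_free + beta * (market_return - risk_free)

for beta in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"beta = {beta:.1f}  ->  predicted return = {capm_expected_return(beta):.1%}")
```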

Of course, that leaves the question of why our tax policy would be structured in this way—why make the year end on December 31 instead of some other date? And for that, you need to go back through hundreds of years of history, the Gregorian calendar, which in turn was influenced by Christianity, and before that the Julian calendar—in other words, culture.

Culture is one of the most powerful forces that influences human behavior—and also one of the strangest and least-understood. Economic theory is basically silent on the matter of culture. Typically it is ignored entirely, assumed to be irrelevant against the economic incentives that are the true drivers of human action. (There’s a peculiar emotion many neoclassical economists express that I can best describe as self-righteous cynicism, the attitude that we alone—i.e., economists—understand that human beings are not the noble and altruistic creatures many imagine us to be, nor beings of art and culture, but simply cold, calculating machines whose true motives are reducible to profit incentives—and all who think otherwise are being foolish and naïve; true enlightenment is understanding that human beings are infinite identical psychopaths. This is the attitude epitomized by the economist who once sent me an email with “altruism” written in scare quotes.)

Occasionally culture will be invoked as an external (in jargon, exogenous) force, to explain some aspect of human behavior that is otherwise so totally irrational that even invoking nonsensical preferences won’t make it go away. When a suicide bomber blows himself up in a crowd of people, it’s really pretty hard to explain that in terms of rational profit incentives—though I have seen it tried. (It could be self-interest at a larger scale, like families or nations—but then, isn’t that just the tribal paradigm I’ve been arguing for all along?)

But culture doesn’t just motivate us to do extreme or wildly irrational things. It motivates us all the time, often in quite beneficial ways; we wait in line, hold doors for people walking behind us, tip waiters who serve us, and vote in elections, not because anyone pressures us directly to do so (unlike say Australia we do not have compulsory voting) but because it’s what we feel we ought to do. There is a sense of altruism—and altruism provides the ultimate justification for why it is right to do these things—but the primary motivator in most cases is culture—that’s what people do, and are expected to do, around here.

Indeed, even when there is a direct incentive against behaving a certain way—like criminal penalties against theft—the probability of actually suffering a direct penalty is generally so low that it really can’t be our primary motivation. Instead, the reason we don’t cheat and steal is that we think we shouldn’t, and a major part of why we think we shouldn’t is that we have cultural norms against it.

We can actually observe differences in cultural norms across countries in the laboratory. In this 2008 study by Massimo Castro (PDF), comparing British and Italian people playing an economic game called the public goods game, in which you can pay a cost yourself to benefit the group as a whole, it was found not only that people were less willing to benefit groups of foreigners than groups of compatriots, but also that British people were overall more generous than Italian people. This 2010 study by Gachter et al. (actually Joshua Greene talked about it last week) compared how people play the game in various cities and found three basic patterns: In Western European and American cities such as Zurich, Copenhagen, and Boston, cooperation started out high and remained high throughout; people were just cooperative in general. In Asian cities such as Chengdu and Seoul, cooperation started out low, but if people were punished for not cooperating, cooperation would improve over time, eventually reaching about the same level as in the highly cooperative cities. And in Mediterranean cities such as Istanbul, Athens, and Riyadh, cooperation started low and stayed low—even when people could be punished for not cooperating, nobody actually punished them. (These patterns are broadly consistent with the World Bank corruption ratings of these regions, by the way; Western Europe shows very low corruption, while Asia and the Mediterranean show high corruption. Of course this isn’t all that’s going on—Asia isn’t actually much less corrupt than the Middle East, though this experiment might make you think so.)

Interestingly, these cultural patterns showed Melbourne as behaving more like an Asian city than a Western European one—perhaps being in the Pacific has worn off on Australia more than they realize.
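For readers who haven’t seen it, here is the basic payoff structure of a public goods game; the parameters are a standard textbook choice of mine for illustration, not the exact ones used in the studies above. Contributing always helps the group, but free-riding always pays better individually.

```python
# A standard public goods game: each player keeps whatever they don't contribute;
# the common pot is multiplied and split evenly among all players. Parameters are illustrative.

def payoffs(contributions, endowment=20, multiplier=1.6):
    pot_share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + pot_share for c in contributions]

print(payoffs([20, 20, 20, 20]))  # full cooperation: everyone ends with 32
print(payoffs([0, 0, 0, 0]))      # full defection: everyone ends with 20
print(payoffs([0, 20, 20, 20]))   # one free-rider gets 44 while the contributors get only 24
```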

This is very preliminary, cutting-edge research I’m talking about, so be careful about drawing too many conclusions. But in general we’ve begun to find some fairly clear cultural differences in economic behavior across different societies. While this would not be at all surprising to a sociologist or anthropologist, it’s the sort of thing that economists have insisted for years is impossible.

This is the frontier of cognitive economics, in my opinion. We know that culture is a very powerful motivator of our behavior, and it is time for us to understand how it works—and then, how it can be changed. We know that culture can be changed—cultural norms do change over time, sometimes remarkably rapidly; but we have only a faint notion of how or why they change. Changing culture has the power to do things that simply changing policy cannot, however; policy requires enforcement, and when the enforcement is removed the behavior will often disappear. But if a cultural norm can be imparted, it could sustain itself for a thousand years without any government action at all.

The cognitive science of morality part I: Joshua Greene

JDN 2457124 EDT 15:33.

Thursday and Friday of this past week there was a short symposium at the University of Michigan called “The Cognitive Science of Moral Minds“, sponsored by the Weinberg Cognitive Science Institute, a new research institute at Michigan. It was founded by a former investment banker, because those are the only people who actually have money these days—and Michigan, like most universities, will pretty much take money from whoever offers it (including naming buildings after those people and not even changing the name after it’s revealed that the money was obtained in a $550-million fraud scheme, for which he was fined $200 million, because that’s apparently how our so-called “justice” system so-called “works”. A hint for the SEC: If the fine paid divided by the amount defrauded would be a sensible rate for a marginal income tax, that’s not a punishment). So far as I know Weinberg isn’t a white-collar criminal the way Wyly is, so that’s good at least. Still, why are we relying upon investment bankers to decide what science institutes we’ll found?

The Weinberg Institute was founded just last year. Yes, four years after I got my bachelor’s degree in cognitive science from Michigan, they decide to actually make that a full institute instead of an awkward submajor of the psychology department. Oh, and did I mention how neither the psychology nor the economics department would support my thesis research in behavioral economics, but then they called in Daniel Kahneman as the keynote speaker at my graduation? Yeah, sometimes I think I’m a little too cutting-edge for my own good.

The symposium had Joshua Greene of Harvard and Molly Crockett of Oxford, both of whom I’d been hoping to meet for a few years now. I finally got the chance! (It also had Peter Railton—likely not hard to get, seeing as he works in our own philosophy department, but he still has some fairly interesting ideas—and some law professor I’d never heard of named John Mikhail, whose talk was really boring.) I asked Greene how I could get in on his research, and he said I should do a PhD at Harvard… which is something I’ve been trying to convince Harvard to let me do for three years now—they keep not letting me in.

Anyway… the symposium was actually quite good, and the topic of moral cognition is incredibly fascinating and of course incredibly relevant to Infinite Identical Psychopaths.

Let’s start with Greene’s work. His basic research program is studying what our brains are doing when we try to resolve moral dilemmas. Normally I’m not a huge fan of fMRI research, because it’s just so damn coarse; I like to point out that it is basically equivalent to trying to understand how your computer works by running a voltmeter over the motherboard. But Greene does a good job of not over-interpreting results and combining careful experimental methods to really get a better sense of what’s going on.

There are basically two standard moral dilemmas people like to use in moral cognition research, and frankly I think this is a problem, because they don’t only differ in the intended way but also in many other ways; also once you’ve heard them, they no longer surprise you, so if you ever are a subject in one moral cognition experiment, it’s going to color your responses in any others from then on. I think we should come up with a much more extensive list of dilemmas that differ in various different dimensions; this would also make it much less likely for someone to already have seen them all before. A few weeks ago I made a Facebook post proposing a new dilemma of this sort, and the response, while an entirely unscientific poll, at least vaguely suggested that something may be wrong with the way Greene and others interpret the two standard dilemmas.

What are the standard dilemmas? They are called the trolley dilemma and the footbridge dilemma respectively; collectively they are trolley problems, of which there are several—but most aren’t actually used in moral cognition research for some reason.

In the trolley dilemma, there is, well, a trolley, hurtling down a track on which, for whatever reason, five people are trapped. There is another track, and you can flip a switch to divert the trolley onto that track, which will save those five people; but alas there is one other person trapped on that other track, who will now die. Do you flip the switch? Like most people, I say “Yes”.

In the footbridge dilemma, the trolley is still hurtling toward five people, but now you are above the track, standing on a footbridge beside an extremely fat man. The man is so fat, in fact, that if you push him in front of the trolley he will cause it to derail before it hits the five other people. You yourself are not fat enough to achieve this. Do you push the fat man? Like most people, I say “No.”

I actually hope you weren’t familiar with those dilemmas before, because your first impression is really useful to what I’m about to say next: Aren’t those really weird?

I mean, really weird, particularly the second one—what sort of man is fat enough to stop a trolley, yet nonetheless light enough or precariously balanced enough that I can reliably push him off a footbridge? These sorts of dilemmas are shades of the plugged-in-violinist; well, if the Society of Violin Enthusiasts ever does that, I suppose you can unplug the violinist—but what the hell does that have to do with abortion? (At the end of this post I’ve made a little appendix about the plugged-in-violinist and why it fails so miserably as an argument, but since it’s tangential I’ll move on for now.)

Even the first trolley problem, which seems a paragon of logical causality by comparison, is actually pretty bizarre. What are these people doing on the tracks? Why can’t they get off the tracks? Why is the trolley careening toward them? Why can’t the trolley be stopped some other way? Why is nobody on the trolley? What is this switch doing here, and why am I able to switch tracks despite having no knowledge, expertise or authority in trolley traffic control? Where are the proper traffic controllers? (There’s actually a pretty great sequence in Stargate: Atlantis where they have exactly this conversation.)

Now, if your goal is only to understand the core processes of human moral reasoning, using bizarre scenarios actually makes some sense; you can precisely control the variables—though, as I already said, they really don’t usually—and see what exactly it is that makes us decide right from wrong. Would you do it for five? No? What about ten? What about fifty? Just what is the marginal utility of pushing a fat man off a footbridge? What if you could flip a switch to drop him through a trapdoor instead of pushing him? (Actually Greene did do that one, and the result is that more people do it than would push him, but not as many as would flip the switch to shift the track.) You’d probably do it if he willingly agreed, right? What if you had to pay his family $100,000 in life insurance as part of the deal? Does it matter if it’s your money or someone else’s? Does it matter how much you have to pay his family? $1,000,000? $1,000? Only $10? If he only needs $1 of enticement, is that as good as giving free consent?

You can go the other way as well: So you’d flip the switch for five? What about three? What about two? Okay, you strict act-utilitarian you: Would you do it for only one? Would you flip a coin because the expected marginal utility of two random strangers is equal? You wouldn’t, would you? So now your intervention does mean something, even if you think it’s less important than maximizing the number of lives saved. What if it were 10,000,001 lives versus 10,000,000 lives? Would you nuke a slightly smaller city to save a slightly larger one? Does it matter to you which country the cities are in? Should it matter?

Greene’s account is basically the standard one, which is that the reason we won’t push the fat man off the footbridge is that we have an intense emotional reaction to physically manhandling someone, but in the case of flipping the switch we don’t have that reaction, so our minds are clearer and we can simply rationally assess that five lives matter more than one. Greene maintains that this emotional response is irrational, an atavistic holdover from our evolutionary history, and we would make society better by suppressing it and going with the “rational”, (act-)utilitarian response. (I know he knows the difference between act-utilitarian and rule-utilitarian, because he has a PhD in philosophy. Why he didn’t mention it in the lecture, I cannot say.)

He does make a pretty good case for that, including the fMRIs showing that emotion centers light up a lot more for the footbridge dilemma than for the trolley dilemma; but I must say, I’m really not quite convinced.

Does flipping the switch to drop him through a trapdoor yield more support because it’s emotionally more distant? Or because it makes a bit more sense? We’ve solved the “Why can I push him hard enough?” problem, albeit not the “How is he heavy enough to stop a trolley?” problem.

I’ve also thought about ways to make the gruesome manhandling happen but nonetheless make more logical sense, and the best I’ve come up with is what we might call the lion dilemma: There is a hungry lion about to attack a group of five children and eat them all. You are standing on a ridge above, where the lion can’t easily get to you; if he eats the kids you’ll easily escape. Beside you is a fat man who weighs as much as the five children combined. If you push him off the ridge, he’ll be injured and unable to run, so the lion will attack him first, and then after eating him the lion will no longer be hungry and will leave the children alone. You yourself aren’t fat enough to make this work, however; you only weigh as much as two of the kids, not all five. You don’t have any weapons to kill the lion or anyone you could call for help, but you are sure you can push the fat man off the ridge quickly enough. Do you push the fat man off the ridge? I think I do—as did most of my friends in my aforementioned totally unscientific Facebook poll—though I’m not as sure of that as I was about flipping the switch. Yet nobody can deny the physicality of my action; not only am I pushing him just as before, he’s not going to be merely run over by a trolley, he’s going to be mauled and eaten by a lion. Of course, I might actually try something else, like yelling, “Run, kids!” and sliding down with the fat man to try to wrestle the lion together; and again we can certainly ask what the seven of us are doing out here unarmed and alone with lions about. But given the choice between the kids being eaten, myself and three of the kids being eaten, or the fat man being eaten, the last one does actually seem like the least-bad option.

Another good one, actually by the same Judith Thomson of plugged-in-violinist fame (for once her dilemma actually makes some sense; seriously, read A Defense of Abortion and you’ll swear she was writing it on psilocybin), is the transplant dilemma: You’re a doctor in a hospital where there are five patients dying of different organ failures—two kidneys, one liver, one heart, and one lung, let’s say. You are one of the greatest transplant surgeons of all time, and there is no doubt in your mind that if you had a viable organ for each of them, you could save their lives—but you don’t. Yet as it so happens, a young man is visiting town and came to the hospital after severely breaking his leg in a skateboarding accident. He is otherwise in perfect health, and what’s more, he’s an organ donor and actually a match for all five of your dying patients. You could quietly take him into the surgical wing, give him a little too much anesthesia “by accident” as you operate on his leg, and then take his organs and save all five other patients. Nobody would ever know. Do you do it? Of course you don’t, you’re not a monster. But… you could save five by killing one, right? Is it just your irrational emotional aversion to cutting people open? No, you’re a surgeon—and I think you’ll be happy to know that actual surgeons agree that this is not the sort of thing they should be doing, despite the fact that they obviously have no problem cutting people open for the greater good all the time. The aversion to harming your own patient may come from (or be the source of) the Hippocratic Oath—are we prepared to say that the Hippocratic Oath is irrational?

I also came up with another really interesting one I’ll call the philanthropist assassin dilemma. One day, as you are walking past a dark alley, a shady figure pops out and makes you an offer: If you take this little vial of cyanide and pour it in the coffee of that man across the street while he’s in the bathroom, a donation of $100,000 will be made to UNICEF. If you refuse, the shady character will keep the $100,000 for himself. Never mind the weirdness—they’re all weird, and unlike the footbridge dilemma this one actually could happen even though it probably won’t. Assume that despite being a murderous assassin this fellow really intends to make the donation if you help him carry out this murder. $100,000 to UNICEF would probably save the lives of over a hundred children. Furthermore, you can give the empty vial back to the philanthropist assassin, and since there’s no logical connection between you and the victim, there’s basically no chance you’d ever be caught even if he is. (Also, how can you care more about your own freedom than the lives of a hundred children?) How can you justify not doing it? It’s just one man you don’t know, who apparently did something bad enough to draw the ire of philanthropist assassins, against the lives of a hundred innocent children! Yet I’m sure you share my strong intuition that you should not take the offer. It doesn’t require manhandling anybody—just a quick little pour into a cup of coffee—so that can’t be it. A hundred children! And yet I still don’t see how I could carry out this murder. Is that irrational, as Greene claims? Should we be prepared to carry out such a murder if the opportunity ever arises?

Okay, how about this one then, the white-collar criminal dilemma? You are a highly-skilled hacker, and you could hack into the accounts of a major bank and steal a few dollars from each account, gathering a total of $1 billion that you can then immediately donate to UNICEF, covering their entire operating budget for this year and possibly next year as well, saving the lives of countless children—perhaps literally millions of children. Should you do it? Honestly in this case I think maybe you should! (Maybe Sam Wyly isn’t so bad after all? He donated his stolen money to a university, which isn’t nearly as good as UNICEF… also he stole $550 million and donated $10 million, so there’s that.) But now suppose that you can only get into the system if you physically break into the bank and kill several of the guards. What are a handful of guards against millions of children? Yet you sound like a Well-Intentioned Extremist in a Hollywood blockbuster (seriously, someone should make this movie), and your action certainly doesn’t seem as unambiguously heroic as one might think of any act that saves the lives of a million children and only kills a handful of people. Why is it that I think we should lobby governments and corporations to make these donations voluntarily, even if it takes a decade longer, rather than finding someone who can steal the money by force? Children will die in the meantime! Don’t those children matter?

I don’t have a good answer, actually. Maybe Greene is right and it’s just this atavistic emotional response that prevents me from seeing that these acts would be justified. But then again, maybe it’s not—maybe there’s something more here that Greene is missing.

And that brings me back to the act-utilitarian versus rule-utilitarian distinction, which Greene ignored in his lecture. In act-utilitarian terms, obviously you save the children; it’s a no-brainer, 100 children > 1 hapless coffee-drinker and 1,000,000 children >> 10 guards. But in rule-utilitarian terms, things come out a bit different. What kind of society would we live in, if at any moment we could fear the wrath of philanthropist assassins? Right now, there’s plenty of money in the bank for anyone to steal, but what would happen to our financial system if we didn’t punish bank robbers so long as they spent the money on the right charities? All of it, or just most of it? And which charities are the right charities? What would our medical system be like if we knew that our organs might be harvested at any time so long as there were two or more available recipients? Despite these dilemmas actually being a good deal more realistic than the standard trolley problems, the act-utilitarian response still relies upon assuming that this is an exceptional circumstance which will never be heard about or occur again. Yet those are by definition precisely the sort of moral principles we can’t live our lives by.

This post has already gotten really long, so I won’t even get into Molly Crockett’s talk until a later post. I probably won’t do it as the next post either, but the one after that, because next Friday is Capybara Day (“What?” you say? Stay tuned).

Appendix: The plugged-in-violinist


What do we mean by “risk”?

JDN 2457118 EDT 20:50.

In an earlier post I talked about how, empirically, expected utility theory can’t explain the fact that we buy both insurance and lottery tickets, and how, normatively it really doesn’t make a lot of sense to buy lottery tickets precisely because of what expected utility theory says about them.

But today I’d like to talk about one of the major problems with expected utility theory, which I consider one of the major unexplored frontiers of economics: Expected utility theory treats all kinds of risk exactly the same.

In reality there are three kinds of risk: The first is what I’ll call classical risk, which is like the game of roulette; the odds are well-defined and known in advance, and you can play the game a large number of times and average out the results. This is where expected utility theory really shines; if you are dealing with classical risk, expected utility is obviously the way to go and Von Neumann and Morgenstern quite literally proved mathematically that anything else is irrational.

The second is uncertainty, a distinction which was most famously expounded by Frank Knight, an economist at the University of Chicago. (Chicago is a funny place; on the one hand they are a haven for the madness that is Austrian economics; on the other hand they have led the charge in behavioral and cognitive economics. Knight was a perfect fit, because he was a little of both.) Uncertainty is risk under ill-defined or unknown probabilities, where there is no way to play the game twice. Most real-world “risk” is actually uncertainty: Will the People’s Republic of China collapse in the 21st century? How many deaths will global warming cause? Will human beings ever colonize Mars? Is P = NP? None of those questions have known answers, nor can we clearly assign probabilities: either P = NP or it doesn’t, as a matter of mathematical fact (or, like the continuum hypothesis, it’s independent of ZFC, the most bizarre possibility of all), and it’s not as if someone is rolling dice to decide how many people global warming will kill. You can think of this in terms of “possible worlds”, though actually most modal theorists would tell you that we can’t even say that P=NP is possible (nor can we say it isn’t possible!) because, as a necessary statement, it can only be possible if it is actually true; this follows from the S5 axiom of modal logic, and you know what, even I am already bored with that sentence. Clearly there is some sense in which P=NP is possible, and if that’s not what modal logic says then so much the worse for modal logic. I am not a modal realist (not to be confused with a moral realist, which I am); I don’t think that possible worlds are real things out there somewhere. I think possibility is ultimately a statement about ignorance, and since we don’t know that P=NP is false then I contend that it is possible that it is true. Put another way, it would not be obviously irrational to place a bet that P=NP will be proved true by 2100; but if we can’t even say that it is possible, how can that be?

Anyway, that’s the mess that uncertainty puts us in, and almost everything is made of uncertainty. Expected utility theory basically falls apart under uncertainty; it doesn’t even know how to give an answer, let alone one that is correct. In reality what we usually end up doing is waving our hands and trying to assign a probability anyway—because we simply don’t know what else to do.

The third one is not one that’s usually talked about, yet I think it’s quite important; I will call it one-shot risk. The probabilities are known or at least reasonably well approximated, but you only get to play the game once. You can also generalize to few-shot risk, where you can play a small number of times, where “small” is defined relative to the probabilities involved; this is a little vaguer, but basically what I have in mind is that even though you can play more than once, you can’t play enough times to realistically expect the rarest outcomes to occur. Expected utility theory almost works on one-shot and few-shot risk, but you have to be very careful about taking it literally.

I think an example makes things clearer: Playing the lottery is a few-shot risk. You can play the lottery multiple times, yes; potentially hundreds of times, in fact. But hundreds of times is nothing compared to the 1 in 400 million chance you have of actually winning. You know that probability; it can be computed exactly from the rules of the game. But nonetheless expected utility theory runs into some serious problems here.

If we were playing a classical risk game, expected utility would obviously be right. So suppose, for example, that you know you will live one billion years, and you are offered the chance to play a game (somehow compensating for the mind-boggling levels of inflation, economic growth, transhuman transcendence, and/or total extinction that will occur during that vast expanse of time) in which each year you can either have a guaranteed $40,000 of inflation-adjusted income, or else a 99.999,999,75% chance of $39,999 of inflation-adjusted income and a 0.000,000,25% chance of $100 million in inflation-adjusted income—which will disappear at the end of the year, along with everything you bought with it, so that each year you start afresh. Should you take the second option? Absolutely not, and expected utility theory explains why; the one or two years in which you’d experience 8 QALY per year aren’t worth dropping from 4.602060 QALY per year to 4.602049 QALY per year for the other 999,999,998 years. (Can you even fathom how long that is? From here, one billion years is all the way back to the Mesoproterozoic Era, which we think is when single-celled organisms first began to reproduce sexually. The gain is to be Mitt Romney for a year or two; the loss is the value of a dollar each year, over and over again, for the entire time that has elapsed since the existence of gamete meiosis.) I think it goes without saying that this whole situation is almost unimaginably bizarre. Yet that is implicitly what we’re assuming when we use expected utility theory to assess whether you should buy lottery tickets.
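The post doesn’t state the utility function explicitly, but the QALY figures above match log base 10 of income, so here is a sketch under that assumption (my reading, not something stated in the original), checking that the guaranteed $40,000 really does win over the billion years.

```python
# Checking the billion-year example, assuming (my reading of the numbers) that
# QALY per year ~ log10 of inflation-adjusted income.
import math

def qaly(income):
    return math.log10(income)

years = 1_000_000_000
p_win = 0.0000000025          # 0.000,000,25% per year

safe_total   = years * qaly(40_000)
gamble_total = years * ((1 - p_win) * qaly(39_999) + p_win * qaly(100_000_000))

print(qaly(40_000), qaly(39_999), qaly(100_000_000))   # ~4.60206, ~4.60205, 8.0
print(f"expected QALY advantage of the safe option: {safe_total - gamble_total:,.0f}")
# positive: the guaranteed income wins, exactly as the text says
```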

The real situation is more like this: There’s one world you can end up in, and almost certainly will, in which you buy lottery tickets every year and end up with an income of $39,999 instead of $40,000. There is another world, so unlikely as to be barely worth considering, yet not totally impossible, in which you get $100 million and you are completely set for life and able to live however you want for the rest of your life. Averaging over those two worlds is a really weird thing to do; what do we even mean by doing that? You don’t experience one world 0.000,000,25% as much as the other (whereas in the billion-year scenario, that is exactly what you do); you only experience one world or the other.

In fact, it’s worse than this, because if a classical risk game is such that you can play it as many times as you want as quickly as you want, we don’t even need expected utility theory—expected money theory will do. If you can play a game where you have a 50% chance of winning $200,000 and a 50% chance of losing $50,000, which you can play up to once an hour for the next 48 hours, and you will be extended any credit necessary to cover any losses, you’d be insane not to play; the 99.9% confidence interval for your wealth at the end of the two days runs from about $850,000 to $6,180,000. While you may lose money for a while, it is vanishingly unlikely that you will end up losing more than you gain.
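A quick Monte Carlo sketch (mine, not from the post) of that 48-play game makes the point: the chance of ending the two days behind is negligible, and the central 99.9% of outcomes spans roughly the band quoted above.

```python
# Monte Carlo for the repeated game: 48 plays, each a 50/50 chance of +$200,000 or -$50,000,
# with credit extended to cover interim losses.
import random

def play_48_hours():
    return sum(200_000 if random.random() < 0.5 else -50_000 for _ in range(48))

trials = sorted(play_48_hours() for _ in range(200_000))
low    = trials[int(0.0005 * len(trials))]       # 0.05th percentile
high   = trials[int(0.9995 * len(trials)) - 1]   # 99.95th percentile
p_loss = sum(t < 0 for t in trials) / len(trials)

print(f"99.9% of outcomes fall between ${low:,} and ${high:,}")
print(f"probability of ending behind: {p_loss:.5f}")   # effectively zero
```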

Yet if you are offered the chance to play this game only once, you probably should not take it, and the reason why comes back to expected utility. If you have good access to credit you might consider it, because going $50,000 into debt is bad but not unbearably so (I did it, going to college), and gaining $200,000 might improve your life enough to justify the risk. Then the effect can be averaged over your lifetime; let's say you make $50,000 per year over 40 years. Losing $50,000 makes your average income $48,750, while gaining $200,000 makes your average income $55,000; so your QALY per year go from a guaranteed 4.70 to a 50% chance of 4.69 and a 50% chance of 4.74, which raises your expected utility from 4.70 to about 4.715.
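
Here's the same calculation spelled out, again under the assumed log-income utility from the sketch above (the tiny discrepancy with 4.715 is just rounding):

```python
from math import log10

# Same assumed utility function as before: a year at income x ≈ log10(x) QALY.
years, base_income = 40, 50_000

def avg_qaly_per_year(one_time_windfall):
    # Spread a one-time gain or loss evenly over a 40-year working life.
    return log10(base_income + one_time_windfall / years)

dont_play = avg_qaly_per_year(0)         # ~4.699
if_lose   = avg_qaly_per_year(-50_000)   # ~4.688
if_win    = avg_qaly_per_year(+200_000)  # ~4.740

print(f"Don't play:      {dont_play:.3f} QALY/year")
print(f"Play (expected): {0.5 * if_lose + 0.5 * if_win:.3f} QALY/year")
```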

But if you don't have good access to credit and your income for this year is $50,000, then losing $50,000 means losing everything you have and living in poverty or even starving to death. The benefit of raising your income by $200,000 this year isn't nearly great enough to take that chance. Your prospect goes from a guaranteed utility of 4.70 to a 50% chance of 5.30 and a 50% chance of zero, for an expected utility of only 2.65.

So expected utility theory only seems to properly apply if we can play the game enough times that the improbable events are likely to happen a few times, but not so many times that our winnings are sure to approach the average (in which case expected money will do just as well). And that's assuming we know the odds and aren't just stuck with uncertainty.

Unfortunately, I don’t have a good alternative; so far expected utility theory may actually be the best we have. But it remains deeply unsatisfying, and I like to think we’ll one day come up with something better.

How following the crowd can doom us all

JDN 2457110 EDT 21:30

Humans are nothing if not social animals. We like to follow the crowd, do what everyone else is doing—and many of us will continue to do so even if our own behavior doesn’t make sense to us. There is a very famous experiment in cognitive science that demonstrates this vividly.

People are given a very simple task to perform several times: We show you line X and lines A, B, and C. Now tell us which of A, B or C is the same length as X. Couldn’t be easier, right? But there’s a trick: seven other people are in the same room performing the same experiment, and they all say that B is the same length as X, even though you can clearly see that A is the correct answer. Do you stick with what you know, or say what everyone else is saying? Typically, you say what everyone else is saying. Over 18 trials, 75% of people followed the crowd at least once, and some people followed the crowd every single time. Some people even began to doubt their own perception, wondering if B really was the right answer—there are four lights, anyone?

Given that our behavior can be distorted by others in such simple and obvious tasks, it should be no surprise that it can be distorted even more in complex and ambiguous tasks—like those involved in finance. If everyone is buying up Beanie Babies or Tweeter stock, maybe you should too, right? Can all those people be wrong?

In fact, matters are even worse with the stock market, because it is in a sense rational to buy into a bubble if you know that other people will as well. As long as you aren't the last to buy in, you can make a lot of money that way. In speculation, you try to predict the way that other people will cause prices to move and base your decisions around that—but then everyone else is doing the same thing. Keynes called this a “beauty contest”; apparently in his day newspapers ran contests in which readers picked the most beautiful faces from a set of photos, and the winners were the readers whose picks matched the most popular choices. So you don't actually want to choose the one you think is most beautiful; you want to choose the one you think most people will think is most beautiful—or the one you think most people will think most people will think….

Our herd behavior probably made a lot more sense when we evolved it millennia ago; when most of your threats are external and human beings don't have much influence over the environment, the majority opinion is quite likely to be right, and can often give you an answer much faster than you could figure it out on your own. (If everyone else thinks a lion is hiding in the bushes, there's probably a lion hiding in the bushes—and if there is, the last thing you want is to be the only one who didn't run.) The problem arises when this tendency to follow the crowd feeds back on itself, and our behavior becomes driven not by external reality but by an attempt to predict each other's predictions of each other's predictions. Yet this is exactly how financial markets are structured.

With this in mind, the surprise is not that markets are unstable—the surprise is that markets are ever stable. I think the main reason markets ever manage price stability is actually something most economists think of as a failure of markets: price rigidity and so-called “menu costs”. If it's costly to change your price, you won't be constantly trying to adjust it to the mood of the hour (or the minute, or the microsecond); instead you'll try to tie it to the fundamental value of what you're selling, so that the price will stay close to right for a long time to come. You may get shortages in times of high demand and gluts in times of low demand, but as long as those two things roughly balance out you'll leave the price where it is. But if you can instantly and costlessly change the price however you want, you can raise it when people seem particularly interested in buying and lower it when they don't, and then people can start trying to buy when your price is low and sell when it is high. If people were completely rational and had perfect information, this arbitrage would stabilize prices—but since they're not, arbitrage attempts can over- or under-compensate, and thus result in cyclical or even chaotic changes in prices.

Our herd behavior then makes this worse, as more people buying leads to, well, more people buying, and more people selling leads to more people selling. If there were no other causes of behavior, the result would be prices that explode outward exponentially; but even with other forces trying to counteract them, prices can move suddenly and unpredictably.

If most traders are irrational or under-informed while a handful are rational and well-informed, the latter can exploit the former for enormous amounts of money; this fact is often used to argue that irrational or under-informed traders will simply drop out, but it should only take you a few moments of thought to see why that isn't necessarily true. The incentive isn't just to be well-informed but also to keep others from being well-informed. If everyone were rational and had perfect information, stock trading would be the most boring job in the world, because prices would never change except perhaps to grow with the growth rate of the overall economy. Wall Street therefore has every incentive in the world not to let that happen. And now perhaps you can see why they are so opposed to regulations that would require them to improve transparency or slow down market changes. Without the ability to deceive people about the real value of assets or trigger irrational bouts of mass buying or selling, Wall Street would make little or no money at all. Not only are markets inherently unstable by themselves; in addition we have extremely powerful individuals and institutions who are driven to ensure that this instability is never corrected.

This is why as our markets have become ever more streamlined and interconnected, instead of becoming more efficient as expected, they have actually become more unstable. They were never stable—and the gold standard made that instability worse—but despite monetary policy that has provided us with very stable inflation in the prices of real goods, the prices of assets such as stocks and real estate have continued to fluctuate wildly. Real estate isn’t as bad as stocks, again because of price rigidity—houses rarely have their values re-assessed multiple times per year, let alone multiple times per second. But real estate markets are still unstable, because of so many people trying to speculate on them. We think of real estate as a good way to make money fast—and if you’re lucky, it can be. But in a rational and efficient market, real estate would be almost as boring as stock trading; your profits would be driven entirely by population growth (increasing the demand for land without changing the supply) and the value added in construction of buildings. In fact, the population growth effect should be sapped by a land tax, and then you should only make a profit if you actually build things. Simply owning land shouldn’t be a way of making money—and the reason for this should be obvious: You’re not actually doing anything. I don’t like patent rents very much, but at least inventing new technologies is actually beneficial for society. Owning land contributes absolutely nothing, and yet it has been one of the primary means of amassing wealth for centuries and continues to be today.

But (so-called) investors and the banks and hedge funds they control have little reason to change their ways, as long as the system is set up so that they can keep profiting from the instability that they foster. Particularly when we let them keep the profits when things go well, but immediately rush to bail them out when things go badly, they have basically no incentive at all not to take maximum risk and seek maximum instability. We need a fundamentally different outlook on the proper role and structure of finance in our economy.

Fortunately one is emerging, summarized in a slogan among economically savvy liberals: Banking should be boring. (Elizabeth Warren has said this, as have Joseph Stiglitz and Paul Krugman.) And indeed it should, for all banks are supposed to be doing is channeling money from people who have it and don't need it to people who need it but don't have it. They aren't supposed to be making large profits of their own, because they aren't the ones actually adding value to the economy. Indeed it was never quite clear to me why banks should be privatized in the first place, though I guess it makes more sense than, oh, say, prisons.

Unfortunately, the majority opinion right now, at least among those who make policy, seems to be that banks don’t need to be restructured or even placed on a tighter leash; no, they need to be set free so they can work their magic again. Even otherwise reasonable, intelligent people quickly become unshakeable ideologues when it comes to the idea of raising taxes or tightening regulations. And as much as I’d like to think that it’s just a small but powerful minority of people who thinks this way, I know full well that a large proportion of Americans believe in these views and intentionally elect politicians who will act upon them.

All the more reason to break from the crowd, don’t you think?

Why did we ever privatize prisons?

JDN 2457103 EDT 10:24.

Since the Reagan administration (it’s always Reagan), the United States has undergone a spree of privatization of public services, in which services that are ordinarily performed by government agencies are instead contracted out to private companies. Enormous damage to our society has been done by this sort of privatization, from healthcare to parking meters.

This process can vary in magnitude.

The weakest form, which is relatively benign, is for the government to buy specific services like food service or equipment manufacturing from companies that already provide them to consumers. There's no particular reason for the government to make its own toothpaste or wrenches rather than buy them from corporations like Procter & Gamble and Sears. Toothpaste is toothpaste and wrenches are wrenches.

The moderate form is for the government to contract services to specific companies that may involve government-specific features like security clearances or powerful military weapons. This is already raising a lot of problems: When Northrop Grumman makes our stealth bombers, and Boeing builds our nuclear ICBMs, these are publicly-traded, for-profit corporations manufacturing some of the deadliest weapons ever created—weapons that could literally destroy human civilization in a matter of minutes. Markets don't work well in the presence of externalities, and weapons by definition are almost nothing but externalities; their entire function is to cause harm—typically, death—to people without their consent. While this violence may sometimes be justified, it must never be taken lightly; and we are right to be uncomfortable with the military-industrial complex whose shareholders profit from death and destruction. (Eisenhower tried to warn us!) Still, there are some good arguments to be made for this sort of privatization, since many of these corporations already have high-tech factories and skilled engineers that they can easily repurpose, and competitive bids between different corporations can keep the price down. (Of course, with no-bid contracts that no longer applies; and it certainly hasn't stopped us from spending nearly as much on the military as the rest of the world combined.)

What I’d really like to focus on today is the strongest form of privatization, in which basic government services are contracted out to private companies. This is what happens when you attempt to privatize soldiers, SWAT teams, and prisons—all of which the United States has done since Reagan.

I say “attempt” to privatize because in a very real sense the privatization of these services is incoherent—they are functions so basic to government that simply doing them makes you, de facto, part of the government. (Or, if done without government orders, it would be organized crime.) All you've really done by “privatizing” these services is reduce their transparency and accountability, while siphoning off a portion of the taxpayers' money as profits for shareholders.

The benefits of privatization, when they exist, are due to competition and consumer freedom. The foundation of a capitalist economy is the ability to say “I’ll take my business elsewhere.” (This is why the notion that a bank can sell your loan to someone else is the opposite of a free market; forcing you to write a check to someone you never made a contract with is antithetical to everything the free market stands for.) Actually the closest thing to a successful example of privatized government services is the United States Postal Service, which collects absolutely no tax income. They do borrow from the government and receive subsidies for some of their services—but so does General Motors. Frankly I think the Postal Service has a better claim to privatization than GM, which you may recall only exists today because of a massive government bailout with a net cost to the US government of $11 billion. All the Postal Service does differently is act as a tightly-regulated monopoly that provides high-quality service to everyone at low prices and pays good wages and pensions, all without siphoning profits to shareholders. (They really screwed up my mail forwarding lately, but they are still one of the best postal systems in the world.) It is in many ways the best of both worlds, the efficiency of capitalism with the humanity of socialism.

The Corrections Corporation of America, on the other hand, is the exact opposite, the worst of both worlds, the inefficiency of socialism with the inhumanity of capitalism. It is not simply corrupt but frankly inherently corrupt—there is simply no way you can have a for-profit prison system that isn’t corrupt. Maybe it can be made less corrupt or more corrupt, but the mere fact that shareholders are earning profits from incarcerating prisoners is fundamentally antithetical to a free and just society.

I really can’t stress this enough: Privatizing soldiers and prisons makes no sense at all. It doesn’t even make sense in a world of infinite identical psychopaths; nothing in neoclassical economic theory in any way supports these privatizations. Neoclassical theory is based upon the presumption of a stable government that enforces property rights, a government that provides as much service as necessary exactly at cost and is not attempting to maximize any notion of its own “profit”.

That's ridiculous, of course—much like the neoclassical rational agent—and more recent work in public choice theory has examined the various interest groups that act against each other in government, including lobbyists for private corporations; but public choice theory is above all a theory of government failure. It is a theory of why governments don't work as well as we would like them to—the main question is how we can suppress the influence of special interest groups to advance the public good. Privatization of prisons means creating special interest groups where none existed, making the government less directed at the public good.

Privatizing government services is often described as “reducing the size of government”, usually interpreted in the most narrow sense to mean the tax burden. But Big Government doesn’t mean you pay 22% of GDP instead of 18% of GDP; Big Government means you can be arrested and imprisoned without trial. Even using the Heritage Foundation’s metrics, the correlation between tax burden and overall freedom is positive. Tyrannical societies don’t bother with taxes; they own the oil refineries directly (Venezuela), or print money whenever they want (Zimbabwe), or build the whole society around doing what they want (North Korea).

The incarceration rate is a much better measure of a society’s freedom than the tax rate will ever be—and the US isn’t doing so well in that regard; indeed we have by some measures the highest incarceration rate in the world. Fortunately we do considerably better when it comes to things like free speech and freedom of religion—indeed we are still above average in overall freedom. Though we do imprison more of our people than China, I’m not suggesting that China has a freer society. But why do we imprison so many people?

Well, it seems to have something to do with privatization of prisons. Indeed, there is a strong correlation between the privatization of US prisons and the enormous explosion of incarceration in the United States. In fact privatized prisons don’t even reduce the tax burden, because privatization does not decrease demand and “privatized” prisons must still be funded by taxes. Prisons do not have customers who choose between different competing companies and shop for the highest quality and lowest price—prisoners go to the prison they are assigned to and they can’t leave (which is really the whole point). Even competition at the purchase end doesn’t make much sense, since the government can’t easily transfer all the prisoners to a new company. Maybe they could transfer ownership of the prison to a different company, but even then the transition costs would be substantial, and besides, there are only a handful of prison corporations that corner most of the (so-called) market.

There is simply no economic basis for privatization of prisons. Nothing in either neoclassical theory or more modern cognitive science in any way supports the idea. So the real question is: Why did we ever privatize prisons?

Basically there is only one reason: Ideology. The post-Reagan privatization spree was not actually based on economics—it was based on economic ideology. Either because they actually believed it, or by the Upton Sinclair Principle, a large number of economists adopted a radical far-right ideology holding that government basically should not exist—that the more power we give to corporations and the less to elected officials, the better off we will be.

They defended this ideology on vaguely neoclassical grounds, mumbling something about markets being more efficient; but this isn't even like cutting off the wings of the airplane because we're assuming a frictionless vacuum—it's like cutting off the engines of the airplane because we simply hate engines and are looking for any excuse to get rid of them. There is absolutely nothing in neoclassical economic theory that says it would be efficient or really beneficial in any way to privatize prisons. It was all about taking power away from the elected government and handing it over to for-profit corporations.

This is a bit of consciousness-raising I’m trying to do: Any time you hear someone say that something should be apolitical, I want you to substitute the word undemocratic. When they say that judges shouldn’t be elected so that they can be apolitical—they mean undemocratic. When they say that the Federal Reserve should be independent of politics—they mean independent of voting. They want to take decision power away from the public at large and concentrate it more in the hands of an elite. People who say this sort of thing literally do not believe in democracy.

To be fair, there may actually be good reasons to not believe in democracy, or at least to believe that democracy should be constrained by a constitution and a system of representation. Certain rights are inalienable, regardless of what the voting public may say, which is why we need a constitution that protects those rights above all else. (In theory… there’s always the PATRIOT ACT, speaking of imprisoning people without trial.) Moreover, most people are simply not interested enough—or informed enough—to vote on every single important decision the government makes. It makes sense for us to place this daily decision-making power in the hands of an elite—but it must be an elite we choose.

And yes, people often vote irrationally. One of the central problems in the United States today is that almost half the population consistently votes against rational government and their own self-interest on the basis of a misguided obsession with banning abortion, combined with a totally nonsensical folk theory of economics in which poor people are poor because they are lazy, the government inherently destroys whatever wealth it touches, and private-sector “job creators” simply hand out jobs to other people because they have extra money lying around. Then of course there's—let's face it—deep-seated bigotry toward women, racial minorities, and LGBT people. (The extreme hatred toward Obama and suspicion that he wasn't really born in the US really can't be explained any other way.) In such circumstances it may be tempting to say that we should give up on democracy and let expert technocrats take charge; but in the absence of democratic safeguards, technocracy is little more than another name for oligarchy. Maybe it's enough that the President appoints the Federal Reserve chair and the Supreme Court? I'm not so sure. Ben Bernanke definitely handled the Second Depression better than Congress did, I'll admit; but I'm not sure Alan Greenspan would have in his place, and given his babbling lately about returning to Bretton Woods I'm pretty sure Paul Volcker wouldn't have. (If you don't see what's wrong with going back to Bretton Woods, which was basically a variant of the gold standard, you should read what Krugman has to say about the gold standard.) So basically we got lucky and our monetary quasi-tyrant was relatively benevolent and wise. (Or maybe Bernanke was better because Obama reappointed him, while Reagan appointed Greenspan. Carter appointed Volcker, oddly enough; but Reagan reappointed him. It's always Reagan.) And if you could indeed ensure that tyrants would always be benevolent and wise, tyranny would be a great system—but you can't.

Democracy doesn't always lead to the best outcomes, but that's really not what it's for. Rather, democracy is for preventing the worst outcomes—no large-scale famine has ever occurred under a mature democracy, nor has any full-scale genocide. Democracies do sometimes forcibly “relocate” populations (particularly indigenous populations, as the US did under Andrew Jackson), and we should not sugar-coat that; people are forced out of their homes and many die. It could even be considered something close to genocide. But no direct and explicit mass murder of millions has ever occurred under a democratic government—no, the Nazis were not democratically elected—and that by itself is a fully sufficient argument for democracy. It could be true that democracies are economically inefficient (in fact they are more efficient), unbearably corrupt (they are less corrupt), and full of ignorant idiotic hicks (they have higher average educational attainment), and democracy would still be better simply because it prevents famine and genocide. As Churchill said, “Democracy is the worst system, except for all the others.”

Indeed, I think the central reason why American democracy isn’t working well right now is that it’s not very democratic; a two-party system with a plurality “first-past-the-post” vote is literally the worst possible voting system that can still technically be considered democracy. Any worse than that and you only have one party. If we had a range voting system (which is mathematically optimal) and say a dozen parties (they have about a dozen parties in France), people would be able to express their opinions more clearly and in more detail, with less incentive for strategic voting. We probably wouldn’t have such awful turnout at that point, and after realizing that they actually had such a strong voice, maybe people would even start educating themselves about politics in order to make better decisions.

Privatizing prisons and soldiers takes us in exactly the opposite direction: It makes our government deeply less democratic, fundamentally less accountable to voters. It hands off the power of life and death to institutions whose sole purpose for existence is their own monetary gain. We should never have done it—and we must undo it as soon as we possibly can.

In honor of Pi Day, I for one welcome our new robot overlords

JDN 2457096 EDT 16:08

Despite my preference for the Julian Day Number system, it has not escaped my attention that this weekend was Pi Day of the Century, 3/14/15. Yesterday morning we had the Moment of Pi: 3/14/15 9:26:53.58979… We arguably got an encore that evening if we allow 9:00 PM instead of 21:00.

Though perhaps it is a stereotype and/or cheesy segue, pi and associated mathematical concepts are often associated with computers and robots. Robots are an increasing part of our lives, from the industrial robots that manufacture our cars to the precision-timed satellites that provide our GPS navigation. When you want to know how to get somewhere, you pull out your pocket thinking machine and ask it to commune with the space robots who will guide you to your destination.

There are obvious upsides to these robots—they are enormously productive, and allow us to produce great quantities of useful goods at astonishingly low prices, including computers themselves, creating a positive feedback loop that has literally lowered the price of a given amount of computing power by a factor of one trillion in the latter half of the 20th century. We now very much live in the early parts of a cyberpunk future, and it is due almost entirely to the power of computer automation.

But if you know your SF you may also remember another major part of cyberpunk futures aside from their amazing technology; they also tend to be dystopias, largely because of their enormous inequality. In the cyberpunk future corporations own everything, governments are virtually irrelevant, and most individuals can barely scrape by—and that sounds all too familiar, doesn’t it? This isn’t just something SF authors made up; there really are a number of ways that computer technology can exacerbate inequality and give more power to corporations.

Why? The reason that seems to get the most attention among economists is skill-biased technological change; that’s weird because it’s almost certainly the least important. The idea is that computers can automate many routine tasks (no one disputes that part) and that routine tasks tend to be the sort of thing that uneducated workers generally do more often than educated ones (already this is looking fishy; think about accountants versus artists). But educated workers are better at using computers and the computers need people to operate them (clearly true). Hence while uneducated workers are substitutes for computers—you can use the computers instead—educated workers are complements for computers—you need programmers and engineers to make the computers work. As computers get cheaper, their substitutes also get cheaper—and thus wages for uneducated workers go down. But their complements get more valuable—and so wages for educated workers go up. Thus, we get more inequality, as high wages get higher and low wages get lower.

Or, to put it more succinctly, robots are taking our jobs. Not all our jobs—actually they’re creating jobs at the top for software programmers and electrical engineers—but a lot of our jobs, like welders and metallurgists and even nurses. As the technology improves more and more jobs will be replaced by automation.

The theory seems plausible enough—and in some form is almost certainly true—but as David Card has pointed out, this fails to explain most of the actual variation in inequality in the US and other countries. Card is one of my favorite economists; he is also famous for completely revolutionizing the economics of the minimum wage, showing that the prevailing theory that minimum wages must hurt employment simply doesn't match the empirical data.

If it were just that college education is getting more valuable, we’d see a rise in income for roughly the top 40%, since over 40% of American adults have at least an associate’s degree. But we don’t actually see that; in fact contrary to popular belief we don’t even really see it in the top 1%. The really huge increases in income for the last 40 years have been at the top 0.01%—the top 1% of 1%.

Many of the jobs that are now automated also haven't seen a fall in income; despite the fact that high-frequency trading algorithms do what stockbrokers do a thousand times better (“better” at making markets more unstable and siphoning wealth from the rest of the economy, that is), stockbrokers have seen no such loss in income. Indeed, they simply appropriate the additional income from those computer algorithms—which raises the question of why welders couldn't do the same thing. And indeed, as I'll explain in a moment, that is exactly what we must do: the robot revolution must also come with a revolution in property rights and income distribution.

No, the real reasons why technology exacerbates inequality are twofold: Patent rents and the winner-takes-all effect.

In an earlier post I already talked about the winner-takes-all effect, so I'll just briefly summarize it this time around. Under certain competitive conditions, a small fraction of individuals can reap a disproportionate share of the rewards despite being only slightly more productive than those beneath them. This often happens when we have network externalities, in which a product becomes more valuable when more people use it, thus creating a positive feedback loop that makes the products which are already successful wildly so, and consigns the products that aren't successful to obscurity.

Computer technology—more specifically, the Internet—is particularly good at creating such situations. Facebook, Google, and Amazon are all examples of companies that (1) could not exist without Internet technology and (2) depend almost entirely upon network externalities for their business model. They are the winners who take all; thousands of other software companies that were just as good or nearly so are now long forgotten. The winners are not always the same, because the system is unstable; for instance MySpace used to be much more important—and much more profitable—until Facebook came along.

But the fact that a different handful of upper-middle-class individuals can find themselves suddenly and inexplicably thrust into fame and fortune while the rest of us toil in obscurity really isn’t much comfort, now is it? While technically the rise and fall of MySpace can be called “income mobility”, it’s clearly not what we actually mean when we say we want a society with a high level of income mobility. We don’t want a society where the top 10% can by little more than chance find themselves becoming the top 0.01%; we want a society where you don’t have to be in the top 10% to live well in the first place.

Even without network externalities the Internet still nurtures winner-takes-all markets, because digital information can be copied infinitely. When it comes to sandwiches or even cars, each new one is costly to make and costly to transport; it can be more cost-effective to choose the ones that are made near you even if they are of slightly lower quality. But with books (especially e-books), video games, songs, or movies, each individual copy costs nothing to create, so why would you settle for anything but the best? This may well increase the overall quality of the content consumers get—but it also ensures that the creators of that content are in fierce winner-takes-all competition. Hence J.K. Rowling and James Cameron on the one hand, and millions of authors and independent filmmakers barely scraping by on the other. Compare a field like engineering; you probably don’t know a lot of rich and famous engineers (unless you count engineers who became CEOs like Bill Gates and Thomas Edison), but nor is there a large segment of “starving engineers” barely getting by. Though the richest engineers (CEOs excepted) are not nearly as rich as the richest authors, the typical engineer is much better off than the typical author, because engineering is not nearly as winner-takes-all.

But the main topic for today is actually patent rents. These are a greatly underappreciated segment of our economy, and they grow more important all the time. A patent rent is more or less what it sounds like; it’s the extra money you get from owning a patent on something. You can get that money either by literally renting it—charging license fees for other companies to use it—or simply by being the only company who is allowed to manufacture something, letting you sell it at monopoly prices. It’s surprisingly difficult to assess the real value of patent rents—there’s a whole literature on different econometric methods of trying to tackle this—but one thing is clear: Some of the largest, wealthiest corporations in the world are built almost entirely upon patent rents. Drug companies, R&D companies, software companies—even many manufacturing companies like Boeing and GM obtain a substantial portion of their income from patents.

What is a patent? It's a rule that says you “own” an idea, and anyone else who wants to use it has to pay you for the privilege. The very concept of owning an idea should trouble you—ideas aren't limited in number; you can easily share them with others. But now think about the fact that most of these patents are owned by corporations, not by the inventors themselves—and you'll realize that our system of property rights is built around the notion that an abstract entity can own an idea—that one idea can own another.

The rationale behind patents is that they are supposed to provide incentives for innovation—in exchange for investing the time and effort to invent something, you receive a certain amount of time where you get to monopolize that product so you can profit from it. But how long should we give you? And is this really the best way to incentivize innovation?

I contend it is not; when you look at the really important world-changing innovations, very few of them were done for patent rents, and virtually none of them were done by corporations. Jonas Salk was indignant at the suggestion he should patent the polio vaccine; it might have made him a billionaire, but only by letting thousands of children die. (To be fair, here’s a scholar arguing that he probably couldn’t have gotten the patent even if he wanted to—but going on to admit that even then the patent incentive had basically nothing to do with why penicillin and the polio vaccine were invented.)

Who landed on the moon? Hint: It wasn't Microsoft. Who built the Hubble Space Telescope? Not Sony. The Internet that made Google and Facebook possible was originally invented by DARPA. Even when corporations seem to do useful innovation, it's usually by profiting from the work of individuals: Edison's corporation stole most of its good ideas from Nikola Tesla, and by the time the Wright Brothers founded a company their most important work was already done (though at least then you could argue that they did it in order to later become rich, which they ultimately did). Universities and nonprofits brought you the laser, light-emitting diodes, fiber optics, penicillin, and the polio vaccine. Governments brought you liquid-fuel rockets, the Internet, GPS, and the microchip. Corporations brought you, uh… Viagra, the Snuggie, and Furbies. Indeed, even Google's vaunted search algorithms were originally developed at Stanford with funding from the NSF. I can think of literally zero examples of a world-changing technology that was actually invented by a corporation in order to secure a patent. I'm hesitant to say that none exist, but clearly the vast majority of seminal inventions have been created by governments and universities.

This has always been true throughout history. Rome’s fire departments were notorious for shoddy service—and wholly privately-owned—but their great aqueducts that still stand today were built as government projects. When China invented paper, turned it into money, and defended it with the Great Wall, it was all done on government funding.

The whole idea that patents are necessary for innovation is simply a lie; even the idea that patents lead to more innovation is quite hard to defend. Imagine if, instead of letting Google and Facebook patent their technology, all the money they receive in patent rents were turned into tax-funded research—is there really any doubt that the results would be better for the future of humanity? Instead of better ad-targeting algorithms we could have had better cancer treatments, or better macroeconomic models, or better spacecraft engines.

When they feel their “intellectual property” (stop and think about that phrase for a while, and it will begin to seem nonsensical) has been violated, corporations become indignant about “free-riding”; but who is really free-riding here? The people who copy music albums for free (which cost nothing to copy), or the corporations who make hundreds of billions of dollars selling zero-marginal-cost products using government-invented technology over government-funded infrastructure? (Many of these companies also continue to receive tens or hundreds of millions of dollars in subsidies every year.) In the immortal words of Barack Obama, “you didn't build that!”

Strangely, most economists seem to be supportive of patents, despite the fact that their own neoclassical models point strongly in the opposite direction. There's no logical connection between the fixed cost of inventing a technology and the monopoly rents that can be extracted from its patent. There is some connection—albeit a very weak one—between the benefits of the technology and its monopoly profits, since people are likely to be willing to pay more for more beneficial products. But most of the really great benefits are either in the form of public goods that are unenforceable even with patents (go ahead, try enforcing a patent on a space telescope against everyone who benefits from its astronomical discoveries!) or else apply to people who are so needy they can't possibly pay you (like anti-malaria drugs in Africa), so that willingness-to-pay link really doesn't get you very far.

I guess a lot of neoclassical economists still seem to believe that willingness-to-pay is actually a good measure of utility, so maybe that’s what’s going on here; if it were, we could at least say that patents are a second-best solution to incentivizing the most important research.

But even then, why settle for second-best when the best is available? Why not devote more of our society's resources to governments and universities, which have centuries of superior track record in innovation? When this is proposed the deadweight loss of taxation is always brought up, but somehow the deadweight loss of monopoly rents never seems to bother anyone. At least taxes can be designed to minimize deadweight loss—and democratic governments actually have incentives to do that; corporations have no interest whatsoever in minimizing the deadweight loss they create so long as their profit is maximized.

I’m not saying we shouldn’t have corporations at all—they are very good at one thing and one thing only, and that is manufacturing physical goods. Cars and computers should continue to be made by corporations—but their technologies are best invented by government. Will this dramatically reduce the profits of corporations? Of course—but I have difficulty seeing that as anything but a good thing.

Why am I talking so much about patents, when I said the topic was robots? Well, it's largely because of the way these patents are assigned that robots taking people's jobs becomes a bad thing. The patent is owned by the company, which is owned by the shareholders; so when the company makes more money by using robots instead of workers, the workers lose.

If, when a robot takes your job, you simply received the income produced by the robot as capital income, you'd probably be better off—you'd get paid more and you wouldn't have to work. (Of course, if you define yourself by your career or can't stand the idea of getting “handouts”, you might still be unhappy losing your job even though you still get paid for it.)

There’s a subtler problem here though; robots could have a comparative advantage without having an absolute advantage—that is, they could produce less than the workers did before, but at a much lower cost. Where it cost $5 million in wages to produce $10 million in products, it might cost only $3 million in robot maintenance to produce $9 million in products. Hence you can’t just say that we should give the extra profits to the workers; in some cases those extra profits only exist because we are no longer paying the workers.
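
The arithmetic of that example is worth laying out explicitly, because it shows exactly where the extra profit comes from (the figures are the hypothetical ones above):

```python
# Hypothetical figures from the text: the robots produce less than the workers
# did, but the firm switches anyway because the cost saving is even larger.
revenue_with_workers, wage_bill        = 10_000_000, 5_000_000
revenue_with_robots,  maintenance_bill =  9_000_000, 3_000_000

profit_with_workers = revenue_with_workers - wage_bill          # $5 million
profit_with_robots  = revenue_with_robots - maintenance_bill    # $6 million

print(f"Profit with workers: ${profit_with_workers:,}")
print(f"Profit with robots:  ${profit_with_robots:,}")
# The extra $1 million of profit exists only because the wage bill disappeared,
# not because the robots produced more; total output is actually lower than before.
```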

As a society, we still want those transactions to happen, because producing less at lower cost can still make our economy more efficient and more productive than it was before. Those displaced workers can—in theory at least—go on to other jobs where they are needed more.

The problem is that this often doesn’t happen, or it takes such a long time that workers suffer in the meantime. Hence the Luddites; they don’t want to be made obsolete even if it does ultimately make the economy more productive.

But this is where patents become important. The robots were probably invented at a university, but then a corporation took them and patented them, and is now selling them to other corporations at a monopoly price. The manufacturing company that buys the robots now has to spend more in order to use the robots, which drives their profits down unless they stop paying their workers.

If instead those robots were cheap because there were no patents and we were only paying for the manufacturing costs, the workers could be shareholders in the company and the increased efficiency would allow both the employers and the workers to make more money than before.

What if we don’t want to make the workers into shareholders who can keep their shares after they leave the company? There is a real downside here, which is that once you get your shares, why stay at the company? We call that a “golden parachute” when CEOs do it, which they do all the time; but most economists are in favor of stock-based compensation for CEOs, and once again I’m having trouble seeing why it’s okay when rich people do it but not when middle-class people do.

Another alternative would be my favorite policy, the basic income: If everyone knows they can depend on a basic income, losing your job to a robot isn’t such a terrible outcome. If the basic income is designed to grow with the economy, then the increased efficiency also raises everyone’s standard of living, as economic growth is supposed to do—instead of simply increasing the income of the top 0.01% and leaving everyone else where they were. (There is a good reason not to make the basic income track economic growth too closely, namely the business cycle; you don’t want the basic income payments to fall in a recession, because that would make the recession worse. Instead they should be smoothed out over multiple years or designed to follow a nominal GDP target, so that they continue to rise even in a recession.)
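
To make "track economic growth, but smoothly" concrete, here is one possible way to implement it; this is only a sketch, and the 5% target path and the $12,000 starting payment are placeholder assumptions of mine, not figures from anywhere in this post.

```python
# Sketch: peg the basic income to a nominal-GDP *target path* rather than to
# realized GDP, so payments keep rising on schedule even in a recession.
# The 5% target growth rate and $12,000 starting payment are placeholders.
TARGET_GROWTH = 0.05
STARTING_PAYMENT = 12_000

def basic_income(year):
    # Follow the target path; year-to-year swings in actual GDP are ignored.
    return STARTING_PAYMENT * (1 + TARGET_GROWTH) ** year

realized_growth = [0.03, -0.02, 0.01, 0.04]   # a toy recession in year 1
for year, g in enumerate(realized_growth):
    print(f"Year {year}: realized growth {g:+.0%}, basic income ${basic_income(year):,.0f}")
```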

We could also combine this with expanded unemployment insurance (explain to me again why you can’t collect unemployment if you weren’t working full-time before being laid off, even if you wanted to be or you’re a full-time student?) and active labor market policies that help people re-train and find new and better jobs. These policies also help people who are displaced for reasons other than robots making their jobs obsolete—obviously there are all sorts of market conditions that can lead to people losing their jobs, and many of these we actually want to happen, because they involve reallocating the resources of our society to more efficient ends.

Why aren’t these sorts of policies on the table? I think it’s largely because we don’t think of it in terms of distributing goods—we think of it in terms of paying for labor. Since the worker is no longer laboring, why pay them?

This sounds reasonable at first, but consider this: Why give that money to the shareholder? What did they do to earn it? All they do is own a piece of the company. They may not have contributed to the goods at all. Honestly, on a pay-for-work basis, we should be paying the robot!

If it bothers you that the worker collects dividends even when he’s not working—why doesn’t it bother you that shareholders do exactly the same thing? By definition, a shareholder is paid according to what they own, not what they do. All this reform would do is make workers into owners.

If you justify the shareholder’s wealth by his past labor, again you can do exactly the same to justify worker shares. (And as I said above, if you’re worried about the moral hazard of workers collecting shares and leaving, you should worry just as much about golden parachutes.)

You can even justify a basic income this way: You paid taxes so that you could live in a society that would protect you from losing your livelihood—and if you’re just starting out, your parents paid those taxes and you will soon enough. Theoretically there could be “welfare queens” who live their whole lives on the basic income, but empirical data shows that very few people actually want to do this, and when given opportunities most people try to find work. Indeed, even those who don’t, rarely seem to be motivated by greed (even though, capitalists tell us, “greed is good”); instead they seem to be de-motivated by learned helplessness after trying and failing for so long. They don’t actually want to sit on the couch all day and collect welfare payments; they simply don’t see how they can compete in the modern economy well enough to actually make a living from work.

One thing is certain: We need to detach income from labor. As a society we need to get over the idea that a human being’s worth is decided by the amount of work they do for corporations. We need to get over the idea that our purpose in life is a job, a career, in which our lives are defined by the work we do that can be neatly monetized. (I admit, I suffer from the same cultural blindness at times, feeling like a failure because I can’t secure the high-paying and prestigious employment I want. I feel this clear sense that my society does not value me because I am not making money, and it damages my ability to value myself.)

As robots do more and more of our work, we will need to redefine the way we live by something else, like play, or creativity, or love, or compassion. We will need to learn to see ourselves as valuable even if nothing we do ever sells for a penny to anyone else.

A basic income can help us do that; it can redefine our sense of what it means to earn money. Instead of the default being that you receive nothing because you are worthless unless you work, the default is that you receive enough to live on because you are a human being of dignity and a citizen. This is already the experience of people who have substantial amounts of capital income; they can fall back on their dividends if they ever can’t or don’t want to find employment. A basic income would turn us all into capital owners, shareholders in the centuries of established capital that has been built by our forebears in the form of roads, schools, factories, research labs, cars, airplanes, satellites, and yes—robots.

Scope neglect and the question of optimal altruism

JDN 2457090 EDT 16:15.

We're now on Eastern Daylight Time because of this bizarre tradition of shifting our time zone forward for half of the year. It's supposed to save energy, but a natural experiment in Indiana suggests it actually increases energy demand. So why do we do it? Like every ridiculous tradition (have you ever tried to explain Groundhog Day to someone from another country?), we do it because we've always done it.

This week's topic is scope neglect, one of the most pervasive—and pernicious—cognitive biases human beings face. Scope neglect raises a great many challenges, both practical and theoretical; among them is what I call the question of optimal altruism.

The question is simple to ask yet remarkably challenging to answer: How much should we be willing to sacrifice in order to benefit others? If we think of this as a number, your solidarity coefficient (s), it is the highest ratio of cost to benefit you are willing to accept: you take an action that costs you C and benefits someone else by B whenever s B > C.

This is analogous to the biological concept of relatedness (r), to which Hamilton's Rule applies: r B > C. Solidarity is the psychological analogue; instead of valuing people based on their genetic similarity to you, you value them based on… well, that's the problem.
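
Stated as a decision rule, the idea is trivial to write down; the numbers in this sketch are made up purely for illustration.

```python
def should_help(s, benefit, cost):
    """Help whenever the benefit to the other person, weighted by your
    solidarity coefficient s, exceeds the cost to you: s * B > C."""
    return s * benefit > cost

# Made-up numbers: with s = 0.1, paying $50 to give a stranger $1,000 of
# benefit passes the test, but paying $150 does not.
print(should_help(0.1, benefit=1_000, cost=50))    # True  (0.1 * 1000 = 100 > 50)
print(should_help(0.1, benefit=1_000, cost=150))   # False (100 < 150)
```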

I can easily place upper and lower bounds. The lower bound is zero: you should definitely be willing to sacrifice something to help other people—otherwise you are a psychopath. The upper bound is one: there's no point in paying more in cost than you produce in benefit, and in fact even paying the same cost to yourself as you yield in benefit for other people doesn't make a lot of sense, because it would mean that your own self-interest is meaningless, as is the fact that you understand your own needs better than the needs of others.

But beyond that, it gets a lot harder. Should it be 90%? 50%? 10%? 1%? How should it vary between friends versus family versus strangers? It's really hard to say; and this inability to decide precisely how much other people should be worth to us may be part of why we suffer scope neglect in the first place.

Scope neglect is the fact that we are not willing to expend effort or money in direct proportion to the benefit it would have. When different groups were asked how much they would be willing to donate in order to save the lives of 2,000 birds, 20,000 birds, or 200,000 birds, the answers they gave were statistically indistinguishable—always about $80. But however much a bird’s life is worth to you, shouldn’t 200,000 birds be worth, well, 200,000 times as much? In fact, more than that, because the marginal utility of wealth is decreasing, but I see no reason to think that the marginal utility of birds decreases nearly as fast.

But therein lies the problem: Usually we can’t pay 200,000 times as much. I’d feel like a horrible person if I weren’t willing to expend at least $10 or an equivalent amount of effort in order to save a bird. To save 200,000 birds that means I’d owe $2 million—and I simply don’t have $2 million.

You can get similar results to the bird experiment if you use children—though, as one might hope, the absolute numbers are a bit bigger, usually more like $500 to $1000. (And this, it turns out, is about how much it actually costs to save a child's life by a particularly efficient means, such as anti-malaria nets, de-worming, or direct cash transfer. So please, by all means, give $1000 to UNICEF or the Against Malaria Foundation. If you can't give $1000, give $100; if you can't give $100, give $10.) It doesn't much matter whether you say that the project will save 500 children, 5,000 children, or 50,000 children—people will still give about $500 to $1000. But once again, if I'm willing to spend $1000 to save a child—and I definitely am—how much should I be willing to spend to end malaria, which kills 500,000 children a year? Apparently $500 million, which I not only do not have, but almost certainly will never make cumulatively through my entire life. ($2 million, on the other hand, I almost certainly will make cumulatively—the median income of an economist is $90,000 per year, so if I work for at least 22 years with that as my average income I'll have cumulatively made $2 million. My net wealth may never be that high—though if I get better positions, or I'm lucky enough or clever enough with the stock market it might—but my cumulative income almost certainly will. Indeed, the average gain in cumulative income from a college degree is about $1 million. Because it takes time—time is money—and loans carry interest, this gives it a net present value of about $300,000.)
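
If you want to see just how quickly proportional giving outruns what anyone can actually pay, the arithmetic is a few lines (the per-unit figures are the ones quoted above):

```python
# Per-unit figures quoted above; the point is how fast proportional giving
# outruns anyone's actual budget.
per_bird, per_child, per_year_income = 10, 1_000, 90_000

print(f"200,000 birds at $10 each:       ${200_000 * per_bird:,}")    # $2,000,000
print(f"500,000 children at $1,000 each: ${500_000 * per_child:,}")   # $500,000,000
print(f"22 years at $90,000 per year:    ${22 * per_year_income:,}")  # $1,980,000
```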

But maybe scope neglect isn't such a bad thing after all. There is a very serious problem with this sort of moral dilemma: The question didn't say I would single-handedly save 200,000 birds—and indeed, that notion seems quite ridiculous. If I knew that I could actually save 200,000 birds and I were the only one who could do it, dammit, I would try to come up with that $2 million. I might not succeed, but I really would try as hard as I could.

And if I could single-handedly end malaria, I hereby vow that I would do anything it took to achieve that. Short of mass murder, anything I could do couldn’t be a higher cost to the world than malaria itself. I have no idea how I’d come up with $500 million, but I’d certainly try. Bill Gates could easily come up with that $500 million—so he did. In fact he endowed the Gates Foundation with $28 billion, and they’ve spent $1.3 billion of that on fighting malaria, saving hundreds of thousands of lives.

With this in mind, what is scope neglect really about? I think it’s about coordination. It’s not that people don’t care more about 200,000 birds than they do about 2,000; and it’s certainly not that they don’t care more about 50,000 children than they do about 500. Rather, the problem is that people don’t know how many other people are likely to donate, or how expensive the total project is likely to be; and we don’t know how much we should be willing to pay to save the life of a bird or a child.

Hence, what we basically do is give up; since we can't actually assess the marginal utility of our donation dollars, we fall back on our automatic emotional response. Our mind focuses itself on visualizing that single bird covered in oil, or that single child suffering from malaria. We then hope that the representativeness heuristic will guide us in how much to give. Or we follow social norms, and give as much as we think others would expect us to give.

While many in the effective altruism community take this to be a failing, they never actually say what we should do—they never give us a figure for how much money we should be willing to donate to save the life of a child. Instead they retreat to abstraction, saying that whatever it is we’re willing to give to save a child, we should be willing to give 50,000 times as much to save 50,000 children.

But it’s not that simple. A bigger project may attract more supporters; if the two occur in direct proportion, then constant donation is the optimal response. Since it’s probably not actually proportional, you likely should give somewhat more to causes that affect more people; but exactly how much more is an astonishingly difficult question. I really don’t blame people—or myself—for only giving a little bit more to causes with larger impact, because actually getting the right answer is so incredibly hard. This is why it’s so important that we have institutions like GiveWell and Charity Navigator which do the hard work to research the effectiveness of charities and tell us which ones we should give to.

Yet even if we can properly prioritize which charities to give to first, that still leaves the question of how much each of us should give. 1% of our income? 5%? 10%? 20%? 50%? Should we give so much that we throw ourselves into the same poverty we are trying to save others from?

In his earlier work Peter Singer seemed to think we should give so much that it throws us into poverty ourselves; he asked us to literally compare every single purchase and ask ourselves whether a year of lattes or a nicer car is worth a child’s life. Of course even he doesn’t live that way, and in his later books Singer seems to have realized this, and now recommends the far more modest standard that everyone give at least 1% of their income. (He himself gives about 33%, but he’s also very rich so he doesn’t feel it nearly as much.) I think he may have overcompensated; while if literally everyone gave at least 1% that would be more than enough to end world hunger and solve many other problems—world nominal GDP is over $70 trillion, so 1% of that is $700 billion a year—we know that this won’t happen. Some will give more, others less; most will give nothing at all. Hence I think those of us who do give should give more than our share, which is why I lean toward figures more like 5% or 10%.
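
To make the arithmetic behind giving more than our share concrete, here is a minimal sketch. The 10% participation rate is purely hypothetical; the point is only that when fewer people give, each giver’s fair share rises in inverse proportion.

    # Minimal sketch of the fair-share arithmetic above. The 10% participation
    # rate is a hypothetical assumption, not a figure from the post.

    world_gdp = 70e12        # world nominal GDP, roughly $70 trillion
    target_share = 0.01      # 1% of world income, the Singer baseline
    participation = 0.10     # suppose only 10% of income-earners actually give

    required_total = world_gdp * target_share          # ~$700 billion per year
    share_per_giver = target_share / participation     # share of income per actual giver

    print(f"Total needed per year: ${required_total / 1e9:.0f} billion")
    print(f"If only {participation:.0%} participate, each must give {share_per_giver:.0%} of income")

On that assumption, each actual giver would need to contribute 10% of their income to hit the same total, which is roughly where the 5% or 10% figures come from.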

But then, why not 50% or 90%? It is very difficult for me to argue on principle why we shouldn’t be expected to give that much. Because my income is such a small proportion of the total donations, the marginal utility of each dollar I give is basically constant—and quite high; if it takes about $1000 to save a child’s life on average, and each of these children will then live about 60 more years at about half the world average happiness, that’s about 30 QALY per $1000, or about 30 milliQALY per dollar. Even at my current level of income (incidentally about as much as I think the US basic income should be), I’m benefiting myself only about 150 microQALY per dollar—so my money is worth about 200 times as much to those children as it is to me.
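
Written out explicitly, that marginal-utility arithmetic looks like this; the inputs are the post’s own rough estimates rather than measured values.

    # The marginal-utility arithmetic above, written out. All inputs are the
    # post's rough estimates, not measured values.

    cost_per_life = 1_000        # dollars to save one child's life
    years_gained = 60            # additional life-years per child saved
    relative_quality = 0.5       # roughly half the world-average quality of life

    qaly_per_dollar_child = years_gained * relative_quality / cost_per_life
    # = 0.03 QALY per dollar = 30 milliQALY per dollar

    qaly_per_dollar_self = 150e-6    # ~150 microQALY per dollar at my income level

    print(qaly_per_dollar_child / qaly_per_dollar_self)   # ~200: my dollar is worth ~200x more to them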

So now we have to ask ourselves the really uncomfortable question: How much do I value those children, relative to myself? If I am at all honest, the value is not 1; I’m not prepared to die for someone I’ve never met 10,000 kilometers away in a nation I’ve never even visited, nor am I prepared to give away all my possessions and throw myself into the same starvation I am hoping to save them from. I value my closest friends and family approximately the same as myself, but I have to admit that I value random strangers considerably less.

Do I really value them at less than 1%, as these figures would seem to imply? I feel like a monster saying that, but maybe it really isn’t so terrible—after all, most economists seem to think that the optimal solidarity coefficient is in fact zero. Maybe we need to become more comfortable admitting that random strangers aren’t worth that much to us, simply so that we can coherently acknowledge that they aren’t worth nothing. Very few of us actually give away all our possessions, after all.

Then again, what do we mean by worth? I can say from direct experience that a single migraine causes me vastly more pain than learning about the death of 200,000 people in an earthquake in Southeast Asia. And while I gave about $100 to the relief efforts involved in that earthquake, I’ve spent considerably more on migraine treatments—thousands, once you include health insurance. But given the chance, would I be willing to suffer a migraine to prevent such an earthquake? Without hesitation. So the amount of pain we feel is not the same as the amount of money we pay, which is not the same as what we would be willing to sacrifice. I think the latter is more indicative of how much people’s lives are really worth to us—but then, what we pay is what has the most direct effect on the world.

It’s actually possible to justify not dying or selling all my possessions even if my solidarity coefficient is much higher—it just leads to some really questionable conclusions. Essentially the argument is this: I am an asset. I have what economists call “human capital”—my health, my intelligence, my education—that gives me the opportunity to affect the world in ways those children cannot. In my ideal (albeit improbable) imagined future in which I actually become President of the World Bank and have the authority to set global development policy, I myself could have a marginal impact measured in megaQALY—millions of person-years of better life. In the far more likely scenario in which I attain some mid-level research or advisory position, I could be one of thousands of people who together have that sort of impact—which still means my own marginal effect is on the order of kiloQALY. And clearly it’s true that if I died, or even if I sold all my possessions, these outcomes would no longer be possible.

The problem with that reasoning is that it’s wildly implausible to say that everyone in the First World is in this same sort of position—Peter Singer can say that, and maybe I can say that, and indeed hundreds of development economists can say that—but at least 99.9% of the First World population are not development economists, nor are they physicists likely to invent cold fusion, nor biomedical engineers likely to cure HIV, nor aid workers who distribute anti-malaria nets and polio vaccines, nor politicians who set national policy, nor diplomats who influence international relations, nor authors whose bestselling books raise worldwide consciousness. Yet I am not comfortable saying that all the world’s teachers, secretaries, airline pilots and truck drivers should give away their possessions either. (Maybe all the world’s bankers and CEOs should—or at least most of them.)

Is it enough that our economy would collapse without teachers, secretaries, airline pilots and truck drivers? But this seems rather like the fact that if everyone in the world visited the same restaurant there wouldn’t be enough room. Surely we could do without any individual teacher, any individual truck driver? If everyone gave the same proportion of their income, 1% would be more than enough to end malaria and world hunger. But we know that not everyone will give, and the job won’t get done if those of us who do give contribute only 1%.

Moreover, it’s clearly not the case that everything I spend money on makes me more likely to become a successful and influential development economist. Buying a suit and a car clearly does—it’s much easier to get good jobs that way. Even leisure can be justified to some extent, since human beings need leisure and there’s no sense burning myself out before I get anything done. But do I need both of my video game systems? Couldn’t I buy a bit less Coke Zero? What if I watched a 20-inch TV instead of a 40-inch one? I still have free time; could I get another job and donate that money? This is the sort of question Peter Singer tells us to ask ourselves, and it quickly leads to a painfully spartan existence in which most of our time is spent thinking about whether what we’re doing is advancing or damaging the cause of ending world hunger. But then the cost of that stress and cognitive effort must be included as well; and how do you optimize your own cognitive effort? You need to think about the cost of thinking about the cost of thinking… and on and on. This is why bounded rationality modeling is hard, even though it’s plainly essential to both cognitive science and computer science. (John Stuart Mill wrote an essay that resonates deeply with me about how the pressure to change the world drove him into depression, and how he learned to accept that he could still change the world even if he weren’t constantly pressuring himself to do so—and indeed he did. James Mill set out to create in his son, John Stuart Mill, the greatest philosopher in the history of the world—and I believe that he succeeded.)

Perhaps we should figure out what proportion of the world’s people are likely to give, and how much we need altogether, and then assign the amount we expect from each of them based on that? The more money you ask from each, the fewer people are likely to give. This creates an optimization problem akin to setting the price of a product under monopoly—monopolies maximize profits by carefully balancing the quantity sold with the price at which they sell, and perhaps a similar balance would allow us to maximize development aid. But wouldn’t it be better if we could simply increase the number of people who give, so that we don’t have to ask so much of those who are generous? That means tax-funded foreign aid is the way to go, because it ensures coordination. And indeed I do favor increasing foreign aid to about 1% of GDP—in the US it is currently about $50 billion, 0.3% of GDP, a little more than 1% of the Federal budget. (Most people who say we should “cut” foreign aid don’t realize how small it already is.) But foreign aid is coercive; wouldn’t it be better if people gave voluntarily?
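
To illustrate the monopoly-pricing analogy, here is a toy model in which total donations equal the amount asked of each person times the number of people willing to give at that amount. The linear willingness-to-give curve and its parameters are purely hypothetical; the point is only that the optimal ask balances size against participation, just as a monopolist balances price against quantity.

    # Toy model of the monopoly-pricing analogy above. The linear
    # willingness-to-give curve and its parameters are purely hypothetical.

    def willing_donors(ask, max_donors=1_000_000, max_ask=5_000):
        """Hypothetical: the number of willing donors falls linearly as the ask rises."""
        return max(0.0, max_donors * (1 - ask / max_ask))

    def total_donations(ask):
        return ask * willing_donors(ask)

    # Search over possible asks for the one that maximizes total donations,
    # just as a monopolist searches over prices to maximize revenue.
    best_ask = max(range(0, 5_001, 10), key=total_donations)
    print(best_ask, total_donations(best_ask))   # peaks at $2,500 per donor in this toy example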

I don’t have a simple answer. I don’t know how much other people’s lives ought to be worth to us, or what it means for our decisions once we assign that value. But I hope I’ve convinced you that this problem is an important one—and made you think a little more about scope neglect and why we have it.

Oppression is quantitative.

JDN 2457082 EDT 11:15.

Economists are often accused of assigning dollar values to everything, of being Oscar Wilde’s definition of a cynic, someone who knows the price of everything and the value of nothing. And there is more than a little truth to this, particularly among neoclassical economists; I was alarmed a few days ago to receive an email response from an economist that included the word ‘altruism’ in scare quotes as though this were somehow a problematic or unrealistic concept. (Actually, altruism is already formally modeled by biologists, and my claim that human beings are altruistic would be so uncontroversial among evolutionary biologists as to be considered trivial.)

But sometimes this accusation is based upon things economists do that are actually tremendously useful, even necessary to good policymaking: We make everything quantitative. Nothing is ever “yes” or “no” to an economist (sometimes even when it probably should be; the debate among economists in the 1960s over whether slavery is economically efficient does seem rather beside the point), but always more or less; never good or bad but always better or worse. For example, as I discussed in my post on minimum wage, the mainstream position among economists is not that minimum wage is always harmful nor that minimum wage is always beneficial, but that minimum wage is a policy with costs and benefits that on average neither increases nor decreases unemployment. The mainstream position among economists about climate policy is that we should institute either a high carbon tax or a system of cap-and-trade permits; no economist I know wants us to either do nothing and let the market decide (a position most Republicans currently seem to take) or suddenly ban coal and oil (the latter is a strawman position I’ve heard environmentalists accused of, but I’ve never actually heard advocated; even Greenpeace wants to ban offshore drilling, not oil in general).

This makes people uncomfortable, I think, because they want moral issues to be simple. They want “good guys” who are always right and “bad guys” who are always wrong. (Speaking of strawman environmentalism, a good example of this is Captain Planet, in which no one ever seems to pollute the environment in order to help people or even in order to make money; no, they simply do it because they hate clean water and baby animals.) They don’t want to talk about options that are more good or less bad; they want one option that is good and all other options that are bad.

This attitude tends to become infused with righteousness, such that anyone who disagrees is an agent of the enemy. Politics is the mind-killer, after all. If you acknowledge that there might be some downside to a policy you agree with, that’s like betraying your team.

But in reality, the failure to acknowledge downsides can lead to disaster. Problems that could have been prevented are instead ignored and denied. Getting the other side to recognize the downsides of their own policies might actually help you persuade them to your way of thinking. And appreciating that there is a continuum of possibilities that are better and worse in various ways to various degrees is what allows us to make the world a better place even as we know that it will never be perfect.

There is a common refrain you’ll hear from a lot of social justice activists which sounds really nice and egalitarian, but actually has the potential to completely undermine the entire project of social justice.

This is the idea that oppression can’t be measured quantitatively, and we shouldn’t try to compare different levels of oppression. The notion that some people are more oppressed than others is often derided as the Oppression Olympics. (Some use this term more narrowly to mean when a discussion is derailed by debate over who has it worse—but then the problem is really discussions being derailed, isn’t it?)

This sounds nice, because it means we don’t have to ask hard questions like, “Which is worse, sexism or racism?” or “Who is worse off, people with cancer or people with diabetes?” These are very difficult questions, and maybe they aren’t the right ones to ask—after all, there’s no reason to think that fighting racism and fighting sexism are mutually exclusive; they can in fact be complementary. Research into cancer only prevents us from doing research into diabetes if our total research budget is fixed—this is more than anything else an argument for increasing research budgets.

But we must not throw out the baby with the bathwater. Oppression is quantitative. Some kinds of oppression are clearly worse than others.

Why is this important? Because otherwise you can’t measure progress. If you have a strictly qualitative notion of oppression where it’s black-and-white, on-or-off, oppressed-or-not, then we haven’t made any progress on just about any kind of oppression. There is still racism, there is still sexism, there is still homophobia, there is still religious discrimination. Maybe these things will always exist to some extent. This makes the fight for social justice a hopeless Sisyphean task.

But in fact, that’s not true at all. We’ve made enormous progress. Unbelievably fast progress. Mind-boggling progress. For hundreds of millennia humanity made almost no progress at all, and then in the last few centuries we have suddenly leapt toward justice.

Sexism used to mean that women couldn’t own property, they couldn’t vote, they could be abused and raped with impunity—or even beaten or killed for being raped (which Saudi Arabia still does, by the way). Now sexism just means that women aren’t paid as well, are underrepresented in positions of power like Congress and Fortune 500 CEOs, and are still sometimes sexually harassed or raped—but when men are caught doing this they go to prison for years. This change happened in only about 100 years. That’s fantastic.

Racism used to mean that Black people were literally property to be bought and sold. They were slaves. They had no rights at all, they were treated like animals. They were frequently beaten to death. Now they can vote, hold office—one is President!—and racism means that our culture systematically discriminates against them, particularly in the legal system. Racism used to mean you could be lynched; now it just means that it’s a bit harder to get a job and the cops will sometimes harass you. This took only about 200 years. That’s amazing.

Homophobia used to mean that gay people were criminals. We could be sent to prison or even executed for the crime of making love in the wrong way. If we were beaten or murdered, it was our fault for being faggots. Now, homophobia means that we can’t get married in some states (and fewer all the time!), we’re depicted on TV in embarrassing stereotypes, and a lot of people say bigoted things about us. This has only taken about 50 years! That’s astonishing.

And above all, the most extreme example: Religious discrimination used to mean you could be burned at the stake for not being Catholic. It used to mean—and in some countries still does mean—that it’s illegal to believe in certain religions. Now, it means that Muslims are stereotyped because, well, to be frank, there are some really scary things about Muslim culture and some really scary people who are Muslim leaders. (Personally, I think Muslims should be more upset about Ahmadinejad and Al Qaeda than they are about being profiled in airports.) It means that we atheists are annoyed by “In God We Trust”, but we’re no longer burned at the stake. This has taken longer, more like 500 years. But even though it took a long time, I’m going to go out on a limb and say that this progress is wonderful.

Obviously, there’s a lot more progress remaining to be made on all these issues, and others—like economic inequality, ableism, nationalism, and animal rights—but the point is that we have made a lot of progress already. Things are better than they used to be—a lot better—and keeping this in mind will help us preserve the hope and dedication necessary to make things even better still.

If you think that oppression is either-or, on-or-off, you can’t celebrate this progress, and as a result the whole fight seems hopeless. Why bother, when it’s always been on, and will probably never be off? But we started with oppression that was absolutely horrific, and now it’s considerably milder. That’s real progress. At least within the First World we have gone from 90% oppressed to 25% oppressed, and we can bring it down to 10% or 1% or 0.1% or even 0.01%. Those aren’t just numbers, those are the lives of millions of people. As democracy spreads worldwide and poverty is eradicated, oppression declines. Step by step, social changes are made, whether by protest marches or forward-thinking politicians or even by lawyers and lobbyists (they aren’t all corrupt).

And indeed, a four-year-old Black girl with a mental disability living in Ghana whose entire family’s income is $3 a day is more oppressed than I am, and not only do I have no qualms about saying that, it would feel deeply unseemly to deny it. I am not totally unoppressed—I am a bisexual atheist with chronic migraines and depression in a country that is suspicious of atheists, systematically discriminates against LGBT people, and does not make proper accommodations for chronic disorders, particularly mental ones. But I am far less oppressed, and that little girl (she does exist, though I know not her name) could be made much less oppressed than she is even by relatively simple interventions (like a basic income). In order to make her fully and totally unoppressed, we would need such a radical restructuring of human society that I honestly can’t really imagine what it would look like. Maybe something like The Culture? Even then, as Iain Banks imagines it, there is inequality between those within The Culture and those outside it, and there have been wars like the Idiran-Culture War which killed billions, and among those trillions of people on thousands of vast orbital habitats someone, somewhere is probably making a speciesist remark. Yet I can state unequivocally that life in The Culture would be better than my life here now, which is better than the life of that poor disabled girl in Ghana.

To be fair, we can’t actually put a precise number on it—though many economists try, and one of my goals is to convince them to improve their methods so that they stop using willingness-to-pay and instead try to actually measure utility by something like QALY. A precise number would help, actually—it would allow us to do cost-benefit analyses to decide where to focus our efforts. But while we don’t need a precise number to tell when we are making progress, we do need to acknowledge that there are degrees of oppression, some worse than others.

Oppression is quantitative. And our goal should be minimizing that quantity.