Actually, our economic growth has been fairly ecologically sustainable lately!

JDN 2457538

Environmentalists have a reputation for being pessimists, and it is not entirely undeserved. While, as Paul Samuelson quipped, Wall Street indexes have predicted nine out of the last five recessions, environmentalists have predicted more like twenty out of the last zero ecological collapses.

Some fairly serious scientists have endorsed predictions of imminent collapse that haven’t panned out, and many continue to do so. This Guardian article should be hilarious to statisticians, as it literally takes trends that are going one direction, maps them onto a theory that arbitrarily decides they’ll suddenly reverse, and then says “the theory fits the data”. This should be taught in statistics courses as a lesson in how not to fit models. More data distortion occurs in this Scientific American article, which contains the phrase “food per capita is decreasing”; well, that’s true if you just look at the last couple of years, but according to FAOSTAT, food production per capita in 2012 (the most recent data in FAOSTAT) was higher than literally every other year on record except 2011. So if you allow for even the slightest amount of random fluctuation, it’s very clear that food per capita is increasing, not decreasing.


So many people are predicting the imminent collapse of human civilization. And yet, for some reason, all the people predicting this go about their lives as if it weren’t happening! Why, it’s almost as if they don’t really believe it, and just say it to get attention. Nobody gets on the news by saying “Civilization is doing fine; things are mostly getting better.”

There’s a long history of these sorts of gloom and doom predictions; perhaps the paradigm example is Thomas Malthus in 1798 predicting the imminent destruction of civilization by inevitable famine—just in time for global infant mortality rates to start plummeting and economic output to surge beyond anyone’s wildest dreams.

Still, when I sat down to study this it was remarkable to me just how good the outlook is for future sustainability. The Index of Sustainable Economic Welfare was created essentially in an attempt to show how our economic growth is largely an illusion driven by our rapacious natural resource consumption, but it has since been discontinued, perhaps because it didn’t show that. Using the US as an example, I reconstructed the index as best I could from World Bank data, and here’s what came out for the period since 1990:


The top line is US GDP as normally measured. The bottom line is the ISEW. The gap between those lines expands on a linear scale, but not on a logarithmic scale; that is to say, GDP and ISEW grow at almost exactly the same rate, so ISEW is always a constant (and large) proportion of GDP. By construction it is necessarily smaller (it basically starts from GDP and subtracts costs from it), but the fact that it is growing at the same rate shows that our economic growth is not being driven by depletion of natural resources or the military-industrial complex; it’s being driven by real improvements in education and technology.
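To see why a constant ratio and equal growth rates are the same claim, here is a minimal sketch with purely hypothetical numbers (not the actual World Bank series I used): if ISEW is always a fixed fraction of GDP, the gap between them widens on a linear scale but stays exactly constant on a logarithmic scale.

```python
# Illustrative only: hypothetical GDP and ISEW series showing that
# a constant ISEW/GDP ratio means equal growth rates.
import math

years = list(range(1990, 1996))
gdp = [100 * 1.025 ** (y - 1990) for y in years]  # 2.5% annual growth
isew = [0.6 * g for g in gdp]                     # ISEW fixed at 60% of GDP

for y, g, i in zip(years, gdp, isew):
    linear_gap = g - i                    # widens every year on a linear scale
    log_gap = math.log(g) - math.log(i)   # constant on a log scale
    print(y, round(linear_gap, 2), round(log_gap, 4))
```

The log-scale gap is just -log(0.6), regardless of year, which is what a parallel pair of lines on a log plot means.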

The Human Development Index has grown in almost every country (albeit at quite different rates) since 1990. Global poverty is the lowest it has ever been. We are living in a golden age of prosperity. This is such a golden age for our civilization, our happiness rating maxed out and now we’re getting +20% production and extra gold from every source. (Sorry, gamer in-joke.)

Now, it is said that pride cometh before a fall; so perhaps our current mind-boggling improvements in human welfare have only been purchased on borrowed time as we further drain our natural resources.

There is some cause for alarm: We’re literally running out of fish, and groundwater tables are falling rapidly. Due to poor land use, deserts are expanding. Huge quantities of garbage now float in our oceans. And of course, climate change is poised to kill millions of people. Arctic sea ice may soon begin melting away entirely each summer.

And yet, global carbon emissions have not been increasing the last few years, despite strong global economic growth. We need to be reducing emissions, not just keeping them flat (in a previous post I talked about some policies to do that); but even keeping them flat while still raising standard of living is something a lot of environmentalists kept telling us we couldn’t possibly do. Despite constant talk of “overpopulation” and a “population bomb”, population growth rates are declining and world population is projected to level off around 9 billion. Total solar power production in the US expanded by a factor of 40 in just the last 10 years.

Of course, I don’t deny that there are serious environmental problems, and we need to make policies to combat them; but we are doing that. Humanity is not mindlessly plunging headlong into an abyss; we are taking steps to improve our future.

And in fact I think environmentalists deserve a lot of credit for that! Raising awareness of environmental problems has made most Americans recognize that climate change is a serious problem. Further pressure might make them realize it should be one of our top priorities (presently most Americans do not).

And who knows, maybe the extremist doomsayers are necessary to set the Overton Window for the rest of us. I think we of the center-left (toward which reality has a well-known bias) often underestimate how much we rely upon the radical left to pull the discussion away from the radical right and make us seem more reasonable by comparison. It could well be that “climate change will kill tens of millions of people unless we act now to institute a carbon tax and build hundreds of nuclear power plants” is easier to swallow after hearing “climate change will destroy humanity unless we act now to transform global capitalism to agrarian anarcho-socialism.” Ultimately I wish people could be persuaded simply by the overwhelming scientific evidence in favor of the carbon tax/nuclear power argument, but alas, humans are simply not rational enough for that; and you must go to policy with the public you have. So maybe irrational levels of pessimism are a worthwhile corrective to the irrational levels of optimism coming from the other side, like the execrable sophistry of “in praise of fossil fuels” (yes, we know our economy was built on coal and oil—that’s the problem. We’re “rolling drunk on petroleum”; when we’re trying to quit drinking, reminding us how much we enjoy drinking is not helpful.).

But I worry that this sort of irrational pessimism carries its own risks. First there is the risk of simply giving up, succumbing to learned helplessness and deciding there’s nothing we can possibly do to save ourselves. Second is the risk that we will do something needlessly drastic (like a radical socialist revolution) that impoverishes or even kills millions of people for no reason. The extreme fear that we are on the verge of ecological collapse could lead people to take a “by any means necessary” stance and end up with a cure worse than the disease. So far the word “ecoterrorism” has mainly been applied to what was really ecovandalism; but if we were in fact on the verge of total civilizational collapse, I can understand why someone would think quite literal terrorism was justified (actually the main reason I don’t is that I just don’t see how it could actually help). Just about anything is worth it to save humanity from destruction.

What is progress? How far have we really come?

JDN 2457534

It is a controversy that has lasted throughout the ages: Is the world getting better? Is it getting worse? Or is it more or less staying the same, changing in ways that don’t really constitute improvements or detriments?

The most obvious and indisputable change in human society over the course of history has been the advancement of technology. At one extreme there are techno-utopians, who believe that technology will solve all the world’s problems and bring about a glorious future; at the other extreme are anarcho-primitivists, who maintain that civilization, technology, and industrialization were all grave mistakes, removing us from our natural state of peace and harmony.

I am not a techno-utopian—I do not believe that technology will solve all our problems—but I am much closer to that end of the scale. Technology has solved a lot of our problems, and will continue to solve a lot more. My aim in this post is to convince you that progress is real, that things really are, on the whole, getting better.

One of the more baffling arguments against progress comes from none other than Jared Diamond, the social scientist most famous for Guns, Germs and Steel (which oddly enough is mainly about horses and goats). About seven months before I was born, Diamond wrote an essay for Discover magazine arguing quite literally that agriculture—and by extension, civilization—was a mistake.

Diamond fortunately avoids the usual argument based solely on modern hunter-gatherers, which is a selection bias if ever I heard one. Instead his main argument seems to be that paleontological evidence shows an overall decrease in health around the same time as agriculture emerged. But that’s still an endogeneity problem, albeit a subtler one. Maybe agriculture emerged as a response to famine and disease. Or maybe they were both triggered by rising populations; higher populations increase disease risk, and are also basically impossible to sustain without agriculture.

I am similarly dubious of the claim that hunter-gatherers are always peaceful and egalitarian. It does seem to be the case that herders are more violent than other cultures, as they tend to form honor cultures that punish all slights with overwhelming violence. Even after the Industrial Revolution there were herder honor cultures—the Wild West. Yet as Steven Pinker keeps trying to tell people, the death rates due to homicide in all human cultures appear to have steadily declined for thousands of years.

I read an article just a few days ago on the Scientific American blog which included the following claim so astonishingly nonsensical it makes me wonder if the authors can even do arithmetic or read statistical tables correctly:

As I keep reminding readers (see Further Reading), the evidence is overwhelming that war is a relatively recent cultural invention. War emerged toward the end of the Paleolithic era, and then only sporadically. A new study by Japanese researchers published in the Royal Society journal Biology Letters corroborates this view.

Six Japanese scholars led by Hisashi Nakao examined the remains of 2,582 hunter-gatherers who lived 12,000 to 2,800 years ago, during Japan’s so-called Jomon Period. The researchers found bashed-in skulls and other marks consistent with violent death on 23 skeletons, for a mortality rate of 0.89 percent.

That is supposed to be evidence that ancient hunter-gatherers were peaceful? The global homicide rate today is 62 homicides per million people per year. Using the worldwide life expectancy of 71 years (which is biasing against modern civilization because our life expectancy is longer), that means that the worldwide lifetime homicide rate is 4,400 homicides per million people, or 0.44%—that’s less than half the homicide rate of these “peaceful” hunter-gatherers. If you compare just against First World countries, the difference is even starker; let’s use the US, which has the highest homicide rate in the First World. Our homicide rate is 38 homicides per million people per year, which at our life expectancy of 79 years is 3,000 homicides per million people, or an overall homicide rate of 0.3%, slightly more than a third of this “peaceful” ancient culture. The most peaceful societies today—notably Japan, where these remains were found—have homicide rates as low as 3 per million people per year, which is a lifetime homicide rate of 0.02%, forty times smaller than their supposedly utopian ancestors. (Yes, all of Japan has fewer total homicides than Chicago. I’m sure it has nothing to do with their extremely strict gun control laws.) Indeed, to get a modern homicide rate as high as these hunter-gatherers, you need to go to a country like Congo, Myanmar, or the Central African Republic. To get a substantially higher homicide rate, you essentially have to be in Latin America. Honduras, the murder capital of the world, has a lifetime homicide rate of about 6.7%.

Again, how did I figure these things out? By reading basic information from publicly-available statistical tables and then doing some simple arithmetic. Apparently these paleoanthropologists couldn’t be bothered to do that, or didn’t know how to do it correctly, before they started proclaiming that human nature is peaceful and civilization is the source of violence. After an oversight as egregious as that, it feels almost petty to note that a sample size of a few thousand people from one particular region and culture isn’t sufficient data to draw such sweeping judgments or speak of “overwhelming” evidence.
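The “simple arithmetic” above can be laid out in a few lines: a lifetime homicide rate is just the annual rate (per million people per year) times the life expectancy. All figures are the ones quoted in the text.

```python
# Lifetime homicide rate = annual homicides per million per year * life expectancy.
# Figures as quoted above.

def lifetime_rate_pct(annual_per_million, life_expectancy):
    """Lifetime homicide rate, as a percentage of the population."""
    return annual_per_million * life_expectancy / 1_000_000 * 100

print(round(23 / 2582 * 100, 2))            # Jomon remains: 0.89 (percent)
print(round(lifetime_rate_pct(62, 71), 2))  # world today: 0.44
print(round(lifetime_rate_pct(38, 79), 1))  # US: 0.3
print(round(lifetime_rate_pct(3, 71), 2))   # Japan: 0.02
```

Note that multiplying by life expectancy slightly overstates the modern rates relative to the ancient one, since longer lives mean more years at risk—which only strengthens the comparison.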

Of course, in order to decide whether progress is a real phenomenon, we need a clearer idea of what we mean by progress. It would be presumptuous to use per-capita GDP, though there can be absolutely no doubt that technology and capitalism do in fact raise per-capita GDP. If we measure by inequality, modern society clearly fares much worse (our top 1% share and Gini coefficient may be higher than those of Classical Rome!), but that is clearly biased in the opposite direction, because the main way we have raised inequality is by raising the ceiling, not lowering the floor. Most of our really good measures (like the Human Development Index) only exist for the last few decades and can barely even be extrapolated back through the 20th century.

How about babies not dying? This is my preferred measure of a society’s value. It seems like something that should be totally uncontroversial: Babies dying is bad. All other things equal, a society is better if fewer babies die.

I suppose it doesn’t immediately follow that all things considered a society is better if fewer babies die; maybe the dying babies could be offset by some greater good. Perhaps a totalitarian society where no babies die is in fact worse than a free society in which a few babies die, or perhaps we should be prepared to accept some small amount of babies dying in order to save adults from poverty, or something like that. But without some really powerful overriding reason, babies not dying probably means your society is doing something right. (And since most ancient societies were in a state of universal poverty and quite frequently tyranny, these exceptions would only strengthen my case.)

Well, get ready for some high-yield truth bombs about infant mortality rates.

It’s hard to get good data for prehistoric cultures, but the best data we have says that infant mortality in ancient hunter-gatherer cultures was about 20-50%, with a best estimate around 30%. This is statistically indistinguishable from early agricultural societies.

Indeed, 30% seems to be the figure humanity had for most of history. Just shy of a third of all babies died for most of history.

In Medieval times, infant mortality was about 30%.

This same rate (fluctuating based on various plagues) persisted into the Enlightenment—Sweden has the best records, and their infant mortality rate in 1750 was about 30%.

The decline in infant mortality began slowly: During the Industrial Era, infant mortality was about 15% in isolated villages, but still as high as 40% in major cities due to high population densities with poor sanitation.

Even as recently as 1900, there were US cities with infant mortality rates as high as 30%, though the overall rate was more like 10%.

Most of the decline was recent and rapid: Just within the US since WW2, infant mortality fell from about 5.5% to 0.7%, though there remains a substantial disparity between White and Black people.

Globally, the infant mortality rate fell from 6.3% to 3.2% within my lifetime, and in Africa today, the region where it is worst, it is about 5.5%—or what it was in the US in the 1940s.

This precipitous decline in babies dying is the main reason ancient societies have such low life expectancies; in fact, people who survived to adulthood typically lived to about 70, not much shorter than today. So my multiplying everything by 71 actually isn’t too far off even for ancient societies.

Let me make a graph for you here, of the approximate rate of babies dying over time from 10,000 BC to today:


Let’s zoom in on the last 250 years, where the data is much more solid:


I think you may notice something in these graphs. There is quite literally a turning point for humanity, a kink in the curve where we suddenly begin a rapid decline from an otherwise constant mortality rate.

That point occurs around or shortly before 1800—that is, it occurs at industrial capitalism. Adam Smith (not to mention Thomas Jefferson) was writing at just about the point in time when humanity made a sudden and unprecedented shift toward saving the lives of millions of babies.

So now, think about that the next time you are tempted to say that capitalism is an evil system that destroys the world; the evidence points to capitalism quite literally saving babies from dying.

How would it do so? Well, there’s that rising per-capita GDP we previously ignored, for one thing. But more important seems to be the way that industrialization and free markets support technological innovation, and in this case especially medical innovation—antibiotics and vaccines. Our higher rates of literacy and better communication, also a result of raised standard of living and improved technology, surely didn’t hurt. I’m not often in agreement with the Cato Institute, but they’re right about this one: Industrial capitalism is the chief source of human progress.

Billions of babies would have died but we saved them. So yes, I’m going to call that progress. Civilization, and in particular industrialization and free markets, have dramatically improved human life over the last few hundred years.

In a future post I’ll address one of the common retorts to this basically indisputable fact: “You’re making excuses for colonialism and imperialism!” No, I’m not. Saying that modern capitalism is a better system (not least because it saves babies) is not at all the same thing as saying that our ancestors were justified in using murder, slavery, and tyranny to force people into it.

Why it matters that torture is ineffective

JDN 2457531

Like “longest-ever-serving Speaker of the House sexually abuses teenagers” and “NSA spy program is trying to monitor the entire telephone and email system”, the news that the US government systematically tortures suspects is an egregious violation that goes to the highest levels of our government—that for some reason most Americans don’t particularly seem to care about.

The good news is that President Obama signed an executive order in 2009 banning torture domestically, reversing official policy under the Bush Administration, and then better yet in 2014 expanded the order to apply to all US interests worldwide. If this is properly enforced, perhaps our history of hypocrisy will finally be at its end. (Well, not if Trump wins…)

Yet as often seems to happen, there are two extremes in this debate and I think they’re both wrong.

The really disturbing side is “Torture works and we have to use it!” The preferred mode of argumentation for this is the “ticking time bomb scenario”, in which we have some urgent disaster to prevent (such as a nuclear bomb about to go off) and torture is the only way to stop it from happening. Surely then torture is justified? This argument may sound plausible, but as I’ll get to below, this is a lot like saying, “If aliens were attacking from outer space trying to wipe out humanity, nuclear bombs would probably be justified against them; therefore nuclear bombs are always justified and we can use them whenever we want.” If you can’t wait for my explanation, The Atlantic skewers the argument nicely.

Yet the opponents of torture have brought this sort of argument on themselves, by staking out a position so extreme as “It doesn’t matter if torture works! It’s wrong, wrong, wrong!” This kind of simplistic deontological reasoning is very appealing and intuitive to humans, because it casts the world into simple black-and-white categories. To show that this is not a strawman, here are several different people all making this same basic argument, that since torture is illegal and wrong it doesn’t matter if it works and there should be no further debate.

But the truth is, if it really were true that the only way to stop a nuclear bomb from leveling Los Angeles was to torture someone, it would be entirely justified—indeed obligatory—to torture that suspect and stop that nuclear bomb.

The problem with that argument is not just that this is not our usual scenario (though it certainly isn’t); it goes much deeper than that:

That scenario makes no sense. It wouldn’t happen.

To use the example the late Antonin Scalia used from an episode of 24 (perhaps the most egregious Fictional Evidence Fallacy ever committed), if there ever is a nuclear bomb planted in Los Angeles, that would literally be one of the worst things that ever happened in the history of the human race—literally a Holocaust in the blink of an eye. We should be prepared to cause extreme suffering and death in order to prevent it. But not only is that event (fortunately) very unlikely, torture would not help us.

Why? Because torture just doesn’t work that well.

It would be too strong to say that it doesn’t work at all; it’s possible that it could produce some valuable intelligence—though clear examples of such results are amazingly hard to come by. There are some social scientists who have found empirical results showing some effectiveness of torture, however. We can’t say with any certainty that it is completely useless. (For obvious reasons, a randomized controlled experiment in torture is wildly unethical, so none have ever been attempted.) But to justify torture it isn’t enough that it could work sometimes; it has to work vastly better than any other method we have.

And our empirical data is in fact reliable enough to show that that is not the case. Torture often produces unreliable information, as we would expect from the game theory involved—your incentive is to stop the pain, not provide accurate intel; the psychological trauma that torture causes actually distorts memory and reasoning; and as a matter of fact basically all the useful intelligence obtained in the War on Terror was obtained through humane interrogation methods. As interrogation experts agree, torture just isn’t that effective.

In principle, there are four basic cases to consider:

1. Torture is vastly more effective than the best humane interrogation methods.

2. Torture is slightly more effective than the best humane interrogation methods.

3. Torture is as effective as the best humane interrogation methods.

4. Torture is less effective than the best humane interrogation methods.

The evidence points most strongly to case 4, which would mean that torture is a no-brainer; if it doesn’t even work as well as other methods, it’s absurd to use it. You’re basically kicking puppies at that point—purely sadistic violence that accomplishes nothing. But the data isn’t clear enough for us to rule out case 3 or even case 2. There is only one case we can strictly rule out, and that is case 1.

But it was only in case 1 that torture could ever be justified!

If you’re trying to justify doing something intrinsically horrible, it’s not enough that it has some slight benefit.

People seem to have this bizarre notion that we have only two choices in morality:

Either we are strict deontologists, and wrong actions can never be justified by good outcomes ever, in which case apparently vaccines are morally wrong, because stabbing children with needles is wrong. To be fair, some people seem to actually believe this; but then, some people believe the Earth is less than 10,000 years old.

Or alternatively we are the bizarre strawman concept most people seem to have of utilitarianism, under which any wrong action can be justified by even the slightest good outcome, in which case all you need to do to justify slavery is show that it would lead to a 1% increase in per-capita GDP. Sadly, there honestly do seem to be economists who believe this sort of thing. Here’s one arguing that US chattel slavery was economically efficient, and some of the more extreme arguments for why sweatshops are good can take on this character. Sweatshops may be a necessary evil for the time being, but they are still an evil.

But what utilitarianism actually says (and I consider myself some form of nuanced rule-utilitarian, though actually I sometimes call it “deontological consequentialism” to emphasize that I mean to synthesize the best parts of the two extremes) is not that the ends always justify the means, but that the ends can justify the means—that it can be morally good or even obligatory to do something intrinsically bad (like stabbing children with needles) if it is the best way to accomplish some greater good (like saving them from measles and polio). But the good actually has to be greater, and it has to be the best way to accomplish that good.

To see why this latter proviso is important, consider the real-world ethical issues involved in psychology experiments. The benefits of psychology experiments are already quite large, and poised to grow as the science improves; one day the benefits of cognitive science to humanity may be even larger than the benefits of physics and biology are today. Imagine a world without mood disorders or mental illness of any kind; a world without psychopathy, where everyone is compassionate; a world where everyone is achieving their full potential for happiness and self-actualization. Cognitive science may yet make that world possible—and I haven’t even gotten into its applications in artificial intelligence.

To achieve that world, we will need a great many psychology experiments. But does that mean we can just corral people off the street and throw them into psychology experiments without their consent—or perhaps even their knowledge? That we can do whatever we want in those experiments, as long as it’s scientifically useful? No, it does not. We have ethical standards in psychology experiments for a very good reason, and while those ethical standards do slightly reduce the efficiency of the research process, the reduction is small enough that the moral choice is obviously to retain the ethics committees and accept the slight reduction in research efficiency. Yes, randomly throwing people into psychology experiments might actually be slightly better in purely scientific terms (larger and more random samples)—but it would be terrible in moral terms.

Along similar lines, even if torture works about as well or even slightly better than other methods, that’s simply not enough to justify it morally. Making a successful interrogation take 16 days instead of 17 simply wouldn’t be enough benefit to justify the psychological trauma to the suspect (and perhaps the interrogator!), the risk of harm to the falsely accused, or the violation of international human rights law. And in fact a number of terrorism suspects were waterboarded for months, so even the idea that it could shorten the interrogation is pretty implausible. If anything, torture seems to make interrogations take longer and give less reliable information—case 4.

A lot of people seem to have this impression that torture is amazingly, wildly effective, that a suspect who won’t crack after hours of humane interrogation can be tortured for just a few minutes and give you all the information you need. This is exactly what we do not find empirically; if he didn’t crack after hours of talk, he won’t crack after hours of torture. If you literally only have 30 minutes to find the nuke in Los Angeles, I’m sorry; you’re not going to find the nuke in Los Angeles. No adversarial interrogation is ever going to be completed that quickly, no matter what technique you use. Evacuate as many people to safe distances or underground shelters as you can in the time you have left.

This is why the “ticking time-bomb” scenario is so ridiculous (and so insidious); that’s simply not how interrogation works. The best methods we have for “rapid” interrogation of hostile suspects take hours or even days, and they are humane—building trust and rapport is the most important step. The goal is to get the suspect to want to give you accurate information.

For the purposes of the thought experiment, okay, you can stipulate that it would work (this is what the Stanford Encyclopedia of Philosophy does). But now all you’ve done is made the thought experiment more distant from the real-world moral question. The closest real-world examples we’ve ever had involved individual crimes, probably too small to justify the torture (as bad as a murdered child is, think about what you’re doing if you let the police torture people). But by the time the terrorism to be prevented is large enough to really be sufficient justification, it (1) hasn’t happened in the real world and (2) surely involves terrorists who are sufficiently ideologically committed that they’ll be able to resist the torture. If such a situation arises, of course we should try to get information from the suspects—but what we try should be our best methods, the ones that work most consistently, not the ones that “feel right” and maybe happen to work on occasion.

Indeed, the best explanation I have for why people use torture at all, given its horrible effects and, at best, mediocre effectiveness, is that it feels right.

When someone does something terrible (such as an act of terrorism), we rightfully reduce our moral valuation of them relative to everyone else. If you are even tempted to deny this, suppose a terrorist and a random civilian are both inside a burning building and you only have time to save one. Of course you save the civilian and not the terrorist. And that’s still true even if you know that once the terrorist was rescued he’d go to prison and never be a threat to anyone else. He’s just not worth as much.

In the most extreme circumstances, a person can be so terrible that their moral valuation should be effectively zero: If the only person in a burning building is Stalin, I’m not sure you should save him even if you easily could. But it is a grave moral mistake to think that a person’s moral valuation should ever go negative, yet I think this is something that people do when confronted with someone they truly hate. The federal agents torturing those terrorists didn’t merely think of them as worthless—they thought of them as having negative worth. They felt it was a positive good to harm them. But this is fundamentally wrong; no sentient being has negative worth. Some may be so terrible as to have essentially zero worth; and we are often justified in causing harm to some in order to save others. It would have been entirely justified to kill Stalin (as a matter of fact he died of heart disease and old age), to remove the continued threat he posed; but to torture him would not have made the world a better place, and actually might well have made it worse.

Yet I can see how psychologically it could be useful to have a mechanism in our brains that makes us hate someone so much we view them as having negative worth. It makes it a lot easier to harm them when necessary, makes us feel a lot better about ourselves when we do. The idea that any act of homicide is a tragedy but some of them are necessary tragedies is a lot harder to deal with than the idea that some people are just so evil that killing or even torturing them is intrinsically good. But some of the worst things human beings have ever done ultimately came from that place in our brains—and torture is one of them.

The powerful persistence of bigotry

JDN 2457527

Bigotry has been a part of human society since the beginning—people have been hating people they perceive as different since as long as there have been people, and maybe even before that. I wouldn’t be surprised to find that different tribes of chimpanzees or even elephants hold bigoted beliefs about each other.

Yet it may surprise you that neoclassical economics has basically no explanation for this. There is a long-standing famous argument that bigotry is inherently irrational: If you hire based on anything aside from actual qualifications, you are leaving money on the table for your company. Because women CEOs are paid less and perform better, simply ending discrimination against women in top executive positions could save any typical large multinational corporation tens of millions of dollars a year. And yet, they don’t! Fancy that.

More recently there has been work on the concept of statistical discrimination, under which it is rational (in the sense of narrowly-defined economic self-interest) to discriminate because categories like race and gender may provide some statistically valid stereotype information. For example, “Black people are poor” is obviously not true across the board, but race is strongly correlated with wealth in the US; “Asians are smart” is not a universal truth, but Asian-Americans do have very high educational attainment. In the absence of more reliable information that might be your best option for making good decisions. Of course, this creates a vicious cycle where people in the positive stereotype group are better off and have more incentive to improve their skills than people in the negative stereotype group, thus perpetuating the statistical validity of the stereotype.

But of course that assumes that the stereotypes are statistically valid, and that employers don’t have more reliable information. Yet many stereotypes aren’t even true statistically: If “women are bad drivers”, then why do men cause 75% of traffic fatalities? Furthermore, in most cases employers do have more reliable information—resumes with education and employment records. Asian-Americans are indeed more likely to have bachelor’s degrees than Latino Americans, but when it says right on Mr. Lorenzo’s resume that he has a B.A. and on Mr. Suzuki’s resume that he doesn’t, the racial stereotype no longer provides you with any further information. Yet even when the resumes are identical, employers are more likely to hire a White applicant than a Black applicant, and more likely to hire a male applicant than a female applicant—we have directly tested this in experiments. In an experiment where employers had direct performance figures in front of them, they were still more likely to choose the man when the two had the same scores—and sometimes even when the woman had a higher score!

Even our assessments of competence are often biased, probably subconsciously; given the same essay to review, most reviewers find more spelling errors and are more concerned about those errors if they are told that the author is Black. If they thought the author was White, they thought of the errors as “minor mistakes” by a student with “otherwise good potential”; but if they thought the author was Black, they “can’t believe he got into this school in the first place”. These reviewers were reading the same essay. The alleged author’s race was decided randomly. Most if not all of these reviewers were not consciously racist. Subconscious racial biases are all over the place; almost everyone exhibits some subconscious racial bias.

No, discrimination isn’t just rational inference based on valid (if unfortunate and self-reinforcing) statistical trends. There is a significant component of just outright irrational bigotry.

We’re seeing this play out in North Carolina; due to their arbitrary discrimination against lesbian, gay, bisexual and especially transgender people, they are now hemorrhaging jobs as employers pull out, and their federal funding for student loans is now in jeopardy due to the obvious Title IX violation. This is obviously not in the best interest of the people of North Carolina (even the ones who aren’t LGBT!); and it’s all being justified on the grounds of an epidemic of sexual assaults by people pretending to be trans that doesn’t even exist. It turns out that more Republican Senators have been arrested for sexual misconduct in bathrooms than transgender people—and while the number of transgender people in the US is surprisingly hard to measure, it’s clearly a lot larger than the number of Republican Senators!

In fact, discrimination is even more irrational than it may seem, because empirically the benefits of discrimination (such as they are—short-term narrow economic self-interest) fall almost entirely on the rich while the harms fall mainly on the poor, yet poor people are much more likely to be racist! Since income and education are highly correlated, education accounts for some of this effect. This is reason to be hopeful, for as educational attainment has soared, we have found that racism has decreased.

But education doesn’t seem to explain the full effect. One theory to account for this is what’s called last-place aversion: a highly pernicious heuristic whereby people are less concerned about their own absolute status than they are about not having the worst status. In economic experiments, people are usually more willing to give money to people worse off than them than to those better off than them—unless giving it to the worse-off would make those people better off than they themselves are. I think we actually need further study of what happens when the transfer would make those other people exactly as well-off as the giver, because that turns out to be absolutely critical to whether people would be willing to support a basic income. In other words, does “tied for last” count as last? Would people rather play a game where everyone gets $100, or one where they get $50 but everyone else only gets $10?

I would hope that humanity is better than that—that we would want to play the $100 game, which is analogous to a basic income. But when I look at the extreme and persistent inequality that has plagued human society for millennia, I begin to wonder if perhaps there really are a lot of people who think of the world in such zero-sum, purely relative terms, and care more about being better than others than they do about doing well themselves. Perhaps the horrific poverty of Sub-Saharan Africa and Southeast Asia is, for many First World people, not a bug but a feature; we feel richer when we know they are poorer. Scarcity seems to amplify this zero-sum thinking; racism gets worse whenever we have economic downturns. Precisely because discrimination is economically inefficient, this can create a vicious cycle where poverty causes bigotry which worsens poverty.

There is also something deeper going on, something evolutionary; bigotry is part of what I call the tribal paradigm, the core aspect of human psychology that defines identity in terms of in-groups which are good and out-groups which are bad. We will probably never fully escape the tribal paradigm, but this is not a reason to give up hope; we have made substantial progress in reducing bigotry in many places. What seems to happen is that people learn to expand their mental tribe, so that it encompasses larger and larger groups—not just White Americans but all Americans, or not just Americans but all human beings. Peter Singer calls this the Expanding Circle (also the title of his book on it). We may one day be able to make our tribe large enough to encompass all sentient beings in the universe; at that point, it’s just fine if we are only interested in advancing the interests of those in our tribe, because our tribe would include everyone. Yet I don’t think any of us are quite there yet, and some people have a really long way to go.

But with these expanding tribes in mind, perhaps I can leave you with a fact that is as counter-intuitive as it is encouraging, and even easier to take out of context: Racism was better than what came before it. What I mean by this is not that racism is good—of course it’s terrible—but that in order to be racism, to divide the whole world into a small number of “racial groups”, people already had to enormously expand their mental tribe from where it started. When we evolved on the African savannah millions of years ago, our tribe was 150 people; to this day, that’s about the number of people we actually feel close to and interact with on a personal level. We could have stopped there, and for millennia we did. But over time we managed to expand beyond that number, to a village of 1,000, a town of 10,000, a city of 100,000. More recently we attained mental tribes of whole nations, in some cases hundreds of millions of people. Racism is about that same scale, if not a bit larger; what most people (rather arbitrarily, and in a way that changes over time) call “White” constitutes about a billion people. “Asian” (including South Asian) is almost four billion. These are astonishingly huge figures, some seven orders of magnitude larger than what we originally evolved to handle. The ability to feel empathy for all “White” people is just a little bit smaller than the ability to feel empathy for all people period. Similarly, while today the gender in “all men are created equal” is jarring to us, the idea at the time really was an incredibly radical broadening of the moral horizon—Half the world? Are you mad?

Therefore I am confident that one day, not too far from now, the world will take that next step, that next order of magnitude, which many of us already have (or try to), and we will at last conquer bigotry, and if not eradicate it entirely then force it completely into the most distant shadows and deny it its power over our society.

Is there hope for stopping climate change?

JDN 2457523

This topic was decided by vote of my Patreons (there are still few enough that the vote usually has only two or three people, but hey, what else can I do?).

When it comes to climate change, I have good news and bad news.

First, the bad news:

We are not going to be able to stop climate change, or even stop making it worse, any time soon. Because of this, millions of people are going to die and there’s nothing we can do about it.

Now, the good news:

We can do a great deal to slow down our contribution to climate change, reduce its impact on human society, and save most of the people who would otherwise have been killed by it. It is currently forecasted that climate change will cause somewhere between 10 million and 100 million deaths over the next century; if we can hold to the lower end of that error bar instead of the upper end, that’s half a dozen Holocausts prevented.

There are three basic approaches to take, and we will need all of them:

1. Emission reduction: Put less carbon in

2. Geoengineering: Take more carbon out

3. Adaptation: Protect more humans from the damage

Strategies 1 and 2 are classified as mitigation, while strategy 3 is classified as adaptation. Mitigation is reducing climate change; adaptation is reducing the effect of climate change on people.

Let’s start with strategy 1, emission reduction. It’s probably the most important; without it the others are clearly doomed to fail.

So, what are our major sources of emissions, and what can we do to reduce them?

While within the US and most other First World countries the primary sources of emissions are electricity and transportation, worldwide transportation is less important and agriculture is about as large a source of emissions as electricity. 25% of global emissions are due to electricity, 24% are due to agriculture, 21% are due to industry, 14% are due to transportation, only 6% are due to buildings, and everything else adds up to 10%.


1A. Both within the First World and worldwide, the leading source of emissions is electricity. Our first priority is therefore electrical grid reform.

Energy efficiency can help—and it already is helping, as global electricity consumption has stopped growing despite growth in population and GDP. Energy intensity of GDP is declining. But the main thing we need to do is reform the way that electricity is produced.

Let’s take a look at how the world currently produces electricity. The leading source is “liquids”, an odd euphemism for oil: about 175 quadrillion BTU per year, 30% of all production. This is closely followed by coal, at about 160 quadrillion BTU per year (28%). Then we have natural gas, about 130 quadrillion BTU per year (23%); wind, solar, hydroelectric, and geothermal altogether about 60 quadrillion BTU per year (11%); and nuclear fission only about 40 quadrillion BTU per year (7%).

This list basically needs to be reversed. We will probably not be able to completely stop using oil for transportation, but we have no excuse for using it for electricity production. We also need to stop using coal for, well, just about anything. There are a few industrial processes that basically have to use coal; fine, use it for those. But just as something to burn, coal is one of the most heavily-polluting technologies in existence—the only things we burn that are worse are wood and animal dung. Simply ending the burning of coal, wood, and dung would by itself save 4 million lives a year just from reduced pollution.

Natural gas burns cleaner than coal or oil, but it still produces a lot of carbon emissions. Even worse, natural gas is itself one of the worst greenhouse gases—and so natural gas leaks are a major source of greenhouse emissions. Last year a single massive leak accounted for 25% of California’s methane emissions. Like oil, natural gas is also something we’ll want to use quite sparingly.

The best power source is solar power, hands-down. In the long run, the goal should be to convert as much as possible of the grid to solar. Wind, hydroelectric, and geothermal are also very useful, though wind power peaks at the wrong time of day for high energy demand and hydro and geothermal require specific geography to work. Solar is also the most scalable; as long as you have the raw materials and the technology, you can keep expanding solar production all the way up to a Dyson Sphere.

But solar is intermittent, and we don’t have good enough energy storage methods right now to ensure a steady grid on solar alone. The bulk of our grid is therefore going to have to be made of the one energy source we have with negligible carbon emissions, mature technology, and virtually unlimited and fully controllable output: Nuclear fission. At least until fusion matures or we solve the solar energy storage problem, nuclear fission is our best option for minimizing carbon emissions immediately: not waiting for some new technology to come save us, but building efficient reactors now. Why does France only emit 6 tonnes of carbon per person per year while the UK emits 9, Germany emits 10, and the US emits a whopping 17? Because France’s electricity grid is almost entirely nuclear.

“But nuclear power is dangerous!” people will say. France has indeed had several nuclear accidents in the last 40 years; guess how many deaths those accidents have caused? Zero. Deepwater Horizon killed more people than the sum total of all nuclear accidents in all First World countries. Worldwide, there was one Black Swan horrible nuclear event—Chernobyl (which still only killed about as many people as die in the US each year from car accidents or lung cancer)—and other than that, nuclear power is safer than every form of fossil fuel.

“Where will we store the nuclear waste?” Well, that’s a more legitimate question, but you know what? It can wait. Nuclear waste doesn’t accumulate very fast, precisely because fission is thousands of times more efficient than combustion; so we’ll have plenty of room in existing facilities or easily-built expansions for the next century. By that point, we should have fusion or a good way of converting the whole grid to solar. We should of course invest in R&D in the meantime. But right now, we need fission.

So, after we’ve converted the electricity grid to nuclear, what next?

1B. To reduce the effect of agriculture, we need to eat less meat; among agricultural sources, livestock is the leading contributor of greenhouse emissions, followed by land use “emissions” (i.e. deforestation), which could also be reduced by converting more crop production to vegetables instead of meat, because vegetables are much more land-efficient (and just-about-everything-else-efficient).

1C. To reduce the effect of transportation, we need huge investments in public transit, as well as more fuel-efficient vehicles like hybrids and electric cars. Switching to public transit could cut private transportation-related emissions in half. 100% electric cars are too much to hope for, but by implementing a high carbon tax, we might at least raise the cost of gasoline enough to incentivize makers and buyers of cars to choose more fuel-efficient models.

The biggest gains in fuel efficiency happen on the most gas-guzzling vehicles—indeed, so much so that our usual measure “miles per gallon” is highly misleading.

Quick: Which of the following changes would reduce emissions more, assuming all the vehicles drive the same amount? Switching from a hybrid of 50 MPG to a zero-emission electric (infinity MPG!), switching from a normal sedan of 20 MPG to a hybrid of 50 MPG, or switching from an inefficient diesel truck of 3 MPG to a modern diesel truck of 7 MPG?

The diesel truck, by far.

If each vehicle drives 10,000 miles per year: The first switch will take us from consuming 200 gallons to consuming 0 gallons—saving 200 gallons. The second switch will take us from consuming 500 gallons to consuming 200 gallons—saving 300 gallons. But the third switch will take us from consuming 3,334 gallons to consuming only 1,429 gallons—saving a whopping 1,905 gallons. Even slight increases in the fuel efficiency of highly inefficient vehicles have a huge impact, while you can raise an already-efficient vehicle to perfect efficiency and barely notice a difference.
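To make the comparison easy to check, here is a quick sketch in Python of the same back-of-the-envelope arithmetic; the vehicles and mileage figures are just the hypothetical ones from the example above:

```python
# Gallons of fuel consumed per year at a given fuel economy,
# assuming each vehicle drives 10,000 miles per year.
def gallons_per_year(mpg, miles=10_000):
    return miles / mpg

# The three hypothetical switches from the example: (old MPG, new MPG).
# The zero-emission electric is modeled as infinite MPG (zero gallons).
switches = {
    "hybrid -> electric": (50, float("inf")),
    "sedan -> hybrid": (20, 50),
    "old diesel truck -> new diesel truck": (3, 7),
}

for name, (old_mpg, new_mpg) in switches.items():
    saved = gallons_per_year(old_mpg) - gallons_per_year(new_mpg)
    print(f"{name}: saves about {saved:,.0f} gallons per year")
```

The diesel truck switch dwarfs the other two, precisely because fuel use scales with the reciprocal of MPG.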

We really should measure in gallons per mile—or better yet, liters per megameter. (Most of the world already uses liters per 100 km, which is almost the same thing!)
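For reference, converting MPG into the metric measure is a one-liner; the conversion constants below are the standard ones (about 3.785 liters per gallon, 1.609 km per mile):

```python
LITERS_PER_GALLON = 3.78541
KM_PER_MILE = 1.609344

# Liters of fuel per 100 km, the measure most of the world uses.
# Unlike MPG it is proportional to fuel consumed, so it adds up sensibly.
def mpg_to_liters_per_100km(mpg):
    return 100 * LITERS_PER_GALLON / (mpg * KM_PER_MILE)

print(mpg_to_liters_per_100km(50))  # the 50 MPG hybrid: about 4.7 L/100km
print(mpg_to_liters_per_100km(3))   # the 3 MPG diesel truck: about 78 L/100km
```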

All right, let’s assume we’ve done that: The whole grid is nuclear, and everyone is a vegetarian driving an electric car. That’s a good start. But we can’t stop there. Because of the feedback loops involved, even if we reduce our emissions to near zero, the amount of carbon dioxide will continue to increase for decades. We need to somehow take out the carbon that is already there, which brings me to strategy 2, geoengineering.

2A. There are some exotic proposals out there for geoengineering (putting sulfur into the air to block out the Sun; what could possibly go wrong?), and maybe we’ll end up using some of them. I think iron fertilization of the oceans is one of the more promising options. But we need to be careful to make sure we actually know what these projects will do; we got into this mess by doing things without appreciating their long-run environmental impact, so let’s not make the same mistake again.

2B. But really, the most effective form of geoengineering is simply reforestation. Trees are very good at capturing carbon from the atmosphere; it’s what they evolved to do. So let’s plant trees—lots of trees. Many countries already have net positive forestation (such as the US as a matter of fact), but the world still has net deforestation, and that needs to be reversed.

But even if we do all that, at this point we probably can’t do enough fast enough to actually stop climate change from causing damage. After we’ve done our best to slow it down, we’re still going to need to respond to its effects and find ways to minimize the harm. That’s strategy 3, adaptation.

3A. Coastal regions around the world are going to have to turn into the Netherlands, surrounded by dikes and polders. First World countries already have the resources to do this, and will most likely do it on our own (many cities already have plans to); but other countries need to be given the resources to do it. We’re responsible for most of the emissions, and we have the most wealth, so we should pick up the tab for most of the adaptation.

3B. Some places aren’t going to be worth saving—so that means saving the people, by moving them somewhere else. We’re going to have global refugee crises, and we need to prepare for them, not in the usual way of “How can I clear my conscience while xenophobically excluding these people?” but by welcoming them with open arms. We are going to need to resettle tens of millions—possibly hundreds of millions—of people, and we need a process for doing that efficiently and integrating these people into the societies they end up living in. We must stop presuming that closed borders are the default and realize that the burden of proof was always on anyone who says that people should have different rights based on whether they were born on the proper side of an imaginary line. If open borders are utopian, then it is utopian we must be.

The bad news is that even if we do all these things, millions of people are still going to die from climate change—but a lot fewer millions than would if we didn’t.

And the really good news is that people are finally starting to do these things. It took a lot longer than it should, and there are still a lot of holdouts; but significant progress is already being made. There are a lot of reasons to be hopeful.

How not to do financial transaction tax

JDN 2457520

I strongly support the implementation of a financial transaction tax; like a basic income, it’s one of those economic policy ideas that are so brilliantly simple it’s honestly a little hard to believe how incredibly effective they are at making the world a better place. You mean we might be able to end stock market crashes just by implementing this little tax that most people will never even notice, and it will raise enough revenue to pay for food stamps? Yes, a financial transaction tax is that good.

So, keep that in mind when I say this:

TruthOut’s proposal for a financial transaction tax is somewhere between completely economically illiterate and outright insane.

They propose a 10% transaction tax on stocks and a 1% transaction tax on notional value of derivatives, then offer a “compromise” of 5% on stocks and 0.5% on derivatives. They make a bunch of revenue projections based on these that clearly amount to nothing but multiplying the current amount of transactions by the tax rate, which is so completely wrong we now officially have a left-wing counterpart to trickle-down voodoo economics.

Their argument is basically like this (I’m paraphrasing): “If we have to pay 5% sales tax on groceries, why shouldn’t you have to pay 5% on stocks?”

But that’s not how any of this works.

Demand for most groceries is very inelastic, especially in the aggregate. While you might change which groceries you’ll buy depending on their respective prices, and you may buy in bulk or wait for sales, over a reasonably long period (say a year) across a large population (say all of Michigan or all of the US), total amount of spending on groceries is extremely stable. People only need a certain amount of food, and they generally buy that amount and then stop.

So, if you implement a 5% sales tax that applies to groceries (actually sales tax in most states doesn’t apply to most groceries, but honestly it probably should—offset the regressiveness by providing more social services), people would just… spend about 5% more on groceries. Probably a bit less than that, actually, since suppliers would absorb some of the tax; but demand is much less elastic for groceries than supply, so buyers would bear most of the incidence of the tax. (It does not matter how the tax is collected; see my tax incidence series for further explanation of why.)

Other goods like clothing and electronics are a bit more elastic, so you’d get some deadweight loss from the sales tax; but at a typical 5% to 10% in the US this is pretty minimal, and even the hefty 20% or 30% VATs in some European countries only have a moderate effect. (Denmark’s 180% sales tax on cars seems a bit excessive to me, but it is Pigovian to disincentivize driving, so it also has very little deadweight loss.)

But what would happen if you implemented a 5% transaction tax on stocks? The entire stock market would immediately collapse.

A typical return on stocks is between 5% and 15% per year. As a rule of thumb, let’s say about 10%.

If you pay the 5% sales tax and trade once per year, the tax just cut your return in half.

If you pay the 5% sales tax and trade twice per year, the tax destroyed your return completely.

Even if you only trade once every five years, a 5% sales tax means that instead of your stocks being worth 61% more after those 5 years they are only worth 53% more. Your annual return has been reduced from 10% to 8.9%.
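These numbers are easy to verify; here is a minimal sketch in Python using the example’s round figures (a 10% gross annual return and a 5% tax paid once per trade):

```python
# Effective annual return if you earn `gross` per year but pay a
# transaction tax of `tax` once every `years` years when you trade.
def net_annual_return(gross, tax, years):
    growth = (1 + gross) ** years * (1 - tax)  # grow, then pay the tax
    return growth ** (1 / years) - 1           # annualize the net growth

print(net_annual_return(0.10, 0.05, 1))    # trade yearly: ~4.5%, half the return
print(net_annual_return(0.10, 0.05, 0.5))  # trade twice a year: slightly negative
print(net_annual_return(0.10, 0.05, 5))    # trade every 5 years: ~8.9%
```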

But in fact there are many perfectly legitimate reasons to trade as often as monthly, and a 5% tax would make monthly trading completely unviable.

Even if you could somehow stop everyone from pulling out all their money just before the tax takes effect, you would still completely dry up the stock market as a source of funding for all but the most long-term projects. Corporations would either need to finance their entire operations out of cash or bonds, or collapse and trigger a global depression.

Derivatives are even more extreme. The notional value of derivatives is often ludicrously huge; we currently have over a quadrillion dollars in notional value of outstanding derivatives. Assume that, say, 10% of those are traded every year, and we’re talking $100 trillion in notional value of transactions. At 0.5% you’re trying to take in a tax of $500 billion. That sounds fantastic—so much money!—but what you should really be thinking is that this creates an enormous avoidance incentive. You don’t think banks will find a way to restructure their trading practices—or stop trading altogether—to avoid this tax?
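The revenue projection being criticized here is just a static multiplication, which a few lines of Python make plain (the 10% annual turnover is the assumption from the text, not a measured figure):

```python
notional = 1e15   # over $1 quadrillion in outstanding notional value
turnover = 0.10   # assume 10% of notional is traded each year
tax_rate = 0.005  # the proposed 0.5% "compromise" rate

traded = notional * turnover   # about $100 trillion in annual transactions
revenue = traded * tax_rate    # about $500 billion of projected "revenue"
print(f"${traded:,.0f} traded, ${revenue:,.0f} projected")
# The flaw: this assumes trading volume doesn't respond to the tax at all.
```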

Honestly, maybe a total end to derivatives trading would be tolerable. I certainly think we need to dramatically reduce the amount of derivatives trading, and much of what is being traded—credit default swaps, collateralized debt obligations, synthetic collateralized debt obligations, etc.—really should not exist and serves no real function except to obscure fraud and speculation. (Credit default swaps are basically insurance you can buy on other people’s companies. There’s a reason you’re not allowed to buy insurance on other people’s stuff!) Interest rate swaps aren’t terrible (when they’re not being used to perpetrate the largest white-collar crime in history), but they also aren’t necessary. You might be able to convince me that commodity futures and stock options are genuinely useful, though even these are clearly overrated. (Fun fact: Futures markets have been causing financial crises since at least Classical Rome.) Exchange-traded funds are technically derivatives, and they’re just fine (actually ETFs are very low-risk, because they are inherently diversified—which is why you should probably be buying them); but actually their returns are more like stocks, so the 0.5% might not be insanely high in that case.

But stocks? We kind of need those. Equity financing has been the foundation of capitalism since the very beginning. Maybe we could conceivably go to a fully debt-financed system, but it would be a radical overhaul of our entire financial system and is certainly not something to be done lightly.

Indeed, TruthOut even seems to think we could apply the same sales tax rate to bonds, which means that debt financing would also collapse, and now we’re definitely talking about global depression. How exactly is anyone supposed to finance new investments, if they can’t sell stock or bonds? And a 5% tax on the face value of stock or bonds, for all practical purposes, is saying that you can’t sell stock or bonds. It would make no one want to buy them.

Wealthy investors buying of stocks and bonds is essentially no different than average folks buying food, clothing or other real “goods and services.”

Yes it is. It is fundamentally different.

People buy goods to use them. People buy stocks to make money selling them.

This seems perfectly obvious, but it is a vital distinction that seems to be lost on TruthOut.

When you buy an apple or a shoe or a phone or a car, you care how much it costs relative to how useful it is to you; if we make it a bit more expensive, that will make you a bit less likely to buy it—but probably not even one-to-one so that a 5% tax would reduce purchases by 5%; it would probably be more like a 2% reduction. Demand for goods is inelastic. Taxing them will raise a lot of revenue and not reduce the quantity purchased very much.

But when you buy a stock or a bond or an interest rate swap, you care how much it costs relative to what you will be able to sell it for—you care about not its utility but its return. So a 5% tax will reduce the amount of buying and selling by substantially more than 5%—it could well be 50% or even 100%. Demand for financial assets is elastic. Taxing them will not raise much revenue but will substantially reduce the quantity purchased.
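The contrast can be sketched with a constant-elasticity demand model; the elasticity values below are purely illustrative assumptions chosen to match the rough magnitudes in the text, not empirical estimates:

```python
# Under constant-elasticity demand, quantity scales as price^elasticity.
def quantity_change(price_increase, elasticity):
    return (1 + price_increase) ** elasticity - 1

# A 5% tax fully passed through as a 5% price increase:
groceries = quantity_change(0.05, -0.4)  # inelastic demand: roughly -2%
stocks = quantity_change(0.05, -15.0)    # highly elastic demand: roughly -52%
print(f"groceries: {groceries:.1%}, stocks: {stocks:.1%}")
```

The same 5% price change barely dents grocery purchases but cuts asset trading roughly in half—which is why the tax raises lots of revenue in one case and almost none in the other.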

Now, for some financial assets, we want to reduce the quantity purchased—the derivatives market is clearly too big, and high-frequency trading that trades thousands of times per second can do nothing but destabilize the financial system. Joseph Stiglitz supports a small financial transaction tax precisely because it would substantially reduce high-frequency trading, and he’s a Nobel Laureate as you may recall. Naturally, he was excluded from the SEC hearings on the subject, because reasons. But the figures Stiglitz is talking about (and I agree with) are on the order of 0.1% for stocks and 0.01% for derivatives—50 times smaller than what TruthOut is advocating.

At the end, they offer another “compromise”:

Okay, half it again, to a 2.5 percent tax on stocks and bonds and a 0.25 percent on derivative trades. That certainly won’t discourage stock and bond trading by the rich (not that that is an all bad idea either).

Yes it will. By a lot. That’s the whole point.

A financial transaction tax is a great idea whose time has come; let’s not ruin its reputation by setting it at a preposterous value. Just as a $15 minimum wage is probably a good idea but a $250 minimum wage is definitely a terrible idea, a 0.1% financial transaction tax could be very beneficial but a 5% financial transaction tax would clearly be disastrous.

Super PACs are terrible—but ineffective

JDN 2457516

It’s now beginning to look like an ongoing series: “Reasons to be optimistic about our democracy.”

Super PACs, in case you didn’t know, are a bizarre form of legal entity, established after the ludicrous Citizens United ruling (“Corporations are people” and “money is speech” are literally Orwellian), which allows corporations to donate essentially unlimited funds to political campaigns with minimal disclosure and zero accountability. This creates an arms race where even otherwise-honest candidates feel pressured to take more secret money just to keep up.

At the time, a lot of policy wonks said “Don’t worry, they already give tons of money anyway, what’s the big deal?”

Well, those wonks were wrong—it was a big deal. Corporate donations to political campaigns exploded in the era of Super PACs. The Citizens United ruling was made in 2010, and take a look at this graph of total “independent” (i.e., not tied to candidate or party) campaign spending (using data from OpenSecrets):


It’s a small sample size, to be sure, and campaign spending was already rising. But 2010 and 2014 were very high by the usual standards of midterm elections, and 2012 was absolutely unprecedented—over $1 billion spent on campaigns. Moreover, the only reason 2016 looks lower than 2012 is that we’re not done with 2016 yet; I’m sure it will rise a lot higher than it is now, and very likely overtake 2012. (And if it doesn’t, it’ll be because Bernie Sanders and Donald Trump made very little use of Super-PACs, for quite different reasons.) Total spending was projected to exceed $4 billion, though I doubt it will actually make it quite that high.

Worst of all, this money is all coming from a handful of billionaires. 41% of Super-PAC funds come from just 50 households. That’s fifty. Even including everyone living in those households, this group of people could easily fit inside an average lecture hall—and they account for two-fifths of independent campaign spending in the US.

Weirdest of all, there are still people who seem to think that the problem with American democracy is that it’s too hard for rich people to give huge amounts of money to political campaigns in secret, and they are trying to weaken our campaign spending regulations even more.

So that’s the bad news—but here’s the good news.

Super-PACs are ludicrously ineffective.

Hillary Clinton is winning, and will probably win the election; and she does have the most Super-PAC money among candidates still in the race (at $76 million, about what the Clintons themselves make in 3 years). Ted Cruz also has $63 million in Super-PAC money. But Bernie Sanders only has $600,000 in Super-PAC money (actually also about 3 times his household income, coincidentally), and Donald Trump only has $2.7 million. Both of these are less than John Kasich’s $13 million in Super-PAC spending, and yet Kasich and Cruz have now dropped out and only Trump remains.

But more importantly, the largest amount of Super-PAC money went to none other than Jeb Bush—a whopping $121 million—and it did basically nothing for him. Marco Rubio had $62 million in Super-PAC money, and he dropped out too. Martin O’Malley had more Super-PAC money than Bernie Sanders, and where is he now? In fact, literally every Republican candidate had more Super-PAC money than Bernie Sanders, and every Republican but Rick Santorum, Jim Gilmore, and George Pataki (you’re probably thinking: “Who?” Exactly.) had more Super-PAC money than Donald Trump.

Indeed, political spending in general is not very effective. Additional spending on political campaigns has minimal effects on election outcomes.

You wouldn’t immediately see that from our current Presidential race; while Rubio raised $117 million and Jeb! raised $155 million and both of them lost, the winners also raised a great deal. Hillary Clinton raised $256 million, Bernie Sanders raised $180 million, Ted Cruz raised $142 million, and Donald Trump raised $48 million. Even that last figure is so low mainly because Donald Trump is a master at getting free publicity; the media effectively gave Trump an astonishing $1.89 billion in free publicity. To be fair, a lot of that was bad publicity—but it still got his name and his ideas out there and didn’t cost him a dime.

So, just from the overall spending figures, it looks like maybe total campaign spending is important, even if Super-PACs in particular are useless.

But empirical research has shown that political spending has minimal effects on actual election outcomes. So ineffective, in fact, that a lot of economists are puzzled that there’s so much spending anyway. Here’s a paper arguing that once you include differences in advertising prices, political spending does matter. Here are two papers proposing different explanations for why incumbent spending appears to be less effective than challenger spending: this one says that it’s a question of accounting for how spending is caused by voter participation (rather than the reverse), while this one argues that the abuse of incumbent privileges like franking gives incumbents more real “spending” power. It’s easy to miss that both of them are trying to explain a basic empirical fact: candidates who spend a lot more still often lose.

Political advertising can be effective at changing minds, but only to a point.

The candidate who spends the most usually does win—but that’s because the candidate who spends the most usually raises the most, and the candidate who raises the most usually has the most support.

The model that makes the most sense to me is that political spending is basically a threshold; you need to spend enough that people know you exist, but beyond that additional spending won’t make much difference. In 1996 that threshold was estimated to be about $400,000 for a House election; that’s still only about $600,000 in today’s money.
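That inflation adjustment is easy to verify yourself. Here’s a quick back-of-the-envelope check, using approximate annual CPI-U index values (the specific index figures are my own assumption, not from the estimate cited above):

```python
# Back-of-the-envelope inflation adjustment of the 1996 threshold estimate,
# using approximate annual-average CPI-U index values.
CPI_1996 = 156.9   # approximate CPI-U annual average for 1996
CPI_2016 = 240.0   # approximate CPI-U annual average for 2016

threshold_1996 = 400_000  # estimated House-race visibility threshold, in 1996 dollars

# Scale by the ratio of price levels to get today's dollars.
threshold_today = threshold_1996 * (CPI_2016 / CPI_1996)
print(f"${threshold_today:,.0f}")  # comes out to roughly $600,000
```

With those index values the 1996 figure scales to a bit over $610,000, consistent with the rough $600,000 figure in the text.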

Campaign spending is more effective when there are caps on individual contributions; a lot of people find this counter-intuitive, but it makes perfect sense on a threshold model, because spending caps could hold candidates below the threshold. Limits on campaign spending have a large effect on spending, but a small effect on outcomes.

Does this mean we shouldn’t try to limit campaign spending? I don’t think so. It can still be corrupt and undesirable even if it isn’t all that effective.

But it is good news: You can’t actually just buy elections—not in America, not yet.