Nuclear power is safe. Why don’t people like it?

Sep 24, JDN 2457656

This post will have two parts, corresponding to each sentence. First, I hope to convince you that nuclear power is safe. Second, I’ll try to analyze some of the reasons why people don’t like it and what we might be able to do about that.

Depending on how familiar you are with the statistics on nuclear power, the idea that nuclear power is safe may strike you as either a completely ridiculous claim or an egregious understatement. If your primary familiarity with nuclear power safety is via the widely-publicized examples of Chernobyl, Three Mile Island, and more recently Fukushima, you may have the impression that nuclear power carries huge, catastrophic risks. (You may also be confusing nuclear power with nuclear weapons—nuclear weapons are indeed the greatest catastrophic risk on Earth today, but equating the two is like equating automobiles and machine guns because both of them are made of metal and contain lubricant, flammable materials, and springs.)

But in fact nuclear energy is astonishingly safe. Indeed, even those examples aren’t nearly as bad as people have been led to believe. Guess how many people died as a result of Three Mile Island, including estimated increased cancer deaths from radiation exposure?

Zero. There are zero confirmed deaths, and the consensus estimate of excess deaths from all causes combined attributable to the Three Mile Island incident is also zero.

What about Fukushima? Didn’t 10,000 people die there? From the tsunami, yes. But the nuclear accident resulted in zero fatalities. If anything, those 10,000 people were killed by coal—by climate change. They certainly weren’t killed by nuclear.

Chernobyl, on the other hand, did actually kill a lot of people. Chernobyl caused 31 confirmed direct deaths, as well as an estimated 4,000 excess deaths by all causes. On the one hand, that’s more than 9/11; on the other hand, it’s about a month of US car accidents. Imagine if people had the same level of panic and outrage at automobiles after a month of accidents that they did at nuclear power after Chernobyl.

The vast majority of nuclear accidents cause zero fatalities; other than Chernobyl, none have ever caused more than 10. Deepwater Horizon killed 11 people, and yet for some reason Americans did not unite in opposition against ever using oil (or even offshore drilling!) ever again.

In fact, even that isn’t fair to nuclear power, because we’re not including the thousands of lives saved every year by using nuclear instead of coal and oil.

Keep in mind, the WHO estimates 10 to 100 million excess deaths due to climate change over the 21st century. That’s an average of 100,000 to 1 million deaths every year. Nuclear power currently produces about 11% of the world’s electricity, so let’s do a back-of-the-envelope calculation for how many lives that’s saving. Assuming that additional climate change would be worse in direct proportion to the additional carbon emissions (which is conservative), and assuming that half that output would be replaced by coal or oil (also conservative, using Germany’s example), we’re looking at about a 6% increase in deaths due to climate change if all those nuclear power plants were closed. That’s 6,000 to 60,000 lives that nuclear power plants save every year.
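
Here is that arithmetic spelled out as a minimal sketch in Python, using only the rounded figures already stated above (the death-toll range, the 11% share, and the 50% replacement fraction); it is a rough calculation, not a climate model:

```python
# Back-of-the-envelope: lives saved per year by existing nuclear power.
# Assumptions from the text: climate change causes 100,000 to 1,000,000
# excess deaths per year; nuclear supplies ~11% of electricity; half of
# that would be replaced by coal or oil if the plants closed; and deaths
# scale in proportion to the added fossil-fuel burning.

climate_deaths_per_year = (100_000, 1_000_000)
nuclear_share = 0.11        # share of world electricity from nuclear
fossil_replacement = 0.5    # fraction replaced by coal or oil if closed

# Treat the added fossil share as the proportional increase in climate
# deaths, which rounds to the ~6% figure in the text.
added_fraction = nuclear_share * fossil_replacement   # 0.055

for deaths in climate_deaths_per_year:
    saved = added_fraction * deaths
    print(f"{deaths:>9,} climate deaths/yr -> {saved:,.0f} lives saved/yr by nuclear")
# Prints roughly 5,500 and 55,000, i.e., thousands to tens of thousands
# of lives per year, consistent with the 6,000-60,000 range above.
```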

I also haven’t included deaths due to pollution—note that nuclear power plants don’t pollute air or water whatsoever, and only produce very small amounts of waste that can be quite safely stored. Air pollution in all its forms is responsible for one in eight deaths worldwide. Let me say that again: One in eight of all deaths in the world is caused by air pollution—so this is on the order of 7 million deaths per year, every year. We burn our way through roughly a Holocaust every year. Most of this pollution is actually caused by burning wood—fireplaces, wood stoves, and bonfires are terrible for the air—and many countries would actually see a substantial reduction in their toxic pollution if they switched from wood to oil or even coal. But a large part of that pollution is caused by coal, and a nontrivial amount is caused by oil. Coal-burning factories and power plants are responsible for about 1 million deaths per year in China alone. Most of that pollution could be prevented if those power plants were nuclear instead.

Factor all that in, and nuclear power currently saves tens if not hundreds of thousands of lives per year, and expanding it to replace all fossil fuels could save millions more. Indeed, a more precise estimate of the benefits of nuclear power published a few years ago in Environmental Science and Technology is that nuclear power plants have saved some 1.8 million human lives since their invention, putting them on a par with penicillin and the polio vaccine.

So, I hope I’ve convinced you of the first proposition: Nuclear power plants are safe—and not just safe, but heroic, in fact one of the greatest life-saving technologies ever invented. So, why don’t people like them?

Unfortunately, I suspect that no amount of statistical data by itself will convince those who still feel a deep-seated revulsion to nuclear power. Even many environmentalists, people who could be nuclear energy’s greatest advocates, are often opposed to it. I read all the way through Naomi Klein’s This Changes Everything and never found even a single cogent argument against nuclear power; she simply takes it as obvious that nuclear power is “more of the same line of thinking that got us in this mess”. Perhaps because nuclear power could be enormously profitable for certain corporations (which is true; but then, it’s also true of solar and wind power)? Or because it also fits this narrative of “raping and despoiling the Earth” (sort of, I guess)? She never really does explain; I’m guessing she assumes that her audience will simply share her “gut feeling” intuition that nuclear power is dangerous and untrustworthy. One of the most important inconvenient truths for environmentalists is that nuclear power is not only safe, it is almost certainly our best hope for stopping climate change.

Perhaps all this is less baffling when we recognize that other heroic technologies are often also feared or despised for similarly bizarre reasons—vaccines, for instance.

First of all, human beings fear what we cannot understand, and while the human immune system is certainly immensely complicated, nuclear power is based on quantum mechanics, a realm of scientific knowledge so difficult and esoteric that it is frequently used as the paradigm example of something that is hard to understand. (As Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.”) Nor does it help that popular treatments of quantum physics typically bear about as much resemblance to the actual content of the theory as the X-Men films do to evolutionary biology, and con artists like Deepak Chopra take advantage of this confusion to peddle their quackery.

Nuclear radiation is also particularly terrifying because it is invisible and silent; while a properly-functioning nuclear power plant emits less ionizing radiation than the Capitol Building and eating a banana poses substantially higher radiation risk than talking on a cell phone, nonetheless there is real danger posed by ionizing radiation, and that danger is particularly terrifying because it takes a form that human senses cannot detect. When you are burned by fire or cut by a knife, you know immediately; but gamma rays could be coursing through you right now and you’d feel no different. (Huge quantities of neutrinos are coursing through you, but fear not, for they’re completely harmless.) The symptoms of severe acute radiation poisoning also take a particularly horrific form: After the initial phase of nausea wears off, you can enter a “walking ghost phase”, where your eventual death is almost certain due to your compromised immune and digestive systems, but your current condition is almost normal. This makes the prospect of death by nuclear accident a particularly vivid and horrible image.

Vividness makes ideas more available to our memory; and thus, by the availability heuristic, we automatically infer that it must be more probable than it truly is. You can think of horrific nuclear accidents like Chernobyl, and all the carnage they caused; but all those millions of people choking to death in China don’t make for a compelling TV news segment (or at least, our TV news doesn’t seem to think so). Vividness doesn’t actually seem to make things more persuasive, but it does make them more memorable.

Yet even if we allow for the possibility that death by radiation poisoning is somewhat worse than death by coal pollution (if I had to choose between the two, okay, maybe I’d go with the coal), surely it’s not ten thousand times worse? Surely it’s not worth sacrificing entire cities full of people to coal in order to prevent a handful of deaths by nuclear energy?

Another reason that has been proposed is a sense that we can control risk from other sources, but a nuclear meltdown would be totally outside our control. Perhaps that is the perception, but if you think about it, it really doesn’t make a lot of sense. If there’s a nuclear meltdown, emergency services will report it, and you can evacuate the area. Yes, the radiation moves at the speed of light; but it also dissipates as the inverse square of distance, so if you just move further away you can get a lot safer quite quickly. (Think about the brightness of a lamp in your face versus across a football field. Radiation works the same way.) The damage is also cumulative, so the radiation risk from a meltdown is only going to be serious if you stay close to the reactor for a sustained period of time. Indeed, it’s much easier to avoid nuclear radiation than it is to avoid air pollution; you can’t just stand behind a concrete wall to shield against air pollution, and moving further away isn’t possible if you don’t know where it’s coming from. Control would explain why we fear cars less than airplanes (which is also statistically absurd), but it really can’t explain why nuclear power scares people more than coal and oil.
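
To put a number on the inverse-square point, here is a tiny illustration; the distances and the 10-meter reference point are arbitrary, chosen only to show the scaling:

```python
# Radiation intensity from a roughly point-like source falls off as 1/r^2,
# so modest increases in distance cut the dose dramatically.

def relative_intensity(distance_m, reference_m=10):
    """Dose rate relative to what you would receive at the reference distance."""
    return (reference_m / distance_m) ** 2

for d in [10, 100, 1_000, 10_000]:
    print(f"{d:>6} m: {relative_intensity(d):.6f} of the dose rate at 10 m")
# At 100 m you receive 1% of the 10 m dose rate; at 10 km, one-millionth.
```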

Another important factor may be an odd sort of bipartisan consensus: While the Left hates nuclear power because it makes corporations profitable or because it’s unnatural and despoils the Earth or something, the Right hates nuclear power because it requires substantial government involvement and might displace their beloved fossil fuels. (The Right’s deep, deep love of the fossil fuel industry now borders on the pathological. Even now that they are obviously economically inefficient and environmentally disastrous, right-wing parties around the world continue to defend enormous subsidies for oil and coal companies. Corruption and regulatory capture could partly explain this, but only partly. Campaign contributions can’t explain why someone would write a book praising how wonderful fossil fuels are and angrily denouncing anyone who would dare criticize them.) So while the two sides may hate each other in general and disagree on most other issues—including of course climate change itself—they can at least agree that nuclear power is bad and must be stopped.

Where do we go from here, then? I’m not entirely sure. As I said, statistical data by itself clearly won’t be enough. We need to find out what it is that makes people so uniquely terrified of nuclear energy, and we need to find a way to assuage those fears.

And we must do this now. For every day we don’t—every day we postpone the transition to a zero-carbon energy grid—is another thousand people dead.

Toward an economics of social norms

Sep 17, JDN 2457649

It is typical in economics to assume that prices are set by perfect competition in markets with perfect information. This is obviously ridiculous, so many economists do go further and start looking into possible distortions of the market, such as externalities and monopolies. But almost always the assumption is still that human beings are neoclassical rational agents, what I call “infinite identical psychopaths”, selfish profit-maximizers with endless intelligence and zero empathy.

What happens when we recognize that human beings are not like this, but in fact are empathetic, social creatures, who care about one another and work toward the interests of (what they perceive to be) their tribe? How are prices really set? What actually decides what is made and sold? What does economics become once you understand sociology? (The good news is that experiments are now being done to find out.)

Presumably some degree of market competition is involved, and no small amount of externalities and monopolies. But one of the very strongest forces involved in setting prices in the real world is almost completely ignored, and that is social norms.

Social norms are tremendously powerful. They will drive us to bear torture, fight and die on battlefields, even detonate ourselves as suicide bombs. When we talk about “religion” or “ideology” motivating people to do things, really what we are talking about is social norms. While some weaker norms can be overridden, no amount of economic incentive can ever override a social norm at its full power. Moreover, most of our behavior in daily life is driven by social norms: How to dress, what to eat, where to live. Even the fundamental structure of our lives is written by social norms: Go to school, get a job, get married, raise a family.

Even academic economists, who imagine themselves one part purveyor of ultimate wisdom and one part perfectly rational agent, are clearly strongly driven by social norms—what problems are “interesting”, which researchers are “renowned”, what approaches are “sensible”, what statistical methods are “appropriate”. If economists were perfectly rational, dynamic stochastic general equilibrium models would be in the dustbin of history (because, like string theory, they have yet to lead to a single useful empirical prediction), research journals would not be filled with endless streams of irrelevant but impressive equations (I recently read one that basically spent half a page of calculus re-deriving the concept of GDP—and computer-generated gibberish has been published, because its math looked so impressive), and instead of frequentist p-values (and often misinterpreted at that), all the statistics would be written in the form of Bayesian log-odds.

Indeed, in light of all this, I often like to say that to a first approximation, all human behavior is social norms.

How does this affect buying and selling? Well, first of all, there are some things we refuse to buy and sell, or at least that most of us refuse to buy and sell, and that we use social pressure, public humiliation, or even the force of law to prevent others from trading. You’re not supposed to sell children. You’re not supposed to sell your vote. You’re not even supposed to sell sexual favors (though every society has always had a large segment of people who do, and more recently people are becoming more open to the idea of at least decriminalizing it). If we were neoclassical rational agents, we would have no such qualms; if we want something and someone is willing to sell it to us, we’ll buy it. But as actual human beings with emotions and social norms, we recognize that there is something fundamentally different about selling your vote as opposed to selling a shirt or a television. It’s not always immediately obvious where to draw the line, which is why sex work can be such a complicated issue (You can’t get paid to have sex… unless someone is filming it?). Different societies may do it differently: Part of the challenge of fighting corruption in Third World countries is that much of what we call corruption—and which actually is harmful to long-run economic development—isn’t perceived as “corruption” by the people involved in it, but simply as social custom (“Of course I’d hire my cousin! What kind of cousin would I be if I didn’t?”). Yet despite all that, almost everyone agrees that there is a line to be drawn. So there are whole markets that theoretically could exist, but don’t, or only exist as tiny black markets most people never participate in, because we consider selling those things morally wrong. Recently a whole subfield of cognitive economics has emerged studying these repugnant markets.

Even if a transaction is not considered so repugnant as to be unacceptable, there are also other classes of goods that are in some sense unsavory; something you really shouldn’t buy, but you’re not a monster for doing so. These are often called sin goods, and they have always included drugs, alcohol, and gambling—and I do mean always, as every human civilization has had these things—they include prostitution where it is legal, and as social norms change they are now beginning to include oil and coal as well (which can only be good for the future of Earth’s climate). Sin goods are systematically more expensive than they should be for their marginal cost, because most people are unwilling to participate in selling them. As a result, the financial returns for producing sin goods are systematically higher. Actually, this could partially explain why Wall Street banks are so profitable; when the banking system is as corrupt as it is—and you’re not imagining that; banks have been caught laundering money for terrorists—then banking becomes a sin good, and good people don’t want to participate in it. Or perhaps the effect runs the other way around: Banking has been viewed as sinful for centuries (in Medieval times, usury was punished much the same way as witchcraft), and as a result only the sort of person who doesn’t care about social and moral norms becomes a banker—and so the banking system becomes horrifically corrupt. Is this a reason for good people to force ourselves to become bankers? Or is there another way—perhaps credit unions?

There are other ways that social norms drive prices as well. We have a concept of a “fair wage”, which is quite distinct from the economic concept of a “market-clearing wage”. When people ask whether someone’s wage is fair, they don’t look at supply and demand and try to determine whether there are too many or too few people offering that service. They ask themselves what the labor is worth—what value it has added—and how hard that person has worked to do it—what cost it bore. Now, these aren’t totally unrelated to supply and demand (people are less likely to supply harder work, people are more likely to demand higher value), so it’s conceivable that these heuristics could lead us to more or less achieve the market-clearing wage most of the time. But there are also some systematic distortions to consider.

Perhaps the most important way fairness matters in economics is necessities: Basic requirements for human life such as food, housing, and medicine. The structure of our society also makes transportation, education, and Internet access increasingly necessary for basic functioning. From the perspective of an economist, it is a bit paradoxical how angry people get when the price of something important (such as healthcare) is increased: If it’s extremely valuable, shouldn’t you be willing to pay more? Why does it bother you less when something like a Lamborghini or a Rolex rises in price, something that almost certainly wasn’t even worth its previous price? You’re going to buy the necessities anyway, right? Well, as far as most economists are concerned, that’s all that matters—what gets bought and sold. But of course as a human being I do understand why people get angry about these things, and it is because they have to buy them anyway. When someone like Martin Shkreli raises the prices on basic goods, we feel exploited. There’s even a way to make this economically formal: When demand is highly inelastic, we are rightly very sensitive to the possibility of a monopoly, because monopolies under inelastic demand can extract huge profits and cause similarly huge amounts of damage to the welfare of their customers. That isn’t quite how most people would put it, but I think that has something to do with the ultimate reason we evolved that heuristic: It’s dangerous to let someone else control your basic necessities, because that gives them enormous power to exploit you. If they control things that aren’t as important to you, that doesn’t matter so much, because you can always do without if you must. So a norm that keeps businesses from overcharging on necessities is very important—and probably not as strong anymore as it should be.
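
The textbook way to make that formal is the Lerner condition: a profit-maximizing monopolist sets its markup so that (P − MC)/P = 1/|elasticity|, which means the less elastic the demand, the larger the markup it can sustain. The sketch below is a standard result with made-up numbers for illustration, not something taken from this post:

```python
# Lerner condition for a profit-maximizing monopolist: (P - MC) / P = 1 / |e|.
# The closer a good is to a pure necessity (|e| near 1), the larger the
# markup. Marginal cost and elasticities below are illustrative only.

def monopoly_price(marginal_cost, elasticity):
    """Price implied by the Lerner condition; requires |elasticity| > 1."""
    e = abs(elasticity)
    if e <= 1:
        raise ValueError("a monopolist never operates where |elasticity| <= 1")
    return marginal_cost * e / (e - 1)

mc = 10.0  # illustrative marginal cost
for e in [5.0, 2.0, 1.2, 1.05]:  # from very elastic toward necessity-like demand
    p = monopoly_price(mc, e)
    print(f"|elasticity| = {e:>4}: price = {p:7.2f}, markup = {100 * (p - mc) / p:.0f}%")
# As demand approaches unit elasticity, the sustainable markup grows without
# bound, which is the formal version of "monopolies over necessities can
# extract huge profits."
```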

Another very important way that fairness and markets can be misaligned is talent: What if something is just easier for one person than another? If you achieve the same goal with half the work, should you be rewarded more for being more efficient, or less because you bore less cost? Neoclassical economics doesn’t concern itself with such questions, asking only if supply and demand reached equilibrium. But we as human beings do care about such things; we want to know what wage a person deserves, not just what wage they would receive in a competitive market.

Could we be wrong to do that? Might it be better if we just let the market do its work? In some cases I think that may actually be true. Part of why CEO pay is rising so fast despite being uncorrelated with corporate profitability or even negatively correlated is that CEOs have convinced us (or convinced their boards of directors) that this is fair, that they deserve more stock options. They even convince them that their pay is based on performance, by using highly distorted measures of performance. If boards thought more like economic rational agents, when a CEO asked for more pay they’d ask: “What other company gave you a higher offer?” and if the CEO didn’t have an answer, they’d laugh and refuse the raise. Because in purely economic terms, that is all a salary does: it keeps you from quitting to work somewhere else. The competitive mechanism of the market is then supposed to ensure, through that channel alone, that your wage aligns with your marginal cost and marginal productivity.

On the other hand, there are many groups of people who simply aren’t doing very well in the market: Women, racial minorities, people with disabilities. There are a lot of reasons for this, some of which might go away if markets were made more competitive—the classic argument that competitive markets reward companies that don’t discriminate—but many clearly wouldn’t. Indeed, that argument was never as strong as it at first appears; in a society where social norms are strongly in favor of bigotry, it can be completely economically rational to participate in bigotry to avoid being penalized. When Chick-Fil-A was revealed to have donated to anti-LGBT political groups, many people tried to boycott—but their sales actually increased from the publicity. Honestly it’s a bit baffling that they promised not to donate to such causes anymore; it was apparently a profitable business decision to be revealed as supporters of bigotry. And even when discrimination does hurt economic performance, companies are run by human beings, and they are still quite capable of discriminating regardless. Indeed, the best evidence we have that discrimination is inefficient comes from… businesses that persist in discriminating despite the fact that it is inefficient.

But okay, suppose we actually did manage to make everyone compensated according to their marginal productivity. (Or rather, what Rawls derided: “From each according to his marginal productivity, to each according to his threat advantage.”) The market would then clear and be highly efficient. Would that actually be a good thing? I’m not so sure.

A lot of people are highly unproductive through no fault of their own—particularly children and people with disabilities. Much of this is not discrimination; it’s just that they aren’t as good at providing services. Should we simply leave them to fend for themselves? Then there’s the key point about what marginal means in this case—it means “given what everyone else is doing”. But that means that you can be made obsolete by someone else’s actions, and in this era of rapid technological advancement, jobs become obsolete faster than ever. Unlike a lot of people, I recognize that it makes no sense to keep people working at jobs that can be automated—the machines are better. But still, what do we do with the people whose jobs have been eliminated? Do we treat them as worthless? When automated buses become affordable—and they will; I give it 20 years—do we throw the human bus drivers under them?

One way out is of course a basic income: Let the market wage be what it will, and then use the basic income to provide for what human beings deserve irrespective of their market productivity. I definitely support a basic income, of course, and this does solve the most serious problems like children and quadriplegics starving in the streets.

But as I read more of the arguments by people who favor a job guarantee instead of a basic income, I begin to understand better why they are uncomfortable with the idea: It doesn’t seem fair. A basic income breaks once and for all the link between “a fair day’s work” and “a fair day’s wage”. It runs counter to this very deep-seated intuition most people have that money is what you earn—and thereby deserve—by working, and only by working. That is an extremely powerful social norm, and breaking it will be very difficult; so it’s worth asking: Should we even try to break it? Is there a way to achieve a system where markets are both efficient and fair?

I’m honestly not sure; but I do know that we could make substantial progress from where we currently stand. Most billionaire wealth is pure rent in the economic sense: It’s received by corruption and market distortion, not by efficient market competition. Most poverty is due to failures of institutions, not lack of productivity of workers. As George Monbiot famously wrote, “If wealth was the inevitable result of hard work and enterprise, every woman in Africa would be a millionaire.” Most of the income disparity between White men and others is due to discrimination, not actual skill—and what skill differences there are are largely the result of differences in education and upbringing anyway. So if we do in fact correct these huge inefficiencies, we will also be moving toward fairness at the same time. But still that nagging thought remains: When all that is done, will there come a day when we must decide whether we would rather have an efficient economy or a just society? And if it does, will we decide the right way?

Zootopia taught us constructive responses to bigotry

Sep 10, JDN 2457642

Zootopia wasn’t just a good movie; Zootopia was a great movie. I’m not just talking about its grosses (over $1 billion worldwide) or its ratings (8.1 on IMDB; 98% from critics and 93% from viewers on Rotten Tomatoes; 78 from critics and 8.8 from users on Metacritic). No, I’m talking about its impact on the world. This movie isn’t just a fun and adorable children’s movie (though it is that). This movie is a work of art that could have profound positive effects on our society.

Why? Because Zootopia is about bigotry—and more than that, it doesn’t just say “bigotry is bad, bigots are bad”; it provides us with a constructive response to bigotry, and forces us to confront the possibility that sometimes the bigots are us.

Indeed, it may be no exaggeration (though I’m sure I’ll get heat on the Internet for suggesting it) to say that Zootopia has done more to fight bigotry than most social justice activists will achieve in their entire lives. Don’t get me wrong, some social justice activists have done great things; and indeed, I may have to count myself in this “most activists” category, since I can’t point to any major accomplishments I’ve yet made in social justice.

But one of the biggest problems I see in the social justice community is the tendency to exclude and denigrate (in sociology jargon, “other” as a verb) people for acts of bigotry, even quite mild ones. Make one vaguely sexist joke, and you may as well be a rapist. Use racially insensitive language by accident, and clearly you are a KKK member. Say something ignorant about homosexuality, and you may as well be Rick Santorum. It becomes less about actually moving the world forward, and more about reaffirming our tribal unity as social justice activists. We are the pure ones. We never do wrong. All the rest of you are broken, and the only way to fix yourself is to become one of us in every way.

In the process of fighting tribal bigotry, we form our own tribe and become our own bigots.

Zootopia offers us another way. If you haven’t seen it, go rent it on DVD or stream it on Netflix right now. Seriously, this blog post will be here when you get back. I’m not going to play any more games with “spoilers!” though. It is definitely worth seeing, and from this point forward I’m going to presume you have.

The brilliance of Zootopia lies in the fact that it made bigotry what it is—not some evil force that infests us from outside, nor something that only cruel, evil individuals would ever partake in, but thoughts and attitudes that we all may have from time to time, that come naturally, and even in some cases might be based on a kernel of statistical truth. Judy Hopps is prey; she grew up in a rural town surrounded by others of her own species (with a population the size of New York City according to the sign, because this is still sometimes a silly Disney movie). She only knew a handful of predators growing up, yet when she moves to Zootopia suddenly she’s confronted with thousands of them, all around her. She doesn’t know what most predators are like, or how best to deal with them.

What she does know is that her ancestors were terrorized, murdered, and quite literally eaten by the ancestors of predators. Her instinctual fear of predators isn’t something utterly arbitrary; it was written into the fabric of her DNA by her ancestral struggle for survival. She has a reason to hate and fear predators that, on its face, actually seems to make sense.

And when there is a spree of murders, all committed by predators, it feels natural to us that Judy would fall back on her old prejudices; indeed, the brilliance of it is that they don’t immediately feel like prejudices. It takes us a moment to let her off-the-cuff comments at the press conference sink in (and Nick’s shocked reaction surely helps), before we realize that was really bigoted. Our adorable, innocent, idealistic, beloved protagonist is a bigot!

Or rather, she has done something bigoted. Because she is such a sympathetic character, we avoid the implication that she is a bigot, that this is something permanent and irredeemable about her. We have already seen the good in her, so we know that this bigotry isn’t what defines who she is. And in the end, she realizes where she went wrong and learns to do better. Indeed, it is ultimately revealed that the murders were orchestrated by someone whose goal was specifically to trigger those ancient ancestral feuds, and Judy reveals that plot and ultimately ends up falling in love with a predator herself.

What Zootopia is really trying to tell us is that we are all Judy Hopps. Every one of us most likely harbors some prejudiced attitude toward someone. If it’s not Black people or women or Muslims or gays, well, how about rednecks? Or Republicans? Or (perhaps the hardest for me) Trump supporters? If you are honest with yourself, there is probably some group of people on this planet that you harbor attitudes of disdain or hatred toward that nonetheless contains a great many good people who do not deserve your disdain.

And conversely, all bigots are Judy Hopps too, or at least the vast majority of them. People don’t wake up in the morning concocting evil schemes for the sake of being evil like cartoon supervillains. (Indeed, perhaps the greatest thing about Zootopia is that it is a cartoon in the sense of being animated, but it is not a cartoon in the sense of being morally simplistic. Compare Captain Planet, wherein polluters aren’t hardworking coal miners with no better options or even corrupt CEOs out to make an extra dollar to go with their other billion; no, they pollute on purpose, for no reason, because they are simply evil. Now that is a cartoon.) Normal human beings don’t plan to make the world a worse place. A handful of psychopaths might, but even then I think it’s more that they don’t care; they aren’t trying to make the world worse, they just don’t particularly mind if they do, as long as they get what they want. Robert Mugabe and Kim Jong-un are despicable human beings with the blood of millions on their hands, but even they aren’t trying to make the world worse.

And thus, if your theory of bigotry requires that bigots are inhuman monsters who harm others by their sheer sadistic evil, that theory is plainly wrong. Actually I think when stated outright, hardly anyone would agree with that theory; but the important thing is that we often act as if we do. When someone does something bigoted, we shun them, deride them, push them as far as we can to the fringes of our own social group or even our whole society. We don’t say that your statement was racist; we say you are racist. We don’t say your joke was sexist; we say you are sexist. We don’t say your decision was homophobic; we say you are homophobic. We define bigotry as part of your identity, something as innate and ineradicable as your race or sex or sexual orientation itself.

I think I know why we do this: It is to protect ourselves from the possibility that we ourselves might sometimes do bigoted things. Because only bigots do bigoted things, and we know that we are not bigots.

We laugh at this when someone else does it: “But some of my best friends are Black!” “Happy #CincoDeMayo; I love Hispanics!” But that is the very same psychological defense mechanism we’re using ourselves, albeit in a more extreme application. When we commit an act that is accused of being bigoted, we begin searching for contextual evidence outside that act to show that we are not bigoted. The truth we must ultimately confront is that this is irrelevant: The act can still be bigoted even if we are not overall bigots—for we are all Judy Hopps.

This seems like terrible news, even when delivered by animated animals (or fuzzy muppets in Avenue Q), because we tend to hear it as “We are all bigots.” We hear this as saying that bigotry is inevitable, inescapable, literally written into the fabric of our DNA. At that point, we may as well give up, right? It’s hopeless!

But that much we know can’t be true. It could be (indeed, likely is) true that some amount of bigotry is inevitable, just as no country has ever managed to reach zero homicide or zero disease. But just as rates of homicide and disease have precipitously declined with the advancement of human civilization (starting around industrial capitalism, as I pointed out in a previous post!), so indeed have rates of bigotry, at least in recent times.

For goodness’ sake, it used to be a legal, regulated industry to buy and sell other human beings in the United States! This was seen as normal; indeed many argued that it was economically indispensable.

Is 1865 too far back for you? How about racially segregated schools, which were only eliminated from US law in 1954, a time when my parents were both alive? (To be fair, only barely; my father was a month old.) Yes, even today the racial composition of our schools is far from evenly mixed; but it used to be a matter of law that Black children could not go to school with White children.

Women were only granted the right to vote in the US in 1920. My parents weren’t alive yet, but there definitely are people still alive today who were children when the Nineteenth Amendment was ratified.

Same-sex marriage was not legalized across the United States until last year. My own life plans were suddenly and directly affected by this change.

We have made enormous progress against bigotry, in a remarkably short period of time. It has been argued that social change progresses by the death of previous generations; but that simply can’t be true, because we are moving much too fast for that! Attitudes toward LGBT people have improved dramatically in just the last decade.

Instead, it must be that we are actually changing people’s minds. Not everyone’s, to be sure; and often not as quickly as we’d like. But bit by bit, we tear bigotry down, like people tearing off tiny pieces of the Berlin Wall in 1989.

It is important to understand what we are doing here. We are not getting rid of bigots; we are getting rid of bigotry. We want to convince people, “convert” them if you like, not shun them or eradicate them. And we want to strive to improve our own behavior, because we know it will not always be perfect. By forgiving others for their mistakes, we can learn to forgive ourselves for our own.

It is only by talking about bigoted actions and bigoted ideas, rather than bigoted people, that we can hope to make this progress. Someone can’t change who they are, but they can change what they believe and what they do. And along those same lines, it’s important to be clear about detailed, specific actions that people can take to make themselves and the world better.

Don’t just say “Check your privilege!” which at this point is basically a meaningless Applause Light. Instead say “Here are some articles I think you should read on police brutality, including this one from The American Conservative. And there’s a Black Lives Matter protest next weekend, would you like to join me there to see what we do?” Don’t just say “Stop being so racist toward immigrants!”; say “Did you know that about a third of undocumented immigrants are college students on overstayed visas? If we deport all these people, won’t that break up families?” Don’t try to score points. Don’t try to show that you’re the better person. Try to understand, inform, and persuade. You are talking to Judy Hopps, for we are all Judy Hopps.

And when you find false beliefs or bigoted attitudes in yourself, don’t deny them, don’t suppress them, don’t make excuses for them—but also don’t hate yourself for having them. Forgive yourself for your mistake, and then endeavor to correct it. For we are all Judy Hopps.

The high cost of frictional unemployment

Sep 3, JDN 2457635

I had wanted to open this post with an estimate of the number of people in the world, or at least in the US, who are currently between jobs. It turns out that such estimates are essentially nonexistent. The Bureau of Labor Statistics maintains a detailed database of US unemployment; they don’t estimate this number. We have this concept in macroeconomics of frictional unemployment, the unemployment that results from people switching jobs; but nobody seems to have any idea how common it is.

I often hear a ballpark figure of about 4-5%, which is related to a notion that “full employment” should really be about 4-5% unemployment because otherwise we’ll trigger horrible inflation or something. There is almost no evidence for this. In fact, the US unemployment rate has gotten as low as 2.5%, and before that was stable around 3%. This was during the 1950s, the era of the highest income tax rates ever imposed in the United States, a top marginal rate of 92%. Coincidence? Maybe. Obviously there were a lot of other things going on at the time. But it sure does hurt the argument that high income taxes “kill jobs”, don’t you think?

Indeed, it may well be that the rate of frictional unemployment varies all the time, depending on all sorts of different factors. But here’s what we do know: Frictional unemployment is a serious problem, and yet most macroeconomists basically ignore it.

Talk to most macroeconomists about “unemployment”, and they will assume you mean either cyclical unemployment (the unemployment that results from recessions and bad fiscal and monetary policy responses to them), or structural unemployment (the unemployment that results from systematic mismatches between worker skills and business needs). If you specifically mention frictional unemployment, the response is usually that it’s no big deal and there’s nothing we can do about it anyway.

Yet at least when we aren’t in a recession, frictional unemployment very likely accounts for the majority of unemployment, and thus probably the majority of misery created by unemployment. (Not necessarily, since it probably doesn’t account for much long-term unemployment, which is by far the worst.) And it is quite clear to me that there are things we can do about it—they just might be difficult and/or expensive.

Most of you have probably changed jobs at least once. Many of you have, like me, moved far away to a new place for school or work. Think about how difficult that was. There is the monetary cost, first of all; you need to pay for the travel of course, and then usually leases and paychecks don’t line up properly for a month or two (for some baffling and aggravating reason, UCI won’t actually pay me my paychecks until November, despite demanding rent starting the last week of July!). But even beyond that, you are torn from your social network and forced to build a new one. You have to adapt to living in a new place, which may have differences in culture and climate. Bureaucracy often makes it difficult to change over documentation such as your ID and your driver’s license.

And that’s assuming that you already found a job before you moved, which isn’t always an option. Many people move to new places and start searching for jobs when they arrive, which adds an extra layer of risk and difficulty above and beyond the transition itself.

With all this in mind, the wonder is that anyone is willing to move at all! And this is probably a large part of why people are so averse to losing their jobs even when it is clearly necessary; the frictional unemployment carries enormous real costs. (That and loss aversion, of course.)

What could we do, as a matter of policy, to make such transitions easier?

Well, one thing we could do is expand unemployment insurance, which reduces the cost of losing your job (which, despite the best efforts of Republicans in Congress, we ultimately did do in the Second Depression). We could expand unemployment insurance to cover voluntary quits. Right now, quitting voluntarily makes you forgo all unemployment benefits, which employers pay for in the form of insurance premiums; so an employer is much better off making your life miserable until you quit than they are laying you off. They could also fire you for cause, if they can find a cause (and usually there’s something they could trump up enough to get rid of you, especially if you’re not prepared for the protracted legal battle of a wrongful termination lawsuit). The reasoning of our current system appears to be something like this: Only lazy people ever quit jobs, and why should we protect lazy people? This is utter nonsense and it needs to go. Many states already have no-fault divorce and no-fault auto collision insurance; it’s time for no-fault employment termination.

We could establish a basic income of course; then when you lose your job your income would go down, but to a higher floor where you know you can meet certain basic needs. We could provide subsidized personal loans, similar to the current student loan system, that allow people to bear income gaps without losing their homes or paying exorbitant interest rates on credit cards.

We could use active labor market programs to match people with jobs, or train them with the skills needed for emerging job markets. Denmark has extensive active labor market programs (they call it “flexicurity”), and Denmark’s unemployment rate was 2.4% before the Great Recession, hit a peak of 6.2%, and has now recovered to 4.2%. What Denmark calls a bad year, the US calls a good year—and Greece fantasizes about as something they hope one day to achieve. #ScandinaviaIsBetter once again, and Norway fits this pattern also, though to be fair Sweden’s unemployment rate is basically comparable to the US or even slightly worse (though it’s still nothing like Greece).

Maybe it’s actually all right that we don’t have estimates of the frictional unemployment rate, because the goal really isn’t to reduce the number of people who are unemployed; it’s to reduce the harm caused by unemployment. Most of these interventions would very likely increase the rate of frictional unemployment, as people who always wanted to try to find better jobs but could never afford to would now be able to—but they would dramatically reduce the harm caused by that unemployment.

This is a more general principle, actually; it’s why we should basically stop taking seriously this argument that social welfare benefits destroy work incentives. That may well be true; so what? Maximizing work incentives was never supposed to be a goal of public policy, as far as I can tell. Maximizing human welfare is the goal, and the only way a welfare program could reduce work incentives is by making life better for people who aren’t currently working, and thereby reducing the utility gap between working and not working. If your claim is that the social welfare program (and its associated funding mechanism, i.e. taxes, debt, or inflation) would make life sufficiently worse for everyone else that it’s not worth it, then say that (and for some programs that might actually be true). But in and of itself, making life better for people who don’t work is a benefit to society. Your supposed downside is in fact an upside. If there’s a downside, it must be found elsewhere.

Indeed, I think it’s worth pointing out that slavery maximizes work incentives. If you beat or kill people who don’t work, sure enough, everyone works! But that is not even an efficient economy, much less a just society. To be clear, I don’t think most people who say they want to maximize work incentives would actually support slavery, but that is the logical extent of the assertion. (Also, many Libertarians, often the first to make such arguments, do have a really bizarre attitude toward slavery; taxation is slavery, regulation is slavery, conscription is slavery—the last not quite as ridiculous—but actual forced labor… well, that really isn’t so bad, especially if the contract is “voluntary”. Fortunately some Libertarians are not so foolish.) If your primary goal is to make people work as much as possible, slavery would be a highly effective way to achieve that goal. And that really is the direction you’re heading when you say we shouldn’t do anything to help starving children lest their mothers have insufficient incentive to work.

More people not working could have a downside, if it resulted in less overall production of goods. But even in the US, one of the most efficient labor markets in the world, the system of job matching is still so ludicrously inefficient that people have to send out dozens if not hundreds of applications to jobs they barely even want, and there are still 1.4 times as many job seekers as there are openings (at the trough of the Great Recession, the ratio was 6.6 to 1). There’s clearly a lot of space here to improve the matching efficiency, and simply giving people more time to search could make a big difference there. Total output might decrease for a little while during the first set of transitions, but afterward people would be doing jobs they want, jobs they care about, jobs they’re good at—and people are vastly more productive under those circumstances. It’s quite likely that total employment would decrease, but productivity would increase so much that total output increased.

Above all, people would be happier, and that should have been our goal all along.

The replication crisis, and the future of science

Aug 27, JDN 2457628 [Sat]

After settling in a little bit in Irvine, I’m now ready to resume blogging, but for now it will be on a reduced schedule. I’ll release a new post every Saturday, at least for the time being.

Today’s post was chosen by Patreon vote, though only one person voted (this whole Patreon voting thing has not been as successful as I’d hoped). It’s about something we scientists really don’t like to talk about, but definitely need to: We are in the middle of a major crisis of scientific replication.

Whenever large studies are conducted attempting to replicate published scientific results, the success rate is almost always dismal.

Psychology is the one everyone likes to pick on, because their record is particularly bad. Only 39% of studies were really replicated with the published effect size, though a further 36% were at least qualitatively but not quantitatively similar. Yet economics has its own replication problem, and even medical research is not immune to replication failure.

It’s important not to overstate the crisis; the majority of scientific studies do at least qualitatively replicate. We are doing better than flipping a coin, which is better than one can say of financial forecasters.

There are three kinds of replication, and only one of them should be expected to give near-100% results. That kind is reanalysis—when you take the same data and use the same methods, you absolutely should get the exact same results. I favor making reanalysis a routine requirement of publication; if we can’t get your results by applying your statistical methods to your data, then your paper needs revision before we can entrust it to publication. A number of papers have failed on reanalysis, which is absurd and embarrassing; the worst offender was probably Reinhart-Rogoff, which was used in public policy decisions around the world despite containing spreadsheet errors.

The second kind is direct replication—when you do the exact same experiment again and see if you get the same result within error bounds. This kind of replication should work something like 90% of the time, but in fact works more like 60% of the time.

The third kind is conceptual replication—when you do a similar experiment designed to test the same phenomenon from a different perspective. This kind of replication should work something like 60% of the time, but actually only works about 20% of the time.
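
Where do ballpark figures like “90%” come from? Here is one illustrative way to derive an expected direct-replication rate from statistical power and the base rate of true hypotheses; the particular numbers are my own assumptions, not figures from the replication projects themselves:

```python
# Expected direct-replication rate among published positive findings, given
# the base rate of true hypotheses, the power of a typical study, and the
# nominal 5% false-positive rate. All inputs are illustrative assumptions.

def expected_replication_rate(base_rate, power, alpha=0.05):
    true_positives = base_rate * power
    false_positives = (1 - base_rate) * alpha
    share_true = true_positives / (true_positives + false_positives)
    # A true finding replicates with probability ~power; a false one, ~alpha.
    return share_true * power + (1 - share_true) * alpha

for base_rate, power in [(0.5, 0.9), (0.5, 0.5), (0.1, 0.5)]:
    rate = expected_replication_rate(base_rate, power)
    print(f"base rate {base_rate:.0%}, power {power:.0%}: expected replication {rate:.0%}")
# Well-powered studies of plausible hypotheses should replicate ~85-90% of
# the time; low power plus a taste for surprising (improbable) hypotheses
# drags the expected rate down toward what we actually observe.
```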

Economists are well equipped to understand and solve this crisis, because it’s not actually about science. It’s about incentives. I facepalm every time I see another article by an aggrieved statistician about the “misunderstanding” of p-values; no, scientists aren’t misunderstanding anything. They know damn well how p-values are supposed to work. So why do they keep using them wrong? Because their jobs depend on doing so.

The first key point to understand here is “publish or perish”; academics in an increasingly competitive system are required to publish their research in order to get tenure, and frequently required to get tenure in order to keep their jobs at all. (Or they could become adjuncts, who are paid one-fifth as much.)

The second is the fundamentally defective way our research journals are run (as I have discussed in a previous post). As private for-profit corporations whose primary interest is in raising more revenue, our research journals aren’t trying to publish what will genuinely advance scientific knowledge. They are trying to publish what will draw attention to themselves. It’s a similar flaw to what has arisen in our news media; they aren’t trying to convey the truth, they are trying to get ratings to draw advertisers. This is how you get hours of meaningless fluff about a missing airliner and then a single chyron scroll about a war in Congo or a flood in Indonesia. Research journals haven’t fallen quite so far because they have reputations to uphold in order to attract scientists to read them and publish in them; but still, their fundamental goal is and has always been to raise attention in order to raise revenue.

The best way to do that is to publish things that are interesting. But if a scientific finding is interesting, that means it is surprising. It has to be unexpected or unusual in some way. And above all, it has to be positive; you have to have actually found an effect. Except in very rare circumstances, the null result is never considered interesting. This adds up to making journals publish what is improbable.

In particular, it creates a perfect storm for the abuse of p-values. A p-value, roughly speaking, is the probability that you would get a result at least as extreme as the one you observed if there were no effect at all—for instance, the probability that you’d observe this wage gap between men and women in your sample if in the real world men and women were paid the exact same wages. The standard heuristic is a p-value of 0.05; indeed, it has become so enshrined that it is almost an explicit condition of publication now. Your result must be less than 5% likely to happen if there is no real difference. But if you will only publish results that show a p-value below 0.05, then the papers that get published and read will only be the ones that found such p-values—which renders the p-values meaningless.

It was never particularly meaningful anyway; as we Bayesians have been trying to explain since time immemorial, it matters how likely your hypothesis was in the first place. For something like wage gaps where we’re reasonably sure, but maybe could be wrong, the p-value is not too unreasonable. But if the theory is almost certainly true (“does gravity fall off as the inverse square of distance?”), even a high p-value like 0.35 is still supportive, while if the theory is almost certainly false (“are human beings capable of precognition?”—actual study), even a tiny p-value like 0.001 is still basically irrelevant. We really should be using much more sophisticated inference techniques, but those are harder to do, and don’t provide the nice simple threshold of “Is it below 0.05?”
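
To make the Bayesian point concrete, here is a minimal sketch of the update rule, posterior odds = prior odds × likelihood ratio; the priors and likelihood ratios below are illustrative stand-ins, not numbers derived from the studies mentioned:

```python
# Posterior odds = prior odds * Bayes factor (likelihood ratio). The same
# "impressive" evidence barely moves an absurd hypothesis, and unimpressive
# evidence barely dents a near-certain one. All numbers are illustrative.

def posterior_probability(prior_prob, bayes_factor):
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

cases = [
    ("inverse-square gravity, weak evidence", 0.999999, 0.5),
    ("gender wage gap, decent evidence",      0.7,      20.0),
    ("precognition, 'p = 0.001' evidence",    0.000001, 50.0),
]
for name, prior, bf in cases:
    post = posterior_probability(prior, bf)
    print(f"{name:40s} prior = {prior:.6f} -> posterior = {post:.6f}")
# Gravity stays essentially certain even after an unimpressive result, and
# precognition stays vanishingly unlikely even after a "highly significant" one.
```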

But okay, p-values can be useful in many cases—if they are used correctly and you see all the results. If you have effect X with p-values 0.03, 0.07, 0.01, 0.06, and 0.09, effect X is probably a real thing. If you have effect Y with p-values 0.04, 0.02, 0.29, 0.35, and 0.74, effect Y is probably not a real thing. But I’ve just set it up so that these would be published exactly the same. They each have two published papers with “statistically significant” results. The other papers never get published and therefore never get seen, so we throw away vital information. This is called the file drawer problem.
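
Here is a small simulation of the file drawer problem (the number of labs, the sample sizes, and the normal approximation are my own illustrative choices): a thousand labs honestly study an effect whose true size is zero, only the results with p < 0.05 get published, and the “literature” ends up full of findings anyway, with inflated effect sizes to boot.

```python
# Simulate the file drawer: 1,000 labs study a true effect of exactly zero
# (two groups of n = 30 each), and only results with p < 0.05 are published.
import math
import random
import statistics

random.seed(1)

def one_study(n=30, true_effect=0.0):
    """Return (estimated effect, two-sided p-value), using a normal approximation."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = diff / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return diff, p

studies = [one_study() for _ in range(1000)]
published = [(d, p) for d, p in studies if p < 0.05]

print(f"published: {len(published)} of {len(studies)} studies of a nonexistent effect")
print(f"mean |effect| across all studies:    {statistics.mean(abs(d) for d, _ in studies):.3f}")
print(f"mean |effect| in the published ones: {statistics.mean(abs(d) for d, _ in published):.3f}")
# Roughly 5% of pure-noise studies clear p < 0.05, and those that do report
# effects more than twice as large as the typical study found -- the published
# record is biased even though every individual lab was honest.
```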

Researchers often have a lot of flexibility in designing their experiments. If their only goal were to find truth, they would use this flexibility to test a variety of scenarios and publish all the results, so they can be compared holistically. But that isn’t their only goal; they also care about keeping their jobs so they can pay rent and feed their families. And under our current system, the only way to ensure that you can do that is by publishing things, which basically means only including the parts that showed up as statistically significant—otherwise, journals aren’t interested. And so we get huge numbers of papers published that tell us basically nothing, because we set up such strong incentives for researchers to give misleading results.

The saddest part is that this could be easily fixed.

First, reduce the incentives to publish by finding other ways to evaluate the skill of academics—like teaching for goodness’ sake. Working papers are another good approach. Journals already get far more submissions than they know what to do with, and most of these papers will never be read by more than a handful of people. We don’t need more published findings, we need better published findings—so stop incentivizing mere publication and start finding ways to incentivize research quality.

Second, eliminate private for-profit research journals. Science should be done by government agencies and nonprofits, not for-profit corporations. (And yes, I would apply this to pharmaceutical companies as well, which should really be pharmaceutical manufacturers who make cheap drugs based off of academic research and carry small profit margins.) Why? Again, it’s all about incentives. Corporations have no reason to want to find truth and every reason to want to tilt it in their favor.

Third, increase the number of tenured faculty positions. Instead of building so many new grand edifices to please your plutocratic donors, use your (skyrocketing) tuition money to hire more professors so that you can teach more students better. You can find even more funds if you cut the salaries of your administrators and football coaches. Come on, universities; you are the one industry in the world where labor demand and labor supply are the same people a few years later. You have no excuse for not having the smoothest market clearing in the world. You should never have gluts or shortages.

Fourth, require pre-registration of research studies (as some branches of medicine already do). If the study is sound, an optimal rational agent shouldn’t care in the slightest whether it had a positive or negative result, and if our ape brains won’t let us think that way, we need to establish institutions that force it to happen. Journal editors and reviewers shouldn’t even see the effect size or the p-value before they decide whether to publish; all they should care about is that the experiment makes sense and the proper procedure was followed.

If we did all that, the replication crisis could be almost completely resolved, as the incentives would be realigned to more closely match the genuine search for truth.

Alas, I don’t see universities or governments or research journals having the political will to actually make such changes, which is very sad indeed.

How personality makes cognitive science hard

August 13, JDN 2457614

Why is cognitive science so difficult? First of all, let’s acknowledge that it is difficult—that even those of us who understand it better than most are still quite baffled by it in quite fundamental ways. The Hard Problem still looms large over us all, and while I know that the Chinese Room Argument is wrong, I cannot precisely pin down why.

The recursive, reflexive character of cognitive science is part of the problem; can a thing understand itself without understanding understanding itself, understanding understanding understanding itself, and on in an infinite regress? But this recursiveness applies just as much to economics and sociology, and honestly to physics and biology as well. We are physical biological systems in an economic and social system, yet most people at least understand these sciences at the most basic level—which is simply not true of cognitive science.

One of the most basic facts of cognitive science (indeed I am fond of calling it The Basic Fact of Cognitive Science) is that we are our brains, that everything human consciousness does is done by and within the brain. Yet the majority of humans believe in souls (including the majority of Americans and even the majority of Brits), and just yesterday I saw a news anchor say “Based on a new study, that feeling may originate in your brain!” He seriously said “may”. May? Why, next you’ll tell me that when my arms lift things, maybe they do it with muscles! Other scientists are often annoyed by how many misconceptions the general public has about science, but this is roughly the equivalent of a news anchor saying, “Based on a new study, human bodies may be made of cells!” or “Based on a new study, diamonds may be made of carbon atoms!” The misunderstanding of many sciences is widespread, but the misunderstanding of cognitive science is fundamental.

So what makes cognitive science so much harder? I have come to realize that there is a deep feature of human personality that makes cognitive science inherently difficult in a way other sciences are not.

Decades of research have uncovered a number of consistent patterns in human personality, where people’s traits tend to lie along a continuum from one extreme to another, and usually cluster near either end. Most people are familiar with a few of these, such as introversion/extraversion and optimism/pessimism; but the one that turns out to be important here is empathizing/systematizing.

Empathizers view the world as composed of sentient beings, living agents with thoughts, feelings, and desires. They are good at understanding other people and providing social support. Poets are typically empathizers.

Systematizers view the world as composed of interacting parts, interlocking components that have complex inner workings which can be analyzed and understood. They are good at solving math problems and tinkering with machines. Engineers are typically systematizers.

Most people cluster near one end of the continuum or the other; they are either strong empathizers or strong systematizers. (If you’re curious, there’s an online test you can take to find out which you are.)

But a rare few of us, perhaps as few as 2% and no more than 10%, are both; we are empathizer-systematizers, strong on both traits (showing that it’s not really a continuum between two extremes after all, and only seemed to be because the two traits are negatively correlated). A comparable number are also low on both traits, which must quite frankly make the world a baffling place in general.

Empathizer-systematizers understand the world as it truly is: Composed of sentient beings that are made of interacting parts.

The very title of this blog shows I am among this group: “human” for the empathizer, “economics” for the systematizer!

We empathizer-systematizers can intuitively grasp that there is no contradiction in saying that a person is sad because he lost his job and he is sad because serotonin levels in his cingulate gyrus are low—because it was losing his job that triggered other thoughts and memories that lowered serotonin levels in his cingulate gyrus and thereby made him sad. No one fully understands the details of how low serotonin comes to feel like sadness—hence, the Hard Problem—but most people can’t even seem to grasp the connection at all. How can something as complex and beautiful as a human mind be made of… sparking gelatin?

Well, what would you prefer it to be made of? Silicon chips? We’re working on that. Something else? Magical fairy dust, perhaps? Pray tell, what material could the human mind be constructed from that wouldn’t bother you on a deep level?

No, what really seems to bother people is the very idea that a human mind can be constructed from material, that thoughts and feelings can be divisible into their constituent parts.

This leads people to adopt one of two extreme positions on cognitive science, both of which are quite absurd—frankly I’m not sure they are even coherent.

Pure empathizers often become dualists, saying that the mind cannot be divisible, cannot be made of material, but must be… something else, somehow, outside the material universe—whatever that means.

Pure systematizers instead often become eliminativists, acknowledging the functioning of the brain and then declaring proudly that the mind does not exist—that consciousness, emotion, and experience are all simply illusions that advanced science will one day dispense with—again, whatever that means.

I can at least imagine what a universe would be like if eliminativism were true and there were no such thing as consciousness—just a vast expanse of stars and rocks and dust, lifeless and empty. Of course, I know that I’m not in such a universe, because I am experiencing consciousness right now, and the illusion of consciousness is… consciousness. (You are not experiencing what you are experiencing right now, I say!) But I can at least visualize what such a universe would be like, and indeed it probably was our universe (or at least our solar system) up until about a billion years ago when the first sentient animals began to evolve.

Dualists, on the other hand, are speaking words, structured into grammatical sentences, but I’m not even sure they are forming coherent assertions. Sure, you can sort of imagine our souls being floating wisps of light and energy (à la the “ascended beings”, my least-favorite part of the Stargate series, which I otherwise love), but ultimately those have to be made of something, because nothing can be both fundamental and complex. Moreover, the fact that they interact with ordinary matter strongly suggests that they are made of ordinary matter (and to be fair to Stargate, at one point in the series Rodney, his already-great intelligence vastly enhanced, declares confidently that ascended beings are indeed nothing more than “protons and electrons, protons and electrons”). Even if they were made of some different kind of matter like dark matter, they would need to obey a common system of physical laws, and ultimately we would come to think of them as matter. Otherwise, how do the two interact? If we are made of soul-stuff which is fundamentally different from other stuff, then how do we even know that other stuff exists? If we are not our bodies, then how do we experience pain when they are damaged and control them with our volition? The most coherent theory of dualism is probably Malebranche’s, which is quite literally “God did it”.

Epiphenomenalism, which says that thoughts are just sort of an extra thing that also happens but has no effect (an “epiphenomenon”) on the physical brain, is also quite popular for some reason. People don’t quite seem to understand that the Law of Conservation of Energy directly forbids an “epiphenomenon” in this sense, because anything that happens involves energy, and that energy (unlike, say, money) can’t be created out of nothing; it has to come from somewhere. Analogies are often used: The whistle of a train, the smoke of a flame. But the whistle of a train is a pressure wave that vibrates the train; the smoke from a flame is made of particulates that could be used to smother the flame. At best, there are some phenomena that don’t affect each other very much—but any causal interaction at all makes dualism break down.

How can highly intelligent, highly educated philosophers and scientists make such basic errors? I think it has to be personality. They have deep, built-in (quite likely genetic) intuitions about the structure of the universe, and they just can’t shake them.

And I confess, it’s very hard for me to figure out what to say in order to break those intuitions, because my deep intuitions are so different. Just as it seems obvious to them that the world cannot be this way, it seems obvious to me that it is. It’s a bit like living in a world where 45% of people can see red but not blue and insist the American Flag is red and white, another 45% of people can see blue but not red and insist the flag is blue and white, and I’m here in the 10% who can see all colors and I’m trying to explain that the flag is red, white, and blue.

The best I can come up with is to use analogies, and computers make for quite good analogies, not least because their functioning is modeled on our thinking.

Is this word processor program (LibreOffice Writer, as it turns out) really here, or is it merely an illusion? Clearly it’s really here, right? I’m using it. It’s doing things right now. Parts of it are sort of illusions—it looks like a blank page, but it’s actually an LCD screen lit up all the way; it looks like ink, but it’s actually where the LCD turns off. But there is clearly something here, an actual entity worth talking about which has properties that are usefully described without trying to reduce them to the constituent interactions of subatomic particles.

On the other hand, can it be reduced to the interactions of subatomic particles? Absolutely. A brief sketch is something like this: It’s a software program, running on an operating system, and these in turn are represented in the physical hardware as long binary sequences, stored by ever-so-slightly higher or lower voltages in particular hardware components, which in turn are due to electrons being moved from one valence to another. Those electrons move in precise accordance with the laws of quantum mechanics, I assure you; yet this in no way changes the fact that I’m typing a blog post on a word processor.

Indeed, it’s not even particularly useful to know that the electrons are obeying the laws of quantum mechanics, and quite literally no possible computer that could be constructed in our universe could ever be large enough to fully simulate all these quantum interactions within the amount of time since the dawn of the universe. If we are to understand it at all, it must be at a much higher level—and the “software program” level really seems to be the best one for most circumstances. The vast majority of problems I’m likely to encounter are either at the software level or the macro hardware level; it’s conceivable that a race condition could emerge in the processor cache or the voltage could suddenly spike or even that a cosmic ray could randomly ionize a single vital electron, but these scenarios are far less likely to affect my life than, say, accidentally deleting the wrong file or letting the battery run out of charge because I forgot to plug it in.

Likewise, when dealing with a relationship problem, or mediating a conflict between two friends, it’s rarely relevant that some particular neuron is firing in someone’s nucleus accumbens, or that one of my friends is very low on dopamine in his mesolimbic system today. It could be, particularly if some sort of mental or neurological illness is involved, but in most cases the real issues are better understood as higher-level phenomena—people being angry, or tired, or sad. These emotions are ultimately constructed of action potentials and neurotransmitters, but that doesn’t make them any less real, nor does it change the fact that it is at the emotional level that most human matters are best understood.

Perhaps part of the problem is that human emotions take on moral significance, which other higher-level entities generally do not? But they sort of do, really, in a more indirect way. It matters a great deal morally whether or not climate change is a real phenomenon caused by carbon emissions (it is). Ultimately this moral significance can be tied to human experiences, so everything rests upon human experiences being real; but they are real, in much the same way that rocks and trees and carbon emissions are real. No amount of neuroscience will ever change that, just as no amount of biological science would disprove the existence of trees.

Indeed, some of the world’s greatest moral problems could be better solved if people were better empathizer-systematizers, and thus more willing to do cost-benefit analysis.

Why are movies so expensive? Did they used to be? Do they need to be?

August 10, JDN 2457611

One of the better arguments in favor of copyright involves film production. Films are extraordinarily expensive to produce; without copyright, how would they recover their costs? $100 million is a common budget these days.

It is commonly thought that film budgets used to be much smaller, so I looked at some data from The Numbers on over 5,000 films going back to 1915, and inflation-adjusted the budgets using the CPI. (I learned some interesting LibreOffice Calc functions in the process of merging the data; also LibreOffice crashed a few times trying to make the graphs, so that’s fun. I finally realized it had copied over all the 10,000 hyperlinks from the HTML data set.)
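
For anyone who wants to replicate this, the adjustment itself is simple. The original spreadsheet work was done in LibreOffice Calc; here is the same idea sketched in Python, with hypothetical file and column names.

```python
# Convert nominal budgets to 2015 dollars using annual CPI values.
import pandas as pd

budgets = pd.read_csv("movie_budgets.csv")   # hypothetical: title, year, budget
cpi = pd.read_csv("cpi.csv")                 # hypothetical: year, cpi

cpi_2015 = cpi.loc[cpi["year"] == 2015, "cpi"].iloc[0]
merged = budgets.merge(cpi, on="year")

# Real (2015-dollar) budget = nominal budget scaled by the CPI ratio.
merged["real_budget"] = merged["budget"] * cpi_2015 / merged["cpi"]
```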

If you just look at the nominal figures, there does seem to be some sort of upward trend:

[Figure: Movie_Budgets_nominal]

But once you do the proper inflation adjustment, this trend basically disappears:

[Figure: Movie_Budgets_adjusted]

In real terms, the grosses of some early movies are quite large. Adjusted to 2015 dollars, Gone with the Wind grossed $6.659 billion—still the highest ever. In 1937, Snow White and the Seven Dwarfs grossed over $3.043 billion in 2015 dollars. In 1950, Cinderella made it to $2.592 billion in today’s money. (Horrifyingly, The Birth of a Nation grossed $258 million in today’s money.)

Nor is there any evidence that movie production has gotten more expensive. The linear trend is actually negative, though the slope is tiny and not statistically significant: on average, the real budget of a movie falls by about $1,752 per year.

[Figure: Movie_Budgets_trend]
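
The trend here is just an ordinary least-squares fit of real budget on release year; a sketch of the equivalent in Python, continuing from the hypothetical merged data above:

```python
# Fit real_budget ~ year and inspect the slope and its significance.
import statsmodels.api as sm

X = sm.add_constant(merged["year"])
model = sm.OLS(merged["real_budget"], X).fit()

print(model.params["year"])    # slope: small and negative (about -$1,752/year)
print(model.pvalues["year"])   # p-value for the slope: not significant
```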

While the two most expensive movies came out recently (Pirates of the Caribbean: At World’s End and Avatar), the third most expensive was released in 1963 (Cleopatra). The hugely expensive movies do seem to cluster relatively recently—but then so do the really cheap films, some of which have budgets under $10,000. It may just be that more movies are produced in general; overall, the cost of producing a film doesn’t seem to have changed in real terms.

The best return on investment belongs to My Date with Drew, released in 2005, which had a budget of $1,100 but grossed $181,000, giving it an ROI of 16,358%. The highest real profit was of course Gone with the Wind, which made an astonishing $6.592 billion, though Titanic, Avatar, Aliens, and Terminator 2 combined actually beat it with a total profit of $6.651 billion, which may explain why James Cameron can now basically make any movie he wants and already has four sequels lined up for Avatar.

The biggest real loss was 1970’s Waterloo, which made back only $18 million of its $153 million budget, losing $135 million and having an ROI of -87.7%. This was not quite as bad an ROI as 2002’s The Adventures of Pluto Nash, which had an ROI of -92.91%.
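
For reference, the ROI figures above are simply profit divided by budget; a quick check with rounded inputs:

```python
# ROI = (gross - budget) / budget, expressed as a percentage.
def roi(gross, budget):
    return (gross - budget) / budget * 100

print(roi(181_000, 1_100))   # My Date with Drew: ~16,350% (nominal dollars)
print(roi(18e6, 153e6))      # Waterloo: ~-88% (2015 dollars, rounded inputs)
```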

But making movies has always been expensive, at least for big blockbusters. (The $8,900 budget of Primer is something I could probably put on credit cards if I had to.) It’s nothing new to spend $100 million in today’s money.

When considering the ethics and economics of copyright, it’s useful to think about what Michele Boldrin calls “pizzaright”: you can’t copy my pizza, or you are guilty of pizzaright infringement. Many of the arguments for copyright are so general—this is a valuable service, it carries some risk of failure, it wouldn’t be as profitable without the monopoly, so fewer companies might enter the business—that they would also apply to pizza. Yet somehow nobody thinks that pizzaright should be a thing. If there is a justification for copyrights, it must come from the special circumstances of works of art (broadly conceived, including writing, film, music, etc.), and the only one that really seems strong enough is the high upfront cost of certain types of art—and indeed, the only ones that really seem to fit that are films and video games.

Painting, writing, and music just aren’t that expensive. People are willing to create these things for very little money, and can do so more or less on their own, especially nowadays. If the prices are reasonable, people will still want to buy from the creators directly—and sure enough, widespread music piracy hasn’t killed music, it has only killed the corporate record industry. But movies and video games really can easily cost $100 million to make, so there’s a serious concern of what might happen if they couldn’t use copyright to recover their costs.

The question for me is, did we really need copyright to fund these budgets?

Let’s take a look at how Star Wars made its money. $6.249 billion came from box office revenue, while $873 million came from VHS and DVD sales; those would probably be substantially reduced if not for copyright. But even before The Force Awakens was released, the Star Wars franchise had already made some $12 billion in toy sales alone. “Merchandizing, merchandizing, where the real money from the movie is made!”

Did they need intellectual property to do that? Well, yes—but all they needed was trademark. Defenders of “intellectual property” like to use that term because it elides fundamental distinctions between the three types: trademark, copyright, and patent.

Trademark is unproblematic. You can’t lie about who you are or where your products came from when you’re selling something. So if you are claiming to sell official Star Wars merchandise, you’d better be selling official Star Wars merchandise, and trademark protects that.

Copyright is problematic, but may be necessary in some cases. Copyright protects the content of the movies from being copied or modified without Lucasfilm’s permission. So now rather than simply protecting against the claim that you represent Lucasfilm, we are protecting against people buying the movie, copying it, and reselling the copies—even though that is a real economic service they are providing, and is in no way fraudulent as long as they are clear about the fact that they made the copies.

Patent is, frankly, ridiculous. The concept of “owning” ideas is absurd. You came up with a good way to do something? Great! Go do it then. But don’t expect other people to pay you simply for the privilege of hearing your good idea. Of course I want to financially support researchers, but there are much, much better ways of doing that, like government grants and universities. Patents only raise revenue for research that sells, first of all—so vaccines and basic research can’t be funded that way, even though they are the most important research by far. Furthermore, there’s nothing to guarantee that the person who actually invented the idea is the one who makes the profit from it—and in our current system where corporations can own patents (and do own almost 90% of patents), it typically isn’t. Even if it were, the whole concept of owning ideas is nonsensical, and it has driven us to the insane extremes of corporations owning patents on human DNA. The best argument I’ve heard for patents is that they are a second-best solution that incentivizes transparency and keeps trade secrets from becoming commonplace; but in that case they should definitely be short, and we should never extend them. Companies should not be able to make basically cosmetic modifications and renew the patent, and expiring patents should be a cause for celebration.

Hollywood actually formed in Los Angeles precisely to escape patents, but of course it loves copyright and trademark. So does Hollywood like “intellectual property”? The question barely makes sense; the answer depends entirely on which of the three you mean.

Could blockbuster films be produced profitably using only trademark, in the absence of copyright?

Clearly Star Wars would have still turned a profit. But not every movie can do such merchandizing, and when movies start getting written purely for merchandizing it can be painful to watch.

The real question is whether a film like Gone with the Wind or Avatar could still be made, and make a reasonable profit (if a much smaller one).

Well, there’s always porn. Porn brings in over $400 million per year in revenue, despite having essentially unenforceable copyright. Porn producers, too, are outraged over piracy, yet somehow I don’t think porn will ever cease to exist. A top porn star can make over $200,000 per year. Then there are of course independent films that never turn a profit at all, yet people keep making them.

So clearly it is possible to make some films without copyright protection, and something like Gone with the Wind needn’t cost $100 million to make. The only reason it cost as much as it did (about $66 million in today’s money) was that movie stars could command huge winner-takes-all salaries, which would no longer be true if copyright went away. And don’t tell me people wouldn’t be willing to be movie stars for $200,000 a year instead of $1.8 million (what Clark Gable made for Gone with the Wind, adjusted for inflation).

Yet some Hollywood blockbuster budgets are genuinely necessary. The real question is whether we could have Avatar without copyright. Not having films like Avatar is something I would count as a substantial loss to our society; we would lose important pieces of our art and culture.

So, where did all that money go? I don’t have a breakdown for Avatar in particular, but I do have a full budget breakdown for The Village. Of its $71.7 million, $33.5 million was “above the line”, which basically means the winner-takes-all superstar salaries for the director, producer, and cast. That amount could be dramatically reduced with no real cost to society—let’s drop it to say $3 million. Shooting costs were $28.8 million, post-production was $8.4 million, and miscellaneous expenses added about $1 million; all of those would be much harder to reduce (they mainly go to technical staff who make reasonable salaries, not to superstars), so let’s assume the full amount is necessary. That’s about $38 million in real cost to produce. Avatar had a lot more (and better) post-production, so let’s go ahead and multiply the post-production budget by an order of magnitude to $84 million. Our new total budget is about $116.8 million.
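
Putting that arithmetic in one place (all figures in millions of dollars; the tenfold post-production multiplier is just the assumption stated above):

```python
# Rebuilt budget for an Avatar-scale film, starting from The Village's breakdown.
above_the_line = 3.0        # capped star/director/producer pay (assumption)
shooting = 28.8
post_production = 8.4 * 10  # The Village's post-production, scaled up tenfold
miscellaneous = 1.0

print(above_the_line + shooting + post_production + miscellaneous)  # 116.8
```
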
That sounds like a lot, and it is; but this could be made back without copyright. Avatar sold over 14.5 million DVDs and over 8 million Blu-Rays. Conservatively assuming that the price elasticity of demand is zero (which is ridiculous—assuming the monopoly pricing is optimal it should be -1), if those DVDs were sold for $2 each and the Blu-Rays were sold for $5 each, with 50% of those prices being profit, this would yield a total profit of $14.5 million from DVDs and $20 million from Blu-Rays. That’s already $34.5 million. With realistic assumptions about elasticity of demand, cutting the prices this much (DVDs down from an average of $16, Blu-Rays down from an average of $20) would multiply the number of DVDs sold by at least 5 and the number of Blu-Rays sold by at least 3, which would get us all the way up to $132 million—enough to cover our new budget. (Of course this is much less than they actually made, which is why they set the prices they did—but that doesn’t mean it’s optimal from society’s perspective.)
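
And the disc revenue under those assumptions, with half of each (much lower) price counted as profit:

```python
# Disc profits at slashed prices: $2 DVDs and $5 Blu-Rays, 50% margin.
dvd_margin, bluray_margin = 2 * 0.5, 5 * 0.5      # dollars of profit per disc
dvds_sold, blurays_sold = 14.5e6, 8e6             # actual unit sales

same_sales = dvds_sold * dvd_margin + blurays_sold * bluray_margin
print(same_sales / 1e6)      # 34.5 (million dollars), sales held constant

# If cheap discs sell 5x as many DVDs and 3x as many Blu-Rays:
more_sales = 5 * dvds_sold * dvd_margin + 3 * blurays_sold * bluray_margin
print(more_sales / 1e6)      # ~132.5 (million dollars), covering the budget
```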

But okay, suppose I’m wrong about the elasticity, and dropping the price from $16 to $2 for a DVD somehow wouldn’t actually increase the number purchased. What other sources of revenue would they have? Well, box office tickets would still be a thing. They’d have to come down in price, but given the high-fidelity versions that cinemas require—making them quite hard to pirate—they would still get decent money from each cinema. Let’s say the price drops by 90%—all cinemas are now $1 cinemas!—and the sales again somehow remain exactly the same (rather than dramatically increasing as they actually would). What would Avatar’s worldwide box office gross be then? $278 million. They could give the DVDs away for free and still turn a profit.
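
Checking that last number: Avatar’s worldwide theatrical gross was roughly $2.78 billion, so keeping only a tenth of it still clears the rebuilt budget.

```python
# Box-office fallback: cut ticket prices by 90% and (implausibly)
# hold attendance constant.
avatar_worldwide_gross = 2.78e9
print(avatar_worldwide_gross * 0.10 / 1e6)   # ~278 (million dollars)
```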

And that’s Avatar, one of the most expensive movies ever made. By cutting out the winner-takes-all salaries and huge corporate profits, the budget can be substantially reduced, and then what real costs remain can be quite well covered by box office and DVD sales at reasonable prices. If you imagine that piracy somehow undercuts everything until you have to give away things for free, you might think this is impossible; but in reality pirated versions are of unreliable quality, and people do want to support artists and are willing to pay something for their entertainment. They’re just tired of paying monopoly prices to benefit the shareholders of Viacom.

Would this end the era of the multi-millionaire movie star? Yes, I suppose it might. But it would also put about $10 billion per year back in the pockets of American consumers—and there’s little reason to think it would take away future Avatars, much less future Gone with the Winds.