How do we reach people with ridiculous beliefs?

Oct 16, JDN 2457678

One of the most unfortunate facts in the world—indeed, perhaps the most unfortunate fact, from which most other unfortunate facts follow—is that it is quite possible for a human brain to sincerely and deeply hold a belief that is, by any objective measure, totally and utterly ridiculous.

And to be clear, I don’t just mean false; I mean ridiculous. People having false beliefs is an inherent part of being finite beings in a vast and incomprehensible universe. Monetarists are wrong, but they are not ludicrous. String theorists are wrong, but they are not absurd. Multiregionalism is wrong, but it is not nonsensical. Indeed, I, like anyone else, am probably wrong about a great many things, though of course if I knew which ones I’d change my mind. (Indeed, I admit a small but nontrivial probability of being wrong about the three things I just listed.)

I mean ridiculous beliefs. I mean that any rational, objective assessment of the probability of that belief being true would be vanishingly small, 1 in 1 million at best. I’m talking about totally nonsensical beliefs, beliefs that go against overwhelming evidence; some of them are outright incoherent. Yet millions of people go on believing them.

For example, over 40% of Americans believe that human beings were created by God in their present form less than 10,000 years ago, and typically offer no evidence for this besides “The Bible says so.” (Strictly speaking, even that isn’t true—standard interpretations of the Bible say so. The Bible itself contains no clearly stated date for creation.) This despite the absolutely overwhelming body of evidence supporting the theory of evolution by Darwinian natural selection.

Over a third of Americans don’t believe in global warming, even though it is not only the complete consensus of all credible climate scientists, based on overwhelming evidence, but also one of the central threats facing human civilization over the 21st century. On a global scale this is rather like standing on a train track and saying you don’t believe in trains. (Or like the time, which my mother once told me about, when an alert went out to her office that there was a sniper in the area, indiscriminately shooting at civilians, and one of her co-workers refused to join the security protocol and declared smugly, “I don’t believe in snipers.” Fortunately, he was unharmed in the incident. This time.)

1/4 of Americans believe in astrology, and 1/4 of Americans believe that aliens have visited the Earth. (Not sure if it’s the same 1/4. Probably considerable but not total overlap.) The existence of extraterrestrial civilizations somewhere in this mind-bogglingly (perhaps infinitely) vast universe has probability 1. But visiting us is quite another matter, and there is absolutely no credible evidence of it. As for astrology? I shouldn’t have to explain why the position of Jupiter, much less Sirius, on your birthday is not a major influence on your behavior or life outcomes. Your obstetrician exerted a far greater tidal pull on you than Jupiter did at the moment you were born.
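
To make that concrete, here is a rough back-of-the-envelope sketch in Python. The 70 kg obstetrician half a meter away and Jupiter at its closest approach (about 5.9×10^11 m) are my own illustrative assumptions, not figures from the post; the point is that even though Jupiter’s raw pull is actually the larger of the two, the differential (tidal) pull that could plausibly “influence” anything is utterly dominated by the people in the room.

```python
# A rough back-of-the-envelope sketch. The 70 kg obstetrician at 0.5 m and
# Jupiter (~1.9e27 kg) at its closest approach (~5.9e11 m) are my own
# illustrative assumptions, not figures from the post.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def pull(mass_kg, dist_m):
    """Gravitational acceleration (m/s^2) toward a point mass at a given distance."""
    return G * mass_kg / dist_m ** 2

def tide(mass_kg, dist_m, span_m=0.5):
    """Differential (tidal) acceleration across a newborn roughly 0.5 m long."""
    return 2 * G * mass_kg * span_m / dist_m ** 3

doctor_pull, jupiter_pull = pull(70, 0.5), pull(1.9e27, 5.9e11)
doctor_tide, jupiter_tide = tide(70, 0.5), tide(1.9e27, 5.9e11)

print(f"raw pull: doctor {doctor_pull:.1e}  Jupiter {jupiter_pull:.1e}")  # Jupiter wins, ~20x
print(f"tidal:    doctor {doctor_tide:.1e}  Jupiter {jupiter_tide:.1e}")  # doctor wins, ~6e10x
```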

The majority of Americans believe in telepathy or extrasensory perception. I confess that I actually did when I was very young, though I think I disabused myself of this around the time I stopped believing in Santa Claus.

I love the term “extrasensory perception” because it is such an oxymoron; if you’re perceiving, it is via senses. “Sixth sense” is better, except that we actually already have at least nine senses: The ones you probably know, vision (sight), audition (hearing), olfaction (smell), gustation (taste), and tactition (touch)—and the ones you may not know, thermoception (heat), proprioception (body position), vestibulation (balance), and nociception (pain). These can probably be subdivided further—vision and spatial reasoning are dissociated in blind people, heat and cold are separate nerve pathways, pain and itching are distinct systems, and there are a variety of different sensors used for proprioception. So we really could have as many as twenty senses, depending on how you’re counting.

What about telepathy? Well, that is not actually impossible in principle; it’s just that there’s no evidence that humans actually do it. Smartphones do it almost literally constantly, transmitting data via high-frequency radio waves back and forth to one another. We could have evolved some sort of radio transceiver organ (perhaps an offshoot of an electric defense organ such as that of electric eels), but as it turns out we didn’t. Actually in some sense—which some might say is trivial, but I think it’s actually quite deep—we do have telepathy; it’s just that we transmit our thoughts not via radio waves or anything more exotic, but via sound waves (speech) and marks on paper (writing) and electronic images (what you’re reading right now). Human beings really do transmit our thoughts to one another, and this truly is a marvelous thing we should not simply take for granted (it is one of our most impressive feats of Mundane Magic); but somehow I don’t think that’s what people mean when they say they believe in psychic telepathy.

And lest you think this is a uniquely American phenomenon: The particular beliefs vary from place to place, but bizarre beliefs abound worldwide, from conspiracy theories in the UK to 9/11 “truthers” in Canada to HIV denialism in South Africa (fortunately on the wane). The American examples are more familiar to me and most of my readers are Americans, but wherever you are reading from, there are probably ridiculous beliefs common there.

I could go on, listing more objectively ridiculous beliefs that are surprisingly common; but the more I do that, the more I risk alienating you, in case you should happen to believe one of them. When you add up the dizzying array of ridiculous beliefs one could hold, odds are that most people you’d ever meet will have at least one of them. (“Not me!” you’re thinking; and perhaps you’re right. Then again, I’m pretty sure that the 4% or so of people who believe in the Reptilians think the same thing.)

Which brings me to my real focus: How do we reach these people?

One possible approach would be to just ignore them, leave them alone, or go about our business with them as though they did not have ridiculous beliefs. This is in fact the right thing to do under most circumstances, I think; when a stranger on the bus starts blathering about how the lizard people are going to soon reveal themselves and establish the new world order, I don’t think it’s really your responsibility to persuade that person to realign their beliefs with reality. Nodding along quietly would be acceptable, and it would be above and beyond the call of duty to simply say, “Um, no… I’m fairly sure that isn’t true.”

But this cannot always be the answer, if for no other reason than the fact that we live in a democracy, and people with ridiculous beliefs frequently vote according to them. Then people with ridiculous beliefs can take office, and make laws that affect our lives. Actually this would be true even if we had some other system of government; there’s nothing in particular to stop monarchs, hereditary senates, or dictators from believing ridiculous things. If anything, the opposite; dictators are known for their eccentricity precisely because there are no checks on their behavior.

At some point, we’re going to need to confront the fact that over half of the Republicans in the US Congress do not believe in climate change, and are making policy accordingly, rolling drunk on petroleum and treating the hangover with the hair of the dog.

We’re going to have to confront the fact that school boards in Southern states, particularly Texas, continually vote to censor the dreaded Darwinian evolution out of biology textbooks.

So we really do need to find a way to talk to people who have ridiculous beliefs, and engage with them, understand why they think the way they do, and then—hopefully at least—tilt them a little bit back toward rational reality. You will not be able to change their mind completely right away, but if each of us can at least chip away at their edifice of absurdity, then all together perhaps we can eventually bring them to enlightenment.

Of course, a good start is probably not to say you think that their beliefs are ridiculous, because people get very defensive when you do that, even—perhaps especially—when it’s true. People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.

This is the link that we must somehow break. We must show people that they are not defined by their beliefs, that it is okay to change your mind. We must be patient and compassionate—sometimes heroically so, as people spout offensive nonsense in our faces, sometimes offensive nonsense that directly attacks us personally. (“Atheists deserve Hell”, taken literally, would constitute something like a death threat, except infinitely worse. While to them it is very likely just a slogan to recite, to the atheist listening it says that you believe they are so evil, so horrible, that they deserve eternal torture for believing what they do. And you get mad when we say your beliefs are ridiculous?)

We must also remind people that even very smart people can believe very dumb things—indeed, I’d venture a guess that most dumb things are in fact believed by smart people. Even the most intelligent human beings can only glimpse a tiny fraction of the universe, and all human brains are subject to the same fundamental limitations, the same core heuristics and biases. Make it clear that you’re saying you think their beliefs are false, not that they are stupid or crazy. And indeed, make it clear to yourself that this is indeed what you believe, because it ought to be. It can be tempting to think that only an idiot would believe something so ridiculous—and you are safe, for you are no idiot!—but the truth is far more humbling: Human brains are subject to many flaws, and guarding the fortress of the mind against error and deceit is a 24-7 occupation. Indeed, I hope that you will ask yourself: “What beliefs do I hold that other people might find ridiculous? Are they, in fact, ridiculous?”

Even then, it won’t be easy. Most people are strongly resistant to any change in belief, however small, and it is in the nature of ridiculous beliefs that they require radical changes in order to restore correspondence with reality. So we must try in smaller steps.

Maybe don’t try to convince them that 9/11 was actually the work of Osama bin Laden; start by pointing out that yes, steel does bend much more easily at the temperature at which jet fuel burns. Maybe don’t try to persuade them that astrology is meaningless; start by pointing out the ways that their horoscope doesn’t actually seem to fit them, or could be made to fit anybody. Maybe don’t try to get across the real urgency of climate change just yet, and instead point out that the “study” they read showing it was a hoax was clearly funded by oil companies, who would perhaps have a vested interest here. And as for ESP? I think it’s a good start just to point out that we have more than five senses already, and there are many wonders of the human brain that actual scientists know about well worth exploring—so who needs to speculate about things that have no scientific evidence?

Sometimes people have to lose their jobs. This isn’t a bad thing.

Oct 8, JDN 2457670

Eliezer Yudkowsky (founder of the excellent community blog Less Wrong) has a term he likes to use to distinguish his economic policy views from liberal, conservative, or even libertarian ones: “econoliterate”, meaning the sort of economic policy ideas one comes up with when one actually knows a good deal about economics.

In general I think Yudkowsky overestimates this effect; I’ve known some very knowledgeable economists who disagree quite strongly over economic policy, and often following the conventional political lines of liberal versus conservative: Liberal economists want more progressive taxation and more Keynesian monetary and fiscal policy, while conservative economists want to reduce taxes on capital and remove regulations. Theoretically you can want all these things—as Miles Kimball does—but it’s rare. Conservative economists hate minimum wage, and lean on the theory that says it should be harmful to employment; liberal economists are ambivalent about minimum wage, and lean on the empirical data that shows it has almost no effect on employment. Which is more reliable? The empirical data, obviously—and until more economists start thinking that way, economics is never truly going to be a science as it should be.

But there are a few issues where Yudkowsky’s “econoliterate” concept really does seem to make sense, where there is one view held by most people, and another held by economists, regardless of who is liberal or conservative. One such example is free trade, which almost all economists believe in. A recent poll of prominent economists by the University of Chicago found literally zero who agreed with protectionist tariffs.

Another example is my topic for today: People losing their jobs.

Not unemployment, which both economists and almost everyone else agree is bad; but people losing their jobs. The general consensus among the public seems to be that people losing jobs is always bad, while economists generally consider it a sign of an economy that is running smoothly and efficiently.

To be clear, of course losing your job is bad for you; I don’t mean to imply that if you lose your job you shouldn’t be sad or frustrated or anxious about that, particularly not in our current system. Rather, I mean to say that policy which tries to keep people in their jobs is almost always a bad idea.

I think the problem is that most people don’t quite grasp that losing your job and not having a job are not the same thing. People not having jobs who want to have jobs—unemployment—is a bad thing. But losing your job doesn’t mean you have to stay unemployed; it could simply mean you get a new job. And indeed, that is what it should mean, if the economy is running properly.

Check out this graph, from FRED:

Graph: JOLTS hires and separations rates (source: FRED)

The red line shows hires—people getting jobs. The blue line shows separations—people losing jobs or leaving jobs. During a recession (the most recent two are shown on this graph), people don’t actually leave their jobs faster than usual; if anything, slightly less. Instead what happens is that hiring rates drop dramatically. When the economy is doing well (as it is right now, more or less), both hires and separations are at very high rates.
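
If you want to reproduce the graph yourself, something like the following sketch should work. It assumes the pandas-datareader package and the JOLTS series IDs JTSHIR (hires rate) and JTSTSR (total separations rate); those IDs are my assumption, so check FRED itself if they don’t match the series you want.

```python
# A minimal sketch: pull the JOLTS hires and total-separations rates from FRED
# and plot them, roughly reproducing the graph above. The series IDs JTSHIR
# (hires rate) and JTSTSR (total separations rate) are my assumption; check
# FRED itself if they don't match what you want.
import matplotlib.pyplot as plt
from pandas_datareader import data as pdr

series = pdr.DataReader(["JTSHIR", "JTSTSR"], "fred", start="2001-01-01")
series.columns = ["Hires rate", "Separations rate"]

ax = series.plot(color=["red", "blue"])
ax.set_ylabel("Percent of employment, monthly")
ax.set_title("JOLTS: hires vs. separations")
plt.show()
```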

Why is this? Well, think about what a job is, really: It’s something that needs doing, that no one wants to do for free, so someone pays someone else to do it. Once that thing gets done, what should happen? The job should end. It’s done. The purpose of the job was not to provide for your standard of living; it was to achieve the task at hand. Once it no longer needs doing, why keep doing it?

We tend to lose sight of this, for a couple of reasons. First, we don’t have a basic income, and our social welfare system is very minimal; so a job usually is the only way people have to provide for their standard of living, and they come to think of this as the purpose of the job. Second, many jobs don’t really “get done” in any clear sense; individual tasks are completed, but new ones always arise. After every email sent is another received; after every patient treated is another who falls ill.

But even that is really only true in the short run. In the long run, almost all jobs do actually get done, in the sense that no one has to do them anymore. The job of cleaning up after horses is done (with rare exceptions). The job of manufacturing vacuum tubes for computers is done. Indeed, the job of being a computer—that used to be a profession, young women toiling away with slide rules—is very much done. There are no court jesters anymore, no town criers, and very few artisans (and even then, they’re really more like hobbyists). There are more writers now than ever, and occasional stenographers, but there are no scribes—no one powerful but illiterate pays others just to write things down, because no one powerful is illiterate (and few who are not powerful are, either, and fewer all the time).

When a job “gets done” in this long-run sense, we usually say that it is obsolete, and again think of this as somehow a bad thing, like we are somehow losing the ability to do something. No, we are gaining the ability to do something better. Jobs don’t become obsolete because we can’t do them anymore; they become obsolete because we don’t need to do them anymore. Instead of computers being a profession that toils with slide rules, they are thinking machines that fit in our pockets; and there are plenty of jobs now for software engineers, web developers, network administrators, hardware designers, and so on as a result.

Soon, there will be no coal miners, and very few oil drillers—or at least I hope so, for the sake of our planet’s climate. There will be far fewer auto workers (robots have already taken over most of that work), but far more construction workers who install rail lines. There will be more nuclear engineers, more photovoltaic researchers, even more miners and roofers, because we need to mine uranium and install solar panels on rooftops.

Yet even by saying that I am falling into the trap: I am making it sound like the benefit of new technology is that it opens up more new jobs. Typically it does do that, but that isn’t what it’s for. The purpose of technology is to get things done.

Remember my parable of the dishwasher. The goal of our economy is not to make people work; it is to provide people with goods and services. If we could invent a machine today that would do the job of everyone in the world and thereby put us all out of work, most people think that would be terrible—but in fact it would be wonderful.

Or at least it could be, if we did it right. See, the problem right now is that while poor people think that the purpose of a job is to provide for their needs, rich people think that the purpose of poor people is to do jobs. If there are no jobs to be done, why bother with them? At that point, they’re just in the way! (Think I’m exaggerating? Why else would anyone put a work requirement on TANF and SNAP? To do that, you must literally think that poor people do not deserve to eat or have homes if they aren’t, right now, working for an employer. You can couch that in cold economic jargon as “maximizing work incentives”, but that’s what you’re doing—you’re threatening people with starvation if they can’t or won’t find jobs.)

What would happen if we tried to stop people from losing their jobs? Typically, inefficiency. When employers aren’t allowed to lay people off who are no longer doing useful work, we end up in a situation where a large segment of the population is being paid but isn’t doing useful work—and unlike the situation with a basic income, those people would lose their income, at least temporarily, if they quit and tried to do something more useful. There is still considerable uncertainty within the empirical literature on just how much “employment protection” (laws that make it hard to lay people off) actually creates inefficiency and reduces productivity and employment, so it could be that this effect is small—but even so, it does not seem to have the desired effect of reducing unemployment either. It may be like the minimum wage, where the effect just isn’t all that large. But it’s probably not saving people from being unemployed; it may simply be shifting the distribution of unemployment so that people with protected jobs are almost never unemployed and people without it are unemployed much more frequently. (This doesn’t have to be based in law, either; tenure for university professors is a matter of custom rather than law, but it quite clearly makes tenured professors vastly more secure, at the cost of making employment tenuous and underpaid for adjuncts.)

There are other policies we could adopt that are better than employment protection: active labor market policies like those in Denmark, which make it easier to find a good job. Yet even then, we’re assuming that everyone needs jobs–and increasingly, that just isn’t true.

So, when we invent a new technology that replaces workers, workers are laid off from their jobs—and that is as it should be. What happens next is what we do wrong, and it’s not even anybody in particular; this is something our whole society does wrong: All those displaced workers get nothing. The extra profit from the more efficient production goes entirely to the shareholders of the corporation—and those shareholders are almost entirely members of the top 0.01%. So the poor get poorer and the rich get richer.

The real problem here is not that people lose their jobs; it’s that capital ownership is distributed so unequally. And boy, is it ever! Here are some graphs I made of the distribution of net wealth in the US, using data from the US Census.

Here are the quintiles of the population as a whole:

Graph: US net wealth by quintile

And here are the medians by race:

Graph: median US net wealth by race

Medians by age:

Graph: median US net wealth by age

Medians by education:

Graph: median US net wealth by education

And, perhaps most instructively, here are the quintiles of people who own their homes versus those who rent (The rent is too damn high!):

Graph: US net wealth by quintile, homeowners versus renters

All that is just within the US, and already the figures range from the mean net wealth of the lowest quintile of people under 35 (-$45,000, yes negative—student loans) to the mean net wealth of the highest quintile of people with graduate degrees ($3.8 million). All but the top quintile of renters are poorer than all but the bottom quintile of homeowners. And the median Black or Hispanic person has less than one-tenth the wealth of the median White or Asian person.

If we look worldwide, wealth inequality is even starker. Based on UN University figures, 40% of world wealth is owned by the top 1%; 70% by the top 5%; and 80% by the top 10%. There is less total wealth in the bottom 80% than in the 80-90% decile alone. According to Oxfam, the richest 85 individuals own as much net wealth as the poorest 3.7 billion. They are the 0.000001%.
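
That last percentage is just division; a one-line sketch, with a world population of roughly 7.4 billion as my own rough input:

```python
# The "0.000001%" arithmetic: 85 people out of a world population of roughly
# 7.4 billion (my rough figure, not one quoted in the post).
print(f"{85 / 7.4e9:.7%}")  # ~0.0000011%
```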

If we had an equal distribution of capital ownership, people would be happy when their jobs became obsolete, because it would free them up to do other things (either new jobs, or simply leisure time), while not decreasing their income—because they would be the shareholders receiving those extra profits from higher efficiency. People would be excited to hear about new technologies that might displace their work, especially if those technologies would displace the tedious and difficult parts and leave the creative and fun parts. Losing your job could be the best thing that ever happened to you.

The business cycle would still be a problem; we have good reason not to let recessions happen. But stopping the churn of hiring and firing wouldn’t actually make our society better off; it would keep people in jobs where they don’t belong and prevent us from using our time and labor for its best use.

Perhaps the reason most people don’t even think of this solution is precisely because of the extreme inequality of capital distribution—and the fact that it has more or less always been this way since the dawn of civilization. It doesn’t seem to even occur to most people that capital income is a thing that exists, because they are so far removed from actually having any amount of capital sufficient to generate meaningful income. Perhaps when a robot takes their job, on some level they imagine that the robot is getting paid, when of course it’s the shareholders of the corporations that made the robot and the corporations that are using the robot in place of workers. Or perhaps they imagine that those shareholders actually did so much hard work they deserve to get paid that money for all the hours they spent.

Because pay is for work, isn’t it? The reason you get money is because you’ve earned it by your hard work?

No. This is a lie, told to you by the rich and powerful in order to control you. They know full well that income doesn’t just come from wages—most of their income doesn’t come from wages! Yet this is even built into our language; we say “net worth” and “earnings” rather than “net wealth” and “income”. (Parade magazine has a regular segment called “What People Earn”; it should be called “What People Receive”.) Money is not your just reward for your hard work—at least, not always.

The reason you get money is that this is a useful means of allocating resources in our society. (Remember, money was created by governments for the purpose of facilitating economic transactions. It is not something that occurs in nature.) Wages are one way to do that, but they are far from the only way; they are not even the only way currently in use. As technology advances, we should expect a larger proportion of our income to go to capital—but what we’ve been doing wrong is setting it up so that only a handful of people actually own any capital.

Fix that, and maybe people will finally be able to see that losing your job isn’t such a bad thing; it could even be satisfying, the fulfillment of finally getting something done.

No, Scandinavian countries aren’t parasites. They’re just… better.

Oct 1, JDN 2457663

If you’ve been reading my blogs for a while, you likely have noticed me occasionally drop the hashtag #ScandinaviaIsBetter; I am in fact quite enamored of the Scandinavian (or Nordic more generally) model of economic and social policy.

But this is not a consensus view (except perhaps within Scandinavia itself), and I haven’t actually gotten around to presenting a detailed argument for just what it is that makes these countries so great.

I was inspired to do this by discussion with a classmate of mine (who shall remain nameless) who emphatically disagreed; he actually seems to think that American economic policy is somewhere near optimal (and to be fair, it might actually be near optimal, in the broad space of all possible economic policies—we are not Maoist China, we are not Somalia, we are not a nuclear wasteland). He couldn’t disagree with the statistics on how wealthy and secure and happy Scandinavian countries are, so instead he came up with this: “They are parasites.”

What he seemed to mean by this is that somehow Scandinavian countries achieve their success by sapping wealth from other countries, perhaps the rest of Europe, perhaps the world more generally. On this view, it’s not that Norway and Denmark are rich because they have economic policy basically figured out; no, they are somehow draining those riches from elsewhere.

This could scarcely be further from the truth.

But first, consider a couple of countries that are parasites, at least partially: Luxembourg and Singapore.

Singapore has an enormous trade surplus: 5.5 billion SGD per month, which is $4 billion per month, so almost $50 billion per year. They also have a positive balance of payments of $61 billion per year. Singapore’s total GDP is about $310 billion, so these are not small amounts. What does this mean? It means that Singapore is taking in a lot more money than they are spending out. They are effectively acting as mercantilists, or if you like as a profit-seeking corporation.

Moreover, Singapore is totally dependent on trade: their exports are over $330 billion per year, and their imports are over $280 billion. You may recognize each of these figures as comparable to the entire GDP of the country. Yes, their total trade is 200% of GDP. They aren’t really so much a country as a gigantic trading company.
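
The arithmetic behind those Singapore figures is simple enough to sketch out; the exchange rate of about 0.72 USD per SGD is my own assumption, and the rest are the numbers quoted above.

```python
# Rough arithmetic behind the Singapore figures above. The SGD-to-USD rate of
# ~0.72 is my own assumption; the other inputs are the figures quoted in the text.
sgd_to_usd = 0.72
monthly_surplus_sgd = 5.5e9
annual_surplus_usd = monthly_surplus_sgd * sgd_to_usd * 12   # ~$48 billion per year

gdp     = 310e9   # Singapore's GDP in USD
exports = 330e9   # exports per year
imports = 280e9   # imports per year
trade_openness = (exports + imports) / gdp   # ~1.97, i.e. total trade ~200% of GDP

print(f"Surplus: ${annual_surplus_usd / 1e9:.0f} billion/year; trade: {trade_openness:.0%} of GDP")
```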

What about Luxembourg? Well, they have a trade deficit of 420 million Euros per year, which is about $560 million. Their imports total about $2 billion per year, and their exports about $1.5 billion. Since Luxembourg’s total GDP is $56 billion, these aren’t unreasonably huge figures (total trade is about 6% of GDP); so Luxembourg isn’t a parasite in the sense that Singapore is.

No, what makes Luxembourg a parasite is the fact that 36% of their GDP is due to finance. Compare the US, where 12% of our GDP is finance—and we are clearly overfinancialized. Over a third of Luxembourg’s income doesn’t involve actually… doing anything. They hold onto other people’s money and place bets with it. Even insofar as finance can be useful, it should be only very slightly profitable, and definitely not more than 10% of GDP. As Stiglitz and Krugman agree (and both are Nobel Laureate economists), banking should be boring.

Do either of these arguments apply to Scandinavia? Let’s look at trade first. Denmark’s imports total about 42 billion DKK per month, which is about $70 billion per year. Their exports total about $90 billion per year. Denmark’s total GDP is $330 billion, so these numbers are quite reasonable. What are their main sectors? Manufacturing, farming, and fuel production. Notably, not finance.

Similar arguments hold for Sweden and Norway. They may be small countries, but they have diversified economies and strong production of real economic goods. Norway is probably overly dependent on oil exports, but they are specifically trying to move away from that right now. Even as it is, only about $90 billion of their $150 billion in exports are related to oil, and exports in general are only about 35% of GDP, so oil is about 20% of Norway’s GDP. Compare that to Saudi Arabia, which has 90% of its exports related to oil, accounting for 45% of GDP. If oil were to suddenly disappear, Norway would lose 20% of their GDP, dropping their per-capita GDP… all the way to the same as the US. (Terrifying!) But Saudi Arabia would suffer a total economic collapse, and their per-capita GDP would fall from where it is now at about the same as the US to about the same as Greece.
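
Spelled out, the Norway calculation is just two ratios multiplied together; here is a minimal sketch using only the figures quoted above.

```python
# Back-of-the-envelope sketch of the Norway calculation above, using only the
# figures quoted in the text.
exports_total = 150e9        # USD per year
exports_oil   = 90e9         # oil-related exports, USD per year
exports_share_of_gdp = 0.35  # exports as a share of Norway's GDP

oil_share_of_gdp = (exports_oil / exports_total) * exports_share_of_gdp
print(f"Norway: oil-related output is ~{oil_share_of_gdp:.0%} of GDP")  # ~21%, call it 20%

# Saudi Arabia, for comparison (again, figures from the text): oil is ~90% of
# exports and ~45% of GDP, so losing it would mean losing nearly half the economy.
```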

And at least oil actually does things. Oil exporting countries aren’t parasites so much as they are drug dealers. The world is “rolling drunk on petroleum”, and until we manage to get sober we’re going to continue to need that sweet black crude. Better we buy it from Norway than Saudi Arabia.

So, what is it that makes Scandinavia so great? Why do they have the highest happiness ratings, the lowest poverty rates, the best education systems, the lowest unemployment rates, the best social mobility and the highest incomes? To be fair, in most of these not literally every top spot is held by a Scandinavian country; Canada does well, Germany does well, the UK does well, even the US does well. Unemployment rates in particular deserve further explanation, because a lot of very poor countries report surprisingly low unemployment rates, such as Cambodia and Laos.

It’s also important to recognize that even great countries can have serious flaws, and the remnants of the feudal system in Scandinavia—especially in Sweden—still contribute to substantial inequality of wealth and power.

But in general, I think if you assembled a general index of overall prosperity of a country (or simply used one that already exists like the Human Development Index), you would find that Scandinavian countries are disproportionately represented at the very highest rankings. This calls out for some sort of explanation.

Is it simply that they are so small? They are certainly quite small; Norway and Denmark each have fewer people than the core of New York City, and Sweden has slightly more people than the Chicago metropolitan area. Put them all together, add in Finland and Iceland (which aren’t quite Scandinavia), and all together you have about the population of the New York City Combined Statistical Area.

But some of the world’s smallest countries are also its poorest. Samoa and Kiribati each have populations comparable to the city of Ann Arbor and per-capita GDPs 1/10 that of the US. Eritrea is the same size as Norway, and 70 times poorer. Burundi is slightly larger than Sweden, and has a per-capita GDP PPP of only $3.14 per day.

There’s actually a good statistical reason to expect that the smallest countries should vary the most in their incomes; you’re averaging over a smaller sample so you get more variance in the estimate. But this doesn’t explain why Norway is rich and Eritrea is poor. Incomes aren’t assigned randomly. This might be a reason to try comparing Norway to specifically New York City or Los Angeles rather than to the United States as a whole (Norway still does better, in case you were wondering—especially compared to LA); but it’s not a reason to say that Norway’s wealth doesn’t really count.

Is it because they are ethnically homogeneous? Yes, relatively speaking; but perhaps not as much as you imagine. 14% of Sweden’s population is immigrants, of which 64% are from outside the EU. 10% of Denmark’s population is comprised of immigrants, of which 66% came from non-Western countries. Immigrants are 13% of Norway’s population, of which half are from non-Western countries.

That’s certainly more ethnically homogeneous than the United States; 13% of our population is immigrants, which may sound comparable, but almost all non-immigrants in Scandinavia are of indigenous Nordic descent, all “White” by the usual classification. Meanwhile the United States is 64% non-Hispanic White, 16% Hispanic, 12% Black, 5% Asian, and 1% Native American or Pacific Islander.

Scandinavian countries are actually by some measures less homogeneous than the US in terms of religion, however; only 4% of Americans are not Christian (78.5%), atheist (16.1%), or Jewish (1.7%), and only 0.6% are Muslim. In Sweden, on the other hand, 60% of the population is nominally Lutheran, but 80% is atheist, and 5% of the population is Muslim. So if you think of Christian/Muslim as the sharp divide (theologically this doesn’t make a whole lot of sense, but it seems to be the cultural norm in vogue), then Sweden has more religious conflict to worry about than the US does.

Moreover, there are some very ethnically homogeneous countries that are in horrible shape. North Korea is almost completely ethnically homogeneous, for example, as is Haiti. There does seem to be a correlation between higher ethnic diversity and lower economic prosperity, but Canada and the US are vastly more diverse than Japan and South Korea yet significantly richer. So clearly ethnicity is not the whole story here.

I do think ethnic homogeneity can partly explain why Scandinavian countries have the good policies they do; because humans are tribal, ethnic homogeneity engenders a sense of unity and cooperation, a notion that “we are all in this together”. That egalitarian attitude makes people more comfortable with some of the policies that make Scandinavia what it is, which I will get into at the end of this post.

What about culture? Is there something about Nordic ideas, those Viking traditions, that makes Scandinavia better? Miles Kimball has argued this; he says we need to import “hard work, healthy diets, social cohesion and high levels of trust—not Socialism”. And truth be told, it’s hard to refute this assertion, since it’s very difficult to isolate and control for cultural variables even though we know they are important.

But this difficulty in falsification is a reason to be cautious about such a hypothesis; it should be a last resort when all the more testable theories have been ruled out. I’m not saying culture doesn’t matter; it clearly does. But unless you can test it, “culture” becomes a theory that can explain just about anything—which means that it really explains nothing.

The “social cohesion and high levels of trust” part actually can be tested to some extent—and it is fairly well supported. High levels of trust are strongly correlated with economic prosperity. But we don’t really need to “import” that; the US is already near the top of the list in countries with the highest levels of trust.

I can’t really disagree with “good diet”, except to say that almost everywhere eats a better diet than the United States. The homeland of McDonald’s and Coca-Cola is frankly quite dystopian when it comes to rates of heart disease and diabetes. Given our horrible diet and ludicrously inefficient healthcare system, the only reason we live as long as we do is that we are an extremely rich country (so we can afford to pay the most for healthcare, for certain definitions of “afford”), and almost no one here smokes anymore. But good diet isn’t so much Scandinavian as it is… un-American.

But as for “hard work”, he’s got it backwards; the average number of work hours per week is 33 in Denmark and Norway, compared to 38 in the US. Among full-time workers in the US, the average number of hours per week is a whopping 47. Working hours in the US are much more intensive than anywhere in Europe, including Scandinavia. Though of course we are nowhere near the insane work addiction suffered by most East Asian countries; lately South Korea and Japan have been instituting massive reforms to try to get people to stop working themselves to death. And not surprisingly, work-related stress is a leading cause of death in the United States. If anything, we need to import some laziness, or at least a sense of work-life balance. (Indeed, I’m fairly sure that the only reason he said “hard work” is that it’s a cultural Applause Light in the US; being against hard work is like being against the American Flag or homemade apple pie. At this point, “we need more hard work” isn’t so much an assertion as it is a declaration of tribal membership.)

But none of these things adequately explains why poverty and inequality are so much lower in Scandinavia than in the United States, and there’s really a quite simple explanation.

Why is it that #ScandinaviaIsBetter? They’re not afraid to make rich people pay higher taxes so they can help poor people.

In the US, this idea of “redistribution of wealth” is anathema, even taboo; simply accusing a policy of being “redistributive” or “socialist” is for many Americans a knock-down argument against that policy. In Denmark, “socialist” is a meaningful descriptor; some policies are “socialist”, others “capitalist”, and these aren’t particularly weighted terms; it’s like saying here that a policy is “Keynesian” or “Monetarist”, or if that’s too obscure, saying that it’s “liberal” or “conservative”. People will definitely take sides, and it is a matter of political importance—but it’s inside the Overton Window. It’s not almost-unthinkable, the way it is here.

If culture has an effect here, it likely comes from Scandinavia’s long traditions of egalitarianism. Going at least back to the Vikings, in theory at least (clearly not always in practice), people—or at least fellow Scandinavians—were considered equal participants in society, no one “better” or “higher” than anyone else. Even today, it is impolite in Denmark to express pride at your own accomplishments; there’s a sense that you are trying to present yourself as somehow more deserving than others. Honestly this attitude seems unhealthy to me, though perhaps preferable to the unrelenting narcissism of American society; but insofar as culture is making Scandinavia better, it’s almost certainly because this thoroughgoing sense of egalitarianism underlies all their economic policy. In the US, the rich are brilliant and the poor are lazy; in Denmark, the rich are fortunate and the poor are unlucky. (Which theory is more accurate? Donald Trump. I rest my case.)

To be clear, Scandinavia is not communist; and they are certainly not Stalinist. They don’t believe in total collectivization of industry, or complete government control over the economy. They don’t believe in complete, total equality, or even a hard cap on wealth: Stefan Persson is an 11-figure billionaire. Does he pay high taxes, living in Sweden? Yes he does, considerably higher than he’d pay in the US. He seems to be okay with that. Why, it’s almost like his marginal utility of wealth is now negligible.

Scandinavian countries also don’t try to micromanage your life in the way often associated with “socialism”–in fact I’d say they do it less than we do in the US. Here we have Republicans who want to require drug tests for food stamps even though that literally wastes money and helps no one; there they just provide a long list of government benefits for everyone free of charge. They just held a conference in Copenhagen to discuss the possibility of transitioning many of these benefits into a basic income; and basic income is the least intrusive means of redistributing wealth.

In fact, because Scandinavian countries tax differently, it’s not necessarily the case that people always pay higher taxes there. But they pay more transparent taxes, and taxes with sharper incidence. Denmark’s corporate tax rate is only 22% compared to 35% in the US; but their top personal income tax bracket is 59% while ours is only 39.6% (though it can rise over 50% with some state taxes). Denmark also has a land value tax and a VAT, both of which most economists have clamored for for generations. (The land value tax I totally agree with; the VAT I’m a little more ambivalent about.) Moreover, filing your taxes in Denmark is not a month-long stress marathon of gathering paperwork, filling out forms, and fearing that you’ll get something wrong and be audited as it is in the US; they literally just send you a bill. You can contest it, but most people don’t. You just pay it and you’re done.

Now, that does mean the government is keeping track of your income; and I might think that Americans would never tolerate such extreme surveillance… and then I remember that PRISM is a thing. Apparently we’re totally fine with the NSA reading our emails, but God forbid the IRS just fill out our 1040s for us (that they are going to read anyway). And there’s no surveillance involved in requiring retail stores to incorporate sales tax into listed price like they do in Europe instead of making us do math at the cash register like they do here. It’s almost like Americans are trying to make taxes as painful as possible.

Indeed, I think Scandinavian socialism is a good example of how high taxes are a sign of a free society, not an authoritarian one. Taxes are a minimal incursion on liberty. High taxes are how you fund a strong government and maintain extensive infrastructure and public services while still being fair and following the rule of law. The lowest tax rates in the world are in North Korea, which has ostensibly no taxes at all; the government just confiscates whatever they decide they want. Taxes in Venezuela are quite low, because the government just owns all the oil refineries (and also uses multiple currency exchange rates to arbitrage seigniorage). US taxes are low by First World standards, but not by world standards, because we combine a free society with a staunch opposition to excessive taxation. Most of the rest of the free world is fine with paying a lot more taxes than we do. In fact, even using Heritage Foundation data, there is a clear positive correlation between higher tax rates and higher economic freedom:
Graph: Heritage Foundation Economic Freedom Index and tax burden

What’s really strange, though, is that most Americans actually support higher taxes on the rich. They often have strange or even incoherent ideas about what constitutes “rich”; I have extended family members who have said they think $100,000 is an unreasonable amount of money for someone to make, yet somehow are totally okay with Donald Trump making $300,000,000. The chant “we are the 99%” has always been off by a couple orders of magnitude; the plutocrat rentier class is the top 0.01%, not the top 1%. The top 1% consists mainly of doctors and lawyers and engineers; the top 0.01%, to a man—and they are nearly all men, in fact White men—either own corporations or work in finance. But even adjusting for all this, it seems like at least a bare majority of Americans are all right with “redistributive” “socialist” policies—as long as you don’t call them that.

So I suppose that’s sort of what I’m trying to do; don’t think of it as “socialism”. Think of it as #ScandinaviaIsBetter.

Nuclear power is safe. Why don’t people like it?

Sep 24, JDN 2457656

This post will have two parts, corresponding to each sentence. First, I hope to convince you that nuclear power is safe. Second, I’ll try to analyze some of the reasons why people don’t like it and what we might be able to do about that.

Depending on how familiar you are with the statistics on nuclear power, the idea that nuclear power is safe may strike you as either a completely ridiculous claim or an egregious understatement. If your primary familiarity with nuclear power safety is via the widely-publicized examples of Chernobyl, Three Mile Island, and more recently Fukushima, you may have the impression that nuclear power carries huge, catastrophic risks. (You may also be confusing nuclear power with nuclear weapons—nuclear weapons are indeed the greatest catastrophic risk on Earth today, but equating the two is like equating automobiles and machine guns because both of them are made of metal and contain lubricant, flammable materials, and springs.)

But in fact nuclear energy is astonishingly safe. Indeed, even those examples aren’t nearly as bad as people have been led to believe. Guess how many people died as a result of Three Mile Island, including estimated increased cancer deaths from radiation exposure?

Zero. There are zero confirmed deaths and the consensus estimate of excess deaths caused by the Three Mile Island incident by all causes combined is zero.

What about Fukushima? Didn’t 10,000 people die there? From the tsunami, yes. But the nuclear accident resulted in zero fatalities. If anything, those 10,000 people were killed by coal—by climate change. They certainly weren’t killed by nuclear.

Chernobyl, on the other hand, did actually kill a lot of people. Chernobyl caused 31 confirmed direct deaths, as well as an estimated 4,000 excess deaths by all causes. On the one hand, that’s more than 9/11; on the other hand, it’s about a month of US car accidents. Imagine if people had the same level of panic and outrage at automobiles after a month of accidents that they did at nuclear power after Chernobyl.

The vast majority of nuclear accidents cause zero fatalities; other than Chernobyl, none have ever caused more than 10. Deepwater Horizon killed 11 people, and yet for some reason Americans did not unite in opposition against ever using oil (or even offshore drilling!) ever again.

In fact, even that isn’t fair to nuclear power, because we’re not including the thousands of lives saved every year by using nuclear instead of coal and oil.

Keep in mind, the WHO estimates 10 to 100 million excess deaths due to climate change over the 21st century. That’s an average of 100,000 to 1 million deaths every year. Nuclear power currently produces about 11% of the world’s energy, so let’s do a back-of-the-envelope calculation for how many lives that’s saving. Assuming that additional climate change would be worse in direct proportion to the additional carbon emissions (which is conservative), and assuming that half that energy would be replaced by coal or oil (also conservative, using Germany’s example), we’re looking at about a 6% increase in deaths due to climate change if all those nuclear power plants were closed. That’s 6,000 to 60,000 lives that nuclear power plants save every year.
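
For the skeptical, here is that back-of-the-envelope calculation spelled out as a tiny script; every input is a figure or conservative assumption stated in the paragraph above.

```python
# The back-of-the-envelope calculation above, spelled out. All inputs are the
# figures and conservative assumptions stated in the text.
deaths_low, deaths_high = 1e5, 1e6   # WHO: 10-100 million excess deaths over ~100 years

nuclear_share_of_energy = 0.11       # nuclear's current share of world energy
fossil_replacement      = 0.5        # fraction replaced by coal/oil if plants closed

extra_emissions_share = nuclear_share_of_energy * fossil_replacement  # 0.055, "about 6%"

print(f"Lives saved per year: {deaths_low * extra_emissions_share:,.0f} "
      f"to {deaths_high * extra_emissions_share:,.0f}")
# -> roughly 5,500 to 55,000, i.e. the ~6,000-60,000 range quoted above
```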

I also haven’t included deaths due to pollution—note that nuclear power plants don’t pollute air or water whatsoever, and only produce very small amounts of waste that can be quite safely stored. Air pollution in all its forms is responsible for one in eight deaths worldwide. Let me say that again: One in eight of all deaths in the world is caused by air pollution—so this is on the order of 7 million deaths per year, every year. We burn our way through roughly a Holocaust every year. Most of this pollution is actually caused by burning wood—fireplaces, wood stoves, and bonfires are terrible for the air—and many countries would actually see a substantial reduction in their toxic pollution if they switched from wood to oil or even coal. But a large part of that pollution is caused by coal, and a nontrivial amount is caused by oil. Coal-burning factories and power plants are responsible for about 1 million deaths per year in China alone. Most of that pollution could be prevented if those power plants were nuclear instead.

Factor all that in, and nuclear power currently saves tens if not hundreds of thousands of lives per year, and expanding it to replace all fossil fuels could save millions more. Indeed, a more precise estimate of the benefits of nuclear power published a few years ago in Environmental Science and Technology is that nuclear power plants have saved some 1.8 million human lives since their invention, putting them on a par with penicillin and the polio vaccine.

So, I hope I’ve convinced you of the first proposition: Nuclear power plants are safe—and not just safe, but heroic, in fact one of the greatest life-saving technologies ever invented. So, why don’t people like them?

Unfortunately, I suspect that no amount of statistical data by itself will convince those who still feel a deep-seated revulsion to nuclear power. Even many environmentalists, people who could be nuclear energy’s greatest advocates, are often opposed to it. I read all the way through Naomi Klein’s This Changes Everything and never found even a single cogent argument against nuclear power; she simply takes it as obvious that nuclear power is “more of the same line of thinking that got us in this mess”. Perhaps because nuclear power could be enormously profitable for certain corporations (which is true; but then, it’s also true of solar and wind power)? Or because it also fits this narrative of “raping and despoiling the Earth” (sort of, I guess)? She never really does explain; I’m guessing she assumes that her audience will simply share her “gut feeling” intuition that nuclear power is dangerous and untrustworthy. One of the most important inconvenient truths for environmentalists is that nuclear power is not only safe, it is almost certainly our best hope for stopping climate change.

Perhaps all this is less baffling when we recognize that other heroic technologies are often also feared or despised for similarly bizarre reasons—vaccines, for instance.

First of all, human beings fear what we cannot understand, and while the human immune system is certainly immensely complicated, nuclear power is based on quantum mechanics, a realm of scientific knowledge so difficult and esoteric that it is frequently used as the paradigm example of something that is hard to understand. (As Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.”) Nor does it help that popular treatments of quantum physics typically bear about as much resemblance to the actual content of the theory as the X-Men films do to evolutionary biology, and con artists like Deepak Chopra take advantage of this confusion to peddle their quackery.

Nuclear radiation is also particularly terrifying because it is invisible and silent; while a properly-functioning nuclear power plant emits less ionizing radiation than the Capitol Building and eating a banana poses substantially higher radiation risk than talking on a cell phone, nonetheless there is real danger posed by ionizing radiation, and that danger is particularly terrifying because it takes a form that human senses cannot detect. When you are burned by fire or cut by a knife, you know immediately; but gamma rays could be coursing through you right now and you’d feel no different. (Huge quantities of neutrinos are coursing through you, but fear not, for they’re completely harmless.) The symptoms of severe acute radiation poisoning also take a particularly horrific form: After the initial phase of nausea wears off, you can enter a “walking ghost phase”, where your eventual death is almost certain due to your compromised immune and digestive systems, but your current condition is almost normal. This makes the prospect of death by nuclear accident a particularly vivid and horrible image.

Vividness makes ideas more available to our memory; and thus, by the availability heuristic, we automatically infer that it must be more probable than it truly is. You can think of horrific nuclear accidents like Chernobyl, and all the carnage they caused; but all those millions of people choking to death in China don’t make for a compelling TV news segment (or at least, our TV news doesn’t seem to think so). Vividness doesn’t actually seem to make things more persuasive, but it does make them more memorable.

Yet even if we allow for the possibility that death by radiation poisoning is somewhat worse than death by coal pollution (if I had to choose between the two, okay, maybe I’d go with the coal), surely it’s not ten thousand times worse? Surely it’s not worth sacrificing entire cities full of people to coal in order to prevent a handful of deaths by nuclear energy?

Another reason that has been proposed is a sense that we can control risk from other sources, but a nuclear meltdown would be totally outside our control. Perhaps that is the perception, but if you think about it, it really doesn’t make a lot of sense. If there’s a nuclear meltdown, emergency services will report it, and you can evacuate the area. Yes, the radiation moves at the speed of light; but it also dissipates as the inverse square of distance, so if you just move further away you can get a lot safer quite quickly. (Think about the brightness of a lamp in your face versus across a football field. Radiation works the same way.) The damage is also cumulative, so the radiation risk from a meltdown is only going to be serious if you stay close to the reactor for a sustained period of time. Indeed, it’s much easier to avoid nuclear radiation than it is to avoid air pollution; you can’t just stand behind a concrete wall to shield against air pollution, and moving further away isn’t possible if you don’t know where it’s coming from. Control would explain why we fear cars less than airplanes (which is also statistically absurd), but it really can’t explain why nuclear power scares people more than coal and oil.
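
The inverse-square point is worth making concrete; here is a minimal sketch, treating the source as roughly point-like and ignoring shielding (which only helps further).

```python
# The "just move away" point in numbers: for a roughly point-like source with
# no shielding, dose rate falls off as 1/r^2, so each doubling of distance
# cuts it by a factor of four.
def relative_dose_rate(distance_m, reference_m=1.0):
    """Dose rate at distance_m, relative to the rate at reference_m."""
    return (reference_m / distance_m) ** 2

for d in (1, 10, 100, 1000):
    print(f"{d:>5} m: {relative_dose_rate(d):.6f} of the 1 m dose rate")
# At 1 km you are down to one-millionth of the dose rate you'd get at 1 m.
```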

Another important factor may be an odd sort of bipartisan consensus: While the Left hates nuclear power because it makes corporations profitable or because it’s unnatural and despoils the Earth or something, the Right hates nuclear power because it requires substantial government involvement and might displace their beloved fossil fuels. (The Right’s deep, deep love of the fossil fuel industry now borders on the pathological. Even now that they are obviously economically inefficient and environmentally disastrous, right-wing parties around the world continue to defend enormous subsidies for oil and coal companies. Corruption and regulatory capture could partly explain this, but only partly. Campaign contributions can’t explain why someone would write a book praising how wonderful fossil fuels are and angrily denouncing anyone who would dare criticize them.) So while the two sides may hate each other in general and disagree on most other issues—including of course climate change itself—they can at least agree that nuclear power is bad and must be stopped.

Where do we go from here, then? I’m not entirely sure. As I said, statistical data by itself clearly won’t be enough. We need to find out what it is that makes people so uniquely terrified of nuclear energy, and we need to find a way to assuage those fears.

And we must do this now. For every day we don’t—every day we postpone the transition to a zero-carbon energy grid—is another thousand people dead.

Toward an economics of social norms

Sep 17, JDN 2457649

It is typical in economics to assume that prices are set by perfect competition in markets with perfect information. This is obviously ridiculous, so many economists do go further and start looking into possible distortions of the market, such as externalities and monopolies. But almost always the assumption is still that human beings are neoclassical rational agents, what I call “infinite identical psychopaths”, selfish profit-maximizers with endless intelligence and zero empathy.

What happens when we recognize that human beings are not like this, but in fact are empathetic, social creatures, who care about one another and work toward the interests of (what they perceive to be) their tribe? How are prices really set? What actually decides what is made and sold? What does economics become once you understand sociology? (The good news is that experiments are now being done to find out.)

Presumably some degree of market competition is involved, and no small amount of externalities and monopolies. But one of the very strongest forces involved in setting prices in the real world is almost completely ignored, and that is social norms.

Social norms are tremendously powerful. They will drive us to bear torture, fight and die on battlefields, even detonate ourselves as suicide bombs. When we talk about “religion” or “ideology” motivating people to do things, really what we are talking about is social norms. While some weaker norms can be overridden, no amount of economic incentive can ever override a social norm at its full power. Moreover, most of our behavior in daily life is driven by social norms: How to dress, what to eat, where to live. Even the fundamental structure of our lives is written by social norms: Go to school, get a job, get married, raise a family.

Even academic economists, who imagine themselves one part purveyor of ultimate wisdom and one part perfectly rational agent, are clearly strongly driven by social norms—what problems are “interesting”, which researchers are “renowned”, what approaches are “sensible”, what statistical methods are “appropriate”. If economists were perfectly rational, dynamic stochastic general equilibrium models would be in the dustbin of history (because, like string theory, they have yet to lead to a single useful empirical prediction), research journals would not be filled with endless streams of irrelevant but impressive equations (I recently read one that basically spent half a page of calculus re-deriving the concept of GDP—and computer-generated gibberish has been published, because its math looked so impressive), and instead of frequentist p-values (and often misinterpreted at that), all the statistics would be written in the form of Bayesian log-odds.

Indeed, in light of all this, I often like to say that to a first approximation, all human behavior is social norms.

How does this affect buying and selling? Well, first of all, there are some things we refuse to buy and sell, or at least that most of us refuse to buy and sell, and use social pressure, public humiliation, or even the force of law to prevent others from selling. You’re not supposed to sell children. You’re not supposed to sell your vote. You’re not even supposed to sell sexual favors (though every society has always had a large segment of people who do, and more recently people are becoming more open to the idea of at least decriminalizing it). If we were neoclassical rational agents, we would have no such qualms; if we want something and someone is willing to sell it to us, we’ll buy it. But as actual human beings with emotions and social norms, we recognize that there is something fundamentally different about selling your vote as opposed to selling a shirt or a television. It’s not always immediately obvious where to draw the line, which is why sex work can be such a complicated issue (You can’t get paid to have sex… unless someone is filming it?). Different societies may do it differently: Part of the challenge of fighting corruption in Third World countries is that much of what we call corruption—and which actually is harmful to long-run economic development—isn’t perceived as “corruption” by the people involved in it, but simply as social custom (“Of course I’d hire my cousin! What kind of cousin would I be if I didn’t?”). Yet despite all that, almost everyone agrees that there is a line to be drawn. So there are whole markets that theoretically could exist, but don’t, or only exist as tiny black markets most people never participate in, because we consider selling those things morally wrong. Recently a whole subfield of cognitive economics has emerged studying these repugnant markets.

Even if a transaction is not considered so repugnant as to be unacceptable, there are also other classes of goods that are in some sense unsavory; something you really shouldn’t buy, but you’re not a monster for doing so. These are often called sin goods, and they have always included drugs, alcohol, and gambling—and I do mean always, as every human civilization has had these things—they include prostitution where it is legal, and as social norms change they are now beginning to include oil and coal as well (which can only be good for the future of Earth’s climate). Sin goods are systematically more expensive than they should be for their marginal cost, because most people are unwilling to participate in selling them. As a result, the financial returns for producing sin goods are systematically higher. Actually, this could partially explain why Wall Street banks are so profitable; when the banking system is as corrupt as it is (and you’re not imagining that: banks have been caught laundering money for terrorists), then banking becomes a sin good, and good people don’t want to participate in it. Or perhaps the effect runs the other way around: Banking has been viewed as sinful for centuries (in Medieval times, usury was punished much the same way as witchcraft), and as a result only the sort of person who doesn’t care about social and moral norms becomes a banker—and so the banking system becomes horrifically corrupt. Is this a reason for good people to force ourselves to become bankers? Or is there another way—perhaps credit unions?

There are other ways that social norms drive prices as well. We have a concept of a “fair wage”, which is quite distinct from the economic concept of a “market-clearing wage”. When people ask whether someone’s wage is fair, they don’t look at supply and demand and try to determine whether there are too many or too few people offering that service. They ask themselves what the labor is worth—what value it has added—and how hard that person has worked to do it—what cost it bore. Now, these aren’t totally unrelated to supply and demand (people are less likely to supply harder work, people are more likely to demand higher value), so it’s conceivable that these heuristics could lead us to more or less achieve the market-clearing wage most of the time. But there are also some systematic distortions to consider.

Perhaps the most important way fairness matters in economics is necessities: Basic requirements for human life such as food, housing, and medicine. The structure of our society also makes transportation, education, and Internet access increasingly necessary for basic functioning. From the perspective of an economist, it is a bit paradoxical how angry people get when the price of something important (such as healthcare) is increased: If it’s extremely valuable, shouldn’t you be willing to pay more? Why does it bother you less when something like a Lamborghini or a Rolex rises in price, something that almost certainly wasn’t even worth its previous price? You’re going to buy the necessities anyway, right? Well, as far as most economists are concerned, that’s all that matters—what gets bought and sold. But of course as a human being I do understand why people get angry about these things, and it is because they have to buy them anyway. When someone like Martin Shkreli raises the prices on basic goods, we feel exploited. There’s even a way to make this economically formal: When demand is highly inelastic, we are rightly very sensitive to the possibility of a monopoly, because monopolies under inelastic demand can extract huge profits and cause similarly huge amounts of damage to the welfare of their customers. That isn’t quite how most people would put it, but I think that has something to do with the ultimate reason we evolved that heuristic: It’s dangerous to let someone else control your basic necessities, because that gives them enormous power to exploit you. If they control things that aren’t as important to you, that doesn’t matter so much, because you can always do without if you must. So a norm that keeps businesses from overcharging on necessities is very important—and probably not as strong anymore as it should be.
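For the economically inclined, the usual way to formalize that last point is the Lerner index: at a monopolist’s profit-maximizing price, (P - MC)/P = 1/|ε|, where ε is the price elasticity of demand. Here is a minimal sketch; the marginal cost and elasticity values are arbitrary illustrations, but the formula is the standard one.

```python
# Minimal sketch of the Lerner index: (P - MC)/P = 1/|elasticity| at the
# profit-maximizing price. Marginal cost and elasticities are illustrative.

def monopoly_price(marginal_cost, elasticity):
    """Profit-maximizing price under constant price elasticity (|e| > 1)."""
    e = abs(elasticity)
    if e <= 1:
        raise ValueError("no interior optimum when demand is this inelastic")
    return marginal_cost * e / (e - 1)

mc = 10.0
for e in (5.0, 2.0, 1.2, 1.05):
    p = monopoly_price(mc, e)
    print(f"|elasticity| = {e:>4}: price = {p:7.2f}, markup = {(p - mc) / p:.0%}")
# As demand gets closer to perfectly inelastic, the markup explodes,
# which is why monopolies over necessities are so dangerous.
```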

Another very important way that fairness and markets can be misaligned is talent: What if something is just easier for one person than another? If you achieve the same goal with half the work, should you be rewarded more for being more efficient, or less because you bore less cost? Neoclassical economics doesn’t concern itself with such questions, asking only if supply and demand reached equilibrium. But we as human beings do care about such things; we want to know what wage a person deserves, not just what wage they would receive in a competitive market.

Could we be wrong to do that? Might it be better if we just let the market do its work? In some cases I think that may actually be true. Part of why CEO pay is rising so fast despite being uncorrelated with corporate profitability or even negatively correlated is that CEOs have convinced us (or convinced their boards of directors) that this is fair, that they deserve more stock options. They even convince them that their pay is based on performance, by using highly distorted measures of performance. If boards thought more like economic rational agents, when a CEO asked for more pay they’d ask: “What other company gave you a higher offer?” and if the CEO didn’t have an answer, they’d laugh and refuse the raise. Because in purely economic terms, that is all a salary does: it keeps you from quitting to work somewhere else. The competitive mechanism of the market is supposed to then ensure that your wage aligns with your marginal cost and marginal productivity purely due to that.

On the other hand, there are many groups of people who simply aren’t doing very well in the market: Women, racial minorities, people with disabilities. There are a lot of reasons for this, some of which might go away if markets were made more competitive—the classic argument that competitive markets reward companies that don’t discriminate—but many clearly wouldn’t. Indeed, that argument was never as strong as it at first appears; in a society where social norms are strongly in favor of bigotry, it can be completely economically rational to participate in bigotry to avoid being penalized. When Chick-Fil-A was revealed to have donated to anti-LGBT political groups, many people tried to boycott—but their sales actually increased from the publicity. Honestly it’s a bit baffling that they promised not to donate to such causes anymore; it was apparently a profitable business decision to be revealed as supporters of bigotry. And even when discrimination does hurt economic performance, companies are run by human beings, and they are still quite capable of discriminating regardless. Indeed, the best evidence we have that discrimination is inefficient comes from… businesses that persist in discriminating despite the fact that it is inefficient.

But okay, suppose we actually did manage to make everyone compensated according to their marginal productivity. (Or rather, what Rawls derided: “From each according to his marginal productivity, to each according to his threat advantage.”) The market would then clear and be highly efficient. Would that actually be a good thing? I’m not so sure.

A lot of people are highly unproductive through no fault of their own—particularly children and people with disabilities. Much of this is not discrimination; it’s just that they aren’t as good at providing services. Should we simply leave them to fend for themselves? Then there’s the key point about what marginal means in this case—it means “given what everyone else is doing”. But that means that you can be made obsolete by someone else’s actions, and in this era of rapid technological advancement, jobs become obsolete faster than ever. Unlike a lot of people, I recognize that it makes no sense to keep people working at jobs that can be automated—the machines are better. But still, what do we do with the people whose jobs have been eliminated? Do we treat them as worthless? When automated buses become affordable—and they will; I give it 20 years—do we throw the human bus drivers under them?

One way out is of course a basic income: Let the market wage be what it will, and then use the basic income to provide for what human beings deserve irrespective of their market productivity. I definitely support a basic income, of course, and this does solve the most serious problems like children and quadriplegics starving in the streets.

But as I read more of the arguments by people who favor a job guarantee instead of a basic income, I begin to understand better why they are uncomfortable with the idea: It doesn’t seem fair. A basic income breaks once and for all the link between “a fair day’s work” and “a fair day’s wage”. It runs counter to this very deep-seated intuition most people have that money is what you earn—and thereby deserve—by working, and only by working. That is an extremely powerful social norm, and breaking it will be very difficult; so it’s worth asking: Should we even try to break it? Is there a way to achieve a system where markets are both efficient and fair?

I’m honestly not sure; but I do know that we could make substantial progress from where we currently stand. Most billionaire wealth is pure rent in the economic sense: It’s received by corruption and market distortion, not by efficient market competition. Most poverty is due to failures of institutions, not lack of productivity of workers. As George Monbiot famously wrote, “If wealth was the inevitable result of hard work and enterprise, every woman in Africa would be a millionaire.” Most of the income disparity between White men and others is due to discrimination, not actual skill—and what skill differences there are are largely the result of differences in education and upbringing anyway. So if we do in fact correct these huge inefficiencies, we will also be moving toward fairness at the same time. But still that nagging thought remains: When all that is done, will there come a day when we must decide whether we would rather have an efficient economy or a just society? And if it does, will we decide the right way?

Zootopia taught us constructive responses to bigotry

Sep 10, JDN 2457642

Zootopia wasn’t just a good movie; Zootopia was a great movie. I’m not just talking about its grosses (over $1 billion worldwide), or its ratings, 8.1 on IMDB, 98% from critics and 93% from viewers on Rotten Tomatoes, 78 from critics and 8.8 from users on Metacritic. No, I’m talking about its impact on the world. This movie isn’t just a fun and adorable children’s movie (though it is that). This movie is a work of art that could have profound positive effects on our society.

Why? Because Zootopia is about bigotry—and more than that, it doesn’t just say “bigotry is bad, bigots are bad”; it provides us with a constructive response to bigotry, and forces us to confront the possibility that sometimes the bigots are us.

Indeed, it may be no exaggeration (though I’m sure I’ll get heat on the Internet for suggesting it) to say that Zootopia has done more to fight bigotry than most social justice activists will achieve in their entire lives. Don’t get me wrong, some social justice activists have done great things; and indeed, I may have to count myself in this “most activists” category, since I can’t point to any major accomplishments I’ve yet made in social justice.

But one of the biggest problems I see in the social justice community is the tendency to exclude and denigrate (in sociology jargon, “other” as a verb) people for acts of bigotry, even quite mild ones. Make one vaguely sexist joke, and you may as well be a rapist. Use racially insensitive language by accident, and clearly you are a KKK member. Say something ignorant about homosexuality, and you may as well be Rick Santorum. It becomes less about actually moving the world forward, and more about reaffirming our tribal unity as social justice activists. We are the pure ones. We never do wrong. All the rest of you are broken, and the only way to fix yourself is to become one of us in every way.

In the process of fighting tribal bigotry, we form our own tribe and become our own bigots.

Zootopia offers us another way. If you haven’t seen it, go rent it on DVD or stream it on Netflix right now. Seriously, this blog post will be here when you get back. I’m not going to play any more games with “spoilers!” though. It is definitely worth seeing, and from this point forward I’m going to presume you have.

The brilliance of Zootopia lies in the fact that it made bigotry what it is—not some evil force that infests us from outside, nor something that only cruel, evil individuals would ever partake in, but thoughts and attitudes that we all may have from time to time, that come naturally, and even in some cases might be based on a kernel of statistical truth. Judy Hopps is prey, she grew up in a rural town surrounded by others of her own species (with a population the size of New York City according to the sign, because this is still sometimes a silly Disney movie). She only knew a handful of predators growing up, yet when she moves to Zootopia suddenly she’s confronted with thousands of them, all around her. She doesn’t know what most predators are like, or how best to deal with them.

What she does know is that her ancestors were terrorized, murdered, and quite literally eaten by the ancestors of predators. Her instinctual fear of predators isn’t something utterly arbitrary; it was written into the fabric of her DNA by her ancestral struggle for survival. She has a reason to hate and fear predators that, on its face, actually seems to make sense.

And when there is a spree of murders, all committed by predators, it feels natural to us that Judy would fall back on her old prejudices; indeed, the brilliance of it is that they don’t immediately feel like prejudices. It takes us a moment to let her off-the-cuff comments at the press conference sink in (and Nick’s shocked reaction surely helps), before we realize that was really bigoted. Our adorable, innocent, idealistic, beloved protagonist is a bigot!

Or rather, she has done something bigoted. Because she is such a sympathetic character, we avoid the implication that she is a bigot, that this is something permanent and irredeemable about her. We have already seen the good in her, so we know that this bigotry isn’t what defines who she is. And in the end, she realizes where she went wrong and learns to do better. Indeed, it is ultimately revealed that the murders were orchestrated by someone whose goal was specifically to trigger those ancient ancestral feuds, and Judy reveals that plot and ultimately ends up falling in love with a predator herself.

What Zootopia is really trying to tell us is that we are all Judy Hopps. Every one of us most likely harbors some prejudiced attitude toward someone. If it’s not Black people or women or Muslims or gays, well, how about rednecks? Or Republicans? Or (perhaps the hardest for me) Trump supporters? If you are honest with yourself, there is probably some group of people on this planet that you harbor attitudes of disdain or hatred toward that nonetheless contains a great many good people who do not deserve your disdain.

And conversely, all bigots are Judy Hopps too, or at least the vast majority of them. People don’t wake up in the morning concocting evil schemes for the sake of being evil like cartoon supervillains. (Indeed, perhaps the greatest thing about Zootopia is that it is a cartoon in the sense of being animated, but it is not a cartoon in the sense of being morally simplistic. Compare Captain Planet, wherein polluters aren’t hardworking coal miners with no better options or even corrupt CEOs out to make an extra dollar to go with their other billion; no, they pollute on purpose, for no reason, because they are simply evil. Now that is a cartoon.) Normal human beings don’t plan to make the world a worse place. A handful of psychopaths might, but even then I think it’s more that they don’t care; they aren’t trying to make the world worse, they just don’t particularly mind if they do, as long as they get what they want. Robert Mugabe and Kim Jong-un are despicable human beings with the blood of millions on their hands, but even they aren’t trying to make the world worse.

And thus, if your theory of bigotry requires that bigots are inhuman monsters who harm others by their sheer sadistic evil, that theory is plainly wrong. Actually I think when stated outright, hardly anyone would agree with that theory; but the important thing is that we often act as if we do. When someone does something bigoted, we shun them, deride them, push them as far as we can to the fringes of our own social group or even our whole society. We don’t say that your statement was racist; we say you are racist. We don’t say your joke was sexist; we say you are sexist. We don’t say your decision was homophobic; we say you are homophobic. We define bigotry as part of your identity, something as innate and ineradicable as your race or sex or sexual orientation itself.

I think I know why we do this: It is to protect ourselves from the possibility that we ourselves might sometimes do bigoted things. Because only bigots do bigoted things, and we know that we are not bigots.

We laugh at this when someone else does it: “But some of my best friends are Black!” “Happy #CincoDeMayo; I love Hispanics!” But that is the very same psychological defense mechanism we’re using ourselves, albeit in a more extreme application. When we commit an act that is accused of being bigoted, we begin searching for contextual evidence outside that act to show that we are not bigoted. The truth we must ultimately confront is that this is irrelevant: The act can still be bigoted even if we are not overall bigots—for we are all Judy Hopps.

This seems like terrible news, even when delivered by animated animals (or fuzzy muppets in Avenue Q), because we tend to hear it as “We are all bigots.” We hear this as saying that bigotry is inevitable, inescapable, literally written into the fabric of our DNA. At that point, we may as well give up, right? It’s hopeless!

But that much we know can’t be true. It could be (indeed, likely is) true that some amount of bigotry is inevitable, just as no country has ever managed to reach zero homicide or zero disease. But just as rates of homicide and disease have precipitously declined with the advancement of human civilization (starting around industrial capitalism, as I pointed out in a previous post!), so indeed have rates of bigotry, at least in recent times.

For goodness’ sake, it used to be a legal, regulated industry to buy and sell other human beings in the United States! This was seen as normal; indeed many argued that it was economically indispensable.

Is 1865 too far back for you? How about racially segregated schools, which were only eliminated from US law in 1954, a time when my parents were both alive? (To be fair, only barely; my father was a month old.) Yes, even today the racial composition of our schools is far from evenly mixed; but it used to be a matter of law that Black children could not go to school with White children.

Women were only granted the right to vote in the US in 1920. My parents weren’t alive yet, but there definitely are people still alive today who were children when the Nineteenth Amendment was ratified.

Same-sex marriage was not legalized across the United States until last year. My own life plans were suddenly and directly affected by this change.

We have made enormous progress against bigotry, in a remarkably short period of time. It has been argued that social change progresses by the death of previous generations; but that simply can’t be true, because we are moving much too fast for that! Attitudes toward LGBT people have improved dramatically in just the last decade.

Instead, it must be that we are actually changing people’s minds. Not everyone’s, to be sure; and often not as quickly as we’d like. But bit by bit, we tear bigotry down, like people tearing off tiny pieces of the Berlin Wall in 1989.

It is important to understand what we are doing here. We are not getting rid of bigots; we are getting rid of bigotry. We want to convince people, “convert” them if you like, not shun them or eradicate them. And we want to strive to improve our own behavior, because we know it will not always be perfect. By forgiving others for their mistakes, we can learn to forgive ourselves for our own.

It is only by talking about bigoted actions and bigoted ideas, rather than bigoted people, that we can hope to make this progress. Someone can’t change who they are, but they can change what they believe and what they do. And along those same lines, it’s important to be clear about detailed, specific actions that people can take to make themselves and the world better.

Don’t just say “Check your privilege!” which at this point is basically a meaningless Applause Light. Instead say “Here are some articles I think you should read on police brutality, including this one from The American Conservative. And there’s a Black Lives Matter protest next weekend, would you like to join me there to see what we do?” Don’t just say “Stop being so racist toward immigrants!”; say “Did you know that about a third of undocumented immigrants are college students on overstayed visas? If we deport all these people, won’t that break up families?” Don’t try to score points. Don’t try to show that you’re the better person. Try to understand, inform, and persuade. You are talking to Judy Hopps, for we are all Judy Hopps.

And when you find false beliefs or bigoted attitudes in yourself, don’t deny them, don’t suppress them, don’t make excuses for them—but also don’t hate yourself for having them. Forgive yourself for your mistake, and then endeavor to correct it. For we are all Judy Hopps.

The high cost of frictional unemployment

Sep 3, JDN 2457635

I had wanted to open this post with an estimate of the number of people in the world, or at least in the US, who are currently between jobs. It turns out that such estimates are essentially nonexistent. The Bureau of Labor Statistics maintains a detailed database of US unemployment; they don’t estimate this number. We have this concept in macroeconomics of frictional unemployment, the unemployment that results from people switching jobs; but nobody seems to have any idea how common it is.

I often hear a ballpark figure of about 4-5%, which is related to a notion that “full employment” should really be about 4-5% unemployment because otherwise we’ll trigger horrible inflation or something. There is almost no evidence for this. In fact, the US unemployment rate has gotten as low as 2.5%, and before that was stable around 3%. This was during the 1950s, the era of the highest income tax rates ever imposed in the United States, a top marginal rate of 92%. Coincidence? Maybe. Obviously there were a lot of other things going on at the time. But it sure does hurt the argument that high income taxes “kill jobs”, don’t you think?

Indeed, it may well be that the rate of frictional unemployment varies all the time, depending on all sorts of different factors. But here’s what we do know: Frictional unemployment is a serious problem, and yet most macroeconomists basically ignore it.

Talk to most macroeconomists about “unemployment”, and they will assume you mean either cyclical unemployment (the unemployment that results from recessions and bad fiscal and monetary policy responses to them), or structural unemployment (the unemployment that results from systematic mismatches between worker skills and business needs). If you specifically mention frictional unemployment, the response is usually that it’s no big deal and there’s nothing we can do about it anyway.

Yet at least when we aren’t in a recession, frictional unemployment very likely accounts for the majority of unemployment, and thus probably the majority of misery created by unemployment. (Not necessarily, since it probably doesn’t account for much long-term unemployment, which is by far the worst.) And it is quite clear to me that there are things we can do about it—they just might be difficult and/or expensive.

Most of you have probably changed jobs at least once. Many of you have, like me, moved far away to a new place for school or work. Think about how difficult that was. There is the monetary cost, first of all; you need to pay for the travel of course, and then usually leases and paychecks don’t line up properly for a month or two (for some baffling and aggravating reason, UCI won’t actually pay me my paychecks until November, despite demanding rent starting the last week of July!). But even beyond that, you are torn from your social network and forced to build a new one. You have to adapt to living in a new place which may have differences in culture and climate. Bureaucracy often makes it difficult to change over documentation such as your ID and your driver’s license.

And that’s assuming that you already found a job before you moved, which isn’t always an option. Many people move to new places and start searching for jobs when they arrive, which adds an extra layer of risk and difficulty above and beyond the transition itself.

With all this in mind, the wonder is that anyone is willing to move at all! And this is probably a large part of why people are so averse to losing their jobs even when it is clearly necessary; the frictional unemployment carries enormous real costs. (That and loss aversion, of course.)

What could we do, as a matter of policy, to make such transitions easier?

Well, one thing we could do is expand unemployment insurance, which reduces the cost of losing your job (which, despite the best efforts of Republicans in Congress, we ultimately did do in the Second Depression). We could expand unemployment insurance to cover voluntary quits. Right now, quitting voluntarily makes you forgo all unemployment benefits, which employers pay for in the form of insurance premiums; so an employer is much better off making your life miserable until you quit than they are laying you off. They could also fire you for cause, if they can find a cause (and usually there’s something they could trump up enough to get rid of you, especially if you’re not prepared for the protracted legal battle of a wrongful termination lawsuit). The reasoning of our current system appears to be something like this: Only lazy people ever quit jobs, and why should we protect lazy people? This is utter nonsense and it needs to go. Many states already have no-fault divorce and no-fault auto collision insurance; it’s time for no-fault employment termination.

We could establish a basic income of course; then when you lose your job your income would go down, but to a higher floor where you know you can meet certain basic needs. We could provide subsidized personal loans, similar to the current student loan system, that allow people to bear income gaps without losing their homes or paying exorbitant interest rates on credit cards.

We could use active labor market programs to match people with jobs, or train them with the skills needed for emerging job markets. Denmark has extensive active labor market programs (they call it “flexicurity”), and Denmark’s unemployment rate was 2.4% before the Great Recession, hit a peak of 6.2%, and has now recovered to 4.2%. What Denmark calls a bad year, the US calls a good year—and Greece fantasizes about as something they hope one day to achieve. #ScandinaviaIsBetter once again, and Norway fits this pattern also, though to be fair Sweden’s unemployment rate is basically comparable to the US or even slightly worse (though it’s still nothing like Greece).

Maybe it’s actually all right that we don’t have estimates of the frictional unemployment rate, because the goal really isn’t to reduce the number of people who are unemployed; it’s to reduce the harm caused by unemployment. Most of these interventions would very likely increase the rate of frictional unemployment, as people who always wanted to try to find better jobs but could never afford to would now be able to—but they would dramatically reduce the harm caused by that unemployment.

This is a more general principle, actually; it’s why we should basically stop taking seriously this argument that social welfare benefits destroy work incentives. That may well be true; so what? Maximizing work incentives was never supposed to be a goal of public policy, as far as I can tell. Maximizing human welfare is the goal, and the only way a welfare program could reduce work incentives is by making life better for people who aren’t currently working, and thereby reducing the utility gap between working and not working. If your claim is that the social welfare program (and its associated funding mechanism, i.e. taxes, debt, or inflation) would make life sufficiently worse for everyone else that it’s not worth it, then say that (and for some programs that might actually be true). But in and of itself, making life better for people who don’t work is a benefit to society. Your supposed downside is in fact an upside. If there’s a downside, it must be found elsewhere.

Indeed, I think it’s worth pointing out that slavery maximizes work incentives. If you beat or kill people who don’t work, sure enough, everyone works! But that is not even an efficient economy, much less a just society. To be clear, I don’t think most people who say they want to maximize work incentives would actually support slavery, but that is the logical extent of the assertion. (Also, many Libertarians, often the first to make such arguments, do have a really bizarre attitude toward slavery; taxation is slavery, regulation is slavery, conscription is slavery—the last not quite as ridiculous—but actual forced labor… well, that really isn’t so bad, especially if the contract is “voluntary”. Fortunately some Libertarians are not so foolish.) If your primary goal is to make people work as much as possible, slavery would be a highly effective way to achieve that goal. And that really is the direction you’re heading when you say we shouldn’t do anything to help starving children lest their mothers have insufficient incentive to work.

More people not working could have a downside, if it resulted in less overall production of goods. But even in the US, one of the most efficient labor markets in the world, the system of job matching is still so ludicrously inefficient that people have to send out dozens if not hundreds of applications to jobs they barely even want, and there are still 1.4 times as many job seekers as there are openings (at the trough of the Great Recession, the ratio was 6.6 to 1). There’s clearly a lot of space here to improve the matching efficiency, and simply giving people more time to search could make a big difference there. Total output might decrease for a little while during the first set of transitions, but afterward people would be doing jobs they want, jobs they care about, jobs they’re good at—and people are vastly more productive under those circumstances. It’s quite likely that total employment would decrease, but productivity would increase so much that total output increased.

Above all, people would be happier, and that should have been our goal all along.

The replication crisis, and the future of science

Aug 27, JDN 2457628 [Sat]

After settling in a little bit in Irvine, I’m now ready to resume blogging, but for now it will be on a reduced schedule. I’ll release a new post every Saturday, at least for the time being.

Today’s post was chosen by Patreon vote, though only one person voted (this whole Patreon voting thing has not been as successful as I’d hoped). It’s about something we scientists really don’t like to talk about, but definitely need to: We are in the middle of a major crisis of scientific replication.

Whenever large studies are conducted attempting to replicate published scientific results, their ability to do so is almost always dismal.

Psychology is the one everyone likes to pick on, because their record is particularly bad. Only 39% of studies were really replicated with the published effect size, though a further 36% were at least qualitatively but not quantitatively similar. Yet economics has its own replication problem, and even medical research is not immune to replication failure.

It’s important not to overstate the crisis; the majority of scientific studies do at least qualitatively replicate. We are doing better than flipping a coin, which is better than one can say of financial forecasters.

There are three kinds of replication, and only one of them should be expected to give near-100% results. That kind is reanalysis: when you take the same data and use the same methods, you absolutely should get the exact same results. I favor making reanalysis a routine requirement of publication; if we can’t get your results by applying your statistical methods to your data, then your paper needs revision before we can entrust it to publication. A number of papers have failed on reanalysis, which is absurd and embarrassing; the worst offender was probably Reinhart-Rogoff, which was used in public policy decisions around the world despite having spreadsheet errors.

The second kind is direct replication—when you do the exact same experiment again and see if you get the same result within error bounds. This kind of replication should work something like 90% of the time, but in fact works more like 60% of the time.

The third kind is conceptual replication—when you do a similar experiment designed to test the same phenomenon from a different perspective. This kind of replication should work something like 60% of the time, but actually only works about 20% of the time.

Economists are well equipped to understand and solve this crisis, because it’s not actually about science. It’s about incentives. I facepalm every time I see another article by an aggrieved statistician about the “misunderstanding” of p-values; no, scientists aren’t misunderstanding anything. They know damn well how p-values are supposed to work. So why do they keep using them wrong? Because their jobs depend on doing so.

The first key point to understand here is “publish or perish”; academics in an increasingly competitive system are required to publish their research in order to get tenure, and frequently required to get tenure in order to keep their jobs at all. (Or they could become adjuncts, who are paid one-fifth as much.)

The second is the fundamentally defective way our research journals are run (as I have discussed in a previous post). As private for-profit corporations whose primary interest is in raising more revenue, our research journals aren’t trying to publish what will genuinely advance scientific knowledge. They are trying to publish what will draw attention to themselves. It’s a similar flaw to what has arisen in our news media; they aren’t trying to convey the truth, they are trying to get ratings to draw advertisers. This is how you get hours of meaningless fluff about a missing airliner and then a single chyron scroll about a war in Congo or a flood in Indonesia. Research journals haven’t fallen quite so far because they have reputations to uphold in order to attract scientists to read them and publish in them; but still, their fundamental goal is and has always been to raise attention in order to raise revenue.

The best way to do that is to publish things that are interesting. But if a scientific finding is interesting, that means it is surprising. It has to be unexpected or unusual in some way. And above all, it has to be positive; you have to have actually found an effect. Except in very rare circumstances, the null result is never considered interesting. This adds up to making journals publish what is improbable.

In particular, it creates a perfect storm for the abuse of p-values. A p-value, roughly speaking, is the probability you would get the observed result if there were no effect at all—for instance, the probability that you’d observe this wage gap between men and women in your sample if in the real world men and women were paid the exact same wages. The standard heuristic is a p-value of 0.05; indeed, it has become so enshrined that it is almost an explicit condition of publication now. Your result must be less than 5% likely to happen if there is no real difference. But if you will only publish results that show a p-value of 0.05, then the papers that get published and read will only be the ones that found such p-values—which renders the p-values meaningless.

It was never particularly meaningful anyway; as we Bayesians have been trying to explain since time immemorial, it matters how likely your hypothesis was in the first place. For something like wage gaps where we’re reasonably sure, but maybe could be wrong, the p-value is not too unreasonable. But if the theory is almost certainly true (“does gravity fall off as the inverse square of distance?”), even a high p-value like 0.35 is still supportive, while if the theory is almost certainly false (“are human beings capable of precognition?”—actual study), even a tiny p-value like 0.001 is still basically irrelevant. We really should be using much more sophisticated inference techniques, but those are harder to do, and don’t provide the nice simple threshold of “Is it below 0.05?”
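To make that concrete, here is a minimal sketch of the Bayesian bookkeeping: posterior log-odds are just prior log-odds plus the log of the likelihood ratio. The prior probabilities and Bayes factors below are made up purely for illustration.

```python
import math

# Minimal sketch: posterior log-odds = prior log-odds + log(likelihood ratio).
# The prior probabilities and Bayes factors here are illustrative, not estimates.

def posterior_prob(prior_prob, bayes_factor):
    """Update a prior probability by a likelihood ratio (Bayes factor)."""
    prior_logodds = math.log(prior_prob / (1 - prior_prob))
    post_logodds = prior_logodds + math.log(bayes_factor)
    return 1 / (1 + math.exp(-post_logodds))

# A plausible hypothesis with moderately strong evidence:
print(posterior_prob(prior_prob=0.50, bayes_factor=20))   # about 0.95
# An extraordinary claim (say, precognition) with the same evidence:
print(posterior_prob(prior_prob=1e-6, bayes_factor=20))   # about 0.00002
```

The same strength of evidence that settles an ordinary question barely budges an extraordinary one, which is exactly the point.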

But okay, p-values can be useful in many cases—if they are used correctly and you see all the results. If you have effect X with p-values 0.03, 0.07, 0.01, 0.06, and 0.09, effect X is probably a real thing. If you have effect Y with p-values 0.04, 0.02, 0.29, 0.35, and 0.74, effect Y is probably not a real thing. But I’ve just set it up so that these would be published exactly the same. They each have two published papers with “statistically significant” results. The other papers never get published and therefore never get seen, so we throw away vital information. This is called the file drawer problem.
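The file drawer problem is also easy to demonstrate by simulation. In this minimal sketch the true effect is exactly zero, yet a steady trickle of “significant” findings gets “published” anyway; the sample sizes and number of studies are arbitrary illustrations.

```python
import random
import statistics

# Minimal sketch of the file drawer problem: the true effect is exactly zero,
# but if only p < 0.05 results are published, "significant" effects pile up.
# The sample size (30 per group) and 1,000 studies are arbitrary illustrations.
random.seed(0)

def one_study(n=30):
    """Simulate one two-group study of a null effect; return (effect, |t|)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return diff, abs(diff / se)

# |t| > 2 is roughly the p < 0.05 cutoff at this sample size.
published = [d for d, t in (one_study() for _ in range(1000)) if t > 2.0]
print(f"'Published' findings of a nonexistent effect: {len(published)} of 1000")
print(f"Mean 'published' effect size: {statistics.mean(abs(d) for d in published):.2f}")
```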

Researchers often have a lot of flexibility in designing their experiments. If their only goal were to find truth, they would use this flexibility to test a variety of scenarios and publish all the results, so they can be compared holistically. But that isn’t their only goal; they also care about keeping their jobs so they can pay rent and feed their families. And under our current system, the only way to ensure that you can do that is by publishing things, which basically means only including the parts that showed up as statistically significant—otherwise, journals aren’t interested. And so we get huge numbers of papers published that tell us basically nothing, because we set up such strong incentives for researchers to give misleading results.

The saddest part is that this could be easily fixed.

First, reduce the incentives to publish by finding other ways to evaluate the skill of academics—like teaching for goodness’ sake. Working papers are another good approach. Journals already get far more submissions than they know what to do with, and most of these papers will never be read by more than a handful of people. We don’t need more published findings, we need better published findings—so stop incentivizing mere publication and start finding ways to incentivize research quality.

Second, eliminate private for-profit research journals. Science should be done by government agencies and nonprofits, not for-profit corporations. (And yes, I would apply this to pharmaceutical companies as well, which should really be pharmaceutical manufacturers who make cheap drugs based off of academic research and carry small profit margins.) Why? Again, it’s all about incentives. Corporations have no reason to want to find truth and every reason to want to tilt it in their favor.

Third, increase the number of tenured faculty positions. Instead of building so many new grand edifices to please your plutocratic donors, use your (skyrocketing) tuition money to hire more professors so that you can teach more students better. You can find even more funds if you cut the salaries of your administrators and football coaches. Come on, universities; you are the one industry in the world where labor demand and labor supply are the same people a few years later. You have no excuse for not having the smoothest market clearing in the world. You should never have gluts or shortages.

Fourth, require pre-registration of research studies (as some branches of medicine already do). If the study is sound, an optimal rational agent shouldn’t care in the slightest whether it had a positive or negative result, and if our ape brains won’t let us think that way, we need to establish institutions to force it to happen. Journals shouldn’t even see the effect size and p-value before they make the decision to publish it; all they should care about is that the experiment makes sense and the proper procedure was conducted.

If we did all that, the replication crisis could be almost completely resolved, as the incentives would be realigned to more closely match the genuine search for truth.

Alas, I don’t see universities or governments or research journals having the political will to actually make such changes, which is very sad indeed.

How personality makes cognitive science hard

August 13, JDN 2457614

Why is cognitive science so difficult? First of all, let’s acknowledge that it is difficult—that even those of us who understand it better than most are still quite baffled by it in quite fundamental ways. The Hard Problem still looms large over us all, and while I know that the Chinese Room Argument is wrong, I cannot precisely pin down why.

The recursive, reflexive character of cognitive science is part of the problem; can a thing understand itself without understanding understanding itself, understanding understanding understanding itself, and on in an infinite regress? But this recursiveness applies just as much to economics and sociology, and honestly to physics and biology as well. We are physical biological systems in an economic and social system, yet most people at least understand these sciences at the most basic level—which is simply not true of cognitive science.

One of the most basic facts of cognitive science (indeed I am fond of calling it The Basic Fact of Cognitive Science) is that we are our brains, that everything human consciousness does is done by and within the brain. Yet the majority of humans believe in souls (including the majority of Americans and even the majority of Brits), and just yesterday I saw a news anchor say “Based on a new study, that feeling may originate in your brain!” He seriously said “may”. “May”? Why, next you’ll tell me that when my arms lift things, maybe they do it with muscles! Other scientists are often annoyed by how many misconceptions the general public has about science, but this is roughly the equivalent of a news anchor saying, “Based on a new study, human bodies may be made of cells!” or “Based on a new study, diamonds may be made of carbon atoms!” The misunderstanding of many sciences is widespread, but the misunderstanding of cognitive science is fundamental.

So what makes cognitive science so much harder? I have come to realize that there is a deep feature of human personality that makes cognitive science inherently difficult in a way other sciences are not.

Decades of research have uncovered a number of consistent patterns in human personality, where people’s traits tend to lie along a continuum from one extreme to another, and usually cluster near either end. Most people are familiar with a few of these, such as introversion/extraversion and optimism/pessimism; but the one that turns out to be important here is empathizing/systematizing.

Empathizers view the world as composed of sentient beings, living agents with thoughts, feelings, and desires. They are good at understanding other people and providing social support. Poets are typically empathizers.

Systematizers view the world as composed of interacting parts, interlocking components that have complex inner workings which can be analyzed and understood. They are good at solving math problems and tinkering with machines. Engineers are typically systematizers.

Most people cluster near one end of the continuum or the other; they are either strong empathizers or strong systematizers. (If you’re curious, there’s an online test you can take to find out which you are.)

But a rare few of us, perhaps as little as 2% and no more than 10%, are both; we are empathizer-systematizers, strong on both traits (showing that it’s not really a continuum between two extremes after all, and only seemed to be because the two traits are negatively correlated). A comparable number are also low on both traits, which must quite frankly make the world a baffling place in general.

Empathizer-systematizers understand the world as it truly is: Composed of sentient beings that are made of interacting parts.

The very title of this blog shows I am among this group: “human” for the empathizer, “economics” for the systematizer!

We empathizer-systematizers can intuitively grasp that there is no contradiction in saying that a person is sad because he lost his job and he is sad because serotonin levels in his cingulate gyrus are low—because it was losing his job that triggered other thoughts and memories that lowered serotonin levels in his cingulate gyrus and thereby made him sad. No one fully understands the details of how low serotonin feels like sadness—hence, the Hard Problem—but most people can’t even seem to grasp the connection at all. How can something as complex and beautiful as a human mind be made of… sparking gelatin?

Well, what would you prefer it to be made of? Silicon chips? We’re working on that. Something else? Magical fairy dust, perhaps? Pray tell, what material could the human mind be constructed from that wouldn’t bother you on a deep level?

No, what really seems to bother people is the very idea that a human mind can be constructed from material, that thoughts and feelings can be divisible into their constituent parts.

This leads people to adopt one of two extreme positions on cognitive science, both of which are quite absurd—frankly I’m not sure they are even coherent.

Pure empathizers often become dualists, saying that the mind cannot be divisible, cannot be made of material, but must be… something else, somehow, outside the material universe—whatever that means.

Pure systematizers instead often become eliminativists, acknowledging the functioning of the brain and then declaring proudly that the mind does not exist—that consciousness, emotion, and experience are all simply illusions that advanced science will one day dispense with—again, whatever that means.

I can at least imagine what a universe would be like if eliminativism were true and there were no such thing as consciousness—just a vast expanse of stars and rocks and dust, lifeless and empty. Of course, I know that I’m not in such a universe, because I am experiencing consciousness right now, and the illusion of consciousness is… consciousness. (You are not experiencing what you are experiencing right now, I say!) But I can at least visualize what such a universe would be like, and indeed it probably was our universe (or at least our solar system) up until about a billion years ago when the first sentient animals began to evolve.

Dualists, on the other hand, are speaking words, structured into grammatical sentences, but I’m not even sure they are forming coherent assertions. Sure, you can sort of imagine our souls being floating wisps of light and energy (à la the “ascended beings”, my least-favorite part of the Stargate series, which I otherwise love), but ultimately those have to be made of something, because nothing can be both fundamental and complex. Moreover, the fact that they interact with ordinary matter strongly suggests that they are made of ordinary matter (and to be fair to Stargate, at one point in the series Rodney, his already-great intelligence vastly increased, declares confidently that ascended beings are indeed nothing more than “protons and electrons, protons and electrons”). Even if they were made of some different kind of matter like dark matter, they would need to obey a common system of physical laws, and ultimately we would come to think of them as matter. Otherwise, how do the two interact? If we are made of soul-stuff which is fundamentally different from other stuff, then how do we even know that other stuff exists? If we are not our bodies, then how do we experience pain when they are damaged and control them with our volition? The most coherent theory of dualism is probably Malebranche’s, which is quite literally “God did it”. Epiphenomenalism, which says that thoughts are just sort of an extra thing that also happens but has no effect (an “epiphenomenon”) on the physical brain, is also quite popular for some reason. People don’t quite seem to understand that the Law of Conservation of Energy directly forbids an “epiphenomenon” in this sense, because anything that happens involves energy, and that energy (unlike, say, money) can’t be created out of nothing; it has to come from somewhere. Analogies are often used: The whistle of a train, the smoke of a flame. But the whistle of a train is a pressure wave that vibrates the train; the smoke from a flame is made of particulates that could be used to smother the flame. At best, there are some phenomena that don’t affect each other very much—but any causal interaction at all makes dualism break down.

How can highly intelligent, highly educated philosophers and scientists make such basic errors? I think it has to be personality. They have deep, built-in (quite likely genetic) intuitions about the structure of the universe, and they just can’t shake them.

And I confess, it’s very hard for me to figure out what to say in order to break those intuitions, because my deep intuitions are so different. Just as it seems obvious to them that the world cannot be this way, it seems obvious to me that it is. It’s a bit like living in a world where 45% of people can see red but not blue and insist the American Flag is red and white, another 45% of people can see blue but not red and insist the flag is blue and white, and I’m here in the 10% who can see all colors and I’m trying to explain that the flag is red, white, and blue.

The best I can come up with is to use analogies, and computers make for quite good analogies, not least because their functioning is modeled on our thinking.

Is this word processor program (LibreOffice Writer, as it turns out) really here, or is it merely an illusion? Clearly it’s really here, right? I’m using it. It’s doing things right now. Parts of it are sort of illusions—it looks like a blank page, but it’s actually an LCD screen lit up all the way; it looks like ink, but it’s actually where the LCD turns off. But there is clearly something here, an actual entity worth talking about which has properties that are usefully described without trying to reduce them to the constituent interactions of subatomic particles.

On the other hand, can it be reduced to the interactions of subatomic particles? Absolutely. A brief sketch is something like this: It’s a software program, running on an operating system, and these in turn are represented in the physical hardware as long binary sequences, stored by ever-so-slightly higher or lower voltages in particular hardware components, which in turn are due to electrons being moved from one valence to another. Those electrons move in precise accordance with the laws of quantum mechanics, I assure you; yet this in no way changes the fact that I’m typing a blog post on a word processor.

Indeed, it’s not even particularly useful to know that the electrons are obeying the laws of quantum mechanics, and quite literally no possible computer that could be constructed in our universe could ever be large enough to fully simulate all these quantum interactions within the amount of time since the dawn of the universe. If we are to understand it at all, it must be at a much higher level—and the “software program” level really seems to be the best one for most circumstances. The vast majority of problems I’m likely to encounter are either at the software level or the macro hardware level; it’s conceivable that a race condition could emerge in the processor cache or the voltage could suddenly spike or even that a cosmic ray could randomly ionize a single vital electron, but these scenarios are far less likely to affect my life than, say, I accidentally deleted the wrong file or the battery ran out of charge because I forgot to plug it in.

Likewise, when dealing with a relationship problem, or mediating a conflict between two friends, it’s rarely relevant that some particular neuron is firing in someone’s nucleus accumbens, or that one of my friends is very low on dopamine in his mesolimbic system today. It could be, particularly if some sort of mental or neurological illness is involved, but in most cases the real issues are better understood as higher level phenomena—people being angry, or tired, or sad. These emotions are ultimately constructed of action potentials and neurotransmitters, but that doesn’t make them any less real, nor does it change the fact that it is at the emotional level that most human matters are best understood.

Perhaps part of the problem is that human emotions take on moral significance, which other higher-level entities generally do not? But they sort of do, really, in a more indirect way. It matters a great deal morally whether or not climate change is a real phenomenon caused by carbon emissions (it is). Ultimately this moral significance can be tied to human experiences, so everything rests upon human experiences being real; but they are real, in much the same way that rocks and trees and carbon emissions are real. No amount of neuroscience will ever change that, just as no amount of biological science would disprove the existence of trees.

Indeed, some of the world’s greatest moral problems could be better solved if people were better empathizer-systematizers, and thus more willing to do cost-benefit analysis.

Why are movies so expensive? Did they used to be? Do they need to be?

August 10, JDN 2457611

One of the better arguments in favor of copyright involves film production. Films are extraordinarily expensive to produce; without copyright, how would studios recover their costs? A budget of $100 million is common these days.

It is commonly thought that film budgets used to be much smaller, so I looked at some data from The Numbers on over 5,000 films going back to 1915, and inflation-adjusted the budgets using the CPI. (I learned some interesting LibreOffice Calc functions in the process of merging the data; also, LibreOffice crashed a few times trying to make the graphs, so that’s fun. I finally realized it had copied over all 10,000 hyperlinks from the HTML data set.)
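
If you’d rather not fight LibreOffice, the same adjustment is a few lines of Python. This is just a sketch: the film names, budgets, and CPI values below are rough stand-ins of my own, and a real analysis would load the full data set from The Numbers plus an official CPI table.

```python
import pandas as pd

# Rough stand-in data: a few films with nominal budgets, and an annual CPI
# series (1982-84 = 100). A real analysis would load the full 5,000+-film
# data set and the official CPI table instead.
films = pd.DataFrame({
    "title":  ["Film A (1939)", "Film B (1963)", "Film C (2009)"],
    "year":   [1939, 1963, 2009],
    "budget": [4_000_000, 31_000_000, 237_000_000],  # nominal dollars (illustrative)
})
cpi = pd.Series({1939: 13.9, 1963: 30.6, 2009: 214.5, 2015: 237.0})

# Inflation adjustment: scale each nominal budget by the ratio of the
# target-year CPI to the CPI in the film's release year.
target_year = 2015
films["real_budget"] = films["budget"] * cpi[target_year] / films["year"].map(cpi)

print(films[["title", "year", "budget", "real_budget"]])
```

The only subtlety is making sure the CPI series actually covers every release year in the data.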

If you just look at the nominal figures, there does seem to be some sort of upward trend:

[Figure: movie budgets by year, in nominal dollars]

But once you do the proper inflation adjustment, this trend basically disappears:

[Figure: movie budgets by year, adjusted for inflation]

In real terms, the grosses of some early movies are quite large. Adjusted to 2015 dollars, Gone with the Wind grossed $6.659 billion—still the highest ever. In 1937, Snow White and the Seven Dwarfs grossed over $3.043 billion in 2015 dollars. In 1950, Cinderella made it to $2.592 billion in today’s money. (Horrifyingly, The Birth of a Nation grossed $258 million in today’s money.)

Nor is there any evidence that movie production has gotten more expensive. The linear trend is actually negative, though with a very small slope that is not statistically significant: on average, the real budget of a movie falls by about $1,752 per year.

[Figure: linear trend in inflation-adjusted movie budgets]
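
For anyone who wants to check a trend claim like that, here’s a minimal sketch of the fit in Python. The data here are randomly generated stand-ins with no built-in trend; on the actual inflation-adjusted budgets, the slope comes out slightly negative and statistically insignificant, as described above.

```python
import numpy as np
from scipy import stats

# Stand-in data: 5,000 films with random years and real budgets drawn from a
# lognormal distribution, with no underlying trend built in.
rng = np.random.default_rng(0)
years = rng.integers(1915, 2016, size=5000)
real_budgets = rng.lognormal(mean=16.5, sigma=1.2, size=5000)  # ~$15M median

# Ordinary least-squares fit of real budget against release year.
result = stats.linregress(years, real_budgets)
print(f"slope:   ${result.slope:,.0f} per year")
print(f"p-value: {result.pvalue:.3f}")
```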

While the two most expensive movies came out recently (Pirates of the Caribbean: At World’s End and Avatar), the third most expensive was released in 1963 (Cleopatra). The hugely expensive movies do seem to cluster relatively recently—but then, so do the really cheap films, some of which have budgets under $10,000. It may just be that more movies are produced in general; overall, the cost of producing a film doesn’t seem to have changed in real terms. The best return on investment belongs to My Date with Drew, released in 2005, which had a budget of $1,100 but grossed $181,000, giving it an ROI of 16,358%. The highest real profit was of course Gone with the Wind, which made an astonishing $6.592 billion, though Titanic, Avatar, Aliens, and Terminator 2 combined actually beat it with a total profit of $6.651 billion. That may explain why James Cameron can now basically make any movie he wants, and already has four sequels lined up for Avatar.

The biggest real loss was 1970’s Waterloo, which made back only $18 million of its $153 million budget, losing $135 million and having an ROI of -87.7%. This was not quite as bad an ROI as 2002’s The Adventures of Pluto Nash, which had an ROI of -92.91%.
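
The ROI figures are just profit relative to budget. A quick sketch, using the rounded figures quoted above (so the results land within rounding error of the numbers in the text):

```python
def roi(budget, gross):
    """Return on investment as a percentage: (gross - budget) / budget."""
    return 100 * (gross - budget) / budget

# Rounded figures from the text; the exact data give the ROIs quoted above.
print(f"My Date with Drew: {roi(1_100, 181_000):,.0f}%")           # about 16,355% with these rounded inputs
print(f"Waterloo (real $): {roi(153_000_000, 18_000_000):.1f}%")   # about -88% with these rounded inputs
```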

But making movies has always been expensive, at least for big blockbusters. (The $8,900 budget of Primer is something I could probably put on credit cards if I had to.) It’s nothing new to spend $100 million in today’s money.

When considering the ethics and economics of copyright, it’s useful to think about what Michele Boldrin calls “pizzaright”: you can’t copy my pizza, or you are guilty of pizzaright infringement. Many of the arguments for copyright are so general—this is a valuable service, it carries some risk of failure, it wouldn’t be as profitable without the monopoly, so fewer companies might enter the business—that they would also apply to pizza. Yet somehow nobody thinks that pizzaright should be a thing. If there is a justification for copyrights, it must come from the special circumstances of works of art (broadly conceived, including writing, film, music, etc.), and the only one that really seems strong enough is the high upfront cost of certain types of art—and indeed, the only ones that really seem to fit that are films and video games.

Painting, writing, and music just aren’t that expensive. People are willing to create these things for very little money, and can do so more or less on their own, especially nowadays. If the prices are reasonable, people will still want to buy from the creators directly—and sure enough, widespread music piracy hasn’t killed music, it has only killed the corporate record industry. But movies and video games really can easily cost $100 million to make, so there’s a serious concern of what might happen if they couldn’t use copyright to recover their costs.

The question for me is, did we really need copyright to fund these budgets?

Let’s take a look at how Star Wars made its money. $6.249 billion came from box office revenue, while $873 million came from VHS and DVD sales; those would probably be substantially reduced if not for copyright. But even before The Force Awakens was released, the Star Wars franchise had already made some $12 billion in toy sales alone. “Merchandizing, merchandizing, where the real money from the movie is made!”

Did they need intellectual property to do that? Well, yes—but all they needed was trademark. Defenders of “intellectual property” like to use that term because it elides fundamental distinctions between the three types: trademark, copyright, and patent.

Trademark is unproblematic. You can’t lie about who you are or where your products came from when you’re selling something. So if you are claiming to sell official Star Wars merchandise, you’d better be selling official Star Wars merchandise, and trademark protects that.

Copyright is problematic, but may be necessary in some cases. Copyright protects the content of the movies from being copied or modified without Lucasfilm’s permission. So now rather than simply protecting against the claim that you represent Lucasfilm, we are protecting against people buying the movie, copying it, and reselling the copies—even though that is a real economic service they are providing, and is in no way fraudulent as long as they are clear about the fact that they made the copies.

Patent is, frankly, ridiculous. The concept of “owning” ideas is absurd. You came up with a good way to do something? Great! Go do it then. But don’t expect other people to pay you simply for the privilege of hearing your good idea. Of course I want to financially support researchers, but there are much, much better ways of doing that, like government grants and universities. First of all, patents only raise revenue for research that sells—so vaccines and basic research can’t be funded that way, even though they are the most important research by far. Furthermore, there’s nothing to guarantee that the person who actually invented the idea is the one who makes the profit from it—and in our current system where corporations can own patents (and do own almost 90% of patents), it typically isn’t. Even if it were, the whole concept of owning ideas is nonsensical, and it has driven us to the insane extremes of corporations owning patents on human DNA. The best argument I’ve heard for patents is that they are a second-best solution that incentivizes transparency and keeps trade secrets from becoming commonplace; but in that case they should definitely be short, and we should never extend them. Companies should not be able to make basically cosmetic modifications and renew the patent, and expiring patents should be a cause for celebration.

Hollywood actually formed in Los Angeles precisely to escape patents, but of course the studios love copyright and trademark. So, do they like “intellectual property”? The question has no coherent answer, which is exactly the problem with lumping the three together.

Could blockbuster films be produced profitably using only trademark, in the absence of copyright?

Clearly Star Wars would have still turned a profit. But not every movie can do such merchandizing, and when movies start getting written purely for merchandizing it can be painful to watch.

The real question is whether a film like Gone with the Wind or Avatar could still be made, and make a reasonable profit (if a much smaller one).

Well, there’s always porn. Porn raises over $400 million per year in revenue, despite having essentially unenforceable copyright. The industry, too, is outraged over piracy, yet somehow I don’t think porn will ever cease to exist. A top porn star can make over $200,000 per year. Then there are, of course, independent films that never turn a profit at all, yet people keep making them.

So clearly it is possible to make some films without copyright protection, and something like Gone with the Wind needn’t cost $100 million to make. The only reason it cost as much as it did (about $66 million in today’s money) was that movie stars could command huge winner-takes-all salaries, which would no longer be true if copyright went away. And don’t tell me people wouldn’t be willing to be movie stars for $200,000 a year instead of $1.8 million (what Clark Gable made for Gone with the Wind, adjusted for inflation).

Yet some Hollywood blockbuster budgets are genuinely necessary. The real question is whether we could have Avatar without copyright. Not having films like Avatar is something I would count as a substantial loss to our society; we would lose important pieces of our art and culture.

So, where did all that money go? I don’t have a breakdown for Avatar in particular, but I do have a full budget breakdown for The Village. Of its $71.7 million, $33.5 million was “above the line”, which basically means the winner-takes-all superstar salaries for the director, producer, and cast. That amount could be dramatically reduced with no real cost to society—let’s drop it to, say, $3 million. Shooting costs were $28.8 million, post-production was $8.4 million, and miscellaneous expenses added about $1 million; all of those would be much harder to reduce (they mainly go to technical staff who make reasonable salaries, not to superstars), so let’s assume the full amount is necessary. That’s about $38 million in real cost to produce. Avatar had a lot more (and better) post-production, so let’s go ahead and multiply the post-production budget by an order of magnitude, to $84 million. Our new total budget is about $116.8 million.

That sounds like a lot, and it is; but it could be made back without copyright. Avatar sold over 14.5 million DVDs and over 8 million Blu-Rays. Conservatively assume that the price elasticity of demand is zero (which is ridiculous—if the monopoly pricing is optimal, it should be -1): if those DVDs were sold for $2 each and the Blu-Rays were sold for $5 each, with 50% of those prices being profit, this would yield a total profit of $14.5 million from DVDs and $20 million from Blu-Rays. That’s already $34.5 million. With realistic assumptions about the elasticity of demand, cutting the prices this much (DVDs down from an average of $16, Blu-Rays down from an average of $20) would multiply the number of DVDs sold by at least 5 and the number of Blu-Rays sold by at least 3, which would get us all the way up to about $132 million—enough to cover our new budget. (Of course this is much less than they actually made, which is why they set the prices they did—but that doesn’t mean it’s optimal from society’s perspective.)
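
Here is the whole back-of-the-envelope model in one place, as a sketch in Python; the unit sales, prices, and 50% margin are the same assumptions as above, not actual studio figures.

```python
# The Village's budget breakdown, with the "above the line" salaries cut to
# $3M and post-production scaled up 10x as a stand-in for Avatar (all figures
# in millions of dollars, per the assumptions in the text).
above_the_line  = 3.0
shooting        = 28.8
post_production = 8.4 * 10
miscellaneous   = 1.0
budget = above_the_line + shooting + post_production + miscellaneous
print(f"Revised budget: ${budget:.1f}M")

def disc_profit(units_m, price, margin=0.5):
    """Profit in millions from selling units_m million discs at a given price."""
    return units_m * price * margin

# Zero-elasticity case: unit sales stay at Avatar's actual levels.
low = disc_profit(14.5, 2) + disc_profit(8, 5)
# More realistic case: cheap DVDs sell 5x as many units, cheap Blu-Rays 3x as many.
high = disc_profit(14.5 * 5, 2) + disc_profit(8 * 3, 5)
print(f"Disc profit: ${low:.1f}M (zero elasticity) to ${high:.1f}M (realistic)")
```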

But okay, suppose I’m wrong about the elasticity, and dropping the price from $16 to $2 for a DVD somehow wouldn’t actually increase the number purchased. What other sources of revenue would they have? Well, box office tickets would still be a thing. They’d have to come down in price, but given the high-fidelity versions that cinemas require—which makes them quite hard to pirate—the studio would still get decent money from each cinema. Let’s say the ticket price drops by 90%—all cinemas are now $1 cinemas!—and the sales again somehow remain exactly the same (rather than dramatically increasing as they actually would). What would Avatar’s worldwide box office gross be then? One-tenth of what it actually made: $278 million. They could give the DVDs away for free and still turn a profit.
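
Or, as one line of arithmetic (the actual gross here is just the figure implied by the $278 million above):

```python
# Fallback scenario: ticket prices cut by 90%, ticket sales held constant.
actual_gross_m   = 2_780   # Avatar's worldwide box office, in millions (approx.)
revised_budget_m = 116.8   # from the sketch above
gross_at_tenth = 0.10 * actual_gross_m
print(f"Box office at one-tenth the price: ${gross_at_tenth:,.0f}M "
      f"(revised budget: ${revised_budget_m}M)")
```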

And that’s Avatar, one of the most expensive movies ever made. By cutting out the winner-takes-all salaries and huge corporate profits, the budget can be substantially reduced, and the real costs that remain can be quite well covered by box office and DVD sales at reasonable prices. If you imagine that piracy somehow undercuts everything until you have to give things away for free, you might think this is impossible; but in reality pirated versions are of unreliable quality, and people do want to support artists and are willing to pay something for their entertainment. They’re just tired of paying monopoly prices to benefit the shareholders of Viacom.

Would this end the era of the multi-millionaire movie star? Yes, I suppose it might. But it would also put about $10 billion per year back in the pockets of American consumers—and there’s little reason to think it would take away future Avatars, much less future Gone with the Winds.