Unpaid work and the double burden

Apr 16, JDN 2457860

When we say the word “work”, what leaps to mind is usually paid work in the formal sector—the work people do for employers. When you “go to work” each morning, you are going to do your paid work in the formal sector.

But a large quantity of the world’s labor does not take this form. First, there is the informal sector—work done for cash “under the table”, where there is no formal employment structure and often no reporting or payment of taxes. Many economists estimate that the majority of the world’s workers are employed in the informal sector. The ILO found that informal employment comprises as much as 70% of employment in some countries. However, it depends on how you count: A lot of self-employment could be considered either formal or informal. If you count any work done outside an employer-employee relationship, informal sector work is highly prevalent around the world. If you count only work that goes unreported to the government in order to avoid taxes, it is less common. If it must be your primary source of income, whether or not you pay taxes, it is uncommon. And if you count it only when it is both your primary income source and unreported to the government, informal sector work is relatively rare and largely restricted to underdeveloped countries.

But that’s not really my focus for today, because you at least get paid in the informal sector. Nor am I talking about forced labor—that is, slavery, essentially—which is a serious human rights violation that sadly still goes on in many countries.

No, the unpaid work I want to talk about today is work that people willingly do for free.

I’m also excluding internships and student work, where (at least in theory) the idea is that instead of getting paid you are doing the work in order to acquire skills and experience that will be valuable to you later on. I’m talking about work that you do for its own sake.

Such work can be divided into three major categories.

First there is vocation—the artist who would paint even if she never sold a single canvas; the author who is compelled to write day and night and would give the books away for free. Vocation is work that you do for fun, or because it is fulfilling. It doesn’t even feel like “work” in quite the same sense. For me, writing and research are vocation, at least in part; even if I had $5 million in stocks I would still do at least some writing and research as part of what gives my life meaning.

Second there is volunteering—the soup kitchen, the animal shelter, the protest march. Volunteering is work done out of altruism, to help other people or work toward some greater public goal. You don’t do it for yourself, you do it for others.

Third, and really my main focus for this post, is domestic labor—vacuuming the rug, mopping the floor, washing the dishes, fixing the broken faucet, changing the baby’s diapers. This is generally not work that anyone finds particularly meaningful or fulfilling, nor is it done out of any great sense of altruism (perhaps toward your own family, but that’s about the extent of it). But you also don’t get paid to do it. You do it because it must be done.

There is also considerable overlap, of course: Many people find meaning in their activism or charitable work, and part of what motivates artists and authors is a desire to change the world.

Vocation is ultimately what I would like to see the world move towards. One of the great promises of a basic income is that it might finally free us from the grind of conventional employment that has gripped us ever since we first managed to escape the limitations of subsistence farming—which in turn gripped us ever since we escaped the desperation of hunter-gatherer survival. The fourth great stage in human prosperity might finally be a world where we can work not for food or for pay, but for meaning. A world of musicians and painters, of authors and playwrights, of sculptors and woodcutters, yes; but also a world of cinematographers and video remixers, of 3D modelers and holographers, of VR designers and video game modders. If you ever fret that no work would be done without the constant pressure of the wage incentive, spend some time on Stack Overflow or the Steam Workshop. People will spend hundreds of person-hours at extremely high-skill tasks—I’m talking AI programming and 3D modeling here—not for money but for fun.

Volunteering is frankly kind of overrated; as the Effective Altruism community will eagerly explain to you any chance they get, it’s usually more efficient for you to give money rather than time, because money is fungible while giving your time only makes sense if your skills are actually the ones that the project needs. If this criticism of so much well-intentioned work sounds petty, note that literally thousands of lives would be saved each year if, instead of volunteering, people donated an equivalent amount of money so that charities could hire qualified workers. Unskilled volunteers and donations of useless goods after a disaster typically cause what aid professionals call the “second disaster”. Still, people do find meaning in volunteering, and there is value in that; and also there are times when you really are the best one to do it, particularly when it comes to local politics.

But what should we do with domestic labor?

Some of it can and will be automated away—the Parable of the Dishwasher with literal dishwashers. But it will be a while before it all can, and right now it’s still a bit expensive. Maybe instead of vacuuming I should buy a Roomba—but $500 feels like a lot of money right now.

Much domestic labor we could hire out to someone else, but we simply choose not to. I could always hire someone to fix my computer, unclog my bathtub, or even mop my floors; I just don’t because it seems too expensive.

From the perspective of an economist, it’s actually a bit odd that it seems too expensive. I might have a comparative advantage in fixing my computer—it’s mine, after all, so I know its ins and outs, and while I’m no hotshot Google admin I am a reasonably competent programmer and debugger in my own right. And while for many people auto repair is a household chore, I do actually hire auto mechanics; I don’t even change my own oil, though partly that’s because my little Smart has an extremely compact design that makes it hard to work on. But I surely have no such comparative advantage in cleaning my floors or unclogging my pipes; so why doesn’t it seem worth it to hire someone else to do that?

Maybe I’m being irrational; hiring a cleaning service isn’t that expensive after all. I could hire a cleaning service to do my whole apartment for something like $80, and if I scheduled a regular maid it would probably be something like that per month. That’s what I would charge for two hours of tutoring, so maybe it would behoove me to hire a maid and spend that extra time tutoring or studying.
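
Putting rough numbers on that trade-off (a minimal sketch: the $40-per-hour tutoring rate follows from the $80-for-two-hours figure above, but the three hours of cleaning time is purely my own assumption):

```python
# Back-of-the-envelope opportunity-cost comparison for hiring a cleaning
# service. Every number here is an illustrative assumption, not data.

tutoring_wage = 40.0   # $/hour (from $80 for two hours of tutoring)
cleaning_hours = 3.0   # hypothetical: hours it would take me to clean myself
maid_price = 80.0      # quoted price for cleaning the whole apartment

# Cost of doing it myself: the tutoring income I forgo in those hours.
diy_cost = tutoring_wage * cleaning_hours   # $120

# Net gain from hiring out: forgone income recovered, minus the maid's fee.
net_gain = diy_cost - maid_price            # $40

print(f"Cleaning myself costs ${diy_cost:.0f} in forgone tutoring income.")
print(f"Hiring the service nets ${net_gain:.0f} before transaction costs.")
```

On these (assumed) numbers, hiring wins; which makes the fact that it still feels too expensive all the more puzzling, and points toward the transaction costs and social norms discussed below.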

Or maybe it’s this grad student budget of mine; money is pretty tight at the moment, as I go through this strange societal ritual where young adults endure a period of near-poverty, overwhelming workload, and constant anxiety not despite being intelligent and hard-working but because of it. Perhaps if and when I get that $70,000 job as a professional economist my marginal utility of wealth will decrease and I will feel more inclined to hire maid services.

There are also transaction costs I save on by doing the work myself. A maid would have to commute here, first of all, reducing the efficiency gains from their comparative advantage in the work; but more than that, there’s a lot of effort I’d have to put in just to prepare for the maid and deal with any problems that might arise. There are scheduling issues, and the work probably wouldn’t get done as quickly unless I were to spend enough to hire a maid on a regular basis. There’s also a psychological cost in comfort and privacy to dealing with a stranger in one’s home, and a small but nontrivial risk that the maid might damage or steal something important.

But honestly it might be as simple as social norms (remember: to a first approximation, all human behavior is social norms). Regardless of whether or not it is affordable, it feels strange to hire a maid. That’s the sort of thing only rich, decadent people do. A responsible middle-class adult is supposed to mop their own floors and do their own laundry. Indeed, while hiring a plumber or an auto mechanic feels like paying for a service, hiring a maid crosses a line and feels like hiring a servant. (I honestly always feel a little awkward around the gardeners hired by our housing development for that reason. I’m only paying them indirectly, but there’s still this vague sense that they are somehow subservient—and surely, we are of quite distinct socioeconomic classes. Maybe it would help if I brushed up on my Spanish and got to know them better?)

And then there’s the gender factor. Being in a same-sex couple household changes the domestic labor dynamic quite a bit relative to the conventional opposite-sex couple household. Even in ostensibly liberal, feminist, egalitarian households, and even when both partners are employed full-time, it usually ends up being the woman who does most of the housework. This is true in the US; it is true in the UK; it is true in Europe; indeed it’s true in most if not all countries around the world, and, unsurprisingly, it is worst in India, where women spend a whopping five hours per day more on housework than men. (I was not surprised by the fact that Japan and China also do poorly, given their overall gender norms; but I’m a bit shocked at how badly Ireland and Italy do on this front.) And yes, while #ScandinaviaIsBetter, still in Sweden and Norway women spend half an hour to an hour more on housework on an average day than men.

Which, of course, supports the social norm theory. Any time you see both an overwhelming global trend against women and considerable cross-country variation within that trend, your first hypothesis should be sexism. Without the cross-country variation, maybe it could be biology—the sex differences in height and upper-body strength, for example, are pretty constant across countries. But women doing half an hour more in Norway but five hours more in India looks an awful lot like sexism.

This is called the double burden: To meet the social norms of being responsible middle-class adults, men are merely expected to work full-time at a high-paying job, but women are expected to do both the full effort of maintaining a household and the full effort of working at a full-time job. This is surely an improvement over the time when women were excluded from the formal workforce, not least because of the financial freedom that full-time work affords many women; but it would be very nice if we could also find a way to share some of that domestic burden as well. There has been some trend toward a less unequal share of housework as more women enter the workforce, but it still has a long way to go, even in highly-developed countries.

So, we can start by trying to shift the social norm that housework is gendered: Women clean the floors and change the diapers, while men fix the car and paint the walls. Childcare in particular is something that should be done equally by all parents, and while it’s plausible that one person may be better or worse at mopping or painting, it strains credulity to think that it’s always the woman who is better at mopping and the man who is better at painting.

Yet perhaps this is a good reason to try to shift away from another social norm as well, the one where only rich people hire maids and maids are servants. Unfortunately, it’s likely that most maids will continue to be women for the foreseeable future—cleaning services are gendered in much the same way that nursing and childcare are gendered. But at least by getting paid to clean, one can fulfill the “job” norm and the “housekeeping” norm in one fell swoop; and then women who are in other professions can carry only one burden instead of two. And if we can begin to think of cleaning services as more like plumbing and auto repair—buying a service, not hiring a servant—this is likely to improve the condition and social status of a great many maids. I doubt we’d ever get to the point where mopping floors is as prestigious as performing neurosurgery, but maybe we can at least get to the point where being a maid is as respectable as being a plumber. Cleaning needs to be done; it shouldn’t be shameful to be someone who is very good at doing it and gets paid to do so. (That is perhaps the most pernicious aspect of socioeconomic class, this idea that some jobs are “shameful” because they are done by workers with less education or involve more physical labor.)

This also makes good sense in terms of economic efficiency: Your comparative advantage is probably not in cleaning services, or if it is then perhaps you should do that as a career. So by selling your labor at whatever you are good at and then buying the services of someone who is especially good at cleaning, you should, at least in theory, be able to get the same cleaning done and maintain the same standard of living for yourself while also accomplishing more at whatever it is you do in your profession and providing income for whomever you hire to do the cleaning.

So, should I go hire a cleaning service after all? I don’t know, that still sounds pretty expensive.

Is intellectual property justified?

Feb 12, JDN 2457797

I had hoped to make this week’s post more comprehensive, but as I’ve spent the last week suffering from viral bronchitis I think I will keep this one short and revisit the topic in a few weeks.

Intellectual property underlies an increasingly large proportion of the world’s economic activity, more so now than ever before. We don’t just patent machines anymore; we patent drugs, and software programs, and even plants. Compared to that, copyrights on books, music, and movies seem downright pedestrian.

Though surely not the only cause, this is almost certainly contributing to the winner-takes-all effect; if you own the patent to something important, you can appropriate a huge amount of wealth to yourself with very little effort.

Moreover, this is not something that happened automatically as a natural result of market forces or autonomous human behavior. This is a policy, one that requires large investments in surveillance and enforcement to maintain. Intellectual property is probably the single largest market intervention that our government makes, and it is in a very strange direction: With antitrust law, the government seeks to undermine monopolies; but with intellectual property, the government seeks to protect monopolies.

So it’s important to ask: What is the justification for intellectual property? Do we actually have a good reason for doing this?

The basic argument goes something like this:

Many intellectual endeavors, such as research, invention, and the creation of art, require a large up-front investment of resources to complete, but once completed it costs almost nothing to disseminate the results. There is a very large fixed cost that makes it difficult to create these goods at all, but once they exist, the marginal cost of producing more of them is minimal.

If we didn’t have any intellectual property, once someone created an invention or a work of art, someone else could simply copy it and sell it at a much lower price. If enough competition emerged to drive price down to marginal cost, the original creator of the good would not only not profit, but would actually take an enormous loss, as they paid that large fixed cost but none of their competitors did.

Thus, knowing that they will take a loss if they do, individuals will not create inventions or works of art in the first place. Without intellectual property, all research, invention, and art would grind to a halt.
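
To see the structure of that argument in symbols (my notation, not the post’s): if creating the work costs a fixed amount $F$ and each copy costs $c$ to produce, a creator selling $q$ copies at price $p$ earns

$$\pi = (p - c)\,q - F,$$

so if competition from copiers drives $p$ down to the marginal cost $c$, the creator’s profit converges to $-F$: a loss equal to the entire up-front investment, while the copiers, who never paid $F$, break even.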


That last sentence sounds terrible, right? What would we do without research, invention, or art? But then if you stop and think about it for a minute, it becomes clear that this can’t possibly be the outcome of eliminating intellectual property. Most societies throughout the history of human civilization have not had a system of intellectual property, and yet they have all had art, and most of them have had research and invention as well.

If intellectual property is to be defended, it can’t be because we would have none of these things without it—it must be that we would have less, and so much less that it offsets the obvious harms of concentrating so much wealth and power in a handful of individuals.

I had hoped to get into the empirical results of different intellectual property regimes, but due to my illness I’m going to save that for another day.

Instead I’m just going to try to articulate what the burden of proof here really needs to be.

First of all, showing that we spend a lot of money on patents contributes absolutely nothing useful to defending them. Yes, we all know patents are expensive. The question is whether they are worth it. To show that this is not a strawman, here’s an article by IP Watchdog that treats “a new study showing that academic patent licensing contributed more than $1 trillion to the U.S. economy over eighteen years” as some kind of knockdown argument in favor of patents. If you actually showed that this economic activity would not exist without patents, then that would be an argument for patents. But all this study actually does is show that we spend that much on patents, which says nothing about whether this is a good use of resources. It’s like when people try to defend the F-35 boondoggle by saying “it supports thousands of jobs!”; well, yes, but what about the millions of jobs we could be supporting instead if we used that money for something more efficient? (And indeed, the evidence is quite clear that spending on the F-35 destroys more jobs than it creates.) So any serious estimate of the economic benefits of intellectual property must also come with an estimate of the economic cost of intellectual property, or it is just propaganda.

It’s not enough to show some non-negligible (much less merely “statistically significant”) increase in innovation as a result of intellectual property. The effect size is critical; the increase in innovation needs to be large enough that it justifies having world-spanning monopolies that concentrate the world’s wealth in the hands of a few individuals. Because we already know that intellectual property concentrates wealth: patents and copyrights are monopolies, and monopolies concentrate wealth. It’s not enough to show that there is a benefit; that benefit must be greater than the cost, and there must be no alternative methods that allow us to achieve a greater net benefit.

It’s also important to be clear what we mean by “innovation”; this can be a very difficult thing to measure. But in principle what we really want to know is whether we are supporting important innovation—whether we will get more Mona Lisas and more polio vaccines, not simply whether we will get more Twilight and more Viagra. And one of the key problems with intellectual property as a method of funding innovation is that there is only a vague link between the profits that can be extracted and the benefits of the innovation. (Though to be fair, this is actually a more general problem; it is literally a mathematical theorem that competitive markets only maximize utility if you value rich people more, in inverse proportion to their marginal utility of wealth.)
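
That parenthetical theorem can be stated compactly (this is the standard Negishi-weight formulation; the notation is mine, not the post’s): a competitive equilibrium allocation maximizes the weighted welfare sum

$$W = \sum_i \lambda_i\, u_i(x_i), \qquad \lambda_i = \frac{1}{u_i'(m_i)},$$

where $u_i'(m_i)$ is person $i$’s marginal utility of wealth. Since marginal utility declines with wealth, the weights $\lambda_i$ are largest for the richest individuals, which is precisely the sense in which markets “value rich people more”.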

Innovation is certainly important. Indeed, it is no exaggeration to say that innovation is the foundation of economic development and civilization itself. Defenders of intellectual property often want you to stop the conversation there: “Innovation is important!” Don’t let them. It’s not enough to say that innovation is important; intellectual property must also be the best way of achieving that innovation.

Is it? Well, in a few weeks I’ll get back to what the data actually says on this. There is some evidence supporting intellectual property—but the case is a lot weaker than you have probably been led to believe.

In defense of slacktivism

Jan 22, JDN 2457776

It’s one of those awkward portmanteaus that people often make to try to express a concept in fewer syllables, while also implicitly saying that the phenomenon is specific enough to deserve its own word: “Slacktivism”, made of “slacker” and “activism”, not unlike “mansplain” is made of “man” and “explain” or “edutainment” was made of “education” and “entertainment”—or indeed “gerrymander” was made of “Elbridge Gerry” and “salamander”. The term seems to be particularly popular on Huffington Post, which has a whole category on slacktivism. There is a particular subcategory of slacktivism that is ironically against other slacktivism, which has been dubbed “snarktivism”.

It’s almost always used as a pejorative; very few people self-identify as “slacktivists” (though once I get through this post, you may see why I’m considering it myself). “Slacktivism” is activism that “isn’t real” somehow, activism that “doesn’t count”.

Of course, that raises the question: What “counts” as legitimate activism? Is it only protest marches and sit-ins? Then very few people have ever been or will ever be activists. Surely donations should count, at least? Those have a direct, measurable impact. What about calling your Congressman, or letter-writing campaigns? These have been staples of activism for decades.

If the term “slacktivism” means anything at all, it seems to point to activities surrounding raising awareness, where the goal is not to enact a particular policy or support a particular NGO but to simply get as much public attention to a topic as possible. It seems to be particularly targeted at blogging and social media—and that’s important, for reasons I’ll get to shortly. If you gather a group of people in your community and give a speech about LGBT rights, you’re an activist. If you send out the exact same speech on Facebook, you’re a slacktivist.

One of the arguments against “slacktivism” is that it can be used to funnel resources at the wrong things; this blog post makes a good point that the Kony 2012 campaign doesn’t appear to have actually accomplished anything except profits for the filmmakers behind it. (Then again: A blog post against slacktivism? Are you sure you’re not doing the very thing you think you’re against right now?) But is this problem unique to slacktivism, or is it a more general phenomenon that people simply aren’t all that informed about how to have the most impact? There are an awful lot of inefficient charities out there, and in fact the most important waste of charitable funds involves people giving to their local churches. Fortunately, this is changing, as people become more secularized; churches used to account for over half of US donations, and now they account for less than a third. (Naturally, Christian organizations are pulling out their hair over this.) The 60 million Americans who voted for Trump made a horrible mistake and will cause enormous global damage; but they weren’t slacktivists, were they?

Studies do suggest that traditionally “slacktivist” activities like Facebook likes aren’t a very strong predictor of future, larger actions, and more private modes of support (like donations and calling your Congressman) tend to be stronger predictors. But so what? In order for slacktivism to be a bad thing, these activities would have to be negative predictors. They would have to substitute for more effective activism, and there’s no evidence that this happens.

In fact, there’s even some evidence that slacktivism has a positive effect (normally I wouldn’t cite Fox News, but I think in this case we should expect a bias in the opposite direction, and you can read the full Georgetown study if you want):

A study from Georgetown University in November entitled “Dynamics of Cause Engagement” looked [at] how Americans learned about and interacted with causes and other social issues, and discovered some surprising findings on Slacktivism.

While the traditional forms of activism like donating money or volunteering far outpaces slacktivism, those who engage in social issues online are twice as likely as their traditional counterparts to volunteer and participate in events. In other words, slacktivists often graduate to full-blown activism.

At worst, most slacktivists are doing nothing for positive social change, and that’s what the vast majority of people have been doing for the entirety of human history. We can bemoan this fact, but that won’t change it. Most people are simply too uninformed to know what’s going on in the world, and too broke and too busy to do anything about it.

Indeed, slacktivism may be the one thing they can do—which is why I think it’s worth defending.

From an economist’s perspective, there’s something quite odd about how people’s objections to slacktivism are almost always formulated. The rational, sensible objection would be to their small benefits—this isn’t accomplishing enough, you should do something more effective. But in fact, almost all the objections to slacktivism I have ever read focus on their small costs—you’re not a “real activist” because you don’t make sacrifices like I do.

Yet it is a basic principle of economic rationality that, all other things equal, lower cost is better. Indeed, this is one of the few principles of economic rationality that I really do think is unassailable; perfect information is unrealistic and total selfishness makes no sense at all. But cost minimization is really very hard to argue with—why pay more, when you can pay less and get the same benefit?

From an economist’s perspective, the most important thing about an activity is its cost-effectiveness, measured either by net benefit—benefit minus cost—or rate of return—benefit divided by cost. But in both cases, a lower cost is always better; and in fact slacktivism has an astonishing rate of return, precisely because its cost is so small.

Suppose that a campaign of 10 million Facebook likes actually does have a 1% chance of changing a policy in a way that would save 10,000 lives, with a life expectancy of 50 years each. Surely this is conservative, right? I’m only giving it a 1% chance of success, on a policy with a relatively small impact (10,000 lives could be a single clause in an EPA regulatory standard), with a large number of slacktivist participants (10 million is more people than the entire population of Switzerland). Yet because clicking “like” and “share” only costs you maybe 10 seconds, we’re talking about an expected cost of (10 million)(10/86,400/365) ≈ 3.2 QALY for an expected benefit of (10,000)(0.01)(50) = 5,000 QALY. That is a rate of return of nearly 160,000%: a more-than-thousandfold return.

Let’s compare this to the rate of return on donating to a top charity like UNICEF, Oxfam, the Against Malaria Foundation, or the Schistosomiasis Control Initiative, for which donating about $300 would save the life of 1 child, adding about 50 QALY. That $300 most likely cost you about 0.01 QALY (assuming an annual income of $30,000), so we’re looking at a return of 500,000%. Now, keep in mind that this is a huge rate of return, far beyond what you can ordinarily achieve, that donating $300 to UNICEF is probably one of the best things you could possibly be doing with that money—and yet slacktivism is within an order of magnitude of it in efficiency. Maybe slacktivism doesn’t sound so bad after all?
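
For transparency, here is the whole calculation in one place; a minimal sketch in Python, using only the assumed inputs stated above (the 10-second click, the 1% success chance, the $300-per-life charity figure, and the $30,000 income):

```python
# Back-of-the-envelope QALY comparison from the two paragraphs above.
# Every input is an assumption from the text, not measured data.

SECONDS_PER_YEAR = 86_400 * 365

def rate_of_return(benefit_qaly: float, cost_qaly: float) -> float:
    """Benefit divided by cost, expressed as a percentage."""
    return 100 * benefit_qaly / cost_qaly

# Slacktivism: 10 million likes at ~10 seconds each; 1% chance of
# saving 10,000 lives at 50 QALY apiece.
likes = 10_000_000
cost_slack = likes * 10 / SECONDS_PER_YEAR   # ~3.2 QALY of collective time
benefit_slack = 10_000 * 0.01 * 50           # 5,000 QALY expected

# Donation: $300 saves one child (~50 QALY); at a $30,000 annual
# income, $300 costs about 0.01 year of work.
cost_donate = 300 / 30_000                   # 0.01 QALY
benefit_donate = 50.0                        # QALY

print(f"Slacktivism: {rate_of_return(benefit_slack, cost_slack):,.0f}%")   # ~158,000%
print(f"Donation:    {rate_of_return(benefit_donate, cost_donate):,.0f}%") # 500,000%
```

Both returns are enormous; the donation’s is larger, but the slacktivist’s cost is so tiny that the comparison hardly discourages clicking.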

Of course, the net benefit of your participation is higher in the case of donation; you yourself contribute 50 QALY instead of only contributing 0.0005 QALY. Ultimately net benefit is what matters; rate of return is a way of estimating what the net benefit would be when comparing different ways of spending the same amount of time or money. But from the figures I just calculated, it begins to seem like one of the most cost-effective things you could do with your time is clicking “like” and “share” on Facebook posts that will raise awareness of policies of global importance. Now, you have to include all that extra time spent poring through other Facebook posts, and consider that you may not be qualified to assess the most important issues, and there’s a lot of uncertainty involved in what sort of impact you yourself will have… but it’s almost certainly not the worst thing you could be doing with your time, and frankly running these numbers has made me feel a lot better about all the hours I have actually spent doing this sort of thing. It’s a small benefit, yes—but it’s an even smaller cost.

Indeed, the fact that so many people treat low cost as bad, when it is almost by definition good, and the fact that they also target their ire so heavily at blogging and social media, says to me that what they are really trying to accomplish here has nothing to do with actually helping people in the most efficient way possible.

Rather, it’s two things.

The obvious one is generational—it’s yet another chorus in the unending refrain that is “kids these days”. Facebook is new, therefore it is suspicious. Adults have been complaining about their descendants since time immemorial; some of the oldest written works we have are of ancient Babylonians complaining that their kids are lazy and selfish. Either human beings have been getting lazier and more selfish for thousands of years, or, you know, kids are always a bit more lazy and selfish than their parents or at least seem so from afar.

The one that’s more interesting for an economist is signaling. By complaining that other people aren’t paying enough cost for something, what you’re really doing is complaining that they aren’t signaling like you are. The costly signal has been made too cheap, so now it’s no good as a signal anymore.

“Anyone can click a button!” you say. Yes, and? Isn’t it wonderful that now anyone with a smartphone (and there are more people with access to smartphones than toilets, because #WeLiveInTheFuture) can contribute, at least in some small way, to improving the world? But if anyone can do it, then you can’t signal your status by doing it. If your goal was to make yourself look better, I can see why this would bother you; all these other people doing things that look just as good as what you do! How will you ever distinguish yourself from the riffraff now?

This is also likely what’s going on as people fret that “a college degree’s not worth anything anymore” because so many people are getting them now; well, as a signal, maybe not. But if it’s just a signal, why are we spending so much money on it? Surely we can find a more efficient way to rank people by their intellect. I thought it was supposed to be an education—in which case the meteoric rise in global college enrollments should be cause for celebration. (In reality of course a college degree can serve both roles, and it remains an open question among labor economists as to which effect is stronger and by how much. But the signaling role is almost pure waste from the perspective of social welfare; we should be trying to maximize the proportion of real value added.)

For this reason, I think I’m actually prepared to call myself a slacktivist. I aim for cost-effective awareness-raising; I want to spread the best ideas to the most people for the lowest cost. Why, would you prefer I waste more effort, to signal my own righteousness?

The real crisis in education is access, not debt

Jan 8, JDN 2457762

A few weeks ago I tried to provide assurances that the “student debt crisis” is really not much of a crisis; there is a lot of debt, but it is being spent on a very good investment both for individuals and for society. Student debt is not that large in the scheme of things, and it more than pays for itself in the long run.

But this does not mean we are not in the midst of an education crisis. It’s simply not about debt.

The crisis I’m worried about involves access.

As you may recall, there are a substantial number of people with very small amounts of student debt, and they tend to be the most likely to default. The highest default rates are among the group of people with student debt greater than $0 but less than $5000.

So how is it that there are people with only $5,000 in student debt anyway? You can’t buy much college for $5,000 these days, as tuition prices have risen at an enormous rate: From 1983 to 2013, in inflation-adjusted dollars, average annual tuition rose from $7,286 at public institutions and $17,333 at private institutions to $15,640 at public institutions and $35,987 at private institutions—more than doubling in each case.

Enrollments are much higher, but this by itself should not raise tuition per student. So where is all the extra money going? Some of it is explained by public funding that has failed to keep up with higher enrollments; but a lot of it just seems to be going to higher pay for administrators and athletic coaches. This is definitely a problem; students should not be forced to subsidize the millions of dollars most universities lose on funding athletics—the NCAA, which if anything is surely biased in favor of athletics, found that the median net loss due to athletics spending at an FBS university was $17 million per year. Only a handful of schools actually turn a profit on athletics, all of them Division I. So it might be fair to speak of an “irresponsible college administration crisis”, administrators who heap wealth upon themselves and their beloved athletic programs while students struggle to pay their bills, or even a “college tuition crisis” where tuition keeps rising far beyond what is sustainable. But that’s not the same thing as a “student debt crisis”—just as the mortgage crisis we had in 2008 is distinct from the slow-burning housing price crisis we’ve been in since the 1980s. Making restrictions on mortgages tighter might prevent banks from being as predatory as they have been lately, but it won’t suddenly allow people to better afford houses.

And likewise, I’m much more worried about students who don’t go to college because they are afraid of this so-called “debt crisis”; they’re going to end up much worse off. As Eduardo Porter put it in the New York Times:

And yet Mr. Beltrán says he probably wouldn’t have gone to college full time if he hadn’t received a Pell grant and financial aid from New York State to defray the costs. He has also heard too many stories about people struggling under an unbearable burden of student loans to even consider going into debt. “Honestly, I don’t think I would have gone,” he said. “I couldn’t have done four years.”

And that would have been the wrong decision.

His reasoning is not unusual. The rising cost of college looms like an insurmountable obstacle for many low-income Americans hoping to get a higher education. The notion of a college education becoming a financial albatross around the neck of the nation’s youth is a growing meme across the culture. Some education experts now advise high school graduates that a college education may not be such a good investment after all. “Sticker price matters a lot,” said Lawrence Katz, a professor of Harvard University. “It is a deterrent.”


[…]


And the most perplexing part of this accounting is that regardless of cost, getting a degree is the best financial decision a young American can make.

According to the O.E.C.D.’s report, a college degree is worth $365,000 for the average American man after subtracting all its direct and indirect costs over a lifetime. For women — who still tend to earn less than men — it’s worth $185,000.

College graduates have higher employment rates and make more money. According to the O.E.C.D., a typical graduate from a four-year college earns 84 percent more than a high school graduate. A graduate from a community college makes 16 percent more.

A college education is more profitable in the United States than in pretty much every other advanced nation. Only Irish women get more for the investment: $185,960 net.

So, these students who have $5,000 or less in student debt; what does that mean? That amount couldn’t even pay for a single year at most universities, so how did that happen?

Well, they almost certainly went to community college; only a community college could provide you with a nontrivial amount of education for less than $5,000. But community colleges vary tremendously in their quality, and some have truly terrible completion rates. While most students who start at a four-year school do eventually get a bachelor’s degree (57% at public schools, 78% at private schools), only 17% of students who start at community college do. And once students drop out, they very rarely actually return to complete a degree.

Indeed, the only way to really have that little student debt is to drop out quickly. Most students who drop out do so for reasons that really aren’t all that surprising: mostly, they can’t afford to pay their bills. “Unable to balance school and work” is the number 1 reported reason why students drop out of college.

In the American system, student loans are only designed to pay the direct expenses of education; they often don’t cover the real costs of housing, food, transportation and healthcare, and even when they do, they basically never cover the opportunity cost of education—the money you could be making if you were working full-time instead of going to college. For many poor students, simply breaking even on their own expenses isn’t good enough; they have families that need to be taken care of, and that means working full-time. Many of them even need to provide for their parents or grandparents who may be poor or disabled. Yet in the US system it is tacitly assumed that your parents will help you—so when you need to help them, what are you supposed to do? You give up on college and you get a job.

The most successful reforms for solving this problem have been comprehensive; they involved working to support students directly and intensively in all aspects of their lives, not just the direct financial costs of school itself.

Another option would be to do something more like what they do in Sweden, where there is also a lot of student debt, but for a very different reason. The direct cost of college is paid automatically by the government. Yet essentially all Swedish students have student debt, and total student debt in Sweden is much larger than in other European countries and comparable to the United States; why? Because Sweden understands that you should also provide for the opportunity cost. In Sweden, students live fully self-sufficiently on student loans, just as if they were working full-time. They are not expected to be supported by their parents.

The problem with American student loans, then, is not that they are too large—but that they are too small. They don’t provide for what students actually need, and thus don’t allow them to make the large investment in their education that would have paid off in the long run. Panic over student loans being too large could make the problem worse, if it causes us to reduce the amount of loanable funds available for students.

The lack of support for poor students isn’t the only problem. There are also huge barriers to education in the US based upon race. While Asian students do as well (if not better) than White students, Black and Latino students have substantially lower levels of educational attainment. Affirmative action programs can reduce these disparities, but they are unpopular and widely regarded as unfair, and not entirely without reason.

A better option—indeed one that should be a no-brainer in my opinion—is not to create counter-biases in favor of Black and Latino students (which is what affirmative action is), but to eliminate biases in favor of White students that we know exist. Chief among these are so-called “legacy admissions”, in which elite universities attract wealthy alumni donors by granting their children admission and funding regardless of whether they even remotely deserve it or would contribute anything academically to the university.

These “legacy admissions” are frankly un-American. They go against everything our nation supposedly stands for; in fact, they reek of feudalism. And unsurprisingly, they bias heavily in favor of White students—indeed, over 90 percent of legacy admits are White and Protestant. Athletic admissions are also contrary to the stated mission of the university, though their racial biases are more complicated (Black students are highly overrepresented in football and basketball admits, for example) and it is at least not inherently un-American to select students based upon their athletic talent as opposed to their academic talent.

But this by itself would not be enough; the gaps are clearly too large to close that way. Getting into college is only the start, and graduation rates are much worse for Black students than White students. Moreover, the education gap begins well before college—high school dropout rates are much higher among Black and Latino students as well.

In fact, even closing the education gap by itself would not be enough; racial biases permeate our whole society. Black individuals with college degrees are substantially more likely to be unemployed and have substantially lower wages on average than White individuals with college degrees—indeed, a bachelor’s degree gets a Black man a lower mean wage than a White man would get with only an associate’s degree.

Fortunately, the barriers against women in college education have largely been conquered. In fact, there are now more women in US undergraduate institutions than men. This is not to say that there are not barriers against women in society at large; women still make about 75% as much income as men on average, and even once you adjust for factors such as education and career choice they still only make about 95% as much. Moreover, these factors we’re controlling for are endogenous. Women don’t choose their careers in a vacuum, they choose them based upon a variety of social and cultural pressures. The fact that 93% of auto mechanics are men and 79% of clerical workers are women might reflect innate differences in preferences—but it could just as well reflect a variety of cultural biases or even outright discrimination. Quite likely, it’s some combination of these. So it is not obvious to me that the “adjusted” wage gap is actually a more accurate reflection of the treatment of women in our society than the “unadjusted” wage gap; the true level of bias is most likely somewhere in between the two figures.

Gender wage gaps vary substantially across age groups and between even quite similar countries: Middle-aged women in Germany make 28% less than middle-aged men, while in France that gap is only 19%. Young women in Latvia make 14% less than young men, but in Romania they make 1.1% more. This variation clearly shows that this is not purely the effect of some innate genetic difference in skills or preferences; it must be at least in large part the product of cultural pressures or policy choices.

Even within academia, women are less likely to be hired full-time instead of part-time, awarded tenure, or promoted to administrative positions. Moreover, this must be active discrimination in some form, because gaps in hiring and wage offers between men and women persist in randomized controlled experiments. You can literally present the exact same resume and get a different result depending on whether you attached a male name or a female name.

But at least when it comes to the particular question of getting bachelor’s degrees, we have achieved something approaching equality across gender, and that is no minor accomplishment. Most countries in the world still have more men than women graduating from college, and in some countries the difference is terrifyingly large. I found from World Bank data that in the Democratic Republic of Congo, only 3% of men go to college—and less than 1% of women do. Even in Germany, 29% of men graduate from college but only 19% of women do. Getting both of these figures over 30% and actually having women higher than men is a substantial achievement for which the United States should be proud.

Yet it still remains the case that Americans who are poor, Black, Native American, or Latino are substantially less likely to ever make it through college. Panic about student debt might well be making this problem worse, as someone whose family makes $15,000 per year is bound to hear $50,000 in debt as an overwhelming burden, even as you try to explain that it will eventually pay for itself seven times over.

We need to instead be talking about the barriers that are keeping people from attending college, and pressuring them to drop out once they do. Debt is not the problem. Even tuition is not really the problem. Access is the problem. College is an astonishingly good investment—but most people never get the chance to make it. That is what we need to change.

Bigotry is more powerful than the market

Nov 20, JDN 2457683

If there’s one message we can take from the election of Donald Trump, it is that bigotry remains a powerful force in our society. A lot of autoflagellating liberals have been trying to explain how this election result really reflects our failure to help people displaced by technology and globalization (despite the fact that personal income and local unemployment had negligible correlation with voting for Trump), or Hillary Clinton’s “bad campaign” that nonetheless managed the same proportion of Democratic turnout that re-elected her husband in 1996.

No, overwhelmingly, the strongest predictor of voting for Trump was being White, and living in an area where most people are White. (Well, actually, that’s if you exclude authoritarianism as an explanatory variable—but really I think that’s part of what we’re trying to explain.) Trump voters were actually concentrated in areas less affected by immigration and globalization. Indeed, there is evidence that these people aren’t racist because they have anxiety about the economy—they are anxious about the economy because they are racist. How does that work? Obama. They can’t believe that the economy is doing well when a Black man is in charge. So all the statistics and even personal experiences mean nothing to them. They know in their hearts that unemployment is rising, even as the BLS data clearly shows it’s falling.

The wide prevalence and enormous power of bigotry should be obvious. But economists rarely talk about it, and I think I know why: Their models say it shouldn’t exist. The free market is supposed to automatically eliminate all forms of bigotry, because they are inefficient.

The argument for why this is supposed to happen actually makes a great deal of sense: If a company has the choice of hiring a White man or a Black woman to do the same job, but they know that the market wage for Black women is lower than the market wage for White men (which it most certainly is), and they will do the same quality and quantity of work, why wouldn’t they hire the Black woman? And indeed, if human beings were rational profit-maximizers, this is probably how they would think.

More recently some neoclassical models have been developed to try to “explain” this behavior, but always without daring to give up the precious assumption of perfect rationality. So instead we get the two leading neoclassical theories of discrimination, which are statistical discrimination and taste-based discrimination.

Statistical discrimination is the idea that under asymmetric information (and we surely have that), features such as race and gender can act as signals of quality because they are correlated with actual quality for various reasons (usually left unspecified), so it is not irrational after all to choose based upon them, since they’re the best you have.

Taste-based discrimination is the idea that people are rationally maximizing preferences that simply aren’t oriented toward maximizing profit or well-being. Instead, they have this extra term in their utility function that says they should also treat White men better than women or Black people. It’s just this extra thing they have.

A small number of studies have been done trying to discern which of these is at work.

The correct answer, of course, is neither.

Statistical discrimination, at least, could be part of what’s going on. Knowing that Black people are less likely to be highly educated than Asians (as they definitely are) might actually be useful information in some circumstances… then again, you list your degree on your resume, don’t you? Knowing that women are more likely to drop out of the workforce after having a child could rationally (if coldly) affect your assessment of future productivity. But shouldn’t the fact that women CEOs outperform men CEOs be incentivizing shareholders to elect women CEOs? Yet that doesn’t seem to happen. Also, in general, people seem to be pretty bad at statistics.

The bigger problem with statistical discrimination as a theory is that it’s really only part of a theory. It explains why not all of the discrimination has to be irrational, but some of it still does. You need to explain why there are these huge disparities between groups in the first place, and statistical discrimination is unable to do that. In order for the statistics to differ this much, you need a past history of discrimination that wasn’t purely statistical.

Taste-based discrimination, on the other hand, is not a theory at all. It’s special pleading. Rather than admit that people are failing to rationally maximize their utility, we just redefine their utility so that whatever they happen to be doing now “maximizes” it.

This is really what makes the Axiom of Revealed Preference so insidious; if you really take it seriously, it says that whatever you do, must by definition be what you preferred. You can’t possibly be irrational, you can’t possibly be making mistakes of judgment, because by definition whatever you did must be what you wanted. Maybe you enjoy bashing your head into a wall, who am I to judge?

I mean, on some level taste-based discrimination is what’s happening; people think that the world is a better place if they put women and Black people in their place. So in that sense, they are trying to “maximize” some “utility function”. (By the way, most human beings behave in ways that are provably inconsistent with maximizing any well-defined utility function—the Allais Paradox is a classic example.) But the whole framework of calling it “taste-based” is a way of running away from the real explanation. If it’s just “taste”, well, it’s an unexplainable brute fact of the universe, and we just need to accept it. If people are happier being racist, what can you do, eh?

So I think it’s high time to start calling it what it is. This is not a question of taste. This is a question of tribal instinct. This is the product of millions of years of evolution optimizing the human brain to act in the perceived interest of whatever it defines as its “tribe”. It could be yourself, your family, your village, your town, your religion, your nation, your race, your gender, or even the whole of humanity or beyond into all sentient beings. But whatever it is, the fundamental tribe is the one thing you care most about. It is what you would sacrifice anything else for.

And what we learned on November 9 this year is that an awful lot of Americans define their tribe in very narrow terms. Nationalistic and xenophobic at best, racist and misogynistic at worst.

But I suppose this really isn’t so surprising, if you look at the history of our nation and the world. Segregation in US schools was not ruled unconstitutional until 1954, and there are women who voted in this election who were born before American women got the right to vote in 1920. The nationalistic backlash against sending jobs to China (which was one of the chief ways that we reduced global poverty to its lowest level ever, by the way) really shouldn’t seem so strange when we remember that over 100,000 Japanese-Americans were literally forcibly relocated into camps as recently as 1942. The fact that so many White Americans seem all right with the biases against Black people in our justice system may not seem so strange when we recall that systemic lynching of Black people in the US didn’t end until the 1960s.

The wonder, in fact, is that we have made as much progress as we have. Tribal instinct is not a strange aberration of human behavior; it is our evolutionary default setting.

Indeed, perhaps it is unreasonable of me to ask humanity to change its ways so fast! We had millions of years to learn how to live the wrong way, and I’m giving you only a few centuries to learn the right way?

The problem, of course, is that the pace of technological change leaves us with no choice. It might be better if we could wait a thousand years for people to gradually adjust to globalization and become cosmopolitan; but climate change won’t wait a hundred, and nuclear weapons won’t wait at all. We are thrust into a world that is changing very fast indeed, and I understand that it is hard to keep up; but there is no way to turn back that tide of change.

Yet “turn back the tide” does seem to be part of the core message of the Trump voter, once you get past the racial slurs and sexist slogans. People are afraid of what the world is becoming. They feel that it is leaving them behind. Coal miners fret that we are leaving them behind by cutting coal consumption. Factory workers fear that we are leaving them behind by moving the factory to China or inventing robots to do the work in half the time for half the price.

And truth be told, they are not wrong about this. We are leaving them behind. Because we have to. Because coal is polluting our air and destroying our climate, we must stop using it. Moving the factories to China has raised them out of the most dire poverty, and given us a fighting chance toward ending world hunger. Inventing the robots is only the next logical step in the process that has carried humanity forward from the squalor and suffering of primitive life to the security and prosperity of modern society—and it is a step we must take, for the progress of civilization is not yet complete.

They wouldn’t have to let themselves be left behind, if they were willing to accept our help and learn to adapt. That carbon tax that closes your coal mine could also pay for your basic income and your job-matching program. The increased efficiency from the automated factories could provide an abundance of wealth that we could redistribute and share with you.

But this would require them to rethink their view of the world. They would have to accept that climate change is a real threat, and not a hoax created by… uh… never was clear on that point actually… the Chinese maybe? But 45% of Trump supporters don’t believe in climate change (and that’s actually not as bad as I’d have thought). They would have to accept that what they call “socialism” (which really is more precisely described as social democracy, or tax-and-transfer redistribution of wealth) is actually something they themselves need, and will need even more in the future. But despite rising inequality, redistribution of wealth remains fairly unpopular in the US, especially among Republicans.

Above all, it would require them to redefine their tribe, and start listening to—and valuing the lives of—people that they currently do not.

Perhaps we need to redefine our tribe as well; many liberals have argued that we mistakenly—and dangerously—did not include people like Trump voters in our tribe. But to be honest, that rings a little hollow to me: We aren’t the ones threatening to deport people or ban them from entering our borders. We aren’t the ones who want to build a wall (though some have in fact joked about building a wall to separate the West Coast from the rest of the country, I don’t think many people really want to do that). Perhaps we live in a bubble of liberal media? But I make a point of reading outlets like The American Conservative and The National Review for other perspectives (I usually disagree, but I do at least read them); how many Trump voters do you think have ever read the New York Times, let alone Huffington Post? Cosmopolitans almost by definition have the more inclusive tribe, the more open perspective on the world (in fact, do I even need the “almost”?).

Nor do I think we are actually ignoring their interests. We want to help them. We offer to help them. In fact, I want to give these people free money—that’s what a basic income would do, it would take money from people like me and give it to people like them—and they won’t let us, because that’s “socialism”! Rather, we are simply refusing to accept their offered solutions, because those so-called “solutions” are beyond unworkable; they are absurd, immoral and insane. We can’t bring back the coal mining jobs, unless we want Florida underwater in 50 years. We can’t reinstate the trade tariffs, unless we want millions of people in China to starve. We can’t tear down all the robots and force factories to use manual labor, unless we want to trigger a national—and then global—economic collapse. We can’t do it their way. So we’re trying to offer them another way, a better way, and they’re refusing to take it. So who here is ignoring the concerns of whom?

Of course, the fact that it’s really their fault doesn’t solve the problem. We do need to take it upon ourselves to do whatever we can, because, regardless of whose fault it is, the world will still suffer if we fail. And that presents us with our most difficult task of all, a task that I fully expect to spend a career trying to do and yet still probably failing: We must understand the human tribal instinct well enough that we can finally begin to change it. We must know enough about how human beings form their mental tribes that we can actually begin to shift those parameters. We must, in other words, cure bigotry—and we must do it now, for we are running out of time.

Congratulations, America.

Nov 13, JDN 2457706

Congratulations, you elected Donald Trump.

Instead of the candidate with decades of experience as Secretary of State, US Senator, and an internationally renowned philanthropist, you chose the first President in history with no experience whatsoever in government or the military.

Instead of the candidate with the most comprehensive, evidence-based plan for action against climate change (that is, the only candidate who supports nuclear energy), you elected the one who is planning to appoint a climate-change denier head of the EPA.

Perhaps to punish the candidate who carried out a longstanding custom of using private email servers because the public servers were so defective, you accepted the candidate who is facing civil suits for mass fraud as well as multiple accusations of sexual assault.

Perhaps based on the Russian propaganda—not kidding, read the URL—saying that one candidate could trigger a Third World War, you chose the candidate who has no idea how international diplomacy works and wants to convert NATO into a mercantilist empire (and by the way has no apparent qualms about deploying nuclear weapons).

Because one candidate was “too close to Wall Street” in some vague ill-defined sense (oh my god, she gave speeches! And accepted donations!), you elected the other one who has already vowed to turn back the financial regulations that are currently protecting us from a repeat of the Great Recession.

Because you didn’t trust the candidate with one of the highest honesty ratings ever recorded, you elected the one who is surrounded by hundreds of scandals and never even released his tax returns.
Even if you didn’t outright agree with it, you were willing to look past his promise to deport 11 million people and his long history of bigotry toward a wide variety of ethnic groups.
Even his Vice President, who seems like a great statesman simply by comparison, is one of the most fanatical right-wing Vice Presidents we’ve had in decades. He opposes not just abortion, but birth control. He supports—and has signed as governor—“religious freedom” bills designed to legalize discrimination against LGBT people.

Congratulations, America. You literally elected the candidate who was supported by Vladimir Putin, Kim Jong-un, the American Nazi Party, and the Ku Klux Klan. Now, reversed stupidity is not intelligence; being endorsed by someone horrible doesn’t necessarily mean you are horrible. But when this many horrible people endorse you, and start giving the same reasons, and those reasons are based on things you particularly have in common with those horrible people like bigotry and authoritarianism… yeah, I think it does say something about you.

Now, to be fair, much of the blame here goes to the Electoral College.

By current counts, Hillary Clinton won the popular vote by at least 500,000 votes. It is projected that she may even win by as much as 2 million. This will be the fourth time in US history that the Electoral College winner was definitely not the popular vote winner.

But even that is only possible because Hillary Clinton did not win the overwhelming landslide she deserved. The Electoral College should have been irrelevant, because she should have won at least 60% of every demographic in every state. Our whole nation should have declared together in one voice that we will not tolerate bigotry and authoritarianism. The fact that that didn’t happen is reason enough to be ashamed; even if Clinton does narrowly win the popular vote, that still says something truly terrible about our country.

Indeed, this is what it says:

We slightly preferred democracy over fascism.

We slightly preferred liberty over tyranny.

We slightly preferred justice over oppression.

We slightly preferred feminism over misogyny.

We slightly preferred equality over racism.

We slightly preferred reason over instinct.

We slightly preferred honesty over fraud.

We slightly preferred sustainability over ecological devastation.

We slightly preferred competence over incompetence.

We slightly preferred diplomacy over impulsiveness.

We slightly preferred humility over narcissism.

We were faced with the easiest choice ever given to us in any election, and only a narrow plurality got the answer right; and then, under the way our system works, even that wasn’t enough.

I sincerely hope that Donald Trump is not as bad as I believe he is. The feeling of vindication at being able to tell so many right-wing family members “I told you so” pales in comparison to the fear and despair for the millions of people who will die from his belligerent war policy, his incompetent economic policy, and his insane (anti-)environmental policy. Even the working-class White people who voted for him will surely suffer greatly under his regime.

Yes, I sincerely hope that he is not as bad as we think he is, though I remember saying that George W. Bush was not as bad as we thought when he was elected—and he was. He was. His Iraq War killed hundreds of thousands of people based on lies. His economic policy triggered the worst economic collapse since the Great Depression. So now I have to ask: What if he is as bad as we think?

Fortunately, I do not believe that Trump will literally trigger a global nuclear war.

Then again, I didn’t believe he would win, either.

Belief in belief, and why it’s important

Oct 30, JDN 2457692

In my previous post on ridiculous beliefs, I passed briefly over this sentence:

“People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.”

Today I’d like to talk about the fact that “to profess” is a very important phrase in that sentence. Part of understanding ridiculous beliefs, I think, is understanding that many, if not most, of them are not actually proper beliefs. They are what Daniel Dennett calls “belief in belief”, and what has elsewhere been referred to as “anomalous belief”. They are not beliefs in the ordinary sense that we would line up with the other beliefs in our worldview and use them to anticipate experiences and motivate actions. They are something else, lone islands of belief that are not woven into our worldview. But all the same they are invested with importance, often moral or even ultimate importance; this one belief may not make any sense with everything else, but you must believe it, because it is a vital part of your identity and your tribe. To abandon it would not simply be mistaken; it would be heresy, it would be treason.

How do I know this? Mainly because nobody has tried to stone me to death lately.

The Bible is quite explicit about at least a dozen reasons I am supposed to be executed forthwith; you likely share many of them: Heresy, apostasy, blasphemy, nonbelief, sodomy, fornication, covetousness, taking God’s name in vain, eating shellfish (though I don’t anymore!), wearing mixed fiber, shaving, working on the Sabbath, making images of things, and my personal favorite, not stoning other people for committing such crimes (as we call it in game theory, a second-order punishment).

Yet I have met many people who profess to be “Bible-believing Christians”, and even may oppose some of these activities (chiefly sodomy, blasphemy, and nonbelief) on the grounds that they are against what the Bible says—and yet not one has tried to arrange my execution, nor have I ever seriously feared that they might.

Is this because we live in a secular society? Well, yes—but not simply that. It isn’t just that these people are afraid of being punished by our secular government should they murder me for my sins; they believe that it is morally wrong to murder me, and would rarely even consider the option. Someone could point them to the passage in Leviticus (20:16, as it turns out) that explicitly says I should be executed, and it would not change their behavior toward me.

On first glance this is quite baffling. If I thought you were about to drink a glass of water that contained cyanide, I would stop you, by force if necessary. So if they truly believe that I am going to be sent to Hell—infinitely worse than cyanide—then shouldn’t they be willing to use any means necessary to stop that from happening? And wouldn’t this be all the more true if they believe that they themselves will go to Hell should they fail to punish me?

If these “Bible-believing Christians” truly believed in Hell the way that I believe in cyanide—that is, as proper beliefs which anticipate experience and motivate action—then they would in fact try to force my conversion or execute me, and in doing so would believe that they are doing right. This used to be quite common in many Christian societies (most infamously in the Salem Witch Trials), and still is disturbingly common in many Muslim societies—ISIS doesn’t just throw gay men off rooftops and stone them as a weird idiosyncrasy; it is written in the Hadith that they’re supposed to. Nor is this sort of thing confined to terrorist groups; the “legitimate” government of Saudi Arabia routinely beheads atheists or imprisons homosexuals (though it has a very capricious enforcement system, likely so that the monarchy can trump up charges to justify executing whomever they choose). Beheading people because the book said so is what your behavior would look like if you honestly believed, as a proper belief, that the Qur’an or the Bible or whatever holy book actually contained the ultimate truth of the universe. The great irony of calling religion people’s “deeply-held belief” is that it is in almost all circumstances the exact opposite—it is their most weakly held belief, the one that they could most easily sacrifice without changing their behavior.

Yet perhaps we can’t even say that to people, because they will get equally defensive and insist that they really do hold this very important anomalous belief, and how dare you accuse them otherwise. Because one of the beliefs they really do hold, as a proper belief, and a rather deeply-held one, is that you must always profess to believe your religion and defend your belief in it, and if anyone catches you not believing it that’s a horrible, horrible thing. So even though it’s obvious to everyone—probably even to you—that your behavior looks nothing like what it would if you actually believed in this book, you must say that you do, scream that you do if necessary, for no one must ever, ever find out that it is not a proper belief.

Another common trick is to try to convince people that their beliefs do affect their behavior, even when they plainly don’t. We typically use the words “religious” and “moral” almost interchangeably, when they are at best orthogonal and arguably even opposed. Part of why so many people seem to hold so rigidly to their belief-in-belief is that they think that morality cannot be justified without recourse to religion; so even though on some level they know religion doesn’t make sense, they are afraid to admit it, because they think that means admitting that morality doesn’t make sense. If you are even tempted by this inference, I present to you the entire history of ethical philosophy. Divine Command theory has been a minority view among philosophers for centuries.

Indeed, it is precisely because your moral beliefs are not based on your religion that you feel a need to resort to that defense of your religion. If you simply believed religion as a proper belief, you would base your moral beliefs on your religion, sure enough; but you’d also defend your religion in a fundamentally different way, not as something you’re supposed to believe, not as a belief that makes you a good person, but as something that is just actually true. (And indeed, many fanatics actually do defend their beliefs in those terms.) No one ever uses the argument that if we stop believing in chairs we’ll all become murderers, because chairs are actually there. We don’t believe in belief in chairs; we believe in chairs.

And really, if such a belief were completely isolated, it would not be a problem; it would just be this weird thing you say you believe that everyone really knows you don’t and it doesn’t affect how you behave, but okay, whatever. The problem is that it’s never quite isolated from your proper beliefs; it does affect some things—and in particular it can offer a kind of “support” for other real, proper beliefs that you do have, support which is now immune to rational criticism.

For example, as I already mentioned: Most of these “Bible-believing Christians” do, in fact, morally oppose homosexuality, and say that their reason for doing so is based on the Bible. This cannot literally be true, because if they actually believed the Bible they wouldn’t merely want gay marriage taken off the books; they’d want a mass pogrom of 4-10% of the population (depending on how you count), on a par with the Holocaust. Fortunately their proper belief that genocide is wrong is overriding. But they have no such overriding belief supporting the moral permissibility of homosexuality or the personal liberty of marriage rights, so the very tenuous link to their belief-in-belief in the Bible is sufficient to tilt their actual behavior.

Similarly, if the people I meet who say they think maybe 9/11 was an inside job by our government really believed that, they would most likely be trying to organize a violent revolution; any government willing to murder 3,000 of its own citizens in a false flag operation is one that must be overturned and can probably only be overturned by force. At the very least, they would flee the country. If they lived in a country where the government is actually like that, like Zimbabwe or North Korea, they wouldn’t fear being dismissed as conspiracy theorists, they’d fear being captured and executed. The very fact that you live within the United States and exercise your free speech rights here says pretty strongly that you don’t actually believe our government is that evil. But they wouldn’t be so outspoken about their conspiracy theories if they didn’t at least believe in believing them.

I also have to wonder how many of our politicians who lean on the Constitution as their source of authority have actually read the Constitution, as it says a number of rather explicit things against, oh, say, the establishment of religion (First Amendment) or searches and arrests without warrants (Fourth Amendment) that they don’t much seem to care about. Some are better about this than others; Rand Paul, for instance, actually takes the Constitution pretty seriously (and is frequently found arguing against things like warrantless searches as a result!), but Ted Cruz for example says he has spent decades “defending the Constitution”, despite saying things like “America is a Christian nation” that directly violate the First Amendment. Cruz doesn’t really seem to believe in the Constitution; but maybe he believes in believing the Constitution. (It’s also quite possible he’s just lying to manipulate voters.)


Debunking the Simulation Argument

Oct 23, JDN 2457685

Every subculture of humans has words, attitudes, and ideas that hold it together. The obvious example is religions, but the same is true of sports fandoms, towns, and even scientific disciplines. (I would estimate that 40-60% of scientific jargon, depending on discipline, is not actually useful, but simply a way of exhibiting membership in the tribe. Even physicists do this: “quantum entanglement” is useful jargon, but “p-brane” surely isn’t. Statisticians too: Why say the clear and understandable “unequal variance” when you could show off by saying “heteroskedasticity”? In certain disciplines of the humanities this figure can rise as high as 90%: “imaginary” as a noun leaps to mind.)

One particularly odd idea that seems to define certain subcultures of very intelligent and rational people is the Simulation Argument, originally (and probably best) propounded by Nick Bostrom:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.

In this original formulation by Bostrom, the argument actually makes some sense. It can be escaped, because it makes some subtle anthropic assumptions that need to be considered more carefully (in short, there could be ancestor-simulations but we could still know we aren’t in one); but it deserves to be taken seriously. Indeed, I think proposition (2) is almost certainly true, and proposition (1) might be as well; thus I have no problem accepting the disjunction.

Of course, the typical form of the argument isn’t nearly so cogent. In popular outlets as prestigious as the New York Times, Scientific American and the New Yorker, the idea is simply presented as “We are living in a simulation.” The only major outlet I could find that properly presented Bostrom’s disjunction was PBS. Indeed, there are now some Silicon Valley billionaires who believe the argument, or at least think it merits enough attention to be worth funding research into how we might escape the simulation we are in. (Frankly, even if we were inside a simulation, it’s not clear that “escaping” would be something worthwhile or even possible.)

Yet most people, when presented with this idea, think it is profoundly silly and a waste of time.

I believe this is the correct response. I am 99.9% sure we are not living in a simulation.

But it’s one thing to know that an argument is wrong, and quite another to actually show why; in that respect the Simulation Argument is a lot like the Ontological Argument for God:

However, as Bertrand Russell observed, it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.

To resolve this problem, I am writing this post (at the behest of my patrons on Patreon) to provide you now with a concise and persuasive argument directly against the Simulation Argument. No longer will you have to rely on your intuition that it can’t be right; you will actually have compelling logical reasons to reject it.

Note that I will not deny the core principle of cognitive science that minds are computational and therefore in principle could be simulated in such a way that the “simulations” would be actual minds. That’s usually what defenders of the Simulation Argument assume you’re denying, and perhaps in many cases it is; but that’s not what I’m denying. Yeah, sure, minds are computational (probably). There’s still no reason to think we’re living in a simulation.

To make this refutation, I should definitely address the strongest form of the argument, which is Nick Bostrom’s original disjunction. As I already noted, I believe that the disjunction is in fact true; at least one of those propositions is almost certainly correct, and perhaps two of them.

Indeed, I can tell you which one: Proposition (2). That is, I see no reason whatsoever why an advanced “posthuman” species would want to create simulated universes remotely resembling our own.


First of all, let’s assume that we do make it that far and posthumans do come into existence. I really don’t have sufficient evidence to say this is so, and the combination of millions of racists and thousands of nuclear weapons does not bode particularly well for that probability. But I think there is at least some good chance that this will happen—perhaps 10%?—so, let’s concede that point for now, and say that yes, posthumans will one day exist.

To be fair, I am not a posthuman, and cannot say for certain what beings of vastly greater intelligence and knowledge than I might choose to do. But since we are assuming that they exist as the result of our descendants more or less achieving everything we ever hoped for—peace, prosperity, immortality, vast knowledge—one thing I think I can safely extrapolate is that they will be moral. They will have a sense of ethics and morality not too dissimilar from our own. It will probably not agree in every detail—certainly not with what ordinary people believe, but very likely not with what even our greatest philosophers believe. It will most likely be better than our current best morality—closer to the objective moral truth that underlies reality.

I say this because this is the pattern that has emerged throughout the advancement of civilization thus far, and the whole reason we’re assuming posthumans might exist is that we are projecting this advancement further into the future. Humans have, on average, in the long run, become more intelligent, more rational, more compassionate. We have given up entirely on ancient moral concepts that we now recognize to be fundamentally defective, such as “witchcraft” and “heresy”; we are in the process of abandoning others for which some of us see the flaws but others don’t, such as “blasphemy” and “apostasy”. We have dramatically expanded the rights of women and various minority groups. Indeed, we have expanded our concept of which beings are morally relevant, our “circle of concern”, from only those in our tribe on outward to whole nations, whole races of people—and for some of us, as far as all humans or even all vertebrates. Therefore I expect us to continue to expand this moral circle, until it encompasses all sentient beings in the universe. Indeed, on some level I already believe that, though I know I don’t actually live in accordance with that theory—blame me if you will for my weakness of will, but can you really doubt the theory? Does it not seem likely that this is the theory to which our posthuman descendants will ultimately converge?

If that is the case, then posthumans would never make a simulation remotely resembling the universe I live in.

Maybe not me in particular, for I live relatively well—though I must ask why the migraines were really necessary. But among humans in general, there are many millions who live in conditions of such abject squalor and suffering that to create a universe containing them can only be counted as the gravest of crimes, morally akin to the Holocaust.

Indeed, creating this universe must, by construction, literally include the Holocaust. Because the Holocaust happened in this universe, you know.

So unless you think that our posthuman descendants are monsters (demons, really: immortal beings of vast knowledge and power who thrive on the death and suffering of other sentient beings), you cannot think that they would create our universe. They might create a universe of some sort—but they would not create this one. You may consider this a corollary of the Problem of Evil, which has always been one of the (many) knockdown arguments against the existence of God as depicted in any major religion.

To deny this, you must twist the simulation argument quite substantially, and say that only some of us are actual people, sentient beings instantiated by the simulation, while the vast majority are, for lack of a better word, NPCs. The millions of children starving in southeast Asia and central Africa aren’t real, they’re just simulated, so that the handful of us who are real have a convincing environment for the purposes of this experiment. Even then, it seems monstrous to deceive us in this way, to make us think that millions of children are starving just to see if we’ll try to save them.

Bostrom presents it as obvious that any species of posthumans would want to create ancestor-simulations, and to make this seem plausible he compares them to the many simulations we already create with our current technology, which we call “video games”. But this is such a severe equivocation on the word “simulation” that it frankly seems disingenuous (or for the pun perhaps I should say dissimulation).

This universe can’t possibly be a simulation in the sense that Halo 4 is a simulation. Indeed, this is something that I know with near-perfect certainty, for I am a sentient being (“Cogito ergo sum” and all that). There is at least one actual sentient person here—me—and based on my observations of your behavior, I know with quite high probability that there are many others as well—all of you.

Whereas, if I thought for even a moment there was even a slight probability that Halo 4 contains actual sentient beings that I am murdering, I would never play the game again; indeed I think I would smash the machine, and launch upon a global argumentative crusade to convince everyone to stop playing violent video games forevermore. If I thought that these video game characters that I explode with virtual plasma grenades were actual sentient people—or even had a non-negligible chance of being such—then what I am doing would be literally murder.

So whatever else the posthumans would be doing by creating our universe inside some vast computer, it is not “simulation” in the sense of a video game. If they are doing this for amusement, they are monsters. Even if they are doing it for some higher purpose such as scientific research, I strongly doubt that it can be justified; and I even more strongly doubt that it could be justified frequently. Perhaps once or twice in the whole history of the civilization, as a last resort to achieve some vital scientific objective when all other methods have been thoroughly exhausted. Furthermore it would have to be toward some truly cosmic objective, such as forestalling the heat death of the universe. Anything less would not justify literally replicating thousands of genocides.

But the way Bostrom generates a nontrivial probability of us living in a simulation is by assuming that each posthuman civilization will create many simulations similar to our own, so that the prior probability of being in a simulation is so high that it overwhelms the much higher likelihood that we are in the real universe. (This is a deeply Bayesian argument; of that part, I approve. In Bayesian reasoning, the likelihood is the probability that we would observe the evidence we do given that the theory is true, while the prior is the probability that the theory is true, before we’ve seen any evidence. The probability of the theory actually being true is proportional to the likelihood multiplied by the prior.) But if the Foundation IRB will only approve the construction of a Synthetic Universe in order to achieve some cosmic objective, then the prior probability of being in a simulation is at most something like 2/3, or 9/10; and that is no match whatsoever for a likelihood ratio of some 10^12 in favor of this being actual reality.
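
The whole dispute then comes down to a few numbers. Here is that update as a minimal sketch in Python, using the illustrative figures above; the 9/10 prior and the 10^12 likelihood ratio are assumptions for the sake of the example, not established facts:

    # Bayesian update: P(sim | evidence) is proportional to prior times likelihood.
    prior_sim = 0.9        # assumed: at most ~9 rare, last-resort simulations per real universe
    lr_real = 1e12         # assumed: the evidence favors the real universe by ~10^12

    posterior_sim = prior_sim / (prior_sim + (1 - prior_sim) * lr_real)
    print(posterior_sim)   # ~9e-12: even a lopsided prior is no match for the likelihood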

Just what is this so compelling likelihood? That brings me to my next point, which is a bit more technical, but important because it’s really where the Simulation Argument truly collapses.

How do I know we aren’t in a simulation?

The fundamental equations of the laws of nature do not have closed-form solutions.

Take a look at the Schrödinger equation, the Einstein field equations, the Navier-Stokes equations, even Maxwell’s equations (which are relatively well-behaved, all things considered). These are all systems of partial differential equations, most of them second-order, and extremely complex to solve. They are all defined over continuous time and space, which has uncountably many points in every interval (though there are some physicists who believe that spacetime may be discrete on the order of 10^-44 seconds). Not one of them has a general closed-form solution, by which I mean a formula that you could just plug in numbers for the parameters on one side of the equation and output an answer on the other. (x^3 + y^3 = 3 is not a closed-form solution, but y = (3 - x^3)^(1/3) is.) They have such exact solutions in certain special cases, but in general we can only solve them approximately, if at all.
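
For concreteness, here is the first of those, the time-dependent Schrödinger equation for a single non-relativistic particle (written in LaTeX notation). A closed-form solution would be an explicit general formula for Psi in terms of x and t; none exists:

    i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{x},t) = \left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{x},t)\right]\Psi(\mathbf{x},t)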

This is not particularly surprising if you assume we’re in the actual universe. I have no particular reason to think that the fundamental laws underlying reality should be of a form that is exactly solvable to minds like my own, or even solvable at all in any but a trivial sense. (They must be “solvable” in the sense of actually resulting in something in particular happening at any given time, but that’s all.)

But it is extremely surprising if you assume we’re in a universe that is simulated by posthumans. If posthumans are similar to us, but… more so I guess, then when they set about to simulate a universe, they should do so in a fashion not too dissimilar from how we would do it. And how would we do it? We’d code in a bunch of laws into a computer in discrete time (and definitely not with time-steps of 10^-44 seconds either!), and those laws would have to be encoded as functions, not equations. There could be many inputs in many different forms, perhaps even involving mathematical operations we haven’t invented yet—but each configuration of inputs would have to yield precisely one output, if the computer program is to run at all.
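
To make the contrast concrete, here is a minimal sketch in Python of a “law encoded as a function” in discrete time; the harmonic-oscillator force law is a toy stand-in, not real physics:

    def step(state, dt):
        # one discrete update: each configuration of inputs yields exactly one output
        x, v = state
        a = -x                          # toy force law: F = -kx with k = m = 1
        return (x + v * dt, v + a * dt)

    state = (1.0, 0.0)                  # initial position and velocity
    for _ in range(1000):
        state = step(state, dt=0.001)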

Indeed, if they are really like us, then their computers will probably only be capable of one core operation—conditional bit flipping, 1 to 0 or 0 to 1 depending on some state—and the rest will be successive applications of that operation. Bit shifts are many bit flips, addition is many bit shifts, multiplication is many additions, exponentiation is many multiplications. We would therefore expect the fundamental equations of the simulated universe to have an extremely simple functional form, literally something that can be written out as many successive steps of “if A, flip X to 1” and “if B, flip Y to 0”. It could be a lot of such steps mind you—existing programs require billions or trillions of such operations—but one thing it could never be is a partial differential equation that cannot be solved exactly.
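
Here is that tower of successive operations as a sketch in Python: addition built from nothing but bitwise operations, and multiplication built from additions (non-negative integers assumed):

    def add(a, b):
        # repeated half-adder: XOR gives the sum bits, AND plus a shift gives the carries
        while b:
            carry = (a & b) << 1
            a = a ^ b
            b = carry
        return a

    def multiply(a, b):
        # shift-and-add: multiplication as many additions
        result = 0
        while b:
            if b & 1:
                result = add(result, a)
            a <<= 1
            b >>= 1
        return result

    assert multiply(6, 7) == 42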

What fans of the Simulation Argument seem to forget is that while this simple set of operations is extremely general, capable of generating quite literally any possible computable function (Turing proved that), it is not capable of generating any function that isn’t computable, much less any equation that can’t be solved into a function. So unless the laws of the universe can actually be reduced to computable functions, it’s not even possible for us to be inside a computer simulation.

What is the probability that all the fundamental equations of the universe can be reduced to computable functions? Well, it’s difficult to assign a precise figure of course. I have no idea what new discoveries might be made in science or mathematics in the next thousand years (if I did, I would make a few and win the Nobel Prize). But given that we have been trying to get closed-form solutions for the fundamental equations of the universe and failing miserably since at least Isaac Newton, I think that probability is quite small.

Then there’s the fact that (again unless you believe some humans in our universe are NPCs) there are 7.3 billion minds (and counting) that you have to simulate at once, even assuming that the simulation only includes this planet and yet somehow perfectly generates an apparent cosmos that even behaves as we would expect under things like parallax and redshift. There’s the fact that whenever we try to study the fundamental laws of our universe, we are able to do so, and never run into any problems of insufficient resolution; so apparently at least this planet and its environs are being simulated at the scale of nanometers and femtoseconds. This is a ludicrously huge amount of data, and while I cannot rule out the possibility of some larger universe existing that would allow a computer large enough to contain it, you have a very steep uphill battle if you want to argue that this is somehow what our posthuman descendants will consider the best use of their time and resources. Bostrom uses the video game comparison to make it sound like they are just cranking out copies of Halo 917 (“Plasma rifles? How quaint!”) when in fact it amounts to assuming that our descendants will just casually create universes of 10^50 particles running over space intervals of 10^-9 meters and time-steps of 10^-15 seconds that contain billions of actual sentient beings and thousands of genocides, and furthermore do so in a way that somehow manages to make the apparent fundamental equations inside those universes unsolvable.
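
The back-of-envelope arithmetic behind “ludicrously huge” (a sketch in Python; Earth’s volume plus the resolution figures above are the only inputs, and one update per cell per time-step is surely generous to the simulators):

    earth_volume_m3 = 1.1e21               # roughly (4/3) * pi * (6.4e6 m)^3
    cells = earth_volume_m3 * (1e9) ** 3   # nanometer-resolution grid: ~1e48 cells
    steps_per_second = 1e15                # femtosecond time-steps
    print(cells * steps_per_second)        # ~1e63 cell-updates per simulated second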

Indeed, I think it’s conservative to say that the likelihood ratio is 10^12—observing what we do is a trillion times more likely if this is the real universe than if it’s a simulation. Therefore, unless you believe that our posthuman descendants would have reason to create at least a billion simulations of universes like our own, you can assign a probability that we are in the actual universe of at least 99.9%.

As indeed I do.

How do we reach people with ridiculous beliefs?

Oct 16, JDN 2457678

One of the most unfortunate facts in the world—indeed, perhaps the most unfortunate fact, from which most other unfortunate facts follow—is that it is quite possible for a human brain to sincerely and deeply hold a belief that is, by any objective measure, totally and utterly ridiculous.

And to be clear, I don’t just mean false; I mean ridiculous. People having false beliefs is an inherent part of being finite beings in a vast and incomprehensible universe. Monetarists are wrong, but they are not ludicrous. String theorists are wrong, but they are not absurd. Multiregionalism is wrong, but it is not nonsensical. Indeed, I, like anyone else, am probably wrong about a great many things, though of course if I knew which ones I’d change my mind. (Indeed, I admit a small but nontrivial probability of being wrong about the three things I just listed.)

I mean ridiculous beliefs. I mean that any rational, objective assessment of the probability of that belief being true would be vanishingly small, 1 in 1 million at best. I’m talking about totally nonsensical beliefs, beliefs that go against overwhelming evidence; some of them are outright incoherent. Yet millions of people go on believing them.

For example, over 40% of Americans believe that human beings were created by God in their present form less than 10,000 years ago, and typically offer no evidence for this besides “The Bible says so.” (Strictly speaking, even that isn’t true—standard interpretations of the Bible say so. The Bible itself contains no clearly stated date for creation.) This despite the absolutely overwhelming body of evidence supporting the theory of evolution by Darwinian natural selection.

Over a third of Americans don’t believe in global warming, despite the complete consensus among credible climate scientists, based on overwhelming evidence, that it is real and that it is one of the central threats facing human civilization over the 21st century. On a global scale this is rather like standing on a train track and saying you don’t believe in trains. (Or like a story my mother once told me: an alert went out to her office that there was a sniper in the area, indiscriminately shooting at civilians, and one of her co-workers refused to join the security protocol, declaring smugly, “I don’t believe in snipers.” Fortunately, he was unharmed in the incident. This time.)

1/4 of Americans believe in astrology, and 1/4 of Americans believe that aliens have visited the Earth. (Not sure if it’s the same 1/4. Probably considerable but not total overlap.) The existence of extraterrestrial civilizations somewhere in this mind-bogglingly (perhaps infinitely) vast universe has probability 1. But visiting us is quite another matter, and there is absolutely no credible evidence of it. As for astrology? I shouldn’t have to explain why the position of Jupiter, much less Sirius, on your birthday is not a major influence on your behavior or life outcomes. Your obstetrician exerted a far greater tidal force on you than Jupiter did at the moment you were born; Jupiter’s raw gravitational pull is actually larger, but it accelerates you and everything around you identically, so it cannot possibly shape who you are.
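
That last claim is easy to check (a sketch in Python; the obstetrician’s mass and distance are assumed round figures, and the Earth-Jupiter distance is taken at closest approach):

    G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
    m_jup, d_jup = 1.9e27, 6.3e11      # Jupiter: mass (kg), distance at closest (m)
    m_doc, d_doc = 70.0, 0.5           # obstetrician: assumed mass (kg) and distance (m)

    # Raw gravitational acceleration, a = G*M/r^2; Jupiter actually wins this one:
    print(G * m_jup / d_jup**2)        # ~3e-7 m/s^2
    print(G * m_doc / d_doc**2)        # ~2e-8 m/s^2

    # But a uniform pull moves you and the whole delivery room together; only the
    # tidal difference across your body, ~2*G*M*dr/r^3, could single you out:
    dr = 0.5                           # rough size of a newborn's surroundings (m)
    print(2 * G * m_jup * dr / d_jup**3)   # ~5e-19 m/s^2
    print(2 * G * m_doc * dr / d_doc**3)   # ~4e-8 m/s^2: the obstetrician wins by ~10^10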

The majority of Americans believe in telepathy or extrasensory perception. I confess that I actually did when I was very young, though I think I disabused myself of this around the time I stopped believing in Santa Claus.

I love the term “extrasensory perception” because it is such an oxymoron; if you’re perceiving, it is via senses. “Sixth sense” is better, except that we actually already have at least nine senses: The ones you probably know, vision (sight), audition (hearing), olfaction (smell), gustation (taste), and tactition (touch)—and the ones you may not know, thermoception (heat), proprioception (body position), vestibulation (balance), and nociception (pain). These can probably be subdivided further—vision and spatial reasoning are dissociated in blind people, heat and cold are separate nerve pathways, pain and itching are distinct systems, and there are a variety of different sensors used for proprioception. So we really could have as many as twenty senses, depending on how you’re counting.

What about telepathy? Well, that is not actually impossible in principle; it’s just that there’s no evidence that humans actually do it. Smartphones do it almost literally constantly, transmitting data via high-frequency radio waves back and forth to one another. We could have evolved some sort of radio transceiver organ (perhaps an offshoot of an electric defense organ such as that of electric eels), but as it turns out we didn’t. Actually in some sense—which some might say is trivial, but I think it’s actually quite deep—we do have telepathy; it’s just that we transmit our thoughts not via radio waves or anything more exotic, but via sound waves (speech) and marks on paper (writing) and electronic images (what you’re reading right now). Human beings really do transmit our thoughts to one another, and this truly is a marvelous thing we should not simply take for granted (it is one of our most impressive feats of Mundane Magic); but somehow I don’t think that’s what people mean when they say they believe in psychic telepathy.

And lest you think this is a uniquely American phenomenon: The particular beliefs vary from place to place, but bizarre beliefs abound worldwide, from conspiracy theories in the UK to 9/11 “truthers” in Canada to HIV denialism in South Africa (fortunately on the wane). The American examples are more familiar to me and most of my readers are Americans, but wherever you are reading from, there are probably ridiculous beliefs common there.

I could go on, listing more objectively ridiculous beliefs that are surprisingly common; but the more I do that, the more I risk alienating you, in case you should happen to believe one of them. When you add up the dizzying array of ridiculous beliefs one could hold, odds are that most people you’d ever meet will have at least one of them. (“Not me!” you’re thinking; and perhaps you’re right. Then again, I’m pretty sure that the 4% or so of people who believe in the Reptilians think the same thing.)

Which brings me to my real focus: How do we reach these people?

One possible approach would be to just ignore them, leave them alone, or go about our business with them as though they did not have ridiculous beliefs. This is in fact the right thing to do under most circumstances, I think; when a stranger on the bus starts blathering about how the lizard people are going to soon reveal themselves and establish the new world order, I don’t think it’s really your responsibility to persuade that person to realign their beliefs with reality. Nodding along quietly would be acceptable, and it would be above and beyond the call of duty to simply say, “Um, no… I’m fairly sure that isn’t true.”

But this cannot always be the answer, if for no other reason than the fact that we live in a democracy, and people with ridiculous beliefs frequently vote according to them. Then people with ridiculous beliefs can take office, and make laws that affect our lives. Actually this would be true even if we had some other system of government; there’s nothing in particular to stop monarchs, hereditary senates, or dictators from believing ridiculous things. If anything, the opposite; dictators are known for their eccentricity precisely because there are no checks on their behavior.

At some point, we’re going to need to confront the fact that over half of the Republicans in the US Congress do not believe in climate change, and are making policy accordingly, rolling drunk on petroleum and treating the hangover with the hair of the dog.

We’re going to have to confront the fact that school boards in Southern states, particularly Texas, continually vote to purge biology textbooks of their dreaded Darwinian evolution.

So we really do need to find a way to talk to people who have ridiculous beliefs, and engage with them, understand why they think the way they do, and then—hopefully at least—tilt them a little bit back toward rational reality. You will not be able to change their mind completely right away, but if each of us can at least chip away at their edifice of absurdity, then all together perhaps we can eventually bring them to enlightenment.

Of course, a good start is probably not to say you think that their beliefs are ridiculous, because people get very defensive when you do that, even—perhaps especially—when it’s true. People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.

This is the link that we must somehow break. We must show people that they are not defined by their beliefs, that it is okay to change your mind. We must be patient and compassionate—sometimes heroically so, as people spout offensive nonsense in our faces, sometimes offensive nonsense that directly attacks us personally. (“Atheists deserve Hell”, taken literally, would constitute something like a death threat except infinitely worse. While to them it very likely is just reciting a slogan, to the atheist listening it says that you believe that they are so evil, so horrible that they deserve eternal torture for believing what they do. And you get mad when we say your beliefs are ridiculous?)

We must also remind people that even very smart people can believe very dumb things—indeed, I’d venture a guess that most dumb things are in fact believed by smart people. Even the most intelligent human beings can only glimpse a tiny fraction of the universe, and all human brains are subject to the same fundamental limitations, the same core heuristics and biases. Make it clear that you’re saying you think their beliefs are false, not that they are stupid or crazy. And indeed, make it clear to yourself that this is indeed what you believe, because it ought to be. It can be tempting to think that only an idiot would believe something so ridiculous—and you are safe, for you are no idiot!—but the truth is far more humbling: Human brains are subject to many flaws, and guarding the fortress of the mind against error and deceit is a 24-7 occupation. Indeed, I hope that you will ask yourself: “What beliefs do I hold that other people might find ridiculous? Are they, in fact, ridiculous?”

Even then, it won’t be easy. Most people are strongly resistant to any change in belief, however small, and it is in the nature of ridiculous beliefs that they require radical changes in order to restore correspondence with reality. So we must try in smaller steps.

Maybe don’t try to convince them that 9/11 was actually the work of Osama bin Laden; start by pointing out that yes, steel does bend much more easily at the temperature at which jet fuel burns. Maybe don’t try to persuade them that astrology is meaningless; start by pointing out the ways that their horoscope doesn’t actually seem to fit them, or could be made to fit anybody. Maybe don’t try to get across the real urgency of climate change just yet, and instead point out that the “study” they read showing it was a hoax was clearly funded by oil companies, who would perhaps have a vested interest here. And as for ESP? I think it’s a good start just to point out that we have more than five senses already, and there are many wonders of the human brain that actual scientists know about well worth exploring—so who needs to speculate about things that have no scientific evidence?

Sometimes people have to lose their jobs. This isn’t a bad thing.

Oct 8, JDN 2457670

Eliezer Yudkowsky (founder of the excellent community blog Less Wrong) has a term he likes to use to distinguish his economic policy views from liberal, conservative, or even libertarian ones: “econoliterate”, meaning the sort of economic policy ideas one comes up with when one actually knows a good deal about economics.

In general I think Yudkowsky overestimates this effect; I’ve known some very knowledgeable economists who disagree quite strongly over economic policy, and often along the conventional political lines of liberal versus conservative: Liberal economists want more progressive taxation and more Keynesian monetary and fiscal policy, while conservative economists want to reduce taxes on capital and remove regulations. Theoretically you can want all these things—as Miles Kimball does—but it’s rare. Conservative economists hate the minimum wage, and lean on the theory that says it should be harmful to employment; liberal economists are ambivalent about the minimum wage, and lean on the empirical data that shows it has almost no effect on employment. Which is more reliable? The empirical data, obviously—and until more economists start thinking that way, economics will never truly become the science it should be.

But there are a few issues where Yudkowsky’s “econoliterate” concept really does seem to make sense, where there is one view held by most people, and another held by economists, regardless of who is liberal or conservative. One such example is free trade, which almost all economists believe in. A recent poll of prominent economists by the University of Chicago found literally zero who agreed with protectionist tariffs.

Another example is my topic for today: People losing their jobs.

Not unemployment, which both economists and almost everyone else agree is bad; but people losing their jobs. The general consensus among the public seems to be that people losing jobs is always bad, while economists generally consider it a sign of an economy that is running smoothly and efficiently.

To be clear, of course losing your job is bad for you; I don’t mean to imply that if you lose your job you shouldn’t be sad or frustrated or anxious about that, particularly not in our current system. Rather, I mean to say that policy which tries to keep people in their jobs is almost always a bad idea.

I think the problem is that most people don’t quite grasp that losing your job and not having a job are not the same thing. People not having jobs who want to have jobs—unemployment—is a bad thing. But losing your job doesn’t mean you have to stay unemployed; it could simply mean you get a new job. And indeed, that is what it should mean, if the economy is running properly.

Check out this graph, from FRED:

[Figure: hires and total separations rates, FRED]

The red line shows hires—people getting jobs. The blue line shows separations—people losing jobs or leaving jobs. During a recession (the most recent two are shown on this graph), people don’t actually leave their jobs faster than usual; if anything, slightly less. Instead what happens is that hiring rates drop dramatically. When the economy is doing well (as it is right now, more or less), both hires and separations are at very high rates.
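
For anyone who wants to reproduce the graph, here is a sketch in Python using pandas-datareader; I believe JTSHIR (hires rate) and JTSTSR (total separations rate) are the relevant JOLTS series codes on FRED, but verify them before relying on this:

    import matplotlib.pyplot as plt
    from pandas_datareader import data as pdr

    # Monthly JOLTS rates, as a percent of total employment, from FRED
    df = pdr.DataReader(["JTSHIR", "JTSTSR"], "fred", start="2001-01-01")
    df.plot(title="Hires vs. total separations, % of employment")
    plt.show()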

Why is this? Well, think about what a job is, really: It’s something that needs to be done, that no one wants to do for free, so someone pays someone else to do it. Once that thing gets done, what should happen? The job should end. It’s done. The purpose of the job was not to provide for your standard of living; it was to accomplish the task at hand. Once it no longer needs to be done, why keep doing it?

We tend to lose sight of this, for a couple of reasons. First, we don’t have a basic income, and our social welfare system is very minimal; so a job usually is the only way people have to provide for their standard of living, and they come to think of this as the purpose of the job. Second, many jobs don’t really “get done” in any clear sense; individual tasks are completed, but new ones always arise. After every email sent is another received; after every patient treated is another who falls ill.

But even that is really only true in the short run. In the long run, almost all jobs do actually get done, in the sense that no one has to do them anymore. The job of cleaning up after horses is done (with rare exceptions). The job of manufacturing vacuum tubes for computers is done. Indeed, the job of being a computer—that used to be a profession, young women toiling away with slide rules—is very much done. There are no court jesters anymore, no town criers, and very few artisans (and even then, they’re really more like hobbyists). There are more writers now than ever, and occasional stenographers, but there are no scribes—no one powerful but illiterate pays others just to write things down, because no one powerful is illiterate (and few who are not powerful are illiterate, and fewer all the time).

When a job “gets done” in this long-run sense, we usually say that it is obsolete, and again think of this as somehow a bad thing, like we are somehow losing the ability to do something. No, we are gaining the ability to do something better. Jobs don’t become obsolete because we can’t do them anymore; they become obsolete because we don’t need to do them anymore. Instead of computers being a profession that toils with slide rules, they are thinking machines that fit in our pockets; and there are plenty of jobs now for software engineers, web developers, network administrators, hardware designers, and so on as a result.

Soon, there will be no coal miners, and very few oil drillers—or at least I hope so, for the sake of our planet’s climate. There will be far fewer auto workers (robots have already done most of that), but far more construction workers who install rail lines. There will be more nuclear engineers, more photovoltaic researchers, even more miners and roofers, because we need to mine uranium and install solar panels on rooftops.

Yet even by saying that I am falling into the trap: I am making it sound like the benefit of new technology is that it opens up more new jobs. Typically it does do that, but that isn’t what it’s for. The purpose of technology is to get things done.

Remember my parable of the dishwasher. The goal of our economy is not to make people work; it is to provide people with goods and services. If we could invent a machine today that would do the job of everyone in the world and thereby put us all out of work, most people think that would be terrible—but in fact it would be wonderful.

Or at least it could be, if we did it right. See, the problem right now is that while poor people think that the purpose of a job is to provide for their needs, rich people think that the purpose of poor people is to do jobs. If there are no jobs to be done, why bother with them? At that point, they’re just in the way! (Think I’m exaggerating? Why else would anyone put a work requirement on TANF and SNAP? To do that, you must literally think that poor people do not deserve to eat or have homes if they aren’t, right now, working for an employer. You can couch that in cold economic jargon as “maximizing work incentives”, but that’s what you’re doing—you’re threatening people with starvation if they can’t or won’t find jobs.)

What would happen if we tried to stop people from losing their jobs? Typically, inefficiency. When you aren’t allowed to lay people off when they are no longer doing useful work, we end up in a situation where a large segment of the population is being paid but isn’t doing useful work—and unlike the situation with a basic income, those people would lose their income, at least temporarily, if they quit and tried to do something more useful. There is still considerable uncertainty within the empirical literature on just how much “employment protection” (laws that make it hard to lay people off) actually creates inefficiency and reduces productivity and employment, so it could be that this effect is small—but even so, employment protection does not seem to have the desired effect of reducing unemployment either. It may be like the minimum wage, where the effect just isn’t all that large. But it’s probably not saving people from being unemployed; it may simply be shifting the distribution of unemployment so that people with protected jobs are almost never unemployed and people without such protection are unemployed much more frequently. (This doesn’t have to be based in law, either: tenure for university professors is a matter of custom rather than law, but it clearly makes tenured professors vastly more secure, at the cost of making employment tenuous and underpaid for adjuncts.)

There are other policies we could adopt that are better than employment protection, active labor market policies like those in Denmark that would make it easier to find a good job. Yet even then, we’re assuming that everyone needs jobs, and increasingly, that just isn’t true.

So, when we invent a new technology that replaces workers, workers are laid off from their jobs—and that is as it should be. What happens next is what we do wrong, and it’s not even anybody in particular; this is something our whole society does wrong: All those displaced workers get nothing. The extra profit from the more efficient production goes entirely to the shareholders of the corporation—and those shareholders are almost entirely members of the top 0.01%. So the poor get poorer and the rich get richer.

The real problem here is not that people lose their jobs; it’s that capital ownership is distributed so unequally. And boy, is it ever! Here are some graphs I made of the distribution of net wealth in the US, using data from the US Census.

Here are the quintiles of the population as a whole:

[Figure: net wealth quintiles, US population as a whole]

And here are the medians by race:

[Figure: median US net wealth by race]

Medians by age:

[Figure: median US net wealth by age]

Medians by education:

[Figure: median US net wealth by education]

And, perhaps most instructively, here are the quintiles of people who own their homes versus renting (The rent is too damn high!)

[Figure: net wealth quintiles, homeowners vs. renters]

All that is just within the US, and already the figures range from the mean net wealth of the lowest quintile of people under 35 (-$45,000; yes, negative: student loans) to the mean net wealth of the highest quintile of people with graduate degrees ($3.8 million). All but the top quintile of renters are poorer than all but the bottom quintile of homeowners. And the median Black or Hispanic person has less than one-tenth the wealth of the median White or Asian person.

If we look worldwide, wealth inequality is even starker. Based on UN University figures, 40% of world wealth is owned by the top 1%; 70% by the top 5%; and 80% by the top 10%. There is less total wealth in the bottom 80% than in the 80-90% decile alone. According to Oxfam, the richest 85 individuals own as much net wealth as the poorest 3.7 billion. They are the 0.000001%.
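
That last figure is a one-line check (in Python, taking world population as the 7.3 billion cited earlier):

    print(85 / 7.3e9 * 100)   # ~0.0000012: the richest 85 are "the 0.000001%"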

If we had an equal distribution of capital ownership, people would be happy when their jobs became obsolete, because it would free them up to do other things (either new jobs, or simply leisure time), while not decreasing their income—because they would be the shareholders receiving those extra profits from higher efficiency. People would be excited to hear about new technologies that might displace their work, especially if those technologies would displace the tedious and difficult parts and leave the creative and fun parts. Losing your job could be the best thing that ever happened to you.

The business cycle would still be a problem; we have good reason not to let recessions happen. But stopping the churn of hiring and firing wouldn’t actually make our society better off; it would keep people in jobs where they don’t belong and prevent us from using our time and labor for its best use.

Perhaps the reason most people don’t even think of this solution is precisely because of the extreme inequality of capital distribution—and the fact that it has more or less always been this way since the dawn of civilization. It doesn’t seem to even occur to most people that capital income is a thing that exists, because they are so far removed from actually having any amount of capital sufficient to generate meaningful income. Perhaps when a robot takes their job, on some level they imagine that the robot is getting paid, when of course the money goes to the shareholders of the corporation that made the robot and of the corporation that is using the robot in place of workers. Or perhaps they imagine that those shareholders actually did so much hard work that they deserve to get paid that money for all the hours they spent.

Because pay is for work, isn’t it? The reason you get money is because you’ve earned it by your hard work?

No. This is a lie, told to you by the rich and powerful in order to control you. They know full well that income doesn’t just come from wages—most of their income doesn’t come from wages! Yet this is even built into our language; we say “net worth” and “earnings” rather than “net wealth” and “income”. (Parade magazine has a regular segment called “What People Earn”; it should be called “What People Receive”.) Money is not your just reward for your hard work—at least, not always.

The reason you get money is that this is a useful means of allocating resources in our society. (Remember, money was created by governments for the purpose of facilitating economic transactions. It is not something that occurs in nature.) Wages are one way to do that, but they are far from the only way; they are not even the only way currently in use. As technology advances, we should expect a larger proportion of our income to go to capital—but what we’ve been doing wrong is setting it up so that only a handful of people actually own any capital.

Fix that, and maybe people will finally be able to see that losing your job isn’t such a bad thing; it could even be satisfying, the fulfillment of finally getting something done.