Caught between nepotism and credentialism

Feb 19, JDN 2457804

One of the more legitimate criticisms out there of us “urban elites” is our credentialism: our tendency to decide a person’s value as an employee or even as a human being based solely upon their formal credentials. Randall Collins, an American sociologist, wrote a book called The Credential Society arguing that much of the class stratification in the United States is traceable to this credentialism—upper-middle-class White Anglo-Saxon Protestants go to the good high schools to get into the good colleges to get the good careers, and all along the way maintain subtle but significant barriers to keep everyone else out.

A related concern is that of credential inflation, where more and more people get a given credential (such as a high school diploma or a college degree), and it begins to lose value as a signal of status. It is often noted that a bachelor’s degree today “gets” you the same jobs that a high school diploma did two generations ago, and two generations hence you may need a master’s or even a PhD.

I consider this concern wildly overblown, however. First of all, they’re not actually the same jobs at all. Even our “menial” jobs of today require skills that most people didn’t have two generations ago—not simply those involving electronics and computers, but even quite basic literacy and numeracy. Yes, you could be a banker in the 1920s with a high school diploma, but plenty of bankers in the 1920s didn’t know algebra. What, you think they were arbitraging derivatives based on the Black-Scholes model?

The primary purpose of education should be to actually improve students’ abilities, not to signal their superior status. More people getting educated is good, not bad. If we really do need signals, we can devise better ones than making people pay tens of thousands of dollars in tuition and spend years taking classes. An expenditure of that magnitude should be accomplishing something, not just signaling. (And given the overwhelming positive correlation between a country’s educational attainment and its economic development, clearly education is actually accomplishing something.) Our higher educational standards are directly tied to higher technology and higher productivity. If indeed you need a PhD to be a janitor in 2050, it will be because in 2050 a “janitor” is actually the expert artificial intelligence engineer who commands an army of cleaning robots, not because credentials have “inflated”. Thinking that credentials “inflate” requires thinking that business managers must be very stupid, that they would exclude whole swaths of qualified candidates whom they could pay less to do the same work. Only a complete moron would require a PhD to hire you for wielding a mop.

No, what concerns me is an over-emphasis on prestigious credentials over genuine competence. This is definitely a real issue in our society: Almost every US President went to an Ivy League university, yet several of them (George W. Bush, anyone?) clearly would not actually have been selected by such a university if their families had not been wealthy and well-connected. (Harvard’s application literally contains a question asking whether you are a “lineal or collateral descendant” of one of a handful of super-wealthy families.) Papers that contain errors so basic that I would probably get a failing grade as a grad student for them become internationally influential because they were written by famous economists with fancy degrees.

Ironically, it may be precisely because elite universities try not to give grades or special honors that so many of their students try so desperately to latch onto any bits of social status they can get their hands on. In this blog post, a former Yale law student comments on how, without grades or cum laude to define themselves, Yale students became fiercely competitive in the pettiest ways imaginable. Or it might just be a selection effect; to get into Yale you’ve probably got to be pretty competitive, so even if they don’t give out grades once you get there, you can take the student out of the honors track, but you can’t take the honors track out of the student.

But perhaps the biggest problem with credentialism is… I don’t see any viable alternatives!

We have to decide who is going to be hired for technical and professional positions somehow. It almost certainly can’t be everyone. And the most sensible way to do it would be to have a process people go through to get trained and evaluated on their skills in that profession—that is, a credential.

What else would we do? We could decide randomly, I suppose; well, good luck with that. Or we could try to pick people who don’t have qualifications (“anti-credentialism” I suppose), which would be systematically wrong. Or individual employers could hire individuals they know and trust on a personal level, which doesn’t seem quite so ridiculous—but we have a name for that too, and it’s nepotism.

Even anti-credentialism does exist, bafflingly enough. Many people voted for George W. Bush because they said he was “the kind of guy you can have a beer with”. That wasn’t true, of course; he was the spoiled child of a billionaire, a man who had never really worked a day in his life. But even if it had been true, so what? How is that a qualification to be the leader of the free world? And how many people voted for Trump precisely because he had no experience in government? This made sense to them somehow. (And, shockingly, he has no idea what he’s doing. Actually, what is shocking is that he admits it.)

Nepotism of course happens all the time. In fact, nepotism is probably the default state for humans. The continual re-emergence of hereditary monarchy and feudalism around the world suggests that this is some sort of attractor state for human societies, that in the absence of strong institutional pressures toward some other system this is what people will generally settle into. And feudalism is nothing if not nepotistic; your position in life is almost entirely determined by your father’s position, and his father’s before that.

Formal credentials can put a stop to that. Of course, your ability to obtain the credential often depends upon your income and social status. But if you can get past those barriers and actually get the credential, you now have a way of pushing past at least some of the competitors who would have otherwise been hired on their family connections alone. The rise in college enrollments—and women actually now exceeding men in college enrollment rates—is one of the biggest reasons why the gender pay gap is rapidly closing among young workers. The nepotism and sexism that would otherwise have gotten unqualified men hired are now outweighed by the superior credentials of qualified women.

Credentialism does still seem suboptimal… but from where I’m sitting, it seems like a second-best solution. We can’t actually observe people’s competence and ability directly, so we need credentials to provide an approximate measurement. We can certainly work to improve credentials—and for example, I am fiercely opposed to multiple-choice testing because it produces such meaningless credentials—but ultimately I don’t see any alternative to credentials.

Is intellectual property justified?

Feb 12, JDN 2457797

I had hoped to make this week’s post more comprehensive, but as I’ve spent the last week suffering from viral bronchitis I think I will keep this one short and revisit the topic in a few weeks.

Intellectual property underlies an increasingly large proportion of the world’s economic activity, more so now than ever before. We don’t just patent machines anymore; we patent drugs, and software programs, and even plants. Compared to that, copyrights on books, music, and movies seem downright pedestrian.

Though surely not the only cause, this is almost certainly contributing to the winner-takes-all effect; if you own the patent to something important, you can appropriate a huge amount of wealth to yourself with very little effort.

Moreover, this is not something that happened automatically as a natural result of market forces or autonomous human behavior. This is a policy, one that requires large investments in surveillance and enforcement to maintain. Intellectual property is probably the single largest market intervention that our government makes, and it is in a very strange direction: With antitrust law, the government seeks to undermine monopolies; but with intellectual property, the government seeks to protect monopolies.

So it’s important to ask: What is the justification for intellectual property? Do we actually have a good reason for doing this?

The basic argument goes something like this:

Many intellectual endeavors, such as research, invention, and the creation of art, require a large up-front investment of resources to complete, but once completed it costs almost nothing to disseminate the results. There is a very large fixed cost that makes it difficult to create these goods at all, but once they exist, the marginal cost of producing more of them is minimal.

If we didn’t have any intellectual property, once someone created an invention or a work of art, someone else could simply copy it and sell it at a much lower price. If enough competition emerged to drive price down to marginal cost, the original creator of the good would not only not profit, but would actually take an enormous loss, as they paid that large fixed cost but none of their competitors did.

Thus, knowing that they will take a loss if they do, individuals will not create inventions or works of art in the first place. Without intellectual property, all research, invention, and art would grind to a halt.
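
To make the arithmetic of this argument concrete, here is a toy sketch; every number in it (the development cost, the marginal cost, the quantity and prices) is invented purely for illustration:

```python
# Toy model of the standard argument for intellectual property.
# All numbers are made up for illustration.

fixed_cost = 2_000_000      # up-front cost of creating the invention or work
marginal_cost = 1.00        # cost to produce one more copy once it exists
quantity_sold = 500_000     # copies sold over the product's life

# Without IP: competitors who paid no fixed cost copy the product,
# and competition drives the price down to marginal cost.
price_without_ip = marginal_cost
profit_without_ip = (price_without_ip - marginal_cost) * quantity_sold - fixed_cost
print(profit_without_ip)    # -2,000,000: the creator eats the entire fixed cost

# With IP: the creator is a temporary monopolist and can price above marginal cost.
price_with_ip = 10.00
profit_with_ip = (price_with_ip - marginal_cost) * quantity_sold - fixed_cost
print(profit_with_ip)       # +2,500,000: the fixed cost is recovered

# The argument: anticipating the first outcome, the creator never invests at all.
```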

That last sentence sounds terrible, right? What would we do without research, invention, or art? But then if you stop and think about it for a minute, it becomes clear that this can’t possibly be the outcome of eliminating intellectual property. Most societies throughout the history of human civilization have not had a system of intellectual property, and yet they have all had art, and most of them have had research and invention as well.

If intellectual property is to be defended, it can’t be because we would have none of these things without it—it must be that we would have less, and so much less that it offsets the obvious harms of concentrating so much wealth and power in a handful of individuals.

I had hoped to get into the empirical results of different intellectual property regimes, but due to my illness I’m going to save that for another day.

Instead I’m just going to try to articulate what the burden of proof here really needs to be.

First of all, showing that we spend a lot of money on patents contributes absolutely nothing useful to defending them. Yes, we all know patents are expensive. The question is whether they are worth it. To show that this is not a strawman, here’s an article by IP Watchdog that treats “a new study showing that academic patent licensing contributed more than $1 trillion to the U.S. economy over eighteen years” as some kind of knockdown argument in favor of patents. If you actually showed that this economic activity would not exist without patents, then that would be an argument for patents. But all this study actually does is show that we spend that much on patents, which says nothing about whether this is a good use of resources. It’s like when people try to defend the F-35 boondoggle by saying “it supports thousands of jobs!”; well, yes, but what about the millions of jobs we could be supporting instead if we used that money for something more efficient? (And indeed, the evidence is quite clear that spending on the F-35 destroys more jobs than it creates.) So any serious estimate of the economic benefits of intellectual property must also come with an estimate of the economic costs of intellectual property, or it is just propaganda.
It’s not enough to show some non-negligible (much less “statistically significant”) increase in innovation as a result of intellectual property. The effect size is critical; the increase in innovation needs to be large enough that it justifies having world-spanning monopolies that concentrate the world’s wealth in the hands of a few individuals. Because we already know that intellectual property concentrates wealth: patents and copyrights are monopolies, and monopolies concentrate wealth. It’s not enough to show that there is a benefit; that benefit must be greater than the cost, and there must be no alternative methods that allow us to achieve a greater net benefit.
It’s also important to be clear what we mean by “innovation”; this can be a very difficult thing to measure. But in principle what we really want to know is whether we are supporting important innovation—whether we will get more Mona Lisas and more polio vaccines, not simply whether we will get more Twilight and more Viagra. And one of the key problems with intellectual property as a method of funding innovation is that there is only a vague link between the profits that can be extracted and the benefits of the innovation. (Though to be fair, this is actually a more general problem; it is literally a mathematical theorem that competitive markets only maximize utility if you value rich people more, in inverse proportion to their marginal utility of wealth.)

Innovation is certainly important. Indeed, it is no exaggeration to say that innovation is the foundation of economic development and civilization itself. Defenders of intellectual property often want you to stop the conversation there: “Innovation is important!” Don’t let them. It’s not enough to say that innovation is important; intellectual property must also be the best way of achieving that innovation.

Is it? Well, in a few weeks I’ll get back to what the data actually says on this. There is some evidence supporting intellectual property—but the case is a lot weaker than you have probably been led to believe.

The urban-rural divide runs deep

Feb 5, JDN 2457790

Are urban people worth less than rural people?

That probably sounds like a ridiculous thing to ask; of course not, all people are worth the same (other things equal of course—philanthropists are worth more than serial murderers). But then, if you agree with that, you’re probably an urban person, as I’m sure most of my readers are (and as indeed most people in highly-developed countries are).

A disturbing number of rural people, however, honestly do seem to believe this. They think that our urban lifestyles (whatever they imagine those to be) devalue us as citizens and human beings.

That is the key subtext to understand in the terrifying phenomenon that is Donald Trump. Most of the people who voted for him can’t possibly have thought he was actually trustworthy, and many probably didn’t actually support his policies of bigotry and authoritarianism (though he was very popular among bigots and authoritarians). From speaking with family members and acquaintances who proudly voted for Trump, one thing came through very clearly: This was a gigantic middle finger pointed at cities. They didn’t even really want Trump; they just knew we didn’t, and so they voted for him out of spite as much as anything else. They also have really confused views about free trade, so some of them voted for him because he promised to bring back jobs lost to trade (that weren’t lost to trade, can’t be brought back, and shouldn’t be even if they could). Talk with a Trump voter for a few minutes, and sneers of “latte-sipping liberal” (I don’t even like coffee) and “coastal elite” (I moved here to get educated; I wasn’t born here) are sure to follow.

There has always been some conflict between rural and urban cultures, for as long as there have been urban cultures for rural cultures to be in conflict with. It is found not just in the US, but in most if not all countries around the world. It was relatively calm during the postwar boom in the 20th century, as incomes everywhere (or at least everywhere within highly-developed countries) were improving more or less in lockstep. But the 21st century has brought us much more unequal growth, concentrated on particular groups of people and particular industries. This has brought more resentment. And that divide, above all else, is what brought us Trump; the correlation between population density and voting behavior is enormous.

Of course, “urban” is sometimes a dog-whistle for “Black”; but sometimes I think it actually really means “urban”—and yet there’s still a lot of hatred embedded in it. Indeed, perhaps that’s why the dog-whistle works; a White man from a rural town can sneer at “urban” people and it’s not entirely clear whether he’s being racist or just being anti-urban.

The assumption that rural lifestyles are superior runs so deep in our culture that even in articles by urban people (like this one from the LA Times) supposedly reflecting on how to resolve this divide, there are long paeans to the “hard work” and “sacrifice” and “autonomy” of rural life, and mockery of “urban elites” for their “disproportionate” (by which they can only mean almost proportionate) power over government.

Well, guess what? If you want to live in a rural area, go live in a rural area. Don’t pine for it. Don’t tell me how great farm life is. If you want to live on a farm, go live on a farm. I have nothing against it; we need farmers, after all. I just want you to shut up about how great it is, especially if you’re not going to actually do it. Pining for someone else’s lifestyle when you could easily take on that lifestyle if you really wanted it just shows that you think the grass is greener on the other side.

Because the truth is, farm living isn’t so great for most people. The world’s poorest people are almost all farmers. 70% of people below the UN poverty line live in rural areas, even as more and more of the world’s population moves into cities. If you use a broader poverty measure, as many as 85% of the world’s poor live in rural areas.

The kind of “autonomy” that means defending your home with a shotgun is normally what we would call anarchy—it’s a society that has no governance, no security. (Of course, in the US that’s pure illusion; crime rates in general are low and falling, and lower in rural areas than urban areas. But in some parts of the world, that anarchy is very real.) One of the central goals of global economic development is to get people away from subsistence farming into far more efficient manufacturing and service jobs.

At least in the US, farm life is a lot better than it used to be, now that agricultural technology has improved so that one farmer can now do the work of hundreds. Despite increased population and increased food consumption per person, the number of farmers in the US is now the smallest it has been since before the Civil War. The share of employment devoted to agriculture has fallen from over 80% in 1800 to under 2% today. Even just since the 1960s labor productivity of US farms has more than tripled.

But the fact is that some 80% of Americans have chosen to live in cities—and yes, I can clearly say “chosen”, because cities are more expensive and therefore urban living is a voluntary activity. Most of us who live in the city right now could move to the country if we really wanted to. We choose not to, because we know our lives would be worse if we did.

Indeed, I dare say that a lot of the hatred of city-dwellers has got to be envy. Our (median) incomes are higher and our (mean) lifespans are longer. Fewer of our children are in poverty. Life is better here—we know it, and deep down, they know it too.

We also have better Internet access, unsurprisingly—though rural areas are only a few years behind, and the technology improves so rapidly that twice as large a share of rural homes in the US have Internet access today as urban homes did in 1998.

Now, a rational solution to this problem would be either to improve the lives of people in rural areas or else move everyone to urban areas—and both of those things have been happening, not only in the US but around the world. But in order to do that, you need to be willing to change things. You have to give up the illusion that farm life is some wonderful thing we should all be emulating, rather than the necessary toil that humanity was forced to go through for centuries until civilization could advance beyond it. You have to be willing to replace farmers with robots, so that people who would have been farmers can go do something better with their lives. You need to give up the illusion that there is something noble or honorable about hard labor on a farm—indeed, you need to give up the illusion that there is anything noble or honorable about hard work in general. Work is not a benefit; work is a cost. Work is what we do because we have to—and when we no longer have to do it, we should stop. Wanting to escape toil and suffering doesn’t make you lazy or selfish—it makes you rational.

We could surely be more welcoming—but cities are obviously more welcoming to newcomers than rural areas are. Our housing is too expensive, but that’s in part because so many people want to live here—supply hasn’t been able to keep up with demand.

I may seem to be presenting this issue as one-sided; don’t urban people devalue rural people too? Sometimes. Insults like “hick” and “yokel” and “redneck” do of course exist. But I’ve never heard anyone from a city seriously argue that people who live in rural areas should have votes that systematically count for less than those of people who live in cities—yet the reverse is literally what people are saying when they defend the Electoral College. If you honestly think that the Electoral College deserves to exist in anything like its present form, you must believe that some Americans are worth more than others, and the people who are worth more are almost all in rural areas while the people who are worth less are almost all in urban areas.

No, National Review, the Electoral College doesn’t “save” America from California’s imperial power; it gives imperial power to a handful of swing states. The only reason California would be more important than any other state is that more Americans live here. Indeed, a lot of Republicans in California are disenfranchised, because they know that their votes will never overcome the overwhelming Democratic majority for the state as a whole and the system is winner-takes-all. Indeed, about 30% of California votes Republican (well, not in the last election, because that was Trump—Orange County went Democrat for the first time in decades), so the number of disenfranchised Republicans alone in California is larger than the population of Michigan, which in turn is larger than the population of Wyoming, North Dakota, South Dakota, Montana, Nebraska, West Virginia, and Kansas combined. Indeed, there are more people in California than there are in Canada. So yeah, I’m thinking maybe we should get a lot of votes?

But it’s easy for you to drum up fear over “imperial rule” by California in particular, because we’re so liberal—and so urban, indeed an astonishing 95% urban, the most of any US state (or frankly probably any major regional entity on the planet Earth! To beat that you have to be something like Singapore, which literally just is a single city).

In fact, while insults thrown at urban people get thrown at basically all of us regardless of what we do, most of the insults that are thrown at rural people are mainly thrown at uneducated rural people. (And statistically, while many people in rural areas are educated and many people in urban areas are not, there’s definitely a positive correlation between urbanization and education.) It’s still unfair in many ways, not least because education isn’t entirely a choice, not in a society where tuition at an average private university costs more than the median individual income. Many of the people we mock as being stupid were really just born poor. It may not be their fault, but they can’t believe that the Earth is only 10,000 years old and not have some substantial failings in their education. I still don’t think mockery is the right answer; it’s really kicking them while they’re down. But clearly there is something wrong with our society when 40% of people believe something so obviously ludicrous—and those beliefs are very much concentrated in the same Southern states that have the most rural populations. “They think we’re ignorant just because we believe that God made the Earth 6,000 years ago!” I mean… yes? I’m gonna have to own up to that one, I guess. I do in fact think that people who believe things that were disproven centuries ago are ignorant.

So really this issue is one-sided. We who live in cities are being systematically degraded and disenfranchised, and when we challenge that system we are accused of being selfish or elitist or worse. We are told that our lifestyles are inferior and shameful, and when we speak out about the positive qualities of our lives—our education, our acceptance of diversity, our flexibility in the face of change—we are again accused of elitism and condescension.

We could simply stew in that resentment. But we can do better. We can reach out to people in rural areas, show them not just that our lives are better—as I said, they already know this—but that they can have these lives too. And we can make policy so that this really can happen for people. Envy doesn’t automatically lead to resentment; that only happens when combined with a lack of mobility. The way urban people pine for the countryside is baffling, since we could go there any time; but the way that country people long for the city is perfectly understandable, as our lives really are better but our rent is too high for them to afford. We need to bring that rent down, not just for the people already living in cities, but also for the people who want to but can’t.

And of course we don’t want to move everyone to cities, either. Many people won’t want to live in cities, and we need a certain population of farmers to make our food after all. We can work to improve infrastructure in rural areas—particularly when it comes to hospitals, which are a basic necessity that is increasingly underfunded. We shouldn’t stop using cost-effectiveness calculations, but we need to compare against the right things. If that hospital isn’t worth building, it should be because there’s another, better hospital we could make for the same amount or cheaper—not because we think that this town doesn’t deserve to have a hospital. We can expand our public transit systems over a wider area, and improve their transit speeds so that people can more easily travel to the city from further away.

We should seriously face up to the costs that free trade has imposed upon many rural areas. We can’t give up on free trade—but that doesn’t mean we need to keep our trade policy exactly as it is. We can do more to ensure that multinational corporations don’t have overwhelming bargaining power against workers and small businesses. We can establish a tax system that would redistribute more of the gains from free trade to the people and places most hurt by the transition. Right now, poor people in the US are often the most fiercely opposed to redistribution of wealth, because somehow they perceive that wealth will be redistributed from them when it would in fact be redistributed to them. They are in a scarcity mindset, their whole worldview shaped by the fact that they struggle to get by. They see every change as a threat, every stranger as an enemy.

Somehow we need to fight that mindset, get them to see that there are many positive changes that can be made, many things that we can achieve together that none of us could achieve alone.

Why do so many Americans think that crime is increasing?

Jan 29, JDN 2457783

Since the 1990s, crime in the United States has been decreasing, and yet in every poll since then most Americans report that they believe crime is increasing.

It’s not a small decrease either. The US murder rate is down to the lowest it has been in a century. The absolute number of violent crimes per year in the US is now smaller (by 34 log points) than it was 20 years ago, despite a significant increase in total population (19 log points—and the magic of log points is that, yes, this means the rate has decreased by precisely 53 log points).
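
For readers who haven’t seen them before: a change of x log points multiplies a quantity by e^(x/100), and unlike percentage changes, log points add and subtract cleanly. A quick sanity check of the figures above (only the 34, 19, and two-thirds figures come from the text; everything else is just arithmetic):

```python
import math

# Log points: a change of x log points multiplies a quantity by exp(x / 100).
# Because the rate is crimes / population, log-point changes subtract:
crimes_change_lp = -34      # violent crimes down 34 log points (from the text)
population_change_lp = +19  # population up 19 log points (from the text)

rate_change_lp = crimes_change_lp - population_change_lp
print(rate_change_lp)                      # -53 log points, as stated

# What -53 log points means as an ordinary percentage change in the rate:
print(math.exp(rate_change_lp / 100) - 1)  # about -0.41, i.e. a 41% drop

# The survey-based figure quoted below: a two-thirds decline in victimization
print(100 * math.log(1 - 2/3))             # about -110 log points, close to the "109" cited later
```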

It isn’t geographically uniform, of course; some states have improved much more than others, and a few states (such as New Mexico) have actually gotten worse.

The 1990s were a peak of violent crime, so one might say that we are just regressing to the mean. (Even that would be enough to make it baffling that people think crime is increasing.) But in fact overall crime in the US is now the lowest it has been since the 1970s, and still decreasing.

Indeed, this decrease has been underestimated, because we are now much better about reporting and investigating crimes than we used to be (which may also be part of why they are decreasing, come to think of it). If you compare against surveys of people who say they have been personally victimized, we’re looking at a decline in violent crime rates of two thirds—109 log points.

Just since 2008 violent crime has decreased by 26% (30 log points)—but of course we all know that Obama is “soft on crime” because he thinks cops shouldn’t be allowed to just shoot Black kids for no reason.

And yet, over 60% of Americans believe that overall crime in the US has increased in the last 10 years (though only 38% think it has increased in their own community!). These figures are actually down from 2010, when 66% thought crime was increasing nationally and 49% thought it was increasing in their local area.

The proportion of people who think crime is increasing does seem to decrease as crime rates decrease—but it still remains alarmingly high. If people were half as rational as most economists seem to believe, the proportion of people who think crime is increasing should drop to basically zero whenever crime rates decrease, since that’s a really basic fact about the world that you can just go look up on the Web in a couple of minutes. There’s no deep ambiguity, not even much “rational ignorance” given the low cost of getting correct answers. People just don’t bother to check, or don’t feel they need to.
What’s going on? How can crime fall to half what it was 20 years ago and yet almost two-thirds of people think it’s actually increasing?

Well, one hint is that news coverage of crime doesn’t follow the same pattern as actual crime.

News coverage in general is a terrible source of information, not simply because news organizations can be biased, make glaring mistakes, and sometimes outright lie—but actually for a much more fundamental reason: Even a perfect news channel, qua news channel, would report what is surprising—and what is surprising is, by definition, improbable. (Indeed, there is a formal mathematical concept in probability theory called surprisal that is simply the logarithm of 1 over the probability.) Even assuming that news coverage reports only the truth, the probability of seeing something on the news isn’t proportional to the probability of the event occurring—it’s more likely proportional to the entropy, which is probability times surprisal.
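
A tiny numerical illustration of that last point (the two probabilities here are invented purely for the example): if airtime is proportional to probability times surprisal, then airtime per actual occurrence is proportional to surprisal alone, which is enormous for rare events.

```python
import math

def surprisal(p):
    """Surprisal in bits: log2(1 / p)."""
    return math.log2(1 / p)

# Invented illustrative probabilities for a given day:
p_rare = 1e-6    # e.g. a murder on your own block
p_common = 0.99  # e.g. an uneventful commute

for name, p in [("rare event", p_rare), ("common event", p_common)]:
    print(f"{name}: p = {p}, surprisal = {surprisal(p):.3f} bits")

# Coverage proportional to p * surprisal means coverage *per occurrence* is
# proportional to surprisal, so the rare event is over-represented relative
# to its actual frequency by a huge factor:
print(f"over-representation per occurrence: ~{surprisal(p_rare) / surprisal(p_common):,.0f}x")
# A viewer relying on the availability heuristic reads that extra airtime as frequency.
```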

Now, if humans were optimal information processing engines, that would be just fine; reporting events proportional to their entropy is actually a very efficient mechanism for delivering information (optimal, under certain types of constraints), provided that you can then process the information back into probabilities afterward.

But of course, humans aren’t optimal information processing engines. We don’t recompute the probabilities from the given entropy; instead we use the availability heuristic, by which we simply use the number of times we can think of something happening as our estimate of the probability of that event occurring. If you see more murders on TV news than you used to, you assume that murders must be more common than they used to be. (And when I put it like that, it really doesn’t sound so unreasonable, does it? Intuitively the availability heuristic seems to make sense—which is part of why it’s so insidious.)

Another likely reason for the discrepancy between perception and reality is nostalgia. People almost always have a more positive view of the past than it deserves, particularly when referring to their own childhoods. Indeed, I’m quite certain that a major reason why people think the world was much better when they were kids was that their parents didn’t tell them what was going on. And of course I’m fine with that; you don’t need to burden 4-year-olds with stories of war and poverty and terrorism. I just wish people would realize that they were being protected from the harsh reality of the world, instead of thinking that their little bubble of childhood innocence was a genuinely much safer world than the one we live in today.

Then take that nostalgia and combine it with the availability heuristic and the wall-to-wall TV news coverage of anything bad that happens—and almost nothing good that happens, certainly not if it’s actually important. I’ve seen bizarre fluff pieces about puppies, but never anything about how world hunger is plummeting or air quality is dramatically improved or cars are much safer. That’s the one thing I will say about financial news; at least they report it when unemployment is down and the stock market is up. (Though most Americans, especially most Republicans, still seem really confused on those points as well….) They will attribute it to anything from sunspots to the will of Neptune, but at least they do report good news when it happens. It’s no wonder that people are always convinced that the world is getting more dangerous even as it gets safer and safer.

The real question is what we do about it—how do we get people to understand even these basic facts about the world? I still believe in democracy, but when I see just how painfully ignorant so many people are of such basic facts, I understand why some people don’t. The point of democracy is to represent everyone’s interests—but we also end up representing everyone’s beliefs, and sometimes people’s beliefs just don’t line up with reality. The only way forward I can see is to find a way to make people’s beliefs better align with reality… but even that isn’t so much a strategy as an objective. What do I say to someone who thinks that crime is increasing, beyond showing them the FBI data that clearly indicates otherwise? When someone is willing to override all evidence with what they feel in their heart to be true, what are the rest of us supposed to do?

In defense of slacktivism

Jan 22, JDN 2457776

It’s one of those awkward portmanteaus that people often make to try to express a concept in fewer syllables, while also implicitly saying that the phenomenon is specific enough to deserve its own word: “slacktivism”, made of “slacker” and “activism”, in the same way that “mansplain” is made of “man” and “explain”, “edutainment” was made of “education” and “entertainment”—or indeed “gerrymander” was made of “Elbridge Gerry” and “salamander”. The term seems to be particularly popular on Huffington Post, which has a whole category on slacktivism. There is a particular subcategory of slacktivism that is ironically directed against other slacktivism, which has been dubbed “snarktivism”.

It’s almost always used as a pejorative; very few people self-identify as “slacktivists” (though once I get through this post, you may see why I’m considering it myself). “Slacktivism” is activism that “isn’t real” somehow, activism that “doesn’t count”.

Of course, that raises the question: What “counts” as legitimate activism? Is it only protest marches and sit-ins? Then very few people have ever been or will ever be activists. Surely donations should count, at least? Those have a direct, measurable impact. What about calling your Congressman, or letter-writing campaigns? These have been staples of activism for decades.
If the term “slacktivism” means anything at all, it seems to point to activities surrounding raising awareness, where the goal is not to enact a particular policy or support a particular NGO but to simply get as much public attention to a topic as possible. It seems to be particularly targeted at blogging and social media—and that’s important, for reasons I’ll get to shortly. If you gather a group of people in your community and give a speech about LGBT rights, you’re an activist. If you send out the exact same speech on Facebook, you’re a slacktivist.

One of the arguments against “slacktivism” is that it can be used to funnel resources at the wrong things; this blog post makes a good point that the Kony 2012 campaign doesn’t appear to have actually accomplished anything except profits for the filmmakers behind it. (Then again: A blog post against slacktivism? Are you sure you’re not doing right now the thing you think you are against?) But is this problem unique to slacktivism, or is it a more general phenomenon that people simply aren’t all that informed about how to have the most impact? There are an awful lot of inefficient charities out there, and in fact the most important waste of charitable funds involves people giving to their local churches. Fortunately, this is changing, as people become more secularized; churches used to account for over half of US donations, and now they only account for less than a third. (Naturally, Christian organizations are pulling out their hair over this.) The 60 million Americans who voted for Trump made a horrible mistake and will cause enormous global damage; but they weren’t slacktivists, were they?

Studies do suggest that traditionally “slacktivist” activities like Facebook likes aren’t a very strong predictor of future, larger actions, and that more private modes of support (like donations and calling your Congressman) tend to be stronger predictors. But so what? In order for slacktivism to be a bad thing, these activities would have to be a negative predictor. They would have to substitute for more effective activism, and there’s no evidence that this happens.

In fact, there’s even some evidence that slacktivism has a positive effect (normally I wouldn’t cite Fox News, but I think in this case we should expect a bias in the opposite direction, and you can read the full Georgetown study if you want):

A study from Georgetown University in November entitled “Dynamics of Cause Engagement” looked how Americans learned about and interacted with causes and other social issues, and discovered some surprising findings on Slacktivism.

While the traditional forms of activism like donating money or volunteering far outpaces slacktivism, those who engage in social issues online are twice as likely as their traditional counterparts to volunteer and participate in events. In other words, slacktivists often graduate to full-blown activism.

At worst, most slacktivists are doing nothing for positive social change, and that’s what the vast majority of people have been doing for the entirety of human history. We can bemoan this fact, but that won’t change it. Most people are simply too uninformed to know what’s going on in the world, and too broke and too busy to do anything about it.

Indeed, slacktivism may be the one thing they can do—which is why I think it’s worth defending.

From an economist’s perspective, there’s something quite odd about how people’s objections to slacktivism are almost always formulated. The rational, sensible objection would be to their small benefits—this isn’t accomplishing enough, you should do something more effective. But in fact, almost all the objections to slacktivism I have ever read focus on their small costs—you’re not a “real activist” because you don’t make sacrifices like I do.

Yet it is a basic principle of economic rationality that, all other things equal, lower cost is better. Indeed, this is one of the few principles of economic rationality that I really do think is unassailable; perfect information is unrealistic and total selfishness makes no sense at all. But cost minimization is really very hard to argue with—why pay more, when you can pay less and get the same benefit?

From an economist’s perspective, the most important thing about an activity is its cost-effectiveness, measured either by net benefit (benefit minus cost) or rate of return (benefit divided by cost). But in both cases, a lower cost is always better; and in fact slacktivism has an astonishing rate of return, precisely because its cost is so small.

Suppose that a campaign of 10 million Facebook likes actually does have a 1% chance of changing a policy in a way that would save 10,000 lives, with a life expectancy of 50 years each. Surely this is conservative, right? I’m only giving it a 1% chance of success, on a policy with a relatively small impact (10,000 lives could be a single clause in an EPA regulatory standard), with a large number of slacktivist participants (10 million is more people than the entire population of Switzerland). Yet because clicking “like” and “share” only costs you maybe 10 seconds, we’re talking about an expected cost of (10 million)(10/86,400/365) ≈ 3.2 QALY for an expected benefit of (10,000)(0.01)(50) = 5,000 QALY. That is a rate of return of roughly 160,000%—that’s a hundred and sixty thousand percent.

Let’s compare this to the rate of return on donating to a top charity like UNICEF, Oxfam, the Against Malaria Foundation, or the Schistosomiasis Control Initiative, for which donating about $300 would save the life of one child, adding about 50 QALY. That $300 most likely cost you about 0.01 QALY (assuming an annual income of $30,000), so we’re looking at a return of 500,000%. Now, keep in mind that this is a huge rate of return, far beyond what you can ordinarily achieve, and that donating $300 to UNICEF is probably one of the best things you could possibly be doing with that money—and yet slacktivism’s rate of return is on the same order of magnitude. Maybe slacktivism doesn’t sound so bad after all?
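
Putting the two back-of-the-envelope calculations side by side (every input below is one of the hypothetical figures from the text, not an empirical estimate):

```python
SECONDS_PER_YEAR = 86_400 * 365

# --- Slacktivism (hypothetical figures from the text) ---
participants = 10_000_000
seconds_per_click = 10
p_success = 0.01
lives_saved = 10_000
years_per_life = 50

slack_cost = participants * seconds_per_click / SECONDS_PER_YEAR  # total person-years spent clicking
slack_benefit = lives_saved * p_success * years_per_life          # expected QALY gained
print(f"Slacktivism: cost {slack_cost:.2f} QALY, benefit {slack_benefit:.0f} QALY, "
      f"return {100 * slack_benefit / slack_cost:,.0f}%")          # ~3.2, 5000, ~158,000%

# --- Donation to a top charity (hypothetical figures from the text) ---
donation = 300
annual_income = 30_000
donation_cost = donation / annual_income   # ~0.01 QALY of income-equivalent time
donation_benefit = 50                      # one child's life saved
print(f"Donation: cost {donation_cost:.2f} QALY, benefit {donation_benefit} QALY, "
      f"return {100 * donation_benefit / donation_cost:,.0f}%")     # 0.01, 50, 500,000%

# Net benefit per individual participant, which is what ultimately matters:
print(f"Your share of the slacktivism benefit: {slack_benefit / participants} QALY "
      f"vs. {donation_benefit} QALY for the donation.")
```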

Of course, the net benefit of your participation is higher in the case of donation; you yourself contribute 50 QALY instead of only contributing 0.0005 QALY. Ultimately net benefit is what matters; rate of return is a way of estimating what the net benefit would be when comparing different ways of spending the same amount of time or money. But from the figures I just calculated, it begins to seem like maybe the very best thing you could do with your time is clicking “like” and “share” on Facebook posts that will raise awareness of policies of global importance. Now, you have to include all that extra time spent poring through other Facebook posts, and consider that you may not be qualified to assess the most important issues, and there’s a lot of uncertainty involved in what sort of impact you yourself will have… but it’s almost certainly not the worst thing you could be doing with your time, and frankly running these numbers has made me feel a lot better about all the hours I have actually spent doing this sort of thing. It’s a small benefit, yes—but it’s an even smaller cost.

Indeed, the fact that so many people treat low cost as bad, when it is almost by definition good, and the fact that they also target their ire so heavily at blogging and social media, says to me that what they are really trying to accomplish here has nothing to do with actually helping people in the most efficient way possible.

Rather, it’s two things.

The obvious one is generational—it’s yet another chorus in the unending refrain that is “kids these days”. Facebook is new, therefore it is suspicious. Adults have been complaining about their descendants since time immemorial; some of the oldest written works we have are ancient Babylonians complaining that their kids were lazy and selfish. Either human beings have been getting lazier and more selfish for thousands of years, or, you know, kids are always a bit more lazy and selfish than their parents, or at least seem so from afar.

The one that’s more interesting for an economist is signaling. By complaining that other people aren’t paying enough cost for something, what you’re really doing is complaining that they aren’t signaling like you are. The costly signal has been made too cheap, so now it’s no good as a signal anymore.

“Anyone can click a button!” you say. Yes, and? Isn’t it wonderful that now anyone with a smartphone (and there are more people with access to smartphones than toilets, because #WeLiveInTheFuture) can contribute, at least in some small way, to improving the world? But if anyone can do it, then you can’t signal your status by doing it. If your goal was to make yourself look better, I can see why this would bother you; all these other people doing things that look just as good as what you do! How will you ever distinguish yourself from the riffraff now?

This is also likely what’s going on as people fret that “a college degree’s not worth anything anymore” because so many people are getting them now; well, as a signal, maybe not. But if it’s just a signal, why are we spending so much money on it? Surely we can find a more efficient way to rank people by their intellect. I thought it was supposed to be an education—in which case the meteoric rise in global college enrollments should be cause for celebration. (In reality of course a college degree can serve both roles, and it remains an open question among labor economists as to which effect is stronger and by how much. But the signaling role is almost pure waste from the perspective of social welfare; we should be trying to maximize the proportion of real value added.)

For this reason, I think I’m actually prepared to call myself a slacktivist. I aim for cost-effective awareness-raising; I want to spread the best ideas to the most people for the lowest cost. Why, would you prefer I waste more effort, to signal my own righteousness?

There is no problem of free will, just a lot of really confused people

Jan 15, JDN 2457769

I was hoping for some sort of news item to use as a segue, but none in particular emerged, so I decided to go on with it anyway. I haven’t done any cognitive science posts in a while, and this is one I’ve been meaning to write for a long time—actually it’s the sort of thing that even a remarkable number of cognitive scientists frequently get wrong, perhaps because the structure of human personality makes cognitive science inherently difficult.

Do we have free will?

The question has been asked so many times by so many people it is now a whole topic in philosophy. The Stanford Encyclopedia of Philosophy has an entire article on free will. The Information Philosopher has a gateway page “The Problem of Free Will” linking to a variety of subpages. There are even YouTube videos about “the problem of free will”.

The constant arguing back and forth about this would be problematic enough, but what really grates on me are the many, many people who write “bold” articles and books about how “free will does not exist”. Examples include Sam Harris and Jerry Coyne, and such pieces have been published in everything from Psychology Today to the Chronicle of Higher Education. There’s even a TED talk.

The worst ones are those that follow with “but you should believe in it anyway”. In The Atlantic we have “Free will does not exist. But we’re better off believing in it anyway.” Scientific American offers a similar view, “Scientists say free will probably doesn’t exist, but urge: “Don’t stop believing!””

This is a mind-bogglingly stupid approach. First of all, if you want someone to believe in something, you don’t tell them it doesn’t exist. Second, if something doesn’t exist, that is generally considered a pretty compelling reason not to believe in it. You’d need a really compelling counter-argument, and frankly I’m not even sure the whole idea is logically coherent. How can I believe in something if I know it doesn’t exist? Am I supposed to delude myself somehow?

But the really sad part is that it’s totally unnecessary. There is no problem of free will. There are just an awful lot of really, really confused people. (Fortunately not everyone is confused; there are those, such as Daniel Dennett, who actually understand what’s going on.)

The most important confusion is over what you mean by the phrase “free will”. There are really two core meanings here, and the conflation of them is about 90% of the problem.

1. Moral responsibility: We have “free will” if and only if we are morally responsible for our actions.

2. Noncausality: We have “free will” if and only if our actions are not caused by the laws of nature.

Basically, every debate over “free will” boils down to someone pointing out that noncausality doesn’t exist, and then arguing that this means that moral responsibility doesn’t exist. Then someone comes back and says that moral responsibility does exist, and then infers that this means noncausality must exist. Or someone points out that noncausality doesn’t exist, and then they realize how horrible it would be if moral responsibility didn’t exist, and then tells people they should go on believing in noncausality so that they don’t have to give up moral responsibility.

Let me be absolutely clear here: Noncausality could not possibly exist.

Noncausality isn’t even a coherent concept. Actions, insofar as they are actions, must, necessarily, by definition, be caused by the laws of nature.

I can sort of imagine an event not being caused; perhaps virtual electron-positron pairs can really pop into existence without ever being caused. (Even then I’m not entirely convinced; I think quantum mechanics might actually be deterministic at the most fundamental level.)

But an action isn’t just a particle popping into existence. It requires the coordinated behavior of some 10^26 or more particles, all in a precisely organized, unified way, structured so as to move some other similarly large quantity of particles through space in a precise way so as to change the universe from one state to another state according to some system of objectives. Typically, it involves human muscles acting on human beings or inanimate objects. (Recently, a rather large share of the time, it specifically means human fingers on computer keyboards!) If what you do is an action—not a muscle spasm, not a seizure, not a slip or a trip, but something you did on purpose—then it must be caused. And if something is caused, it must be caused according to the laws of nature, because the laws of nature are the laws underlying all causality in the universe!

And once you realize that, the “problem of free will” should strike you as one of the stupidest “problems” ever proposed. Of course our actions are caused by the laws of nature! Why in the world would you think otherwise?

If you think that noncausality is necessary—or even useful—for free will, what kind of universe do you think you live in? What kind of universe could someone live in, that would fit your idea of what free will is supposed to be?

It’s like I said in that much earlier post about The Basic Fact of Cognitive Science (we are our brains): If you don’t think a mind can be made of matter, what do you think minds are made of? What sort of magical invisible fairy dust would satisfy you? If you can’t even imagine something that would satisfy the constraints you’ve imposed, did it maybe occur to you that your constraints are too strong?

Noncausality isn’t worth fretting over for the same reason that you shouldn’t fret over the fact that pi is irrational and you can’t make a square circle. There is no possible universe in which that isn’t true. So if it bothers you, it’s not that there’s something wrong with the universe—it’s clearly that there’s something wrong with you. Your thinking on the matter must be too confused, too dependent on unquestioned intuitions, if you think that murder can’t be wrong unless 2+2=5.

In philosophical jargon I am called a “compatibilist” because I maintain that free will and determinism are “compatible”. But this is much too weak a term. I much prefer Eliezer Yudkowsky’s “requiredism”, which he explains in one of the greatest blog posts of all time (seriously, read it immediately if you haven’t before—I’m okay with you cutting off my blog post here and reading his instead, because it truly is that brilliant), entitled simply “Thou Art Physics”. This quote sums it up briefly:

My position might perhaps be called “Requiredism.” When agency, choice, control, and moral responsibility are cashed out in a sensible way, they require determinism—at least some patches of determinism within the universe. If you choose, and plan, and act, and bring some future into being, in accordance with your desire, then all this requires a lawful sort of reality; you cannot do it amid utter chaos. There must be order over at least those parts of reality that are being controlled by you. You are within physics, and so you/physics have determined the future. If it were not determined by physics, it could not be determined by you.

Free will requires a certain minimum level of determinism in the universe, because the universe must be orderly enough that actions make sense and there isn’t simply an endless succession of random events. Call me a “requiredist” if you need to call me something. I’d prefer you just realize the whole debate is silly because moral responsibility exists and noncausality couldn’t possibly.

We could of course use different terms besides “free will”. “Moral responsibility” is certainly a good one, but it is missing one key piece, which is the issue of why we can assign moral responsibility to human beings and a few other entities (animals, perhaps robots) and not to the vast majority of entities (trees, rocks, planets, tables), and why we are sometimes willing to say that even a human being does not have moral responsibility (infancy, duress, impairment).

This is why my favored term is actually “rational volition”. The characteristic that human beings have (at least most of us, most of the time), which many animals and possibly some robots also share (if not now, then soon enough), and which justifies our moral responsibility, is precisely our capacity to reason. Things don’t just happen to us the way they do to some 99.999999999% of the universe; we do things. We experience the world through our senses, have goals we want to achieve, and act in ways that are planned to make the world move closer to achieving those goals. We have causes, sure enough; but not just any causes. We have a specific class of causes, which are related to our desires and intentions—we call these causes reasons.

So if you want to say that we don’t have “free will” because that implies some mysterious nonsensical noncausality, sure; that’s fine. But then don’t go telling us that this means we don’t have moral responsibility, or that we should somehow try to delude ourselves into believing otherwise in order to preserve moral responsibility. Just recognize that we do have rational volition.

How do I know we have rational volition? That’s the best part, really: Experiments. While you’re off in la-la land imagining fanciful universes where somehow causes aren’t really causes even though they are, I can point to not only centuries of human experience but decades of direct, controlled experiments in operant conditioning. Human beings and most other animals behave quite differently in behavioral experiments than, say, plants or coffee tables. Indeed, it is precisely because of this radical difference that it seems foolish to even speak of a “behavioral experiment” about coffee tables—because coffee tables don’t behave, they just are. Coffee tables don’t learn. They don’t decide. They don’t plan or consider or hope or seek.

Japanese, as it turns out, may be a uniquely good language for cognitive science, because it has two fundamentally different verbs for “to be” depending on whether an entity is sentient. Humans and animals imasu, while inanimate objects merely arimasu. We have free will because and insofar as we imasu.

Once you get past that most basic confusion of moral responsibility with noncausality, there are a few other confusions you might run into as well. Another one is two senses of “reductionism”, which Dennett refers to as “ordinary” and “greedy”:

1. Ordinary reductionism: All systems in the universe are ultimately made up of components that always and everywhere obey the laws of nature.

2. Greedy reductionism: All systems in the universe just are their components, and have no existence, structure, or meaning aside from those components.

I actually had trouble formulating greedy reductionism as a coherent statement, because it’s such a nonsensical notion. Does anyone really think that a pile of two-by-fours is the same thing as a house? But people do speak as though they think this about human brains, when they say that “love is just dopamine” or “happiness is just serotonin”. But dopamine in a petri dish isn’t love, any more than a pile of two-by-fours is a house; and what I really can’t quite grok is why anyone would think otherwise.

Maybe they’re simply too baffled by the fact that love is made of dopamine (among other things)? They can’t quite visualize how that would work (nor can I, nor, I think, can anyone in the world at this level of scientific knowledge). You can see how the two-by-fours get nailed together and assembled into the house, but you can’t see how dopamine and action potentials would somehow combine into love.

But isn’t that a reason to say that love isn’t the same thing as dopamine, rather than that it is? I can understand why some people are still dualists who think that consciousness is somehow separate from the functioning of the brain. That’s wrong—totally, utterly, ridiculously wrong—but I can at least appreciate the intuition that underlies it. What I can’t quite grasp is why someone would go so far the other way and say that the consciousness they are currently experiencing does not exist.

Another thing that might confuse people is the fact that minds, as far as we know, are platform-independent; that is, your mind could most likely be created out of a variety of different materials, from the gelatinous brain it currently is to some sort of silicon supercomputer, to perhaps something even more exotic. This independence follows from the widely-believed Church-Turing thesis, which essentially says that all computation is computation, regardless of how it is done. This may not actually be right, but I see many reasons to think that it is, and if so, this means that minds aren’t really what they are made of at all—they could be made of lots of things. What makes a mind a mind is how it is structured and above all what it does.

If this is baffling to you, let me show you how platform-independence works on a much simpler concept: Tables. Tables are also in fact platform-independent. You can make a table out of wood, or steel, or plastic, or ice, or bone. You could take out literally every single atom of a table and replace it with a completely different atom of a completely different element—carbon for iron, for example—and still end up with a table. You could conceivably even do so without changing the table’s weight, strength, size, etc., though that would be considerably more difficult.

Does this mean that tables somehow exist “beyond” their constituent matter? In some very basic sense, I suppose so—they are, again, platform-independent. But not in any deep, mysterious sense. Start with a wooden table, take away all the wood, and you no longer have a table. Take apart the table and you have a bunch of wood, which you could use to build something else. There is no “essence” comprising the table. There is no “table soul” that would persist when the table is deconstructed.
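If the analogy helps, here it is restated in programming terms, where platform-independence is an everyday fact of life. This is a purely illustrative Python sketch of my own; the WoodenTable and SteelTable classes and their numbers are invented, and the point is only that “table-ness” lives in the shared structure and behavior, not in the material that implements it.

```python
# Purely illustrative: "table-ness" is defined by structure and behavior,
# not by the material (implementation) underneath. All numbers are made up.

class WoodenTable:
    material = "wood"

    def supports(self, weight_kg: float) -> bool:
        # The internal details differ between implementations...
        return weight_kg <= 120


class SteelTable:
    material = "steel"

    def supports(self, weight_kg: float) -> bool:
        # ...but the role the object plays is exactly the same.
        return weight_kg <= 500


def set_the_table(table) -> str:
    # This function cares only about what the thing does, not what it is made of.
    if table.supports(5.0):
        return f"Dinner is served on a {table.material} table."
    return "Find a sturdier table."


print(set_the_table(WoodenTable()))  # Dinner is served on a wood table.
print(set_the_table(SteelTable()))   # Dinner is served on a steel table.
```

And just as with the wooden table, taking the implementation apart leaves no residual “table object” behind; the table simply is whatever currently realizes that structure and behavior.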

And—now for the hard part—so it is with minds. Your mind is your brain. The constituent atoms of your brain are gradually being replaced, day by day, but your mind stays the same, because it exists in the arrangement and behavior of those atoms, not in the atoms themselves. Yet there is nothing “extra” or “beyond” that makes up your mind. You have no “soul” that lies beyond your brain. If your brain is destroyed, your mind will also be destroyed. If your brain could be copied, your mind would also be copied. And one day it may even be possible to construct your mind in some other medium—some complex computer made of silicon and tantalum, most likely—and it would still be a mind, and in all its thoughts, feelings, and behaviors it would be your mind, even if not numerically identical to you.

Thus, when we engage in rational volition—when we use our “free will” if you like that term—there is no special “extra” process beyond what’s going on in our brains, but there doesn’t have to be. Those particular configurations of action potentials and neurotransmitters are our thoughts, desires, plans, intentions, hopes, fears, goals, beliefs. These mental concepts are not in addition to the physical material; they are made of that physical material. Your soul is made of gelatin.

Again, this is not some deep mystery. There is no “paradox” here. We don’t actually know the details of how it works, but that makes this no different from a Homo erectus who doesn’t know how fire works. Maybe he thinks there needs to be some extra “fire soul” that makes it burn, but we know better; and in far fewer centuries than separate that Homo erectus from us, our descendants will know precisely how the brain creates the mind.

Until then, simply remember that any mystery here lies in us—in our ignorance—and not in the universe. And take heart that the kind of “free will” that matters—moral responsibility—has absolutely no need for the kind of “free will” that doesn’t exist—noncausality. They’re totally different things.

The real crisis in education is access, not debt

Jan 8, JDN 2457762

A few weeks ago I tried to provide assurances that the “student debt crisis” is really not much of a crisis; there is a lot of debt, but it is being spent on a very good investment both for individuals and for society. Student debt is not that large in the scheme of things, and it more than pays for itself in the long run.

But this does not mean we are not in the midst of an education crisis. It’s simply not about debt.

The crisis I’m worried about involves access.

As you may recall, there are a substantial number of people with very small amounts of student debt, and they tend to be the most likely to default. The highest default rates are among the group of people with student debt greater than $0 but less than $5,000.

So how is it that there are people with only $5,000 in student debt anyway? You can’t buy much college for $5,000 these days, as tuition prices have risen at an enormous rate: From 1983 to 2013, in inflation-adjusted dollars, average annual tuition rose from $7,286 at public institutions and $17,333 at private institutions to $15,640 at public institutions and $35,987 at private institutions—more than doubling in each case.
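For the record, the arithmetic behind “more than doubling” is just the ratio of those two pairs of figures; a trivial sketch:

```python
# Inflation-adjusted average annual tuition, 1983 vs. 2013 (figures quoted above).
public_1983, public_2013 = 7_286, 15_640
private_1983, private_2013 = 17_333, 35_987

print(f"Public:  {public_2013 / public_1983:.2f}x increase")   # about 2.15x
print(f"Private: {private_2013 / private_1983:.2f}x increase") # about 2.08x
```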

Enrollments are much higher, but that by itself should not raise tuition per student. So where is all the extra money going? Part of the increase simply reflects public funding that has failed to keep up with higher enrollments, so students are covering a larger share of the cost; but a lot of the extra money just seems to be going to higher pay for administrators and athletic coaches. This is definitely a problem; students should not be forced to subsidize the millions of dollars most universities lose on funding athletics—the NCAA, which is if anything surely biased in favor of athletics, found that the total net loss due to athletics spending at FBS universities was $17 million per year. Only a handful of schools actually turn a profit on athletics, all of them Division I.

So it might be fair to speak of an “irresponsible college administration crisis” (administrators who heap wealth upon themselves and their beloved athletic programs while students struggle to pay their bills), or even a “college tuition crisis” in which tuition keeps rising far beyond what is sustainable. But that’s not the same thing as a “student debt crisis”—just as the mortgage crisis we had in 2008 is distinct from the slow-burning housing price crisis we’ve been in since the 1980s. Making restrictions on mortgages tighter might prevent banks from being as predatory as they have been lately, but it won’t suddenly allow people to better afford houses.

And likewise, I’m much more worried about students who don’t go to college because they are afraid of this so-called “debt crisis”; they’re going to end up much worse off. As Eduardo Porter put it in the New York Times:

And yet Mr. Beltrán says he probably wouldn’t have gone to college full time if he hadn’t received a Pell grant and financial aid from New York State to defray the costs. He has also heard too many stories about people struggling under an unbearable burden of student loans to even consider going into debt. “Honestly, I don’t think I would have gone,” he said. “I couldn’t have done four years.”

And that would have been the wrong decision.

His reasoning is not unusual. The rising cost of college looms like an insurmountable obstacle for many low-income Americans hoping to get a higher education. The notion of a college education becoming a financial albatross around the neck of the nation’s youth is a growing meme across the culture. Some education experts now advise high school graduates that a college education may not be such a good investment after all. “Sticker price matters a lot,” said Lawrence Katz, a professor of Harvard University. “It is a deterrent.”

[…]

And the most perplexing part of this accounting is that regardless of cost, getting a degree is the best financial decision a young American can make.

According to the O.E.C.D.’s report, a college degree is worth $365,000 for the average American man after subtracting all its direct and indirect costs over a lifetime. For women — who still tend to earn less than men — it’s worth $185,000.

College graduates have higher employment rates and make more money. According to the O.E.C.D., a typical graduate from a four-year college earns 84 percent more than a high school graduate. A graduate from a community college makes 16 percent more.

A college education is more profitable in the United States than in pretty much every other advanced nation. Only Irish women get more for the investment: $185,960 net.
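If you are wondering how anyone arrives at a single lifetime figure like $365,000, the general shape of the calculation is a discounted sum: the extra earnings a degree brings in each year, minus the direct costs of attending and the earnings you forgo while enrolled. Here is a rough back-of-the-envelope sketch of that structure in Python. Every input below is an illustrative assumption of mine, not the O.E.C.D.’s actual data (aside from the 84 percent premium quoted above), so treat the output as an order-of-magnitude check, nothing more.

```python
# Back-of-the-envelope sketch of the lifetime value of a bachelor's degree.
# All inputs below are illustrative assumptions, NOT the O.E.C.D.'s figures.

hs_wage = 35_000          # assumed annual earnings with only a high school diploma
college_premium = 0.84    # earnings premium for four-year graduates (quoted above)
direct_cost = 15_000      # assumed annual tuition and fees
years_in_college = 4
working_years = 40        # assumed career length after graduation
discount_rate = 0.03      # assumed real discount rate

def pv(amount: float, year: int) -> float:
    """Present value of a payment received `year` years from now."""
    return amount / (1 + discount_rate) ** year

# Costs: tuition plus the earnings forgone while enrolled.
costs = sum(pv(direct_cost + hs_wage, t) for t in range(years_in_college))

# Benefits: the extra earnings a graduate makes in each working year.
extra = hs_wage * college_premium
benefits = sum(pv(extra, years_in_college + t) for t in range(working_years))

print(f"PV of costs:        ${costs:,.0f}")
print(f"PV of benefits:     ${benefits:,.0f}")
print(f"Net lifetime value: ${benefits - costs:,.0f}")
```

With these made-up inputs the net value lands in the same broad range as the O.E.C.D.’s estimate, which is reassuring but partly coincidental; the real calculation controls for far more (taxes, unemployment rates, age-earnings profiles, and so on).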

So what does it mean that these students have $5,000 or less in student debt? That amount couldn’t even pay for a single year at most universities; how did that happen?

Well, they almost certainly went to community college; only a community college could provide you with a nontrivial amount of education for less than $5,000. But community colleges vary tremendously in their quality, and some have truly terrible completion rates. While most students who start at a four-year school do eventually get a bachelor’s degree (57% at public schools, 78% at private schools), only 17% of students who start at community college do. And once students drop out, they very rarely return to complete a degree.

Indeed, the only way to end up with that little student debt is to drop out quickly. Most students who drop out do so for reasons that really aren’t all that surprising: chiefly, they can’t afford to pay their bills. “Unable to balance school and work” is the number-one reported reason why students drop out of college.

In the American system, student loans are only designed to pay the direct expenses of education; they often don’t cover the real costs of housing, food, transportation and healthcare, and even when they do, they basically never cover the opportunity cost of education—the money you could be making if you were working full-time instead of going to college. For many poor students, simply breaking even on their own expenses isn’t good enough; they have families that need to be taken care of, and that means working full-time. Many of them even need to provide for their parents or grandparents who may be poor or disabled. Yet in the US system it is tacitly assumed that your parents will help you—so when you need to help them, what are you supposed to do? You give up on college and you get a job.

The most successful reforms for solving this problem have been comprehensive; they involved working to support students directly and intensively in all aspects of their lives, not just the direct financial costs of school itself.

Another option would be to do something more like what they do in Sweden, where there is also a lot of student debt, but for a very different reason: the direct cost of college is paid automatically by the government. Yet essentially all Swedish students have student debt, and total student debt in Sweden is much larger than in other European countries and comparable to that of the United States. Why? Because Sweden understands that you should also provide for the opportunity cost. In Sweden, students live fully self-sufficiently on student loans, just as if they were working full-time; they are not expected to be supported by their parents.

The problem with American student loans, then, is not that they are too large—but that they are too small. They don’t provide for what students actually need, and thus don’t allow them to make the large investment in their education that would have paid off in the long run. Panic over student loans being too large could make the problem worse, if it causes us to reduce the amount of loanable funds available for students.

The lack of support for poor students isn’t the only problem. There are also huge barriers to education in the US based upon race. While Asian students do as well as (if not better than) White students, Black and Latino students have substantially lower levels of educational attainment. Affirmative action programs can reduce these disparities, but they are unpopular and widely regarded as unfair, and not entirely without reason.

A better option—indeed one that should be a no-brainer in my opinion—is not to create counter-biases in favor of Black and Latino students (which is what affirmative action is), but to eliminate biases in favor of White students that we know exist. Chief among these are so-called “legacy admissions”, in which elite universities attract wealthy alumni donors by granting their children admission and funding regardless of whether they even remotely deserve it or would contribute anything academically to the university.

These “legacy admissions” are frankly un-American. They go against everything our nation supposedly stands for; in fact, they reek of feudalism. And unsurprisingly, they bias heavily in favor of White students—indeed, over 90 percent of legacy admits are White and Protestant. Athletic admissions are also contrary to the stated mission of the university, though their racial biases are more complicated (Black students are highly overrepresented in football and basketball admits, for example) and it is at least not inherently un-American to select students based upon their athletic talent as opposed to their academic talent.

But this by itself would not be enough; the gaps are clearly too large to close that way. Getting into college is only the start, and graduation rates are much worse for Black students than for White students. Moreover, the education gap begins well before college—high school dropout rates are much higher among Black and Latino students as well.

In fact, even closing the education gap by itself would not be enough; racial biases permeate our whole society. Black individuals with college degrees are substantially more likely to be unemployed and have substantially lower wages on average than White individuals with college degrees—indeed, a bachelor’s degree gets a Black man a lower mean wage than a White man would get with only an associate’s degree.

Fortunately, the barriers against women in college education have largely been conquered. In fact, there are now more women in US undergraduate institutions than men. This is not to say that there are not barriers against women in society at large; women still make about 75% as much income as men on average, and even once you adjust for factors such as education and career choice they still only make about 95% as much. Moreover, these factors we’re controlling for are endogenous. Women don’t choose their careers in a vacuum, they choose them based upon a variety of social and cultural pressures. The fact that 93% of auto mechanics are men and 79% of clerical workers are women might reflect innate differences in preferences—but it could just as well reflect a variety of cultural biases or even outright discrimination. Quite likely, it’s some combination of these. So it is not obvious to me that the “adjusted” wage gap is actually a more accurate reflection of the treatment of women in our society than the “unadjusted” wage gap; the true level of bias is most likely somewhere in between the two figures.
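The difference between the “adjusted” and “unadjusted” figures, and why the adjusted one is not automatically the right one, is easy to see in a toy simulation. The numbers below are entirely made up for illustration: I assume a world where being female both lowers pay directly and (through the kind of social pressure just described) steers women into a lower-paid occupation. Controlling for occupation then hides part of a gap that is still, causally, a gender gap.

```python
import numpy as np

# Toy simulation with invented numbers; nothing here is real wage data.
rng = np.random.default_rng(0)
n = 100_000

female = rng.random(n) < 0.5
# Assume social pressure makes women much more likely to end up in a lower-paid occupation.
low_paid_job = np.where(female, rng.random(n) < 0.7, rng.random(n) < 0.3)
# Wages: a large occupation effect plus a smaller direct penalty for being female.
wage = 50_000 - 15_000 * low_paid_job - 2_500 * female + rng.normal(0, 5_000, n)

raw_ratio = wage[female].mean() / wage[~female].mean()

# "Adjusted" ratio: compare women to men within the same occupation, then average.
within = []
for occ in (True, False):
    mask = low_paid_job == occ
    within.append(wage[female & mask].mean() / wage[~female & mask].mean())
adjusted_ratio = sum(within) / len(within)

print(f"Unadjusted: women earn {raw_ratio:.0%} of what men earn")
print(f"Adjusted for occupation: women earn {adjusted_ratio:.0%} of what men earn")
```

In this toy world the unadjusted figure is actually the better measure of the total effect of gender, because the occupational sorting was itself caused by gender; in reality the truth presumably lies somewhere in between, which is exactly the point above.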

Gender wage gaps vary substantially across age groups and between even quite similar countries: Middle-aged women in Germany make 28% less than middle-aged men, while in France that gap is only 19%. Young women in Latvia make 14% less than young men, but in Romania they make 1.1% more. This variation clearly shows that this is not purely the effect of some innate genetic difference in skills or preferences; it must be at least in large part the product of cultural pressures or policy choices.

Even within academia, women are less likely to be hired full-time instead of part-time, awarded tenure, or promoted to administrative positions. Moreover, this must be active discrimination in some form, because gaps in hiring and wage offers between men and women persist in randomized controlled experiments. You can literally present the exact same resume and get a different result depending on whether you attached a male name or a female name.

But at least when it comes to the particular question of getting bachelor’s degrees, we have achieved something approaching equality across gender, and that is no minor accomplishment. Most countries in the world still have more men than women graduating from college, and in some countries the difference is terrifyingly large. I found from World Bank data that in the Democratic Republic of Congo, only 3% of men go to college—and less than 1% of women do. Even in Germany, 29% of men graduate from college but only 19% of women do. Getting both of these figures over 30% and actually having women higher than men is a substantial achievement for which the United States should be proud.

Yet it still remains the case that Americans who are poor, Black, Native American, or Latino are substantially less likely to ever make it through college. Panic about student debt might well be making this problem worse: someone whose family makes $15,000 per year is bound to hear $50,000 in debt as an overwhelming burden, even as you try to explain that, set against the OECD’s $365,000 lifetime payoff, it will eventually pay for itself more than seven times over.

We need to instead be talking about the barriers that are keeping people from attending college, and pressuring them to drop out once they do. Debt is not the problem. Even tuition is not really the problem. Access is the problem. College is an astonishingly good investment—but most people never get the chance to make it. That is what we need to change.