Green New Deal Part 3: Guaranteeing education and healthcare is easy—why aren’t we doing it?

Apr 21 JDN 2458595

Last week was one of the “hard parts” of the Green New Deal. Today it’s back to one of the “easy parts”: Guaranteed education and healthcare.

“Providing all people of the United States with – (i) high-quality health care; […]

“Providing resources, training, and high-quality education, including higher education, to all people of the United States.”

Many Americans seem to think that providing universal healthcare would be prohibitively expensive. In fact, it would have literally negative net cost.
The US currently has the most bloated, expensive, inefficient healthcare system in the entire world. We spend almost $10,000 per person per year on healthcare, and get outcomes no better than France or the UK where they spend less than $5,000.
In fact, our public healthcare expenditures are currently higher than almost every other country. Our private expenditures are therefore pure waste; all they are doing is providing returns for the shareholders of corporations. If we were to simply copy the UK National Health Service and spend money in exactly the same way as they do, we would spend the same amount in public funds and almost nothing in private funds—and the UK has a higher mean lifespan than the US.
This is absolutely a no-brainer. Burn the whole system of private insurance down. Copy a healthcare system that actually works, like they use in every other First World country.
It wouldn’t even be that complicated to implement: We already have a single-payer healthcare system in the US; it’s called Medicare. Currently only old people get it; but old people use the most healthcare anyway. Hence, Medicare for All: Just lower the eligibility age for Medicare to 18 (if not zero). In the short run there would be additional costs for the transition, but in the long run we would save mind-boggling amounts of money, all while improving healthcare outcomes and extending our lifespans. Current estimates say that the net savings of Medicare for All would be about $5 trillion over the next 10 years. We can afford this. Indeed, the question is, as it was for infrastructure: How can we afford not to do this?
Isn’t this socialism? Yeah, I suppose it is. But healthcare is one of the few things that socialist countries consistently do extremely well. Cuba is a socialist country—a real socialist country, not a social democratic welfare state like Norway but a genuinely authoritarian centrally-planned economy. Cuba’s per-capita GDP PPP is a third of ours. Yet their life expectancy is actually higher than ours, because their healthcare system is just that good. Their per-capita healthcare spending is one-fourth of ours, and their health outcomes are better. So yeah, let’s be socialist in our healthcare. Socialists seem really good at healthcare.
And this makes sense, if you think about it. Doctors can do their jobs a lot better when they’re focused on just treating everyone who needs help, rather than arguing with insurance companies over what should and shouldn’t be covered. Preventative medicine is extremely cost-effective, yet it’s usually the first thing that people skimp on when trying to save money on health insurance. A variety of public health measures (such as vaccination and air quality regulation) are extremely cost-effective, but they are public goods that the private sector would not pay for by itself.
It’s not as if healthcare was ever really a competitive market anyway: When you get sick or injured, do you shop around for the best or cheapest hospital? How would you even go about that, when they don’t even post most of their prices and what prices they post are often wildly different than what you’ll actually pay?
The only serious argument I’ve heard against single-payer healthcare is a moral one: “Why should I have to pay for other people’s healthcare?” Well, I guess, because… you’re a human being? You should care about other human beings, and not want them to suffer and die from easily treatable diseases?
I don’t know how to explain to you that you should care about other people.

Single-payer healthcare is not only affordable: It would be cheaper and better than what we are currently doing. (In fact, almost anything would be cheaper and better than what we are currently doing—Obamacare was an improvement over the previous mess, but it’s still a mess.)
What about public education? Well, we already have that up to the high school level, and it works quite well.
Contrary to popular belief, the average public high school has better outcomes in terms of test scores and college placements than the average private high school. There are some elite private schools that do better, but they are extraordinarily expensive and they self-select only the best students. Public schools have to take all students, and they have a limited budget; but they have high quality standards and they require their teachers to be certified.
The flaws in our public school system are largely from it being not public enough, which is to say that schools are funded by their local property taxes instead of having their costs equally shared across whole states. This gives them the same basic problem as private schools: Rich kids get better schools.
If we removed that inequality, our educational outcomes would probably be among the best in the world—indeed, in our most well-funded school districts, they are. The state of Massachusetts which actually funds their public schools equally and well, gets international test scores just as good as the supposedly “superior” educational systems of Asian countries. In fact, this is probably even unfair to Massachusetts, as we know that China specifically selects the regions that have the best students to be the ones to take these international tests. Massachusetts is the best the US has to offer, but Shanghai is also the best China has to offer, so it’s only fair we compare apples to apples.
Public education has benefits for our whole society. We want to have a population of citizens, workers, and consumers who are well-educated. There are enormous benefits of primary and secondary education in terms of reducing poverty, improving public health, and increased economic growth.
So there’s my impassioned argument for why we should continue to support free, universal public education up to high school.
When it comes to college, I can’t be quite so enthusiastic. While there are societal benefits of college education, most of the benefits of college accrue to the individuals who go to college themselves.
The median weekly income of someone with a high school diploma is about $730; with a bachelor’s degree this rises to $1200; and with a doctoral or professional degree it gets over $1800. Higher education also greatly reduces your risk of being unemployed; while about 4% of the general population is unemployed, only 1.5% of people with doctorates or professional degrees are. Add that up over all the weeks of your life, and it’s a lot of money.
The net present value of a college education has been estimated at approximately $1 million. This result is quite sensitive to the choice of discount rate; at a higher discount rate you can get the net present value as “low” as $250,000.
With this in mind, the fact that the median student loan debt for a college graduate is about $30,000 doesn’t sound so terrible, does it? You’re taking out a loan for $30,000 to get something that will earn you between $250,000 and $1 million over the course of your life.
There is some evidence that having student loans delays homeownership; but this is a problem with our mortgage system, not our education system. It’s mainly the inability to finance a down payment that prevents people from buying homes. We should implement a system of block grants for first-time homeowners that gives them a chunk of money to make a down payment, perhaps $50,000. This would cost about as much as the mortgage interest tax deduction which mainly benefits the upper-middle class.
Higher education does have societal benefits as well. Perhaps the starkest I’ve noticed is how categorically higher education decided people’s votes on Donald Trump: Counties with high rates of college education almost all voted for Clinton, and counties with low rates of college education almost all voted for Trump. This was true even controlling for income and a lot of other demographic factors. Only authoritarianism, sexism and racism were better predictors of voting for Trump—and those could very well be mediating variables, if education reduces such attitudes.
If indeed it’s true that higher education makes people less sexist, less racist, less authoritarian, and overall better citizens, then it would be worth every penny to provide universal free college.
But it’s worth noting that even countries like Germany and Sweden which ostensibly do that don’t really do that: While college tuition is free for Swedish citizens and Germany provides free college for all students of any nationality, nevertheless the proportion of people in Sweden and Germany with bachelor’s degrees is actually lower than that of the United States. In Sweden the gap largely disappears if you restrict to younger cohorts—but in Germany it’s still there.
Indeed, from where I’m sitting, “universal free college” looks an awful lot like “the lower-middle class pays for the upper-middle class to go to college”. Social class is still a strong predictor of education level in Sweden. Among OECD countries, education seems to be the best at promoting upward mobility in Australia, and average college tuition in Australia is actually higher than average college tuition in the US (yes, even adjusting for currency exchange: Australian dollars are worth only slightly less than US dollars).
What does Australia do? They have a really good student loan system. You have to reach an annual income of about $40,000 per year before you need to make payments at all, and the loans are subsidized to be interest-free. Once you do owe payments, the debt is repaid at a rate proportional to your income—so effectively it’s not a debt at all but an equity stake.
In the US, students have been taking the desperate (and very cyberpunk) route of selling literal equity stakes in their education to Wall Street banks; this is a terrible idea for a hundred reasons. But having the government have something like an equity stake in students makes a lot of sense.
Because of the subsidies and generous repayment plans, the Australian government loses money on their student loan system, but so what? In order to implement universal free college, they would have spent an awful lot more than they are losing now. This way, the losses are specifically on students who got a lot of education but never managed to raise their income high enough—which means the government is actually incentivized to improve the quality of education or job-matching.
The cost of universal free college is considerable: That $1.3 trillion currently owed as student loans would be additional government debt or tax liability instead. Is this utterly unaffordable? No. But it’s not trivial either. We’re talking about roughly $60 billion per year in additional government spending, a bit less than what we currently spend on food stamps. An expenditure like that should have a large public benefit (as food stamps absolutely, definitely do!); I’m not convinced that free college would have such a benefit.
It would benefit me personally enormously: I currently owe over $100,000 in debt (about half from my undergrad and half from my first master’s). But I’m fairly privileged. Once I finally make it through this PhD, I can expect to make something like $100,000 per year until I retire. I’m not sure that benefiting people like me should be a major goal of public policy.
That said, I don’t think universal free college is a terrible policy. Done well, it could be a good thing. But it isn’t the no-brainer that single-payer healthcare is. We can still make sure that students are not overburdened by debt without making college tuition actually free.

Green New Deal Part 1: Why aren’t we building more infrastructure?

Apr 7 JDN 2458581

For the next few weeks, I’ll be doing a linked series of posts on the Green New Deal. Some parts of it are obvious and we should have been doing them for decades already; let’s call these “easy parts”. Some parts of it will be difficult, but are definitely worth doing; let’s call these “hard parts”. And some parts of it are quite radical and may ultimately not be feasible—but may still be worth trying; let’s call these “very hard parts”.

Today I’m going to talk about some of the easy parts.

“Repairing and upgrading the infrastructure in the United States, including [. . .] by eliminating pollution and greenhouse gas emissions as much as technologically feasible.”

“Building or upgrading to energy-efficient, distributed, and ‘smart’ power grids, and working to ensure affordable access to electricity.”

“Upgrading all existing buildings in the United States and building new buildings to achieve maximal energy efficiency, water efficiency, safety, affordability, comfort, and durability, including through electrification.”

Every one of these proposals is basically a no-brainer. We should have been spending something like $100 billion dollars a year for the last 30 years doing this, and if we had, we’d have infrastructure that would be the envy of the world.
Instead, the ASCE gives our infrastructure a D+: passing, but just barely. We are still in the top 10 in the World Bank’s infrastructure ratings, but we have been slowly slipping downward in the rankings.

 

Where did I get my $100 billion a year figure from? Well, we have about a $15 billion annual shortfall in highway maintenance, $13 billion in waterway maintenance, and $25 billion in dam repairs. That’s $53 billion. But that’s just to keep what we already have. In order to build more infrastructure, or upgrade it to be better, we’re going to need to spend considerably more. Double it and make it a nice round number, and you get $100 billion.

 

Of course, $100 billion a year is not a small amount of money.
How would we pay for such a thing?

 

That’s the thing: We wouldn’t need to.

 

Infrastructure investment doesn’t have to be “paid for” in the usual sense. We don’t need to raise taxes. We don’t need to cut spending. We can just add infrastructure spending onto other spending, raising the deficit directly. We can borrow money to fund the projects, and then by the time those bonds mature we will have made enough additional tax revenue from the increased productivity (and the Keynes multiplier) that we will have no problem paying back the debt.

 

Funding investment is what debt is supposed to be for. Particularly when interest rates are this low (currently about 3% nominal, which means about 1% adjusted for inflation), there is very little downside to taking out more debt if you’re going to plow that money into productive investments.

 

Of course debt can be used for anything money can, and using debt for all your spending is often not a good idea (but it can be, if your income is inconsistent or you have good reasons to think it will increase in the future). But I’m not suggesting the government should use debt to fund Medicare and Social Security payments; I’m merely suggesting that they should use debt to fund infrastructure investment. Medicare and Social Security are, at their core, social insurance programs; they spread wealth around, which has a lot of important benefits; but they don’t meaningfully create new wealth, so you need to be careful about how you pay for them. Infrastructure investment creates new wealth. The extra value is basically pulled from thin air; you’d be a fool not to take it.

 

This is also why I just can’t get all that upset about student loans (even though I personally would personally stand to gain a small house if student debt were to suddenly evaporate). Education is the most productive investment we have, and most of the benefits of education do actually accrue to the individual who is being educated. It therefore stands to reason that students should pay for their own education, and since most of us couldn’t afford to pay in cash, it stands to reason that we should be offered loans.

 

There are some minor changes I would make to the student loan system, such as lower interest rates, higher limits to subsidized loans, stricter regulations on private student loans, and a simpler forgiveness process that doesn’t result in ridiculous tax liability. But I really don’t see the need to go to a fully taxpayer-funded higher education system. On the other hand, it wouldn’t necessarily be bad to go to a fully taxpayer-funded system; it seems to work quite well in Germany, France, and most of Scandinavia. I just don’t see this as a top priority.

 

It feels awful having $100,000 in debt, but it’s really not that bad when you realize that a college education will increase your lifetime earnings by an average of $1 million (and more like $2 million in my case because I’m going for a PhD, PhDs are more valuable than bachelor’s degrees, and even among PhDs, economists are particularly well-paid). You are being offered the chance to apy $100,000 now to get $1 million later. You should definitely take that deal.

 

And yet, we still aren’t increasing our infrastructure investment. Trump said he would, and it seemed like one of his few actual good ideas (remember the Stopped Clock Principle: reversed stupidity is not intelligence); but so far, no serious infrastructure plan has materialized.

 

Despite extremely strong bipartisan support for increased infrastructure investment, we don’t seem to be able to actually get the job done.
I think I know why.

 

The first reason is that “infrastructure” is a vague concept, almost a feel-good Applause Light like “freedom” or “justice”. Nobody is ever going to say they are against freedom or justice. Instead they’ll disagree about what constitutes freedom or justice.

 

And likewise, while almost everyone will agree that infrastructure as a concept is a good thing, there can be large substantive disagreements over just what kind of infrastructure to build. We want better transportation: Does that mean more roads, or train lines instead? We want cheaper electricity: When we build new power plants, should they use natural gas, solar, or nuclear power? We want to revitalize inner cities: Does that mean public housing, community projects, or subsidies for developers? Nobody wants an inefficient electricity grid, but just how much are we willing to invest in making it more efficient, and how? Once the infrastructure is built, should it be publicly owned and tax-funded, or privatized and run for profit?
This reason is not going to go away. We simply have to face up to it, and find a way to argue substantively for the specific kinds of infrastructure we want. It should be trains, not roads. It should be solar, wind, and nuclear, not natural gas, and certainly not coal or oil. It should be public housing and community projects, not subsidies for developers. Most of the infrastructure should be publicly owned, and what isn’t should be strictly regulated.

 

Yet there is another reason, which I think we might be able to eliminate. Most people seem to think that we need to pay for infrastructure the way we would need to pay for expanded social programs or military spending. They keep asking “How will this be paid for?” (And despite a lot of conservatives frothing about it—I will not give them ad revenue by linking—Alexandria Ocasio-Cortez was not wrong when she said “The same way we pay for everything else.” We tax and spend; that’s what governments do. It’s always a question of what taxes and what spending.)

 

But we really don’t need to pay for infrastructure at all. Infrastructure will pay for itself; we simply need to finance it up front. And when we’re paying real interest rates of 1%, that’s not a difficult thing to do. If interest rates start to rise, we may want to pull back on that; but that’s not something that will happen overnight. We would see it coming, and have a variety of fiscal and monetary tools available to deal with it. The fear of possibly paying a bit more interest 30 years from now is a really stupid reason not to fix bridges that are crumbling today.

 

So when we talk about the Green New Deal (or at least the “easy parts”), let’s throw away this nonsense about “paying for it”. Almost all of these programs are long-term investments; they will pay for themselves. There are still substantive choices to be made about what exactly to build and where and how; but the US is an extraordinarily rich country with virtually unlimited borrowing power.

 

We can afford to do this.

 

Indeed, I think the question we should really be asking is:
How can we afford not to do this?

Is grade inflation a real problem?

Mar 4 JDN 2458182

You can’t spend much time teaching at the university level and not hear someone complain about “grade inflation”. Almost every professor seems to believe in it, and yet they must all be participating in it, if it’s really such a widespread problem.

This could be explained as a collective action problem, a Tragedy of the Commons: If the incentives are always to have the students with the highest grades—perhaps because of administrative pressure, or in order to get better reviews from students—then even if all professors would prefer a harsher grading scheme, no individual professor can afford to deviate from the prevailing norms.

But in fact I think there is a much simpler explanation: Grade inflation doesn’t exist.

In economic growth theory, economists make a sharp distinction between inflation—increase in prices without change in underlying fundamentals—and growth—increase in the real value of output. I contend that there is no such thing as grade inflation—what we are in fact observing is grade growth.
Am I saying that students are actually smarter now than they were 30 years ago?

Yes. That’s exactly what I’m saying.

But don’t take it from me. Take it from the decades of research on the Flynn Effect: IQ scores have been rising worldwide at a rate of about 0.3 IQ points per year for as long as we’ve been keeping good records. Students today are about 10 IQ points smarter than students 30 years ago—a 2018 IQ score of 95 is equivalent to a 1988 score of 105, which is equivalent to a 1958 score of 115. There is reason to think this trend won’t continue indefinitely, since the effect is mainly concentrated at the bottom end of the distribution; but it has continued for quite some time already.

This by itself would probably be enough to explain the observed increase in grades, but there’s more: College students are also a self-selected sample, admitted precisely because they were believed to be the smartest individuals in the application pool. Rising grades at top institutions are easily explained by rising selectivity at top schools: Harvard now accepts 5.6% of applicants. In 1942, Harvard accepted 92% of applicants. The odds of getting in have fallen from 9:1 in favor to 19:1 against. Today, you need a 4.0 GPA, a 36 ACT in every category, glowing letters of recommendation, and hundreds of hours of extracurricular activities (or a family member who donated millions of dollars, of course) to get into Harvard. In the 1940s, you needed a high school diploma and a B average.

In fact, when educational researchers have tried to quantitatively study the phenomenon of “grade inflation”, they usually come back with the result that they simply can’t find it. The US department of education conducted a study in 1995 showing that average university grades had declined since 1965. Given that the Flynn effect raised IQ by almost 10 points during that time, maybe we should be panicking about grade deflation.

It really wouldn’t be hard to make that case: “Back in my day, you could get an A just by knowing basic algebra! Now they want these kids to take partial derivatives?” “We used to just memorize facts to ace the exam; but now teachers keep asking for reasoning and critical thinking?”

More recently, a study in 2013 found that grades rose at the high school level, but fell at the college level, and showed no evidence of losing any informativeness as a signaling mechanism. The only recent study I could find showing genuinely compelling evidence for grade inflation was a 2017 study of UK students estimating that grades are growing about twice as fast as the Flynn effect alone would predict. Most studies don’t even consider the possibility that students are smarter than they used to be—they just take it for granted that any increase in average grades constitutes grade inflation. Many of them don’t even control for the increase in selectivity—here’s one using the fact that Harvard’s average rose from 2.7 to 3.4 from 1960 to 2000 as evidence of “grade inflation” when Harvard’s acceptance rate fell from almost 30% to only 10% during that period.

Indeed, the real mystery is why so many professors believe in grade inflation, when the evidence for it is so astonishingly weak.

I think it’s availability heuristic. Who are professors? They are the cream of the crop. They aced their way through high school, college, and graduate school, then got hired and earned tenure—they were one of a handful of individuals who won a fierce competition with hundreds of competitors at each stage. There are over 320 million people in the US, and only 1.3 million college faculty. This means that college professors represent about the top 0.4% of high-scoring students.

Combine that with the fact that human beings assort positively (we like to spend time with people who are similar to us) and use availability heuristic (we judge how likely something is based on how many times we have seen it).

Thus, when a professor compares to her own experience of college, she is remembering her fellow top-scoring students at elite educational institutions. She is recalling the extreme intellectual demands she had to meet to get where she is today, and erroneously assuming that these are representative of most the population of her generation. She probably went to school at one of a handful of elite institutions, even if she now teaches at a mid-level community college: three quarters of college faculty come from the top one quarter of graduate schools.

And now she compares to the students she has to teach, most of whom would not be able to meet such demands—but of course most people in her generation couldn’t either. She frets for the future of humanity only because not everyone is a genius like her.

Throw in the Curse of Knowledge: The professor doesn’t remember how hard it was to learn what she has learned so far, and so the fact that it seems easy now makes her think it was easy all along. “How can they not know how to take partial derivatives!?” Well, let’s see… were you born knowing how to take partial derivatives?

Giving a student an A for work far inferior to what you’d have done in their place isn’t unfair. Indeed, it would clearly be unfair to do anything less. You have years if not decades of additional education ahead of them, and you are from self-selected elite sample of highly intelligent individuals. Expecting everyone to perform as well as you would is simply setting up most of the population for failure.

There are potential incentives for grade inflation that do concern me: In particular, a lot of international student visas and scholarship programs insist upon maintaining a B or even A- average to continue. Professors are understandably loathe to condemn a student to having to drop out or return to their home country just because they scored 81% instead of 84% on the final exam. If we really intend to make C the average score, then students shouldn’t lose funding or visas just for scoring a B-. Indeed, I have trouble defending any threshold above outright failing—which is to say, a minimum score of D-. If you pass your classes, that should be good enough to keep your funding.

Yet apparently even this isn’t creating too much upward bias, as students who are 10 IQ points smarter are still getting about the same scores as their forebears. We should be celebrating that our population is getting smarter, but instead we’re panicking over “easy grading”.

But kids these days, am I right?

Information theory proves that multiple-choice is stupid

Mar 19, JDN 2457832

This post is a bit of a departure from my usual topics, but it’s something that has bothered me for a long time, and I think it fits broadly into the scope of uniting economics with the broader realm of human knowledge.

Multiple-choice questions are inherently and objectively poor methods of assessing learning.

Consider the following question, which is adapted from actual tests I have been required to administer and grade as a teaching assistant (that is, the style of question is the same; I’ve changed the details so that it wouldn’t be possible to just memorize the response—though in a moment I’ll get to why all this paranoia about students seeing test questions beforehand would also be defused if we stopped using multiple-choice):

The demand for apples follows the equation Q = 100 – 5 P.
The supply of apples follows the equation Q = 10 P.
If a tax of $2 per apple is imposed, what is the equilibrium price, quantity, tax revenue, consumer surplus, and producer surplus?

A. Price = $5, Quantity = 10, Tax revenue = $50, Consumer Surplus = $360, Producer Surplus = $100

B. Price = $6, Quantity = 20, Tax revenue = $40, Consumer Surplus = $200, Producer Surplus = $300

C. Price = $6, Quantity = 60, Tax revenue = $120, Consumer Surplus = $360, Producer Surplus = $300

D. Price = $5, Quantity = 60, Tax revenue = $120, Consumer Surplus = $280, Producer Surplus = $500

You could try solving this properly, setting supply equal to demand, adjusting for the tax, finding the equilibrium, and calculating the surplus, but don’t bother. If I were tutoring a student in preparing for this test, I’d tell them not to bother. You can get the right answer in only two steps, because of the multiple-choice format.

Step 1: Does tax revenue equal $2 times quantity? We said the tax was $2 per apple.
So that rules out everything except C and D. Welp, quantity must be 60 then.

Step 2: Is quantity 10 times price as the supply curve says? For C they are, for D they aren’t; guess it must be C then.

Now, to do that, you need to have at least a basic understanding of the economics underlying the question (How is tax revenue calculated? What does the supply curve equation mean?). But there’s an even easier technique you can use that doesn’t even require that; it’s called Answer Splicing.

Here’s how it works: You look for repeated values in the answer choices, and you choose the one that has the most repeated values. Prices $5 and $6 are repeated equally, so that’s not helpful (maybe the test designer planned at least that far). Quantity 60 is repeated, other quantities aren’t, so it’s probably that. Likewise with tax revenue $120. Consumer surplus $360 and Producer Surplus $300 are both repeated, so those are probably it. Oh, look, we’ve selected a unique answer choice C, the correct answer!

You could have done answer splicing even if the question were about 18th century German philosophy, or even if the question were written in Arabic or Japanese. In fact you even do it if it were written in a cipher, as long as the cipher was a consistent substitution cipher.

Could the question have been designed to better avoid answer splicing? Probably. But this is actually quite difficult to do, because there is a fundamental tradeoff between two types of “distractors” (as they are known in the test design industry). You want the answer choices to contain correct pieces and resemble the true answer, so that students who basically understand the question but make a mistake in the process still get it wrong. But you also want the answer choices to be distinct enough in a random enough pattern that answer splicing is unreliable. These two goals are inherently contradictory, and the result will always be a compromise between them. Professional test-designers usually lean pretty heavily against answer-splicing, which I think is probably optimal so far as it goes; but I’ve seen many a professor err too far on the side of similar choices and end up making answer splicing quite effective.

But of course, all of this could be completely avoided if I had just presented the question as an open-ended free-response. Then you’d actually have to write down the equations, show me some algebra solving them, and then interpret your results in a coherent way to answer the question I asked. What’s more, if you made a minor mistake somewhere (carried a minus sign over wrong, forgot to divide by 2 when calculating the area of the consumer surplus triangle), I can take off a few points for that error, rather than all the points just because you didn’t get the right answer. At the other extreme, if you just randomly guess, your odds of getting the right answer are miniscule, but even if you did—or copied from someone else—if you don’t show me the algebra you won’t get credit.

So the free-response question is telling me a lot more about what the student actually knows, in a much more reliable way, that is much harder to cheat or strategize against.

Moreover, this isn’t a matter of opinion. This is a theorem of information theory.

The information that is carried over a message channel can be quantitatively measured as its Shannon entropy. It is usually measured in bits, which you may already be familiar with as a unit of data storage and transmission rate in computers—and yes, those are all fundamentally the same thing. A proper formal treatment of information theory would be way too complicated for this blog, but the basic concepts are fairly straightforward: think in terms of how long a sequence of 1s and 0s it would take to convey the message. That is, roughly speaking, the Shannon entropy of that message.

How many bits are conveyed by a multiple-choice response with four choices? 2. Always. At maximum. No exceptions. It is fundamentally, provably, mathematically impossible to convey more than 2 bits of information via a channel that only has 4 possible states. Any multiple-choice response—any multiple-choice response—of four choices can be reduced to the sequence 00, 01, 10, 11.

True-false questions are a bit worse—literally, they convey 1 bit instead of 2. It’s possible to fully encode the entire response to a true-false question as simply 0 or 1.

For comparison, how many bits can I get from the free-response question? Well, in principle the answer to any mathematical question has the cardinality of the real numbers, which is infinite (in some sense beyond infinite, in fact—more infinite than mere “ordinary” infinity); but in reality you can only write down a small number of possible symbols on a page. I can’t actually write down the infinite diversity of numbers between 3.14159 and the true value of pi; in 10 digits or less, I can only (“only”) write down a few billion of them. So let’s suppose that handwritten text has about the same information density as typing, which in ASCII or Unicode has 8 bits—one byte—per character. If the response to this free-response question is 300 characters (note that this paragraph itself is over 800 characters), then the total number of bits conveyed is about 2400.

That is to say, one free-response question conveys six hundred times as much information as a multiple-choice question. Of course, a lot of that information is redundant; there are many possible correct ways to write the answer to a problem (if the answer is 1.5 you could say 3/2 or 6/4 or 1.500, etc.), and many problems have multiple valid approaches to them, and it’s often safe to skip certain steps of algebra when they are very basic, and so on. But it’s really not at all unrealistic to say that I am getting between 10 and 100 times as much useful information about a student from reading one free response than I would from one multiple-choice question.

Indeed, it’s actually a bigger difference than it appears, because when evaluating a student’s performance I’m not actually interested in the information density of the message itself; I’m interested in the product of that information density and its correlation with the true latent variable I’m trying to measure, namely the student’s actual understanding of the content. (A sequence of 500 random symbols would have a very high information density, but would be quite useless in evaluating a student!) Free-response questions aren’t just more information, they are also better information, because they are closer to the real-world problems we are training for, harder to cheat, harder to strategize, nearly impossible to guess, and provided detailed feedback about exactly what the student is struggling with (for instance, maybe they could solve the equilibrium just fine, but got hung up on calculating the consumer surplus).

As I alluded to earlier, free-response questions would also remove most of the danger of students seeing your tests beforehand. If they saw it beforehand, learned how to solve it, memorized the steps, and then were able to carry them out on the test… well, that’s actually pretty close to what you were trying to teach them. It would be better for them to learn a whole class of related problems and then be able to solve any problem from that broader class—but the first step in learning to solve a whole class of problems is in fact learning to solve one problem from that class. Just change a few details each year so that the questions aren’t identical, and you will find that any student who tried to “cheat” by seeing last year’s exam would inadvertently be studying properly for this year’s exam. And then perhaps we could stop making students literally sign nondisclosure agreements when they take college entrance exams. Listen to this Orwellian line from the SAT nondisclosure agreement:

Misconduct includes,but is not limited to:

Taking any test questions or essay topics from the testing room, including through memorization, giving them to anyone else, or discussing them with anyone else through anymeans, including, but not limited to, email, text messages or the Internet

Including through memorization. You are not allowed to memorize SAT questions, because God forbid you actually learn something when we are here to make money off evaluating you.

Multiple-choice tests fail in another way as well; by definition they cannot possibly test generation or recall of knowledge, they can only test recognition. You don’t need to come up with an answer; you know for a fact that the correct answer must be in front of you, and all you need to do is recognize it. Recall and recognition are fundamentally different memory processes, and recall is both more difficult and more important.

Indeed, the real mystery here is why we use multiple-choice exams at all.
There are a few types of very basic questions where multiple-choice is forgivable, because there are just aren’t that many possible valid answers. If I ask whether demand for apples has increased, you can pretty much say “it increased”, “it decreased”, “it stayed the same”, or “it’s impossible to determine”. So a multiple-choice format isn’t losing too much in such a case. But most really interesting and meaningful questions aren’t going to work in this format.

I don’t think it’s even particularly controversial among educators that multiple-choice questions are awful. (Though I do recall an “educational training” seminar a few weeks back that was basically an apologia for multiple choice, claiming that it is totally possible to test “higher-order cognitive skills” using multiple-choice, for reals, believe me.) So why do we still keep using them?

Well, the obvious reason is grading time. The one thing multiple-choice does have over a true free response is that it can be graded efficiently and reliably by machines, which really does make a big difference when you have 300 students in a class. But there are a couple reasons why even this isn’t a sufficient argument.

First of all, why do we have classes that big? It’s absurd. At that point you should just email the students video lectures. You’ve already foreclosed any possibility of genuine student-teacher interaction, so why are you bothering with having an actual teacher? It seems to be that universities have tried to work out what is the absolute maximum rent they can extract by structuring a class so that it is just good enough that students won’t revolt against the tuition, but they can still spend as little as possible by hiring only one adjunct or lecturer when they should have been paying 10 professors.

And don’t tell me they can’t afford to spend more on faculty—first of all, supporting faculty is why you exist. If you can’t afford to spend enough providing the primary service that you exist as an institution to provide, then you don’t deserve to exist as an institution. Moreover, they clearly can afford it—they simply prefer to spend on hiring more and more administrators and raising the pay of athletic coaches. PhD comics visualized it quite well; the average pay for administrators is three times that of even tenured faculty, and athletic coaches make ten times as much as faculty. (And here I think the mean is the relevant figure, as the mean income is what can be redistributed. Firing one administrator making $300,000 does actually free up enough to hire three faculty making $100,000 or ten grad students making $30,000.)

But even supposing that the institutional incentives here are just too strong, and we will continue to have ludicrously-huge lecture classes into the foreseeable future, there are still alternatives to multiple-choice testing.

Ironically, the College Board appears to have stumbled upon one themselves! About half the SAT math exam is organized into a format where instead of bubbling in one circle to give your 2 bits of answer, you bubble in numbers and symbols corresponding to a more complicated mathematical answer, such as entering “3/4” as “0”, “3”, “/”, “4” or “1.28” as “1”, “.”, “2”, “8”. This could easily be generalized to things like “e^2” as “e”, “^”, “2” and “sin(3pi/2)” as “sin”, “3” “pi”, “/”, “2”. There are 12 possible symbols currently allowed by the SAT, and each response is up to 4 characters, so we have already increased our possible responses from 4 to over 20,000—which is to say from 2 bits to 14. If we generalize it to include symbols like “pi” and “e” and “sin”, and allow a few more characters per response, we could easily get it over 20 bits—10 times as much information as a multiple-choice question.

But we can do better still! Even if we insist upon automation, high-end text-recognition software (of the sort any university could surely afford) is now getting to the point where it could realistically recognize a properly-formatted algebraic formula, so you’d at least know if the student remembered the formula correctly. Sentences could be transcribed into typed text, checked for grammar, and sorted for keywords—which is not nearly as good as a proper reading by an expert professor, but is still orders of magnitude better than filling circle “C”. Eventually AI will make even more detailed grading possible, though at that point we may have AIs just taking over the whole process of teaching. (Leaving professors entirely for research, presumably. Not sure if this would be good or bad.)

Automation isn’t the only answer either. You could hire more graders and teaching assistants—say one for every 30 or 40 students instead of one for every 100 students. (And then the TAs might actually be able to get to know their students! What a concept!) You could give fewer tests, or shorter ones—because a small, reliable sample is actually better than a large, unreliable one. A bonus there would be reducing students’ feelings of test anxiety. You could give project-based assignments, which would still take a long time to grade, but would also be a lot more interesting and fulfilling for both the students and the graders.

Or, and perhaps this is the most radical answer of all: You could stop worrying so much about evaluating student performance.

I get it, you want to know whether students are doing well, both so that you can improve your teaching and so that you can rank the students and decide who deserves various awards and merits. But do you really need to be constantly evaluating everything that students do? Did it ever occur to you that perhaps that is why so many students suffer from anxiety—because they are literally being formally evaluated with long-term consequences every single day they go to school?

If we eased up on all this evaluation, I think the fear is that students would just detach entirely; all teachers know students who only seem to show up in class because they’re being graded on attendance. But there are a couple of reasons to think that maybe this fear isn’t so well-founded after all.

If you give up on constant evaluation, you can open up opportunities to make your classes a lot more creative and interesting—and even fun. You can make students want to come to class, because they get to engage in creative exploration and collaboration instead of memorizing what you drone on at them for hours on end. Most of the reason we don’t do creative, exploratory activities is simply that we don’t know how to evaluate them reliably—so what if we just stopped worrying about that?

Moreover, are those students who only show up for the grade really getting anything out of it anyway? Maybe it would be better if they didn’t show up—indeed, if they just dropped out of college entirely and did something else with their lives until they get their heads on straight. Maybe all this effort that we are currently expending trying to force students to learn who clearly don’t appreciate the value of learning could instead be spent enriching the students who do appreciate learning and came here to do as much of it as possible. Because, ultimately, you can lead a student to algebra, but you can’t make them think. (Let me be clear, I do not mean students with less innate ability or prior preparation; I mean students who aren’t interested in learning and are only showing up because they feel compelled to. I admire students with less innate ability who nonetheless succeed because they work their butts off, and wish I were quite so motivated myself.)
There’s a downside to that, of course. Compulsory education does actually seem to have significant benefits in making people into better citizens. Maybe if we let those students just leave college, they’d never come back, and they would squander their potential. Maybe we need to force them to show up until something clicks in their brains and they finally realize why we’re doing it. In fact, we’re really not forcing them; they could drop out in most cases and simply don’t, probably because their parents are forcing them. Maybe the signaling problem is too fundamental, and the only way we can get unmotivated students to accept not getting prestigious degrees is by going through this whole process of forcing them to show up for years and evaluating everything they do until we can formally justify ultimately failing them. (Of course, almost by construction, a student who does the absolute bare minimum to pass will pass.) But college admission is competitive, and I can’t shake this feeling there are thousands of students out there who got rejected from the school they most wanted to go to, the school they were really passionate about and willing to commit their lives to, because some other student got in ahead of them—and that other student is now sitting in the back of the room playing with an iPhone, grumbling about having to show up for class every day. What about that squandered potential? Perhaps competitive admission and compulsory attendance just don’t mix, and we should stop compelling students once they get their high school diploma.

Games as economic simulations—and education tools

Mar 5, JDN 2457818 [Sun]

Moore’s Law is a truly astonishing phenomenon. Now as we are well into the 21st century (I’ve lived more of my life in the 21st century than the 20th now!) it may finally be slowing down a little bit, but it has had quite a run, and even this could be a temporary slowdown due to economic conditions or the lull before a new paradigm (quantum computing?) matures. Since at least 1975, the computing power of an individual processor has doubled approximately every year and a half; that means it has doubled over 25 times—or in other words that it has increased by a factor of over 30 million. I now have in my pocket a smartphone with several thousand times the processing speed of the guidance computer of the Saturn V that landed on the Moon.

This meteoric increase in computing power has had an enormous impact on the way science is done, including economics. Simple theoretical models that could be solved by hand are now being replaced by enormous simulation models that have to be processed by computers. It is now commonplace to devise models with systems of dozens of nonlinear equations that are literally impossible to solve analytically, and just solve them iteratively with computer software.

But one application of this technology that I believe is currently underutilized is video games.

As a culture, we still have the impression that video games are for children; even games like Dragon Age and Grand Theft Auto that are explicitly for adults (and really quite inappropriate for children!) are viewed as in some sense “childish”—that no serious adult would be involved with such frivolities. The same cultural critics who treat Shakespeare’s vagina jokes as the highest form of art are liable to dismiss the poignant critique of war in Call of Duty: Black Ops or the reflections on cultural diversity in Skyrim as mere puerility.

But video games are an art form with a fundamentally greater potential than any other. Now that graphics are almost photorealistic, there is really nothing you can do in a play or a film that you can’t do in a video game—and there is so, so much more that you can only do in a game.
In what other medium can we witness the spontaneous emergence and costly aftermath of a war? Yet EVE Online has this sort of event every year or so—just today there was a surprise attack involving hundreds of players that destroyed thousands of hours’—and dollars’—worth of starships, something that has more or less become an annual tradition. A few years ago there was a massive three-faction war that destroyed over $300,000 in ships and has now been commemorated as “the Bloodbath of B-R5RB”.
Indeed, the immersion and interactivity of games present an opportunity to do nothing less than experimental macroeconomics. For generations it has been impossible, or at least absurdly unethical, to ever experimentally manipulate an entire macroeconomy. But in a video game like EVE Online or Second Life, we can now do so easily, cheaply, and with little or no long-term harm to the participants—and we can literally control everything in the experiment. Forget the natural resource constraints and currency exchange rates—we can change the laws of physics if we want. (Indeed, EVE‘s whole trade network is built around FTL jump points, and in Second Life it’s a basic part of the interface that everyone can fly like Superman.)

This provides untold potential for economic research. With sufficient funding, we could build a game that would allow us to directly test hypotheses about the most fundamental questions of economics: How do governments emerge and maintain security? How is the rule of law sustained, and when can it be broken? What controls the value of money and the rate of inflation? What is the fundamental cause of unemployment, and how can it be corrected? What influences the rate of technological development? How can we maximize the rate of economic growth? What effect does redistribution of wealth have on employment and output? I envision a future where we can directly simulate these questions with thousands of eager participants, varying the subtlest of parameters and carrying out events over any timescale we like from seconds to centuries.

Nor is the potential of games in economics limited to research; it also has enormous untapped potential in education. I’ve already seen in my classes how tabletop-style games with poker chips can teach a concept better in a few minutes than hours of writing algebra derivations on the board; but custom-built video games could be made that would teach economics far better still, and to a much wider audience. In a well-designed game, people could really feel the effects of free trade or protectionism, not just on themselves as individuals but on entire nations that they control—watch their GDP numbers go down as they scramble to produce in autarky what they could have bought for half the price if not for the tariffs. They could see, in real time, how in the absence of environmental regulations and Pigovian taxes the actions of millions of individuals could despoil our planet for everyone.

Of course, games are fundamentally works of fiction, subject to the Fictional Evidence Fallacy and only as reliable as their authors make them. But so it is with all forms of art. I have no illusions about the fact that we will never get the majority of the population to regularly read peer-reviewed empirical papers. But perhaps if we are clever enough in the games we offer them to play, we can still convey some of the knowledge that those papers contain. We could also update and expand the games as new information comes in. Instead of complaining that our students are spending time playing games on their phones and tablets, we could actually make education into games that are as interesting and entertaining as the ones they would have been playing. We could work with the technology instead of against it. And in a world where more people have access to a smartphone than to a toilet, we could finally bring high-quality education to the underdeveloped world quickly and cheaply.

Rapid growth in computing power has given us a gift of great potential. But soon our capacity will widen even further. Even if Moore’s Law slows down, computing power will continue to increase for awhile yet. Soon enough, virtual reality will finally take off and we’ll have even greater depth of immersion available. The future is bright—if we can avoid this corporatist cyberpunk dystopia we seem to be hurtling toward, of course.

Caught between nepotism and credentialism

Feb 19, JDN 2457804

One of the more legitimate criticisms out there of we “urban elites” is our credentialismour tendency to decide a person’s value as an employee or even as a human being based solely upon their formal credentials. Randall Collins, an American sociologist, wrote a book called The Credential Society arguing that much of the class stratification in the United States is traceable to this credentialism—upper-middle-class White Anglo-Saxon Protestants go to the good high schools to get into the good colleges to get the good careers, and all along the way maintain subtle but significant barriers to keep everyone else out.

A related concern is that of credential inflation, where more and more people get a given credential (such as a high school diploma or a college degree), and it begins to lose value as a signal of status. It is often noted that a bachelor’s degree today “gets” you the same jobs that a high school diploma did two generations ago, and two generations hence you may need a master’s or even a PhD.

I consider this concern wildly overblown, however. First of all, they’re not actually the same jobs at all. Even our “menial” jobs of today require skills that most people didn’t have two generations ago—not simply those involving electronics and computers, but even quite basic literacy and numeracy. Yes, you could be a banker in the 1920s with a high school diploma, but plenty of bankers in the 1920s didn’t know algebra. What, you think they were arbitraging derivatives based on the Black-Scholes model?

The primary purpose of education should be to actually improve students’ abilities, not to signal their superior status. More people getting educated is good, not bad. If we really do need signals, we can devise better ones than making people pay tens of thousands of dollars in tuition and spending years taking classes. An expenditure of that magnitude should be accomplishing something, not just signaling. (And given the overwhelming positive correlation between a country’s educational attainment and its economic development, clearly education is actually accomplishing something.) Our higher educational standards have directly tied to higher technology and higher productivity. If indeed you need a PhD to be a janitor in 2050, it will be because in 2050 a “janitor” is actually the expert artificial intelligence engineer who commands an army of cleaning robots, not because credentials have “inflated”. Thinking that credentials “inflate” requires thinking that business managers must be very stupid, that they would exclude whole swaths of qualified candidates that they could pay less to do the same work. Only a complete moron would require a PhD to hire you for wielding a mop.

No, what concerns me is an over-emphasis on prestigious credentials over genuine competence. This is definitely a real issue in our society: Almost every US President went to an Ivy League university, yet several of them (George W. Bush, anyone?) clearly would not have been selected by such a university if their families had not been wealthy and well-connected. (Harvard’s application literally contains a question asking whether you are a “lineal or collateral descendant” of one of a handful of super-wealthy families.) Papers that contain errors so basic I would probably get a failing grade for them as a grad student become internationally influential, because they were written by famous economists with fancy degrees.

Ironically, it may be precisely because elite universities try not to give grades or special honors that so many of their students try so desperately to latch onto any bits of social status they can get their hands on. In this blog post, a former Yale law student comments on how, without grades or cum laude to define themselves, Yale students became fiercely competitive in the pettiest ways imaginable. Or it might just be a selection effect; to get into Yale you’ve probably got to be pretty competitive, so even if they don’t give out grades once you get there, you can take the student out of the honors track, but you can’t take the honors track out of the student.

But perhaps the biggest problem with credentialism is… I don’t see any viable alternatives!

We have to decide who is going to be hired for technical and professional positions somehow. It almost certainly can’t be everyone. And the most sensible way to do it would be to have a process people go through to get trained and evaluated on their skills in that profession—that is, a credential.

What else would we do? We could decide randomly, I suppose; well, good luck with that. Or we could deliberately pick people who don’t have qualifications (“anti-credentialism”, I suppose), which would be systematically worse than choosing at random. Or individual employers could hire individuals they know and trust on a personal level, which doesn’t seem quite so ridiculous—but we have a name for that too, and it’s nepotism.

Even anti-credentialism does exist, bafflingly enough. Many people voted for George W. Bush because they said he was “the kind of guy you can have a beer with”. That wasn’t true, of course; he was the spoiled child of a wealthy and powerful political dynasty, a man who had never really worked a day in his life. But even if it had been true, so what? How is that a qualification to be the leader of the free world? And how many people voted for Trump precisely because he had no experience in government? This made sense to them somehow. (And, shockingly, he has no idea what he’s doing. Actually, what is shocking is that he admits it.)

Nepotism of course happens all the time. In fact, nepotism is probably the default state for humans. The continual re-emergence of hereditary monarchy and feudalism around the world suggests that this is some sort of attractor state for human societies, that in the absence of strong institutional pressures toward some other system this is what people will generally settle into. And feudalism is nothing if not nepotistic; your position in life is almost entirely determined by your father’s position, and his father’s before that.

Formal credentials can put a stop to that. Of course, your ability to obtain the credential often depends upon your income and social status. But if you can get past those barriers and actually get the credential, you now have a way of pushing past at least some of the competitors who would otherwise have been hired on their family connections alone. The rise in college enrollments—and the fact that women now actually exceed men in college enrollment rates—is one of the biggest reasons why the gender pay gap is rapidly closing among young workers. The nepotism and sexism that would otherwise have gotten unqualified men hired are now overcome by the superior credentials of qualified women.

Credentialism does still seem suboptimal… but from where I’m sitting, it seems like a second-best solution. We can’t actually observe people’s competence and ability directly, so we need credentials to provide an approximate measurement. We can certainly work to improve credentials—and for example, I am fiercely opposed to multiple-choice testing because it produces such meaningless credentials—but ultimately I don’t see any alternative to credentials.

The urban-rural divide runs deep

Feb 5, JDN 2457790

Are urban people worth less than rural people?

That probably sounds like a ridiculous thing to ask; of course not, all people are worth the same (other things equal of course—philanthropists are worth more than serial murderers). But then, if you agree with that, you’re probably an urban person, as I’m sure most of my readers are (and as indeed most people in highly-developed countries are).

A disturbing number of rural people, however, honestly do seem to believe this. They think that our urban lifestyles (whatever they imagine those to be) devalue us as citizens and human beings.

That is the key subtext to understand in the terrifying phenomenon that is Donald Trump. Most of the people who voted for him can’t possibly have thought he was actually trustworthy, and many probably didn’t actually support his policies of bigotry and authoritarianism (though he was very popular among bigots and authoritarians). From speaking with family members and acquaintances who proudly voted for Trump, one thing came through very clearly: This was a gigantic middle finger pointed at cities. They didn’t even really want Trump; they just knew we didn’t, and so they voted for him out of spite as much as anything else. They also have really confused views about free trade, so some of them voted for him because he promised to bring back jobs lost to trade (that weren’t lost to trade, can’t be brought back, and shouldn’t be even if they could). Talk with a Trump voter for a few minutes, and sneers of “latte-sipping liberal” (I don’t even like coffee) and “coastal elite” (I moved here to get educated; I wasn’t born here) are sure to follow.

There has always been some conflict between rural and urban cultures, for as long as there have been urban cultures for rural cultures to be in conflict with. It is found not just in the US, but in most if not all countries around the world. It was relatively calm during the postwar boom in the 20th century, as incomes everywhere (or at least everywhere within highly-developed countries) were improving more or less in lockstep. But the 21st century has brought us much more unequal growth, concentrated on particular groups of people and particular industries. This has brought more resentment. And that divide, above all else, is what brought us Trump; the correlation between population density and voting behavior is enormous.

Of course, “urban” is sometimes a dog-whistle for “Black”; but sometimes I think it actually really means “urban”—and yet there’s still a lot of hatred embedded in it. Indeed, perhaps that’s why the dog-whistle works; a White man from a rural town can sneer at “urban” people and it’s not entirely clear whether he’s being racist or just being anti-urban.

The assumption that rural lifestyles are superior runs so deep in our culture that even in articles by urban people (like this one from the LA Times) supposedly reflecting on how to resolve this divide, there are long paeans to the “hard work” and “sacrifice” and “autonomy” of rural life, alongside mockery of “urban elites” for their “disproportionate” (by which you can only mean almost proportionate) power over government.

Well, guess what? If you want to live in a rural area, go live in a rural area. Don’t pine for it. Don’t tell me how great farm life is. If you want to live on a farm, go live on a farm. I have nothing against it; we need farmers, after all. I just want you to shut up about how great it is, especially if you’re not going to actually do it. Pining for someone else’s lifestyle when you could easily take on that lifestyle if you really wanted it just shows that you think the grass is greener on the other side.

Because the truth is, farm living isn’t so great for most people. The world’s poorest people are almost all farmers. 70% of people below the UN poverty line live in rural areas, even as more and more of the world’s population moves into cities. If you use a broader poverty measure, as many as 85% of the world’s poor live in rural areas.

The kind of “autonomy” that means defending your home with a shotgun is normally what we would call anarchy—it’s a society that has no governance, no security. (Of course, in the US that’s pure illusion; crime rates in general are low and falling, and lower in rural areas than urban areas. But in some parts of the world, that anarchy is very real.) One of the central goals of global economic development is to get people away from subsistence farming into far more efficient manufacturing and service jobs.

At least in the US, farm life is a lot better than it used to be, now that agricultural technology has improved so much that one farmer can do the work of hundreds. Despite increased population and increased food consumption per person, the number of farmers in the US is now the smallest it has been since before the Civil War. The share of employment devoted to agriculture has fallen from over 80% in 1800 to under 2% today. Even just since the 1960s, the labor productivity of US farms has more than tripled.

But the fact remains that some 80% of Americans have chosen to live in cities—and yes, I can fairly say “chosen”, because cities are more expensive, which makes urban living a voluntary choice. Most of us who live in the city right now could move to the country if we really wanted to. We choose not to, because we know our lives would be worse if we did.

Indeed, I dare say that a lot of the hatred of city-dwellers has got to be envy. Our (median) incomes are higher and our (mean) lifespans are longer. Fewer of our children are in poverty. Life is better here—we know it, and deep down, they know it too.

We also have better Internet access, unsurprisingly—though rural areas are only a few years behind, and the technology spreads so rapidly that a rural home in the US today is more than twice as likely to have Internet access as an urban home was in 1998.

Now, a rational solution to this problem would be either to improve the lives of people in rural areas or else move everyone to urban areas—and both of those things have been happening, not only in the US but around the world. But in order to do that, you need to be willing to change things. You have to give up the illusion that farm life is some wonderful thing we should all be emulating, rather than the necessary toil that humanity was forced to go through for centuries until civilization could advance beyond it. You have to be willing to replace farmers with robots, so that people who would have been farmers can go do something better with their lives. You need to give up the illusion that there is something noble or honorable about hard labor on a farm—indeed, you need to give up the illusion that there is anything noble or honorable about hard work in general. Work is not a benefit; work is a cost. Work is what we do because we have to—and when we no longer have to do it, we should stop. Wanting to escape toil and suffering doesn’t make you lazy or selfish—it makes you rational.

We could surely be more welcoming—but cities are obviously more welcoming to newcomers than rural areas are. Our housing is too expensive, but that’s in part because so many people want to live here—supply hasn’t been able to keep up with demand.

I may seem to be presenting this issue as one-sided; don’t urban people devalue rural people too? Sometimes. Insults like “hick” and “yokel” and “redneck” do of course exist. But I’ve never heard anyone from a city seriously argue that people who live in rural areas should have votes that systematically count for less than those of people who live in cities—yet the reverse is literally what people are saying when they defend the Electoral College. If you honestly think that the Electoral College deserves to exist in anything like its present form, you must believe that some Americans are worth more than others, and the people who are worth more are almost all in rural areas while the people who are worth less are almost all in urban areas.

No, National Review, the Electoral College doesn’t “save” America from California’s imperial power; it gives imperial power to a handful of swing states. The only reason California would be more important than any other state is that more Americans live here. Indeed, a lot of Republicans in California are effectively disenfranchised: they know their votes will never overcome the overwhelming Democratic majority in a state whose electors are winner-takes-all. About 30% of California votes Republican (well, not in the last election, because that was Trump—Orange County went Democratic for the first time in decades), so the number of disenfranchised Republicans in California alone is larger than the population of Michigan, which in turn is larger than the populations of Wyoming, North Dakota, South Dakota, Montana, Nebraska, West Virginia, and Kansas combined. For that matter, there are more people in California than there are in Canada. So yeah, I’m thinking maybe we should get a lot of votes?
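Since these population comparisons are easy to get wrong, here is a quick back-of-the-envelope check. The figures below are my own rough approximations of 2016 Census Bureau and Statistics Canada estimates (in millions), not numbers taken from the post or from National Review, so treat this as a sanity check rather than an authoritative calculation:

```python
# Rough sanity check of the population comparisons above.
# All figures are approximate 2016 estimates, in millions of people.
california = 39.3
michigan = 9.93
seven_states = {
    "Wyoming": 0.59, "North Dakota": 0.76, "South Dakota": 0.87,
    "Montana": 1.04, "Nebraska": 1.91, "West Virginia": 1.83,
    "Kansas": 2.91,
}
canada = 36.3

# Roughly 30% of Californians lean Republican.
ca_republicans = 0.30 * california

print(f"Republican Californians: ~{ca_republicans:.1f}M vs. Michigan: {michigan:.2f}M")
print(f"Michigan: {michigan:.2f}M vs. seven states combined: "
      f"{sum(seven_states.values()):.2f}M")
print(f"California: {california:.1f}M vs. Canada: {canada:.1f}M")
```

On these estimates the first and third comparisons hold comfortably (about 11.8M versus 9.93M, and 39.3M versus 36.3M), while the Michigan-versus-seven-states comparison is razor-thin (about 9.93M versus 9.91M), so that particular claim should be read as “roughly equal” rather than a decisive gap.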

But it’s easy for you to drum up fear over “imperial rule” by California in particular, because we’re so liberal—and so urban, indeed an astonishing 95% urban, the most of any US state (and frankly probably the most of any major regional entity on the planet! To beat that you have to be something like Singapore, which literally is just a single city).

In fact, while insults thrown at urban people get thrown at basically all of us regardless of what we do, most of the insults thrown at rural people are aimed specifically at uneducated rural people. (And statistically, while many people in rural areas are educated and many people in urban areas are not, there’s definitely a positive correlation between urbanization and education.) It’s still unfair in many ways, not least because education isn’t entirely a choice, not in a society where tuition at an average private university costs more than the median individual income. Many of the people we mock as being stupid were really just born poor. It may not be their fault, but they can’t believe that the Earth is only 10,000 years old and not have some substantial failings in their education. I still don’t think mockery is the right answer; it’s really kicking them while they’re down. But clearly there is something wrong with our society when 40% of people believe something so obviously ludicrous—and those beliefs are very much concentrated in the same Southern states that have the most rural populations. “They think we’re ignorant just because we believe that God made the Earth 6,000 years ago!” I mean… yes? I’m gonna have to own up to that one, I guess. I do in fact think that people who believe things that were disproven centuries ago are ignorant.

So really this issue is one-sided. We who live in cities are being systematically degraded and disenfranchised, and when we challenge that system we are accused of being selfish or elitist or worse. We are told that our lifestyles are inferior and shameful, and when we speak out about the positive qualities of our lives—our education, our acceptance of diversity, our flexibility in the face of change—we are again accused of elitism and condescension.

We could simply stew in that resentment. But we can do better. We can reach out to people in rural areas, show them not just that our lives are better—as I said, they already know this—but that they can have these lives too. And we can make policy so that this really can happen for people. Envy doesn’t automatically lead to resentment; that only happens when combined with a lack of mobility. The way urban people pine for the countryside is baffling, since we could go there any time; but the way that country people long for the city is perfectly understandable, as our lives really are better but our rent is too high for them to afford. We need to bring that rent down, not just for the people already living in cities, but also for the people who want to but can’t.

And of course we don’t want to move everyone to cities, either. Many people won’t want to live in cities, and we need a certain population of farmers to grow our food, after all. We can work to improve infrastructure in rural areas—particularly when it comes to hospitals, which are a basic necessity yet are increasingly underfunded. We shouldn’t stop using cost-effectiveness calculations, but we need to compare against the right things. If that hospital isn’t worth building, it should be because there’s another, better hospital we could build for the same amount or less—not because we think this town doesn’t deserve to have a hospital. We can expand our public transit systems over a wider area, and improve their transit speeds so that people can more easily travel to the city from further away.

We should seriously face up to the costs that free trade has imposed upon many rural areas. We can’t give up on free trade—but that doesn’t mean we need to keep our trade policy exactly as it is. We can do more to ensure that multinational corporations don’t have overwhelming bargaining power against workers and small businesses. We can establish a tax system that would redistribute more of the gains from free trade to the people and places most hurt by the transition. Right now, poor people in the US are often the most fiercely opposed to redistribution of wealth, because somehow they perceive that wealth will be redistributed from them when it would in fact be redistributed to them. They are in a scarcity mindset, their whole worldview shaped by the fact that they struggle to get by. They see every change as a threat, every stranger as an enemy.

Somehow we need to fight that mindset, to get them to see that there are many positive changes that can be made, many things that we can achieve together that none of us could achieve alone.