I think I know what the Great Filter is now

Sep 3, JDN 2458000

One of the most plausible solutions to the Fermi Paradox of why we have not found any other intelligent life in the universe is called the Great Filter: Somewhere in the process of evolving from unicellular prokaryotes to becoming an interstellar civilization, there is some highly-probable event that breaks the process, a “filter” that screens out all but the luckiest species—or perhaps literally all of them.

I previously thought that this filter was the invention of nuclear weapons; I now realize that this theory is incomplete. Nuclear weapons by themselves are only an existential threat because they co-exist with widespread irrationality and bigotry. The Great Filter is the combination of the two.

Yet there is a deep reason why we would expect that this is precisely the combination that would emerge in most species (as it has certainly emerged in our own): The rationality of a species is not uniform. Some individuals in a species will always be more rational than others, so as a species increases its level of rationality, it does not do so all at once.

Indeed, the processes of economic development and scientific advancement that make a species more rational are unlikely to be spread evenly; some cultures will develop faster than others, and some individuals within a given culture will be further along than others. While the mean level of rationality increases, the variance will also tend to increase.

On some arbitrary and oversimplified scale where 1 is the level of rationality needed to maintain a hunter-gatherer tribe, and 20 is the level of rationality needed to invent nuclear weapons, the distribution of rationality in a population starts something like this:


Most of the population is between levels 1 and 3, which we might think of as lying between the bare minimum for a tribe to survive and the level at which one can start to make advances in knowledge and culture.

Then, as the society advances, it goes through a phase like this:


This is about where we were in Periclean Athens. Most of the population is between levels 2 and 8. Level 2 used to be the average level of rationality back when we were hunter-gatherers. Level 8 is the level of philosophers like Archimedes and Pythagoras.

Today, our society looks like this:

Most of the population is between levels 4 and 20. As I said, level 20 is the point at which it becomes feasible to develop nuclear weapons. Some of the world’s people are extremely intelligent and rational, and almost everyone is more rational than even the smartest people in hunter-gatherer times, but now there is enormous variation.

Where on this chart are racism and nationalism? Importantly, I think they are above the level of rationality that most people had in ancient times. Even Greek philosophers had attitudes toward slaves and other cultures that the modern KKK would find repulsive. I think on this scale racism is about a 10 and nationalism is about a 12.

If we had managed to uniformly increase the rationality of our society, with everyone gaining at the same rate, our distribution would instead look like this:

If that were the case, we’d be fine. The lowest level of rationality widespread in the population would be 14, which is already beyond racism and nationalism. (Maybe it’s about the level of humanities professors today? That makes them substantially below quantum physicists, who are 20 by construction… but hey, still almost twice as good as the Greek philosophers they revere.) We would have our nuclear technology, but it would not endanger our future—we wouldn’t even use it for weapons, we’d use it for power generation and space travel. Indeed, this lower-variance, high-rationality state seems to be about what they have in the Star Trek universe.

But since we didn’t, a large chunk of our population is between 10 and 12—that is, still racist or nationalist. We have the nuclear weapons, and we have people who might actually be willing to use them.


I think this is what happens to most advanced civilizations around the galaxy. By the time they invent space travel, they have also invented nuclear weapons—but they still have their equivalent of racism and nationalism. And most of the time, the two combine into a volatile mix that results in the destruction or regression of their entire civilization.

If this is right, then we may be living at the most important moment in human history. It may be right here, right now, that we have the only chance we’ll ever get to turn the tide. We have to find a way to reduce the variance, to raise the rest of the world’s population past nationalism to a cosmopolitan morality. And we may have very little time.

What is the point of democracy?

Apr 9, JDN 2457853

[This topic was chosen by Patreon vote.]

“Democracy” is the sort of word that often becomes just an Applause Light (indeed it was the original example Less Wrong used). Like “freedom” and “liberty” (and for much the same reasons), it’s a good thing, that much we know; but it’s often unclear what is even meant by the word, much less why it should be so important to us.

From another angle, it is strangely common for economists and political scientists to argue that democracy is not all that important; they at least tend to use a precise formal definition of “democracy”, but are oddly quick to dismiss it as pointless or even harmful when it doesn’t line up precisely with their models of an efficient economy or society. I think the best example of this is the so-called “Downs paradox”, where political scientists were so steeped in the tradition of defining all rationality as psychopathic self-interest that they couldn’t even explain why it would occur to anyone to vote. (And indeed, rumor has it that most economists don’t bother to vote, much less campaign politically—which perhaps begins to explain why our economic policy is so terrible.)

Yet especially for Americans in the Trump era, I think it is vital to understand what “democracy” is supposed to mean, and why it is so important.

So, first of all, what is democracy? It is nothing more or less than government by popular vote.

This comes in degrees, of course: The purest direct democracy would have the entire population vote on even the most mundane policies and decisions. You could actually manage something like a monastery or a social club in such a fashion, but this is clearly unworkable on any large scale. Even once you get to hundreds of people, much less thousands or millions, it becomes unviable. The closest example I’ve seen is Switzerland, where there are always numerous popular referenda on ballots that are voted on by entire regions or the entire country—and even then, Switzerland does have representatives that make many of the day-to-day decisions.

So in practice all large-scale democratic systems are some degree of representative democracy, or republic, where some especially important decisions may be made by popular vote, but most policies are made by elected representatives, staff appointed by those representatives, or even career civil servants who are appointed in a nominally apolitical process not so different from private-sector hiring. In the most extreme cases such civil servants can become so powerful that you get a deep state, where career bureaucrats exercise more power than elected officials—at that point I think you have actually lost the right to really call yourself a “democracy” and have become something more like a technocracy.

Yet of course a country can get even more undemocratic than that, and many are, governed by an aristocracy or oligarchy that vests power in a small number of wealthy and powerful individuals, or a monarchy or autocracy that gives near-absolute power to a single individual.

Thus, there is a continuum of most to least democratic, with popular vote at one end, followed by elected representatives, followed by appointed civil servants, followed by a handful of oligarchs, and ultimately the most undemocratic system is an autocracy controlled by a single individual.

I also think it’s worth mentioning that constitutional monarchies with strong parliamentary systems, like the United Kingdom and Norway, are also “democracies” in the sense I intend. Yes, technically they have these hereditary monarchs—but in practice, the vast majority of the state’s power is vested in the votes of its people. Indeed, if we separate out parliamentary constitutional monarchy from presidential majoritarian democracy and compare them, the former might actually turn out to be better. Certainly, some of the world’s most prosperous nations are governed that way.

As I’ve already acknowledged, the very far extreme of pure direct democracy is unfeasible. But why would we want to get closer to that end? Why be like Switzerland or Denmark rather than like Turkey or Russia—or for that matter why be like California rather than like Mississippi?

Well, if you know anything about the overall welfare of these states, it almost seems obvious—Switzerland and Denmark are richer, happier, safer, healthier, more peaceful, and overall better in almost every way than Turkey and Russia. The gap between California and Mississippi is not as large, but it is larger than most people realize. Median household income in California is $64,500; in Mississippi it is only $40,593. Both are still well within the normal range of a highly-developed country, but that effectively makes California richer than Luxembourg but Mississippi poorer than South Korea. But perhaps the really stark comparison to make is life expectancy: Life expectancy at birth in California is almost 81 years, while in Mississippi it’s only 75.

Of course, there are a lot of other differences between states besides how much of their governance is done by popular referendum. Simply making Mississippi decide more things by popular vote would not turn it into California—much less would making Turkey more democratic turn it into Switzerland. So we shouldn’t attribute these comparisons entirely to differences in democracy. Indeed, a pair of two-way comparisons is only in the barest sense a statistical argument; we should be looking at dozens if not hundreds of comparisons if we really want to see the effects of democracy. And we should of course be trying to control for other factors, adjust for country fixed-effects, and preferably use natural experiments or instrumental variables to tease out causality.

Yet such studies have in fact been done. Stronger degrees of democracy appear to improve long-run economic growth, as well as reduce corruption, increase free trade, protect peace, and even improve air quality.

Subtler analyses have compared majoritarian versus proportional systems (where proportional seems, to me, at least, more democratic), as well as different republican systems with stronger or weaker checks and balances (stronger is clearly better, though whether that is “more democratic” is at least debatable). The effects of democracy on income distribution are more complicated, probably because there have been some highly undemocratic socialist regimes.

So, the common belief that democracy is good seems to be pretty well supported by the data. But why is democracy good? Is it just a practical matter of happening to get better overall results? Could it one day be overturned by some superior system such as technocracy or a benevolent autocratic AI?

Well, I don’t want to rule out the possibility of improving upon existing systems of government. Clearly new systems of government have in fact emerged over the course of history—Greek “democracy” and Roman “republic” were both really aristocracy, and anything close to universal suffrage didn’t really emerge on a large scale until the 20th century. So the 21st (or 22nd) century could well devise a superior form of government we haven’t yet imagined.

However, I do think there is good reason to believe that any new system of government that actually manages to improve upon democracy will still resemble democracy, because there are three key features democracy has that other systems of government simply can’t match. It is these three features that make democracy so important and so worth fighting for.

1. Everyone’s interests are equally represented.

Perhaps no real system actually manages to represent everyone’s interests equally, but the more democratic a system is, the better it will conform to this ideal. A well-designed voting system can aggregate the interests of an entire population and choose the course of action that creates the greatest overall benefit.

Markets can also be a good system for allocating resources, but while markets represent everyone’s interests, they do so highly unequally. Rich people are quite literally weighted more heavily in the sum.
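To make that weighting concrete, here is a toy sketch (the names, utilities, and wealth figures are all hypothetical): the same preferences aggregated once with one vote per person, and once weighted by wealth, as a market effectively does.

```python
# Toy comparison of equal-weight vs. wealth-weighted preference aggregation.
# All names and numbers are hypothetical.
utilities = {
    "alice": [3, 1, 0],  # utility each person gets from policies 0, 1, 2
    "bob":   [3, 1, 0],
    "carol": [0, 1, 3],
}
wealth = {"alice": 1, "bob": 1, "carol": 10}

def best_policy(weights):
    """Pick the policy maximizing the weighted sum of utilities."""
    totals = [sum(weights[p] * u[i] for p, u in utilities.items()) for i in range(3)]
    return totals.index(max(totals))

democratic = best_policy({p: 1 for p in utilities})  # one person, one vote
market = best_policy(wealth)                         # dollars as weights
```

Two of the three people prefer policy 0, and equal weighting picks it; weighting by wealth lets the one rich person swing the outcome to policy 2.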

Most systems of government do even worse, by completely silencing the voices of the majority of the population. The notion of a “benevolent autocracy” is really a conceit; what makes you think you could possibly keep the autocrat benevolent?

This is also why any form of disenfranchisement is dangerous and a direct attack upon democracy. Even if people are voting irrationally, against their own interests and yours, by silencing their voice you are undermining the most fundamental tenet of democracy itself. All voices must be heard, no exceptions. That is democracy’s fundamental strength.

2. The system is self-correcting.

This may more accurately describe a constitutional republican system with strong checks and balances, but that is what most well-functioning democracies have and it is what I recommend. If you conceive of “more democracy” as meaning that people can vote their way into fascism by electing a sufficiently charismatic totalitarian, then I do not want us to have “more democracy”. But just as contracts and regulations that protect you can make you in real terms more free because you can now safely do things you otherwise couldn’t risk, I consider strong checks and balances that maintain the stability of a republic against charismatic fascists to be in a deeper sense more democratic. This is ultimately semantic; I think I’ve made it clear enough that I want strong checks and balances.

With such checks and balances in place, democracies may move slower than autocracies; they may spend more time in deliberation or even bitter, polarized conflict. But this also means that their policies do not lurch from one emperor’s whim to another, and they are stable against being overtaken by corruption or fascism. Their policies are stable and predictable; their institutions are strong and resilient.

No other system of government yet devised by humans has this kind of stability, which may be why democracies are gradually taking over the world. Charismatic fascism fails when the charismatic leader dies; hereditary monarchy collapses when the great-grandson of the great king is incompetent; even oligarchy and aristocracy, which have at least some staying power, ultimately fall apart when the downtrodden peasants finally revolt. But democracy abides, for where monarchy and aristocracy are made of families and autocracy and fascism are made of a single man, democracy is made of principles and institutions. Democracy is evolutionarily stable, and thus in Darwinian terms we can predict it will eventually prevail.

3. The coercion that government requires is justified.

All government is inherently coercive. Libertarians are not wrong about this. Taxation is coercive. Regulation is coercive. Law is coercive. (The ones who go on to say that all government is “death threats” or “slavery” are bonkers, mind you. But it is in fact coercive.)

The coercion of government is particularly terrible if that coercion is coming from a system like an autocracy, where the will of the people is minimally if at all represented in the decisions of policymakers. Then that is a coercion imposed from outside, a coercion in the fullest sense, one person who imposes their will upon another.

But when government coercion comes from a democracy, it takes on a fundamentally different meaning. Then it is not they who coerce us—it is we who coerce ourselves. Now, why in the world would you coerce yourself? It seems ridiculous, doesn’t it?

Not if you know any game theory. There are in fact all sorts of reasons why one might want to coerce oneself, and two in particular are especially important for the justification of democratic government.

The first and most important is collective action: There are many situations in which people all working together to accomplish a goal can be beneficial to everyone, but nonetheless any individual person who found a way to shirk their duty and not contribute could benefit even more. Anyone who has done a group project in school with a couple of lazy students in it will know this experience: You end up doing all the work, but they still get a good grade at the end. If everyone had taken the rational, self-interested action of slacking off, everyone in the group would have failed the project.
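The incentive structure of the group project is the classic public goods game from game theory; here is a minimal numeric sketch (the group size and payoff multiplier are illustrative choices, not from the post):

```python
# Public goods game: each of n players either contributes 1 unit or shirks.
# Total contributions are multiplied and shared equally among everyone.
def payoff(my_contribution, total_contributions, n, multiplier=1.6):
    return multiplier * total_contributions / n - my_contribution

n = 4
# If the other three contribute, you still do better by shirking...
assert payoff(0, 3, n) > payoff(1, 4, n)
# ...so shirking is individually rational; yet everyone contributing beats everyone shirking.
assert payoff(1, 4, n) > payoff(0, 0, n)
```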

Now imagine that the group project we’re trying to achieve is, say, defending against an attack by Imperial Japan. We can’t exactly afford to risk that project falling through. So maybe we should actually force people to support it—in the form of taxes, or even perhaps a draft (as ultimately we did in WW2). Then it is no longer rational to try to shirk your duty, so everyone does their duty, the project gets done, and we’re all better off. How do we decide which projects are important enough to justify such coercion? We vote, of course. This is the most fundamental justification of democratic government.

The second reason that is relevant for government is commitment. There are many circumstances in which we want to accomplish something in the future, and from a long-run perspective it makes sense to achieve that goal—but then when the time comes to take action, we are tempted to procrastinate or change our minds. How can we resolve such a dilemma? Well, one way is to tie our own hands—to coerce ourselves into carrying out the necessary task we are tempted to avoid or delay.

This applies to many types of civil and criminal law, particularly regarding property ownership. Murder is a crime that most people would not commit even if it were completely legal. But shoplifting? I think if most people knew there would be no penalty for petty theft and retail fraud they would be tempted into doing it at least on occasion. I doubt it would be frequent enough to collapse our entire economic system, but it would introduce a lot of inefficiency, and make almost everything more expensive. By having laws in place that punish us for such behavior, we have a way of defusing such temptations, at least for most people most of the time. This is not as important for the basic functioning of government as is collective action, but I think it is still important enough to be worthy of mention.

Of course, there will always be someone who disagrees with any given law, regardless of how sensible and well-founded that law may be. And while in some sense “we all” agreed to pay these taxes, when the IRS actually demands that specific dollar amount from you, it may well be an amount that you would not have chosen if you’d been able to set the entire tax system yourself. But this is a problem of aggregation that I think may be completely intractable; there’s no way to govern by consensus, because human beings just can’t achieve consensus on the scale of millions of people. Governing by popular vote and representation is the best alternative we’ve been able to come up with. If and when someone devises a system of government that solves that problem and represents the public will even better than voting, then we will have a superior alternative to democracy.

Until then, it is as Churchill said: “Democracy is the worst form of government, except for all the others.”

How we sold our privacy piecemeal

Apr 2, JDN 2457846

The US Senate just narrowly voted to remove restrictions on the sale of user information by Internet Service Providers. Right now, your ISP can basically sell your information to whomever they like without even telling you. The new rule that the Senate struck down would have required them to at least make you sign a form with some fine print on it, which you probably would sign without reading it. So in practical terms maybe it makes no difference.

…or does it? Maybe that’s really the mistake we’ve been making all along.

In cognitive science we have a concept called the just-noticeable difference (JND); it is basically what it sounds like. If you have two stimuli—two colors, say, or sounds of two different pitches—that differ by an amount smaller than the JND, people will not notice it. But if they differ by more than the JND, people will notice. (In practice it’s a bit more complicated than that, as different people have different JND thresholds and even within a person they can vary from case to case based on attention or other factors. But there’s usually a relatively narrow range of JND values, such that anything below that is noticed by no one and anything above that is noticed by almost everyone.)

The JND seems like an intuitively obvious concept—of course you can’t tell the difference between a color of 432.78 nanometers and 432.79 nanometers!—but it actually has profound implications. In particular it undermines the possibility of having truly transitive preferences. If you prefer some colors to others—which most of us do—but you have a nonzero JND in color wavelengths—as we all do—then I can do the following: Find one color you like (for concreteness, say you like blue of 475 nm), and another color you don’t (say green of 510 nm). Now I let you choose between the blue you like and another blue, 475.01 nm. Will you prefer one to the other? Of course not; the difference is within your JND. So now compare 475.01 nm and 475.02 nm; which do you prefer? Again, you’re indifferent. And I can go on and on this way a few thousand times, until finally I get to 510 nanometers, the green you didn’t like. I have just found a chain of your preferences that is intransitive; you said A = B = C = D… all the way down the line to X = Y = Z… but then at the end you said A > Z. Your preferences aren’t transitive, and therefore aren’t well-defined rational preferences. And you could do the same to me, so neither are mine.
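That chain of indifferences can be sketched in a few lines of code (the JND value and step size here are illustrative, not measured):

```python
# A preference for colors near 475 nm, with a just-noticeable difference (JND).
JND = 0.05  # nm; differences smaller than this are imperceptible (illustrative value)

def prefers(a_nm, b_nm, ideal=475.0):
    """True if color a is strictly preferred to color b."""
    if abs(a_nm - b_nm) < JND:
        return False  # within the JND: indifferent
    return abs(a_nm - ideal) < abs(b_nm - ideal)

# Walk from 475 nm to 510 nm in 0.01 nm steps, each step below the JND...
chain = [475 + 0.01 * i for i in range(3501)]  # 475.00, 475.01, ..., 510.00
indifferent_throughout = not any(
    prefers(chain[i], chain[i + 1]) or prefers(chain[i + 1], chain[i])
    for i in range(len(chain) - 1)
)
# ...yet the endpoints are strictly ranked: the chain is intransitive.
assert indifferent_throughout
assert prefers(475, 510)
```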

Part of the reason we’ve so willingly given up our privacy in the last generation or so is our paranoid fear of terrorism, which no doubt triggers deep instincts about tribal warfare. Depressingly, a plurality of Americans think that our government has not gone far enough in its obvious overreaches of the Constitution in the name of defending us from a threat that has killed fewer Americans in my lifetime than die from car accidents each month.

But that doesn’t explain why we—and I do mean we, for I am as guilty as most—have so willingly sold our relationships to Facebook and our schedules to Google. Google isn’t promising to save me from the threat of foreign fanatics; they’re merely offering me a more convenient way to plan my activities. Why, then, am I so cavalier about entrusting them with so much personal data?


Well, I didn’t start by giving them my whole life. I created an email account, which I used on occasion. I tried out their calendar app and used it to remind myself when my classes were. And so on, and so forth, until now Google knows almost as much about me as I know about myself.

At each step, it didn’t feel like I was doing anything of significance; perhaps indeed it was below my JND. Each bit of information I was giving didn’t seem important, and perhaps it wasn’t. But all together, our combined information allows Google to make enormous amounts of money without charging most of its users a cent.

The process goes something like this. Imagine someone offering you a penny in exchange for telling them how many times you made left turns last week. You’d probably take it, right? Who cares how many left turns you made last week? But then they offer another penny in exchange for telling them how many miles you drove on Tuesday. And another penny for telling them the average speed you drive during the afternoon. This process continues hundreds of times, until they’ve finally given you say $5.00—and they know exactly where you live, where you work, and where most of your friends live, because all that information was encoded in the list of driving patterns you gave them, piece by piece.

Consider instead how you’d react if someone had offered, “Tell me where you live and work and I’ll give you $5.00.” You’d be pretty suspicious, wouldn’t you? What are they going to do with that information? And $5.00 really isn’t very much money. Maybe there’s a price at which you’d part with that information to a random suspicious stranger—but it’s probably at least $50 or even more like $500, not $5.00. But by asking it in 500 different questions for a penny each, they can obtain that information from you at a bargain price.

If you work out how much money Facebook and Google make from each user, it’s actually pitiful. Facebook has been increasing their revenue lately, but it’s still less than $20 per user per year. The stranger asks, “Tell me who all your friends are, where you live, where you were born, where you work, and what your political views are, and I’ll give you $20.” Do you take that deal? Apparently, we do. Polls find that most Americans are willing to exchange privacy for valuable services, often quite cheaply.


Of course, there isn’t actually an alternative social network that doesn’t sell data and instead just charges a subscription fee. I don’t think this is a fundamentally unfeasible business model, but it hasn’t succeeded so far, and it will have an uphill battle for two reasons.

The first is the obvious one: It would have to compete with Facebook and Google, who already have the enormous advantage of a built-in user base of hundreds of millions of people.

The second one is what this post is about: The social network based on conventional economics rather than selling people’s privacy can’t take advantage of the JND.

I suppose they could try—charge $0.01 per month at first, then after a while raise it to $0.02, $0.03 and so on until they’re charging $2.00 per month and actually making a profit—but that would be much harder to pull off, and it would provide the least revenue when it is needed most, at the early phase when the up-front costs of establishing a network are highest. Moreover, people would still feel that; it’s a good feature of our monetary system that you can’t break money into small enough denominations to really consistently hide under the JND. But information can be broken down into very tiny pieces indeed. Much of the revenue earned by these corporate giants is actually based upon indexing the keywords of the text we write; we literally sell off our privacy word by word.


What should we do about this? Honestly, I’m not sure. Facebook and Google do in fact provide valuable services, without which we would be worse off. I would be willing to pay them their $20 per year, if I could ensure that they’d stop selling my secrets to advertisers. But as long as their current business model keeps working, they have little incentive to change. There is in fact a huge industry of data brokering, corporations you’ve probably never heard of that make their revenue entirely from selling your secrets.

In a rare moment of actual journalism, TIME ran an article about a year ago arguing that we need new government policy to protect us from this kind of predation of our privacy. But they had little to offer in the way of concrete proposals.

The ACLU does better: They have specific proposals for regulations that should be made to protect our information from the most harmful prying eyes. But as we can see, the current administration has no particular interest in pursuing such policies—if anything they seem to do the opposite.

Information theory proves that multiple-choice is stupid

Mar 19, JDN 2457832

This post is a bit of a departure from my usual topics, but it’s something that has bothered me for a long time, and I think it fits broadly into the scope of uniting economics with the broader realm of human knowledge.

Multiple-choice questions are inherently and objectively poor methods of assessing learning.

Consider the following question, which is adapted from actual tests I have been required to administer and grade as a teaching assistant (that is, the style of question is the same; I’ve changed the details so that it wouldn’t be possible to just memorize the response—though in a moment I’ll get to why all this paranoia about students seeing test questions beforehand would also be defused if we stopped using multiple-choice):

The demand for apples follows the equation Q = 100 – 5 P.
The supply of apples follows the equation Q = 10 P.
If a tax of $2 per apple is imposed, what is the equilibrium price, quantity, tax revenue, consumer surplus, and producer surplus?

A. Price = $5, Quantity = 10, Tax revenue = $50, Consumer Surplus = $360, Producer Surplus = $100

B. Price = $6, Quantity = 20, Tax revenue = $40, Consumer Surplus = $200, Producer Surplus = $300

C. Price = $6, Quantity = 60, Tax revenue = $120, Consumer Surplus = $360, Producer Surplus = $300

D. Price = $5, Quantity = 60, Tax revenue = $120, Consumer Surplus = $280, Producer Surplus = $500

You could try solving this properly, setting supply equal to demand, adjusting for the tax, finding the equilibrium, and calculating the surplus, but don’t bother. If I were tutoring a student in preparing for this test, I’d tell them not to bother. You can get the right answer in only two steps, because of the multiple-choice format.

Step 1: Does tax revenue equal $2 times quantity? We said the tax was $2 per apple. That rules out A, whose revenue doesn’t match.

Step 2: Is quantity 10 times price, as the supply curve says? Of the remaining choices B, C, and D, that holds only for C; guess it must be C then.
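The elimination strategy can be carried out mechanically, without ever solving the underlying economics problem; a sketch using the values from the choices above:

```python
# Filter the answer choices using only internal consistency checks.
choices = {
    "A": {"price": 5, "quantity": 10, "revenue": 50},
    "B": {"price": 6, "quantity": 20, "revenue": 40},
    "C": {"price": 6, "quantity": 60, "revenue": 120},
    "D": {"price": 5, "quantity": 60, "revenue": 120},
}
# Step 1: tax revenue must equal $2 per apple times quantity.
survivors = {k: v for k, v in choices.items() if v["revenue"] == 2 * v["quantity"]}
# Step 2: quantity must equal 10 times price, per the supply curve Q = 10P.
survivors = {k: v for k, v in survivors.items() if v["quantity"] == 10 * v["price"]}
answer = list(survivors)  # only "C" survives both checks
```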

Now, to do that, you need to have at least a basic understanding of the economics underlying the question (How is tax revenue calculated? What does the supply curve equation mean?). But there’s an even easier technique you can use that doesn’t even require that; it’s called Answer Splicing.

Here’s how it works: You look for repeated values in the answer choices, and you choose the one that has the most repeated values. Prices $5 and $6 are repeated equally, so that’s not helpful (maybe the test designer planned at least that far). Quantity 60 is repeated, other quantities aren’t, so it’s probably that. Likewise with tax revenue $120. Consumer surplus $360 and Producer Surplus $300 are both repeated, so those are probably it. Oh, look, we’ve selected a unique answer choice C, the correct answer!
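Answer splicing itself is easy to automate; here is a sketch that scores each choice by how often its values recur across all the choices:

```python
from collections import Counter

# (price, quantity, revenue, consumer surplus, producer surplus) for each choice
choices = {
    "A": (5, 10, 50, 360, 100),
    "B": (6, 20, 40, 200, 300),
    "C": (6, 60, 120, 360, 300),
    "D": (5, 60, 120, 280, 500),
}
# Count, field by field, how often each value appears across the choices.
field_counts = [Counter(vals[i] for vals in choices.values()) for i in range(5)]
# Score each choice by the total repetition of its values.
scores = {k: sum(field_counts[i][v] for i, v in enumerate(vals))
          for k, vals in choices.items()}
spliced_guess = max(scores, key=scores.get)  # picks "C" without reading the question
```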

You could have done answer splicing even if the question were about 18th century German philosophy, or even if the question were written in Arabic or Japanese. In fact you could even do it if it were written in a cipher, as long as the cipher was a consistent substitution cipher.

Could the question have been designed to better avoid answer splicing? Probably. But this is actually quite difficult to do, because there is a fundamental tradeoff between two types of “distractors” (as they are known in the test design industry). You want the answer choices to contain correct pieces and resemble the true answer, so that students who basically understand the question but make a mistake in the process still get it wrong. But you also want the answer choices to be distinct enough in a random enough pattern that answer splicing is unreliable. These two goals are inherently contradictory, and the result will always be a compromise between them. Professional test-designers usually lean pretty heavily against answer-splicing, which I think is probably optimal so far as it goes; but I’ve seen many a professor err too far on the side of similar choices and end up making answer splicing quite effective.

But of course, all of this could be completely avoided if I had just presented the question as an open-ended free-response. Then you’d actually have to write down the equations, show me some algebra solving them, and then interpret your results in a coherent way to answer the question I asked. What’s more, if you made a minor mistake somewhere (carried a minus sign over wrong, forgot to divide by 2 when calculating the area of the consumer surplus triangle), I can take off a few points for that error, rather than all the points just because you didn’t get the right answer. At the other extreme, if you just randomly guess, your odds of getting the right answer are miniscule, but even if you did—or copied from someone else—if you don’t show me the algebra you won’t get credit.

So the free-response question is telling me a lot more about what the student actually knows, in a much more reliable way, and in a form that is much harder to cheat or strategize against.

Moreover, this isn’t a matter of opinion. This is a theorem of information theory.

The information that is carried over a message channel can be quantitatively measured as its Shannon entropy. It is usually measured in bits, which you may already be familiar with as a unit of data storage and transmission rate in computers—and yes, those are all fundamentally the same thing. A proper formal treatment of information theory would be way too complicated for this blog, but the basic concepts are fairly straightforward: think in terms of how long a sequence of 1s and 0s you would need to convey the message. That is, roughly speaking, the Shannon entropy of that message.

How many bits are conveyed by a multiple-choice response with four choices? 2. Always. At maximum. No exceptions. It is fundamentally, provably, mathematically impossible to convey more than 2 bits of information via a channel that only has 4 possible states. Any multiple-choice response—any multiple-choice response—of four choices can be reduced to the sequence 00, 01, 10, 11.
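Concretely, that ceiling is just the base-2 logarithm of the number of possible responses, which is easy to verify:

```python
import math

def max_bits(n_states):
    """Maximum Shannon entropy of a channel with n_states possible states,
    attained when every state is equally likely."""
    return math.log2(n_states)

print(max_bits(4))  # 2.0 bits for a four-choice question
print(max_bits(2))  # 1.0 bit for a true-false question
```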

True-false questions are a bit worse—literally, they convey 1 bit instead of 2. It’s possible to fully encode the entire response to a true-false question as simply 0 or 1.

For comparison, how many bits can I get from the free-response question? Well, in principle the answer to any mathematical question has the cardinality of the real numbers, which is infinite (in some sense beyond infinite, in fact—more infinite than mere “ordinary” infinity); but in reality you can only write down a small number of possible symbols on a page. I can’t actually write down the infinite diversity of numbers between 3.14159 and the true value of pi; in 10 digits or less, I can only (“only”) write down a few billion of them. So let’s suppose that handwritten text has about the same information density as typing, which in ASCII or Unicode has 8 bits—one byte—per character. If the response to this free-response question is 300 characters (note that this paragraph itself is over 800 characters), then the total number of bits conveyed is about 2400.
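That back-of-the-envelope figure is easy to check; a minimal sketch, using the same assumptions as above (8 bits per character, a 300-character response):

```python
BITS_PER_CHAR = 8         # one byte per character, as in ASCII text
RESPONSE_CHARS = 300      # a modest handwritten free response
MULTIPLE_CHOICE_BITS = 2  # the ceiling for a four-choice question

free_response_bits = RESPONSE_CHARS * BITS_PER_CHAR
print(free_response_bits)                          # 2400 bits
print(free_response_bits // MULTIPLE_CHOICE_BITS)  # 1200 times the four-choice ceiling
```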

That is to say, one free-response question conveys twelve hundred times as much information as a multiple-choice question. Of course, a lot of that information is redundant; there are many possible correct ways to write the answer to a problem (if the answer is 1.5 you could say 3/2 or 6/4 or 1.500, etc.), and many problems have multiple valid approaches to them, and it’s often safe to skip certain steps of algebra when they are very basic, and so on. But it’s really not at all unrealistic to say that I am getting between 10 and 100 times as much useful information about a student from reading one free response as I would get from one multiple-choice question.

Indeed, it’s actually a bigger difference than it appears, because when evaluating a student’s performance I’m not actually interested in the information density of the message itself; I’m interested in the product of that information density and its correlation with the true latent variable I’m trying to measure, namely the student’s actual understanding of the content. (A sequence of 500 random symbols would have a very high information density, but would be quite useless in evaluating a student!) Free-response questions aren’t just more information, they are also better information, because they are closer to the real-world problems we are training for, harder to cheat, harder to strategize, nearly impossible to guess, and provide detailed feedback about exactly what the student is struggling with (for instance, maybe they could solve the equilibrium just fine, but got hung up on calculating the consumer surplus).

As I alluded to earlier, free-response questions would also remove most of the danger of students seeing your tests beforehand. If they saw it beforehand, learned how to solve it, memorized the steps, and then were able to carry them out on the test… well, that’s actually pretty close to what you were trying to teach them. It would be better for them to learn a whole class of related problems and then be able to solve any problem from that broader class—but the first step in learning to solve a whole class of problems is in fact learning to solve one problem from that class. Just change a few details each year so that the questions aren’t identical, and you will find that any student who tried to “cheat” by seeing last year’s exam would inadvertently be studying properly for this year’s exam. And then perhaps we could stop making students literally sign nondisclosure agreements when they take college entrance exams. Listen to this Orwellian line from the SAT nondisclosure agreement:

Misconduct includes, but is not limited to:

Taking any test questions or essay topics from the testing room, including through memorization, giving them to anyone else, or discussing them with anyone else through any means, including, but not limited to, email, text messages or the Internet

Including through memorization. You are not allowed to memorize SAT questions, because God forbid you actually learn something when we are here to make money off evaluating you.

Multiple-choice tests fail in another way as well; by definition they cannot possibly test generation or recall of knowledge, they can only test recognition. You don’t need to come up with an answer; you know for a fact that the correct answer must be in front of you, and all you need to do is recognize it. Recall and recognition are fundamentally different memory processes, and recall is both more difficult and more important.

Indeed, the real mystery here is why we use multiple-choice exams at all.

There are a few types of very basic questions where multiple-choice is forgivable, because there just aren’t that many possible valid answers. If I ask whether demand for apples has increased, you can pretty much say “it increased”, “it decreased”, “it stayed the same”, or “it’s impossible to determine”. So a multiple-choice format isn’t losing too much in such a case. But most really interesting and meaningful questions aren’t going to work in this format.

I don’t think it’s even particularly controversial among educators that multiple-choice questions are awful. (Though I do recall an “educational training” seminar a few weeks back that was basically an apologia for multiple choice, claiming that it is totally possible to test “higher-order cognitive skills” using multiple-choice, for reals, believe me.) So why do we still keep using them?

Well, the obvious reason is grading time. The one thing multiple-choice does have over a true free response is that it can be graded efficiently and reliably by machines, which really does make a big difference when you have 300 students in a class. But there are a couple reasons why even this isn’t a sufficient argument.

First of all, why do we have classes that big? It’s absurd. At that point you should just email the students video lectures. You’ve already foreclosed any possibility of genuine student-teacher interaction, so why are you bothering with having an actual teacher? It seems to me that universities have tried to work out what is the absolute maximum rent they can extract by structuring a class so that it is just good enough that students won’t revolt against the tuition, but they can still spend as little as possible by hiring only one adjunct or lecturer when they should have been paying 10 professors.

And don’t tell me they can’t afford to spend more on faculty—first of all, supporting faculty is why you exist. If you can’t afford to spend enough providing the primary service that you exist as an institution to provide, then you don’t deserve to exist as an institution. Moreover, they clearly can afford it—they simply prefer to spend on hiring more and more administrators and raising the pay of athletic coaches. PhD comics visualized it quite well; the average pay for administrators is three times that of even tenured faculty, and athletic coaches make ten times as much as faculty. (And here I think the mean is the relevant figure, as the mean income is what can be redistributed. Firing one administrator making $300,000 does actually free up enough to hire three faculty making $100,000 or ten grad students making $30,000.)

But even supposing that the institutional incentives here are just too strong, and we will continue to have ludicrously-huge lecture classes into the foreseeable future, there are still alternatives to multiple-choice testing.

Ironically, the College Board appears to have stumbled upon one themselves! About half the SAT math exam is organized into a format where instead of bubbling in one circle to give your 2 bits of answer, you bubble in numbers and symbols corresponding to a more complicated mathematical answer, such as entering “3/4” as “0”, “3”, “/”, “4” or “1.28” as “1”, “.”, “2”, “8”. This could easily be generalized to things like “e^2” as “e”, “^”, “2” and “sin(3pi/2)” as “sin”, “3” “pi”, “/”, “2”. There are 12 possible symbols currently allowed by the SAT, and each response is up to 4 characters, so we have already increased our possible responses from 4 to over 20,000—which is to say from 2 bits to 14. If we generalize it to include symbols like “pi” and “e” and “sin”, and allow a few more characters per response, we could easily get it over 20 bits—10 times as much information as a multiple-choice question.
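The combinatorics behind those figures can be checked directly; a quick sketch, treating this as an upper bound, since the real SAT grid restricts which symbols can appear in which positions:

```python
import math

def grid_in_capacity(n_symbols, n_positions):
    """Upper bound on a grid-in response channel: every position can
    independently take any of the allowed symbols."""
    responses = n_symbols ** n_positions
    return responses, math.log2(responses)

# 12 symbols, 4 positions, as on the current SAT grid-in.
responses, bits = grid_in_capacity(12, 4)
print(responses, round(bits, 1))  # 20736 possible responses, about 14.3 bits
```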

But we can do better still! Even if we insist upon automation, high-end text-recognition software (of the sort any university could surely afford) is now getting to the point where it could realistically recognize a properly-formatted algebraic formula, so you’d at least know if the student remembered the formula correctly. Sentences could be transcribed into typed text, checked for grammar, and sorted for keywords—which is not nearly as good as a proper reading by an expert professor, but is still orders of magnitude better than filling circle “C”. Eventually AI will make even more detailed grading possible, though at that point we may have AIs just taking over the whole process of teaching. (Leaving professors entirely for research, presumably. Not sure if this would be good or bad.)

Automation isn’t the only answer either. You could hire more graders and teaching assistants—say one for every 30 or 40 students instead of one for every 100 students. (And then the TAs might actually be able to get to know their students! What a concept!) You could give fewer tests, or shorter ones—because a small, reliable sample is actually better than a large, unreliable one. A bonus there would be reducing students’ feelings of test anxiety. You could give project-based assignments, which would still take a long time to grade, but would also be a lot more interesting and fulfilling for both the students and the graders.

Or, and perhaps this is the most radical answer of all: You could stop worrying so much about evaluating student performance.

I get it, you want to know whether students are doing well, both so that you can improve your teaching and so that you can rank the students and decide who deserves various awards and merits. But do you really need to be constantly evaluating everything that students do? Did it ever occur to you that perhaps that is why so many students suffer from anxiety—because they are literally being formally evaluated with long-term consequences every single day they go to school?

If we eased up on all this evaluation, I think the fear is that students would just detach entirely; all teachers know students who only seem to show up in class because they’re being graded on attendance. But there are a couple of reasons to think that maybe this fear isn’t so well-founded after all.

If you give up on constant evaluation, you can open up opportunities to make your classes a lot more creative and interesting—and even fun. You can make students want to come to class, because they get to engage in creative exploration and collaboration instead of memorizing what you drone on at them for hours on end. Most of the reason we don’t do creative, exploratory activities is simply that we don’t know how to evaluate them reliably—so what if we just stopped worrying about that?

Moreover, are those students who only show up for the grade really getting anything out of it anyway? Maybe it would be better if they didn’t show up—indeed, if they just dropped out of college entirely and did something else with their lives until they get their heads on straight. Maybe all this effort that we are currently expending trying to force students to learn who clearly don’t appreciate the value of learning could instead be spent enriching the students who do appreciate learning and came here to do as much of it as possible. Because, ultimately, you can lead a student to algebra, but you can’t make them think. (Let me be clear, I do not mean students with less innate ability or prior preparation; I mean students who aren’t interested in learning and are only showing up because they feel compelled to. I admire students with less innate ability who nonetheless succeed because they work their butts off, and wish I were quite so motivated myself.)

There’s a downside to that, of course. Compulsory education does actually seem to have significant benefits in making people into better citizens. Maybe if we let those students just leave college, they’d never come back, and they would squander their potential. Maybe we need to force them to show up until something clicks in their brains and they finally realize why we’re doing it. In fact, we’re really not forcing them; they could drop out in most cases and simply don’t, probably because their parents are forcing them.

Maybe the signaling problem is too fundamental, and the only way we can get unmotivated students to accept not getting prestigious degrees is by going through this whole process of forcing them to show up for years and evaluating everything they do until we can formally justify ultimately failing them. (Of course, almost by construction, a student who does the absolute bare minimum to pass will pass.)

But college admission is competitive, and I can’t shake this feeling that there are thousands of students out there who got rejected from the school they most wanted to go to, the school they were really passionate about and willing to commit their lives to, because some other student got in ahead of them—and that other student is now sitting in the back of the room playing with an iPhone, grumbling about having to show up for class every day. What about that squandered potential? Perhaps competitive admission and compulsory attendance just don’t mix, and we should stop compelling students once they get their high school diploma.

Games as economic simulations—and education tools

Mar 5, JDN 2457818 [Sun]

Moore’s Law is a truly astonishing phenomenon. Now as we are well into the 21st century (I’ve lived more of my life in the 21st century than the 20th now!) it may finally be slowing down a little bit, but it has had quite a run, and even this could be a temporary slowdown due to economic conditions or the lull before a new paradigm (quantum computing?) matures. Since at least 1975, the computing power of an individual processor has doubled approximately every year and a half; that means it has doubled over 25 times—or in other words that it has increased by a factor of over 30 million. I now have in my pocket a smartphone with several thousand times the processing speed of the Apollo Guidance Computer that navigated astronauts to the Moon.
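The arithmetic behind those factors is worth making explicit; a quick sketch, using the 1975 start date and 18-month doubling period from the text, with 2017 as the year of writing:

```python
DOUBLING_TIME_YEARS = 1.5   # Moore's Law doubling period, per the text
START_YEAR, NOW = 1975, 2017

doublings = (NOW - START_YEAR) / DOUBLING_TIME_YEARS
factor = 2 ** doublings
print(int(doublings))   # 28 doublings -- "over 25 times"
print(factor > 30e6)    # True -- a factor "over 30 million"
```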

This meteoric increase in computing power has had an enormous impact on the way science is done, including economics. Simple theoretical models that could be solved by hand are now being replaced by enormous simulation models that have to be processed by computers. It is now commonplace to devise models with systems of dozens of nonlinear equations that are literally impossible to solve analytically, and just solve them iteratively with computer software.

But one application of this technology that I believe is currently underutilized is video games.

As a culture, we still have the impression that video games are for children; even games like Dragon Age and Grand Theft Auto that are explicitly for adults (and really quite inappropriate for children!) are viewed as in some sense “childish”—that no serious adult would be involved with such frivolities. The same cultural critics who treat Shakespeare’s vagina jokes as the highest form of art are liable to dismiss the poignant critique of war in Call of Duty: Black Ops or the reflections on cultural diversity in Skyrim as mere puerility.

But video games are an art form with a fundamentally greater potential than any other. Now that graphics are almost photorealistic, there is really nothing you can do in a play or a film that you can’t do in a video game—and there is so, so much more that you can only do in a game.

In what other medium can we witness the spontaneous emergence and costly aftermath of a war? Yet EVE Online has this sort of event every year or so—just today there was a surprise attack involving hundreds of players that destroyed thousands of hours’—and dollars’—worth of starships, something that has more or less become an annual tradition. A few years ago there was a massive three-faction war that destroyed over $300,000 in ships and has now been commemorated as “the Bloodbath of B-R5RB”.

Indeed, the immersion and interactivity of games present an opportunity to do nothing less than experimental macroeconomics. For generations it has been impossible, or at least absurdly unethical, to ever experimentally manipulate an entire macroeconomy. But in a video game like EVE Online or Second Life, we can now do so easily, cheaply, and with little or no long-term harm to the participants—and we can literally control everything in the experiment. Forget the natural resource constraints and currency exchange rates—we can change the laws of physics if we want. (Indeed, EVE‘s whole trade network is built around FTL jump points, and in Second Life it’s a basic part of the interface that everyone can fly like Superman.)

This provides untold potential for economic research. With sufficient funding, we could build a game that would allow us to directly test hypotheses about the most fundamental questions of economics: How do governments emerge and maintain security? How is the rule of law sustained, and when can it be broken? What controls the value of money and the rate of inflation? What is the fundamental cause of unemployment, and how can it be corrected? What influences the rate of technological development? How can we maximize the rate of economic growth? What effect does redistribution of wealth have on employment and output? I envision a future where we can directly simulate these questions with thousands of eager participants, varying the subtlest of parameters and carrying out events over any timescale we like from seconds to centuries.

Nor is the potential of games in economics limited to research; it also has enormous untapped potential in education. I’ve already seen in my classes how tabletop-style games with poker chips can teach a concept better in a few minutes than hours of writing algebra derivations on the board; but custom-built video games could be made that would teach economics far better still, and to a much wider audience. In a well-designed game, people could really feel the effects of free trade or protectionism, not just on themselves as individuals but on entire nations that they control—watch their GDP numbers go down as they scramble to produce in autarky what they could have bought for half the price if not for the tariffs. They could see, in real time, how in the absence of environmental regulations and Pigovian taxes the actions of millions of individuals could despoil our planet for everyone.

Of course, games are fundamentally works of fiction, subject to the Fictional Evidence Fallacy and only as reliable as their authors make them. But so it is with all forms of art. I have no illusions about the fact that we will never get the majority of the population to regularly read peer-reviewed empirical papers. But perhaps if we are clever enough in the games we offer them to play, we can still convey some of the knowledge that those papers contain. We could also update and expand the games as new information comes in. Instead of complaining that our students are spending time playing games on their phones and tablets, we could actually make education into games that are as interesting and entertaining as the ones they would have been playing. We could work with the technology instead of against it. And in a world where more people have access to a smartphone than to a toilet, we could finally bring high-quality education to the underdeveloped world quickly and cheaply.

Rapid growth in computing power has given us a gift of great potential. But soon our capacity will widen even further. Even if Moore’s Law slows down, computing power will continue to increase for a while yet. Soon enough, virtual reality will finally take off and we’ll have even greater depth of immersion available. The future is bright—if we can avoid this corporatist cyberpunk dystopia we seem to be hurtling toward, of course.

The urban-rural divide runs deep

Feb 5, JDN 2457790

Are urban people worth less than rural people?

That probably sounds like a ridiculous thing to ask; of course not, all people are worth the same (other things equal of course—philanthropists are worth more than serial murderers). But then, if you agree with that, you’re probably an urban person, as I’m sure most of my readers are (and as indeed most people in highly-developed countries are).

A disturbing number of rural people, however, honestly do seem to believe this. They think that our urban lifestyles (whatever they imagine those to be) devalue us as citizens and human beings.

That is the key subtext to understand in the terrifying phenomenon that is Donald Trump. Most of the people who voted for him can’t possibly have thought he was actually trustworthy, and many probably didn’t actually support his policies of bigotry and authoritarianism (though he was very popular among bigots and authoritarians). From speaking with family members and acquaintances who proudly voted for Trump, one thing came through very clearly: This was a gigantic middle finger pointed at cities. They didn’t even really want Trump; they just knew we didn’t, and so they voted for him out of spite as much as anything else. They also have really confused views about free trade, so some of them voted for him because he promised to bring back jobs lost to trade (that weren’t lost to trade, can’t be brought back, and shouldn’t be even if they could). Talk with a Trump voter for a few minutes, and sneers of “latte-sipping liberal” (I don’t even like coffee) and “coastal elite” (I moved here to get educated; I wasn’t born here) are sure to follow.

There has always been some conflict between rural and urban cultures, for as long as there have been urban cultures for rural cultures to be in conflict with. It is found not just in the US, but in most if not all countries around the world. It was relatively calm during the postwar boom in the 20th century, as incomes everywhere (or at least everywhere within highly-developed countries) were improving more or less in lockstep. But the 21st century has brought us much more unequal growth, concentrated on particular groups of people and particular industries. This has brought more resentment. And that divide, above all else, is what brought us Trump; the correlation between population density and voting behavior is enormous.

Of course, “urban” is sometimes a dog-whistle for “Black”; but sometimes I think it actually really means “urban”—and yet there’s still a lot of hatred embedded in it. Indeed, perhaps that’s why the dog-whistle works; a White man from a rural town can sneer at “urban” people and it’s not entirely clear whether he’s being racist or just being anti-urban.

The assumption that rural lifestyles are superior runs so deep in our culture that even in articles by urban people (like this one from the LA Times) supposedly reflecting about how to resolve this divide, there are long paeans to the world of “hard work” and “sacrifice” and “autonomy” of rural life, and mocking “urban elites” for their “disproportionate” (by which you can only mean almost proportionate) power over government.

Well, guess what? If you want to live in a rural area, go live in a rural area. Don’t pine for it. Don’t tell me how great farm life is. If you want to live on a farm, go live on a farm. I have nothing against it; we need farmers, after all. I just want you to shut up about how great it is, especially if you’re not going to actually do it. Pining for someone else’s lifestyle when you could easily take on that lifestyle if you really wanted it just shows that you think the grass is greener on the other side.

Because the truth is, farm living isn’t so great for most people. The world’s poorest people are almost all farmers. 70% of people below the UN poverty line live in rural areas, even as more and more of the world’s population moves into cities. If you use a broader poverty measure, as many as 85% of the world’s poor live in rural areas.

The kind of “autonomy” that means defending your home with a shotgun is normally what we would call anarchy—it’s a society that has no governance, no security. (Of course, in the US that’s pure illusion; crime rates in general are low and falling, and lower in rural areas than urban areas. But in some parts of the world, that anarchy is very real.) One of the central goals of global economic development is to get people away from subsistence farming into far more efficient manufacturing and service jobs.

At least in the US, farm life is a lot better than it used to be, now that agricultural technology has improved so that one farmer can now do the work of hundreds. Despite increased population and increased food consumption per person, the number of farmers in the US is now the smallest it has been since before the Civil War. The share of employment devoted to agriculture has fallen from over 80% in 1800 to under 2% today. Even just since the 1960s labor productivity of US farms has more than tripled.

But some 80% of Americans have chosen to live in cities—and yes, I can clearly say “chosen”, because cities are more expensive and therefore urban living is a voluntary activity. Most of us who live in the city right now could move to the country if we really wanted to. We choose not to, because we know our life would be worse if we did.

Indeed, I dare say that a lot of the hatred of city-dwellers has got to be envy. Our (median) incomes are higher and our (mean) lifespans are longer. Fewer of our children are in poverty. Life is better here—we know it, and deep down, they know it too.

We also have better Internet access, unsurprisingly—though rural areas are only a few years behind, and the technology improves so rapidly that the share of rural homes in the US with Internet access today is twice what the share of urban homes was in 1998.

Now, a rational solution to this problem would be either to improve the lives of people in rural areas or else move everyone to urban areas—and both of those things have been happening, not only in the US but around the world. But in order to do that, you need to be willing to change things. You have to give up the illusion that farm life is some wonderful thing we should all be emulating, rather than the necessary toil that humanity was forced to go through for centuries until civilization could advance beyond it. You have to be willing to replace farmers with robots, so that people who would have been farmers can go do something better with their lives. You need to give up the illusion that there is something noble or honorable about hard labor on a farm—indeed, you need to give up the illusion that there is anything noble or honorable about hard work in general. Work is not a benefit; work is a cost. Work is what we do because we have to—and when we no longer have to do it, we should stop. Wanting to escape toil and suffering doesn’t make you lazy or selfish—it makes you rational.

We could surely be more welcoming—but cities are obviously more welcoming to newcomers than rural areas are. Our housing is too expensive, but that’s in part because so many people want to live here—supply hasn’t been able to keep up with demand.

I may seem to be presenting this issue as one-sided; don’t urban people devalue rural people too? Sometimes. Insults like “hick” and “yokel” and “redneck” do of course exist. But I’ve never heard anyone from a city seriously argue that people who live in rural areas should have votes that systematically count for less than those of people who live in cities—yet the reverse is literally what people are saying when they defend the Electoral College. If you honestly think that the Electoral College deserves to exist in anything like its present form, you must believe that some Americans are worth more than others, and the people who are worth more are almost all in rural areas while the people who are worth less are almost all in urban areas.

No, National Review, the Electoral College doesn’t “save” America from California’s imperial power; it gives imperial power to a handful of swing states. The only reason California would be more important than any other state is that more Americans live here. Indeed, a lot of Republicans in California are effectively disenfranchised, because they know that their votes will never overcome the overwhelming Democratic majority for the state as a whole and the system is winner-takes-all. Indeed, about 30% of California votes Republican (well, not in the last election, because that was Trump—Orange County went Democrat for the first time in decades), so the number of disenfranchised Republicans alone in California is larger than the population of Michigan, which in turn is larger than the population of Wyoming, North Dakota, South Dakota, Montana, Nebraska, West Virginia, and Kansas combined. Indeed, there are more people in California than there are in Canada. So yeah, I’m thinking maybe we should get a lot of votes?

But it’s easy for you to drum up fear over “imperial rule” by California in particular, because we’re so liberal—and so urban, indeed an astonishing 95% urban, the most of any US state (or frankly probably any major regional entity on the planet Earth! To beat that you have to be something like Singapore, which literally just is a single city).

In fact, while insults thrown at urban people get thrown at basically all of us regardless of what we do, most of the insults that are thrown at rural people are mainly thrown at uneducated rural people. (And statistically, while many people in rural areas are educated and many people in urban areas are not, there’s definitely a positive correlation between urbanization and education.) It’s still unfair in many ways, not least because education isn’t entirely a choice, not in a society where tuition at an average private university costs more than the median individual income. Many of the people we mock as being stupid were really just born poor. It may not be their fault, but they can’t believe that the Earth is only 10,000 years old and not have some substantial failings in their education. I still don’t think mockery is the right answer; it’s really kicking them while they’re down. But clearly there is something wrong with our society when 40% of people believe something so obviously ludicrous—and those beliefs are very much concentrated in the same Southern states that have the most rural populations. “They think we’re ignorant just because we believe that God made the Earth 6,000 years ago!” I mean… yes? I’m gonna have to own up to that one, I guess. I do in fact think that people who believe things that were disproven centuries ago are ignorant.

So really this issue is one-sided. We who live in cities are being systematically degraded and disenfranchised, and when we challenge that system we are accused of being selfish or elitist or worse. We are told that our lifestyles are inferior and shameful, and when we speak out about the positive qualities of our lives—our education, our acceptance of diversity, our flexibility in the face of change—we are again accused of elitism and condescension.

We could simply stew in that resentment. But we can do better. We can reach out to people in rural areas, show them not just that our lives are better—as I said, they already know this—but that they can have these lives too. And we can make policy so that this really can happen for people. Envy doesn’t automatically lead to resentment; that only happens when combined with a lack of mobility. The way urban people pine for the countryside is baffling, since we could go there any time; but the way that country people long for the city is perfectly understandable, as our lives really are better but our rent is too high for them to afford. We need to bring that rent down, not just for the people already living in cities, but also for the people who want to but can’t.

And of course we don’t want to move everyone to cities, either. Many people won’t want to live in cities, and we need a certain population of farmers to grow our food, after all. We can work to improve infrastructure in rural areas—particularly when it comes to hospitals, which are a basic necessity that is increasingly underfunded. We shouldn’t stop using cost-effectiveness calculations, but we need to compare against the right things. If that hospital isn’t worth building, it should be because there’s another, better hospital we could build for the same amount or less—not because we think that this town doesn’t deserve to have a hospital. We can expand our public transit systems over a wider area, and improve their speeds so that people can more easily travel to the city from further away.

We should seriously face up to the costs that free trade has imposed upon many rural areas. We can’t give up on free trade—but that doesn’t mean we need to keep our trade policy exactly as it is. We can do more to ensure that multinational corporations don’t have overwhelming bargaining power against workers and small businesses. We can establish a tax system that would redistribute more of the gains from free trade to the people and places most hurt by the transition. Right now, poor people in the US are often the most fiercely opposed to redistribution of wealth, because somehow they perceive that wealth will be redistributed from them when it would in fact be redistributed to them. They are in a scarcity mindset, their whole worldview shaped by the fact that they struggle to get by. They see every change as a threat, every stranger as an enemy.

Somehow we need to fight that mindset and get them to see that there are many positive changes that can be made, many things that we can achieve together that none of us could achieve alone.

Why do so many Americans think that crime is increasing?

Jan 29, JDN 2457783

Since the 1990s, crime in the United States has been decreasing, and yet in every poll since then most Americans report that they believe crime is increasing.

It’s not a small decrease either. The US murder rate is down to the lowest it has been in a century. There are now fewer violent crimes per year in the US in absolute terms (by 34 log points) than there were 20 years ago, despite a significant increase in total population (19 log points)—and the magic of log points is that they simply add: the rate has therefore decreased by precisely 34 + 19 = 53 log points.
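That additivity is the whole appeal of log points, and it can be sketched in a few lines of Python. (A minimal illustration: the function name `log_points` is my own, and the figures are the ones quoted above.)

```python
import math

def log_points(ratio):
    """Convert a ratio of new/old values to log points (100 * natural log)."""
    return 100 * math.log(ratio)

# Figures from the text: absolute violent crimes fell by 34 log points while
# total population rose by 19 log points over the same 20-year period.
crime_change = -34.0      # log points (decrease)
population_change = 19.0  # log points (increase)

# Since rate = crimes / population, log(rate) = log(crimes) - log(population),
# so changes in the rate are just differences of the two changes.
rate_change = crime_change - population_change
print(rate_change)  # -53.0

# Sanity check on an ordinary percentage: a 26% decline leaves a ratio of
# 0.74, which is about -30 log points (as the text states further on).
print(round(log_points(0.74)))  # -30
```

Unlike percentage changes, which don’t compose (a 34% drop plus a 19% rise is not a 53% drop in the rate), log points can simply be added and subtracted, which is why economists like them.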

It isn’t geographically uniform, of course; some states have improved much more than others, and a few states (such as New Mexico) have actually gotten worse.

The 1990s were a peak of violent crime, so one might say that we are just regressing to the mean. (Even that would be enough to make it baffling that people think crime is increasing.) But in fact overall crime in the US is now the lowest it has been since the 1970s, and still decreasing.

Indeed, this decrease has been underestimated, because we are now much better about reporting and investigating crimes than we used to be (which may also be part of why they are decreasing, come to think of it). If you compare against surveys of people who say they have been personally victimized, we’re looking at a decline in violent crime rates of two thirds—109 log points.

Just since 2008 violent crime has decreased by 26% (30 log points)—but of course we all know that Obama is “soft on crime” because he thinks cops shouldn’t be allowed to just shoot Black kids for no reason.

And yet, over 60% of Americans believe that overall crime in the US has increased in the last 10 years (though only 38% think it has increased in their own community!). These figures are actually down from 2010, when 66% thought crime was increasing nationally and 49% thought it was increasing in their local area.

The proportion of people who think crime is increasing does seem to decrease as crime rates decrease—but it still remains alarmingly high. If people were half as rational as most economists seem to believe, the proportion of people who think crime is increasing should drop to basically zero whenever crime rates decrease, since that’s a really basic fact about the world that you can just go look up on the Web in a couple of minutes. There’s no deep ambiguity, not even much “rational ignorance” given the low cost of getting correct answers. People just don’t bother to check, or don’t feel they need to.

What’s going on? How can crime fall to half what it was 20 years ago and yet almost two-thirds of people think it’s actually increasing?

Well, one hint is that news coverage of crime doesn’t follow the same pattern as actual crime.

News coverage in general is a terrible source of information, not simply because news organizations can be biased, make glaring mistakes, and sometimes outright lie—but actually for a much more fundamental reason: Even a perfect news channel, qua news channel, would report what is surprising—and what is surprising is, by definition, improbable. (Indeed, there is a formal mathematical concept in probability theory called surprisal that is simply the logarithm of 1 over the probability.) Even assuming that news coverage reports only the truth, the probability of seeing something on the news isn’t proportional to the probability of the event occurring—it’s more likely proportional to that event’s contribution to the entropy, which is its probability times its surprisal.
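The relationship between probability, surprisal, and entropy can be made concrete with a short Python sketch. (The probabilities here are made up for illustration, and `surprisal` and `entropy_contribution` are names I’ve chosen, not standard library functions.)

```python
import math

def surprisal(p):
    """Surprisal of an event with probability p: log(1/p), measured in bits."""
    return math.log2(1 / p)

def entropy_contribution(p):
    """An event's contribution to entropy: probability times surprisal."""
    return p * surprisal(p)

# A rare event (p = 0.01) is far more surprising than a common one (p = 0.5):
rare, common = 0.01, 0.5
print(surprisal(rare))    # about 6.64 bits
print(surprisal(common))  # exactly 1.0 bit

# But coverage proportional to p * surprisal means the rare event, despite
# being 50x less likely, gets only about 7.5x less coverage than the common
# one, so rare events are heavily overrepresented relative to their probability.
print(entropy_contribution(rare))    # about 0.066
print(entropy_contribution(common))  # exactly 0.5
```

This is the sense in which even truthful, entropy-proportional reporting distorts frequencies: a viewer who counts stories rather than reconverting them back into probabilities will systematically overestimate how common rare events are.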

Now, if humans were optimal information processing engines, that would be just fine, actually; reporting events proportional to their entropy is actually a very efficient mechanism for delivering information (optimal, under certain types of constraints), provided that you can then process the information back into probabilities afterward.

But of course, humans aren’t optimal information processing engines. We don’t recompute the probabilities from the given entropy; instead we use the availability heuristic, by which we simply use the number of times we can think of something happening as our estimate of the probability of that event occurring. If you see more murders on TV news than you used to, you assume that murders must be more common than they used to be. (And when I put it like that, it really doesn’t sound so unreasonable, does it? Intuitively the availability heuristic seems to make sense—which is part of why it’s so insidious.)

Another likely reason for the discrepancy between perception and reality is nostalgia. People almost always have a more positive view of the past than it deserves, particularly when referring to their own childhoods. Indeed, I’m quite certain that a major reason why people think the world was much better when they were kids was that their parents didn’t tell them what was going on. And of course I’m fine with that; you don’t need to burden 4-year-olds with stories of war and poverty and terrorism. I just wish people would realize that they were being protected from the harsh reality of the world, instead of thinking that their little bubble of childhood innocence was a genuinely much safer world than the one we live in today.

Then take that nostalgia and combine it with the availability heuristic and the wall-to-wall TV news coverage of anything bad that happens—and almost nothing good that happens, certainly not if it’s actually important. I’ve seen bizarre fluff pieces about puppies, but never anything about how world hunger is plummeting or air quality is dramatically improved or cars are much safer. That’s the one thing I will say about financial news; at least they report it when unemployment is down and the stock market is up. (Though most Americans, especially most Republicans, still seem really confused on those points as well….) They will attribute it to anything from sunspots to the will of Neptune, but at least they do report good news when it happens. It’s no wonder that people are always convinced that the world is getting more dangerous even as it gets safer and safer.

The real question is what we do about it—how do we get people to understand even these basic facts about the world? I still believe in democracy, but when I see just how painfully ignorant so many people are of such basic facts, I understand why some people don’t. The point of democracy is to represent everyone’s interests—but we also end up representing everyone’s beliefs, and sometimes people’s beliefs just don’t line up with reality. The only way forward I can see is to find a way to make people’s beliefs better align with reality… but even that isn’t so much a strategy as an objective. What do I say to someone who thinks that crime is increasing, beyond showing them the FBI data that clearly indicates otherwise? When someone is willing to override all evidence with what they feel in their heart to be true, what are the rest of us supposed to do?