Statistics you should have been taught in high school, but probably weren’t

Oct 15, JDN 2458042

Today I’m trying something a little different. This post will assume a lot less background knowledge than most of the others. For some of my readers, this post will probably seem too basic, obvious, even boring. For others, it might feel like a breath of fresh air, relief at last from the overly-dense posts I am generally inclined to write out of the Curse of Knowledge. Hopefully I can balance these two effects well enough to gain rather than lose readers.

Here are four core statistical concepts that I think all adults should know, necessary for functional literacy in understanding the never-ending stream of news stories about “A new study shows…” and more generally in applying social science to political decisions. In theory these should all be taught as part of a core high school curriculum, but typically they either aren’t taught or aren’t retained once students graduate. (Really, I think we should replace one year of algebra with one semester of statistics and one semester of logic. Most people don’t actually need algebra, but they absolutely do need logic and statistics.)

  1. Mean and median

The mean and the median are quite simple concepts, and you’ve probably at least heard of them before, yet confusion between them has caused a great many misunderstandings.

Part of the problem is the word “average”. Normally, the word “average” applies to the mean—for example, a batting average, or an average speed. But in common usage the word “average” can also mean “typical” or “representative”—an average person, an average family. And in many cases, particularly when it comes to economics, the mean is in no way typical or representative.

The mean of a sample of values is just the sum of all those values, divided by the number of values. The mean of the sample {1,2,3,10,1000} is (1+2+3+10+1000)/5 = 203.2.

The median of a sample of values is the middle one—order the values, choose the one in the exact center. If you have an even number, take the mean of the two values on either side. So the median of the sample {1,2,3,10,1000} is 3.
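In code, both definitions are only a few lines each; here is a quick sketch using the same sample as above:

```python
# The skewed sample from the text: one huge value drags the mean far
# away from the median.
sample = [1, 2, 3, 10, 1000]

# Mean: sum of the values divided by how many there are.
mean = sum(sample) / len(sample)

def median(values):
    """Middle value of the sorted list; mean of the two middle values if even."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

print(mean)            # 203.2
print(median(sample))  # 3
```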

I intentionally chose an extreme example: The mean and median of this sample are completely different. But this is something that can happen in real life.

This is vital for understanding the distribution of income, because for almost all countries (and certainly for the world as a whole), the mean income is substantially higher (usually between 50% and 100% higher) than the median income. Yet it is the mean income that gets reported as “per capita GDP”, even though the median income is a much better measure of the actual standard of living.

As for the word “average”, it’s probably best to just remove it from your vocabulary. Say “mean” instead if that’s what you intend, or “median” if that’s what you’re using instead.

  2. Standard deviation and mean absolute deviation

Standard deviation is another one you’ve probably seen before.

Standard deviation is kind of a weird concept, honestly. It’s so entrenched in statistics that we’re probably stuck with it, but it’s really not a very good measure of anything intuitively interesting.

Mean absolute deviation is a much more intuitive concept, and much more robust to weird distributions (such as those of incomes and financial markets), but it isn’t as widely used by statisticians for some reason.

The standard deviation is defined as the square root of the mean of the squared differences between the individual values in a sample and the mean of that sample. So for my {1,2,3,10,1000} example, the standard deviation is sqrt(((1-203.2)^2 + (2-203.2)^2 + (3-203.2)^2 + (10-203.2)^2 + (1000-203.2)^2)/5) = 398.4.

What can you infer from that figure? Not a lot, honestly. The standard deviation is bigger than the mean, so we have some sense that there’s a lot of variation in our sample. But interpreting exactly what that means is not easy.

The mean absolute deviation is much simpler: It’s the mean of the absolute value of differences between the individual values in a sample and the mean of that sample. In this case it is ((203.2-1) + (203.2-2) + (203.2-3) + (203.2-10) + (1000-203.2))/5 = 318.7.

This has a much simpler interpretation: The mean distance between each value and the mean is 318.7. On average (if we still use that word), each value is about 318.7 away from the mean of 203.2.
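Both measures are easy to compute directly; here is a sketch using the same sample:

```python
sample = [1, 2, 3, 10, 1000]
mean = sum(sample) / len(sample)  # 203.2

# Standard deviation: square root of the mean squared distance from the mean.
std_dev = (sum((x - mean) ** 2 for x in sample) / len(sample)) ** 0.5

# Mean absolute deviation: the mean distance from the mean.
mad = sum(abs(x - mean) for x in sample) / len(sample)

print(round(std_dev, 1))  # 398.4
print(round(mad, 1))      # 318.7
```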

When you ask people to interpret a standard deviation, most of them actually reply as if you had asked them about the mean absolute deviation. They say things like “the average distance from the mean”. Only people who know statistics very well and are being very careful would actually say the true answer, “the square root of the mean of the squared distances from the mean”.

But there is an even more fundamental reason to prefer the mean absolute deviation, and that is that sometimes the standard deviation doesn’t exist!

For very fat-tailed distributions, the sum that would give you the standard deviation simply fails to converge. You could say the standard deviation is infinite, or that it’s simply undefined. Either way we know it’s fat-tailed, but that’s about all. Any finite sample would have a well-defined standard deviation, but that will keep changing as your sample grows, and never converge toward anything in particular.

But usually the mean still exists, and if the mean exists, then the mean absolute deviation also exists. (In some rare cases even they fail, such as the Cauchy distribution—but actually even then there is usually a way to recover what the mean and mean absolute deviation “should have been” even though they don’t technically exist.)
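A small simulation can illustrate this. The sketch below uses a Pareto distribution with shape 1.5 as a hypothetical example of a fat-tailed distribution: its mean exists (it equals 3), but its variance is infinite, so the population standard deviation simply doesn’t exist.

```python
import random

random.seed(0)

# Pareto with shape alpha = 1.5 (scale 1): mean = alpha/(alpha-1) = 3,
# but the variance is infinite.
for n in [100, 10_000, 1_000_000]:
    xs = [random.paretovariate(1.5) for _ in range(n)]
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    mad = sum(abs(x - m) for x in xs) / n
    print(n, round(sd, 2), round(mad, 2))
# The sample standard deviation tends to keep growing as n increases,
# while the mean absolute deviation settles near its finite limit (~2.3).
```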

  3. Standard error

The standard error is even more important for statistical inference than the standard deviation, and frankly even harder to intuitively understand.

The actual definition of the standard error is this: The standard deviation of the distribution of sample means, provided that the null hypothesis is true and the distribution is a normal distribution.

How it is usually used is something more like this: “A good guess of the margin of error on my estimates, such that I’m probably not off by more than 2 standard errors in either direction.”

You may notice that those two things aren’t the same, and don’t even seem particularly closely related. You are correct in noticing this, and I hope that you never forget it. One thing that extensive training in statistics (especially frequentist statistics) seems to do to people is to make them forget that.

In particular, the standard error strictly only applies if the value you are trying to estimate is zero, which usually means that your results aren’t interesting. (To be fair, not always; finding zero effect of minimum wage on unemployment was a big deal.) Using it as a margin of error on your actual nonzero estimates is deeply dubious, even though almost everyone does it for lack of an uncontroversial alternative.
Application of standard errors typically also relies heavily on the assumption of a normal distribution, even though plenty of real-world distributions aren’t normal and don’t even approach a normal distribution in quite large samples. The Central Limit Theorem says that the sampling distribution of the mean of any non-fat-tailed distribution will approach a normal distribution eventually as sample size increases, but it doesn’t say how large a sample needs to be to do that, nor does it apply to fat-tailed distributions.

Therefore, the standard error is really a very conservative estimate of your margin of error; it assumes essentially that the only kind of error you had was random sampling error from a normal distribution in an otherwise perfect randomized controlled experiment. All sorts of other forms of error and bias could have occurred at various stages—and typically, did—making your error estimate inherently too small.
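For concreteness, here is the usual recipe applied to a small made-up sample (the numbers are purely illustrative): the standard error of the mean is the sample standard deviation divided by the square root of the sample size, and “mean plus or minus 2 standard errors” is the conventional rough 95% margin of error.

```python
# Illustrative only: a small made-up sample.
sample = [12.1, 9.8, 11.4, 10.2, 10.9, 9.5, 11.7, 10.6]
n = len(sample)
mean = sum(sample) / n

# Sample standard deviation with the usual n - 1 convention,
# then standard error = s / sqrt(n).
s = (sum((x - mean) ** 2 for x in sample) / (n - 1)) ** 0.5
se = s / n ** 0.5

print(f"mean = {mean:.3f}, standard error = {se:.3f}")
print(f"rough 95% interval: ({mean - 2 * se:.3f}, {mean + 2 * se:.3f})")
```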

This is why you should never believe a claim that comes from only a single study or a handful of studies. There are simply too many things that could have gone wrong. Only when there are a large number of studies, with varying methodologies, all pointing to the same core conclusion, do we really have good empirical evidence of that conclusion. This is part of why the journalistic model of “A new study shows…” is so terrible; if you really want to know what’s true, you look at large meta-analyses of dozens or hundreds of studies, not a single study that could be completely wrong.

  4. Linear regression and its limits

Finally, I come to linear regression, the workhorse of statistical social science. Almost everything in applied social science ultimately comes down to variations on linear regression.

There is the simplest kind, ordinary least squares (OLS); but then there are two-stage least squares (2SLS), fixed-effects regression, clustered regression, random-effects regression, heterogeneous treatment effects, and so on.
The basic idea of all regressions is extremely simple: We have an outcome Y, a variable we are interested in D, and some other variables X.

This might be an effect of education D on earnings Y, or minimum wage D on unemployment Y, or eating strawberries D on getting cancer Y. In our X variables we might include age, gender, race, or whatever seems relevant to Y but can’t be affected by D.

We then make the incredibly bold (and typically unjustifiable) assumption that all the effects are linear, and say that:

Y = A + B*D + C*X + E

A, B, and C are coefficients we estimate by fitting a straight line through the data. The last bit, E, is a random error that we allow to fill in any gaps. Then, if the standard error of B is less than half the size of B itself, we declare that our result is “statistically significant”, and we publish our paper “proving” that D has an effect on Y that is proportional to B.

No, really, that’s pretty much it. Most of the work in econometrics involves trying to find good choices of X that will make our estimates of B better. A few of the more sophisticated techniques involve breaking up this single regression into a few pieces that are regressed separately, in the hopes of removing unwanted correlations between our variable of interest D and our error term E.
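Here is a toy sketch of that recipe, with entirely made-up data and a single regressor D (no X controls), including the “statistically significant when |B| exceeds 2 standard errors” check:

```python
# Made-up data for illustration: outcome Y and variable of interest D.
D = [1, 2, 3, 4, 5, 6, 7, 8]
Y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.8, 8.2, 8.9]

n = len(D)
d_bar = sum(D) / n
y_bar = sum(Y) / n

# OLS estimates for Y = A + B*D + E
Sdd = sum((d - d_bar) ** 2 for d in D)
B = sum((d - d_bar) * (y - y_bar) for d, y in zip(D, Y)) / Sdd
A = y_bar - B * d_bar

# Residual variance (n - 2 degrees of freedom) and the standard error of B
residuals = [y - (A + B * d) for d, y in zip(D, Y)]
s2 = sum(e ** 2 for e in residuals) / (n - 2)
se_B = (s2 / Sdd) ** 0.5

print(f"B = {B:.3f}, SE(B) = {se_B:.3f}")
# The usual "statistically significant" check: |B| > 2 * SE(B)
print("significant:", abs(B) > 2 * se_B)
```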

What about nonlinear effects, you ask? Yeah, we don’t much talk about those.

Occasionally we might include a term for D^2:

Y = A + B1*D + B2*D^2 + C*X + E

Then, if the coefficient B2 is small enough, which is usually what happens, we say “we found no evidence of a nonlinear effect”.

Those who are a bit more sophisticated will instead report (correctly) that they have found the linear projection of the effect, rather than the effect itself; but if the effect was nonlinear enough, the linear projection might be almost meaningless. Also, if you’re too careful about the caveats on your research, nobody publishes your work, because there are plenty of other people competing with you who are willing to upsell their research as far more reliable than it actually is.

If this process seems rather underwhelming to you, that’s good. I think people being too easily impressed by linear regression is a much more widespread problem than people not having enough trust in linear regression.

Yes, it is possible to go too far the other way, and dismiss even dozens of brilliant experiments as totally useless because they used linear regression; but I don’t actually hear people doing that very often. (Maybe occasionally: The evidence that gun ownership increases suicide and homicide and that corporal punishment harms children is largely based on linear regression, but it’s also quite strong at this point, and I do still hear people denying it.)

Far more often I see people point to a single study using linear regression to prove that blueberries cure cancer or eating aspartame will kill you or yoga cures back pain or reading Harry Potter makes you hate Donald Trump or olive oil prevents Alzheimer’s or psychopaths are more likely to enjoy rap music. The more exciting and surprising a new study is, the more dubious you should be of its conclusions. If a very surprising result is unsupported by many other studies and just uses linear regression, you can probably safely ignore it.

A really good scientific study might use linear regression, but it would also be based on detailed, well-founded theory and apply a proper experimental (or at least quasi-experimental) design. It would check for confounding influences, look for nonlinear effects, and be honest that standard errors are a conservative estimate of the margin of error. Most scientific studies probably should end by saying “We don’t actually know whether this is true; we need other people to check it.” Yet sadly few do, because the publishers that have a stranglehold on the industry prefer sexy, exciting, “significant” findings to actual careful, honest research. They’d rather you find something that isn’t there than not find anything, which goes against everything science stands for. Until that changes, all I can really tell you is to be skeptical when you read about linear regressions.

Building a wider tent, revisited

Sep 17, JDN 2458014

At a reader’s suggestion, I am expanding upon the argument I made a few weeks ago that political coalitions are strongest when they are willing to accept some disagreement. I made that argument with numbers, which is likely to convince someone like me; but I know that many other people don’t really think that way, so it may help to provide some visuals as well.

60% of this rectangle is filled in red.


This represents the proportion of the population that agrees with you on some issue. For concreteness but to avoid making this any more political than it already is, I’m going to pick silly issues. So let’s have this first issue be about which side of the road we should drive on. Let’s say your view is that we should drive on the right. 60% of people agree that we should drive on the right. The other 40% think we should drive on the left.

Now let’s consider another issue. Let’s say this one is about putting pineapples on pizza. You, and 60% of people, agree that pineapples should not be put on pizza. The other 40% think we should put pineapples on pizza.

For now, let’s assume those two issues are independent, that someone’s opinions on driving and pizza are unrelated. Then we can fill 60% of the rectangle in blue, but as a perpendicular stripe, because the two issues are unrelated:


Those who agree with you on driving but not pizza (that would include me, by the way) are in red, those who agree with you on pizza but not driving are in blue, those who agree with you on both are in purple, and those who disagree with you on both are in white. You should already be able to see that less than half the population agrees with you on both issues, even though more than half agrees on each.

Let’s add a third issue, which we will color in green. This one can be the question of whether Star Trek is better than Star Wars. Let’s say that 60% of the population agrees with you that Star Trek is better, while 40% think that Star Wars is better. Let’s also assume that this is independent of opinions on both driving and pizza.


This is already starting to get unwieldy; there are now eight distinct regions. The white region (8) is comprised of people who disagree with you on everything. The red (6), blue (4), and green (7) regions each have people agree with you on exactly one issue. The blue-green (3), purple (2), and brown (5) regions have people agree with you on two issues. Only those in the dark-green region (1) agree with you on everything.

As you can see, the proportion of people who agree with you on all issues is fairly small, even though the majority of the population agrees with you on any given issue.

If we keep adding issues, this effect gets even stronger. I’m going to change the color-coding now to simplify things. Now, blue will indicate the people who agree with you on all issues, green the people who agree on all but one issue, yellow the people who agree on all but two issues, and red the people who disagree with you on three or more issues.

For three issues, that looks like this, which you can compare to the previous diagram:


Now let’s add a fourth issue. Let’s say 60% of people agree with you that socks should not be worn with sandals, but 40% think that socks should be worn with sandals. The blue region gets smaller:


How about a fifth issue? Let’s say 60% of people agree with you that cats are better than dogs, while 40% think that dogs are better than cats. The blue region continues to shrink:


How about a sixth issue?


And finally, a seventh issue?


Now the majority of the space is covered by red, meaning that most of the population disagrees with you on at least three issues.

To recap:

By the time there were two issues, the majority of the population disagreed with you on at least one issue.

By the time there were four issues, the majority of the population disagreed with you on at least two issues.

By the time there were seven issues, the majority of the population disagreed with you on at least three issues.

This despite the fact that the majority of the population always agrees with you on any given issue!
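The recap numbers can be checked directly with the binomial distribution: with independent issues, each with 60% agreement, the share of people who disagree with you on at most m of k issues is a simple sum.

```python
from math import comb

# Share of the population disagreeing with you on at most m of k
# independent issues, each with 60% agreement.
def share_disagreeing_at_most(k, m):
    return sum(comb(k, j) * 0.6 ** (k - j) * 0.4 ** j for j in range(m + 1))

print(f"{share_disagreeing_at_most(2, 0):.1%}")  # agree on both of 2 issues: 36.0%
print(f"{share_disagreeing_at_most(4, 1):.1%}")  # at most 1 of 4: 47.5%
print(f"{share_disagreeing_at_most(7, 2):.1%}")  # at most 2 of 7: 42.0%
# All three are under 50%, matching the recap above.
```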

If you only welcomed people into your coalition who agree on every single issue (the blue region), you wouldn’t win election if there were even two issues. If you only welcomed those who disagree on at most one (blue or green), you’d stop winning if there were at least four issues. And if there were at least seven issues, you couldn’t even win by allowing those who disagree on at most two issues (blue, green, yellow).

Now, this argument very much does rely upon the different opinions being independent, which in real politics is not the case. So let’s introduce some correlations and see how this changes the result.

Suppose that once someone agrees with you about driving on the right side of the road, they are 90% likely to agree on pizza, Star Trek, sandals, and cats.

That makes things look a lot better for you; by including one level of disagreement, you could dominate every election. But notice that even in this case, if you exclude all disagreement, you will continue to lose elections.

With enough issues, even with very strong correlations you can get the same effect. Suppose there are 20 issues, and if you agree on the first one, there is a 99% chance you’ll agree on each of the others. You are still only getting about half the electorate if you don’t allow any disagreement! Due to the very high correlation, if someone disagrees with you on a few things, they usually disagree with you on many things; yet you’re still better off including some disagreement in your coalition.
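That 20-issue figure is easy to verify: 60% agree on the first issue, and conditional on that, 99% agree on each of the other 19 (treated as independent given the first).

```python
# Share of the electorate agreeing with you on all 20 issues,
# under the strong-correlation assumption described above.
share_agreeing_on_everything = 0.6 * 0.99 ** 19
print(f"{share_agreeing_on_everything:.1%}")  # 49.6%
```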


Obviously, you shouldn’t include people in your coalition who actively oppose its core mission. Even if they aren’t actively trying to undermine you, at some point, the disagreement becomes so large that you’ve got to cut them loose. But in a pluralistic democracy, ideological purism is a surefire recipe for electoral failure. You need to allow at least some disagreement.

This isn’t even getting into the possibility that you might be wrong about some issues, and by including those who disagree with you, you may broaden your horizons and correct your mistakes. I’ve thus far assumed you are completely correct and in the majority on every single issue, and yet you still can’t win elections with complex policy mixes unless you include people who disagree with you.

Think of this as a moral recession

August 27, JDN 2457993

The Great Depression was, without doubt, the worst macroeconomic event of the last 200 years. Over 30 million people became unemployed. Unemployment exceeded 20%. Standard of living fell by as much as a third in the United States. Political unrest spread across the world, and the collapsing government of Germany ultimately became the Third Reich and triggered the Second World War. If we ignore the world war, however, the effect on mortality rates was surprisingly small. (“Other than that, Mrs. Lincoln, how was the play?”)

And yet, how long do you suppose it took for economic growth to repair the damage? 80 years? 50 years? 30 years? 20 years? Try ten to fifteen. By 1940, the US, Germany, and Japan all had a per-capita GDP at least as high as in 1930. By 1945, every country in Europe had a per-capita GDP at least as high as they did before the Great Depression.

The moral of this story is this: Recessions are bad, and can have far-reaching consequences; but ultimately what really matters in the long run is growth.

Assuming the same growth otherwise, a country that had a recession as large as the Great Depression would be about 70% as rich as one that didn’t.

But over 100 years, a country that experienced 3% growth instead of 2% growth would be over two and a half times richer.

Therefore, in terms of standard of living only, if you were given the choice between having a Great Depression but otherwise growing at 3%, and having no recessions but growing at 2%, your grandchildren will be better off if you chose the former. (Of course, given the possibility of political unrest or even war, the depression could very well end up worse.)
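The arithmetic behind that comparison:

```python
# One-time Great-Depression-sized loss vs. a century of faster growth.
depression_factor = 0.7                 # about 70% as rich, once
growth_gap = 1.03 ** 100 / 1.02 ** 100  # 3% vs. 2% growth for 100 years
print(round(growth_gap, 2))             # 2.65
# The compounding gap (~2.65x) dwarfs the one-time ~30% loss.
```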

With that in mind, I want you to think of the last few years—and especially the last few months—as a moral recession. Donald Trump being President of the United States is clearly a step backward for human civilization, and it seems to have breathed new life into some of the worst ideologies our society has ever harbored, from extreme misogyny, homophobia, right-wing nationalism, and White supremacism to outright Neo-Nazism. When one of the central debates in our public discourse is what level of violence is justifiable against Nazis under what circumstances, something has gone terribly, terribly wrong.

But much as recessions are overwhelmed in the long run by economic growth, there is reason to be confident that this moral backslide is temporary and will be similarly overwhelmed by humanity’s long-run moral progress.

What moral progress, you ask? Let’s remind ourselves.

Just 100 years ago, women could not vote in the United States.

160 years ago, slavery was legal in 15 US states.

Just 3 years ago, same-sex marriage was illegal in 14 US states. Yes, you read that number correctly. I said three. There are gay couples graduating high school and getting married now who as freshmen didn’t think they would be allowed to get married.

That’s just the United States. What about the rest of the world?

100 years ago, almost all of the world’s countries were dictatorships. Today, half of the world’s countries are democracies. Indeed, thanks to India, the majority of the world’s population now lives under democracy.

35 years ago, the Soviet Union still ruled most of Eastern Europe and Northern Asia with an iron fist (or should I say “curtain”?).

30 years ago, the number of human beings in extreme poverty—note I said number, not just rate; the world population was two-thirds what it is today—was twice as large as it is today.

Over the last 65 years, the global death rate due to war has fallen from 250 per million to just 10 per million.

The global literacy rate has risen from 40% to 80% in just 50 years.

World life expectancy has increased by 6 years in just the last 20 years.

We are living in a golden age. Do not forget that.

Indeed, if there is anything that could destroy all these astonishing achievements, I think it would be our failure to appreciate them.

If you listen to what these Neo-Nazi White supremacists say about their grievances, they sound like the spoiled children of millionaires (I mean, they elected one President, after all). They are outraged because they only get 90% of what they want instead of 100%—or even outraged not because they didn’t get what they wanted but because someone else they don’t know also did.

If you listen to the far left, their complaints don’t make much more sense. If you didn’t actually know any statistics, you’d think that life is just as bad for Black people in America today as it was under Jim Crow or even slavery. Well, it’s not even close. I’m not saying racism is gone; it’s definitely still here. But the civil rights movement has made absolutely enormous strides, from banning school segregation and housing redlining to reforming prison sentences and instituting affirmative action programs. Simply the fact that “racist” is now widely considered a terrible thing to be is a major accomplishment in itself. A typical Black person today, despite having only about 60% of the income of a typical White person, is still richer than a typical White person was just 50 years ago. While the 71% high school completion rate Black people currently have may not sound great, it’s much higher than the 50% rate that the whole US population had as recently as 1950.

Yes, there are some things that aren’t going very well right now. The two that I think are most important are climate change and income inequality. As both the global mean temperature anomaly and the world top 1% income share continue to rise, millions of people will suffer and die needlessly from diseases of poverty and natural disasters.

And of course if Neo-Nazis manage to take hold of the US government and try to repeat the Third Reich, that could be literally the worst thing that ever happened. If it triggered a nuclear war, it unquestionably would be literally the worst thing that ever happened. Both these events are unlikely—but not nearly as unlikely as they should be. (FiveThirtyEight interviewed several nuclear experts who estimated a probability of imminent nuclear war at a horrifying five percent.) So I certainly don’t want to make anyone complacent about these very grave problems.

But I worry also that we go too far the other direction, and fail to celebrate the truly amazing progress humanity has made thus far. We hear so often that we are treading water, getting nowhere, or even falling backward, that we begin to feel as though the fight for moral progress is utterly hopeless. If all these centuries of fighting for justice really had gotten us nowhere, the only sensible thing to do at this point would be to give up. But on the contrary, we have made enormous progress in an incredibly short period of time. We are on the verge of finally winning this fight. The last thing we want to do now is give up.

Building a wider tent is not compromising on your principles

August 20, JDN 2457986

After humiliating defeats in the last election, the Democratic Party is now debating how to recover and win future elections. One proposal that has been particularly hotly contested is over whether to include candidates who agree with the Democratic Party on most things, but still oppose abortion.

This would almost certainly improve the chances of winning seats in Congress, particularly in the South. But many have argued that this is a bridge too far, that it amounts to compromising on fundamental principles, and that the sort of DINOs (Democrats-In-Name-Only) we’d end up with are no better than no Democrats at all.

I consider this view deeply misguided; indeed, I think it’s a good portion of the reason why we got so close to winning the culture wars and yet suddenly there are literal Nazis marching in the streets. Insisting upon ideological purity on every issue is a fantastic way to amplify the backlash against you and ensure that you will always lose.

To show why, I offer you a simple formal model. Let’s make it as abstract as possible, and say there are five different issues, A, B, C, D, and E, and on each of them you can either choose Yes or No.

Furthermore, let’s suppose that on every single issue, the opinion of a 60% majority is “Yes”. If you are a political party that wants to support “Yes” on every issue, which of these options should you choose?

Option 1: Only run candidates who support “Yes” on every single issue

Option 2: Only run candidates who support “Yes” on at least 4 out of 5 issues

Option 3: Only run candidates who support “Yes” on at least 3 out of 5 issues

For now, let’s assume that people’s beliefs within a district are very strongly correlated (people believe what their friends, family, colleagues, and neighbors believe). Then assume that the beliefs of a given district are independently and identically distributed (each person essentially flips a weighted coin to decide their belief on each issue). These are of course wildly oversimplified, but they keep the problem simple, and I can relax them a little in a moment.

Suppose there are 100 districts up for grabs (like, say, the US Senate). Then there will be:

(0.6)^5*100 = 8 districts that support “Yes” on every single issue.

5*(0.6)^4*(0.4)*100 = 26 districts that support “Yes” on 4 out of 5 issues.

10*(0.6)^3*(0.4)^2*100 = 34 districts that support “Yes” on 3 out of 5 issues.

10*(0.6)^2*(0.4)^3*100 = 23 districts that support “Yes” on 2 out of 5 issues.

5*(0.6)^1*(0.4)^4*100 = 8 districts that support “Yes” on 1 out of 5 issues.

(0.4)^5*100 = 1 district that doesn’t support “Yes” on any issues.
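These district counts are just the binomial distribution scaled to 100 districts:

```python
from math import comb

# Expected number of districts (out of 100) whose majority supports "Yes"
# on exactly k of the 5 independent issues, each with 60% support.
for k in range(5, -1, -1):
    expected = comb(5, k) * 0.6 ** k * 0.4 ** (5 - k) * 100
    print(k, round(expected, 1))
# Prints 7.8, 25.9, 34.6, 23.0, 7.7, 1.0 -- the 8, 26, 34, 23, 8, 1
# in the text, after rounding so the counts sum to 100.
```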

The ideological purists want us to choose option 1, so let’s start with that. If you only run candidates who support “Yes” on every single issue, you will win only eight districts. Your party will lose 92 out of 100 seats. You will become a minor, irrelevant party of purists with no actual power—despite the fact that the majority of the population agrees with you on any given issue.

If you choose option 2, and run candidates who differ at most by one issue, you will still lose, but not by nearly as much. You’ll claim a total of 34 seats. That might at least be enough to win some votes or drive some committees.

If you want a majority, you need to go with option 3, and run candidates who agree on at least 3 out of 5 issues. Only then will you win 68 seats and be able to drive legislative outcomes.

But wait! you may be thinking. You only won in that case by including people who don’t agree with your core platform; so what use is it to win the seats? You could win every seat by including every possible candidate, and then accomplish absolutely nothing!

Yet notice that even under option 3, you’re still only including people who agree with the majority of your platform. You aren’t including absolutely everyone. Indeed, once you parse out all the combinations, it becomes clear that by running these candidates, you will win the vote on almost every issue.

8 of your candidates are A1, B1, C1, D1, E1, perfect partisans; they’ll support you every time.

6 of your candidates are A1, B1, C1, D1, E0, disagreeing only on issue E.

5 of your candidates are A1, B1, C1, D0, E1, disagreeing only on issue D.

5 of your candidates are A1, B1, C0, D1, E1, disagreeing only on issue C.

5 of your candidates are A1, B0, C1, D1, E1, disagreeing only on issue B.

5 of your candidates are A0, B1, C1, D1, E1, disagreeing only on issue A.

4 of your candidates are A1, B1, C1, D0, E0, disagreeing on issues D and E.

4 of your candidates are A0, B1, C1, D1, E0, disagreeing on issues E and A.

4 of your candidates are A0, B0, C1, D1, E1, disagreeing on issues B and A.

4 of your candidates are A1, B0, C1, D1, E0, disagreeing on issues E and B.

3 of your candidates are A1, B1, C0, D0, E1, disagreeing on issues D and C.

3 of your candidates are A1, B0, C0, D1, E1, disagreeing on issues C and B.

3 of your candidates are A0, B1, C1, D0, E1, disagreeing on issues D and A.

3 of your candidates are A0, B1, C0, D1, E1, disagreeing on issues C and A.

3 of your candidates are A1, B0, C1, D0, E1, disagreeing on issues D and B.

3 of your candidates are A1, B1, C0, D1, E0, disagreeing on issues C and E.

I took the liberty of rounding up or down as needed to make the numbers add up to 68. I biased toward rounding up on issue E, to concentrate all the dissent on one particular issue. This is sort of a worst-case scenario.

Since 60% of the population also agrees with you, the opposing parties couldn’t have only chosen pure partisans; they had to cast some kind of big tent as well. So I’m going to assume that the opposing candidates look like this:

8 of their candidates are A1, B0, C0, D0, E0, agreeing with you only on issue A.

8 of their candidates are A0, B1, C0, D0, E0, agreeing with you only on issue B.

8 of their candidates are A0, B0, C1, D0, E0, agreeing with you only on issue C.

8 of their candidates are A0, B0, C0, D1, E0, agreeing with you only on issue D.

This is actually very conservative; despite the fact that there should be only 9 districts that disagree with you on 4 or more issues, they somehow managed to win 32 districts with such candidates. Let’s say it was gerrymandering or something.

Now, let’s take a look at the voting results, shall we?

A vote for “Yes” on issue A will have 8 + 6 + 3*5 + 2*4 + 4*3 + 8 = 57 votes.

A vote for “Yes” on issue B will have 8 + 6 + 3*5 + 2*4 + 4*3 + 8 = 57 votes.

A vote for “Yes” on issue C will have 8 + 6 + 3*5 + 4*4 + 2*3 + 8 = 59 votes.

A vote for “Yes” on issue D will have 8 + 6 + 3*5 + 3*4 + 3*3 + 8 = 58 votes.

A vote for “Yes” on issue E will have 8 + 0 + 4*5 + 1*4 + 5*3 = 47 votes.

Final results? You win on issues A, B, C, and D, and lose very narrowly on issue E. Even if the other party somehow managed to maintain total ideological compliance and you couldn’t get a single vote from them, you’d still win on issue C and tie on issue D. If on the other hand your party can convince just 4 of your own anti-E candidates to vote in favor of E for the good of the party, you can win on E as well.
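If you'd rather not check those tallies by hand, here's a short Python sketch that encodes each candidate group as an agreement profile over issues A through E (1 = votes “Yes”), with the group counts and the “disagreeing on issues X and Y” descriptions taken straight from the lists above. It reproduces the totals of 57, 57, 59, 58, and 47.

```python
# (count, profile) where profile[i] = 1 means a "Yes" vote on issue i,
# for issues A..E in order.  Counts follow the lists above.
your_party = [
    (8, (1, 1, 1, 1, 1)),  # perfect partisans
    (6, (1, 1, 1, 1, 0)),
    (5, (1, 1, 1, 0, 1)),
    (5, (1, 1, 0, 1, 1)),
    (5, (1, 0, 1, 1, 1)),
    (5, (0, 1, 1, 1, 1)),
    (4, (1, 1, 1, 0, 0)),
    (4, (0, 1, 1, 1, 0)),  # disagreeing on E and A
    (4, (0, 0, 1, 1, 1)),
    (4, (1, 0, 1, 1, 0)),
    (3, (1, 1, 0, 0, 1)),
    (3, (1, 0, 0, 1, 1)),
    (3, (0, 1, 1, 0, 1)),
    (3, (0, 1, 0, 1, 1)),
    (3, (1, 0, 1, 0, 1)),
    (3, (1, 1, 0, 1, 0)),
]
opposition = [
    (8, (1, 0, 0, 0, 0)),
    (8, (0, 1, 0, 0, 0)),
    (8, (0, 0, 1, 0, 0)),
    (8, (0, 0, 0, 1, 0)),
]

votes = [0] * 5
for count, profile in your_party + opposition:
    for i, v in enumerate(profile):
        votes[i] += count * v

print(dict(zip("ABCDE", votes)))  # {'A': 57, 'B': 57, 'C': 59, 'D': 58, 'E': 47}
```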

Of course, in all of the above I assumed that districts are homogeneous and independently and identically distributed. Neither of those things is true.

The homogeneity assumption actually turns out to be pretty innocuous; if each district elects a candidate by plurality vote from two major parties, the Median Voter Theorem applies and the result is as if there were a single representative median voter making the decision.

The independence assumption is not innocuous, however. In reality, there will be strong correlations between the views of different people in different districts, and strong correlations across issues among individual voters. It is in fact quite likely that people who believe A1, B1, C1, D1 are more likely to believe E1 than people who believe A0, B0, C0, D0.

Given that, all the numbers above would shift, in the following way: There would be a larger proportion of pure partisans, and a smaller proportion of moderates with totally mixed views.

Does this undermine the argument? Not really. You need an awful lot of pure partisanship to make that a viable electoral strategy. I won’t go through all the cases again because it’s a mess, but let’s just look at those voting numbers again.

Suppose that instead of it being an even 60% regardless of your other beliefs, your probability of a “Yes” belief on a given issue is 80% if the majority of your previous beliefs are “Yes”, and a probability of 40% if the majority of your previous beliefs are “No”.

Then out of 100 districts:

(0.6)^3(0.8)^2*100 = 14 will be A1, B1, C1, D1, E1 partisans.

Fourteen. Better than eight, I suppose; but not much.

Okay, let’s try even stronger partisan loyalty. Suppose that your belief on A is randomly chosen with 60% probability, but every belief thereafter is 90% “Yes” if you are A1 and 30% “Yes” if you are A0.

Then out of 100 districts:

(0.6)(0.9)^4*100 = 39 will be A1, B1, C1, D1, E1 partisans.

You will still not be able to win a majority of seats using only hardcore partisans.

Of course, you could assume even higher partisanship rates, but then it really wasn’t fair to assume that there are only five issues in play. Even with 95% partisanship on each issue, if there are 20 issues:
(0.95)^20*100 = 36
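Here are those three partisan-share calculations as a quick Python check, using the conditional probabilities exactly as stated above:

```python
# Checking the partisan-share arithmetic from the three scenarios above:
# expected number of pure-partisan districts out of 100.
scenarios = {
    "mild correlation (0.6^3 * 0.8^2)": 0.6**3 * 0.8**2 * 100,
    "strong loyalty (0.6 * 0.9^4)": 0.6 * 0.9**4 * 100,
    "95% loyalty, 20 issues (0.95^20)": 0.95**20 * 100,
}
for label, share in scenarios.items():
    print(f"{label}: {share:.1f} districts")
# ~13.8, ~39.4, and ~35.8 -- rounding to the 14, 39, and 36 in the text
```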

The moral of the story is that if there is any heterogeneity across districts at all, any meaningful deviation from the party lines, you will only be able to reliably win a majority of the legislature if you cast a big tent. Even if the vast majority of people agree with you on any given issue, odds are that the vast majority of people don’t agree with you on everything.

Moreover, you are not sacrificing your principles by accepting these candidates, as you are still only accepting people who mostly agree with you into your party. Furthermore, you will still win votes on most issues—even those you felt like you were compromising on.

I therefore hope the Democratic Party makes the right choice and allows anti-abortion candidates into the party. It’s our best chance of actually winning a majority and driving the legislative agenda, including the legislative agenda on abortion.

Social construction is not fact—and it is not fiction

July 30, JDN 2457965

With the possible exception of politically-charged issues (especially lately in the US), most people are fairly good at distinguishing between true and false, fact and fiction. But there are certain types of ideas that can’t be neatly categorized into fact versus fiction.

First, there are subjective feelings. You can feel angry, or afraid, or sad—and really, truly feel that way—despite having no objective basis for the emotion coming from the external world. Such emotions are usually irrational, but even knowing that doesn’t make them automatically disappear. Distinguishing subjective feelings from objective facts is simple in principle, but often difficult in practice: A great many things simply “feel true” despite being utterly false. (Ask an average American which is more likely to kill them, a terrorist or the car in their garage; I bet quite a few will get the wrong answer. Indeed, if you ask them whether they’re more likely to be shot by someone else or to shoot themselves, almost literally every gun owner is going to get that answer wrong—or they wouldn’t be gun owners.)

The one I really want to focus on today is social constructions. This is a term that has been so thoroughly overused and abused by postmodernist academics (“science is a social construction”, “love is a social construction”, “math is a social construction”, “sex is a social construction”, etc.) that it has almost lost its meaning. Indeed, many people now react with automatic aversion to the term; upon hearing it, they immediately assume—understandably—that whatever is about to follow is nonsense.

But there is actually a very important core meaning to the term “social construction” that we stand to lose if we throw it away entirely. A social construction is something that exists only because we all believe in it.

Every part of that definition is important:

First, a social construction is something that exists: It’s really there, objectively. If you think it doesn’t exist, you’re wrong. It even has objective properties; you can be right or wrong in your beliefs about it, even once you agree that it exists.

Second, a social construction only exists because we all believe in it: If everyone in the world suddenly stopped believing in it, like Tinker Bell it would wink out of existence. The “we all” is important as well; a social construction doesn’t exist simply because one person, or a few people, believe in it—it requires a certain critical mass of society to believe in it. Of course, almost nothing is literally believed by everyone, so it’s more that a social construction exists insofar as people believe in it—and thus can attain a weaker or stronger kind of existence as beliefs change.

The combination of these two features makes social constructions a very weird sort of entity. They aren’t merely subjective beliefs; you can’t be wrong about what you are feeling right now (though you can certainly lie about it), but you can definitely be wrong about the social constructions of your society. But we can’t all be wrong about the social constructions of our society; once enough of our society stops believing in them, they will no longer exist. And when we have conflict over a social construction, its existence can become weaker or stronger—indeed, it can exist to some of us but not to others.

If all this sounds very bizarre and reminds you of postmodernist nonsense that might come from the Wisdom of Chopra randomizer, allow me to provide a concrete and indisputable example of a social construction that is vitally important to economics: Money.

The US dollar is a social construction. It has all sorts of well-defined objective properties, from its purchasing power in the market to its exchange rate with other currencies (also all social constructions). The markets in which it is spent are social constructions. The laws which regulate those markets are social constructions. The government which makes those laws is a social construction.

But it is not social constructions all the way down. The paper upon which the dollar was printed is a physical object with objective factual existence. It is an artifact—it was made by humans, and wouldn’t exist if we didn’t—but now that we’ve made it, it exists and would continue to exist regardless of whether we believe in it or even whether we continue to exist. The cotton from which it was made is also partly artificial, bred over centuries from a lifeform that evolved over millions of years. But the carbon atoms inside that cotton were made in a star, and that star existed and fused its carbon billions of years before any life on Earth existed, much less humans in particular. This is why the statements “math is a social construction” and “science is a social construction” are so ridiculous. Okay, sure, the institutions of science and mathematics are social constructions, but that’s trivial; nobody would dispute that, and it’s not terribly interesting. (What, you mean if everyone stopped going to MIT, there would be no MIT!?) The truths of science and mathematics were true long before we were even here—indeed, the fundamental truths of mathematics could not have failed to be true in any possible universe.

But the US dollar did not exist before human beings created it, and unlike the physical paper, the purchasing power of that dollar (which is, after all, mainly what we care about) is entirely socially constructed. If everyone in the world suddenly stopped accepting US dollars as money, the US dollar would cease to be money. If even a few million people in the US suddenly stopped accepting dollars, its value would become much more precarious, and inflation would be sure to follow.

Nor is this simply because the US dollar is a fiat currency. That makes it more obvious, to be sure; a fiat currency attains its value solely through social construction, as its physical object has negligible value. But even when we were on the gold standard, our currency was representative; the paper itself was still equally worthless. If you wanted gold, you’d have to exchange for it; and that process of exchange is entirely social construction.

And what about gold coins, one of the oldest forms of money? Here the physical object might actually be useful for something, but not all that much. It’s shiny, you can make jewelry out of it, it doesn’t corrode, it can be used to replace lost teeth, it has anti-inflammatory properties—and millennia later we found out that its dense nucleus is useful for particle accelerator experiments and it is a very reliable electrical conductor useful for making microchips. But all in all, gold is really not that useful. If gold were priced based on its true usefulness, it would be extraordinarily cheap; cheaper than water, for sure, as it’s much less useful than water. Yet very few cultures have ever used water as currency (though some have used salt). Thus, most of the value of gold is itself socially constructed; you value gold not to use it, but to impress other people with the fact that you own it (or indeed to sell it to them). Stranded alone on a desert island, you’d do anything for fresh water, but gold means nothing to you. And a gold coin actually takes on additional socially-constructed value; gold coins almost always had seignorage, additional value the government received from minting them over and above the market price of the gold itself.

Economics, in fact, is largely about social constructions; or rather I should say it’s about the process of producing and distributing artifacts by means of social constructions. Artifacts like houses, cars, computers, and toasters; social constructions like money, bonds, deeds, policies, rights, corporations, and governments. Of course, there are also services, which are not quite artifacts since they stop existing when we stop doing them—though, crucially, not when we stop believing in them; your waiter still delivered your lunch even if you persist in the delusion that the lunch is not there. And there are natural resources, which existed before us (and may or may not exist after us). But these are corner cases; mostly economics is about using laws and money to distribute goods, which means using social constructions to distribute artifacts.

Other very important social constructions include race and gender. Not melanin and sex, mind you; human beings have real, biological variation in skin tone and body shape. But the concept of a race—especially the race categories we ordinarily use—is socially constructed. Nothing biological forced us to regard Kenyan and Burkinabe as the same “race” while Ainu and Navajo are different “races”; indeed, the genetic data is screaming at us in the opposite direction. Humans are sexually dimorphic, with some rare exceptions (only about 0.02% of people are intersex; about 0.3% are transgender; and no more than 5% have sex chromosome abnormalities). But the much thicker concept of gender that comes with a whole system of norms and attitudes is all socially constructed.

It’s one thing to say that perhaps males are, on average, more genetically predisposed to be systematizers than females, and thus men are more attracted to engineering and women to nursing. That could, in fact, be true, though the evidence remains quite weak. It’s quite another to say that women must not be engineers, even if they want to be, and men must not be nurses—yet the latter was, until very recently, the quite explicit and enforced norm. Standards of clothing are even more obviously socially-constructed; in Western cultures (except the Celts, for some reason), flared garments are “dresses” and hence “feminine”; in East Asian cultures, flared garments such as kimono are gender-neutral, and gender is instead expressed through clothing by subtler aspects such as being fastened on the left instead of the right. In a thousand different ways, we mark our gender by what we wear, how we speak, even how we walk—and what’s more, we enforce those gender markings. It’s not simply that males typically speak in lower pitches (which does actually have a biological basis); it’s that males who speak in higher pitches are seen as less of a man, and that is a bad thing. We have a very strict hierarchy, which is imposed in almost every culture: It is best to be a man, worse to be a woman who acts like a woman, worse still to be a woman who acts like a man, and worst of all to be a man who acts like a woman. What it means to “act like a man” or “act like a woman” varies substantially; but the core hierarchy persists.

Social constructions like these ones are in fact some of the most important things in our lives. Human beings are uniquely social animals, and we define our meaning and purpose in life largely through social constructions.

It can be tempting, therefore, to be cynical about this, and say that our lives are built around what is not real—that is, fiction. But while this may be true for religious fanatics who honestly believe that some supernatural being will reward them for their acts of devotion, it is not a fair or accurate description of someone who makes comparable sacrifices for “the United States” or “free speech” or “liberty”. These are social constructions, not fictions. They really do exist. Indeed, it is only because we are willing to make sacrifices to maintain them that they continue to exist. Free speech isn’t maintained by us saying things we want to say; it is maintained by us allowing other people to say things we don’t want to hear. Liberty is not protected by us doing whatever we feel like, but by not doing things we would be tempted to do that impose upon other people’s freedom. If in our cynicism we act as though these things are fictions, they may soon become so.

But it would be a lot easier to get this across to people, I think, if folks would stop saying idiotic things like “science is a social construction”.

Several of the world’s largest banks are known to have committed large-scale fraud. Why have we done so little about it?

July 16, JDN 2457951

In 2014, JPMorgan Chase paid a settlement of $614 million for fraudulent mortgage lending contributing to the crisis; but this was spare change compared to the $16.5 billion Bank of America paid in settlements for their fraudulent mortgages.

In 2015, Citibank paid $700 million in restitution and $35 million in penalties for fraudulent advertising of “payment protection” services.

In 2016, Wells Fargo paid $190 million in settlements for defrauding their customers with fake accounts.

Even PayPal has paid $25 million in settlements over abuses of their “PayPal Credit” system.

In 2016, Goldman Sachs paid $5.1 billion in settlements over their fraudulent sales of mortgage-backed securities.

But the worst offender of course is HSBC, which has paid $2.5 billion in settlements over fraud, as well as $1.9 billion in settlements for laundering money for terrorists. The US Justice Department has kept the details of HSBC’s money-laundering safeguards classified, because they are so weak that simply revealing them to the public could invite vast amounts of criminal abuse.

These are some of the world’s largest banks. JPMorgan Chase alone owns 8.0% of all investment banking worldwide; Goldman Sachs owns 6.6%; Citi owns 4.9%; Wells Fargo 2.5%; and HSBC 1.8%. That means that between them, these five corporations—all proven to have engaged in large-scale fraud—own almost one-fourth of all the world’s investment banking assets.
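For transparency, here is the simple arithmetic behind “almost one-fourth,” summing the market shares just quoted:

```python
# Summing the investment-banking market shares quoted above (percent).
shares = {
    "JPMorgan Chase": 8.0,
    "Goldman Sachs": 6.6,
    "Citi": 4.9,
    "Wells Fargo": 2.5,
    "HSBC": 1.8,
}
total = sum(shares.values())
print(f"{total:.1f}% of worldwide investment banking")  # 23.8%
```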

What shocks me the most about this is that hardly anyone seems to care. It’s seen as “normal”, as “business as usual” that a quarter of the world’s investment banking system is owned by white-collar criminals. When the issue is even brought up, often the complaint seems to be that the government is being somehow overzealous. The Economist even went so far as to characterize the prosecution of Wall Street fraud as a “shakedown”. Apparently the idea that our world’s most profitable companies shouldn’t be able to launder money for terrorists is just ridiculous. These are rich people; you expect them to follow rules? What is this, some kind of democracy?

Is this just always how it has been? Has corruption always been so thoroughly infused with finance that we don’t even know how to separate them? Has the oligarchy of the top 0.01% become so strong that we can’t even bring ourselves to challenge them when they commit literal treason? For, in case you’ve forgotten, that is what money-laundering for terrorists is: HSBC gave aid and comfort to the enemies of the free world. Like “freedom” and “terrorism”, the word “treason” has been so overused that we begin to forget its meaning; but one of the groups that HSBC gladly loaned money to is an organization that has financed Hezbollah and Al-Qaeda. These are people that American and British soldiers have died fighting against, and when a British bank was found colluding with them, the penalty was… a few weeks of profits, no personal responsibility, and not a single day of prison time. The settlement was in fact less than the profits gained from the criminal enterprise, so this wasn’t even a fine; it was a tax. Our response to treason was to impose a tax.

And this of course was not the result of some newfound leniency in American government in general. No, we are still the nation that imprisons 700 out of every 100,000 people, the nation with more prisoners than any other nation on Earth. Our police officers still kill young Black men with impunity, including at least three dozen unarmed Black men every year, many of them for no apparent reason at all. (The precise number is still unknown, as the police refuse to keep an official database of all the citizens they kill.) Decades of “law and order” politicians promising to stop the “rising crime” (that is actually falling) have made the United States very close to a police state, especially in poor neighborhoods that are primarily inhabited by Black and Hispanic people. We don’t even have an especially high crime rate, except for gun homicides (and that because we have so many guns, also more than any other nation on Earth). We are, if anything, an especially vindictive society, cruel, unforgiving, and violent towards those we perceive as transgressors.

Except, that is, when the criminals are rich. Even the racial biases seem to go away in such circumstances; there is no reasonable doubt as to the guilt of O.J. Simpson or Bill Cosby, but Simpson only ended up in prison years later on a completely unrelated offense, and after Cosby’s mistrial it’s unclear if he’ll ever see any prison time. I don’t see how either man could have been less punished for his crimes had he been White; but can anyone seriously doubt that both men would be punished more had they not been rich?

I do not think that capitalism is an irredeemable system. I think that, in themselves, free markets are very useful, and we should not remove or restrict them unnecessarily. But capitalism isn’t supposed to be a system where the rich can do whatever they want and the poor have to accept it. Capitalism is supposed to be a system where everyone is free to do as they choose, unless they are harming others—and the rules are supposed to be the same for everyone. A free market is not one where you can buy the right to take away other people’s freedom.

Is this just some utopian idealism? It would surely be utopian to imagine a world where fraud never happens, that much is true. Someone, somewhere, will always be defrauding someone else. But a world where fraud is punished most of the time? Where our most powerful institutions are still subject to the basic rule of law? Is that a pipe dream as well?

What we lose by aggregating

Jun 25, JDN 2457930

One of the central premises of current neoclassical macroeconomics is the representative agent: Rather than trying to keep track of all the thousands of firms, millions of people, and billions of goods in a national economy, we aggregate everything up into a single worker/consumer and a single firm producing and consuming a single commodity.

This sometimes goes under the baffling misnomer of microfoundations, which would seem to suggest that it carries detailed information about the microeconomic behavior underlying it; in fact what this means is that the large-scale behavior is determined by some sort of (perfectly) rational optimization process as if there were just one person running the entire economy optimally.

First of all, let me say that some degree of aggregation is obviously necessary. Literally keeping track of every single transaction by every single person in an entire economy would require absurd amounts of data and calculation. We might have enough computing power to theoretically try this nowadays, but then again we might not—and in any case such a model would very rapidly lose sight of the forest for the trees.

But it is also clearly possible to aggregate too much, and most economists don’t seem to appreciate this. They cite a couple of famous theorems (like the Gorman Aggregation Theorem) involving perfectly-competitive firms and perfectly-rational identical consumers that offer a thin veneer of justification for aggregating everything into one, and then go on with their work as if this meant everything were fine.

What’s wrong with such an approach?

Well, first of all, a representative agent model can’t talk about inequality at all. It’s not even that a representative agent model says inequality is good, or not a problem; it lacks the capacity to even formulate the concept. Trying to talk about income or wealth inequality in a representative agent model would be like trying to decide whether your left hand is richer than your right hand.

It’s also nearly impossible to talk about poverty in a representative agent model; the best you can do is talk about a country’s overall level of development, and assume (not without reason) that a country with a per-capita GDP of $1,000 probably has a lot more poverty than a country with a per-capita GDP of $50,000. But two countries with the same per-capita GDP can have very different poverty rates—and indeed, the cynic in me wonders if the reason we’re reluctant to use inequality-adjusted measures of development is precisely that many American economists fear where this might put the US in the rankings. The Human Development Index was a step in the right direction because it includes things other than money (and as a result Saudi Arabia looks much worse and Cuba much better), but it still aggregates and averages everything, so as long as your rich people are doing well enough they can compensate for how badly your poor people are doing.

Nor can you talk about oligopoly in a representative agent model, as there is always only one firm, which for some reason chooses to act as if it were facing competition instead of rationally behaving as a monopoly. (This is not quite as nonsensical as it sounds, as the aggregation actually does kind of work if there truly are so many firms that they are all forced down to zero profit by fierce competition—but then again, what market is actually like that?) There is no market share, no market power; all are at the mercy of the One True Price.

You can still talk about externalities, sort of; but in order to do so you have to set up this weird doublethink phenomenon where the representative consumer keeps polluting their backyard and then can’t figure out why their backyard is so darn polluted. (I suppose humans do seem to behave like that sometimes; but wait, I thought you believed people were rational?) I think this probably confuses many an undergrad, in fact; the models we teach them about externalities generally use this baffling assumption that people consider one set of costs when making their decisions and then bear a different set of costs from the outcome. If you can conceptualize the idea that we’re aggregating across people and thinking “as if” there were a representative agent, you can ultimately make sense of this; but I think a lot of students get really confused by it.

Indeed, what can you talk about with a representative agent model? Economic growth and business cycles. That’s… about it. These are not minor issues, of course; indeed, as Robert Lucas famously said:

The consequences for human welfare involved in questions like these [on economic growth] are simply staggering: once one starts to think about them, it is hard to think about anything else.

I certainly do think that studying economic growth and business cycles should be among the top priorities of macroeconomics. But then, I also think that poverty and inequality should be among the top priorities, and they haven’t been—perhaps because the obsession with representative agent models makes that basically impossible.

I want to be constructive here; I appreciate that aggregating makes things much easier. So what could we do to include some heterogeneity without too much cost in complexity?

Here’s one: How about we have p firms, making q types of goods, sold to n consumers? If you want, you can start by setting all these numbers equal to 2; simply going from 1 to 2 has an enormous effect, as it allows you to at least say something about inequality. Getting them as high as 100 or even 1000 still shouldn’t be a problem for computing the model on an ordinary laptop. (There are “econophysicists” who like to use these sorts of agent-based models, but so far very few economists take them seriously. Partly that is justified by their lack of foundational knowledge in economics—the arrogance of physicists taking on a new field is legendary—but partly it is also interdepartmental turf war, as economists don’t like the idea of physicists treading on their sacred ground.) One thing that really baffles me about this is that economists routinely use computers to solve models that can’t be calculated by hand, yet it never seems to occur to them that they could have designed the model from the start to be solvable only by computer. That would spare them the sort of heroic assumptions they are accustomed to making—assumptions that only ever made sense as a way to render an otherwise intractable model solvable by hand.
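To make this concrete, here is a deliberately minimal sketch (all numbers made up for illustration) of what even the n-consumers half of that proposal buys you: with heterogeneous incomes you can compute a Gini coefficient, a concept a representative-agent model cannot even express.

```python
import random

def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mean)

random.seed(0)
# Two toy economies with the same mean income (~50) but different spreads.
# Pareto(1.5) has mean 3, so scaling by 50/3 gives mean income ~50.
equal_econ = [50.0] * 100
unequal_econ = [random.paretovariate(1.5) * 50 / 3 for _ in range(100)]

print(gini(equal_econ))    # 0.0 -- perfect equality
print(gini(unequal_econ))  # substantially above zero
```

A representative-agent model collapses both economies into the same single consumer with income 50; the whole distinction between them simply vanishes.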

You could also assign a probability distribution over incomes; that can get messy quickly, but we actually are fortunate that the constant relative risk aversion utility function and the Pareto distribution over incomes seem to fit the data quite well—as the product of those two things is integrable by hand. As long as you can model how your policy affects this distribution without making that integral impossible (which is surprisingly tricky), you can aggregate over utility instead of over income, which is a lot more reasonable as a measure of welfare.
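For readers who want to see that integral: with CRRA utility u(x) = x^(1-η)/(1-η) and Pareto incomes (shape α, minimum x_m), expected utility works out to α·x_m^(1-η)/((1-η)(α+η-1)), valid for η ≠ 1 and α+η > 1. Here's a Monte Carlo sanity check of that closed form, with illustrative (not calibrated) parameters:

```python
import random

# CRRA utility under Pareto incomes: compare the closed form
#   alpha * x_m**(1-eta) / ((1-eta) * (alpha + eta - 1))
# against a Monte Carlo average.  Parameters are illustrative only.
alpha, eta, x_m = 2.0, 2.0, 1.0

closed_form = alpha * x_m**(1 - eta) / ((1 - eta) * (alpha + eta - 1))

random.seed(1)
n = 200_000
# random.paretovariate(alpha) draws from a Pareto with minimum 1.
draws = (x_m * random.paretovariate(alpha) for _ in range(n))
monte_carlo = sum(x**(1 - eta) / (1 - eta) for x in draws) / n

print(closed_form)   # -2/3 for these parameters
print(monte_carlo)   # should land close to the closed form
```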

And really I’m only scratching the surface here. There are a vast array of possible new approaches that would allow us to extend macroeconomic models to cover heterogeneity; the real problem is an apparent lack of will in the community to make such an attempt. Most economists still seem very happy with representative agent models, and reluctant to consider anything else—often arguing, in fact, that anything else would make the model less microfounded when plainly the opposite is the case.