**Nov 11 JDN 2458434**

“Laws, like sausages, cease to inspire respect in proportion as we know how they are made.”

Statistics are a bit like laws and sausages. There are a lot of things in statistical practice that don’t align with statistical theory. The most obvious example is the fact that many results in statistics are *asymptotic*: they only strictly apply for infinitely large samples, and in any finite sample they will be some sort of approximation (we often don’t even know *how good* an approximation).

But the problem runs deeper than this: The whole idea of a *p*-value was originally supposed to be used to assess *one single hypothesis* that is the *only one* you test in your entire study.

That’s frankly a ludicrous expectation: Why would you write a whole paper just to test *one* parameter?

This is why I don’t actually think this so-called **multiple comparisons problem** is a problem with researchers doing too many hypothesis tests; I think it’s a problem with *statisticians* being *fundamentally unreasonable* about what statistics is useful for. We *have* to do multiple comparisons, so *you should be telling us how to do it correctly.*
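To be fair, some answers do exist. Here is a minimal sketch of two textbook family-wise corrections, Bonferroni and Holm’s step-down refinement, applied to a made-up list of *p*-values (all the numbers are illustrative, not from any real study):

```python
def bonferroni(p_values, alpha=0.05):
    """Reject hypothesis i iff p_i <= alpha / m (controls family-wise error)."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    """Holm's step-down method: uniformly more powerful than Bonferroni,
    while still controlling the family-wise error rate."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

pvals = [0.001, 0.012, 0.03, 0.047, 0.20]
print(bonferroni(pvals))  # [True, False, False, False, False]
print(holm(pvals))        # [True, True, False, False, False]
```

Note how Holm salvages the second hypothesis that Bonferroni throws away; this is part of why the choice of correction matters, not just whether you correct.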

Statisticians have this beautiful pure mathematics that generates all these lovely asymptotic results… and then they stop, as if they were done. But we aren’t dealing with infinite or even “sufficiently large” samples; we need to know what happens when your sample is 100, not when your sample is 10^29. We can’t assume that our variables are independently and identically distributed; we *don’t know* their distribution, and we’re pretty sure they’re going to be somewhat dependent.
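A toy Monte Carlo makes the finite-sample problem concrete. Under assumptions I’m choosing for illustration (skewed exponential data, a sample of 100), we can check how often the textbook asymptotic 95% confidence interval actually covers the true mean:

```python
import math
import random
import statistics

# Toy Monte Carlo: with skewed (exponential) data and n = 100, how often
# does the standard asymptotic 95% interval actually cover the true mean?
random.seed(0)
n, trials, covered = 100, 2000, 0
true_mean = 1.0  # mean of an Exponential(1) distribution
for _ in range(trials):
    sample = [random.expovariate(1.0) for _ in range(n)]
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    if m - 1.96 * se <= true_mean <= m + 1.96 * se:
        covered += 1
coverage = covered / trials
print(coverage)  # close to, but typically a bit below, the nominal 0.95
```

The gap between actual and nominal coverage is exactly the “we don’t know *how good* an approximation” problem, and it only gets worse with heavier skew or smaller samples.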

Even in an experimental context where we can randomly and independently assign some treatments, we can’t do that with lots of variables that are likely to matter, like age, gender, nationality, or field of study. And applied econometricians are in an even tighter bind; they often can’t randomize *anything*. They have to rely upon “instrumental variables” that they hope are “close enough to randomized” relative to whatever they want to study.

In practice what we tend to do is… fudge it. We use the formal statistical methods, and then we step back and apply a series of *informal* norms to see if the result actually makes sense to us. This is why almost no psychologists were actually convinced by Daryl Bem’s precognition experiments, despite his standard experimental methodology and perfect *p* < 0.05 results; he couldn’t pass any of the *informal* tests, particularly the most basic one of not violating any known fundamental laws of physics. We *knew* he had somehow cherry-picked the data, even before looking at it; nothing else was *possible*.

This is actually part of where the “hierarchy of sciences” notion is useful: One of the norms is that you’re not allowed to break the rules of the sciences above you, but you can break the rules of the sciences below you. So psychology has to obey physics, but physics doesn’t have to obey psychology. I think this is also part of why there’s so much enmity between economists and anthropologists; really we should be on the same level, cognizant of each other’s rules, but economists want to be above anthropologists so we can ignore culture, and anthropologists want to be above economists so they can ignore incentives.

Another informal norm is the “robustness check”, in which the researcher runs a dozen different regressions approaching the same basic question from different angles. “What if we control for this? What if we interact those two variables? What if we use a different instrument?” In terms of statistical *theory*, this doesn’t actually make a lot of sense; the probability distributions *f(y|x)* of *y* conditional on *x* and *f(y|x, z)* of *y* conditional on *x* and *z* are not the same thing, and wouldn’t in general be closely tied, depending on the distribution *f(x|z)* of *x* conditional on *z*. But in *practice*, most real-world phenomena are going to continue to show up even as you run a bunch of different regressions, and so we can be more confident that something is a real phenomenon insofar as that happens. If an effect drops out when you switch out a couple of control variables, it may have been a statistical artifact. But if it keeps appearing no matter what you do to try to make it go away, then it’s probably a real thing.
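The *f(y|x)* versus *f(y|x, z)* point can be made concrete with a tiny simulation: when *x* is correlated with a relevant control *z*, the “short” regression and the “long” regression estimate genuinely different quantities. All of the data, coefficients, and the seed below are made up for illustration:

```python
import numpy as np

# Toy illustration: the "short" regression of y on x and the "long"
# regression of y on x and z disagree when x and z are correlated,
# because the short regression's coefficient absorbs part of z's effect.
rng = np.random.default_rng(42)
n = 500
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(size=n)            # x correlated with the control z
y = 2.0 * x + 1.0 * z + rng.normal(size=n)  # true effect of x is 2.0

def ols(X, y):
    """Least-squares coefficients; first column of X is the intercept."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_short = ols(np.column_stack([np.ones(n), x]), y)[1]     # omits z
b_long = ols(np.column_stack([np.ones(n), x, z]), y)[1]   # controls for z
print(b_short, b_long)  # the short estimate is biased away from 2.0
```

Running both specifications and comparing is exactly the robustness check in miniature: if the estimate barely moves when you add *z*, you relax a little; if it moves a lot, you worry.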

Because of the powerful career incentives toward publication and the strange obsession among journals with a *p*-value less than 0.05, another norm has emerged: *Don’t actually trust p-values that are close to 0.05*. The vast majority of the time, a *p*-value of 0.047 was the result of publication bias. Now if you see a *p*-value of 0.001, maybe *then* you can trust it—but you’re still relying on a lot of assumptions even then. I’ve seen some researchers argue that because of this, we should tighten our standards for publication to something like *p* < 0.01, but that’s missing the point; what we need to do is *stop publishing based on p-values.* If you tighten the threshold, you’re just going to get more rejected papers and then the few papers that do get published will now have even smaller *p*-values that are *still* utterly meaningless.
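A toy simulation shows why marginal *p*-values deserve this suspicion. Under assumptions I’m inventing for the sketch (10% of tested hypotheses are real, effect size 0.5, n = 50, a simple z-test), most results that only barely clear *p* < 0.05 come from true nulls, while results at *p* ≤ 0.001 rarely do:

```python
import math
import random

# Made-up setup: 10% of hypotheses are real (effect 0.5), 90% are null.
# Compare the false-positive share among "marginal" significant results
# (0.01 < p <= 0.05) with the share among "strong" results (p <= 0.001).
random.seed(1)

def p_value(effect, n=50):
    """Two-sided z-test p-value for one simulated study (unit variance)."""
    z = effect * math.sqrt(n) + random.gauss(0, 1)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

marginal = {"real": 0, "null": 0}
strong = {"real": 0, "null": 0}
for _ in range(100_000):
    real = random.random() < 0.10
    p = p_value(0.5 if real else 0.0)
    bucket = "real" if real else "null"
    if p <= 0.001:
        strong[bucket] += 1
    elif 0.01 < p <= 0.05:
        marginal[bucket] += 1

false_share_marginal = marginal["null"] / sum(marginal.values())
false_share_strong = strong["null"] / sum(strong.values())
print(false_share_marginal, false_share_strong)
```

The exact shares depend entirely on the assumed base rate and effect size, which is the point: a bare *p*-value near 0.05 tells you almost nothing without them.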

These informal norms protect us from the worst outcomes of bad research. But they are almost certainly not optimal. It’s all very vague and informal, and different researchers will often disagree vehemently over whether a given interpretation is valid. What we need are *formal *methods for solving these problems, so that we can have the objectivity and replicability that formal methods provide. Right now, our existing formal tools simply are not up to that task.

There are some things we may never be able to formalize: If we had a formal algorithm for coming up with good ideas, the AIs would already rule the world, and this would be either *Terminator* or *The Culture* depending on whether we designed the AIs correctly. But I think we should at least be able to formalize the basic question of *“Is this statement likely to be true?”* that is the fundamental motivation behind statistical hypothesis testing.

I think the answer is likely to be in a broad sense Bayesian, but Bayesians still have a lot of work left to do in order to give us really flexible, reliable statistical methods we can actually apply to the messy world of real data. In particular, *tell us how to choose priors, please!* Prior selection is a fundamental, make-or-break problem in Bayesian inference that has nonetheless been greatly neglected by most Bayesian statisticians. So, what do we do? We fall back on informal norms: Try maximum likelihood, which is like using a very flat prior. Try a normally-distributed prior. See if you can construct a prior from past data. If all of those give the same answer, that’s a “robustness check” (see the previous informal norm).
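That prior-robustness norm can be sketched in a few lines using the conjugate normal–normal model. The toy data, the priors, and the “past-data” prior below are all invented for illustration:

```python
import statistics

# Sketch of the prior robustness check: estimate a normal mean (known
# variance 1) under three different priors and see whether they agree.
data = [1.3, 0.7, 1.1, 0.9, 1.5, 0.8, 1.2, 1.0, 1.4, 0.6]  # toy data
n, xbar = len(data), statistics.fmean(data)

def posterior_mean(prior_mean, prior_var, sigma2=1.0):
    """Conjugate normal-normal update: precision-weighted average of
    the prior mean and the sample mean."""
    prec = 1 / prior_var + n / sigma2
    return (prior_mean / prior_var + n * xbar / sigma2) / prec

estimates = {
    "flat prior (~max likelihood)": posterior_mean(0.0, 1e9),
    "standard normal prior N(0, 1)": posterior_mean(0.0, 1.0),
    "past-data prior N(1, 0.5)": posterior_mean(1.0, 0.5),
}
for name, est in estimates.items():
    print(f"{name}: {est:.3f}")
```

When all three posterior means land close together, as they do here, the conclusion is not being driven by the choice of prior; when they diverge, the data are too weak to overcome it, and that divergence is itself worth reporting.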

Informal norms are also inherently harder to teach and learn. I’ve seen a lot of other grad students flail wildly at statistics, not because they don’t know what a *p*-value means (though maybe that’s also sometimes true), but because they don’t really quite grok the *informal* underpinnings of good statistical inference. This can be very hard to explain to someone: They feel like they followed all the rules correctly, you are telling them their results are wrong, and you can’t quite explain why.

In fact, some of the informal norms that are in wide use are clearly detrimental. In economics, norms have emerged that certain types of models are better simply because they are “more standard”, such as the dynamic stochastic general equilibrium models that can basically be fit to everything and have never actually usefully predicted anything. In fact, the best ones just predict *what we already knew from Keynesian models.* But without a formal norm for testing the validity of models, it’s been “DSGE or GTFO”. At present, it is considered “nonstandard” (read: “bad”) *not* to assume that your agents are either a single unitary “representative agent” or a continuum of infinitely-many agents—modeling the actual fact of finitely-many agents is *just not done.* Yet it’s hard for me to imagine any formal criterion that wouldn’t at least give you *some* points for correctly including the fact that there is more than one but less than infinity people in the world (obviously your model could still be bad in other ways).

I don’t know what these new statistical methods would look like. Maybe it’s as simple as formally justifying some of the norms we already use; maybe it’s as complicated as taking a fundamentally new approach to statistical inference. But we have to start somewhere.