A tale of two storms

Sep 24, JDN 2458021

There were two severe storm events this past week; one you probably heard a great deal about, the other, probably not. The first was Hurricane Irma, which hit the United States and did most of its damage in Florida; the second was Typhoon Doksuri, which hit Southeast Asia and did most of its damage in Vietnam.

You might expect that this post is going to give you more bad news. Well, I have a surprise for you: The news is actually mostly good.

The death tolls from both storms were astonishingly small. The hurricane is estimated to have killed at least 84 people, while the typhoon has killed at least 26. This result is nothing less than heroism. The valiant efforts of thousands of meteorologists and emergency responders around the world have saved thousands of lives, and did so both in the wealthy United States and in impoverished Vietnam.

When I started this post, I had expected to see that the emergency response in Vietnam would be much worse, and fatalities would be far higher; I am delighted to report that nothing of the sort was the case. Vietnam, despite a per-capita GDP at PPP of under $6,000, has made emergency response a sufficiently high priority that it saved its people just about as well as Florida did.

To get a sense of what might have happened without them, consider that 1.5 million homes in Florida were leveled by the hurricane, and over 100,000 homes were damaged by the typhoon. Vietnam is a country of 94 million people. Florida has a population of 20 million. (The reason Florida determines so many elections is that it is by far the most populous swing state.) Without weather forecasting and emergency response, these death figures would have been in the tens of thousands, not the dozens.

Indeed, if you know statistics and demographics well, these figures become even more astonishing: These death rates were almost indistinguishable from statistical noise.

Vietnam’s baseline death rate is about 5.9 per 1,000, meaning that it experiences about 560,000 deaths in any given year. This means that over 1,500 people die in Vietnam on a normal day.

Florida’s baseline death rate is about 6.6 per 1,000, actually a bit higher than Vietnam’s, because Florida’s population skews so much toward the elderly. Therefore Florida experiences about 130,000 deaths per year, or 360 deaths on a normal day.

In both Vietnam and Florida, this makes the daily death probability for any given person about 0.0017%. A random process with a fixed probability of 0.0017% over a population of n people will result in an average of 0.0017n events, but with some variation around that number. The standard deviation is actually sqrt(p(1-p)n) = 0.004 sqrt(n). When n = 20,000,000 (Florida), this results in a standard deviation of 18. When n = 94,000,000 (Vietnam), this results in a standard deviation of 40.
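If you want to check that arithmetic yourself, here is a minimal sketch in Python (it uses the rounded 0.0017% figure, so the daily means come out slightly different from the per-year estimates above):

```python
import math

# Daily deaths as a binomial process: mean p*n, standard deviation sqrt(p*(1-p)*n).
p = 1.7e-5  # daily death probability per person (~0.0017%, from the text)

for name, n in [("Florida", 20_000_000), ("Vietnam", 94_000_000)]:
    mean = p * n
    sd = math.sqrt(p * (1 - p) * n)
    print(f"{name}: ~{mean:.0f} deaths on a normal day, SD ~{sd:.0f}")

# Florida: ~340 deaths on a normal day, SD ~18
# Vietnam: ~1598 deaths on a normal day, SD ~40
```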

This means that the 26 additional deaths in Vietnam were within one standard deviation of average! They are basically indistinguishable from statistical noise. There have been over a hundred days in Vietnam where an extra 26 people happened to die, just in the past year. Weather forecasting took what could have been a historic disaster and turned it into just another bad day.

The 84 additional deaths in Florida are over four standard deviations away from average, so they are definitely distinguishable from statistical noise—but this still means that Florida’s total death rate for the year will only tick up by about 0.06%.

It is common in such tragedies to point out in grave tones that “one death is too many”, but I maintain that this is not actually moral wisdom but empty platitude. No conceivable policy is ever going to reduce death rates to zero, and the people who died of heart attacks or brain aneurysms are every bit as dead as the people who died from hurricanes or terrorist attacks. Instead of focusing on the handful of people who died because they didn’t heed warnings or simply got extraordinarily unlucky, I think we should be focusing on the thousands of people who survived because our weather forecasters and emergency responders did their jobs so exceptionally well. Of course if we can reduce the numbers even further, we should; but from where I’m sitting, our emergency response system has a lot to be proud of.

Of course, the economic damage of the storms was substantially greater. The losses in destroyed housing and infrastructure in Florida are projected at over $80 billion. Vietnam is much poorer, so there simply isn’t as much infrastructure to destroy; total damage is unlikely to exceed $10 billion. Florida’s GDP is $926 billion, so they are losing 8.6%; while Vietnam’s GDP is $220 billion, so they are still losing 4.5%. And of course the damage isn’t evenly spread across everyone; those hardest hit will lose perhaps their entire net wealth, while others will feel absolutely nothing.

But economic damage is fleeting. Indeed, if we spend the government money we should, and take the opportunity to rebuild this infrastructure better than it was before, the long-run economic impact could be positive. Even 8.6% of GDP is less than five years of normal economic growth—and there were years in the 1950s when we did that much in a single year. The 4.5% that Vietnam lost, they should make back within a year at their current rate of economic growth.

Thank goodness.

Why do so many Americans think that crime is increasing?

Jan 29, JDN 2457783

Since the 1990s, crime in the United States has been decreasing, and yet in every poll since then most Americans report that they believe crime is increasing.

It’s not a small decrease either. The US murder rate is down to the lowest it has been in a century. The absolute number of violent crimes per year in the US is now smaller (by 34 log points) than it was 20 years ago, despite a significant increase in total population (19 log points—and the magic of log points is that, yes, the rate has therefore decreased by precisely 53 log points).
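Here is that log-point arithmetic spelled out, as a quick sketch (the figures are the rounded ones from the previous paragraph):

```python
import math

# Rates divide, so log points subtract: rate = crimes / population.
crimes_change = -34  # log points: change in the number of violent crimes
pop_change = 19      # log points: change in total population
rate_change = crimes_change - pop_change

print(rate_change)                  # -53 log points
print(math.exp(rate_change / 100))  # ~0.59: the rate is about 41% lower
```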

It isn’t geographically uniform, of course; some states have improved much more than others, and a few states (such as New Mexico) have actually gotten worse.

The 1990s were a peak of violent crime, so one might say that we are just regressing to the mean. (Even that would be enough to make it baffling that people think crime is increasing.) But in fact overall crime in the US is now the lowest it has been since the 1970s, and still decreasing.

Indeed, this decrease has been underestimated, because we are now much better about reporting and investigating crimes than we used to be (which may also be part of why they are decreasing, come to think of it). If you compare against surveys of people who say they have been personally victimized, we’re looking at a decline in violent crime rates of two thirds—109 log points.

Just since 2008 violent crime has decreased by 26% (30 log points)—but of course we all know that Obama is “soft on crime” because he thinks cops shouldn’t be allowed to just shoot Black kids for no reason.

And yet, over 60% of Americans believe that overall crime in the US has increased in the last 10 years (though only 38% think it has increased in their own community!). These figures are actually down from 2010, when 66% thought crime was increasing nationally and 49% thought it was increasing in their local area.

The proportion of people who think crime is increasing does seem to decrease as crime rates decrease—but it still remains alarmingly high. If people were half as rational as most economists seem to believe, the proportion of people who think crime is increasing should drop to basically zero whenever crime rates decrease, since that’s a really basic fact about the world that you can just go look up on the Web in a couple of minutes. There’s no deep ambiguity, not even much “rational ignorance” given the low cost of getting correct answers. People just don’t bother to check, or don’t feel they need to.

What’s going on? How can crime fall to half what it was 20 years ago and yet almost two-thirds of people think it’s actually increasing?

Well, one hint is that news coverage of crime doesn’t follow the same pattern as actual crime.

News coverage in general is a terrible source of information, not simply because news organizations can be biased, make glaring mistakes, and sometimes outright lie—but actually for a much more fundamental reason: Even a perfect news channel, qua news channel, would report what is surprising—and what is surprising is, by definition, improbable. (Indeed, there is a formal mathematical concept in probability theory called surprisal that is simply the logarithm of 1 over the probability.) Even assuming that news coverage reports only the truth, the probability of seeing something on the news isn’t proportional to the probability of the event occurring—it’s more likely proportional to the entropy, which is probability times surprisal.
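To make that concrete, here is a small sketch in Python; the “news weight” column is just the probability-times-surprisal quantity described above, not a calibrated model of any actual newsroom:

```python
import math

# Surprisal (in bits) and the entropy-style weight p * surprisal.
for p in [0.5, 0.1, 0.01, 0.001]:
    surprisal = math.log2(1 / p)  # rarer events are more surprising
    weight = p * surprisal        # coverage falls far more slowly than p itself
    print(f"p={p:<6} surprisal={surprisal:5.2f} bits  news weight={weight:.4f}")
```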

Now, if humans were optimal information processing engines, that would be just fine, actually; reporting events proportional to their entropy is actually a very efficient mechanism for delivering information (optimal, under certain types of constraints), provided that you can then process the information back into probabilities afterward.

But of course, humans aren’t optimal information processing engines. We don’t recompute the probabilities from the given entropy; instead we use the availability heuristic, by which we simply use the number of times we can think of something happening as our estimate of the probability of that event occurring. If you see more murders on TV news than you used to, you assume that murders must be more common than they used to be. (And when I put it like that, it really doesn’t sound so unreasonable, does it? Intuitively the availability heuristic seems to make sense—which is part of why it’s so insidious.)

Another likely reason for the discrepancy between perception and reality is nostalgia. People almost always have a more positive view of the past than it deserves, particularly when referring to their own childhoods. Indeed, I’m quite certain that a major reason why people think the world was much better when they were kids was that their parents didn’t tell them what was going on. And of course I’m fine with that; you don’t need to burden 4-year-olds with stories of war and poverty and terrorism. I just wish people would realize that they were being protected from the harsh reality of the world, instead of thinking that their little bubble of childhood innocence was a genuinely much safer world than the one we live in today.

Then take that nostalgia and combine it with the availability heuristic and the wall-to-wall TV news coverage of anything bad that happens—and almost nothing good that happens, certainly not if it’s actually important. I’ve seen bizarre fluff pieces about puppies, but never anything about how world hunger is plummeting or air quality is dramatically improved or cars are much safer. That’s the one thing I will say about financial news; at least they report it when unemployment is down and the stock market is up. (Though most Americans, especially most Republicans, still seem really confused on those points as well….) They will attribute it to anything from sunspots to the will of Neptune, but at least they do report good news when it happens. It’s no wonder that people are always convinced that the world is getting more dangerous even as it gets safer and safer.

The real question is what we do about it—how do we get people to understand even these basic facts about the world? I still believe in democracy, but when I see just how painfully ignorant so many people are of such basic facts, I understand why some people don’t. The point of democracy is to represent everyone’s interests—but we also end up representing everyone’s beliefs, and sometimes people’s beliefs just don’t line up with reality. The only way forward I can see is to find a way to make people’s beliefs better align with reality… but even that isn’t so much a strategy as an objective. What do I say to someone who thinks that crime is increasing, beyond showing them the FBI data that clearly indicates otherwise? When someone is willing to override all evidence with what they feel in their heart to be true, what are the rest of us supposed to do?

Experimentally testing categorical prospect theory

Dec 4, JDN 2457727

In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.

The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)

If that’s the case, then switching from 0% to 0.0001% should have no more effect on your behavior than a switch from 0% to 0.001% would have on a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.

Indeed, these games are already quite profitable for the majority of US states, and they are called lotteries.

Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or 0.0000001% (you bought a ticket), even when the prize is $100 million. This is because your utility of $100 million is nowhere near 100 million times as large as your marginal utility of $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic, the units of utility are hectoQALY, and the baseline level is about $100,000.

I apologize for the extremely large number of decimals, but I had to include them in order to show any difference at all. The two values first deviate from the baseline in the ninth decimal place.

Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.

Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.9957322775 hQALY.

You gain a whopping 0.4 microQALY over your whole lifetime. I highly doubt you could even perceive such a difference.

And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.9917399955 hQALY.

You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.

Which shouldn’t be surprising, as (with 99.9996% probability) you have given up four months of your lifetime income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
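Here is the whole lottery calculation in one place, as a short Python sketch; all the assumptions are the ones stated above (log utility, $2 million lifetime income, $100,000 baseline):

```python
import math

BASELINE = 100_000  # utility baseline in dollars

def u(wealth: float) -> float:
    # Lifetime utility in hectoQALY: log of lifetime income over the baseline.
    return math.log(wealth / BASELINE)

income = 2_000_000   # lifetime income
prize = 100_000_000  # Powerball jackpot
p_once = 1e-9        # odds for a single ticket
p_life = 4e-6        # roughly 4,000 tickets over a lifetime
spend = 8_000        # lifetime ticket spending

no_ticket = u(income)
one_ticket = (1 - p_once) * u(income) + p_once * u(income + prize)
weekly_habit = (1 - p_life) * u(income - spend) + p_life * u(income + prize)

print(no_ticket)     # 2.9957322736
print(one_ticket)    # 2.9957322775
print(weekly_habit)  # 2.9917399955
```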

Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or be given the winnings bequeathed by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.

Of course, these are not experimental conditions. So I need to devise a similar game, with stakes that are smaller but still large enough for people’s brains to care about the “almost impossible” category; maybe thousands of dollars? It’s not uncommon for an economics experiment to cost thousands, it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.

But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X to get a probability P of winning $Y*X, and we vary these numbers.

The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”

So I think the first step is to find some dilemmas, cases where people feel ambivalent, and different people differ in their choices. That’s a good role for a pilot study.

Then we take these dilemmas and start varying their probabilities slightly.

In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts.

But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.

Based solely on my own intuition, I guessed that the categories roughly follow this pattern:

Impossible: 0%

Almost impossible: 0.1%

Very unlikely: 1%

Unlikely: 10%

Fairly unlikely: 20%

Roughly even odds: 50%

Fairly likely: 80%

Likely: 90%

Very likely: 99%

Almost certain: 99.9%

Certain: 100%

So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.
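Here is a crude sketch of the categorical model in code. Snapping to the nearest anchor is my own simplifying assumption; the hypothesis only requires that sharp edges exist somewhere between the anchors:

```python
# Hypothesized anchors from the list above, as (probability, label) pairs.
CATEGORIES = [
    (0.0, "impossible"), (0.001, "almost impossible"), (0.01, "very unlikely"),
    (0.1, "unlikely"), (0.2, "fairly unlikely"), (0.5, "roughly even odds"),
    (0.8, "fairly likely"), (0.9, "likely"), (0.99, "very likely"),
    (0.999, "almost certain"), (1.0, "certain"),
]

def categorize(p: float) -> str:
    # "Impossible" and "certain" are treated as exactly sharp categories.
    if p == 0.0:
        return "impossible"
    if p == 1.0:
        return "certain"
    return min(CATEGORIES[1:-1], key=lambda c: abs(c[0] - p))[1]

print(categorize(0.0001))  # almost impossible: a tiny shift away from 0 matters hugely
print(categorize(0.01))    # very unlikely
print(categorize(0.02))    # very unlikely: a change 100x larger changes nothing
```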

Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found them. But the process of both finding the categories and determining whether their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.

One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning. Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.

Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.

How I wish we measured percentage change

JDN 2457415

For today’s post I’m taking a break from issues of global policy to discuss a bit of a mathematical pet peeve. It is an opinion I share with many economists—for instance Miles Kimball has a very nice post about it, complete with some clever analogies to music.

I hate when we talk about percentages in asymmetric terms.

What do I mean by this? Well, here are a few examples.

If my stock portfolio loses 10% one year and then gains 11% the following year, have I gained or lost money? I’ve lost money. Only a little bit—I’m down 0.1%—but still, a loss.

In 2003, Venezuela suffered a depression of -26.7% growth, and then an economic boom of 36.1% growth the following year. What was their new GDP, relative to what it was before the depression? Very slightly less than before. (99.8% of its pre-depression value, to be precise.) You would think that falling 27% and rising 36% would leave you about 9% ahead; in fact it leaves you behind.

Would you rather live in a country with 11% inflation and have constant nominal pay, or live in a country with no inflation and take a 10% pay cut? You should prefer the inflation; in that case your real income only falls by 9.9%, instead of 10%.

We often say that the real interest rate is simply the nominal interest rate minus the rate of inflation, but that’s actually only an approximation. If you have 7% inflation and a nominal interest rate of 11%, your real interest rate is not actually 4%; it is 3.74%. If you have 2% inflation and a nominal interest rate of 0%, your real interest rate is not actually -2%; it is -1.96%.
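The exact relation behind those numbers, the Fisher equation, divides rather than subtracts:

```python
def real_rate(nominal: float, inflation: float) -> float:
    # Exact real rate: (1 + nominal) / (1 + inflation) - 1.
    return (1 + nominal) / (1 + inflation) - 1

print(real_rate(0.11, 0.07))  # 0.0374  -> 3.74%, not 4%
print(real_rate(0.00, 0.02))  # -0.0196 -> -1.96%, not -2%
```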

This is what I mean by asymmetric:

Rising 10% and falling 10% do not cancel each other out. To cancel out a fall of 10%, you must actually rise 11.1%.

Gaining 20% and losing 20% do not cancel each other out. To cancel out a loss of 20%, you need a gain of 25%.

Is it starting to bother you yet? It sure bothers me.

Worst of all is the fact that the way we usually measure percentages, losses are bounded at 100% while gains are unbounded. To cancel a loss of 100%, you’d need a gain of infinity.
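In the usual system, the gain required to cancel a loss of fraction L is L/(1 - L), which blows up as the loss approaches 100%:

```python
def gain_to_cancel(loss: float) -> float:
    # A loss of fraction `loss` needs a gain of loss / (1 - loss) to break even.
    return loss / (1 - loss)

print(gain_to_cancel(0.10))  # 0.111... -> need +11.1%
print(gain_to_cancel(0.20))  # 0.25     -> need +25%
print(gain_to_cancel(0.99))  # 99.0     -> need +9,900%
# gain_to_cancel(1.00) would divide by zero: no finite gain cancels -100%.
```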

There are two basic ways of solving this problem: The simple way, and the good way.

The simple way is to just start measuring percentages symmetrically, by including both the starting and ending values in the calculation and averaging them.

That is, instead of using this formula:

% change = 100% * (new – old)/(old)

You use this one:

% change = 100% * (new – old)/((new + old)/2)

In this new system, percentage changes are symmetric.

Suppose a country’s GDP rises from $5 trillion to $6 trillion.

In the old system we’d say it has risen 20%:

100% * ($6 T – $5 T)/($5 T) = 20%

In the symmetric system, we’d say it has risen 18.2%:

100% * ($6 T – $5 T)/($5.5 T) = 18.2%

Suppose it falls back to $5 trillion the next year.

In the old system we’d say it has only fallen 16.7%:

100% * ($5 T – $6 T)/($6 T) = -16.7%

But in the symmetric system, we’d say it has fallen 18.2%.

100% * ($5 T – $6 T)/($5.5 T) = -18.2%

In the old system, the gain of 20% was somehow canceled by a loss of 16.7%. In the symmetric system, the gain of 18.2% was canceled by a loss of 18.2%, just as you’d expect.
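The symmetric method is trivial to implement, if you want to play with it:

```python
def sym_change(new: float, old: float) -> float:
    # Symmetric percentage change: difference over the average of the two values.
    return 100 * (new - old) / ((new + old) / 2)

print(sym_change(6, 5))  # 18.18...: up 18.2%
print(sym_change(5, 6))  # -18.18...: down 18.2%, exactly mirroring the rise
```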

This also removes the problem of losses being bounded but gains being unbounded. Now both losses and gains are bounded, at the rather surprising value of 200%.

Formally, that’s because of these limits:
\lim_{x \to \infty} \frac{x - 1}{(x + 1)/2} = 2

\lim_{x \to \infty} \frac{0 - x}{(x + 0)/2} = -2

It might be easier to intuit these limits with an example. Suppose something explodes from a value of 1 to a value of 10,000,000. In the old system, this means it rose 1,000,000,000%. In the symmetric system, it rose 199.9999%. Like the speed of light, you can approach 200%, but never quite get there.

100% * (10^7 – 1)/(5*10^6 + 0.5) = 199.9999%

Gaining 200% in the symmetric system is gaining an infinite amount. That’s… weird, to say the least. Also, losing everything is now losing… 200%?

This is simple to explain and compute, but it’s ultimately not the best way.

The best way is to use logarithms.

As you may vaguely recall from math classes past, logarithms are the inverse of exponents.

Since 2^4 = 16, log_2 (16) = 4.

The natural logarithm ln() is the most fundamental for deep mathematical reasons I don’t have room to explain right now. It uses the base e, a transcendental number that starts 2.718281828459045…

To the uninitiated, this probably seems like an odd choice—no rational number has a natural logarithm that is itself a rational number (well, other than 1, since ln(1) = 0).

But perhaps it will seem a bit more comfortable once I show you that natural logarithms are remarkably close to percentages, particularly for the small changes in which percentages make sense.

We define something called log points such that the change in log points is 100 times the natural logarithm of the ratio of the two:

log points = 100 * ln(new / old)

This is symmetric because of the following property of logarithms:

ln(a/b) = – ln(b/a)

Let’s return to the country that saw its GDP rise from $5 trillion to $6 trillion.

The logarithmic change is 18.2 log points:

100 * ln($6 T / $5 T) = 100 * ln(1.2) = 18.2

If it falls back to $5 T, the change is -18.2 log points:

100 * ln($5 T / $6 T) = 100 * ln(0.833) = -18.2

Notice how in the symmetric percentage system, it rose and fell 18.2%; and in the logarithmic system, it rose and fell 18.2 log points. They are almost interchangeable, for small percentages.
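And the logarithmic method, for comparison:

```python
import math

def log_points(new: float, old: float) -> float:
    # Change in log points: 100 times the natural log of the ratio.
    return 100 * math.log(new / old)

print(log_points(6, 5))  # 18.23...: up 18.2 log points
print(log_points(5, 6))  # -18.23...: down 18.2 log points
```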

In this graph, the old value is assumed to be 1. The horizontal axis is the new value, and the vertical axis is the percentage change we would report by each method.

[Graph: percentage change reported by each method, for small changes]

The green line is the usual way we measure percentages.

The red curve is the symmetric percentage method.

The blue curve is the logarithmic method.

For percentages within +/- 10%, all three methods are about the same. Then both new methods give about the same answer all the way up to changes of +/- 40%. Since most real changes in economics are within that range, the symmetric method and the logarithmic method are basically interchangeable.

However, for very large changes, even these two methods diverge, and in my opinion the logarithm is to be preferred.

[Graph: percentage change reported by each method, for large changes]

The symmetric percentage never gets above 200% or below -200%, while the logarithm is unbounded in both directions.

If you lose everything, the old system would say you have lost 100%. The symmetric system would say you have lost 200%. The logarithmic system would say you have lost infinity log points. If infinity seems a bit too extreme, think of it this way: You have in fact lost everything. No finite proportional gain can ever bring it back. A loss that requires a gain of infinity percent seems like it should be called a loss of infinity percent, doesn’t it? Under the logarithmic system it is.

If you gain an infinite amount, the old system would say you have gained infinity percent. The logarithmic system would also say that you have gained infinity log points. But the symmetric percentage system would say that you have gained 200%. 200%? Counter-intuitive, to say the least.

Log points also have another very nice property that neither the usual system nor the symmetric percentage system has: You can add them.

If you gain 25 log points, lose 15 log points, then gain 10 log points, you have gained 20 log points.

25 – 15 + 10 = 20

Just as you’d expect!

But if you gain 25%, then lose 15%, and then gain 10%, you have gained… 16.9%.

(1 + 0.25)*(1 – 0.15)*(1 + 0.10) = 1.169

If you gain 25% symmetric, lose 15% symmetric, then gain 10% symmetric, that calculation is really a pain. To find the value y that is p symmetric percentage points from the starting value x, you end up needing to solve this equation:

p = 100 * (y – x)/((x+y)/2)

This can be done; it comes out like this:

y = (200 + p)/(200 – p) * x

(This also gives a bit of insight into why it is that the bounds are +/- 200%.)

So by chaining those, we can in fact find out what happens after gaining 25%, losing 15%, then gaining 10% in the symmetric system:

(200 + 25)/(200 – 25)*(200 – 15)/(200 + 15)*(200 + 10)/(200 – 10) = 1.223

Then we can put that back into the symmetric system:

100% * (1.223 – 1)/((1+1.223)/2) = 20.1%

So after all that work, we find out that you have gained 20.1% symmetric. We could almost just add them—because they are so similar to log points—but we can’t quite.
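Here is that symmetric chain in code, using the inversion formula y = (200 + p)/(200 - p) * x from above:

```python
def apply_sym(x: float, p: float) -> float:
    # Move p symmetric percentage points from x: y = (200 + p) / (200 - p) * x.
    return (200 + p) / (200 - p) * x

def sym_change(new: float, old: float) -> float:
    return 100 * (new - old) / ((new + old) / 2)

x = 1.0
for p in [25, -15, 10]:  # gain 25%, lose 15%, gain 10%, all symmetric
    x = apply_sym(x, p)

print(x)                   # 1.2228...
print(sym_change(x, 1.0))  # 20.04...: about 20% (20.1% above reflects intermediate rounding)
```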

Log points actually turn out to be really convenient, once you get the hang of them. The problem is that there’s a conceptual leap for most people to grasp what a logarithm is in the first place.

In particular, the hardest part to grasp is probably that a doubling is not 100 log points.

It is in fact 69 log points, because ln(2) = 0.69.

(Doubling in the symmetric percentage system is gaining 67%—much closer to the log points than to the usual percentage system.)

Calculation of the new value is a bit more difficult than in the usual system, but not as difficult as in the symmetric percentage system.

If you have a change of p log points from a starting point of x, the ending point y is:

y = e^{p/100} * x

The fact that you can add log points ultimately comes from the way exponents add:

e^{p1/100} * e^{p2/100} = e^{(p1+p2)/100}

Suppose US GDP grew 2% in 2007, then 0% in 2008, then fell 8% in 2009 and rose 4% in 2010 (this is approximately true). Where was it in 2010 relative to 2006? Who knows, right? It turns out to be a net loss of 2.4%; so if it was $15 T before, it’s now $14.64 T. If you had just added, you’d think it was only down 2%; you’d have underestimated the loss by about $60 billion.

But if it had grown 2 log points, then 0 log points, then fell 8 log points, then rose 4 log points, the answer is easy: It’s down 2 log points. If it was $15 T before, it’s now $14.70 T. Adding gives the correct answer this time.
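Both versions of that GDP example, side by side:

```python
import math

pct = [0.02, 0.00, -0.08, 0.04]  # annual growth, usual percentage system
factor = 1.0
for g in pct:
    factor *= 1 + g
print(15 * factor)  # 14.64: down 2.4%, not the 2% naive addition suggests

lp = [2, 0, -8, 4]  # the same path, measured in log points
print(15 * math.exp(sum(lp) / 100))  # 14.70: here simple addition is exact
```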

Thus, instead of saying that the stock market fell 4.3%, we should say it fell 4.4 log points. Instead of saying that GDP is up 1.9%, we should say it is up 1.8 log points. For small changes it won’t even matter; if inflation is 1.4%, it is in fact also 1.4 log points. Log points are a bit harder to conceptualize; but they are symmetric and additive, which other methods are not.

Is this a matter of life and death on a global scale? No.

But I can’t write about those every day, now can I?

What does correlation have to do with causation?

JDN 2457345

I’ve been thinking of expanding the topics of this blog into some basic statistics and econometrics. It has been said that there are “Lies, damn lies, and statistics”; but in fact it’s almost the opposite—there are truths, whole truths, and statistics. Almost everything in the world that we know—not merely guess, or suppose, or intuit, or believe, but actually know, with a quantifiable level of certainty—is done by means of statistics. All sciences are based on them, from physics (when they say the Higgs discovery is a “5-sigma event”, that’s a statistic) to psychology, ecology to economics. Far from being something we cannot trust, they are in a sense the only thing we can trust.

The reason it sometimes feels like we cannot trust statistics is that most people do not understand statistics very well; this creates opportunities for both accidental confusion and willful distortion. My hope is therefore to provide you with some of the basic statistical knowledge you need to combat the worst distortions and correct the worst confusions.

I wasn’t quite sure where to start on this quest, but I suppose I have to start somewhere. I figured I may as well start with an adage about statistics that I hear commonly abused: “Correlation does not imply causation.”

Taken at its original meaning, this is definitely true. Unfortunately, it can be easily abused or misunderstood.

In its original meaning, with the formal sense of the word “imply” as logical implication, to “imply” something is an extremely strong statement. It means that you logically entail that result: if the antecedent is true, the consequent must be true, on pain of logical contradiction. Logical implication is for most practical purposes synonymous with mathematical proof. (Unfortunately, it’s not quite synonymous, because of things like Gödel’s incompleteness theorems and Löb’s theorem.)

And indeed, correlation does not logically entail causation; it’s quite possible to have correlations without any causal connection whatsoever, simply by chance. One of my former professors liked to brag that from 1990 to 2010 whether or not she ate breakfast had a statistically significant positive correlation with that day’s closing price for the Dow Jones Industrial Average.

How is this possible? Did my professor actually somehow influence the stock market by eating breakfast? Of course not; if she could do that, she’d be a billionaire by now. And obviously the Dow’s price at 17:00 couldn’t influence whether she ate breakfast at 09:00. Could there be some common cause driving both of them, like the weather? I guess it’s possible; maybe in good weather she gets up earlier and people are in better moods so they buy more stocks. But the most likely reason for this correlation is much simpler than that: She tried a whole bunch of different combinations until she found two things that correlated. At the usual significance level of 0.05, on average you need to try about 20 combinations of totally unrelated things before two of them will show up as correlated. (My guess is she used a number of different stock indexes and varied the starting and ending year. That’s a way to generate a surprisingly large number of degrees of freedom without it seeming like you’re doing anything particularly nefarious.)
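You can watch this happen with simulated data. Here is a rough sketch (the sample sizes are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_pairs, n_obs = 1000, 250  # arbitrary demonstration sizes
false_hits = 0
for _ in range(n_pairs):
    x = rng.normal(size=n_obs)  # two series of pure, unrelated noise
    y = rng.normal(size=n_obs)
    r, p = stats.pearsonr(x, y)
    if p < 0.05:
        false_hits += 1

print(false_hits / n_pairs)  # ~0.05: about 1 in 20 unrelated pairs "correlate"
```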

But how do we know they aren’t actually causally related? Well, I suppose we don’t. Especially if the universe is ultimately deterministic and nonlocal (as I’ve become increasingly convinced by the results of recent quantum experiments), any two data sets could be causally related somehow. But the point is they don’t have to be; you can pick any randomly-generated datasets, pair them up in 20 different ways, and odds are, one of those ways will show a statistically significant correlation.

All of that is true, and important to understand. Finding a correlation between eating grapefruit and getting breast cancer, or between liking bitter foods and being a psychopath, does not necessarily mean that there is any real causal link between the two. If we can replicate these results in a bunch of other studies, that would suggest that the link is real; but typically, such findings cannot be replicated. There is something deeply wrong with the way science journalists operate; they like to publish the new and exciting findings, which 9 times out of 10 turn out to be completely wrong. They never want to talk about the really important and fascinating things that we know are true because we’ve been confirming them over hundreds of different experiments, because that’s “old news”. The journalistic desire to be new and first fundamentally contradicts the scientific requirement of being replicated and confirmed.

So, yes, it’s quite possible to have a correlation that tells you absolutely nothing about causation.

But this is exceptional. In most cases, correlation actually tells you quite a bit about causation.

And this is why I don’t like the adage; “imply” has a very different meaning in common speech, meaning merely to suggest or evoke. Almost everything you say implies all sorts of things in this broader sense, some more strongly than others, even though it may logically entail none of them.

Correlation does in fact suggest causation. Like any suggestion, it can be overridden. If we know that 20 different combinations were tried until one finally yielded a correlation, we have reason to distrust that correlation. If we find a correlation between A and B but there is no logical way they can be connected, we infer that it is simply an odd coincidence.

But when we encounter any given correlation, there are three other scenarios which are far more likely than mere coincidence: A causes B, B causes A, or some other factor C causes A and B. These are also not mutually exclusive; they can all be true to some extent, and in many cases are.

A great deal of work in science, and particularly in economics, is based upon using correlation to infer causation, and has to be—because there is simply no alternative means of approaching the problem.

Yes, sometimes you can do randomized controlled experiments, and some really important new findings in behavioral economics and development economics have been made this way. Indeed, much of the work that I hope to do over the course of my career is based on randomized controlled experiments, because they truly are the foundation of scientific knowledge. But sometimes, that’s just not an option.

Let’s consider an example: In my master’s thesis I found a strong correlation between the level of corruption in a country (as estimated by the World Bank) and the proportion of that country’s income which goes to the top 0.01% of the population. Countries that have higher levels of corruption also tend to have a larger proportion of income that accrues to the top 0.01%. That correlation is a fact; it’s there. There’s no denying it. But where does it come from? That’s the real question.

Could it be pure coincidence? Well, maybe; but when it keeps showing up in several different models with different variables included, that becomes unlikely. A single p < 0.05 will happen about 1 in 20 times by chance; but five in a row should happen less than 1 in 1 million times (assuming they’re independent, which, to be fair, they usually aren’t).

Could it be some artifact of the measurement methods? It’s possible. In particular, I was concerned about the possibility of Halo Effect, in which people tend to assume that something which is better (or worse) in one way is automatically better (or worse) in other ways as well. People might think of their country as more corrupt simply because it has higher inequality, even if there is no real connection. But it would have taken a very large halo bias to explain this effect.

So, does corruption cause income inequality? It’s not hard to see how that might happen: More corrupt individuals could bribe leaders or exploit loopholes to make themselves extremely rich, and thereby increase inequality.

Does inequality cause corruption? This also makes some sense, since it’s a lot easier to bribe leaders and manipulate regulations when you have a lot of money to work with in the first place.

Does something else cause both corruption and inequality? Also quite plausible. Maybe some general cultural factors are involved, or certain economic policies lead to both corruption and inequality. I did try to control for such things, but I obviously couldn’t include all possible variables.

So, which way does the causation run? Unfortunately, I don’t know. I tried some clever statistical techniques to try to figure this out; in particular, I looked at which tends to come first—the corruption or the inequality—and whether they could be used to predict each other, a method called Granger causality. Those results were inconclusive, however. I could neither verify nor exclude a causal connection in either direction. But is there a causal connection? I think so. It’s too robust to just be coincidence. I simply don’t know whether A causes B, B causes A, or C causes A and B.
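For the curious, here is roughly what such a Granger test looks like in practice, run on synthetic stand-in data (the real analysis used the thesis’s actual panel data, not toy series like these):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stand-ins: by construction, past "inequality" drives "corruption".
rng = np.random.default_rng(42)
inequality = rng.normal(size=100)
corruption = np.empty(100)
corruption[0] = rng.normal()
corruption[1:] = 0.8 * inequality[:-1] + 0.3 * rng.normal(size=99)

# Tests whether the series in the second column helps predict the first.
data = np.column_stack([corruption, inequality])
grangercausalitytests(data, maxlag=2)
```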

Imagine trying to do this same study as a randomized controlled experiment. Are we supposed to create two societies and flip a coin to decide which one we make more corrupt? Or which one we give more income inequality? Perhaps you could do some sort of experiment with a proxy for corruption (cheating on a test or something like that), and then have unequal payoffs in the experiment—but that is very far removed from how corruption actually works in the real world, and worse, it’s prohibitively expensive to make really life-altering income inequality within an experimental context. Sure, we can give one participant $1 and the other $1,000; but we can’t give one participant $10,000 and the other $10 million, and it’s the latter that we’re really talking about when we deal with real-world income inequality. I’m not opposed to doing such an experiment, but it can only tell us so much. At some point you need to actually test the validity of your theory in the real world, and for that we need to use statistical correlations.

Or think about macroeconomics; how exactly are you supposed to test a theory of the business cycle experimentally? I guess theoretically you could subject an entire country to a new monetary policy selected at random, but the consequences of being put into the wrong experimental group would be disastrous. Moreover, nobody is going to accept a random monetary policy democratically, so you’d have to introduce it against the will of the population, by some sort of tyranny or at least technocracy. Even if this is theoretically possible, it’s mind-bogglingly unethical.

Now, you might be thinking: But we do change real-world policies, right? Couldn’t we use those changes as a sort of “experiment”? Yes, absolutely; that’s called a quasi-experiment or a natural experiment. They are tremendously useful. But since they are not truly randomized, they aren’t quite experiments. Ultimately, everything you get out of a quasi-experiment is based on statistical correlations.

Thus, abuse of the adage “Correlation does not imply causation” can lead to ignoring whole subfields of science, because there is no realistic way of running experiments in those subfields. Sometimes, statistics are all we have to work with.

This is why I like to say it a little differently:

Correlation does not prove causation. But correlation definitely can suggest causation.