Darkest Before the Dawn: Bayesian Impostor Syndrome

Jan 12 JDN 2458860

At the time of writing, I have just returned from my second Allied Social Sciences Association Annual Meeting, the AEA’s annual conference (or AEA and friends, I suppose, since several other, much smaller economics and finance associations are represented as well). This one was in San Diego, which made it considerably cheaper for me to attend than last year’s. Alas, next year’s conference will be in Chicago. At least flights to Chicago tend to be cheap because it’s a major hub.

My biggest accomplishment of the conference was getting some face-time and career advice from Colin Camerer, the Caltech economist who literally wrote the book on behavioral game theory. Otherwise I would call the conference successful, but not spectacular. Some of the talks were much better than others; I think I liked the one by Emmanuel Saez best, and I also really liked the one on procrastination by Matthew Gibson. I was mildly disappointed by Ben Bernanke’s keynote address; maybe I would have found it more compelling if I were more focused on macroeconomics.

But while sitting through one of the less-interesting seminars I had a clever little idea, which may help explain why Impostor Syndrome seems to occur so frequently even among highly competent, intelligent people. This post is going to be more technical than most, so be warned: Here There Be Bayes. If you fear yon algebra and wish to skip it, I have marked below a good place for you to jump back in.

Suppose there are two types of people, high talent H and low talent L. (In reality there is of course a wide range of talents, so I could assign a distribution over that range, but it would complicate the model without really changing the conclusions.) You don’t know which one you are; all you know is a prior probability h that you are high-talent. It doesn’t matter too much what h is, but for concreteness let’s say h = 0.50; you’ve got to be in the top 50% to be considered “high-talent”.

You are engaged in some sort of activity that comes with a high risk of failure. Many creative endeavors fit this pattern: Perhaps you are a musician looking for a producer, an actor looking for a gig, an author trying to secure an agent, or a scientist trying to publish in a journal. Or maybe you’re a high school student applying to college, or an unemployed worker submitting job applications.

If you are high-talent, you’re more likely to succeed—but still very likely to fail. And even low-talent people don’t always fail; sometimes you just get lucky. Let’s say the probability of success if you are high-talent is p, and if you are low-talent, the probability of success is q. The precise values depend on the domain, but perhaps p = 0.10 and q = 0.02.

Finally, let’s suppose you are highly rational, a good and proper Bayesian. You update all your probabilities based on your observations, precisely as you should.

How will you feel about your talent, after a series of failures?

More precisely, what posterior probability will you assign to being a high-talent individual, after a series of n+k attempts, of which k met with success and n met with failure?

Since failure is likely even if you are high-talent, you shouldn’t update your probability too much on a failure, but each failure should, in fact, lead to revising your probability downward.

Conversely, since success is rare, it should cause you to revise your probability upward—and, as will become important, your revisions upon success should be much larger than your revisions upon failure.

We begin as any good Bayesian does, with Bayes’ Law:

P[H|(~S)^n (S)^k] = P[(~S)^n (S)^k|H] P[H] / P[(~S)^n (S)^k]

In words, this reads: The posterior probability of being high-talent, given that you have observed k successes and n failures, is equal to the probability of observing such an outcome, given that you are high-talent, times the prior probability of being high-talent, divided by the prior probability of observing such an outcome.

We can compute the probabilities on the right-hand side using the binomial distribution:

P[H] = h

P[(~S)^n (S)^k|H] = (n+k C k) p^k (1-p)^n

P[(~S)^n (S)^k] = (n+k C k) p^k (1-p)^n h + (n+k C k) q^k (1-q)^n (1-h)

Plugging all this back in and canceling like terms yields:

P[H|(~S)^n (S)^k] = 1/(1 + [(1-h)/h] [q/p]^k [(1-q)/(1-p)]^n)
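If you would like to verify this algebra numerically, here is a minimal sketch in Python (my own code, not part of the original post; the function names are mine, and the values are the illustrative h = 0.50, p = 0.10, q = 0.02 from above). It computes the posterior both directly from Bayes’ Law and from the closed form, so you can check that the two agree:

```python
from math import comb

# Illustrative values from the post: prior h, success probabilities p (high-talent)
# and q (low-talent).
h, p, q = 0.50, 0.10, 0.02

def posterior_direct(n, k):
    """P[H | n failures, k successes], computed straight from Bayes' Law."""
    like_H = comb(n + k, k) * p**k * (1 - p)**n
    like_L = comb(n + k, k) * q**k * (1 - q)**n
    return like_H * h / (like_H * h + like_L * (1 - h))

def posterior_closed_form(n, k):
    """The simplified form: 1 / (1 + [(1-h)/h] [q/p]^k [(1-q)/(1-p)]^n)."""
    return 1 / (1 + ((1 - h) / h) * (q / p)**k * ((1 - q) / (1 - p))**n)

# 19 failures and no successes, then 19 failures plus one success:
print(posterior_direct(19, 0), posterior_closed_form(19, 0))  # both ~0.17
print(posterior_direct(19, 1), posterior_closed_form(19, 1))  # both ~0.50
```

Note that 19 straight failures drive the posterior down to about 0.17, while a single success brings it back to roughly 0.50; these numbers will reappear below.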

This turns out to be particularly convenient in log-odds form:

L[X] = ln [ P(X)/P(~X) ]

L[H|(~S)^n (S)^k] = ln [h/(1-h)] + k ln [p/q] + n ln [(1-p)/(1-q)]

Since p > q, ln[p/q] is a positive number, while ln[(1-p)/(1-q)] is a negative number. This corresponds to the fact that you will increase your posterior when you observe a success (k increases by 1) and decrease your posterior when you observe a failure (n increases by 1).

But when p and q are small, it turns out that ln[p/q] is much larger in magnitude than ln[(1-p)/(1-q)]. For the numbers I gave above, p = 0.10 and q = 0.02, ln[p/q] = 1.609 while ln[(1-p)/(1-q)] = -0.085. You will therefore update substantially more upon a success than on a failure.
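To make that asymmetry concrete, here is a quick back-of-the-envelope check (my own snippet, using the same illustrative p and q):

```python
from math import log

p, q = 0.10, 0.02
gain = log(p / q)                # log-odds gained from one success: about +1.609
loss = log((1 - p) / (1 - q))    # log-odds lost from one failure: about -0.085
print(gain, loss, gain / -loss)  # one success offsets roughly 19 failures
```

With these numbers, a single success undoes roughly nineteen failures’ worth of downward revision.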

Yet successes are rare! This means that any given success will most likely be preceded by a string of failures. This results in what I will call the darkest-before-dawn effect: Your opinion of your own talent will tend to be at its very worst in the moments just preceding a major success.

I’ve graphed the results of a few simulations illustrating this: On the X-axis is the number of overall attempts made thus far, and on the Y-axis is the posterior probability of being high-talent. The simulated individual undergoes randomized successes and failures with the probabilities I chose above.

[Figure: Bayesian_Impostor_full. Posterior probability of being high-talent (Y-axis) over successive attempts (X-axis), for 10 simulated runs.]
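If you would like to generate trajectories like these yourself, here is a minimal simulation sketch in Python (my own code, not the script used to produce these graphs; the seeds and the number of attempts are arbitrary). It simulates a genuinely high-talent individual and records their posterior after each attempt:

```python
import random

h, p, q = 0.50, 0.10, 0.02  # prior and success probabilities from the post
attempts = 30               # attempts per simulated individual

def simulate(seed):
    """Posterior P[H | history] after each attempt, for a truly high-talent person."""
    rng = random.Random(seed)
    n = k = 0  # failures, successes
    history = []
    for _ in range(attempts):
        if rng.random() < p:  # success probability is p, since they are high-talent
            k += 1
        else:
            n += 1
        post = 1 / (1 + ((1 - h) / h) * (q / p)**k * ((1 - q) / (1 - p))**n)
        history.append(round(post, 2))
    return history

for run in range(1, 11):  # ten runs, as in the graph above
    print(run, simulate(run))
```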

There are 10 simulations on that one graph, which may make it a bit confusing. So let’s focus in on two runs in particular, which turned out to be run 6 and run 10:

[If you skipped over the math, here’s a good place to come back. Welcome!]

[Figure: Bayesian_Impostor_focus. The same posterior trajectories, showing only run 6 and run 10.]

Run 6 is a lucky little devil. They had an immediate success, followed by another success on their fourth attempt. As a result, they quickly update their posterior to conclude that they are almost certainly a high-talent individual, and even after a string of failures beyond that they never lose faith.

Run 10, on the other hand, probably has Impostor Syndrome. Failure after failure after failure slowly eroded their self-esteem, leading them to conclude that they are probably a low-talent individual. And then, suddenly, a miracle occurs: On their 20th attempt, at last they succeed, and their whole outlook changes; perhaps they are high-talent after all.

Note that all the simulations are of high-talent individuals. Run 6 and run 10 are equally competent. Ex ante, the probability of success for run 6 and run 10 was exactly the same. Moreover, both individuals are completely rational, in the sense that they are doing perfect Bayesian updating.

And yet, if you compare their self-evaluations after the 19th attempt, they could hardly look more different: Run 6 is 85% sure that they are high-talent, even though they’ve been in a slump for the last 13 attempts. Run 10, on the other hand, is 83% sure that they are low-talent, because they’ve never succeeded at all.

It is darkest just before the dawn: Run 10’s self-evaluation is at its very lowest right before they finally have a success, at which point their self-esteem surges upward, almost to baseline. With just one more success, their opinion of themselves would in fact converge to the same as Run 6’s.

This may explain, at least in part, why Impostor Syndrome is so common. When successes are few and far between—even for the very best and brightest—then a string of failures is the most likely outcome for almost everyone, and it can be difficult to tell whether you are so bright after all. Failure after failure will slowly erode your self-esteem (and should, in some sense; you’re being a good Bayesian!). You’ll observe a few lucky individuals who get their big break right away, and it will only reinforce your fear that you’re not cut out for this (whatever this is) after all.

Of course, this model is far too simple: People don’t just come in “talented” and “untalented” varieties, but have a wide range of skills that lie on a continuum. There are degrees of success and failure as well: You could get published in some obscure field journal hardly anybody reads, or in the top journal in your discipline. You could get into the University of Northwestern Ohio, or into Harvard. And people face different barriers to success that may have nothing to do with talent—perhaps why marginalized people such as women, racial minorities, LGBT people, and people with disabilities tend to have the highest rates of Impostor Syndrome. But I think the overall pattern is right: People feel like impostors when they’ve experienced a long string of failures, even when that is likely to occur for everyone.

What can be done with this information? Well, it leads me to three pieces of advice:

1. When success is rare, find other evidence. If truly “succeeding” (whatever that means in your case) is unlikely on any given attempt, don’t try to evaluate your own competence based on that extremely noisy signal. Instead, look for other sources of data: Do you seem to have the kinds of skills that people who succeed in your endeavors have—preferably based on the most objective measures you can find? Do others who know you or your work have a high opinion of your abilities and your potential? This, perhaps, is the greatest mistake we make when falling prey to Impostor Syndrome: We imagine that we have somehow “fooled” people into thinking we are competent, rather than realizing that other people’s opinions of us are actually evidence that we are in fact competent. Use this evidence. Update your posterior on that.

2. Don’t over-update your posterior on failures—and don’t under-update on successes. Very few living humans (if any) are true and proper Bayesians. We use a variety of heuristics when judging probability, most notably the representativeness and availability heuristics. These will cause you to over-respond to failures, because this string of failures makes you “look like” the kind of person who would continue to fail (representativeness), and you can’t conjure to mind any clear examples of success (availability). Keeping this in mind, your update upon experiencing failure should be small, probably as small as you can make it. Conversely, when you do actually succeed, even in a small way, don’t dismiss it. Don’t look for reasons why it was just luck—it’s always luck, at least in part, for everyone. Try to update your self-evaluation more when you succeed, precisely because success is rare for everyone.

3. Don’t lose hope. The next one really could be your big break. While astronomically baffling (no, it’s darkest at midnight, in between dusk and dawn!), “it is always darkest before the dawn” really does apply here. You are likely to feel the worst about yourself at the very point where you are about to finally succeed. The lowest self-esteem you ever feel will be just before you finally achieve a major success. Of course, you can’t know if the next one will be it—or if it will take five, or ten, or twenty more tries. And yes, each new failure will hurt a little bit more, make you doubt yourself a little bit more. But if you are properly grounded by what others think of your talents, you can stand firm, until that one glorious day comes and you finally make it.

Now, if I could only manage to take my own advice….

Tithing makes quite a lot of sense

Dec 22 JDN 2458840

Christmas is coming soon, and it is a season of giving: Not only gifts to those we love, but also to charities that help people around the world. It’s a theme of some of our most classic Christmas stories, like A Christmas Carol. (I do have to admit: Scrooge really isn’t wrong for not wanting to give to some random charity without any chance to evaluate it. But I also get the impression he wasn’t giving a lot to evaluated charities either.) And people do really give more around this time of year: Charitable donation rates peak in November and December (though that may also have something to do with tax deductions).

Where should we give? This is not an easy question, but it’s one that we now have tools to answer: There are various independent charity evaluation agencies, like GiveWell and Charity Navigator, which can at least provide some idea of which charities are most cost-effective.

How much should we give? This question is a good deal harder.

Perhaps a perfect being would determine their own precise marginal utility of wealth, and the marginal utility of spending on every possible charity, and give of their wealth to the best possible charity up until those two marginal utilities are equal. Since $1 to UNICEF or the Against Malaria Foundation saves about 0.02 QALY, and (unless you’re a billionaire) you don’t have enough money to meaningfully affect the budget of UNICEF, you’d probably need to give until you are yourself at the UN poverty level of $1.90 per day.

I don’t know of anyone who does this. Even Peter Singer, who writes books that essentially tell us to do this, doesn’t do this. I’m not sure it’s humanly possible to do this. Indeed, I’m not even so sure that a perfect being would do it, since it would require destroying their own life and their own future potential.

How about we all give 10%? In other words, how about we tithe? Yes, it sounds arbitrary—because it is. It could just as well have been 8% or 11%. Perhaps one-tenth feels natural to a base-10 culture made of 10-fingered beings, and if we used a base-12 numeral system we’d think in terms of giving one-twelfth instead. But 10% feels reasonable to a lot of people, it has a lot of cultural support behind it already, and it has become a Schelling point for coordination on this otherwise intractable problem. We need to draw the line somewhere, and it might as well be there.

As Slate Star Codex put it:

It’s ten percent because that’s the standard decreed by Giving What We Can and the effective altruist community. Why should we believe their standard? I think we should believe it because if we reject it in favor of “No, you are a bad person unless you give all of it,” then everyone will just sit around feeling very guilty and doing nothing. But if we very clearly say “You have discharged your moral duty if you give ten percent or more,” then many people will give ten percent or more. The most important thing is having a Schelling point, and ten percent is nice, round, divinely ordained, and – crucially – the Schelling point upon which we have already settled. It is an active Schelling point. If you give ten percent, you can have your name on a nice list and get access to a secret forum on the Giving What We Can site which is actually pretty boring.

It’s ten percent because definitions were made for Man, not Man for definitions, and if we define “good person” in a way such that everyone is sitting around miserable because they can’t reach an unobtainable standard, we are stupid definition-makers. If we are smart definition-makers, we will define it in whichever way which makes it the most effective tool to convince people to give at least that much.

I think it would be also reasonable to adjust this proportion according to your household income. If you are extremely poor, give a token amount: Perhaps 1% or 2%. (As it stands, most poor people already give more than this, and most rich people give less.) If you are somewhat below the median household income, give a bit less: Perhaps 6% or 8%. (I currently give 8%; I plan to increase to 10% once I get a higher-paying job after graduation.) If you are somewhat above, give a bit more: Perhaps 12% or 15%. If you are spectacularly rich, maybe you should give as much as 25%.

Is 10% enough? Well, actually, if everyone gave, even 1% would probably be enough. The total GDP of the First World is about $40 trillion; 1% of that is $400 billion per year, which is more than enough to end world hunger. But since we know that not everyone will give, we need to adjust our standard upward so that those who do give will give enough. (There’s actually an optimization problem here which is basically equivalent to finding a monopoly’s profit-maximizing price.) And just ending world hunger probably isn’t enough; there is plenty of disease to cure, education to improve, research to do, and ecology to protect. If say a third of First World people give 10%, that would be about $1.3 trillion, which would be enough money to at least make a huge difference in all those areas.
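For those who like to see the arithmetic written out, here is a rough back-of-the-envelope version (my own snippet, using the same round numbers as above):

```python
first_world_gdp = 40e12  # roughly $40 trillion per year, the round figure used above

print(0.01 * first_world_gdp)        # ~$400 billion per year if everyone gave 1%
print(1/3 * 0.10 * first_world_gdp)  # ~$1.3 trillion per year if a third gave 10%
```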

You can decide for yourself where you think you should draw the line. But 10% is a pretty good benchmark, and above all—please, give something. If you give anything, you are probably already above average. A large proportion of people give nothing at all. (Only 24% of US tax returns include a charitable deduction—though, to be fair, a lot of us donate but don’t itemize deductions. Even once you account for that, only about 60% of US households give to charity in any given year.)

To a first approximation, all human behavior is social norms

Dec 15 JDN 2458833

The language we speak, the food we eat, and the clothes we wear—indeed, the fact that we wear clothes at all—are all the direct result of social norms. But norms run much deeper than this: Almost everything we do is more norm than not.

Why do you sleep and wake up at a particular time of day? For most people, the answer is that they need to get up to go to work. Why do you need to go to work at that specific time? Why does almost everyone go to work at the same time? Social norms.

Even the most extreme human behaviors are often most comprehensible in terms of social norms. The most effective predictive models of terrorism are based on social networks: You are much more likely to be a terrorist if you know people who are terrorists, and much more likely to become a terrorist if you spend a lot of time talking with terrorists. Cultists and conspiracy theorists seem utterly baffling if you imagine that humans form their beliefs rationally—and totally unsurprising if you realize that humans mainly form their beliefs by matching those around them.

For a long time, economists have ignored social norms at our peril; we’ve assumed that financial incentives will be sufficient to motivate behavior, when social incentives can very easily override them. Indeed, it is entirely possible for a financial incentive to have a negative effect, when it crowds out a social incentive: A good example is a friend who would gladly come over to help you with something as a friend, but then becomes reluctant if you offer to pay him $25. I previously discussed another example, where taking a mentor out to dinner sounds good but paying him seems corrupt.

Why do you drive on the right side of the road (or the left, if you’re in Britain)? The law? Well, the law is already a social norm. But in fact, it’s hardly just that. You probably sometimes speed or run red lights, which are also in violation of traffic laws. Yet somehow driving on the right side seems to be different. Well, that’s because driving on the right has a much stronger norm—and in this case, that norm is self-enforcing with the risk of severe bodily harm or death.

This is a good example of why it isn’t necessary for everyone to choose to follow a norm for that norm to have a great deal of power. As long as the norms include some mechanism for rewarding those who follow and punishing those who don’t, norms can become compelling even to those who would prefer not to obey. Sometimes it’s not even clear whether people are following a norm or following direct incentives, because the two are so closely aligned.

Humans are not the only social species, but we are by far the most social species. We form larger, more complex groups than any other animal; we form far more complex systems of social norms; and we follow those norms with slavish obedience. Indeed, I’m a little suspicious of some of the evolutionary models predicting the evolution of social norms, because they predict it too well; they seem to suggest that it should arise all the time, when in fact it’s only a handful of species who exhibit it at all and only we who build our whole existence around it.

Along with our extreme capacity for altruism, this is another way that human beings actually deviate more from the infinite identical psychopaths of neoclassical economics than most other animals. Yes, we’re smarter than other animals; other animals are more likely to make mistakes (though certainly we make plenty of our own). But most other animals aren’t motivated by entirely different goals than individual self-interest (or “evolutionary self-interest” in a Selfish Gene sort of sense) the way we typically are. Other animals try to be selfish and often fail; we try not to be selfish and usually succeed.

Economics experiments often go out of their way to exclude social motives as much as possible—anonymous random matching with no communication, for instance—and still end up failing to do so. Human behavior in experiments is consistent, systematic—and almost never completely selfish.

Once you start looking for norms, you see them everywhere. Indeed, it becomes hard to see anything else. To a first approximation, all human behavior is social norms.

Unsolved problems

Oct 20 JDN 2458777

The beauty and clearness of the dynamical theory, which asserts heat and light to be modes of motion, is at present obscured by two clouds. The first came into existence with the undulatory theory of light, and was dealt with by Fresnel and Dr. Thomas Young; it involved the question, how could the earth move through an elastic solid, such as essentially is the luminiferous ether? The second is the Maxwell-Boltzmann doctrine regarding the partition of energy.


~ Lord Kelvin, April 27, 1900

The above quote is part of a speech where Kelvin basically says that physics is a completed field, with just these two little problems to clear up, “two clouds” in a vast clear horizon. Those “two clouds” Kelvin talked about, regarding the ‘luminiferous ether’ and the ‘partition of energy’? They are, respectively, relativity and quantum mechanics. Almost 120 years later we still haven’t managed to really solve them, at least not in a way that works consistently as part of one broader theory.

But I’ll give Kelvin this: He knew where the problems were. He vastly underestimated how complex and difficult those problems would be, but he knew where they were.

I’m not sure I can say the same about economists. We don’t seem to have even reached the point where we agree where the problems are. Consider another quotation:

For a long while after the explosion of macroeconomics in the 1970s, the field looked like a battlefield. Over time however, largely because facts do not go away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism and herding. None of this deadly however. The state of macro is good.


~ Olivier Blanchard, 2008

The timing of Blanchard’s remark is particularly ominous: It is much like the turkey who declares, the day before Thanksgiving, that his life is better than ever.

But the content is also important: Blanchard didn’t say that microeconomics is in good shape (which I think one could make a better case for). He didn’t even say that economics, in general, is in good shape. He specifically said, right before the greatest economic collapse since the Great Depression, that macroeconomics was in good shape. He didn’t merely underestimate the difficulty of the problem; he didn’t even see where the problem was.

If you search the Web, you can find a few lists of unsolved problems in economics. Wikipedia has such a list that I find particularly bad; Mike Moffatt offers a better list that still has significant blind spots.

Wikipedia’s list is full of esoteric problems that require deeply faulty assumptions to even exist, like the ‘American option problem’ which assumes that the Black-Scholes model is even remotely an accurate description of how option prices work, or the ‘tatonnement problem’ which ignores the fact that there may be many equilibria and we might never reach one at all, or the problem they list under ‘revealed preferences’ which doesn’t address any of the fundamental reasons why the entire concept of revealed preferences may fail once we apply a realistic account of cognitive science. (I could go pretty far afield with that last one—and perhaps I will in a later post—but for now, suffice it to say that human beings often freely choose to do things that we later go on to regret.) I think the only one that Wikipedia’s list really gets right is ‘Unified models of human biases’. The ‘home bias in trade’ and ‘Feldstein-Horioka Puzzle’ problems are sort of edging toward genuine problems, but they’re bound up in too many false assumptions to really get at the right question, which is actually something like “How do we deal with nationalism?” Referring to the ‘Feldstein-Horioka Puzzle’ misses the forest for the trees. Likewise, the ‘PPP Puzzle’ and the ‘Exchange rate disconnect puzzle’ (and to some extent the ‘equity premium puzzle’ as well) are really side effects of a much deeper problem, which is that financial markets in general are ludicrously volatile and inefficient and we have no idea why.

And Wikipedia’s list doesn’t have some of the largest, most important problems in economics. Moffatt’s list does better, including good choices like “What Caused the Industrial Revolution?”, “What Is the Proper Size and Scope of Government?”, and “What Truly Caused the Great Depression?”, but it also includes some of the more esoteric problems like the ‘equity premium puzzle’ and the ‘endogeneity of money’. The way he states the problem “What Causes the Variation of Income Among Ethnic Groups?” suggests that he doesn’t quite understand what’s going on there either. More importantly, Moffatt still leaves out very obviously important questions like “How do we achieve economic development in poor countries?” (Or as I sometimes put it, “What did South Korea do from 1950 to 2000, and how can we do it again?”), “How do we fix shortages of housing and other necessities?”, “What is causing the global rise of income and wealth inequality?”, “How altruistic are human beings, to whom, and under what conditions?” and “What makes financial markets so unstable?” Ironically, ‘Unified models of human biases’, the one problem that Wikipedia got right, is missing from Moffatt’s list.

And I’m also humble enough to realize that some of the deepest problems in economics may be ones that we don’t even quite know how to formulate yet. We like to pretend that economics is a mature science, almost on the coattails of physics; but it’s really a very young science, more like psychology. We go through these ‘cargo cult science‘ rituals of p-values and econometric hypothesis tests, but there are deep, basic forces we don’t understand. We have precisely prepared all the apparatus for the detection of the phlogiston, and by God, we’ll get that 0.05 however we have to. (Think I’m being too harsh? “Real Business Cycle” theory essentially posits that the Great Depression was caused by everyone deciding that they weren’t going to work for a few years, and as whole countries fell into the abyss from failing financial markets, most economists still clung to the Efficient Market Hypothesis.) Our whole discipline requires major injections of intellectual humility: We not only don’t have all the answers; we’re not even sure we have all the questions.

I think the esoteric nature of questions like ‘the equity premium puzzle’ and the ‘tatonnement problem’ is precisely the source of their appeal: It’s the sort of thing you can say you’re working on and sound very smart, because the person you’re talking to likely has no idea what you’re talking about. (Or else they are a fellow economist, and thus in on the con.) If you said that you’re trying to explain why poor countries are poor and why rich countries are rich—and if economics isn’t doing that, then what in the world are we doing?—you’d have to admit that we honestly have only the faintest idea, and that millions of people have suffered from bad advice economists gave their governments based on ideas that turned out to be wrong.

It’s really quite problematic how closely economists are tied to policymaking (except when we do really know what we’re talking about?). We’re trying to do engineering without even knowing physics. Maybe there’s no way around it: We have to make some sort of economic policy, and it makes more sense to do it based on half-proven ideas than on completely unfounded ideas. (Engineering without physics worked pretty well for the Romans, after all.) But it seems to me that we could be relying more, at least for the time being, on the experiences and intuitions of the people who have worked on the ground, rather than on sophisticated theoretical models that often turn out to be utterly false. We could eschew ‘shock therapy‘ approaches that try to make large interventions in an economy all at once, in favor of smaller, subtler adjustments whose consequences are more predictable. We could endeavor to focus on the cases where we do have relatively clear knowledge (like rent control) and avoid those where the uncertainty is greatest (like economic development).

At the very least, we could admit what we don’t know, and admit that there is probably a great deal we don’t know that we don’t know.

Billionaires bear the burden of proof

Sep 15 JDN 2458743

A king sits atop a golden throne, surrounded by a thousand stacks of gold coins six feet high. A hundred starving peasants beseech him for just one gold coin each, so that they might buy enough food to eat and clothes for the winter. The king responds: “How dare you take my hard-earned money!”

This is essentially the world we live in today. I really cannot emphasize enough how astonishingly, horrifically, mind-bogglingly rich billionaires are. I am writing this sentence at 13:00 PDT on September 8, 2019. A thousand seconds ago was 12:43, about when I started this post. A million seconds ago was Wednesday, August 28. A billion seconds ago was 1987. I will be a billion seconds old this October.

Jeff Bezos has $170 billion. 170 billion seconds ago was a thousand years before the construction of the Great Pyramid. To get as much money as he has gaining one dollar per second (that’s $3600 an hour!), Jeff Bezos would have had to work for as long as human civilization has existed.

At a more sensible wage like $30 per hour (still better than most people get), how long would it take to amass $170 billion? Oh, just about 600,000 years—or about twice the length of time that Homo sapiens has existed on Earth.

How does this compare to my fictional king with a thousand stacks of gold? A typical gold coin is worth about $500, depending on its age and condition. Coins are about 2 millimeters thick. So a thousand stacks, each 2 meters high, would be about $500*1000*1000 = $500 million. This king isn’t even a billionaire! Jeff Bezos has three hundred times as much as him.

Coins are about 30 millimeters in diameter, so assuming they are packed in neat rows, these thousand stacks of gold coins would fill a square about 0.9 meters to a side—in our silly Imperial units, that’s 3 feet wide, 3 feet deep, 6 feet tall. If Jeff Bezos’s stock portfolio were liquidated into gold coins (which would require about 2% of the world’s entire gold supply and surely tank the market), the neat rows of coins stacked a thousand high would fill a square over 16 meters to a side—that’s a 50-foot-wide block of gold coins. Smaug’s hoard in The Hobbit was probably about the same amount of money as what Jeff Bezos has.
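If you want to check these back-of-the-envelope figures yourself, here is a rough sketch (my own snippet, using the same assumptions stated above: $170 billion, $500 per coin, coins 2 mm thick and 30 mm across, stacks 2 meters tall):

```python
from math import sqrt

net_worth = 170e9        # Jeff Bezos's wealth, roughly, at the time of writing
coin_value = 500         # dollars per gold coin (the assumption above)
coin_thickness = 0.002   # meters (2 mm)
coin_diameter = 0.030    # meters (30 mm)
coins_per_stack = 2.0 / coin_thickness  # a 2-meter stack holds 1,000 coins

hours_per_year = 24 * 365.25              # working every hour of every day
print(net_worth / 3600 / hours_per_year)  # ~5,400 years at $1 per second
print(net_worth / 30 / hours_per_year)    # ~650,000 years at $30 per hour

print(1000 * coins_per_stack * coin_value)  # the king's hoard: $500 million

bezos_stacks = net_worth / coin_value / coins_per_stack
print(sqrt(bezos_stacks) * coin_diameter)   # ~17 meters to a side
```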

And yet, somehow there are still people who believe that he deserves this money, that he earned it, that to take even a fraction of it away would be a crime tantamount to theft or even slavery.

Their arguments can be quite seductive: How would you feel about the government taking your hard-earned money? Entrepreneurs are brilliant, dedicated, hard-working people; why shouldn’t they be rewarded? What crime do CEOs commit by selling products at low prices?

The way to cut through these arguments is to never lose sight of the numbers. In defense of a man who had $5 million or even $20 million, such an argument might make sense. I can imagine how someone could contribute enough to humanity to legitimately deserve $20 million. I can understand how a talented person might work hard enough to earn $5 million. But it’s simply not possible for any human being to be so brilliant, so dedicated, so hard-working, or make such a contribution to the world, that they deserve to have more dollars than there have been seconds since the Great Pyramid.

It’s not necessary to find specific unethical behaviors that brought a billionaire to where he (and yes, it’s nearly always he) is. They are generally there to be found: At best, one becomes a billionaire by sheer luck. Typically, one becomes a billionaire by exerting monopoly power. At worst, one can become a billionaire by ruthless exploitation or even mass murder. But it’s not our responsibility to point out a specific crime for every specific billionaire.

The burden of proof is on billionaires: Explain how you can possibly deserve that much money.

It’s not enough to point to some good things you did, or emphasize what a bold innovator you are: You need to explain what you did that was so good that it deserves to be rewarded with Smaug-level hoards of wealth. Did you save the world from a catastrophic plague? Did you end world hunger? Did you personally prevent a global nuclear war? I could almost see the case for Norman Borlaug or Jonas Salk earning a billion dollars (neither did, by the way). But Jeff Bezos? You didn’t save the world. You made a company that sells things cheaply and ships them quickly. Get over yourself.

Where exactly do we draw that line? That’s a fair question. $20 million? $100 million? $500 million? Maybe there shouldn’t even be a hard cap. There are many other approaches we could take to reducing this staggering inequality. Previously I have proposed a tax system that gets continuously more progressive forever, as well as a CEO compensation cap based on the pay of the lowliest employees. We could impose a wealth tax, as Elizabeth Warren has proposed. Or we could simply raise the top marginal rate on income tax to something more like what it was in the 1960s. Or as Republicans today would call it, radical socialism.

The backfire effect has been greatly exaggerated

Sep 8 JDN 2458736

Do a search for “backfire effect” and you’re likely to get a large number of results, many of them from quite credible sources. The Oatmeal did an excellent comic on it. The basic notion is simple: “[…]some individuals when confronted with evidence that conflicts with their beliefs come to hold their original position even more strongly.”

The implications of this effect are terrifying: There’s no point in arguing with anyone about anything controversial, because once someone strongly holds a belief there is nothing you can do to ever change it. Beliefs are fixed and unchanging, stalwart cliffs against the petty tides of evidence and logic.

Fortunately, the backfire effect is not actually real—or if it is, it’s quite rare. Over many years those seemingly-ineffectual tides can erode those cliffs down and turn them into sandy beaches.

The most recent studies with larger samples and better statistical analysis suggest that the typical response to receiving evidence contradicting our beliefs is—lo and behold—to change our beliefs toward that evidence.

To be clear, very few people completely revise their worldview in response to a single argument. Instead, they try to make a few small changes and fit them in as best they can.

But would we really expect otherwise? Worldviews are holistic, interconnected systems. You’ve built up your worldview over many years of education, experience, and acculturation. Even when someone presents you with extremely compelling evidence that your view is wrong, you have to weigh that against everything else you have experienced prior to that point. It’s entirely reasonable—rational, even—for you to try to fit the new evidence in with a minimal overall change to your worldview. If it’s possible to make sense of the available evidence with only a small change in your beliefs, it makes perfect sense for you to do that.

What if your whole worldview is wrong? You might have based your view of the world on a religion that turns out not to be true. You might have been raised into a culture with a fundamentally incorrect concept of morality. What if you really do need a radical revision—what then?

Well, that can happen too. People change religions. They abandon their old cultures and adopt new ones. This is not a frequent occurrence, to be sure—but it does happen. It happens, I would posit, when someone has been bombarded with contrary evidence not once, not a few times, but hundreds or thousands of times, until they can no longer sustain the crumbling fortress of their beliefs against the overwhelming onslaught of argument.

I think the reason that the backfire effect feels true to us is that our life experience is largely that “argument doesn’t work”; we think back to all the times that we have tried to convince someone to change a belief that was important to them, and we can find so few examples of when it actually worked. But this is setting the bar much too high. You shouldn’t expect to change an entire worldview in a single conversation. Even if your worldview is correct and theirs is not, that one conversation can’t have provided sufficient evidence for them to rationally conclude that. One person could always be mistaken. One piece of evidence could always be misleading. Even a direct experience could be a delusion or a foggy memory.

You shouldn’t be trying to turn a Young-Earth Creationist into an evolutionary biologist, or a climate change denier into a Greenpeace member. You should be trying to make that Creationist question whether the Ussher chronology is really so reliable, or if perhaps the Earth might be a bit older than a 17th century theologian interpreted it to be. You should be getting the climate change denier to question whether scientists really have such a greater vested interest in this than oil company lobbyists. You can’t expect to make them tear down the entire wall—just get them to take out one brick today, and then another brick tomorrow, and perhaps another the day after that.

The proverb is of uncertain provenance, variously attributed, rarely verified, but it is still my favorite: No single raindrop feels responsible for the flood.

Do not seek to be a flood. Seek only to be a raindrop—for if we all do, the flood will happen sure enough. (There’s a version more specific to our times: So maybe we’re snowflakes. I believe there is a word for a lot of snowflakes together: Avalanche.)

And remember this also: When you argue in public (which includes social media), you aren’t just arguing for the person you’re directly engaged with; you are also arguing for everyone who is there to listen. Even if you can’t get the person you’re arguing with to concede even a single point, maybe there is someone else reading your post who now thinks a little differently because of something you said. In fact, maybe there are many people who think a little differently—the marginal impact of slacktivism can actually be staggeringly large if the audience is big enough.

This can be frustrating, thankless work, for few people will ever thank you for changing their mind, and many will condemn you even for trying. Finding out you were wrong about a deeply-held belief can be painful and humiliating, and most people will attribute that pain and humiliation to the person who called them out for being wrong—rather than placing the blame where it belongs, which is on whatever source or method made them wrong in the first place. Being wrong feels just like being right.

But this is important work, among the most important work that anyone can do. Philosophy, mathematics, science, technology—all of these things depend upon it. Changing people’s minds by evidence and rational argument is literally the foundation of civilization itself. Every real, enduring increment of progress humanity has ever made depends upon this basic process. Perhaps occasionally we have gotten lucky and made the right choice for the wrong reasons; but without the guiding light of reason, there is nothing to stop us from switching back and making the wrong choice again soon enough.

So I guess what I’m saying is: Don’t give up. Keep arguing. Keep presenting evidence. Don’t be afraid that your arguments will backfire—because in fact they probably won’t.

Privatized prisons were always an atrocity

Aug 4 JDN 2458700

Let’s be clear: The camps that Trump built on the border absolutely are concentration camps. They aren’t extermination camps—yet?—but they are in fact “a place where large numbers of people (such as prisoners of war, political prisoners, refugees, or the members of an ethnic or religious minority) are detained or confined under armed guard.” Above all, it is indeed the case that “Persons are placed in such camps often on the basis of identification with a particular ethnic or political group rather than as individuals and without benefit either of indictment or fair trial.”

And I hope it goes without saying that this is an unconscionable atrocity that will remain a stain upon America for generations to come. Trump was clear from the beginning that this was his intention, and thus this blood is on the hands of anyone who voted for him. (The good news is that even they are now having second thoughts: Even a majority of Fox News viewers agrees that Trump has gone too far.)

Yet these camps are only a symptom of a much older disease: We should have seen this sort of cruelty and inhumanity coming when first we privatized prisons.

Krugman makes the point using economics: Without market competition or public view, how can the private sector be kept from abuse, corruption, and exploitation? And this is absolutely true—but it is not the strongest reason.

No, the reason privatized prisons are unjust is much more fundamental than that: Prisons are a direct incursion against liberty. The only institution that should ever have that authority is a democratically-elected government restrained by a constitution.

I don’t care if private prisons were cleaner and nicer and safer and more effective at rehabilitation (as you’ll see from those links, exactly the opposite is true across the board). No private institution has the right to imprison people. No one should be making profits from locking people up.

This is the argument we should have been making for the last 40 years. You can’t privatize prisons, because no one has a right to profit from locking people up. You can’t privatize the military, because no one has a right to profit from killing people. These are basic government functions precisely because they are direct incursions against fundamental rights; though such incursions are sometimes necessary, we allow only governments to make them, because democracy is the only means we have found to keep them from being used indiscriminately. (And even then, there are always abuses and we must remain eternally vigilant.)

Yes, obviously we must shut down these concentration camps as soon as possible. But we can’t stop there. This is a symptom of a much deeper disease: Our liberty is being sold for profit.