To a first approximation, all human behavior is social norms

Dec 15 JDN 2458833

The language we speak, the food we eat, and the clothes we wear—indeed, the fact that we wear clothes at all—are all the direct result of social norms. But norms run much deeper than this: Almost everything we do is more norm than not.

Why do you sleep and wake up at a particular time of day? For most people, the answer is that they need to get up to go to work. Why do you need to go to work at that specific time? Why does almost everyone go to work at the same time? Social norms.

Even the most extreme human behaviors are often most comprehensible in terms of social norms. The most effective predictive models of terrorism are based on social networks: You are much more likely to be a terrorist if you know people who are terrorists, and much more likely to become a terrorist if you spend a lot of time talking with terrorists. Cultists and conspiracy theorists seem utterly baffling if you imagine that humans form their beliefs rationally—and totally unsurprising if you realize that humans mainly form their beliefs by matching those around them.

For a long time, economists have ignored social norms at our peril; we’ve assumed that financial incentives will be sufficient to motivate behavior, when social incentives can very easily override them. Indeed, it is entirely possible for a financial incentive to have a negative effect, when it crowds out a social incentive: A good example is a friend who would gladly come over to help you with something as a friend, but then becomes reluctant if you offer to pay him $25. I previously discussed another example, where taking a mentor out to dinner sounds good but paying him seems corrupt.

Why do you drive on the right side of the road (or the left, if you’re in Britain)? The law? Well, the law is already a social norm. But in fact, it’s hardly just that. You probably sometimes speed or run red lights, which are also in violation of traffic laws. Yet somehow driving on the right side seems to be different. Well, that’s because driving on the right has a much stronger norm—and in this case, that norm is self-enforcing with the risk of severe bodily harm or death.

This is a good example of why it isn’t necessary for everyone to choose to follow a norm for that norm to have a great deal of power. As long as the norms include some mechanism for rewarding those who follow and punishing those who don’t, norms can become compelling even to those who would prefer not to obey. Sometimes it’s not even clear whether people are following a norm or following direct incentives, because the two are so closely aligned.

Humans are not the only social species, but we are by far the most social species. We form larger, more complex groups than any other animal; we form far more complex systems of social norms; and we follow those norms with slavish obedience. Indeed, I’m a little suspicious of some of the evolutionary models predicting the evolution of social norms, because they predict it too well; they seem to suggest that it should arise all the time, when in fact it’s only a handful of species who exhibit it at all and only we who build our whole existence around it.

Along with our extreme capacity for altruism, this is another way that human beings actually deviate more from the infinite identical psychopaths of neoclassical economics than most other animals. Yes, we’re smarter than other animals; other animals are more likely to make mistakes (though certainly we make plenty of our own). But most other animals aren’t motivated by entirely different goals than individual self-interest (or “evolutionary self-interest” in a Selfish Gene sort of sense) the way we typically are. Other animals try to be selfish and often fail; we try not to be selfish and usually succeed.

Economics experiments often go out of their way to exclude social motives as much as possible—anonymous random matching with no communication, for instance—and still fail to elicit purely selfish behavior. Human behavior in experiments is consistent, systematic—and almost never completely selfish.

Once you start looking for norms, you see them everywhere. Indeed, it becomes hard to see anything else. To a first approximation, all human behavior is social norms.

Good for the economy isn’t the same as good

Dec 8 JDN 2458826

Many of the common critiques of economics are actually somewhat misguided, or at least outdated: While there are still some neoclassical economists who think that markets are perfect and humans are completely rational, most economists these days would admit that there are at least some exceptions to this. But there’s at least one common critique that I think still has a good deal of merit: “Good for the economy” isn’t the same thing as good.

I’ve read literally dozens, if not hundreds, of articles on economics, in both popular press and peer-reviewed journals, that all defend their conclusions in the following way: “Intervention X would statistically be expected to increase GDP/raise total surplus/reduce unemployment. Therefore, policymakers should implement intervention X.” The fact that a policy would be “good for the economy” (in a very narrow sense) is taken as a completely compelling reason that this policy must be overall good.

The clearest examples of this always turn up during a recession, when inevitably people will start saying that cutting unemployment benefits will reduce unemployment. Sometimes it’s just right-wing pundits, but often it’s actually quite serious economists.

The usual left-wing response is to deny the claim: to explain all the structural causes of unemployment in a recession and point out that unemployment benefits are not what caused the surge in unemployment. This is true; it is also utterly irrelevant. It can be simultaneously true that the unemployment was caused by bad monetary policy or a financial shock, and also true that cutting unemployment benefits would in fact reduce unemployment.

Indeed, I’m fairly certain that both of those propositions are true, to greater or lesser extent. Most people who are unemployed will remain unemployed regardless of how high or low unemployment benefits are; and likewise most people who are employed will remain so. But at the margin, I’m sure there’s someone who is on the fence about searching for a job, or who is trying to find a job but could try a little harder with some extra pressure, or who has a few lousy job offers they’re not taking because they hope to find a better offer later. That is, I have little doubt that the claim “Cutting unemployment benefits would reduce unemployment” is true.

The problem is that this is in no way a sufficient argument for cutting unemployment benefits. For while it might reduce unemployment per se, more importantly it would actually increase the harm of unemployment. Indeed, those two effects are in direct proportion: Cutting unemployment benefits only reduces unemployment insofar as it makes being unemployed a more painful and miserable experience for the unemployed.

Indeed, the very same (oversimplified) economic models that predict that cutting benefits would reduce unemployment use that precise mechanism, and thereby predict, necessarily, that cutting unemployment benefits will harm those who are unemployed. It has to. In some sense, it’s supposed to; otherwise it wouldn’t have any effect at all.
That is, if your goal is actually to help the people harmed by a recession, cutting unemployment benefits is absolutely not going to accomplish that. But if your goal is actually to reduce unemployment at any cost, I suppose it would in fact do that. (Also highly effective against unemployment: Mass military conscription. If everyone’s drafted, no one is unemployed!)

Similarly, I’ve read more than a few policy briefs written to the governments of poor countries telling them how some radical intervention into their society would (probably) increase their GDP, and then either subtly implying or outright stating that this means they are obliged to enact this intervention immediately.

Don’t get me wrong: Poor countries need to increase their GDP. Indeed, it’s probably the single most important thing they need to do. Providing better security, education, healthcare, and sanitation are all things that will increase GDP—but they’re also things that will be easier if you have more GDP.

(Rich countries, on the other hand? Maybe we don’t actually need to increase GDP. We may actually be better off focusing on things like reducing inequality and improving environmental sustainability, while keeping our level of GDP roughly the same—or maybe even reducing it somewhat. Stay inside the wedge.)

But the mere fact that a policy will increase GDP is not a sufficient reason to implement that policy. You also need to consider all sorts of other effects the policy will have: Poverty, inequality, social unrest, labor standards, pollution, and so on.

To be fair, sometimes these articles only say that the policy will increase GDP, and don’t actually assert that this is a sufficient reason to implement it, theoretically leaving open the possibility that other considerations will be overriding.

But that’s really not all that comforting. If the only thing you say about a policy is a major upside, like it or not, you are implicitly endorsing that policy. Framing is vital. Everything you say could be completely, objectively, factually true; but if you only tell one side of the story, you are presenting a biased view. There’s a reason the oath is “The truth, the whole truth, and nothing but the truth.” A partial view of the facts can be as bad as an outright lie.

Of course, it’s unreasonable to expect you to present every possible consideration that could become relevant. Rather, I expect you to do two things: First, if you include some positive aspects, also include some negative ones, and vice-versa; never let your argument sound completely one-sided. Second, clearly and explicitly acknowledge that there are other considerations you haven’t mentioned.

Moreover, if you are talking about something like increasing GDP or decreasing unemployment—something that has been, many times, by many sources, treated as though it were a completely compelling reason unto itself—you must be especially careful. In such a context, an article that would be otherwise quite balanced can still come off as an unqualified endorsement.

Creativity and mental illness

Dec 1 JDN 2458819

There is some truth to the stereotype that artistic people are crazy. Mental illnesses, particularly bipolar disorder, are overrepresented among artists, writers, and musicians. Creative people score highly on literally all five of the Big Five personality traits: They are higher in Openness, higher in Conscientiousness, higher in Extraversion (that one actually surprised me), higher in Agreeableness, and higher in Neuroticism. Creative people just have more personality, it seems.

But in fact mental illness is not as overrepresented among creative people as most people think, and the highest probability of being a successful artist occurs when you have close relatives with mental illness, but are not yourself mentally ill. Those with mental illness actually tend to be most creative when their symptoms are in remission. This suggests that the apparent link between creativity and mental illness may actually increase over time, as treatments improve and remission becomes easier.

One possible source of the link is that artistic expression may be a form of self-medication: Art therapy does seem to have some promise in treating a variety of mental disorders (though it is not nearly as effective as conventional therapy and medication). But that wouldn’t explain why family history of mental illness is actually a better predictor of creativity than mental illness itself.

My guess is that in order to be creative, you need to think differently than other people. You need to see the world in a way that others do not see it. Mental illness is surely not the only way to do that, but it’s definitely one way.

But creativity also requires basic functioning: If you are totally crippled by a mental illness, you’re not going to be very creative. So the people who are most creative have just enough craziness to think differently, but not so much that it takes over their lives.

This might even help explain how mental illness persisted in our population, despite its obvious survival disadvantages. It could be some form of heterozygote advantage.

The classic example of heterozygote advantage is sickle-cell anemia: If you have no copies of the sickle-cell gene, you’re normal. If you have two copies, you have sickle-cell anemia, which is very bad. But if you have only one copy, you’re healthy—and you’re resistant to malaria. Thus, high risk of malaria—as our ancestors living in central Africa certainly faced—creates a selection pressure that keeps sickle-cell genes in the population, even though having two copies is much worse than having none at all.

Mental illness might function something like this. I suspect it’s far more complicated than sickle-cell anemia, which is literally just two alleles of a single gene; but the overall process may be similar. If having just a little bit of bipolar disorder or schizophrenia makes you see the world differently than other people and makes you more creative, there are lots of reasons why that might improve the survival of your genes: There are the obvious problem-solving benefits, but also the simple fact that artists are sexy.

The downside of such “weird-thinking” genes is that they can go too far and make you mentally ill, perhaps if you have too many copies of them, or if you face an environmental trigger that sets them off. Sometimes the reason you see the world differently than everyone else is that you’re just seeing it wrong. But if the benefits of creativity are high enough—and they surely are—this could offset the risks, in an evolutionary sense.

But one thing is quite clear: If you are mentally ill, don’t avoid treatment for fear it will damage your creativity. Quite the opposite: A mental illness that is well treated and in remission is the optimal state for creativity. Go seek treatment, so that your creativity may blossom.

What we can be thankful for

Nov 24 JDN 2458812

Thanksgiving is upon us, yet as more and more evidence is revealed implicating President Trump in grievous crimes, as US carbon emissions that had been declining are now trending upward again, as our air quality deteriorates for the first time in decades, it may be hard to see what we should be thankful for.

But these are exceptions to a broader trend: The world is getting better, in almost every way, remarkably quickly. Homicide rates in the US are lower than they’ve been since the 1960s. Worldwide, the homicide rate has fallen 20% since 1990.

While world carbon emissions are still increasing, on a per capita basis they are actually starting to decline, and on an efficiency basis (kilograms of carbon-equivalent per dollar of GDP) they are at their lowest ever. This trend is likely to continue: The price of solar power has rapidly declined to the point where it is now the cheapest form of electric power.
The number—not just proportion, absolute number—of people in extreme poverty has declined by almost two-thirds within my own lifetime. The proportion is the lowest it has ever been in human history. World life expectancy is at its highest ever. Death rates from infectious disease fell by over 85% over the 20th century, and are now at their lowest ever.

I wouldn’t usually cite Reason as a source, but they’re right on this one: Defeat appears imminent for all four Horsemen of the Apocalypse. Pestilence, Famine, War, and even Death are all on the decline. We have a great deal to be grateful for: We are living in a golden age.

This is not to say that we should let ourselves become complacent and stop trying to make the world better: On the contrary, it proves that the world can be made better, which gives us every reason to redouble our efforts to do so.

Is Singularitarianism a religion?


Nov 17 JDN 2458805

I said in last week’s post that Pascal’s Mugging provides some deep insights into both Singularitarianism and religion. In particular, it explains why Singularitarianism seems so much like a religion.

This has been previously remarked, of course. I think Eric Steinhart makes the best case for Singularitarianism as a religion:

I think singularitarianism is a new religious movement. I might add that I think Clifford Geertz had a pretty nice (though very abstract) definition of religion. And I think singularitarianism fits Geertz’s definition (but that’s for another time).

My main interest is this: if singularitarianism is a new religious movement, then what should we make of it? Will it mainly be a good thing? A kind of enlightenment religion? It might be an excellent alternative to old-fashioned Abrahamic religion. Or would it degenerate into the well-known tragic pattern of coercive authority? Time will tell; but I think it’s worth thinking about this in much more detail.

To be clear: Singularitarianism is probably not a religion. It is certainly not a cult, as has been even less charitably alleged; the behaviors it prescribes are largely normative, pro-social behaviors, and therefore it would at worst be a mainstream religion. Really, if every religion only inspired people to do things like donate to famine relief and work on AI research (as opposed to, say, beheading gay people), I wouldn’t have much of a problem with religion.

In fact, Singularitarianism has one vital advantage over religion: Evidence. While the evidence in favor of it is not overwhelming, there is enough evidential support to lend plausibility to at least a broad concept of Singularitarianism: Technology will continue rapidly advancing, achieving accomplishments currently only in our wildest imaginings; artificial intelligence surpassing human intelligence will arise, sooner than many people think; human beings will change ourselves into something new and broadly superior; these posthumans will go on to colonize the galaxy and build a grander civilization than we can imagine. I don’t know that these things are true, but I hope they are, and I think it’s at least reasonably likely. All I’m really doing is extrapolating based on what human civilization has done so far and what we are currently trying to do now. Of course, we could well blow ourselves up before then, or regress to a lower level of technology, or be wiped out by some external force. But there’s at least a decent chance that we will continue to thrive for another million years to come.

But yes, Singularitarianism does in many ways resemble a religion: It offers a rich, emotionally fulfilling ontology combined with ethical prescriptions that require particular behaviors. It promises us a chance at immortality. It inspires us to work toward something much larger than ourselves. More importantly, it makes us special—we are among the unique few (millions?) who have the power to influence the direction of human and posthuman civilization for a million years. The stronger forms of Singularitarianism even have a flavor of apocalypse: When the AI comes, sooner than you think, it will immediately reshape everything at effectively infinite speed, so that from one year—or even one moment—to the next, our whole civilization will be changed. (These forms of Singularitarianism are substantially less plausible than the broader concept I outlined above.)

It’s this sense of specialness that Pascal’s Mugging provides some insight into. When it is suggested that we are so special, we should be inherently skeptical, not least because it feels good to hear that. (As Less Wrong would put it, we need to avoid a Happy Death Spiral.) Human beings like to feel special; we want to feel special. Our brains are configured to seek out evidence that we are special and reject evidence that we are not. This is true even to the point of absurdity: One cannot be mathematically coherent without admitting that the compliment “You’re one in a million” is equivalent to the statement “There are seven thousand people as good or better than you” (given a world population around seven billion)—and yet, the latter seems much worse, because it does not make us sound special.

Indeed, the connection between Pascal’s Mugging and Pascal’s Wager is quite deep: Each argument takes a tiny probability and multiplies it by a huge impact in order to get a large expected utility. This often seems to be the way that religions defend themselves: Well, yes, the probability is small; but can you take the chance? Can you afford to take that bet if it’s really your immortal soul on the line?

And Singularitarianism has a similar case to make, even aside from the paradox of Pascal’s Mugging itself. The chief argument for why we should be focusing all of our time and energy on existential risk is that the potential payoff is just so huge that even a tiny probability of making a difference is enough to make it the only thing that matters. We should be especially suspicious of that; anything that says it is the only thing that matters is to be doubted with utmost care. The really dangerous religion has always been the fanatical kind that says it is the only thing that matters. That’s the kind of religion that makes you crash airliners into buildings.

I think some people may well have become Singularitarians because it made them feel special. It is exhilarating to be one of these lone few—and in the scheme of things, even a few million is a small fraction of all past and future humanity—with the power to effect some shift, however small, in the probability of a far grander, far brighter future.

Yet, in fact this is very likely the circumstance in which we are. We could have been born in the Neolithic, struggling to survive, utterly unaware of what would come a few millennia hence; we could have been born in the posthuman era, one of a trillion other artist/gamer/philosophers living in a world where all the hard work that needed to be done is already done. In the long S-curve of human development, we could have been born in the flat part on the left or the flat part on the right—and by all probability, we should have been; most people were. But instead we happened to be born in that tiny middle slice, where the curve slopes upward at its fastest. I suppose somebody had to be, and it might as well be us.

[Figure: labeled sigmoid curve]

A priori, we should doubt that we were born so special. And when forming our beliefs, we should compensate for the fact that we want to believe we are special. But we do in fact have evidence, lots of evidence. We live in a time of astonishing scientific and technological progress.

My lifetime has included the progression from Deep Thought first beating David Levy to the creation of the Michigan Micro Mote (M3), a computer one millimeter across that runs on a few nanowatts and nevertheless has ten times as much computing power as the 80-pound computer that ran the Saturn V. (The human brain runs on about 100 watts, and has a processing power of about 1 petaflop, so we can say that our energy efficiency is about 10 TFLOPS/W. The M3 runs on about 10 nanowatts and has a processing power of about 0.1 megaflops, so its energy efficiency is also about 10 TFLOPS/W. We did it! We finally made a computer as energy-efficient as the human brain! But we have still not matched the brain in terms of space-efficiency: The volume of the human brain is about 1000 cm^3, so our space efficiency is about 1 TFLOPS/cm^3. The volume of the M3 is about 1 mm^3, so its space efficiency is only about 100 MFLOPS/cm^3. The brain still wins by a factor of 10,000.)
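These back-of-the-envelope figures can be checked with a few lines of code; all the numbers below are the rough order-of-magnitude estimates from the paragraph above, not measured benchmarks:

```python
# Rough energy- and space-efficiency comparison: human brain vs. the
# millimeter-scale M3. All figures are order-of-magnitude estimates.

brain_flops = 1e15        # ~1 petaflop
brain_watts = 100.0       # ~100 W
brain_volume_cm3 = 1000.0 # ~1000 cm^3

m3_flops = 1e5            # ~0.1 megaflops
m3_watts = 10e-9          # ~10 nanowatts
m3_volume_cm3 = 1e-3      # ~1 mm^3

brain_energy_eff = brain_flops / brain_watts      # FLOPS per watt
m3_energy_eff = m3_flops / m3_watts

brain_space_eff = brain_flops / brain_volume_cm3  # FLOPS per cm^3
m3_space_eff = m3_flops / m3_volume_cm3

print(f"Energy: brain {brain_energy_eff:.0e} FLOPS/W, M3 {m3_energy_eff:.0e} FLOPS/W")
print(f"Space: brain leads by a factor of {brain_space_eff / m3_space_eff:.0f}")
```

Both energy efficiencies come out to 10^13 FLOPS/W (10 TFLOPS/W), and the brain’s space advantage is the factor of 10,000 claimed above.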

My mother saw us go from the first jet airliners to landing on the Moon to the International Space Station and robots on Mars. She grew up before the polio vaccine and is still alive to see the first 3D-printed human heart. When I was a child, smartphones didn’t even exist; now more people have smartphones than have toilets. I may yet live to see the first human beings set foot on Mars. The pace of change is utterly staggering.

Without a doubt, this is sufficient evidence to believe that we, as a civilization, are living in a very special time. The real question is: Are we, as individuals, special enough to make a difference? And if we are, what weight of responsibility does this put upon us?

If you are reading this, odds are the answer to the first question is yes: You are definitely literate, and most likely educated, probably middle- or upper-middle-class in a First World country. Countries are something I can track, and I do get some readers from non-First-World countries; and of course I don’t observe your education or socioeconomic status. But at an educated guess, this is surely my primary reading demographic. Even if you don’t have the faintest idea what I’m talking about when I use Bayesian logic or calculus, you’re already quite exceptional. (And if you do? All the more so.)

That means the second question must apply: What do we owe these future generations who may come to exist if we play our cards right? What can we, as individuals, hope to do to bring about this brighter future?

The Singularitarian community will generally tell you that the best thing to do with your time is to work on AI research, or, failing that, the best thing to do with your money is to give it to people working on artificial intelligence research. I’m not going to tell you not to work on AI research or donate to AI research, as I do think it is among the most important things humanity needs to be doing right now, but I’m also not going to tell you that it is the one single thing you must be doing.

You should almost certainly be donating somewhere, but I’m not so sure it should be to AI research. Maybe it should be famine relief, or malaria prevention, or medical research, or human rights, or environmental sustainability. If you’re in the United States (as I know most of you are), the best thing to do with your money may well be to support political campaigns, because US political, economic, and military hegemony means that as goes America, so goes the world. Stop and think for a moment how different the prospects of global warming might have been—how many millions of lives might have been saved!—if Al Gore had become President in 2001. For lack of a few million dollars in Tampa twenty years ago, Miami may be gone in fifty. If you’re not sure which cause is most important, just pick one; or better yet, donate to a diversified portfolio of charities and political campaigns. Diversified investment isn’t just about monetary return.

And you should think carefully about what you’re doing with the rest of your life. This can be hard to do; we can easily get so caught up in just getting through the day, getting through the week, just getting by, that we lose sight of having a broader mission in life. Of course, I don’t know what your situation is; it’s possible things really are so desperate for you that you have no choice but to keep your head down and muddle through. But you should also consider the possibility that this is not the case: You may not be as desperate as you feel. You may have more options than you know. Most “starving artists” don’t actually starve. More people regret staying in their dead-end jobs than regret quitting to follow their dreams. I guess if you stay in a high-paying job in order to earn to give, that might really be ethically optimal; but I doubt it will make you happy. And in fact some of the most important fields are constrained by a lack of good people doing good work, and not by a simple lack of funding.

I see this especially in economics: As a field, economics is really not focused on the right kind of questions. There’s far too much prestige for incrementally adjusting some overcomplicated unfalsifiable mess of macroeconomic algebra, and not nearly enough for trying to figure out how to mitigate global warming, how to turn back the tide of rising wealth inequality, or what happens to human society once robots take all the middle-class jobs. Good work is being done in devising measures to fight poverty directly, but not in devising means to undermine the authoritarian regimes that are responsible for maintaining poverty. Formal mathematical sophistication is prized, and deep thought about hard questions is eschewed. We are carefully arranging the pebbles on our sandcastle in front of the oncoming tidal wave. I won’t tell you that it’s easy to change this—it certainly hasn’t been easy for me—but I have to imagine it’d be easier with more of us trying rather than with fewer. Nobody needs to donate money to economics departments, but we definitely do need better economists running those departments.

You should ask yourself what it is that you are really good at, what you—you yourself, not anyone else—might do to make a mark on the world. This is not an easy question: I have not quite answered for myself whether I would make more difference as an academic researcher, a policy analyst, a nonfiction author, or even a science fiction author. (If you scoff at the latter: Who would have any concept of AI, space colonization, or transhumanism, if not for science fiction authors? The people who most tilted the dial of human civilization toward this brighter future may well be Clarke, Roddenberry, and Asimov.) It is not impossible to be some combination or even all of these, but the more I try to take on the more difficult my life becomes.

Your own path will look different than mine, different, indeed, than anyone else’s. But you must choose it wisely. For we are very special individuals, living in a very special time.

Pascal’s Mugging

Nov 10 JDN 2458798

In the Singularitarian community there is a paradox known as “Pascal’s Mugging”. The name is an intentional reference to Pascal’s Wager (and the link is quite apt, for reasons I’ll discuss in a later post).

There are a few different versions of the argument; Yudkowsky’s original argument in which he came up with the name “Pascal’s Mugging” relies upon the concept of the universe as a simulation and an understanding of esoteric mathematical notation. So here is a more intuitive version:

A strange man in a dark hood comes up to you on the street. “Give me five dollars,” he says, “or I will destroy an entire planet filled with ten billion innocent people. I cannot prove to you that I have this power, but how much is an innocent life worth to you? Even if it is as little as $5,000, are you really willing to bet on ten trillion to one odds that I am lying?”

Do you give him the five dollars? I suspect that you do not. Indeed, I suspect that you’d be less likely to give him the five dollars than if he had merely said he was homeless and asked for five dollars to help pay for food. (Also, you may have objected that you value innocent lives, even faraway strangers you’ll never meet, at more than $5,000 each—but if that’s the case, you should probably be donating more, because the world’s best charities can save a life for about $3,000.)

But therein lies the paradox: Are you really willing to bet on ten trillion to one odds?

This argument gives me much the same feeling as the Ontological Argument; as Russell said of the latter, “it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.” It wasn’t until I read this post on GiveWell that I could really formulate the answer clearly enough to explain it.

The apparent force of Pascal’s Mugging comes from the idea of expected utility: Even if the probability of an event is very small, if it has a sufficiently great impact, the expected utility can still be large.

The problem with this argument is that extraordinary claims require extraordinary evidence. If a man held a gun to your head and said he’d shoot you if you didn’t give him five dollars, you’d give him five dollars. This is a plausible claim and he has provided ample evidence. If he were instead wearing a bomb vest (or even just really puffy clothing that could conceal a bomb vest), and he threatened to blow up a building unless you gave him five dollars, you’d probably do the same. This is less plausible (what kind of terrorist only demands five dollars?), but it’s not worth taking the chance.

But when he claims to have a Death Star parked in orbit of some distant planet, primed to make another Alderaan, you are right to be extremely skeptical. And if he claims to be a being from beyond our universe, primed to destroy so many lives that we couldn’t even write the number down with all the atoms in our universe (which was actually Yudkowsky’s original argument), to say that you are extremely skeptical seems a grievous understatement.

That GiveWell post provides a way to make this intuition mathematically precise in terms of Bayesian logic. If you have a normal prior with mean 0 and standard deviation 1, and you are presented with a likelihood with mean X and standard deviation X, what should your posterior distribution be?

Normal priors are quite convenient; they conjugate nicely. The precision (inverse variance) of the posterior distribution is the sum of the two precisions, and the mean is a weighted average of the two means, weighted by their precision.

So the posterior variance is 1/(1 + 1/X^2).

The posterior mean is 1/(1+1/X^2)*(0) + (1/X^2)/(1+1/X^2)*(X) = X/(X^2+1).

That is, the mean of the posterior distribution is just barely higher than zero—and in fact, it is decreasing in X, if X > 1.

For those who don’t speak Bayesian: If someone says he’s going to have an effect of magnitude X, you should be less likely to believe him the larger that X is. And indeed this is precisely what our intuition said before: If he says he’s going to kill one person, believe him. If he says he’s going to destroy a planet, don’t believe him, unless he provides some really extraordinary evidence.
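For the numerically inclined, here is that update as a quick sketch in plain Python (nothing here beyond the formulas above):

```python
def posterior_mean(X):
    """Posterior mean for a N(0, 1) prior and a N(X, X^2) likelihood.
    Precisions (inverse variances) add; means are precision-weighted."""
    prior_prec = 1.0           # precision of the N(0, 1) prior
    lik_prec = 1.0 / X ** 2    # precision of the N(X, X^2) likelihood
    return (prior_prec * 0.0 + lik_prec * X) / (prior_prec + lik_prec)

# The bigger the claimed effect, the less of it you end up believing:
print([round(posterior_mean(X), 3) for X in (1, 10, 1000)])
# [0.5, 0.099, 0.001]
```

Past X = 1, every increase in the size of the claim shrinks the posterior mean toward zero.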

What sort of extraordinary evidence? To his credit, Yudkowsky imagined the sort of evidence that might actually be convincing:

If a poorly-dressed street person offers to save 10^(10^100) lives (a googolplex of lives) for $5 using their Matrix Lord powers, and you claim to assign this scenario less than 10^-(10^100) probability, then apparently you should continue to believe absolutely that their offer is bogus even after they snap their fingers and cause a giant silhouette of themselves to appear in the sky.

This post he called “Pascal’s Muggle”, after the term from the Harry Potter series, since some of the solutions that had been proposed for dealing with Pascal’s Mugging had resulted in a situation almost as absurd, in which the mugger could exhibit powers beyond our imagining and yet nevertheless we’d never have sufficient evidence to believe him.

So, let me go on record as saying this: Yes, if someone snaps his fingers and causes the sky to rip open and reveal a silhouette of himself, I’ll do whatever that person says. The odds are still higher that I’m dreaming or hallucinating than that this is really a being from beyond our universe, but if I’m dreaming, it makes no difference, and if someone can make me hallucinate that vividly he can probably cajole the money out of me in other ways. And there might be just enough chance that this could be real that I’m willing to give up that five bucks.

These seem like really strange thought experiments, because they are. But like many good thought experiments, they can provide us with some important insights. In this case, I think they are telling us something about the way human reasoning can fail when faced with impacts beyond our normal experience: We are in danger of both over-estimating and under-estimating their effects, because our brains aren’t equipped to deal with magnitudes and probabilities on that scale. This has made me realize something rather important about both Singularitarianism and religion, but I’ll save that for next week’s post.

What if the charitable deduction were larger?

Nov 3 JDN 2458791

Right now, the charitable tax deduction is really not all that significant. It makes donating to charity cheaper, but you still always end up with less money after donating than you had before. It might cause you to donate more than you otherwise would have, but you’ll still only give to a charity you already care about.

This is because the tax deduction applies to your income, rather than your taxes directly. So if you make $100,000 and donate $10,000, you pay taxes as if your income were $90,000. Say your tax rate is 25%; then you go from paying $25,000 and keeping $75,000 to paying $22,500 and keeping $67,500. The more you donate, the less money you will have to keep.
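In code, the deduction works like this (a flat 25% rate, as in the example; real brackets are progressive, but the logic is the same):

```python
def keep_after_donating(income, donation, rate=0.25):
    """After-tax, after-donation income when donations are deductible."""
    tax = rate * (income - donation)  # the deduction shrinks taxable income
    return income - donation - tax

print(keep_after_donating(100_000, 0))       # 75000.0
print(keep_after_donating(100_000, 10_000))  # 67500.0 -- donating leaves you with less
```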

Many people don’t seem to understand this; they seem to think that rich people can actually get richer by donating to charity. That can’t be done in our current tax system, or at least not legally. (There are fraudulent ways to do so; but there are fraudulent ways to do lots of things.) Part of the confusion may be related to the fact that people don’t seem to understand how tax brackets work; they worry about being “pushed into a higher tax bracket” as though this could somehow reduce their after-tax income, but that doesn’t happen. That isn’t how tax brackets work.

Some welfare programs work that way—for instance, seeing your income rise high enough to lose Medicaid eligibility can be bad enough that you would prefer to have less income—but taxes themselves do not.

The graph below shows the actual average tax rate (red) and marginal tax rate (purple) of the current US federal income tax:

[Graph: average federal income tax rate (red) and marginal tax rate (purple) by income]
From that graph alone, you might think that going to a higher tax bracket could result in lower after-tax income. But the next graph, of before-tax (blue) and after-tax (green) income shows otherwise:

[Graph: before-tax income (blue) and after-tax income (green)]

All that tax deductions can do is reduce your taxable income. Thus the tax deduction benefits you if you were already donating, but never leaves you richer than you would have been without donating at all.

For example, if you have an income of $700,000, you would pay $223,000 in taxes and keep $477,000 in after-tax income. If you instead donate $100,000, your adjusted gross income will be reduced to $600,000, you will only pay $186,000 in taxes, and you will keep $414,000 in after-tax income. If there were no tax deduction, you would still have to pay $223,000 in taxes, and your after-tax income would be only $377,000. So you do benefit from the tax deduction; but there is no amount of donation which will actually increase your after-tax income to above $477,000.

But we wouldn’t have to do it this way. We could instead apply the deduction as a tax credit, which would make the effect of the deduction far larger.
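Here is the difference in miniature (same flat 25% rate; the credit version is a simplification of the idea, not Kimball’s exact proposal):

```python
RATE = 0.25  # illustrative flat tax rate

def keep_with_deduction(income, donation):
    # Current system: the donation reduces taxable income
    return income - donation - RATE * (income - donation)

def keep_with_credit(income, donation):
    # Tax credit: the donation reduces the tax bill itself, dollar for dollar
    return income - donation - max(RATE * income - donation, 0.0)

print(keep_with_deduction(100_000, 10_000))  # 67500.0
print(keep_with_credit(100_000, 10_000))     # 75000.0 -- same as not donating at all
```

Under a 100% credit, donating up to the size of your tax bill costs you nothing; under a deduction, every donated dollar still costs you most of a dollar.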

Several years back, Miles Kimball (an economist who formerly worked at Michigan, now at CU Boulder) proposed a quite clever change to the tax system:

My proposal is to raise marginal tax rates above about $75,000 per person–or $150,000 per couple–by 10% (a dime on every extra dollar), but offer a 100% tax credit for public contributions up to the entire amount of the tax surcharge.

Kimball’s argument for the policy is mainly that this would make a tax increase more palatable, by giving people more control over where their money goes. This is surely true, and a worthwhile endeavor.

But the even larger benefit might come from the increased charitable donations. If we limited the tax credit to particularly high-impact charities, we would increase the donations to those charities. In the current system, by contrast, you get the same deduction regardless of where you give your money, even though we know that some charities are literally hundreds of times as cost-effective as others.

In fact, we might not even want to limit the tax credit to that 10% surcharge. If people want to donate more than 10% of their income to high-impact charities, perhaps we should let them. This would mean that the federal deficit could actually increase under this policy, but if so, there would have to be so much money donated that we’d most likely end world hunger. That’s a tradeoff I’m quite willing to make.

In principle, we could even introduce a tax credit that is greater than 100%—say, a 120% credit for the top-rated charities. This is not mathematically inconsistent, though it is surely a very bad idea. In that case, it absolutely would be possible to end up with more money than you started with, and the richer you are, the more you could get. There would effectively be a positive return on charitable donations, with the money paid for from the government budget. Bill Gates, for instance, could pay $10 billion a year to charity and the government would not only pay for it, but also have to give him an extra $2 billion. So even for the best charities—which probably are actually a good deal more cost-effective than the US government—we should cap the tax credit at 100%.
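The arithmetic behind the Gates example is just the credit rate minus one, times the donation (integer dollars and whole percentage points, to keep the arithmetic exact):

```python
def net_change(donation, credit_pct):
    """Change in after-tax income from a donation that earns a tax credit of
    credit_pct percent (assuming the tax bill is large enough to absorb it)."""
    return donation * credit_pct // 100 - donation

print(net_change(10_000_000_000, 120))  # 2000000000 -- a $2 billion profit
print(net_change(10_000_000_000, 100))  # 0 -- at 100%, donating is exactly free
```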

Obvious choices for high-impact charities include UNICEF, the Red Cross, GiveDirectly, and the Malaria Consortium. We would need some sort of criteria to decide which charities should get the benefits; I’m thinking we could have some sort of panel of experts who rate charities based on their cost-effectiveness.

It wouldn’t have to be all-or-nothing, either; charities with good but not top ratings could get an increased deduction but not a 100% deduction. The expert panel could rate charities on a scale from 0 to 10, and then anything above 5 gets an (X-5)*10% tax credit.
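As a sketch (the 0-to-10 scale and the (X-5)*10% formula are from the paragraph above; the whole-percentage-point representation is just to keep the arithmetic exact):

```python
def credit_pct(rating):
    """Tax-credit percentage for a charity rated 0-10: nothing at or below
    a rating of 5, then (rating - 5) * 10% above the cutoff."""
    return max(0, (rating - 5) * 10)

print([(r, credit_pct(r)) for r in (3, 5, 7, 10)])
# [(3, 0), (5, 0), (7, 20), (10, 50)]
```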

In effect, the current policy says, “If you give to charity, you don’t have to pay taxes on the money you gave; but all of your other taxes still apply.” The new policy would say, “You can give to a top-impact charity instead of paying taxes.”

Americans hate taxes and already give a lot to charity, but most of those donations are to relatively ineffective charities. This policy could incentivize people to give more or at least give to better places, probably without hurting the government budget—and if it does hurt the government budget, the benefits will be well worth the cost.

Revealed preference: Does the fact that I did it mean I preferred it?

Post 312 Oct 27 JDN 2458784

One of the most basic axioms of neoclassical economics is revealed preference: Because we cannot observe preferences directly, we infer them from actions. Whatever you chose must be what you preferred.

Stated so baldly, this is obviously not true: We often make decisions that we later come to regret. We may choose under duress, or confusion; we may lack necessary information. We change our minds.

And there really do seem to be economists who use it in this bald way: From the fact that a particular outcome occurred in a free market, they will infer that it must be optimally efficient. (“Freshwater” economists who are dubious of any intervention into markets seem to be most guilty of this.) In the most extreme form, this account would have us believe that people who trip and fall do so on purpose.

I doubt anyone believes that particular version—but there do seem to be people who believe that unemployment is the result of people voluntarily choosing not to work, and revealed preference has also led economists down some strange paths when trying to explain what sure looks like irrational behavior—such as “rational addiction” theory, positing that someone can absolutely become addicted to alcohol or heroin and end up ruining their life all based on completely rational, forward-thinking decision planning.

The theory can be adapted to deal with these issues, by specifying that it’s only choices made with full information and all of our faculties intact that count as revealing our preferences.

But when are we ever in such circumstances? When do we ever really have all the information we need in order to make a rational decision? Just what constitutes intact faculties? No one is perfectly rational—so how rational must we be in order for our decisions to count as revealing our preferences?

Revealed preference theory also quickly becomes tautologous: Why do we choose to do things? Because we prefer them. What do we prefer? What we choose to do. Without some independent account of what our preferences are, we can’t really predict behavior this way.

A standard counter-argument to this is that revealed preference theory imposes certain constraints of consistency and transitivity, so it is not utterly vacuous. The problem with this answer is that human beings don’t obey those constraints. The Allais Paradox, the Ellsberg Paradox, the sunk cost fallacy. It’s even possible to use these inconsistencies to create “money pumps” that will cause people to systematically give you money; this has been done in experiments. While real-world violations seem to be small, they’re definitely present. So insofar as your theory is testable, it’s false.
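A money pump is easy to simulate. Here is a toy version (the goods and the fee are made up, but the mechanism is the standard one): an agent with the cyclic preferences A > B > C > A happily pays a small fee for each trade around the cycle, and ends up holding exactly what it started with, minus the fees.

```python
# Cyclic (intransitive) preferences: A over B, B over C, C over A
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def trade(holding, offered, wealth, fee=1):
    """The agent pays `fee` to swap `holding` for `offered` iff it prefers it."""
    if (offered, holding) in prefers:
        return offered, wealth - fee
    return holding, wealth

holding, wealth = "A", 100
for offered in ["C", "B", "A"] * 3:  # walk the cycle three times
    holding, wealth = trade(holding, offered, wealth)

print(holding, wealth)  # A 91 -- back where it started, nine dollars poorer
```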

The good news is that we really don’t need revealed preference theory. We already have ways of telling what human beings prefer that are considerably richer than simply observing what they choose in various scenarios. One very simple but surprisingly powerful method is to ask. In general, if you ask people what they want and they have no reason to distrust you, they will in fact tell you what they want.

We also have our own introspection, as well as our knowledge about millions of years of evolutionary history that shaped our brains. We don’t expect a lot of people to prefer suffering, for instance (even masochists, who might be said to ‘prefer pain’, seem to be experiencing that pain rather differently than the rest of us would). We generally expect that people will prefer to stay alive rather than die. Some may prefer chocolate, others vanilla; but few prefer motor oil. Our preferences may vary, but they do follow consistent patterns; they are not utterly arbitrary and inscrutable.

There is a deeper problem that any account of human desires must face, however: Sometimes we are actually wrong about our own desires. Affective forecasting, the prediction of our own future mental states, is astonishingly unreliable. People often wildly overestimate the emotional effects of both positive and negative outcomes. (Interestingly, people with depression tend not to do this—those with severe depression often underestimate the emotional effects of positive outcomes, while those with mild depression seem to be some of the most accurate forecasters, an example of the depressive realism effect.)

There may be no simple solution to this problem. Human existence is complicated; we spend large portions of our lives trying to figure out what it is we really want.

This means that we should not simply trust that whatever happens is what everyone—or even necessarily anyone—wants to happen. People make mistakes, even large, systematic, repeated mistakes. Sometimes what happens is just bad, and we should be trying to change it. Indeed, sometimes people need to be protected from their own bad decisions.

Unsolved problems

Oct 20 JDN 2458777

The beauty and clearness of the dynamical theory, which asserts heat and light to be modes of motion, is at present obscured by two clouds. The first came into existence with the undulatory theory of light, and was dealt with by Fresnel and Dr. Thomas Young; it involved the question, how could the earth move through an elastic solid, such as essentially is the luminiferous ether? The second is the Maxwell-Boltzmann doctrine regarding the partition of energy.


~ Lord Kelvin, April 27, 1900

The above quote is part of a speech where Kelvin basically says that physics is a completed field, with just these two little problems to clear up, “two clouds” in a vast clear horizon. Those “two clouds” Kelvin talked about, regarding the ‘luminiferous ether’ and the ‘partition of energy’? They are, respectively, relativity and quantum mechanics. Almost 120 years later we still haven’t managed to really solve them, at least not in a way that works consistently as part of one broader theory.

But I’ll give Kelvin this: He knew where the problems were. He vastly underestimated how complex and difficult those problems would be, but he knew where they were.

I’m not sure I can say the same about economists. We don’t seem to have even reached the point where we agree where the problems are. Consider another quotation:

For a long while after the explosion of macroeconomics in the 1970s, the field looked like a battlefield. Over time however, largely because facts do not go away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism and herding. None of this is deadly however. The state of macro is good.


~ Olivier Blanchard, 2008

The timing of Blanchard’s remark is particularly ominous: It is much like the turkey who declares, the day before Thanksgiving, that his life is better than ever.

But the content is also important: Blanchard didn’t say that microeconomics is in good shape (which I think one could make a better case for). He didn’t even say that economics, in general, is in good shape. He specifically said, right before the greatest economic collapse since the Great Depression, that macroeconomics was in good shape. He didn’t merely underestimate the difficulty of the problem; he didn’t even see where the problem was.

If you search the Web, you can find a few lists of unsolved problems in economics. Wikipedia has such a list that I find particularly bad; Mike Moffatt offers a better list that still has significant blind spots.

Wikipedia’s list is full of esoteric problems that require deeply faulty assumptions to even exist, like the ‘American option problem’ which assumes that the Black-Scholes model is even remotely an accurate description of how option prices work, or the ‘tatonnement problem’ which ignores the fact that there may be many equilibria and we might never reach one at all, or the problem they list under ‘revealed preferences’ which doesn’t address any of the fundamental reasons why the entire concept of revealed preferences may fail once we apply a realistic account of cognitive science. (I could go pretty far afield with that last one—and perhaps I will in a later post—but for now, suffice it to say that human beings often freely choose to do things that we later go on to regret.) I think the only one that Wikipedia’s list really gets right is ‘Unified models of human biases’. The ‘home bias in trade’ and ‘Feldstein-Horioka Puzzle’ problems are sort of edging toward genuine problems, but they’re bound up in too many false assumptions to really get at the right question, which is actually something like “How do we deal with nationalism?” Referring to the ‘Feldstein-Horioka Puzzle’ misses the forest for the trees. Likewise, the ‘PPP Puzzle’ and the ‘Exchange rate disconnect puzzle’ (and to some extent the ‘equity premium puzzle’ as well) are really side effects of a much deeper problem, which is that financial markets in general are ludicrously volatile and inefficient and we have no idea why.

And Wikipedia’s list doesn’t have some of the largest, most important problems in economics. Moffatt’s list does better, including good choices like “What Caused the Industrial Revolution?”, “What Is the Proper Size and Scope of Government?”, and “What Truly Caused the Great Depression?”, but it also includes some of the more esoteric problems like the ‘equity premium puzzle’ and the ‘endogeneity of money’. The way he states the problem “What Causes the Variation of Income Among Ethnic Groups?” suggests that he doesn’t quite understand what’s going on there either. More importantly, Moffatt still leaves out very obviously important questions like “How do we achieve economic development in poor countries?” (Or as I sometimes put it, “What did South Korea do from 1950 to 2000, and how can we do it again?”), “How do we fix shortages of housing and other necessities?”, “What is causing the global rise of income and wealth inequality?”, “How altruistic are human beings, to whom, and under what conditions?” and “What makes financial markets so unstable?” Ironically, ‘Unified models of human biases’, the one problem that Wikipedia got right, is missing from Moffatt’s list.

And I’m also humble enough to realize that some of the deepest problems in economics may be ones that we don’t even quite know how to formulate yet. We like to pretend that economics is a mature science, almost on the coattails of physics; but it’s really a very young science, more like psychology. We go through these ‘cargo cult science’ rituals of p-values and econometric hypothesis tests, but there are deep, basic forces we don’t understand. We have precisely prepared all the apparatus for the detection of the phlogiston, and by God, we’ll get that 0.05 however we have to. (Think I’m being too harsh? “Real Business Cycle” theory essentially posits that the Great Depression was caused by everyone deciding that they weren’t going to work for a few years, and as whole countries fell into the abyss from failing financial markets, most economists still clung to the Efficient Market Hypothesis.) Our whole discipline requires major injections of intellectual humility: We not only don’t have all the answers; we’re not even sure we have all the questions.

I think the esoteric nature of questions like the ‘equity premium puzzle’ and the ‘tatonnement problem’ is precisely the source of their appeal: It’s the sort of thing you can say you’re working on and sound very smart, because the person you’re talking to likely has no idea what you’re talking about. (Or else they are a fellow economist, and thus in on the con.) If you said that you’re trying to explain why poor countries are poor and why rich countries are rich—and if economics isn’t doing that, then what in the world are we doing?—you’d have to admit that we honestly have only the faintest idea, and that millions of people have suffered from bad advice economists gave their governments based on ideas that turned out to be wrong.

It’s really quite problematic how closely economists are tied to policymaking (except when we do really know what we’re talking about?). We’re trying to do engineering without even knowing physics. Maybe there’s no way around it: We have to make some sort of economic policy, and it makes more sense to do it based on half-proven ideas than on completely unfounded ideas. (Engineering without physics worked pretty well for the Romans, after all.) But it seems to me that we could be relying more, at least for the time being, on the experiences and intuitions of the people who have worked on the ground, rather than on sophisticated theoretical models that often turn out to be utterly false. We could eschew ‘shock therapy‘ approaches that try to make large interventions in an economy all at once, in favor of smaller, subtler adjustments whose consequences are more predictable. We could endeavor to focus on the cases where we do have relatively clear knowledge (like rent control) and avoid those where the uncertainty is greatest (like economic development).

At the very least, we could admit what we don’t know, and admit that there is probably a great deal we don’t know that we don’t know.

Mental illness is different from physical illness.

Post 311 Oct 13 JDN 2458770

There’s something I have heard a lot of people say about mental illness that is obviously well-intentioned, but ultimately misguided: “Mental illness is just like physical illness.”

Sometimes they say it explicitly in those terms. Other times they make analogies, like “If you wouldn’t shame someone with diabetes for using insulin, why shame someone with depression for using SSRIs?”

Yet I don’t think this line of argument will ever meaningfully reduce the stigma surrounding mental illness, because, well, it’s obviously not true.

There are some characteristics of mental illness that are analogous to physical illness—but there are some that really are quite different. And these are not just superficial differences, the way that pancreatic disease is different from liver disease. No one would say that liver cancer is exactly the same as pancreatic cancer; but they’re both obviously of the same basic category. There are differences between physical and mental illness which are both obvious, and fundamental.

Here’s the biggest one: Talk therapy works on mental illness.

You can’t talk yourself out of diabetes. You can’t talk yourself out of a myocardial infarction. You can’t even talk yourself out of a migraine (though I’ll get back to that one in a little bit). But you can, in a very important sense, talk yourself out of depression.

In fact, talk therapy is one of the most effective treatments for most mental disorders. Cognitive behavioral therapy for depression is on its own as effective as most antidepressants (with far fewer harmful side effects), and the two combined are clearly more effective than either alone. Talk therapy is as effective as medication on bipolar disorder, and considerably better on social anxiety disorder.

To be clear: Talk therapy is not just people telling you to cheer up, or saying it’s “all in your head”, or suggesting that you get more exercise or eat some chocolate. Nor does it consist of you ruminating by yourself and trying to talk yourself out of your disorder. Cognitive behavioral therapy is a very complex, sophisticated series of techniques that require years of expert training to master. Yet, at its core, cognitive therapy really is just a very sophisticated form of talking.

The fact that mental disorders can be so strongly affected by talk therapy shows that there really is an important sense in which mental disorders are “all in your head”, and not just the trivial way that an axe wound or even a migraine is all in your head. It isn’t just the fact that it is physically located in your brain that makes a mental disorder different; it’s something deeper than that.

Here’s the best analogy I can come up with: Physical illness is hardware. Mental illness is software.

If a computer breaks after being dropped on the floor, that’s like an axe wound: An obvious, traumatic source of physical damage that is an unambiguous cause of the failure.

If a computer’s CPU starts overheating, that’s like a physical illness, like diabetes: There may be no particular traumatic cause, or even any clear cause at all, but there is obviously something physically wrong that needs physical intervention to correct.

But if a computer is suffering glitches and showing error messages when it tries to run particular programs, that is like mental illness: Something is wrong not on the low-level hardware, but on the high-level software.

These different types of problem require different types of solutions. If your CPU is overheating, you might want to see about replacing your cooling fan or your heat sink. But if your software is glitching while your CPU is otherwise running fine, there’s no point in replacing your fan or heat sink. You need to get a programmer in there to look at the code and find out where it’s going wrong. A talk therapist is like a programmer: The words they say to you are like code scripts they’re trying to get your processor to run correctly.

Of course, our understanding of computers is vastly better than our understanding of human brains, and as a result, programmers tend to get a lot better results than psychotherapists. (Interestingly they do actually get paid about the same, though! Programmers make about 10% more on average than psychotherapists, and both are solidly within the realm of average upper-middle-class service jobs.) But the basic process is the same: Using your expert knowledge of the system, find the right set of inputs that will fix the underlying code and solve the problem. At no point do you physically intervene on the system; you could do it remotely without ever touching it—and indeed, remote talk therapy is a thing.

What about other neurological illnesses, like migraine or fibromyalgia? Well, I think these are somewhere in between. They’re definitely more physical in some sense than a mental disorder like depression. There isn’t any cognitive content to a migraine the way there is to a depressive episode. When I feel depressed or anxious, I feel depressed or anxious about something. But there’s nothing a migraine is about. To use the technical term in cognitive science, neurological disorders lack the intentionality that mental disorders generally have. “What are you depressed about?” is a question you usually can answer. “What are you migrained about?” generally isn’t.

But like mental disorders, neurological disorders are directly linked to the functioning of the brain, and often seem to operate at a higher level of functional abstraction. The brain doesn’t have pain receptors on itself the way most of your body does; getting a migraine behind your left eye doesn’t actually mean that that specific lobe of your brain is what’s malfunctioning. It’s more like a general alert your brain is sending out that something is wrong, somewhere. And fibromyalgia often feels like it’s taking place in your entire body at once. Moreover, most neurological disorders are strongly correlated with mental disorders—indeed, the comorbidity of depression with migraine and fibromyalgia in particular is extremely high.

Which disorder causes the other? That’s a surprisingly difficult question. Intuitively we might expect the “more physical” disorder to be the primary cause, but that’s not always clear. Successful treatment for depression often improves symptoms of migraine and fibromyalgia as well (though the converse is also true). They seem to be mutually reinforcing one another, and it’s not at all clear which came first. I suppose if I had to venture a guess, I’d say the pain disorders probably have causal precedence over the mood disorders, but I don’t actually know that for a fact.

To stretch my analogy a little, it may be like a software problem that ends up causing a hardware problem, or a hardware problem that ends up causing a software problem. There actually have been a few examples of this, like games with graphics so demanding that they caused GPUs to overheat.

The human brain is a lot more complicated than a computer, and the distinction between software and hardware is fuzzier; we don’t actually have “code” that runs on a “processor”. We have synapses that continually fire on and off and rewire each other. The closest thing we have to code that gets processed in sequence would be our genome, and that is several orders of magnitude less complex than the structure of our brains. Aside from simply physically copying the entire brain down to every synapse, it’s not clear that you could ever “download” a mind, science fiction notwithstanding.

Indeed, anything that changes your mind necessarily also changes your brain; the effects of talking are generally subtler than the effects of a drug (and certainly subtler than the effects of an axe wound!), but they are nevertheless real, physical changes. (This is why it is so idiotic whenever the popular science press comes out with: “New study finds that X actually changes your brain!” where X might be anything from drinking coffee to reading romance novels. Of course it does! If it has an effect on your mind, it did so by having an effect on your brain. That’s the Basic Fact of Cognitive Science.) This is not so different from computers, however: Any change in software is also a physical change, in the form of some sequence of electrical charges that were moved from one place to another. Actual physical electrons are a few microns away from where they otherwise would have been because of what was typed into that code.

Of course I want to reduce the stigma surrounding mental illness. (For both selfish and altruistic reasons, really.) But blatantly false assertions don’t seem terribly productive toward that goal. Mental illness is different from physical illness; we can’t treat it the same.