On compromise: The kind of politics that can be bipartisan—and the kind that can’t

Dec 29 JDN 2458847

The “polarization” of our current government has been much maligned. And there is some truth to this: The ideological gap between Democrats and Republicans in Congress is larger than it has been in a century. There have been many calls by self-proclaimed “centrists” for a return to “bipartisanship”.

But there is nothing centrist about compromising with fascists. If one party wants to destroy democracy and the other wants to save it, a true centrist would vote entirely with the pro-democracy party.

There is a kind of politics that can be bipartisan, that can bear reasonable compromise. Most economic policy is of this kind. If one side wants a tax of 40% and the other wants 20%, it’s quite reasonable to set the tax at 30%. If one side wants a large tariff and the other no tariff, it’s quite reasonable to make a small tariff. It could still be wrong—I’d tend to say that the 40% tax with no tariff is the right way to go—but it won’t be unjust. We can in fact “agree to disagree” in such cases. There really is a reasonable intermediate view between the extremes.

But there is also a kind of politics that can’t be bipartisan, in which compromise is inherently unjust. Most social policy is of this kind. If one side wants to let women vote and the other doesn’t, you can’t compromise by letting half of women vote. Women deserve the right to vote, period. All of them. In some sense letting half of women vote would be an improvement over none at all, but it’s obviously not an acceptable policy. The only just thing to do is to keep fighting until all women can vote.

This isn’t a question of importance per se.

Climate change is probably the single most important thing going on in the world this century, but it is actually something we can reasonably compromise about. It isn’t obvious where exactly the emission targets should be set to balance environmental sustainability with economic growth, and reasonable people can disagree about how to draw that line. (It is not reasonable to deny that climate change is important and refuse to take any action at all—which, sadly, is what the Republicans have been doing lately.) Thousands of innocent people have already been killed by Trump’s nonsensical deregulation of air pollution—but it is a genuinely difficult problem to decide exactly how pollution should be regulated.

Conversely, voter suppression has a small effect, if any, on our actual electoral outcomes. In a country of 320 million people, even tens of thousands of votes rarely make a difference, and the (Constitutional) Electoral College does far greater damage to the principle of “one person, one vote” than voter suppression ever could. But voter suppression is fundamentally, inherently anti-democratic. When you try to suppress votes, you declare yourself an enemy of the free world.

There has always been disagreement about both kinds of issues; that hasn’t changed. The fundamental rights of women, racial minorities, and LGBT people have always been politically contentious, when—qua fundamental rights—they should never have been. But at least as far as I could tell, we seemed to be making progress on all these fronts. The left wing was dragging the right wing, kicking and screaming if necessary, toward a more just society.

Then came President Donald Trump.

The Trump administration, more than any other administration I can remember, has been reversing social progress, taking hardline far-right positions on the kind of issues that we can’t compromise about. Locking up children at the border. Undermining judicial due process. Suppressing voter participation. These are attacks upon the foundations of a free society. We can’t “agree to disagree” on them.

Indeed, Trump’s economic policy has been surprisingly ambivalent; while he cuts taxes on the rich like a standard Republican, his trade war is much more of a leftist idea. It’s not so much that he’s willing to compromise as that he’s utterly inconsistent, but at least he’s not a consistent extremist on these issues.

That is what makes Trump an anomaly. The Republicans have gradually become more extreme over time, but it was Trump who carried them over a threshold, where they stopped retarding social progress and began actively reversing it. Removing Trump himself will not remove the problem—but nor would it be an empty gesture. He is a real part of the problem, and removing him might just give us the chance to make the deeper changes that need to be made.

The House agrees. Unfortunately, I doubt the Senate will.

Tithing makes quite a lot of sense

Dec 22 JDN 2458840

Christmas is coming soon, and it is a season of giving: Not only gifts to those we love, but also to charities that help people around the world. It’s a theme of some of our most classic Christmas stories, like A Christmas Carol. (I do have to admit: Scrooge really isn’t wrong for not wanting to give to some random charity without any chance to evaluate it. But I also get the impression he wasn’t giving a lot to evaluated charities either.) And people do really give more around this time of year: Charitable donation rates peak in November and December (though that may also have something to do with tax deductions).

Where should we give? This is not an easy question, but it’s one that we now have tools to answer: There are various independent charity evaluation agencies, like GiveWell and Charity Navigator, which can at least provide some idea of which charities are most cost-effective.

How much should we give? This question is a good deal harder.

Perhaps a perfect being would determine their own precise marginal utility of wealth, and the marginal utility of spending on every possible charity, and give of their wealth to the best possible charity up until those two marginal utilities are equal. Since $1 to UNICEF or the Against Malaria Foundation saves about 0.02 QALY, and (unless you’re a billionaire) you don’t have enough money to meaningfully affect the budget of UNICEF, you’d probably need to give until you are yourself at the UN poverty level of $1.90 per day.

I don’t know of anyone who does this. Even Peter Singer, who writes books that essentially tell us to do this, doesn’t do this. I’m not sure it’s humanly possible to do this. Indeed, I’m not even so sure that a perfect being would do it, since it would require destroying their own life and their own future potential.
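To make that concrete, here is a minimal sketch of the marginal-utility calculation, assuming log utility of your own consumption and a constant 0.02 QALY per dollar donated; the conversion weight k between QALYs for others and your own utility is a made-up illustrative parameter, not an established figure:

```python
# A minimal sketch of the marginal-utility argument, not a real calibration.
# Assumptions (mine): log utility of own consumption, a constant 0.02 QALY
# per dollar donated, and a weight k converting QALYs for others into your
# own utility units.

def optimal_donation(income, qaly_per_dollar=0.02, k=1.0):
    """Maximize log(c) + k * qaly_per_dollar * (income - c) over consumption c.
    First-order condition: 1/c = k * qaly_per_dollar, so c* = 1/(k * qaly_per_dollar)."""
    c_star = min(1.0 / (k * qaly_per_dollar), income)  # can't consume more than you have
    return income - c_star                             # donate everything else

# With k = 1, optimal consumption is $50 per year: even below the UN poverty
# line, which is exactly why nobody actually lives by this rule.
print(optimal_donation(50_000))  # 49950.0
```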

How about we all give 10%? In other words, how about we tithe? Yes, it sounds arbitrary—because it is. It could just as well have been 8% or 11%. Perhaps one-tenth feels natural to a base-10 culture made of 10-fingered beings, and if we used a base-12 numeral system we’d think in terms of giving one-twelfth instead. But 10% feels reasonable to a lot of people, it has a lot of cultural support behind it already, and it has become a Schelling point for coordination on this otherwise intractable problem. We need to draw the line somewhere, and it might as well be there.

As Slate Star Codex put it:

It’s ten percent because that’s the standard decreed by Giving What We Can and the effective altruist community. Why should we believe their standard? I think we should believe it because if we reject it in favor of “No, you are a bad person unless you give all of it,” then everyone will just sit around feeling very guilty and doing nothing. But if we very clearly say “You have discharged your moral duty if you give ten percent or more,” then many people will give ten percent or more. The most important thing is having a Schelling point, and ten percent is nice, round, divinely ordained, and – crucially – the Schelling point upon which we have already settled. It is an active Schelling point. If you give ten percent, you can have your name on a nice list and get access to a secret forum on the Giving What We Can site which is actually pretty boring.

It’s ten percent because definitions were made for Man, not Man for definitions, and if we define “good person” in a way such that everyone is sitting around miserable because they can’t reach an unobtainable standard, we are stupid definition-makers. If we are smart definition-makers, we will define it in whichever way which makes it the most effective tool to convince people to give at least that much.

I think it would be also reasonable to adjust this proportion according to your household income. If you are extremely poor, give a token amount: Perhaps 1% or 2%. (As it stands, most poor people already give more than this, and most rich people give less.) If you are somewhat below the median household income, give a bit less: Perhaps 6% or 8%. (I currently give 8%; I plan to increase to 10% once I get a higher-paying job after graduation.) If you are somewhat above, give a bit more: Perhaps 12% or 15%. If you are spectacularly rich, maybe you should give as much as 25%.
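If you wanted to write that sliding scale down precisely, it might look something like this; the breakpoints and the default median income are my illustrative choices, not any official standard:

```python
def suggested_giving_rate(income, median=63_000):
    """Suggested fraction of income to give, scaled by income relative to
    the median household income (the default is a rough 2019 US figure)."""
    r = income / median
    if r < 0.25:
        return 0.01   # extremely poor: a token amount
    elif r < 1.0:
        return 0.07   # somewhat below median: 6-8%
    elif r < 3.0:
        return 0.10   # around or above median: the standard tithe
    elif r < 20.0:
        return 0.15   # well-off: 12-15%
    else:
        return 0.25   # spectacularly rich

print(f"{suggested_giving_rate(40_000):.0%}")   # below median: 7%
print(f"{suggested_giving_rate(500_000):.0%}")  # well-off: 15%
```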

Is 10% enough? Well, actually, if everyone gave, even 1% would probably be enough. The total GDP of the First World is about $40 trillion; 1% of that is $400 billion per year, which is more than enough to end world hunger. But since we know that not everyone will give, we need to adjust our standard upward so that those who do give will give enough. (There’s actually an optimization problem here which is basically equivalent to finding a monopoly’s profit-maximizing price.) And just ending world hunger probably isn’t enough; there is plenty of disease to cure, education to improve, research to do, and ecology to protect. If, say, a third of First World people give 10%, that would be about $1.3 trillion, which would be enough money to at least make a huge difference in all those areas.
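The arithmetic in that paragraph is easy to check:

```python
# Back-of-the-envelope figures from the paragraph above.
first_world_gdp = 40e12  # ~$40 trillion per year

everyone_gives_1pct = 0.01 * first_world_gdp          # $400 billion per year
third_give_10pct = (1 / 3) * 0.10 * first_world_gdp   # ~$1.33 trillion per year
# (This treats a third of the people as earning a third of the income,
# which is only a rough approximation.)

print(f"${everyone_gives_1pct / 1e9:.0f} billion, ${third_give_10pct / 1e12:.2f} trillion")
```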

You can decide for yourself where you think you should draw the line. But 10% is a pretty good benchmark, and above all—please, give something. If you give anything, you are probably already above average. A large proportion of people give nothing at all. (Only 24% of US tax returns include a charitable deduction—though, to be fair, a lot of us donate but don’t itemize deductions. Even once you account for that, only about 60% of US households give to charity in any given year.)

To a first approximation, all human behavior is social norms

Dec 15 JDN 2458833

The language we speak, the food we eat, and the clothes we wear—indeed, the fact that we wear clothes at all—are all the direct result of social norms. But norms run much deeper than this: Almost everything we do is more norm than not.

Why do you sleep and wake up at a particular time of day? For most people, the answer is that they need to get up to go to work. Why do you need to go to work at that specific time? Why does almost everyone go to work at the same time? Social norms.

Even the most extreme human behaviors are often most comprehensible in terms of social norms. The most effective predictive models of terrorism are based on social networks: You are much more likely to be a terrorist if you know people who are terrorists, and much more likely to become a terrorist if you spend a lot of time talking with terrorists. Cultists and conspiracy theorists seem utterly baffling if you imagine that humans form their beliefs rationally—and totally unsurprising if you realize that humans mainly form their beliefs by matching those around them.

For a long time, economists have ignored social norms at our peril; we’ve assumed that financial incentives will be sufficient to motivate behavior, when social incentives can very easily override them. Indeed, it is entirely possible for a financial incentive to have a negative effect, when it crowds out a social incentive: A good example is a friend who would gladly come over to help you with something as a friend, but then becomes reluctant if you offer to pay him $25. I previously discussed another example, where taking a mentor out to dinner sounds good but paying him seems corrupt.

Why do you drive on the right side of the road (or the left, if you’re in Britain)? The law? Well, the law is already a social norm. But in fact, it’s hardly just that. You probably sometimes speed or run red lights, which are also in violation of traffic laws. Yet somehow driving on the right side seems to be different. Well, that’s because driving on the right has a much stronger norm—and in this case, that norm is self-enforcing, with the risk of severe bodily harm or death.

This is a good example of why it isn’t necessary for everyone to choose to follow a norm for that norm to have a great deal of power. As long as the norms include some mechanism for rewarding those who follow and punishing those who don’t, norms can become compelling even to those who would prefer not to obey. Sometimes it’s not even clear whether people are following a norm or following direct incentives, because the two are so closely aligned.
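The driving example can be made precise as a two-player coordination game (my framing here, a standard textbook model): driving on the right and driving on the left are both stable conventions, because no individual driver gains by deviating alone.

```python
# Payoffs: 1 to each driver if both choose the same side, -10 (a crash) otherwise.
import itertools

payoff = {("R", "R"): (1, 1), ("L", "L"): (1, 1),
          ("R", "L"): (-10, -10), ("L", "R"): (-10, -10)}

def is_nash(profile):
    """A profile is a Nash equilibrium if neither driver can gain by deviating alone."""
    for i in (0, 1):
        for deviation in ("R", "L"):
            alt = list(profile)
            alt[i] = deviation
            if payoff[tuple(alt)][i] > payoff[profile][i]:
                return False
    return True

print([p for p in itertools.product("RL", repeat=2) if is_nash(p)])
# [('R', 'R'), ('L', 'L')] -- either convention, once adopted, enforces itself
```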

Humans are not the only social species, but we are by far the most social species. We form larger, more complex groups than any other animal; we form far more complex systems of social norms; and we follow those norms with slavish obedience. Indeed, I’m a little suspicious of some of the evolutionary models predicting the evolution of social norms, because they predict it too well; they seem to suggest that it should arise all the time, when in fact it’s only a handful of species who exhibit it at all and only we who build our whole existence around it.

Along with our extreme capacity for altruism, this is another way that human beings actually deviate more from the infinite identical psychopaths of neoclassical economics than most other animals. Yes, we’re smarter than other animals; other animals are more likely to make mistakes (though certainly we make plenty of our own). But most other animals aren’t motivated by entirely different goals than individual self-interest (or “evolutionary self-interest” in a Selfish Gene sort of sense) the way we typically are. Other animals try to be selfish and often fail; we try not to be selfish and usually succeed.

Economics experiments often go out of their way to exclude social motives as much as possible—anonymous random matching with no communication, for instance—and still fail to do so. Human behavior in experiments is consistent, systematic—and almost never completely selfish.

Once you start looking for norms, you see them everywhere. Indeed, it becomes hard to see anything else. To a first approximation, all human behavior is social norms.

Good for the economy isn’t the same as good

Dec 8 JDN 2458826

Many of the common critiques of economics are actually somewhat misguided, or at least outdated: While there are still some neoclassical economists who think that markets are perfect and humans are completely rational, most economists these days would admit that there are at least some exceptions to this. But there’s at least one common critique that I think still has a good deal of merit: “Good for the economy” isn’t the same thing as good.

I’ve read literally dozens, if not hundreds, of articles on economics, in both popular press and peer-reviewed journals, that all defend their conclusions in the following way: “Intervention X would statistically be expected to increase GDP/raise total surplus/reduce unemployment. Therefore, policymakers should implement intervention X.” The fact that a policy would be “good for the economy” (in a very narrow sense) is taken as a completely compelling reason that this policy must be overall good.

The clearest examples of this always turn up during a recession, when inevitably people will start saying that cutting unemployment benefits will reduce unemployment. Sometimes it’s just right-wing pundits, but often it’s actually quite serious economists.

The usual left-wing response is to deny the claim, explain all the structural causes of unemployment in a recession and point out that unemployment benefits are not what caused the surge in unemployment. This is true; it is also utterly irrelevant. It can be simultaneously true that the unemployment was caused by bad monetary policy or a financial shock, and also true that cutting unemployment benefits would in fact reduce unemployment.

Indeed, I’m fairly certain that both of those propositions are true, to a greater or lesser extent. Most people who are unemployed will remain unemployed regardless of how high or low unemployment benefits are; and likewise most people who are employed will remain so. But at the margin, I’m sure there’s someone who is on the fence about searching for a job, or who is trying to find a job but could try a little harder with some extra pressure, or who has a few lousy job offers they’re not taking because they hope to find a better offer later. That is, I have little doubt that the claim “Cutting unemployment benefits would reduce unemployment” is true.

The problem is that this is in no way a sufficient argument for cutting unemployment benefits. For while it might reduce unemployment per se, more importantly it would actually increase the harm of unemployment. Indeed, those two effects are in direct proportion: Cutting unemployment benefits only reduces unemployment insofar as it makes being unemployed a more painful and miserable experience for the unemployed.

Indeed, the very same (oversimplified) economic models that predict that cutting benefits would reduce unemployment use that precise mechanism, and thereby predict, necessarily, that cutting unemployment benefits will harm those who are unemployed. It has to. In some sense, it’s supposed to; otherwise it wouldn’t have any effect at all.

That is, if your goal is actually to help the people harmed by a recession, cutting unemployment benefits is absolutely not going to accomplish that. But if your goal is actually to reduce unemployment at any cost, I suppose it would in fact do that. (Also highly effective against unemployment: Mass military conscription. If everyone’s drafted, no one is unemployed!)
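For the curious, here is a stylized sketch of that mechanism, a bare-bones McCall-style job search model; the wage distribution, benefit levels, and discount factor are all illustrative numbers I made up:

```python
def search_model(b, wages, beta=0.9):
    """Value of unemployment U solves U = b + beta * E[max(w/(1-beta), U)],
    with wage offers drawn uniformly from `wages` each period. The reservation
    wage is the offer that leaves the worker indifferent: w_res = (1-beta)*U."""
    U = b / (1 - beta)  # initial guess; the iteration is a contraction
    for _ in range(500):
        U = b + beta * sum(max(w / (1 - beta), U) for w in wages) / len(wages)
    w_res = (1 - beta) * U
    accept = sum(w >= w_res for w in wages) / len(wages)
    return U, w_res, accept

wages = list(range(10, 101, 10))  # offers: 10, 20, ..., 100
for b in (60, 30, 0):
    U, w_res, accept = search_model(b, wages)
    print(f"benefit={b:2d}: value of being unemployed={U:6.1f}, "
          f"reservation wage={w_res:.1f}, offers accepted={accept:.0%}")
```

Cutting b raises the share of offers accepted (unemployment falls) and lowers the value of being unemployed; in this model the two effects are literally the same thing.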

Similarly, I’ve read more than a few policy briefs written to the governments of poor countries telling them how some radical intervention into their society would (probably) increase their GDP, and then either subtly implying or outright stating that this means they are obliged to enact this intervention immediately.

Don’t get me wrong: Poor countries need to increase their GDP. Indeed, it’s probably the single most important thing they need to do. Better security, education, healthcare, and sanitation are all things that will increase GDP—but they’re also things that will be easier to provide if you have more GDP.

(Rich countries, on the other hand? Maybe we don’t actually need to increase GDP. We may actually be better off focusing on things like reducing inequality and improving environmental sustainability, while keeping our level of GDP roughly the same—or maybe even reducing it somewhat. Stay inside the wedge.)

But the mere fact that a policy will increase GDP is not a sufficient reason to implement that policy. You also need to consider all sorts of other effects the policy will have: Poverty, inequality, social unrest, labor standards, pollution, and so on.

To be fair, sometimes these articles only say that the policy will increase GDP, and don’t actually assert that this is a sufficient reason to implement it, theoretically leaving open the possibility that other considerations will be overriding.

But that’s really not all that comforting. If the only thing you say about a policy is a major upside, like it or not, you are implicitly endorsing that policy. Framing is vital. Everything you say could be completely, objectively, factually true; but if you only tell one side of the story, you are presenting a biased view. There’s a reason the oath is “The truth, the whole truth, and nothing but the truth.” A partial view of the facts can be as bad as an outright lie.

Of course, it’s unreasonable to expect you to present every possible consideration that could become relevant. Rather, I expect you to do two things: First, if you include some positive aspects, also include some negative ones, and vice-versa; never let your argument sound completely one-sided. Second, clearly and explicitly acknowledge that there are other considerations you haven’t mentioned.

Moreover, if you are talking about something like increasing GDP or decreasing unemployment—something that has been, many times, by many sources, treated as though it were a completely compelling reason unto itself—you must be especially careful. In such a context, an article that would be otherwise quite balanced can still come off as an unqualified endorsement.

Creativity and mental illness

Dec 1 JDN 2458819

There is some truth to the stereotype that artistic people are crazy. Mental illnesses, particularly bipolar disorder, are overrepresented among artists, writers, and musicians. Creative people score highly on literally all five of the Big Five personality traits: They are higher in Openness, higher in Conscientiousness, higher in Extraversion (that one actually surprised me), higher in Agreeableness, and higher in Neuroticism. Creative people just have more personality, it seems.

But in fact mental illness is not as overrepresented among creative people as most people think, and the highest probability of being a successful artist occurs when you have close relatives with mental illness, but are not yourself mentally ill. Those with mental illness actually tend to be most creative when their symptoms are in remission. This suggests that the apparent link between creativity and mental illness may actually increase over time, as treatments improve and remission becomes easier.

One possible source of the link is that artistic expression may be a form of self-medication: Art therapy does seem to have some promise in treating a variety of mental disorders (though it is not nearly as effective as conventional therapy and medication). But that wouldn’t explain why family history of mental illness is actually a better predictor of creativity than mental illness itself.

My guess is that in order to be creative, you need to think differently than other people. You need to see the world in a way that others do not see it. Mental illness is surely not the only way to do that, but it’s definitely one way.

But creativity also requires basic functioning: If you are totally crippled by a mental illness, you’re not going to be very creative. So the people who are most creative have just enough craziness to think differently, but not so much that it takes over their lives.

This might even help explain how mental illness persisted in our population, despite its obvious survival disadvantages. It could be some form of heterozygote advantage.

The classic example of heterozygote advantage is sickle-cell anemia: If you have no copies of the sickle-cell gene, you’re normal. If you have two copies, you have sickle-cell anemia, which is very bad. But if you have only one copy, you’re healthy—and you’re resistant to malaria. Thus, high risk of malaria—as our ancestors living in central Africa certainly faced—creates a selection pressure that keeps sickle-cell genes in the population, even though having two copies is much worse than having none at all.
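This balancing act can be simulated directly with the standard single-locus selection model; the fitness numbers below are made up for illustration:

```python
# Genotype fitnesses: AA (no sickle-cell allele): 1 - s (malaria risk),
# Aa (one copy): 1, aa (two copies, sickle-cell anemia): 1 - t.
# Heterozygote advantage keeps allele a at equilibrium frequency s / (s + t).

def next_allele_freq(q, s, t):
    """One generation of selection on the frequency q of allele a."""
    p = 1 - q
    w_bar = p * p * (1 - s) + 2 * p * q + q * q * (1 - t)  # mean fitness
    return (p * q + q * q * (1 - t)) / w_bar               # standard recursion

s, t = 0.15, 0.80  # illustrative selection coefficients, not measured values
q = 0.01           # start the allele rare
for _ in range(500):
    q = next_allele_freq(q, s, t)

print(f"simulated equilibrium: {q:.3f}, predicted s/(s+t): {s / (s + t):.3f}")
```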

Mental illness might function something like this. I suspect it’s far more complicated than sickle-cell anemia, which is literally just two alleles of a single gene; but the overall process may be similar. If having just a little bit of bipolar disorder or schizophrenia makes you see the world differently than other people and makes you more creative, there are lots of reasons why that might improve the survival of your genes: There are the obvious problem-solving benefits, but also the simple fact that artists are sexy.

The downside of such “weird-thinking” genes is that they can go too far and make you mentally ill, perhaps if you have too many copies of them, or if you face an environmental trigger that sets them off. Sometimes the reason you see the world differently than everyone else is that you’re just seeing it wrong. But if the benefits of creativity are high enough—and they surely are—this could offset the risks, in an evolutionary sense.

But one thing is quite clear: If you are mentally ill, don’t avoid treatment for fear it will damage your creativity. Quite the opposite: A mental illness that is well treated and in remission is the optimal state for creativity. Go seek treatment, so that your creativity may blossom.