Good for the economy isn’t the same as good

Dec 8 JDN 2458826

Many of the common critiques of economics are actually somewhat misguided, or at least outdated: While there are still some neoclassical economists who think that markets are perfect and humans are completely rational, most economists these days would admit that there are at least some exceptions to this. But there’s at least one common critique that I think still has a good deal of merit: “Good for the economy” isn’t the same thing as good.

I’ve read literally dozens, if not hundreds, of articles on economics, in both popular press and peer-reviewed journals, that all defend their conclusions in the following way: “Intervention X would statistically be expected to increase GDP/raise total surplus/reduce unemployment. Therefore, policymakers should implement intervention X.” The fact that a policy would be “good for the economy” (in a very narrow sense) is taken as a completely compelling reason that this policy must be overall good.

The clearest examples of this always turn up during a recession, when inevitably people will start saying that cutting unemployment benefits will reduce unemployment. Sometimes it’s just right-wing pundits, but often it’s actually quite serious economists.

The usual left-wing response is to deny the claim, explain all the structural causes of unemployment in a recession and point out that unemployment benefits are not what caused the surge in unemployment. This is true; it is also utterly irrelevant. It can be simultaneously true that the unemployment was caused by bad monetary policy or a financial shock, and also true that cutting unemployment benefits would in fact reduce unemployment.

Indeed, I’m fairly certain that both of those propositions are true, to a greater or lesser extent. Most people who are unemployed will remain unemployed regardless of how high or low unemployment benefits are; and likewise most people who are employed will remain so. But at the margin, I’m sure there’s someone who is on the fence about searching for a job, or who is trying to find a job but could try a little harder with some extra pressure, or who has a few lousy job offers they’re not taking because they hope to find a better offer later. That is, I have little doubt that the claim “Cutting unemployment benefits would reduce unemployment” is true.

The problem is that this is in no way a sufficient argument for cutting unemployment benefits. For while it might reduce unemployment per se, more importantly it would actually increase the harm of unemployment. Indeed, those two effects are in direct proportion: Cutting unemployment benefits only reduces unemployment insofar as it makes being unemployed a more painful and miserable experience for the unemployed.

Indeed, the very same (oversimplified) economic models that predict that cutting benefits would reduce unemployment use that precise mechanism, and thereby predict, necessarily, that cutting unemployment benefits will harm those who are unemployed. It has to. In some sense, it’s supposed to; otherwise it wouldn’t have any effect at all.

That is, if your goal is actually to help the people harmed by a recession, cutting unemployment benefits is absolutely not going to accomplish that. But if your goal is actually to reduce unemployment at any cost, I suppose it would in fact do that. (Also highly effective against unemployment: Mass military conscription. If everyone’s drafted, no one is unemployed!)

Similarly, I’ve read more than a few policy briefs written to the governments of poor countries telling them how some radical intervention into their society would (probably) increase their GDP, and then either subtly implying or outright stating that this means they are obliged to enact this intervention immediately.

Don’t get me wrong: Poor countries need to increase their GDP. Indeed, it’s probably the single most important thing they need to do. Providing better security, education, healthcare, and sanitation are all things that will increase GDP—but they’re also things that will be easier if you have more GDP.

(Rich countries, on the other hand? Maybe we don’t actually need to increase GDP. We may actually be better off focusing on things like reducing inequality and improving environmental sustainability, while keeping our level of GDP roughly the same—or maybe even reducing it somewhat. Stay inside the wedge.)

But the mere fact that a policy will increase GDP is not a sufficient reason to implement that policy. You also need to consider all sorts of other effects the policy will have: Poverty, inequality, social unrest, labor standards, pollution, and so on.

To be fair, sometimes these articles only say that the policy will increase GDP, and don’t actually assert that this is a sufficient reason to implement it, theoretically leaving open the possibility that other considerations will be overriding.

But that’s really not all that comforting. If the only thing you say about a policy is a major upside, like it or not, you are implicitly endorsing that policy. Framing is vital. Everything you say could be completely, objectively, factually true; but if you only tell one side of the story, you are presenting a biased view. There’s a reason the oath is “The truth, the whole truth, and nothing but the truth.” A partial view of the facts can be as bad as an outright lie.

Of course, it’s unreasonable to expect you to present every possible consideration that could become relevant. Rather, I expect you to do two things: First, if you include some positive aspects, also include some negative ones, and vice-versa; never let your argument sound completely one-sided. Second, clearly and explicitly acknowledge that there are other considerations you haven’t mentioned.

Moreover, if you are talking about something like increasing GDP or decreasing unemployment—something that has been, many times, by many sources, treated as though it were a completely compelling reason unto itself—you must be especially careful. In such a context, an article that would be otherwise quite balanced can still come off as an unqualified endorsement.

Creativity and mental illness

Dec 1 JDN 2458819

There is some truth to the stereotype that artistic people are crazy. Mental illnesses, particularly bipolar disorder, are overrepresented among artists, writers, and musicians. Creative people score highly on literally all five of the Big Five personality traits: They are higher in Openness, higher in Conscientiousness, higher in Extraversion (that one actually surprised me), higher in Agreeableness, and higher in Neuroticism. Creative people just have more personality, it seems.

But in fact mental illness is not as overrepresented among creative people as most people think, and the highest probability of being a successful artist occurs when you have close relatives with mental illness but are not yourself mentally ill. Those with mental illness actually tend to be most creative when their symptoms are in remission. This suggests that the apparent link between creativity and mental illness may actually strengthen over time, as treatments improve and remission becomes easier.

One possible source of the link is that artistic expression may be a form of self-medication: Art therapy does seem to have some promise in treating a variety of mental disorders (though it is not nearly as effective as conventional psychotherapy and medication). But that wouldn’t explain why family history of mental illness is actually a better predictor of creativity than mental illness itself.

My guess is that in order to be creative, you need to think differently than other people. You need to see the world in a way that others do not see it. Mental illness is surely not the only way to do that, but it’s definitely one way.

But creativity also requires basic functioning: If you are totally crippled by a mental illness, you’re not going to be very creative. So the people who are most creative have just enough craziness to think differently, but not so much that it takes over their lives.

This might even help explain how mental illness persisted in our population, despite its obvious survival disadvantages. It could be some form of heterozygote advantage.

The classic example of heterozygote advantage is sickle-cell anemia: If you have no copies of the sickle-cell gene, you’re normal. If you have two copies, you have sickle-cell anemia, which is very bad. But if you have only one copy, you’re healthy—and you’re resistant to malaria. Thus, a high risk of malaria—which our ancestors certainly faced, living in central Africa—creates a selection pressure that keeps sickle-cell genes in the population, even though having two copies is much worse than having none at all.
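
You can even watch this persistence happen in a toy population-genetics model. Here is a minimal sketch in Python (the fitness numbers are made up for illustration, not real sickle-cell epidemiology), showing that selection holds the harmful allele at a stable intermediate frequency instead of eliminating it:

    # Toy model of heterozygote advantage. Fitness values are assumptions:
    # carriers (AS) resist malaria, so they out-reproduce both non-carriers
    # (AA) and those with the disease (SS).
    w_AA, w_AS, w_SS = 0.85, 1.00, 0.20

    p = 0.01  # initial frequency of the sickle-cell allele S
    for generation in range(500):
        q = 1 - p
        mean_w = p*p*w_SS + 2*p*q*w_AS + q*q*w_AA
        p = (p*p*w_SS + p*q*w_AS) / mean_w  # S frequency in the next generation

    print(round(p, 3))  # ~0.158: matches the analytic equilibrium 0.15/(0.15+0.80)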

Mental illness might function something like this. I suspect it’s far more complicated than sickle-cell anemia, which is literally just two alleles of a single gene; but the overall process may be similar. If having just a little bit of bipolar disorder or schizophrenia makes you see the world differently than other people and makes you more creative, there are lots of reasons why that might improve the survival of your genes: There are the obvious problem-solving benefits, but also the simple fact that artists are sexy.

The downside of such “weird-thinking” genes is that they can go too far and make you mentally ill, perhaps if you have too many copies of them, or if you face an environmental trigger that sets them off. Sometimes the reason you see the world differently than everyone else is that you’re just seeing it wrong. But if the benefits of creativity are high enough—and they surely are—this could offset the risks, in an evolutionary sense.

But one thing is quite clear: If you are mentally ill, don’t avoid treatment for fear it will damage your creativity. Quite the opposite: A mental illness that is well treated and in remission is the optimal state for creativity. Go seek treatment, so that your creativity may blossom.

What we can be thankful for

Nov 24 JDN 2458812

Thanksgiving is upon us, yet as more and more evidence is revealed implicating President Trump in grievous crimes, as US carbon emissions that had been declining begin trending upward again, and as our air quality deteriorates for the first time in decades, it may be hard to see what we should be thankful for.

But these are exceptions to a broader trend: The world is getting better, in almost every way, remarkably quickly. Homicide rates in the US are at their lowest since the 1960s. Worldwide, the homicide rate has fallen 20% since 1990.

While world carbon emissions are still increasing, on a per-capita basis they are actually starting to decline, and on an efficiency basis (kilograms of CO2-equivalent per dollar of GDP) they are at their lowest ever. This trend is likely to continue: The price of solar power has rapidly declined to the point where it is now the cheapest form of electric power.

The number—not just proportion, absolute number—of people in extreme poverty has declined by almost two-thirds within my own lifetime. The proportion is the lowest it has ever been in human history. World life expectancy is at its highest ever. Death rates from infectious disease fell by over 85% over the 20th century, and are now at their lowest ever.

I wouldn’t usually cite Reason as a source, but they’re right on this one: Defeat appears imminent for all four Horsemen of the Apocalypse. Pestilence, Famine, War, and even Death are all on the decline. We have a great deal to be grateful for: We are living in a golden age.

This is not to say that we should let ourselves become complacent and stop trying to make the world better: On the contrary, it proves that the world can be made better, which gives us every reason to redouble our efforts to do so.

Is Singularitarianism a religion?

Nov 17 JDN 2458805

I said in last week’s post that Pascal’s Mugging provides some deep insights into both Singularitarianism and religion. In particular, it explains why Singularitarianism seems so much like a religion.

This has been previously remarked, of course. I think Eric Steinhart makes the best case for Singularitarianism as a religion:

I think singularitarianism is a new religious movement. I might add that I think Clifford Geertz had a pretty nice (though very abstract) definition of religion. And I think singularitarianism fits Geertz’s definition (but that’s for another time).

My main interest is this: if singularitarianism is a new religious movement, then what should we make of it? Will it mainly be a good thing? A kind of enlightenment religion? It might be an excellent alternative to old-fashioned Abrahamic religion. Or would it degenerate into the well-known tragic pattern of coercive authority? Time will tell; but I think it’s worth thinking about this in much more detail.

To be clear: Singularitarianism is probably not a religion. It is certainly not a cult, as some have even more damningly accused; the behaviors it prescribes are largely normative, pro-social behaviors, so at worst it would be a mainstream religion. Really, if every religion only inspired people to do things like donate to famine relief and work on AI research (as opposed to, say, beheading gay people), I wouldn’t have much of a problem with religion.

In fact, Singularitarianism has one vital advantage over religion: Evidence. While the evidence in favor of it is not overwhelming, there is enough evidential support to lend plausibility to at least a broad concept of Singularitarianism: Technology will continue rapidly advancing, achieving accomplishments currently only in our wildest imaginings; artificial intelligence surpassing human intelligence will arise, sooner than many people think; human beings will change ourselves into something new and broadly superior; these posthumans will go on to colonize the galaxy and build a grander civilization than we can imagine. I don’t know that these things are true, but I hope they are, and I think it’s at least reasonably likely. All I’m really doing is extrapolating based on what human civilization has done so far and what we are currently trying to do now. Of course, we could well blow ourselves up before then, or regress to a lower level of technology, or be wiped out by some external force. But there’s at least a decent chance that we will continue to thrive for another million years to come.

But yes, Singularitarianism does in many ways resemble a religion: It offers a rich, emotionally fulfilling ontology combined with ethical prescriptions that require particular behaviors. It promises us a chance at immortality. It inspires us to work toward something much larger than ourselves. More importantly, it makes us special—we are among the unique few (millions?) who have the power to influence the direction of human and posthuman civilization for a million years. The stronger forms of Singularitarianism even have a flavor of apocalypse: When the AI comes, sooner than you think, it will immediately reshape everything at effectively infinite speed, so that from one year—or even one moment—to the next, our whole civilization will be changed. (These forms of Singularitarianism are substantially less plausible than the broader concept I outlined above.)

It’s this sense of specialness that Pascal’s Mugging provides some insight into. When it is suggested that we are so special, we should be inherently skeptical, not least because it feels good to hear that. (As Less Wrong would put it, we need to avoid a Happy Death Spiral.) Human beings like to feel special; we want to feel special. Our brains are configured to seek out evidence that we are special and reject evidence that we are not. This is true even to the point of absurdity: One cannot be mathematically coherent without admitting that the compliment “You’re one in a million” is equivalent to the statement “There are seven thousand people as good as or better than you” (seven billion people divided by a million)—and yet the latter seems much worse, because it does not make us sound special.

Indeed, the connection between Pascal’s Mugging and Pascal’s Wager is quite deep: Each argument takes a tiny probability and multiplies it by a huge impact in order to get a large expected utility. This often seems to be the way that religions defend themselves: Well, yes, the probability is small; but can you take the chance? Can you afford to take that bet if it’s really your immortal soul on the line?

And Singularitarianism has a similar case to make, even aside from the paradox of Pascal’s Mugging itself. The chief argument for why we should be focusing all of our time and energy on existential risk is that the potential payoff is just so huge that even a tiny probability of making a difference is enough to make it the only thing that matters. We should be especially suspicious of that; anything that says it is the only thing that matters is to be doubted with utmost care. The really dangerous religion has always been the fanatical kind that says it is the only thing that matters. That’s the kind of religion that makes you crash airliners into buildings.

I think some people may well have become Singularitarians because it made them feel special. It is exhilarating to be one of these lone few—and in the scheme of things, even a few million is a small fraction of all past and future humanity—with the power to effect some shift, however small, in the probability of a far grander, far brighter future.

Yet, in fact this is very likely the circumstance in which we are. We could have been born in the Neolithic, struggling to survive, utterly unaware of what would come a few millennia hence; we could have been born in the posthuman era, one of a trillion other artist/gamer/philosophers living in a world where all the hard work that needed to be done is already done. In the long S-curve of human development, we could have been born in the flat part on the left or the flat part on the right—and by all probability, we should have been; most people were. But instead we happened to be born in that tiny middle slice, where the curve slopes upward at its fastest. I suppose somebody had to be, and it might as well be us.

[Figure: sigmoid curve of human development]

A priori, we should doubt that we were born so special. And when forming our beliefs, we should compensate for the fact that we want to believe we are special. But we do in fact have evidence, lots of evidence. We live in a time of astonishing scientific and technological progress.

My lifetime has included the progression from Deep Thought first beating David Levy to the creation of a computer one millimeter across—the Michigan Micro Mote, or M3—that runs on a few nanowatts and nevertheless has ten times as much computing power as the 80-pound computer that ran the Saturn V. (The human brain runs on about 100 watts and has a processing power of about 1 petaflop, so its energy efficiency is about 10 TFLOPS/W. The M3 runs on about 10 nanowatts and has a processing power of about 0.1 megaflops, so its energy efficiency is also about 10 TFLOPS/W. We did it! We finally made a computer as energy-efficient as the human brain! But we have still not matched the brain in terms of space-efficiency: The volume of the human brain is about 1000 cm^3, so its space efficiency is about 1 TFLOPS/cm^3. The volume of the M3 is about 1 mm^3, so its space efficiency is only about 100 MFLOPS/cm^3. The brain still wins by a factor of 10,000.)
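
For the skeptical, the arithmetic in that parenthetical is easy to verify; here it is as a few lines of Python, using the same rough estimates as the text (they are back-of-envelope figures, not precise measurements):

    # Back-of-envelope efficiency comparison, per the estimates above.
    brain_flops, brain_watts, brain_cm3 = 1e15, 100, 1000
    m3_flops, m3_watts, m3_cm3 = 1e5, 10e-9, 1e-3  # 1 mm^3 = 1e-3 cm^3

    print(brain_flops / brain_watts)  # 1e13 FLOPS/W = 10 TFLOPS/W
    print(m3_flops / m3_watts)        # 1e13 FLOPS/W = 10 TFLOPS/W: parity!
    print(brain_flops / brain_cm3)    # 1e12 FLOPS/cm^3 = 1 TFLOPS/cm^3
    print(m3_flops / m3_cm3)          # 1e8 FLOPS/cm^3 = 100 MFLOPS/cm^3
    print((brain_flops / brain_cm3) / (m3_flops / m3_cm3))  # 10,000: brain wins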

My mother saw us go from the first jet airliners to landing on the Moon to the International Space Station and robots on Mars. She grew up before the polio vaccine and is still alive to see the first 3D-printed human heart. When I was a child, smartphones didn’t even exist; now more people have smartphones than have toilets. I may yet live to see the first human beings set foot on Mars. The pace of change is utterly staggering.

Without a doubt, this is sufficient evidence to believe that we, as a civilization, are living in a very special time. The real question is: Are we, as individuals, special enough to make a difference? And if we are, what weight of responsibility does this put upon us?

If you are reading this, odds are the answer to the first question is yes: You are definitely literate, and most likely educated, probably middle- or upper-middle-class in a First World country. Countries are something I can track, and I do get some readers from non-First-World countries; and of course I don’t observe your education or socioeconomic status. But at an educated guess, this is surely my primary reading demographic. Even if you don’t have the faintest idea what I’m talking about when I use Bayesian logic or calculus, you’re already quite exceptional. (And if you do? All the more so.)

That means the second question must apply: What do we owe these future generations who may come to exist if we play our cards right? What can we, as individuals, hope to do to bring about this brighter future?

The Singularitarian community will generally tell you that the best thing to do with your time is to work on AI research, or, failing that, the best thing to do with your money is to give it to people working on artificial intelligence research. I’m not going to tell you not to work on AI research or donate to AI research, as I do think it is among the most important things humanity needs to be doing right now, but I’m also not going to tell you that it is the one single thing you must be doing.

You should almost certainly be donating somewhere, but I’m not so sure it should be to AI research. Maybe it should be famine relief, or malaria prevention, or medical research, or human rights, or environmental sustainability. If you’re in the United States (as I know most of you are), the best thing to do with your money may well be to support political campaigns, because US political, economic, and military hegemony means that as goes America, so goes the world. Stop and think for a moment how different the prospects of global warming might have been—how many millions of lives might have been saved!—if Al Gore had become President in 2001. For lack of a few million dollars in Tampa twenty years ago, Miami may be gone in fifty. If you’re not sure which cause is most important, just pick one; or better yet, donate to a diversified portfolio of charities and political campaigns. Diversified investment isn’t just about monetary return.

And you should think carefully about what you’re doing with the rest of your life. This can be hard to do; we can easily get so caught up in just getting through the day, getting through the week, just getting by, that we lose sight of having a broader mission in life. Of course, I don’t know what your situation is; it’s possible things really are so desperate for you that you have no choice but to keep your head down and muddle through. But you should also consider the possibility that this is not the case: You may not be as desperate as you feel. You may have more options than you know. Most “starving artists” don’t actually starve. More people regret staying in their dead-end jobs than regret quitting to follow their dreams. I guess if you stay in a high-paying job in order to earn to give, that might really be ethically optimal; but I doubt it will make you happy. And in fact some of the most important fields are constrained by a lack of good people doing good work, and not by a simple lack of funding.

I see this especially in economics: As a field, economics is really not focused on the right kind of questions. There’s far too much prestige for incrementally adjusting some overcomplicated unfalsifiable mess of macroeconomic algebra, and not nearly enough for trying to figure out how to mitigate global warming, how to turn back the tide of rising wealth inequality, or what happens to human society once robots take all the middle-class jobs. Good work is being done in devising measures to fight poverty directly, but not in devising means to undermine the authoritarian regimes that are responsible for maintaining poverty. Formal mathematical sophistication is prized, and deep thought about hard questions is eschewed. We are carefully arranging the pebbles on our sandcastle in front of the oncoming tidal wave. I won’t tell you that it’s easy to change this—it certainly hasn’t been easy for me—but I have to imagine it’d be easier with more of us trying rather than with fewer. Nobody needs to donate money to economics departments, but we definitely do need better economists running those departments.

You should ask yourself what it is that you are really good at, what you—you yourself, not anyone else—might do to make a mark on the world. This is not an easy question: I have not quite answered for myself whether I would make more difference as an academic researcher, a policy analyst, a nonfiction author, or even a science fiction author. (If you scoff at the latter: Who would have any concept of AI, space colonization, or transhumanism, if not for science fiction authors? The people who most tilted the dial of human civilization toward this brighter future may well be Clarke, Roddenberry, and Asimov.) It is not impossible to be some combination or even all of these, but the more I try to take on the more difficult my life becomes.

Your own path will look different than mine, different, indeed, than anyone else’s. But you must choose it wisely. For we are very special individuals, living in a very special time.

Pascal’s Mugging

Nov 10 JDN 2458798

In the Singularitarian community there is a paradox known as “Pascal’s Mugging”. The name is an intentional reference to Pascal’s Wager (and the link is quite apt, for reasons I’ll discuss in a later post).

There are a few different versions of the argument; Yudkowsky’s original argument in which he came up with the name “Pascal’s Mugging” relies upon the concept of the universe as a simulation and an understanding of esoteric mathematical notation. So here is a more intuitive version:

A strange man in a dark hood comes up to you on the street. “Give me five dollars,” he says, “or I will destroy an entire planet filled with ten billion innocent people. I cannot prove to you that I have this power, but how much is an innocent life worth to you? Even if it is as little as $5,000, are you really willing to bet on ten trillion to one odds that I am lying?”

Do you give him the five dollars? I suspect that you do not. Indeed, I suspect that you’d be less likely to give him the five dollars than if he had merely said he was homeless and asked for five dollars to help pay for food. (Also, you may have objected that you value innocent lives, even faraway strangers you’ll never meet, at more than $5,000 each—but if that’s the case, you should probably be donating more, because the world’s best charities can save a life for about $3,000.)

But therein lies the paradox: Are you really willing to bet on ten trillion to one odds?
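
Where does that number come from? Simple expected value. Here is a quick sketch, using only the figures from the mugger’s own pitch:

    # Expected-value arithmetic behind the mugging, per the mugger's pitch.
    lives = 10e9            # ten billion people on the threatened planet
    value_per_life = 5000   # the mugger's lowball valuation, in dollars
    demand = 5              # what he asks for

    stakes = lives * value_per_life   # $5e13: fifty trillion dollars at risk
    breakeven = demand / stakes       # 1e-13: one in ten trillion
    print(breakeven)
    # Naive expected utility says: pay up unless you are more than
    # ten-trillion-to-one confident that he is lying.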

This argument gives me much the same feeling as the Ontological Argument; as Russell said of the latter, “it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.” It wasn’t until I read this post on GiveWell that I could really formulate the answer clearly enough to explain it.

The apparent force of Pascal’s Mugging comes from the idea of expected utility: Even if the probability of an event is very small, if it has a sufficiently great impact, the expected utility can still be large.

The problem with this argument is that extraordinary claims require extraordinary evidence. If a man held a gun to your head and said he’d shoot you if you didn’t give him five dollars, you’d give him five dollars. This is a plausible claim and he has provided ample evidence. If he were instead wearing a bomb vest (or even just really puffy clothing that could conceal a bomb vest), and he threatened to blow up a building unless you gave him five dollars, you’d probably do the same. This is less plausible (what kind of terrorist only demands five dollars?), but it’s not worth taking the chance.

But when he claims to have a Death Star parked in orbit of some distant planet, primed to make another Alderaan, you are right to be extremely skeptical. And if he claims to be a being from beyond our universe, primed to destroy so many lives that we couldn’t even write the number down with all the atoms in our universe (which was actually Yudkowsky’s original argument), to say that you are extremely skeptical seems a grievous understatement.

That GiveWell post provides a way to make this intuition mathematically precise in terms of Bayesian logic. If you have a normal prior with mean 0 and standard deviation 1, and you are presented with a likelihood with mean X and standard deviation X, what should your posterior distribution be?

Normal priors are quite convenient; they conjugate nicely. The precision (inverse variance) of the posterior distribution is the sum of the two precisions, and the mean is a weighted average of the two means, weighted by their precision.

So the posterior variance is 1/(1 + 1/X^2).

The posterior mean is 1/(1+1/X^2)*(0) + (1/X^2)/(1+1/X^2)*(X) = X/(X^2+1).

That is, the mean of the posterior distribution is just barely higher than zero—and in fact, it is decreasing in X, if X > 1.

For those who don’t speak Bayesian: If someone says he’s going to have an effect of magnitude X, you should be less likely to believe him the larger that X is. And indeed this is precisely what our intuition said before: If he says he’s going to kill one person, believe him. If he says he’s going to destroy a planet, don’t believe him, unless he provides some really extraordinary evidence.
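
A few lines of Python make the pattern concrete (same prior and likelihood as above; nothing here is assumed beyond those):

    # Posterior mean for a N(0,1) prior and a likelihood with mean X, sd X.
    def posterior_mean(X):
        prior_prec, like_prec = 1.0, 1.0 / X**2   # precision = 1/variance
        return (like_prec * X) / (prior_prec + like_prec)  # = X/(X^2 + 1)

    for X in [0.5, 1.0, 2.0, 10.0, 100.0, 1e6]:
        print(X, posterior_mean(X))
    # Peaks at X = 1 (posterior mean 0.5), then shrinks toward zero:
    # the bigger the claimed effect, the less of it you should believe.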

What sort of extraordinary evidence? To his credit, Yudkowsky imagined the sort of evidence that might actually be convincing:

If a poorly-dressed street person offers to save 10^(10^100) lives (googolplex lives) for $5 using their Matrix Lord powers, and you claim to assign this scenario less than 10^-(10^100) probability, then apparently you should continue to believe absolutely that their offer is bogus even after they snap their fingers and cause a giant silhouette of themselves to appear in the sky.

This post he called “Pascal’s Muggle”, after the term from the Harry Potter series, since some of the solutions that had been proposed for dealing with Pascal’s Mugging had resulted in a situation almost as absurd, in which the mugger could exhibit powers beyond our imagining and yet nevertheless we’d never have sufficient evidence to believe him.

So, let me go on record as saying this: Yes, if someone snaps his fingers and causes the sky to rip open and reveal a silhouette of himself, I’ll do whatever that person says. The odds are still higher that I’m dreaming or hallucinating than that this is really a being from beyond our universe, but if I’m dreaming, it makes no difference, and if someone can make me hallucinate that vividly he can probably cajole the money out of me in other ways. And there might be just enough chance that this could be real that I’m willing to give up that five bucks.

These seem like really strange thought experiments, because they are. But like many good thought experiments, they can provide us with some important insights. In this case, I think they are telling us something about the way human reasoning can fail when faced with impacts beyond our normal experience: We are in danger of both over-estimating and under-estimating their effects, because our brains aren’t equipped to deal with magnitudes and probabilities on that scale. This has made me realize something rather important about both Singularitarianism and religion, but I’ll save that for next week’s post.

What if the charitable deduction were larger?

Nov 3 JDN 2458791

Right now, the charitable tax deduction is really not all that significant. It makes donating to charity cheaper, but you still always end up with less money after donating than you had before. It might cause you to donate more than you otherwise would have, but you’ll still only give to a charity you already care about.

This is because the tax deduction applies to your income, rather than your taxes directly. So if you make $100,000 and donate $10,000, you pay taxes as if your income were $90,000. Say your tax rate is 25%; then you go from paying $25,000 and keeping $75,000 to paying $22,500 and keeping $67,500. The more you donate, the less money you will have to keep.
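
In code, the current deduction looks something like this—a minimal sketch with a flat 25% rate (real brackets are progressive, but the logic is the same; the function name is just for illustration):

    # What you keep after donating, when donations are deductible from income.
    def kept(income, donation, rate=0.25):
        taxable = income - donation     # the deduction shrinks taxable income
        return taxable * (1 - rate)     # after-tax income, net of the gift

    print(kept(100_000, 0))       # 75000.0
    print(kept(100_000, 10_000))  # 67500.0: donating always leaves you poorer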

Many people don’t seem to understand this; they seem to think that rich people can actually get richer by donating to charity. That can’t be done in our current tax system, or at least not legally. (There are fraudulent ways to do so; but there are fraudulent ways to do lots of things.) Part of the confusion may be related to the fact that people don’t seem to understand how tax brackets work; they worry about being “pushed into a higher tax bracket” as though this could somehow reduce their after-tax income, but that doesn’t happen. That isn’t how tax brackets work.

Some welfare programs work that way—for instance, seeing your income rise high enough to lose Medicaid eligibility can be bad enough that you would prefer to have less income—but taxes themselves do not.

The graph below shows the actual average tax rate (red) and marginal tax rate (purple) of the current US federal income tax:

[Graph: average tax rate (red) and marginal tax rate (purple)]
From that graph alone, you might think that going to a higher tax bracket could result in lower after-tax income. But the next graph, of before-tax (blue) and after-tax (green) income shows otherwise:

[Graph: before-tax (blue) and after-tax (green) income]

All a tax deduction can do is shift you left along the green line: Donating reduces your taxable income, so you keep the after-tax income corresponding to that lower income. Without the deduction, you would instead subtract your donation directly from your after-tax income, landing below the green line entirely. Thus the tax deduction benefits you if you were already donating, but it never leaves you richer than you would have been without donating at all.

For example, if you have an income of $700,000, you would pay $223,000 in taxes and keep $477,000 in after-tax income. If you instead donate $100,000, your adjusted gross income will be reduced to $600,000, you will only pay $186,000 in taxes, and you will keep $414,000 in after-tax income. If there were no tax deduction, you would still have to pay $223,000 in taxes, and your after-tax income would be only $377,000. So you do benefit from the tax deduction; but there is no amount of donation which will actually increase your after-tax income to above $477,000.
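
You can check those figures with a simple bracket calculation. Here is a sketch using approximate 2019 single-filer brackets (the text’s figures are rounded versions of the same arithmetic):

    # Progressive tax: each slice of income is taxed at its own marginal rate.
    # Approximate 2019 single-filer brackets: (lower threshold, marginal rate).
    BRACKETS = [(0, 0.10), (9_700, 0.12), (39_475, 0.22), (84_200, 0.24),
                (160_725, 0.32), (204_100, 0.35), (510_300, 0.37)]

    def tax(income):
        owed = 0.0
        uppers = [b[0] for b in BRACKETS[1:]] + [float("inf")]
        for (lower, rate), upper in zip(BRACKETS, uppers):
            if income > lower:
                owed += (min(income, upper) - lower) * rate
        return owed

    print(tax(700_000))  # ~224,000 (the text rounds to $223,000)
    print(tax(600_000))  # ~187,000 (the text rounds to $186,000)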

But we wouldn’t have to do it this way. We could instead apply the deduction as a tax credit, which would make the effect of the deduction far larger.

Several years back, Miles Kimball (an economist who formerly worked at Michigan, now at CU Boulder) proposed a quite clever change to the tax system:

My proposal is to raise marginal tax rates above about $75,000 per person–or $150,000 per couple–by 10% (a dime on every extra dollar), but offer a 100% tax credit for public contributions up to the entire amount of the tax surcharge.

Kimball’s argument for the policy is mainly that this would make a tax increase more palatable, by giving people more control over where their money goes. This is surely true, and a worthwhile endeavor.

But the even larger benefit might come from the increased charitable donations. If we limited the tax credit to particularly high-impact charities, we would increase donations to those charities specifically. In the current system, by contrast, you get the same deduction regardless of where you give your money, even though we know that some charities are literally hundreds of times as cost-effective as others.

In fact, we might not even want to limit the tax credit to that 10% surcharge. If people want to donate more than 10% of their income to high-impact charities, perhaps we should let them. This would mean that the federal deficit could actually increase under this policy, but if so, there would have to be so much money donated that we’d most likely end world hunger. That’s a tradeoff I’m quite willing to make.

In principle, we could even introduce a tax credit that is greater than 100%—say, a 120% credit for the top-rated charities. This is not mathematically inconsistent, though it is surely a very bad idea. In that case, it absolutely would be possible to end up with more money than you started with, and the richer you are, the more you could get. There would effectively be a positive return on charitable donations, with the money paid for from the government budget. Bill Gates, for instance, could pay $10 billion a year to charity and the government would not only pay for it, but also have to give him an extra $2 billion. So even for the best charities—which probably are actually a good deal more cost-effective than the US government—we should cap the tax credit at 100%.
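
The difference between the three regimes shows up clearly in the donor’s net cost per $100 given; here is a sketch (the 37% marginal rate is an assumption for illustration):

    # Net out-of-pocket cost to the donor of a $100 gift under each policy.
    def net_cost(gift, credit_rate=None, marginal_rate=0.37):
        if credit_rate is None:                # current law: a deduction
            return gift * (1 - marginal_rate)  # tax saved = gift * marginal rate
        return gift * (1 - credit_rate)        # a credit offsets taxes directly

    print(net_cost(100))                   # 63.0: the deduction softens the cost
    print(net_cost(100, credit_rate=1.0))  # 0.0: give *instead of* paying taxes
    print(net_cost(100, credit_rate=1.2))  # -20.0: the donor turns a profit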

Obvious choices for high-impact charities include UNICEF, the Red Cross, GiveDirectly, and the Malaria Consortium. We would need some sort of criteria to decide which charities should get the benefits; I’m thinking we could have some sort of panel of experts who rate charities based on their cost-effectiveness.

It wouldn’t have to be all-or-nothing, either; charities with good but not top ratings could get an increased credit, just not the full 100%. The expert panel could rate charities on a scale from 0 to 10, and anything above 5 could get a (X-5)*20% tax credit—so a 6 would get 20%, an 8 would get 60%, and a perfect 10 would get the full 100%.
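
As a function, that hypothetical sliding scale is just:

    # Hypothetical rating-to-credit rule: nothing below a 5, full credit at 10.
    def credit_rate(rating):
        return max(0.0, (rating - 5) * 0.20)

    print(credit_rate(4))   # 0.0
    print(credit_rate(6))   # 0.2 (a 20% credit)
    print(credit_rate(10))  # 1.0 (the full 100% credit)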

In effect, the current policy says, “If you give to charity, you don’t have to pay taxes on the money you gave; but all of your other taxes still apply.” The new policy would say, “You can give to a top-impact charity instead of paying taxes.”

Americans hate taxes and already give a lot to charity, but most of those donations are to relatively ineffective charities. This policy could incentivize people to give more or at least give to better places, probably without hurting the government budget—and if it does hurt the government budget, the benefits will be well worth the cost.

Revealed preference: Does the fact that I did it mean I preferred it?

Post 312 Oct 27 JDN 2458784

One of the most basic axioms of neoclassical economics is revealed preference: Because we cannot observe preferences directly, we infer them from actions. Whatever you chose must be what you preferred.

Stated so baldly, this is obviously not true: We often make decisions that we later come to regret. We may choose under duress, or in confusion; we may lack necessary information. We change our minds.

And there really do seem to be economists who use it in this bald way: From the fact that a particular outcome occurred in a free market, they will infer that it must be optimally efficient. (“Freshwater” economists who are dubious of any intervention into markets seem to be most guilty of this.) In the most extreme form, this account would have us believe that people who trip and fall do so on purpose.

I doubt anyone believes that particular version—but there do seem to be people who believe that unemployment is the result of people voluntarily choosing not to work. Revealed preference has also led economists down some strange paths when trying to explain what sure looks like irrational behavior—such as “rational addiction” theory, which posits that someone can become addicted to alcohol or heroin and ruin their life entirely on the basis of rational, forward-looking planning.

The theory can be adapted to deal with these issues, by specifying that it’s only choices made with full information and all of our faculties intact that count as revealing our preferences.

But when are we ever in such circumstances? When do we ever really have all the information we need in order to make a rational decision? Just what constitutes intact faculties? No one is perfectly rational—so how rational must we be in order for our decisions to count as revealing our preferences?

Revealed preference theory also quickly becomes tautologous: Why do we choose to do things? Because we prefer them. What do we prefer? What we choose to do. Without some independent account of what our preferences are, we can’t really predict behavior this way.

A standard counter-argument to this is that revealed preference theory imposes certain constraints of consistency and transitivity, so it is not utterly vacuous. The problem with this answer is that human beings don’t obey those constraints: Consider the Allais Paradox, the Ellsberg Paradox, or the sunk cost fallacy. It’s even possible to exploit these inconsistencies to create “money pumps” that cause people to systematically give you money; this has been done in experiments. While real-world violations seem to be small, they’re definitely present. So insofar as the theory is testable, it’s false.
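
To see how a money pump works, consider a toy agent with cyclic preferences—A over B, B over C, C over A. (This is a hypothetical agent, not a model of any particular experiment.) A minimal sketch:

    # An agent with intransitive preferences can be walked around the cycle,
    # paying a penny for each "upgrade," indefinitely.
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x beats y

    holding, cash = "C", 0.0
    for offered in ["B", "A", "C"] * 3:        # three laps around the cycle
        if (offered, holding) in prefers:      # strictly prefers the offer,
            holding = offered                  # so trades up
            cash -= 0.01                       # and pays a penny each time
    print(holding, round(cash, 2))  # C -0.09: nine trades later, back where
                                    # it started and nine cents poorer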

The good news is that we really don’t need revealed preference theory. We already have ways of telling what human beings prefer that are considerably richer than simply observing what they choose in various scenarios. One very simple but surprisingly powerful method is to ask. In general, if you ask people what they want and they have no reason to distrust you, they will in fact tell you what they want.

We also have our own introspection, as well as our knowledge about millions of years of evolutionary history that shaped our brains. We don’t expect a lot of people to prefer suffering, for instance (even masochists, who might be said to ‘prefer pain’, seem to be experiencing that pain rather differently than the rest of us would). We generally expect that people will prefer to stay alive rather than die. Some may prefer chocolate, others vanilla; but few prefer motor oil. Our preferences may vary, but they do follow consistent patterns; they are not utterly arbitrary and inscrutable.

There is a deeper problem that any account of human desires must face, however: Sometimes we are actually wrong about our own desires. Affective forecasting, the prediction of our own future mental states, is astonishingly unreliable. People often wildly overestimate the emotional effects of both positive and negative outcomes. (Interestingly, people with depression tend not to do this—those with severe depression often underestimate the emotional effects of positive outcomes, while those with mild depression seem to be some of the most accurate forecasters, an example of the depressive realism effect.)

There may be no simple solution to this problem. Human existence is complicated; we spend large portions of our lives trying to figure out what it is we really want.

This means that we should not simply trust that whatever happens is what everyone—or even necessarily anyone—wanted to happen. People make mistakes, even large, systematic, repeated mistakes. Sometimes what happens is just bad, and we should be trying to change it. Indeed, sometimes people need to be protected from their own bad decisions.