What we can be thankful for

Nov 24 JDN 2458812

Thanksgiving is upon us, yet as more and more evidence is revealed implicating President Trump in grievous crimes, as US carbon emissions that had been declining are now trending upward again, as our air quality deteriorates for the first time in decades, it may be hard to see what we should be thankful for.

But these are exceptions to a broader trend: The world is getting better, in almost every way, remarkably quickly. Homicide rates in the US are lower than they’ve been since the 1960s. Worldwide, the homicide rate has fallen 20% since 1990.

While world carbon emissions are still increasing, on a per capita basis they are actually starting to decline, and on an efficiency basis (kilograms of carbon-equivalent per dollar of GDP) they are at their lowest ever. This trend is likely to continue: The price of solar power has rapidly declined to the point where it is now the cheapest form of electric power.
The number—not just proportion, absolute number—of people in extreme poverty has declined by almost two-thirds within my own lifetime. The proportion is the lowest it has ever been in human history. World life expectancy is at its highest ever. Death rates from infectious disease fell by over 85% over the 20th century, and are now at their lowest ever.

I wouldn’t usually cite Reason as a source, but they’re right on this one: Defeat appears imminent for all four Horsemen of the Apocalypse. Pestilence, Famine, War, and even Death are all on the decline. We have a great deal to be grateful for: We are living in a golden age.

This is not to say that we should let ourselves become complacent and stop trying to make the world better: On the contrary, it proves that the world can be made better, which gives us every reason to redouble our efforts to do so.

Is Singularitarianism a religion?

 

Nov 17 JDN 2458805

I said in last week’s post that Pascal’s Mugging provides some deep insights into both Singularitarianism and religion. In particular, it explains why Singularitarianism seems so much like a religion.

This has been previously remarked, of course. I think Eric Steinhart makes the best case for Singularitarianism as a religion:

I think singularitarianism is a new religious movement. I might add that I think Clifford Geertz had a pretty nice (though very abstract) definition of religion. And I think singularitarianism fits Geertz’s definition (but that’s for another time).

My main interest is this: if singularitarianism is a new religious movement, then what should we make of it? Will it mainly be a good thing? A kind of enlightenment religion? It might be an excellent alternative to old-fashioned Abrahamic religion. Or would it degenerate into the well-known tragic pattern of coercive authority? Time will tell; but I think it’s worth thinking about this in much more detail.

To be clear: Singularitarianism is probably not a religion. It is certainly not a cult, as it has sometimes been even more harshly accused of being; the behaviors it prescribes are largely normative, pro-social behaviors, so at worst it would resemble a mainstream religion. Really, if every religion only inspired people to do things like donate to famine relief and work on AI research (as opposed to, say, beheading gay people), I wouldn’t have much of a problem with religion.

In fact, Singularitarianism has one vital advantage over religion: Evidence. While the evidence in favor of it is not overwhelming, there is enough evidential support to lend plausibility to at least a broad concept of Singularitarianism: Technology will continue rapidly advancing, achieving accomplishments currently only in our wildest imaginings; artificial intelligence surpassing human intelligence will arise, sooner than many people think; human beings will change ourselves into something new and broadly superior; these posthumans will go on to colonize the galaxy and build a grander civilization than we can imagine. I don’t know that these things are true, but I hope they are, and I think it’s at least reasonably likely. All I’m really doing is extrapolating based on what human civilization has done so far and what we are currently trying to do now. Of course, we could well blow ourselves up before then, or regress to a lower level of technology, or be wiped out by some external force. But there’s at least a decent chance that we will continue to thrive for another million years to come.

But yes, Singularitarianism does in many ways resemble a religion: It offers a rich, emotionally fulfilling ontology combined with ethical prescriptions that require particular behaviors. It promises us a chance at immortality. It inspires us to work toward something much larger than ourselves. More importantly, it makes us special—we are among the unique few (millions?) who have the power to influence the direction of human and posthuman civilization for a million years. The stronger forms of Singularitarianism even have a flavor of apocalypse: When the AI comes, sooner than you think, it will immediately reshape everything at effectively infinite speed, so that from one year—or even one moment—to the next, our whole civilization will be changed. (These forms of Singularitarianism are substantially less plausible than the broader concept I outlined above.)

It’s this sense of specialness that Pascal’s Mugging provides some insight into. When it is suggested that we are so special, we should be inherently skeptical, not least because it feels good to hear it. (As Less Wrong would put it, we need to avoid a Happy Death Spiral.) Human beings like to feel special; we want to feel special. Our brains are configured to seek out evidence that we are special and reject evidence that we are not. This is true even to the point of absurdity: One cannot be mathematically coherent without admitting that the compliment “You’re one in a million” is equivalent to the statement “There are seven thousand people in the world as good as or better than you”; and yet the latter sounds much worse, because it does not make us feel special.

Indeed, the connection between Pascal’s Mugging and Pascal’s Wager is quite deep: Each argument takes a tiny probability and multiplies it by a huge impact in order to get a large expected utility. This often seems to be the way that religions defend themselves: Well, yes, the probability is small; but can you take the chance? Can you afford to take that bet if it’s really your immortal soul on the line?

And Singularitarianism has a similar case to make, even aside from the paradox of Pascal’s Mugging itself. The chief argument for why we should be focusing all of our time and energy on existential risk is that the potential payoff is just so huge that even a tiny probability of making a difference is enough to make it the only thing that matters. We should be especially suspicious of that; anything that says it is the only thing that matters is to be doubted with utmost care. The really dangerous religion has always been the fanatical kind that says it is the only thing that matters. That’s the kind of religion that makes you crash airliners into buildings.

I think some people may well have become Singularitarians because it made them feel special. It is exhilarating to be one of these lone few—and in the scheme of things, even a few million is a small fraction of all past and future humanity—with the power to effect some shift, however small, in the probability of a far grander, far brighter future.

Yet in fact this is very likely the circumstance in which we find ourselves. We could have been born in the Neolithic, struggling to survive, utterly unaware of what would come a few millennia hence; we could have been born in the posthuman era, one of a trillion other artist/gamer/philosophers living in a world where all the hard work that needed to be done is already done. In the long S-curve of human development, we could have been born in the flat part on the left or the flat part on the right—and by all probability, we should have been; most people were. But instead we happened to be born in that tiny middle slice, where the curve slopes upward at its fastest. I suppose somebody had to be, and it might as well be us.

[Figure: sigmoid curve of human development, with the present in the steep middle section]

A priori, we should doubt that we were born so special. And when forming our beliefs, we should compensate for the fact that we want to believe we are special. But we do in fact have evidence, lots of evidence. We live in a time of astonishing scientific and technological progress.

My lifetime has included the progression from Deep Thought first beating David Levy at chess to the creation of the Michigan Micro Mote (M3), a computer one millimeter across that runs on a few nanowatts and nevertheless has ten times as much computing power as the 80-pound computer that ran the Saturn V. (The human brain runs on about 100 watts and has a processing power of about 1 petaflop, so its energy efficiency is about 10 TFLOPS/W. The M3 runs on about 10 nanowatts and has a processing power of about 0.1 megaflops, so its energy efficiency is also about 10 TFLOPS/W. We did it! We finally made a computer as energy-efficient as the human brain! But we have still not matched the brain in terms of space efficiency: The volume of the human brain is about 1000 cm^3, so its space efficiency is about 1 TFLOPS/cm^3. The volume of the M3 is about 1 mm^3, so its space efficiency is only about 100 MFLOPS/cm^3. The brain still wins by a factor of 10,000.)
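The back-of-the-envelope comparison above is easy to check directly; the power, throughput, and volume figures below are just the rough order-of-magnitude estimates quoted in the text, not measured specifications:

```python
# Rough energy- and space-efficiency comparison of the human brain
# and a millimeter-scale computer, using the estimates quoted above.

brain_flops = 1e15   # ~1 petaflop
brain_watts = 100    # ~100 W
brain_cm3 = 1000     # ~1000 cm^3

m3_flops = 1e5       # ~0.1 megaflops
m3_watts = 10e-9     # ~10 nanowatts
m3_cm3 = 1e-3        # ~1 mm^3

# Energy efficiency (FLOPS per watt)
brain_fpw = brain_flops / brain_watts  # ~1e13 = 10 TFLOPS/W
m3_fpw = m3_flops / m3_watts           # ~1e13 = 10 TFLOPS/W

# Space efficiency (FLOPS per cm^3)
brain_fpc = brain_flops / brain_cm3    # ~1e12 = 1 TFLOPS/cm^3
m3_fpc = m3_flops / m3_cm3             # ~1e8 = 100 MFLOPS/cm^3

print(brain_fpw / m3_fpw)  # ~1: roughly equal energy efficiency
print(brain_fpc / m3_fpc)  # ~10,000: the brain wins on density
```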

My mother saw us go from the first jet airliners to landing on the Moon to the International Space Station and robots on Mars. She grew up before the polio vaccine and is still alive to see the first 3D-printed human heart. When I was a child, smartphones didn’t even exist; now more people have smartphones than have toilets. I may yet live to see the first human beings set foot on Mars. The pace of change is utterly staggering.

Without a doubt, this is sufficient evidence to believe that we, as a civilization, are living in a very special time. The real question is: Are we, as individuals, special enough to make a difference? And if we are, what weight of responsibility does this put upon us?

If you are reading this, odds are the answer to the first question is yes: You are definitely literate, and most likely educated, probably middle- or upper-middle-class in a First World country. Countries are something I can track, and I do get some readers from non-First-World countries; and of course I don’t observe your education or socioeconomic status. But at an educated guess, this is surely my primary reading demographic. Even if you don’t have the faintest idea what I’m talking about when I use Bayesian logic or calculus, you’re already quite exceptional. (And if you do? All the more so.)

That means the second question must apply: What do we owe these future generations who may come to exist if we play our cards right? What can we, as individuals, hope to do to bring about this brighter future?

The Singularitarian community will generally tell you that the best thing to do with your time is to work on AI research, or, failing that, the best thing to do with your money is to give it to people working on artificial intelligence research. I’m not going to tell you not to work on AI research or donate to AI research, as I do think it is among the most important things humanity needs to be doing right now, but I’m also not going to tell you that it is the one single thing you must be doing.

You should almost certainly be donating somewhere, but I’m not so sure it should be to AI research. Maybe it should be famine relief, or malaria prevention, or medical research, or human rights, or environmental sustainability. If you’re in the United States (as I know most of you are), the best thing to do with your money may well be to support political campaigns, because US political, economic, and military hegemony means that as goes America, so goes the world. Stop and think for a moment how different the prospects of global warming might have been—how many millions of lives might have been saved!—if Al Gore had become President in 2001. For lack of a few million dollars in Tampa twenty years ago, Miami may be gone in fifty. If you’re not sure which cause is most important, just pick one; or better yet, donate to a diversified portfolio of charities and political campaigns. Diversified investment isn’t just about monetary return.

And you should think carefully about what you’re doing with the rest of your life. This can be hard to do; we can easily get so caught up in just getting through the day, getting through the week, just getting by, that we lose sight of having a broader mission in life. Of course, I don’t know what your situation is; it’s possible things really are so desperate for you that you have no choice but to keep your head down and muddle through. But you should also consider the possibility that this is not the case: You may not be as desperate as you feel. You may have more options than you know. Most “starving artists” don’t actually starve. More people regret staying in their dead-end jobs than regret quitting to follow their dreams. I guess if you stay in a high-paying job in order to earn to give, that might really be ethically optimal; but I doubt it will make you happy. And in fact some of the most important fields are constrained by a lack of good people doing good work, and not by a simple lack of funding.

I see this especially in economics: As a field, economics is really not focused on the right kind of questions. There’s far too much prestige for incrementally adjusting some overcomplicated unfalsifiable mess of macroeconomic algebra, and not nearly enough for trying to figure out how to mitigate global warming, how to turn back the tide of rising wealth inequality, or what happens to human society once robots take all the middle-class jobs. Good work is being done in devising measures to fight poverty directly, but not in devising means to undermine the authoritarian regimes that are responsible for maintaining poverty. Formal mathematical sophistication is prized, and deep thought about hard questions is eschewed. We are carefully arranging the pebbles on our sandcastle in front of the oncoming tidal wave. I won’t tell you that it’s easy to change this—it certainly hasn’t been easy for me—but I have to imagine it’d be easier with more of us trying rather than with fewer. Nobody needs to donate money to economics departments, but we definitely do need better economists running those departments.

You should ask yourself what it is that you are really good at, what you—you yourself, not anyone else—might do to make a mark on the world. This is not an easy question: I have not quite answered for myself whether I would make more difference as an academic researcher, a policy analyst, a nonfiction author, or even a science fiction author. (If you scoff at the latter: Who would have any concept of AI, space colonization, or transhumanism, if not for science fiction authors? The people who most tilted the dial of human civilization toward this brighter future may well be Clarke, Roddenberry, and Asimov.) It is not impossible to be some combination or even all of these, but the more I try to take on the more difficult my life becomes.

Your own path will look different from mine, different, indeed, from anyone else’s. But you must choose it wisely. For we are very special individuals, living in a very special time.

Pascal’s Mugging

Nov 10 JDN 2458798

In the Singularitarian community there is a paradox known as “Pascal’s Mugging”. The name is an intentional reference to Pascal’s Wager (and the link is quite apt, for reasons I’ll discuss in a later post).

There are a few different versions of the argument; Yudkowsky’s original argument in which he came up with the name “Pascal’s Mugging” relies upon the concept of the universe as a simulation and an understanding of esoteric mathematical notation. So here is a more intuitive version:

A strange man in a dark hood comes up to you on the street. “Give me five dollars,” he says, “or I will destroy an entire planet filled with ten billion innocent people. I cannot prove to you that I have this power, but how much is an innocent life worth to you? Even if it is as little as $5,000, are you really willing to bet on ten trillion to one odds that I am lying?”

Do you give him the five dollars? I suspect that you do not. Indeed, I suspect that you’d be less likely to give him the five dollars than if he had merely said he was homeless and asked for five dollars to help pay for food. (Also, you may have objected that you value innocent lives, even faraway strangers you’ll never meet, at more than $5,000 each—but if that’s the case, you should probably be donating more, because the world’s best charities can save a life for about $3,000.)

But therein lies the paradox: Are you really willing to bet on ten trillion to one odds?

This argument gives me much the same feeling as the Ontological Argument; as Russell said of the latter, “it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.” It wasn’t until I read this post on GiveWell that I could really formulate the answer clearly enough to explain it.

The apparent force of Pascal’s Mugging comes from the idea of expected utility: Even if the probability of an event is very small, if it has a sufficiently great impact, the expected utility can still be large.

The problem with this argument is that extraordinary claims require extraordinary evidence. If a man held a gun to your head and said he’d shoot you if you didn’t give him five dollars, you’d give him five dollars. This is a plausible claim and he has provided ample evidence. If he were instead wearing a bomb vest (or even just really puffy clothing that could conceal a bomb vest), and he threatened to blow up a building unless you gave him five dollars, you’d probably do the same. This is less plausible (what kind of terrorist only demands five dollars?), but it’s not worth taking the chance.

But when he claims to have a Death Star parked in orbit of some distant planet, primed to make another Alderaan, you are right to be extremely skeptical. And if he claims to be a being from beyond our universe, primed to destroy so many lives that we couldn’t even write the number down with all the atoms in our universe (which was actually Yudkowsky’s original argument), to say that you are extremely skeptical seems a grievous understatement.

That GiveWell post provides a way to make this intuition mathematically precise in terms of Bayesian logic. If you have a normal prior with mean 0 and standard deviation 1, and you are presented with a likelihood with mean X and standard deviation X, what should your posterior distribution be?

Normal priors are quite convenient: the normal distribution is self-conjugate, so the posterior is also normal. The precision (inverse variance) of the posterior distribution is the sum of the two precisions, and its mean is the average of the two means, weighted by their precisions.

So the posterior variance is 1/(1 + 1/X^2).

The posterior mean is 1/(1+1/X^2)*(0) + (1/X^2)/(1+1/X^2)*(X) = X/(X^2+1).

That is, the mean of the posterior distribution is just barely higher than zero—and in fact, it is decreasing in X, if X > 1.
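This normal-normal update is easy to sketch numerically; here is a minimal illustration using the prior N(0, 1) and likelihood N(X, X) from the text:

```python
# Bayesian update with a normal prior N(0, 1) and a normal likelihood
# centered at the claimed impact X with standard deviation X.
# Posterior precision = sum of precisions; posterior mean is the
# precision-weighted average of the two means.

def posterior(X):
    prior_mean, prior_prec = 0.0, 1.0       # prior N(0, 1)
    like_mean, like_prec = X, 1.0 / X**2    # likelihood N(X, X)
    post_prec = prior_prec + like_prec
    post_mean = (prior_prec * prior_mean + like_prec * like_mean) / post_prec
    post_var = 1.0 / post_prec
    return post_mean, post_var

for X in [1.0, 10.0, 100.0, 10000.0]:
    mean, var = posterior(X)
    print(X, mean)  # the posterior mean X/(X^2+1) shrinks as X grows past 1
```

The bigger the claimed impact X, the smaller the posterior mean: exactly the shrinking credibility described above.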

For those who don’t speak Bayesian: If someone says he’s going to have an effect of magnitude X, you should be less likely to believe him the larger that X is. And indeed this is precisely what our intuition said before: If he says he’s going to kill one person, believe him. If he says he’s going to destroy a planet, don’t believe him, unless he provides some really extraordinary evidence.

What sort of extraordinary evidence? To his credit, Yudkowsky imagined the sort of evidence that might actually be convincing:

If a poorly-dressed street person offers to save 10^(10^100) lives (a googolplex lives) for $5 using their Matrix Lord powers, and you claim to assign this scenario less than 10^-(10^100) probability, then apparently you should continue to believe absolutely that their offer is bogus even after they snap their fingers and cause a giant silhouette of themselves to appear in the sky.

This post he called “Pascal’s Muggle”, after the term from the Harry Potter series, since some of the solutions that had been proposed for Pascal’s Mugging resulted in a situation almost as absurd: the mugger could exhibit powers beyond our imagining, and yet we would never have sufficient evidence to believe him.

So, let me go on record as saying this: Yes, if someone snaps his fingers and causes the sky to rip open and reveal a silhouette of himself, I’ll do whatever that person says. The odds are still higher that I’m dreaming or hallucinating than that this is really a being from beyond our universe, but if I’m dreaming, it makes no difference, and if someone can make me hallucinate that vividly he can probably cajole the money out of me in other ways. And there might be just enough chance that this could be real that I’m willing to give up that five bucks.

These seem like really strange thought experiments, because they are. But like many good thought experiments, they can provide us with some important insights. In this case, I think they are telling us something about the way human reasoning can fail when faced with impacts beyond our normal experience: We are in danger of both over-estimating and under-estimating their effects, because our brains aren’t equipped to deal with magnitudes and probabilities on that scale. This has made me realize something rather important about both Singularitarianism and religion, but I’ll save that for next week’s post.

What if the charitable deduction were larger?

Nov 3 JDN 2458791

Right now, the charitable tax deduction is really not all that significant. It makes donating to charity cheaper, but you still always end up with less money after donating than you had before. It might cause you to donate more than you otherwise would have, but you’ll still only give to a charity you already care about.

This is because the tax deduction applies to your income, rather than your taxes directly. So if you make $100,000 and donate $10,000, you pay taxes as if your income were $90,000. Say your tax rate is 25%; then you go from paying $25,000 and keeping $75,000 to paying $22,500 and keeping $67,500. The more you donate, the less money you will have to keep.
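That arithmetic can be sketched directly. This is a toy model with the flat 25% rate from the example above; real tax schedules are progressive, but the point is the same:

```python
# Toy model of a charitable deduction with a flat tax rate.
# The deduction reduces taxable income, not the tax bill itself,
# so donating always leaves you with less money than not donating.

def after_tax_income(income, donation, rate=0.25):
    taxable = income - donation      # deduction shrinks taxable income
    tax = rate * taxable
    return income - donation - tax   # what you actually keep

print(after_tax_income(100_000, 0))       # no donation: keep 75,000
print(after_tax_income(100_000, 10_000))  # donate 10,000: keep 67,500
```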

Many people don’t seem to understand this; they seem to think that rich people can actually get richer by donating to charity. That can’t be done in our current tax system, or at least not legally. (There are fraudulent ways to do so; but there are fraudulent ways to do lots of things.) Part of the confusion may be related to the fact that people don’t seem to understand how tax brackets work; they worry about being “pushed into a higher tax bracket” as though this could somehow reduce their after-tax income, but that doesn’t happen. That isn’t how tax brackets work.

Some welfare programs work that way—for instance, seeing your income rise high enough to lose Medicaid eligibility can be bad enough that you would prefer to have less income—but taxes themselves do not.

The graph below shows the actual average tax rate (red) and marginal tax rate (purple) of the current US federal income tax:

[Figure: average tax rate (red) and marginal tax rate (purple) under the current US federal income tax]

From that graph alone, you might think that going to a higher tax bracket could result in lower after-tax income. But the next graph, of before-tax (blue) and after-tax (green) income, shows otherwise:

[Figure: before-tax income (blue) and after-tax income (green)]

All that tax deductions can do is reduce your taxable income. Thus the tax deduction benefits you if you were already donating, but never leaves you richer than you would have been without donating at all.

For example, if you have an income of $700,000, you would pay $223,000 in taxes and keep $477,000 in after-tax income. If you instead donate $100,000, your adjusted gross income will be reduced to $600,000, you will only pay $186,000 in taxes, and you will keep $414,000 in after-tax income. If there were no tax deduction, you would still have to pay $223,000 in taxes, and your after-tax income would be only $377,000. So you do benefit from the tax deduction; but there is no amount of donation which will actually increase your after-tax income to above $477,000.

But we wouldn’t have to do it this way. We could instead apply the deduction as a tax credit, which would make the effect of the deduction far larger.
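The difference is easy to see in the same toy flat-rate model (my own illustration): a deduction subtracts the donation from taxable income, while a 100% credit subtracts it from the tax bill itself.

```python
# Deduction vs. 100% tax credit, toy example with a flat 25% rate.

def keep_with_deduction(income, donation, rate=0.25):
    tax = rate * (income - donation)  # deduction reduces taxable income
    return income - donation - tax

def keep_with_credit(income, donation, rate=0.25):
    tax = max(rate * income - donation, 0)  # credit reduces the tax itself
    return income - donation - tax

print(keep_with_deduction(100_000, 10_000))  # donating still leaves you poorer
print(keep_with_credit(100_000, 10_000))     # same take-home as not donating
```

Under the 100% credit, giving $10,000 to charity leaves you with exactly what you would have kept without donating: you are giving to charity instead of paying taxes.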

Several years back, Miles Kimball (an economist who formerly worked at Michigan, now at UC Boulder) proposed a quite clever change to the tax system:

My proposal is to raise marginal tax rates above about $75,000 per person–or $150,000 per couple–by 10% (a dime on every extra dollar), but offer a 100% tax credit for public contributions up to the entire amount of the tax surcharge.

Kimball’s argument for the policy is mainly that this would make a tax increase more palatable, by giving people more control over where their money goes. This is surely true, and a worthwhile endeavor.

But the even larger benefit might come from the increased charitable donations. If we limited the tax credit to particularly high-impact charities, we would increase donations to those charities. In the current system, by contrast, you get the same deduction regardless of where you give your money, even though we know that some charities are literally hundreds of times as cost-effective as others.

In fact, we might not even want to limit the tax credit to that 10% surcharge. If people want to donate more than 10% of their income to high-impact charities, perhaps we should let them. This would mean that the federal deficit could actually increase under this policy, but if so, there would have to be so much money donated that we’d most likely end world hunger. That’s a tradeoff I’m quite willing to make.

In principle, we could even introduce a tax credit that is greater than 100%—say, a 120% tax credit for the top-rated charities. This is not mathematically inconsistent, though it is surely a very bad idea. In that case, it absolutely would be possible to end up with more money than you started with, and the richer you are, the more you could get. There would effectively be a positive return on charitable donations, paid for from the government budget. Bill Gates, for instance, could give $10 billion a year to charity, and the government would not only pay for it but also have to give him an extra $2 billion. So even for the best charities—which probably are a good deal more cost-effective than the US government—we should cap the tax credit at 100%.

Obvious choices for high-impact charities include UNICEF, the Red Cross, GiveDirectly, and the Malaria Consortium. We would need some sort of criteria to decide which charities should get the benefits; I’m thinking we could have some sort of panel of experts who rate charities based on their cost-effectiveness.

It wouldn’t have to be all-or-nothing, either; charities with good but not top ratings could get an increased credit but not the full 100%. The expert panel could rate charities on a scale from 0 to 10, and anything rated above 5 could get a (X-5)*20% tax credit, so that a perfect 10 earns the full 100%.
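One possible rating-to-credit schedule along these lines (my own sketch; I scale the rating so that a perfect 10 earns the full 100% credit, and cap it there):

```python
# Map a hypothetical expert-panel rating (0-10) to a charitable tax
# credit rate. Ratings of 5 or below get no credit; above 5, the
# credit rises linearly to 100% at a perfect 10. Capped at 100% to
# avoid paying donors to donate (the Bill Gates problem above).

def credit_rate(rating):
    rate = max(rating - 5, 0) * 0.20
    return min(rate, 1.0)  # never exceed 100%

for r in [3, 5, 7, 10]:
    print(r, credit_rate(r))
```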

In effect, the current policy says, “If you give to charity, you don’t have to pay taxes on the money you gave; but all of your other taxes still apply.” The new policy would say, “You can give to a top-impact charity instead of paying taxes.”

Americans hate taxes and already give a lot to charity, but most of those donations are to relatively ineffective charities. This policy could incentivize people to give more or at least give to better places, probably without hurting the government budget—and if it does hurt the government budget, the benefits will be well worth the cost.