Reasons to like Joe Biden

Sep 6 JDN 2459099

Maybe it’s because I follow too many radical leftists on social media (this is at least a biased sample, no doubt), but I’ve seen an awful lot of posts basically making this argument: “Joe Biden is terrible, but we have to elect him, because Donald Trump is worse.”

And make no mistake: Whatever else you think about this election, the fact that Donald Trump is a fascist and Joe Biden is not is indeed a fully sufficient reason to vote for Biden. You shouldn’t need any more than that.

But in fact Joe Biden is not terrible. Yes, there are some things worth criticizing about his record and his platform—particularly with regard to civil liberties and war (both of those links are to my own posts making such criticisms of the Obama administration). I don’t want to sweep these significant flaws under the rug.

Yet, there are also a great many things that are good about Biden and his platform, and it’s worthwhile to talk about them. You shouldn’t feel like you are holding your nose and voting for the lesser of two evils; Biden is going to make a very good President.

First and foremost, there is his plan to invest in clean energy and combat climate change. For the first time in decades, we have a Presidential candidate who is explicitly pro-nuclear and has a detailed, realistic plan for achieving net-zero carbon emissions within a generation. We should have done this 30 years ago; but far better to start now than to wait even longer.

Then there is Biden’s plan for affordable housing. He wants to copy California’s Homeowner Bill of Rights at the federal level, fight redlining, expand Section 8, and nationalize the credit rating system. Above all, he wants to create a new First Down Payment Tax Credit that will provide first-time home buyers with $15,000 toward a down payment on a home. That is how you increase homeownership. The primary reason why people rent instead of owning is that they can’t afford the down payment.

Biden is also serious about LGBT rights, and wants to pass the Equality Act, which would finally make all discrimination based on sexual orientation or gender identity illegal at the federal level. He has plans to extend and aggressively enforce federal rules protecting people with disabilities. His plans for advancing racial equality seem to be thoroughly baked into all of his proposals, from small business funding to housing reform—likely part of why he’s so popular among Black voters.

His plan for education reform includes measures to equalize funding between rich and poor districts and between White and non-White districts.

Biden’s healthcare plan isn’t quite Medicare For All, but it’s actually remarkably close to that. He wants to provide a public healthcare option available to everyone, and also lower the Medicare eligibility age to 60 instead of 65. This means that anyone who wants Medicare will be able to buy into it, and also sets a precedent of lowering the eligibility age—remember, all we really need to do to get Medicare For All is lower that age to 18. Moreover, it avoids forcing people off private insurance that they like, which is the main reason why Medicare For All still does not have majority support.

While many on the left have complained that Biden still believes in being “tough on crime”, his plan for criminal justice reform actually strikes a very good balance between maintaining low crime rates and reducing incarceration and police brutality. The focus is on crime prevention instead of punishment, and it includes the elimination of all federal use of privatized prisons.

Most people pay lip service to opposing domestic violence, but Biden has a detailed plan for actually protecting survivors and punishing abusers—including ratifying the Equal Rights Amendment and ending the rape kit backlog. The latter is an utter no-brainer. If we need to, we can pull the money from just about any other form of law enforcement (okay, I guess not homicide); those rape kits need to be tested and those rapists need to be charged.

Biden also has a sensible plan for gun control, which is consistent with the Second Amendment and Supreme Court precedent but still could provide substantial protections by reinstating the ban on assault weapons and high-capacity magazines, requiring universal background checks, and adding other sensible restrictions on who can be licensed to own firearms. It won’t do much about handguns or crimes of passion, but it should at least reduce mass shootings.

Biden doesn’t want to implement free four-year college—then again, neither do I—but he does have a plan for free community college and vocational schooling.

He also has a very ambitious plan for campaign finance reform, including a Constitutional Amendment that would ban all private campaign donations. Honestly if anything the plan sounds too ambitious; I doubt we can really implement all of these things any time soon. But if even half of them get through, our democracy will be in much better shape.

His immigration policy, while far from truly open borders, would reverse Trump’s appalling child-separation policy, expand access to asylum, eliminate long-term detention in favor of a probation system, and streamline the path to citizenship.

Biden’s platform is the first one I’ve seen that gives detailed plans for foreign aid and international development projects; he is particularly focused on Latin America.

I’ve seen many on the left complain that Biden was partly responsible for the current bankruptcy system that makes it nearly impossible to discharge student loans; well, his current platform includes a series of reforms developed by Elizabeth Warren designed to reverse that.

I do think Biden is too hawkish on war and not serious enough about protecting civil liberties—and I said the same thing about Obama years ago. But Biden isn’t just better than Trump (almost anyone would be better than Trump); he’s actually a genuinely good candidate with a strong, progressive platform.

You should already have been voting for Biden anyway. But hopefully now you can actually do it with some enthusiasm.

How much should we value statistical lives?

June 9 JDN 2458644

The very concept of putting a dollar value on a human life offends most people. I understand why: It suggests that human lives are fungible, and also seems to imply that killing people is just fine as long as it produces sufficient profit.

In next week’s post I’ll try to assuage some of those fears: Saying that a life is worth say $5 million doesn’t actually mean that it’s justifiable to kill someone as long as it pays you $5 million.

But for now let me say that we really have no choice but to do this. There are a huge number of interventions we could make in the world that all have the same basic form: They could save lives, but they cost money. We need to be able to say when we are justified in spending more money to save more lives, and when we are not.

No, it simply won’t do to say that “money is no object”. Because money isn’t just money—money is human happiness. A willingness to spend unlimited amounts to save even a single life, if it could be coherently implemented at all, would result in, if not complete chaos or deadlock, a joyless, empty world where we all live to be 100 by being contained in protective foam and fed by machines. It may be uncomfortable to ask a question like “How many people should we be willing to let die to let ourselves have Disneyland?”; but if that answer were zero, we should not have Disneyland. The same is true for almost everything in our lives: From automobiles to chocolate, almost any product you buy, any service you consume, has resulted in some person’s death at some point.

And there is an even more urgent reason, in fact: There are many things we are currently not doing that could save many lives for very little money. Targeted foreign aid or donations to top charities could save lives for as little as $1000 each. Foreign aid is so cost-effective that even if the only thing foreign aid had ever accomplished was curing smallpox, it would be twice as cost-effective as the UK National Health Service (which is one of the best healthcare systems in the world). Tighter environmental regulations save an additional life for about $200,000 in compliance cost, which is less than we would have spent in health care costs; the Clean Air Act added about $12 trillion to the US economy over the last 30 years.

Reduced military spending could literally pay us money to save people’s lives—based on the cost of the Afghanistan War, we are currently paying as much as $1 million per person to kill people that we really have very little reason to kill.

Most of the lives we could save are statistical lives: We can’t point to a particular individual who will or will not die because of the decision, but we can do the math and say approximately how many people will or will not die. We know that approximately 11,000 people will die each year if we loosen regulations on mercury pollution; we can’t say who they are, but they’re out there. Human beings have a lot of trouble thinking this way; it’s just not how our brains evolved to work. But when we’re talking about policy on a national or global scale, it’s quite simply the only way to do things. Anything else is talking nonsense.

Standard estimates of the value of a statistical life range from about $4 million to $9 million. These estimates are based on how much people are willing to pay for reductions in risk. So for instance if people would pay $100 to reduce their chances of dying by 0.01%, we divide the former by the latter to say that a life is worth about $1 million.

It’s a weird question: You clearly can’t just multiply like that. How much would you be willing to accept for a 100% chance of death? Presumably there isn’t really such an amount, because you would be dead. So your willingness-to-accept is undefined. And there’s no particular reason for it to be linear below that: Since marginal utility of wealth is decreasing, the amount you would demand for a 50% chance of death is a lot more than 50 times as much as what you would demand for a 1% chance of death.

Say for instance that utility of wealth is logarithmic. Say your current lifetime wealth is $1 million, and your current utility is about 70 QALY. Then if we measure wealth in thousands of dollars, we have W = 1000 and U = 10 ln W.

How much would you be willing to accept for a 1% chance of death? Your utility when dead is presumably zero, so we are asking for an amount m such that 0.99 U(W+m) = U(W). 0.99 (10 ln (W+m)) = 10 ln (W) means (W+m)^0.99 = W, so m = W^(1/0.99) – W. We started with W = 1000, so m = 72. You would be willing to accept $72,000 for a 1% chance of death. So we would estimate the value of a statistical life at $7.2 million.

How much for a 0.0001% chance of death? W^(1/0.999999)-W = 0.0069. So you would demand $6.90 for such a risk, and we’d estimate your value of a statistical life at $6.9 million. Pretty close, though not the same.

But how much would you be willing to accept for a 50% chance of death? W^(1/0.5) – W = 999,000. That is, $999 million. So if we multiplied that out, we’d say that your value of a statistical life has now risen to a staggering (and ridiculous) $2 billion.
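The log-utility arithmetic above is easy to check in a few lines of Python (a sketch using the post’s assumed numbers: W = 1000 in thousands of dollars, U = 10 ln W, and zero utility when dead):

```python
def wta(W, p):
    """Willingness-to-accept for a probability p of death under log utility.

    Solves (1 - p) * 10 ln(W + m) = 10 ln(W) for m, giving
    m = W**(1/(1-p)) - W, in the same units as W.
    """
    return W ** (1.0 / (1.0 - p)) - W

W = 1000  # lifetime wealth, in thousands of dollars

for p in (0.01, 1e-6, 0.5):
    m = wta(W, p)
    # Dividing m by p gives the naive "value of a statistical life" estimate.
    print(p, round(m, 4), round(m / p, 1))

# wta(1000, 0.01) is about 72.3 (the $72,000 in the text);
# wta(1000, 0.5) is exactly 999,000 (the $999 million).
```

As the text notes, the implied value of a statistical life is roughly stable for small p but explodes for large p—the multiplication is only even approximately valid near zero risk.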

Mathematically, the estimates are more consistent if we use small probabilities—but all this assumes that people actually know their own utility of wealth and calculate it correctly, which is a very unreasonable assumption.

The much bigger problem with this method is that human beings are terrible at dealing with small probabilities. When asked how much they’d be willing to pay to reduce their chances of dying by 0.01%, most people probably have absolutely no idea and may literally just say a random number.

We need to rethink our entire approach for judging such numbers. Honestly we shouldn’t be trying to put a dollar value on a human life; we should be asking about the dollar cost of saving a human life. We should be asking what else we could do with that money. Indeed, for the time being, I think the best thing to do is actually to compare lives to lives: How many lives could we save for this amount of money?

Thus, if we’re considering starting a war that will cost $1 trillion, we need to ask ourselves: How many innocent people would die if we don’t do that? How many will die if we do? And what else could we do with a trillion dollars? If the war is against Nazi Germany, okay, sure; we’re talking about killing millions to save tens of millions. But if it’s against ISIS, or Iran, those numbers don’t come out so great.

If we have a choice between two policies, each of which will cost $10 billion, and one of them will save 1,000 lives while the other will save 100,000, the obvious answer is to pick the second one. Yet this is exactly the world we live in, and we’re not doing that. We are throwing money at military spending and tax cuts (things that may not save any lives at all) and denying it to climate change adaptation, foreign aid, and poverty relief.

Instead of asking whether a given intervention is cost-effective based upon some notion of a dollar value of a human life, we should be asking what the current cost of saving a human life is, and we should devote all available resources into whatever means saves the most lives for the least money. Most likely that means some sort of foreign aid, public health intervention, or poverty relief in Third World countries. It clearly does not mean cutting taxes on billionaires or starting another war in the Middle East.

Scope neglect and the question of optimal altruism

JDN 2457090 EDT 16:15.

We’re now on Eastern Daylight Time because of this bizarre tradition of shifting our time zone forward for half of the year. It’s supposed to save energy, but a natural experiment in India suggests it actually increases energy demand. So why do we do it? Like every ridiculous tradition (have you ever tried to explain Groundhog Day to someone from another country?), we do it because we’ve always done it.

This week’s topic is scope neglect, one of the most pervasive—and pernicious—cognitive biases human beings face. Scope neglect raises a great many challenges, both practical and theoretical; chief among them is what I call the question of optimal altruism.

The question is simple to ask yet remarkably challenging to answer: How much should we be willing to sacrifice in order to benefit others? We can think of this as a number, your solidarity coefficient (s): the weight you place on someone else’s benefit relative to your own cost. You should take an action whenever the weighted benefit exceeds the cost—that is, whenever s B > C.

This is analogous to the biological concept of relatedness (r), to which Hamilton’s Rule applies: r B > C. Solidarity is the psychological analogue; instead of valuing people based on their genetic similarity to you, you value them based on… well, that’s the problem.
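The decision rule can be written out explicitly (a minimal sketch; the specific numbers are mine, not from the post):

```python
def should_act(s, benefit, cost):
    """Act whenever the other person's benefit, weighted by your
    solidarity coefficient s, exceeds your own cost: s * B > C."""
    return s * benefit > cost

# Hypothetical example: if you value a stranger's welfare at s = 0.1,
# a $50 cost producing $1,000 of benefit for them passes the test,
# but one producing only $200 of benefit does not.
print(should_act(0.1, 1000, 50))  # True  (0.1 * 1000 = 100 > 50)
print(should_act(0.1, 200, 50))   # False (0.1 * 200 = 20 < 50)
```

The whole difficulty of the essay is, of course, what value of s to plug in.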

I can easily place upper and lower bounds. The lower bound is zero: You should definitely be willing to sacrifice something to help other people—otherwise you are a psychopath. The upper bound is one: There’s no point in paying more cost than you produce in benefit, and in fact even paying the same cost to yourself as you yield in benefits for other people doesn’t make a lot of sense, because it means that your own self-interest is meaningless and the fact that you understand your own needs better than the needs of others is also irrelevant.

But beyond that, it gets a lot harder. Should it be 90%? 50%? 10%? 1%? How should it vary between friends versus family versus strangers? It’s really hard to say—and this inability to decide precisely how much other people should be worth to us may be part of why we suffer scope neglect in the first place.

Scope neglect is the fact that we are not willing to expend effort or money in direct proportion to the benefit it would have. When different groups were asked how much they would be willing to donate in order to save the lives of 2,000 birds, 20,000 birds, or 200,000 birds, the answers they gave were statistically indistinguishable—always about $80. But however much a bird’s life is worth to you, shouldn’t 200,000 birds be worth, well, 200,000 times as much? In fact, more than that, because the marginal utility of wealth is decreasing, but I see no reason to think that the marginal utility of birds decreases nearly as fast.

But therein lies the problem: Usually we can’t pay 200,000 times as much. I’d feel like a horrible person if I weren’t willing to expend at least $10 or an equivalent amount of effort in order to save a bird. To save 200,000 birds that means I’d owe $2 million—and I simply don’t have $2 million.

You can get similar results to the bird experiment if you use children—though, as one might hope, the absolute numbers are a bit bigger, usually more like $500 to $1000. (And this, it turns out, is actually about how much it actually costs to save a child’s life by a particularly efficient means, such as anti-malaria nets, de-worming, or direct cash transfer. So please, by all means, give $1000 to UNICEF or the Against Malaria Foundation. If you can’t give $1000, give $100; if you can’t give $100, give $10.) It doesn’t much matter whether you say that the project will save 500 children, 5,000 children, or 50,000 children—people still will give about $500 to $1000. But once again, if I’m willing to spend $1000 to save a child—and I definitely am—how much should I be willing to spend to end malaria, which kills 500,000 children a year? Apparently $500 million, which not only do I not have, I almost certainly will not make that much money cumulatively through my entire life. ($2 million, on the other hand, I almost certainly will make cumulatively—the median income of an economist is $90,000 per year, so if I work for at least 22 years with that as my average income I’ll have cumulatively made $2 million. My net wealth may never be that high—though if I get better positions, or I’m lucky enough or clever enough with the stock market it might—but my cumulative income almost certainly will. Indeed, the average gain in cumulative income from a college degree is about $1 million. Because it takes time—time is money—and loans carry interest, this gives it a net present value of about $300,000.)
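The arithmetic in the last two paragraphs is easy to verify (a sketch reproducing the figures above):

```python
# $10 per bird, scaled to the full 200,000 birds:
bird_total = 10 * 200_000
print(bird_total)  # 2000000 -> the $2 million I don't have

# $1,000 per child, scaled to the ~500,000 annual malaria deaths:
malaria_total = 1_000 * 500_000
print(malaria_total)  # 500000000 -> the $500 million figure

# Cumulative income: 22 years at the $90,000 median economist salary:
cumulative = 90_000 * 22
print(cumulative)  # 1980000 -> roughly $2 million
```

The point of the exercise is the scaling itself: a per-unit valuation that feels modest becomes an impossible sum when multiplied out.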

But maybe scope neglect isn’t such a bad thing after all. There is a very serious problem with this sort of moral dilemma: The question didn’t say I would single-handedly save 200,000 birds—and indeed, that notion seems quite ridiculous. If I knew that I could actually save 200,000 birds and I were the only one who could do it, dammit, I would try to come up with that $2 million. I might not succeed, but I really would try as hard as I could.

And if I could single-handedly end malaria, I hereby vow that I would do anything it took to achieve that. Short of mass murder, anything I could do couldn’t be a higher cost to the world than malaria itself. I have no idea how I’d come up with $500 million, but I’d certainly try. Bill Gates could easily come up with that $500 million—so he did. In fact he endowed the Gates Foundation with $28 billion, and they’ve spent $1.3 billion of that on fighting malaria, saving hundreds of thousands of lives.

With this in mind, what is scope neglect really about? I think it’s about coordination. It’s not that people don’t care more about 200,000 birds than they do about 2,000; and it’s certainly not that they don’t care more about 50,000 children than they do about 500. Rather, the problem is that people don’t know how many other people are likely to donate, or how expensive the total project is likely to be; and we don’t know how much we should be willing to pay to save the life of a bird or a child.

Hence, what we basically do is give up; since we can’t actually assess the marginal utility of our donation dollars, we fall back on our automatic emotional response. Our mind focuses itself on visualizing that single bird covered in oil, or that single child suffering from malaria. We then hope that the representativeness heuristic will guide us in how much to give. Or we follow social norms, and give as much as we think others would expect us to give.

While many in the effective altruism community take this to be a failing, they never actually say what we should do—they never give us a figure for how much money we should be willing to donate to save the life of a child. Instead they retreat to abstraction, saying that whatever it is we’re willing to give to save a child, we should be willing to give 50,000 times as much to save 50,000 children.

But it’s not that simple. A bigger project may attract more supporters; if the two occur in direct proportion, then constant donation is the optimal response. Since it’s probably not actually proportional, you likely should give somewhat more to causes that affect more people; but exactly how much more is an astonishingly difficult question. I really don’t blame people—or myself—for only giving a little bit more to causes with larger impact, because actually getting the right answer is so incredibly hard. This is why it’s so important that we have institutions like GiveWell and Charity Navigator which do the hard work to research the effectiveness of charities and tell us which ones we should give to.

Yet even if we can properly prioritize which charities to give to first, that still leaves the question of how much each of us should give. 1% of our income? 5%? 10%? 20%? 50%? Should we give so much that we throw ourselves into the same poverty we are trying to save others from?

In his earlier work Peter Singer seemed to think we should give so much that it throws us into poverty ourselves; he asked us to literally compare every single purchase and ask ourselves whether a year of lattes or a nicer car is worth a child’s life. Of course even he doesn’t live that way, and in his later books Singer seems to have realized this, and now recommends the far more modest standard that everyone give at least 1% of their income. (He himself gives about 33%, but he’s also very rich so he doesn’t feel it nearly as much.) I think he may have overcompensated; while if literally everyone gave at least 1% that would be more than enough to end world hunger and solve many other problems—world nominal GDP is over $70 trillion, so 1% of that is $700 billion a year—we know that this won’t happen. Some will give more, others less; most will give nothing at all. Hence I think those of us who give should give more than our share; hence I lean toward figures more like 5% or 10%.

But then, why not 50% or 90%? It is very difficult for me to argue on principle why we shouldn’t be expected to give that much. Because my income is such a small proportion of the total donations, the marginal utility of each dollar I give is basically constant—and quite high; if it takes about $1000 to save a child’s life on average, and each of these children will then live about 60 more years at about half the world average happiness, that’s about 30 QALY per $1000, or about 30 milliQALY per dollar. Even at my current level of income (incidentally about as much as I think the US basic income should be), I’m benefiting myself only about 150 microQALY per dollar—so my money is worth about 200 times as much to those children as it is to me.
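The QALY comparison in the paragraph above works out as follows (a sketch using the rough numbers assumed in the text):

```python
cost_per_life = 1_000    # dollars to save one child's life
years_gained = 60        # additional life-years per child saved
quality = 0.5            # half the world-average happiness

# Benefit to the children, per dollar donated:
qaly_per_dollar_them = years_gained * quality / cost_per_life
print(qaly_per_dollar_them)  # 0.03 QALY, i.e. 30 milliQALY per dollar

# Benefit to me, per marginal dollar spent on myself:
qaly_per_dollar_me = 150e-6  # ~150 microQALY per dollar

# Ratio: how much more my money is worth to them than to me.
print(qaly_per_dollar_them / qaly_per_dollar_me)  # ~200x
```

That factor of 200 is what makes the question in the next paragraph so uncomfortable.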

So now we have to ask ourselves the really uncomfortable question: How much do I value those children, relative to myself? If I am at all honest, the value is not 1; I’m not prepared to die for someone I’ve never met 10,000 kilometers away in a nation I’ve never even visited, nor am I prepared to give away all my possessions and throw myself into the same starvation I am hoping to save them from. I value my closest friends and family approximately the same as myself, but I have to admit that I value random strangers considerably less.

Do I really value them at less than 1%, as these figures would seem to imply? I feel like a monster saying that, but maybe it really isn’t so terrible—after all, most economists seem to think that the optimal solidarity coefficient is in fact zero. Maybe we need to become more comfortable admitting that random strangers aren’t worth that much to us, simply so that we can coherently acknowledge that they aren’t worth nothing. Very few of us actually give away all our possessions, after all.

Then again, what do we mean by worth? I can say from direct experience that a single migraine causes me vastly more pain than learning about the death of 200,000 people in an earthquake in Southeast Asia. And while I gave about $100 to the relief efforts involved in that earthquake, I’ve spent considerably more on migraine treatments—thousands, once you include health insurance. But given the chance, would I be willing to suffer a migraine to prevent such an earthquake? Without hesitation. So the amount of pain we feel is not the same as the amount of money we pay, which is not the same as what we would be willing to sacrifice. I think the latter is more indicative of how much people’s lives are really worth to us—but then, what we pay is what has the most direct effect on the world.

It’s actually possible to justify not dying or selling all my possessions even if my solidarity coefficient is much higher—it just leads to some really questionable conclusions. Essentially the argument is this: I am an asset. I have what economists call “human capital”—my health, my intelligence, my education—that gives me the opportunity to affect the world in ways those children cannot. In my ideal imagined future (albeit improbable) in which I actually become President of the World Bank and have the authority to set global development policy, I myself could actually have a marginal impact measured in megaQALY—millions of person-years of better life. In the far more likely scenario in which I attain some mid-level research or advisory position, I could be one of thousands of people who together have that sort of impact—which still means my own marginal effect is on the order of kiloQALY. And clearly it’s true that if I died, or even if I sold all my possessions, these events would no longer be possible.

The problem with that reasoning is that it’s wildly implausible to say that everyone in the First World is in this same sort of position—Peter Singer can say that, and maybe I can say that, and indeed hundreds of development economists can say that—but at least 99.9% of the First World population are not development economists, nor are they physicists likely to invent cold fusion, nor biomedical engineers likely to cure HIV, nor aid workers who distribute anti-malaria nets and polio vaccines, nor politicians who set national policy, nor diplomats who influence international relations, nor authors whose bestselling books raise worldwide consciousness. Yet I am not comfortable saying that all the world’s teachers, secretaries, airline pilots and truck drivers should give away their possessions either. (Maybe all the world’s bankers and CEOs should—or at least most of them.)

Is it enough that our economy would collapse without teachers, secretaries, airline pilots and truck drivers? But this seems rather like the fact that if everyone in the world visited the same restaurant there wouldn’t be enough room. Surely we could do without any individual teacher, any individual truck driver? If everyone gave the same proportion of their income, 1% would be more than enough to end malaria and world hunger. But we know that everyone won’t give, and the job won’t get done if those of us who do give contribute only 1%.

Moreover, it’s also clearly not the case that everything I spend money on makes me more likely to become a successful and influential development economist. Buying a suit and a car actually clearly does—it’s much easier to get good jobs that way. Even leisure can be justified to some extent, since human beings need leisure and there’s no sense burning myself out before I get anything done. But do I need both of my video game systems? Couldn’t I buy a bit less Coke Zero? What if I watched a 20-inch TV instead of a 40-inch one? I still have free time; could I get another job and donate that money? This is the sort of question Peter Singer tells us to ask ourselves, and it quickly leads to a painfully spartan existence in which most of our time is spent thinking about whether what we’re doing is advancing or damaging the cause of ending world hunger. But then the cost of that stress and cognitive effort must be included as well—and how do you optimize your own cognitive effort? You need to think about the cost of thinking about the cost of thinking… and on and on. This is why bounded rationality modeling is hard, even though it’s plainly essential to both cognitive science and computer science. (John Stuart Mill wrote an essay that resonates deeply with me about how the pressure to change the world drove him into depression, and how he learned to accept that he could still change the world even if he weren’t constantly pressuring himself to do so—and indeed he did. James Mill set out to create in his son, John Stuart Mill, the greatest philosopher in the history of the world—and I believe that he succeeded.)

Perhaps we should figure out what proportion of the world’s people are likely to give, and how much we need altogether, and then assign the amount we expect from each of them based on that? The more money you ask from each, the fewer people are likely to give. This creates an optimization problem akin to setting the price of a product under monopoly—monopolies maximize profits by carefully balancing the quantity sold with the price at which they sell, and perhaps a similar balance would allow us to maximize development aid. But wouldn’t it be better if we could simply increase the number of people who give, so that we don’t have to ask so much of those who are generous? That means tax-funded foreign aid is the way to go, because it ensures coordination. And indeed I do favor increasing foreign aid to about 1% of GDP—in the US it is currently about $50 billion, 0.3% of GDP, a little more than 1% of the Federal budget. (Most people who say we should “cut” foreign aid don’t realize how small it already is.) But foreign aid is coercive; wouldn’t it be better if people would give voluntarily?

I don’t have a simple answer. I don’t know how much other people’s lives ought to be worth to us, or what it means for our decisions once we assign that value. But I hope I’ve convinced you that this problem is an important one—and made you think a little more about scope neglect and why we have it.