A new Santa Baby

Dec 28 JDN 2461038

In the song “Santa Baby”, there are several high-value items requested as Christmas gifts. I’m currently working on a rewrite of the song that compares these items with humanitarian interventions of the same cost, making it into a protest song—but so far I’ve had trouble making it actually singable with the meter of the song.

So for now, I thought I’d share my cost estimates and what could be purchased with those same amounts:

Sable: $1,000. More expensive than most dogs, but really not that bad! In fact, some purebreds cost more than that.

1954 convertible: $28,000; yeah, classic cars are really not that expensive actually.

Yacht: There are yachts and then there are yachts. Could cost anywhere from $300,000 to $500 million.

Platinum mine: Hard to estimate, but with platinum costing $2,400 per ounce and mines capable of producing thousands of ounces per year for 20 years, should be worth at least $100 million—and possibly as much as $1 billion.

Duplex: $400,000 or so, depending on the location.

Decorations at Tiffany’s: Depends on what you buy, but easily $10,000 to trim a whole tree; that store is so wildly overpriced that a jewelry box can cost you $2,000 and even an individual Christmas tree ornament can cost $160. (Seriously, don’t shop at Tiffany’s.)

Ring: Depends on a lot of factors; I’ll assume platinum, so that will run you anywhere from $400 for a basic band to $95,000 for one with a huge diamond.

The platinum mine is a clear outlier; unless you buy one of the largest yachts in the world, none of the other items even come close to its price. Aside from the yacht, all the other items add up to less than a million dollars, and even the cheapest platinum mines are clearly worth more than that.

What else could you buy for these amounts?

Well, a malaria net costs about $2, and on average every $3,000 spent saves a child’s life. A vaccine costs about $1-$5 per dose. So for the price of the platinum mine alone, we could buy 50 million malaria nets or 20 million vaccines, and either way expect to save the lives of about 30,000 children.
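These figures are rough, but the arithmetic is easy to check. Here is a quick sketch using the estimates above (the low-end $100 million figure for the mine and the high-end $5 per vaccine dose; all numbers are the approximations given in this post, not precise prices):

```python
# Rough cost estimates taken from the list above (all figures approximate).
mine = 100_000_000   # platinum mine, low-end estimate

# Sum of the other items (excluding the yacht): sable, convertible,
# duplex, decorations at Tiffany's, ring.
other_items = 1_000 + 28_000 + 400_000 + 10_000 + 95_000
assert other_items < 1_000_000   # well under a million dollars

# Humanitarian equivalents of the mine alone.
nets = mine // 2          # malaria nets at ~$2 each  -> 50,000,000
vaccines = mine // 5      # vaccine doses at ~$5 each -> 20,000,000
lives = mine // 3_000     # ~$3,000 per child's life saved -> ~33,000

print(other_items, nets, vaccines, lives)
```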

(Maybe some other time I’ll actually make this into something singable.)

On the other hand, if you really wanna buy a sable or a 1954 convertible, they’re really not that expensive. The former costs less than some purebred dogs, and the latter costs about the same as a new car.

The longest night

Dec 21 JDN 2461031

When this post goes live, it will be (almost exactly) the winter solstice in the Northern Hemisphere. In our culture, derived mainly from European influences, we associate this time of year with Christmas; but in fact solstice celebrations are much more ancient and universal than that. Humans have been engaging in some sort of ritual celebration—often involving feasts and/or gifts—around the winter solstice in basically every temperate region of the world for as far back as we are able to determine. (You don’t see solstice celebrations so much in tropical regions, because “winter” isn’t really a thing there; those cultures tend to adopt lunar or lunisolar calendars instead.) Presumably humans have been doing something along these lines for about as long as there have been humans to do them.

I think part of why solstice celebrations are so enduring is that the solstice has both powerful symbolism and practical significance. It is the longest night of the year, when the sky will be darkest for the longest time and light for the shortest—above the Arctic Circle, the night lasts 24 hours and the sky never gets light at all. But from that point forward, the light will start to return. The solstice also heralds the start of the winter months, when the air is cold enough to be dangerous and food becomes much scarcer.

Of course, today we don’t have to worry about that so much: We have electric heating and refrigeration, so we can stay warm inside and eat pretty much whatever we want all year round. The practical significance, then, of the solstice has greatly decreased for us.

Yet it’s still a very symbolic time: The darkness is at its worst, the turning point is reached, the light will soon return. And when we reflect on how much safer we are than our ancestors were during this time of year, we may find it in our hearts to feel some gratitude for how far humanity has come—even if we still have terribly far yet to go.

And this year, in particular, I think we are seeing the turning point for a lot of darkness. The last year especially has been a nightmare for, well, the entire free world—not to mention all the poor countries who depended on us for aid—but at last it seems like we are beginning to wake from that nightmare. Within margin of error, Trump’s approval rating is at the lowest it has ever been, about 43% (still shockingly high, I admit), and the Republicans seem to be much more divided and disorganized than they were just a year ago, some of them even openly defying Trump instead of bowing at his every word.

Of course, while the motions of the Earth are extraordinarily regular and predictable, changes in society are not. The solstice will certainly happen on schedule, and the days will certainly get longer for the next six months after that—I’d give you million-to-one odds on either proposition. (Frankly, if I ever had to pay, we’d probably have bigger problems!)

But as far as our political, economic, and cultural situation, things could very well get worse again before they get better. There’s even a chance they won’t get better, that it’s all downhill from here—but I believe those chances are very small. Things are not so bleak as that.

While there have certainly been setbacks and there will surely be more, on the whole humanity’s trajectory has been upward, toward greater justice and prosperity. Things feel so bad right now, not so much because they are bad in absolute terms (would you rather live as a Roman slave or a Medieval peasant?), but because this is such a harsh reversal in an otherwise upward trend—and because we can see just how easy it would be to do even better still, if the powers that be had half the will to do so.

So here’s hoping that on this longest night, at least some of the people with the power to make things better will see a little more of the light.

The confidence game

Dec 14 JDN 2461024

Our society rewards confidence. Indeed, it seems to do so without limit: The more confident you are, the more successful you will be, the more prestige you will gain, the more power you will have, the more money you will make. It doesn’t seem to matter whether your confidence is justified; there is no punishment for overconfidence and no reward for humility.

If you doubt this, I give you Exhibit A: President Donald Trump.

He has nothing else going for him. He manages to epitomize almost every human vice and to lack almost every human virtue. He is ignorant, impulsive, rude, cruel, incurious, bigoted, incompetent, selfish, xenophobic, racist, and misogynist. He has no empathy, no understanding of justice, and little capacity for self-control. He cares nothing for truth and lies constantly, even to the point of pathology. He has been convicted of multiple felonies. His businesses routinely go bankrupt, and he saves his wealth mainly through fraud and lawsuits. He has publicly admitted to sexually assaulting adult women, and there is mounting evidence that he has also sexually assaulted teenage girls. He is, in short, one of the worst human beings in the world. He does not have the integrity or trustworthiness to be an assistant manager at McDonald’s, let alone President of the United States.

But he thinks he’s brilliant and competent and wise and ethical, and constantly tells everyone around him that he is—and millions of people apparently believe him.

To be fair, confidence is not the only trait that our society rewards. Sometimes it does actually reward hard work, competence, or intellect. But in fact it seems to reward these virtues less consistently than it rewards confidence. And quite frankly I’m not convinced our society rewards honesty at all; liars and frauds seem to be disproportionately represented among the successful.

This troubles me most of all because confidence is not a virtue.

There is nothing good about being confident per se. There is virtue in not being underconfident, because underconfidence prevents you from taking actions you should take. But there is just as much virtue in not being overconfident, because overconfidence makes you take actions you shouldn’t—and if anything, is the more dangerous of the two. Yet our culture appears utterly incapable of discerning whether confidence is justifiable—even in the most blatantly obvious cases—and instead rewards everyone all the time for being as confident as they can possibly be.

In fact, the most confident people are usually less competent than the most humble people—because when you really understand something, you also understand how much you don’t understand.

We seem totally unable to tell whether someone who thinks they are right is actually right; and so, whoever thinks they are right is assumed to be right, all the time, every time.

Some of this may even be genetic, a heuristic that perhaps made more sense in our ancient environment. In multiple experiments, even quite young children are already more willing to trust confident answers than hesitant ones.

Studies suggest that experts are just as overconfident as anyone else, but to be frank, I think this is because you don’t get to be called an expert unless you’re overconfident; people with intellectual humility are filtered out by the brutal competition of academia before they can get tenure.

I guess this is also personal for me.

I am not a confident person. Temperamentally, I just feel deeply uncomfortable going out on a limb and asserting things when I’m not entirely certain of them. I also have something of a complex about ever being perceived as arrogant or condescending, maybe because people often seem to perceive me that way even when I am actively trying to do the opposite. A lot of people seem to take you as condescending when you simply acknowledge that you have more expertise on something than they do.

I am also apparently a poster child for Impostor Syndrome. I once went to an Impostor Syndrome workshop with a couple dozen other people, where we played a bingo game of Impostor Syndrome traits and behaviors—and I won. I once went to a lecture by George Akerlof where he explained that he attributed his Nobel Prize more to luck and circumstances than any particular brilliance on his part—and I guarantee you, in the extremely unlikely event I ever win a prize like that, I’ll say the same.

Compound this with the fact that our society routinely demands confidence in situations where absolutely no one could ever justify being confident.

Consider a job interview, when they ask you: “Why are you the best candidate for this job?” I couldn’t possibly know that. No one in my position could possibly know that. I literally do not know who your other candidates are in order to compare myself to them. I can tell you why I am qualified, but that’s all I can do. I could be the best person for the job, but I have no idea if I am. It’s your job to figure that out, with all the information in front of you—and I happen to know that you’re actually terrible at it, even with all that information I don’t have access to. If I tell you I know I’m the best person for the job, I am, by construction, either wildly overconfident or lying. (And in my case, it would definitely be lying.)

In fact, if I were a hiring manager, I would probably disqualify anyone who told me they were the best person for the job—because the one thing I now know about them is that they are either overconfident or willing to lie. (But I’ll probably never be a hiring manager.)

Likewise, I’ve often been told when pitching creative work to explain why I am the best or only person who could bring this work to life, or to provide accurate forecasts of how much the work would sell if published. I almost certainly am not the best or only person who could do anything—only a handful of people on Earth could realistically say that they are, and they’ve all already won Oscars or Emmys or Nobel Prizes. Accurate sales forecasts for creative works are so difficult that even Disney Corporation, an ever-growing conglomerate media superpower with billions of dollars to throw at the problem and even more billions of dollars at stake in getting it right, still routinely puts out films that are financial failures.

They casually hand you impossible demands and then get mad at you when you say you can’t meet them. And then they go pick someone else who claims to be able to do the impossible.

There is some hope, however.

Some studies suggest that people can sometimes recognize and punish overconfidence—though, again, I don’t see how that can be reconciled with the success of Donald Trump. In one study of expert witnesses, the most confident witnesses were rated as slightly less reliable than the moderately-confident ones, but both were rated far above the least-confident ones.

Surprisingly simple interventions can make intellectual humility more salient to people, and make them more willing to trust people who express doubt—who are, almost without exception, the more trustworthy people.

But somehow, I think I have to learn to express confidence I don’t feel, because that’s how you succeed in our society.

How to be a deontological consequentialist

Dec 7 JDN 2461017

As is commonly understood, there are two main branches of normative ethics:

  • Deontology, on which morality consists in following rules and fulfilling obligations, and
  • Consequentialism, on which morality consists in maximizing good consequences.

The conflict between them has raged for centuries, with Kantians leading the deontologists and utilitarians leading the consequentialists. Both theories seem to have a lot of good points, but neither can decisively defeat the other.

I think this is because they are both basically correct.

In their strongest forms, deontology and consequentialism are mutually contradictory; but it turns out that you can soften each of them a little bit, and the results become compatible.

To make deontology a little more consequentialist, let’s ask a simple question:

What makes a rule worth following?

I contend that the best answer we have is “because following that rule would make the world better off than not following that rule”. (Even Kantians pretty much have to admit this: What maxim could you will to be an absolute law? Only a law that would yield good outcomes.)

That is, the ultimate justification of a sound deontology would be fundamentally consequentialist.

But lest the consequentialists get too smug, we can also ask them another question, which is a bit subtler:

How do you know which actions will ultimately have good consequences?

Sure, if we were omniscient beings who could perfectly predict the consequences of our actions across the entire galaxy on into the indefinite future, we could be proper act utilitarians who literally choose every single action according to a calculation of the expected utility.

But in practice, we have radical uncertainty about the long-term consequences of our actions, and can generally only predict the immediate consequences.

That leads to the next question:

Would you really want to live in a world where people optimized immediate consequences?

I contend that you would not, that such a world actually sounds like a dystopian nightmare.

Immediate consequences say that if a healthy person walks into a hospital and happens to have compatible organs for five people who need donations, we should kill that person, harvest their organs, and give them to the five recipients. (This is the organ transplant variant of the Trolley Problem.)

Basically everyone recognizes that this is wrong. But why is it wrong? That’s thornier. One pretty convincing case is that a systematic policy of this kind would undermine trust in hospitals and destroy the effectiveness of healthcare in general, resulting in disastrous consequences far outweighing the benefit of saving those five people. But those aren’t immediate consequences, and indeed, it’s quite difficult to predict exactly how many crazy actions like this it would take to undermine people’s trust in hospitals, just how much it would undermine that trust, or exactly what the consequences of that lost trust would be.

So it seems like it’s actually better to have a rule about this.

This makes us into rule utilitarians: instead of trying to optimize literally every single action—which requires information we do not have and never will—we develop a system of rules that we can follow, heuristics that will allow us to get better outcomes generally, even if they can’t be guaranteed to produce the best possible outcome in any particular case.

That is, the output of a sophisticated consequentialism is fundamentally deontological.

We have come at the question of normative ethics from two very different directions, but the results turned out basically the same:

We should follow the rules that would have the best consequences.

The output of our moral theory is rules, like deontology; but its fundamental justification is based on outcomes, like consequentialism.

In my experience, when I present this account to staunch deontologists, they are pretty much convinced by it. They’re prepared to give up the fundamental justification to consequences if it allows them to have their rules.

The resistance I get is mainly from staunch consequentialists, who insist that it’s not so difficult to optimize individual actions, and so we should just do that instead of making all these rules.

So it is to those consequentialists, particularly those who say “rule utilitarianism collapses into act utilitarianism”, that the rest of this post is addressed.

First, let me say that I agree.

In the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents, rule utilitarianism mathematically collapses into act utilitarianism. That is a correct theorem.

However, we do not live in the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents. We are not even close to that ideal case; we will never be close to that ideal case. Indeed, I think part of the problem here is that you fail to fully grasp the depth and width of the chasm between here and there. Even a galactic civilization of a quintillion superhuman AIs would still not be close to that ideal case.

Quite frankly, humans aren’t even particularly good at forecasting what will make them happy.

There are massive errors and systematic biases in human affective forecasting.

One of the most important biases is impact bias: People systematically overestimate the impact of individual events on their long-term happiness. Some of this seems to be just due to focus: Paying attention to a particular event exaggerates its importance in your mind, and makes it harder for you to recall other events that might push your emotions in a different direction. Another component is called immune neglect: people fail to account for their own capacity to habituate to both pleasant and unpleasant experiences. (This effect is often overstated: It’s a common misconception that lottery winners are no happier than they were before. No, they absolutely are happier, on average; they’re just not as much happier as they predicted themselves to be.)

People also use inconsistent time discounting: $10 today is judged as better than $11 tomorrow, but $10 in 364 days is not regarded as better than $11 in 365 days—so a choice I made a year in advance, I’d want to reverse when the day actually came. (The correct answer, by the way, is to take the $11; a discount rate of 10% per day is a staggering 120,000,000,000,000,000% APR—seriously; check it yourself—so you’d better not be discounting at that rate, unless you’re literally going to die before tomorrow.)
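That APR figure really is easy to check yourself. Here’s a quick sketch, assuming the implied 10% daily discount rate compounds every day for a year:

```python
# Preferring $10 today over $11 tomorrow implies discounting the future
# by at least 10% per day.
daily_rate = 0.10

# Compounded daily over a 365-day year, that works out to an annual rate of:
apr = (1 + daily_rate) ** 365 - 1

# As a percentage this is roughly 1.3e17, i.e. on the order of
# 100,000,000,000,000,000% -- consistent with the staggering figure above.
print(f"{apr * 100:.2e} %")
```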

Now, compound that with the fact that different human beings come at the world from radically different perspectives and with radically different preferences.

How good do you think we are at predicting what will make other people happy?

Damn right: We’re abysmal.

Basically everyone assumes that what they want and what they would feel is also what other people will want and feel—which, honestly, explains a lot about politics. As a result, my prediction of your feelings is more strongly correlated with my prediction of my feelings than it is with your actual feelings.

The impact bias is especially strong when forecasting other people’s feelings in response to our own actions: We tend to assume that other people care more about what we do than they actually care—and this seems to be a major source of social anxiety.

People also tend to overestimate the suffering of others, and are generally willing to endure more pain than they are willing to inflict upon others. (This one seems like it might be a good thing!)

Even when we know people well, we can still be totally blindsided by their emotional reactions. We’re just really awful at this.

Does this just mean that morality is hopeless? We have no idea what we’re doing?

Fortunately, no. Because while no individual can correctly predict or control the outcomes of particular actions, the collective action of well-designed institutions can in fact significantly improve the outcomes of policy.

This is why we have things like the following:

  • Laws
  • Courts
  • Regulations
  • Legislatures
  • Constitutions
  • Newspapers
  • Universities

These institutions—which form the backbone of liberal democracy—aren’t simply arbitrary. They are the result of hard-fought centuries, a frothing, volatile, battle-tested mix of intentional design and historical evolution.

Are these institutions optimal? Good heavens, no!

But we have no idea what optimal institutions look like, and probably never will. (Those galaxy-spanning AIs will surely have a better system than this; but even theirs probably won’t be optimal.) Instead, what we are stuck with are the best institutions we’ve come up with so far.

Moreover, we do have very clear empirical evidence at this point that some form of liberal democracy with a mixed economy is the best system we’ve got so far. One can reasonably debate whether Canada is doing better or worse than France, or whether the system in Denmark could really be scaled to the United States, or just what the best income tax rates are; but there is a large, obvious, and important difference between life in a country like Canada or Denmark and life in a country like Congo or Afghanistan.

Indeed, perhaps there is no better pair to compare than North and South Korea: Those two countries are right next to each other, speak the same language, and started in more or less the same situation; but the south got good institutions and the north got bad ones, and now the difference between them couldn’t be more stark. (Honestly, this is about as close as we’re ever likely to get to a randomized controlled experiment in macroeconomics.)

People in South Korea now live about as well as some of the happiest places in the world; their GDP per capita PPP is about $65,000 per year, roughly the same as Canada. People in North Korea live about as poorly as it is possible for humans to live, subject to totalitarian oppression and living barely above subsistence; their GDP per capita PPP is estimated to be $600 per year—less than 1% as much.
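Using the (rough, PPP-adjusted) GDP figures just quoted, the ratio checks out:

```python
south_korea = 65_000   # GDP per capita (PPP), approximate
north_korea = 600      # GDP per capita (PPP), rough estimate

ratio = north_korea / south_korea
print(f"{ratio:.2%}")   # about 0.92% -- indeed less than 1%
```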

The institutions of South Korea are just that much better.

Indeed, there’s one particular aspect of good institutions that seems really important, yet is actually kind of hard to justify in act-utilitarian terms:

Why is freedom good?

A country’s level of freedom is almost perfectly correlated with its overall level of happiness and development. (Yes, even on this measure, #ScandinaviaIsBetter.)

But why? In theory, letting people do whatever they want could actually lead to really bad outcomes—and indeed, occasionally it does. There’s even a theorem that liberty is incompatible with full Pareto-efficiency. But all the countries with the happiest people seem to have a lot of liberty, and indeed the happiest ones seem to have the most. How come?

My answer:

Personal liberty is a technology for heuristic utility maximization.

In the ideal case, we wouldn’t really need personal liberty; you could just compel everyone to do whatever is optimal all the time, and that would—by construction—be optimal. It might even be sort of nice: You don’t need to make any difficult decisions, you can just follow the script and know that everything will turn out for the best.

But since we don’t know what the optimal choice is—even in really simple cases, like what you should eat for lunch tomorrow—we can’t afford to compel people in this way. (It would also be incredibly costly to implement such totalitarian control, but that doesn’t stop some governments from trying!)

Then there are disagreements: What I think is optimal may not be what you think is optimal, and in truth we’re probably both wrong (but one of us may be less wrong).

And that’s not even getting into conflicts of interest: We aren’t just lacking in rationality, we’re also lacking in benevolence. Some people are clearly much more benevolent than others, but none of us are really 100% selfless. (Sadly, I think some people are 100% selfish.)

In fact, this is a surprisingly deep question:

Would the world be better if we were selfless?

Could there actually be some advantage in aggregate to having some degree of individual self-interest?

Here are some ways that might hold, just off the top of my head:

  • Partial self-interest supports an evolutionary process of moral and intellectual development that otherwise would be stalled or overrun by psychopaths—see my post on Rousseaus and Axelrods
  • Individuals have much deeper knowledge of their own preferences than anyone else’s, and thus can optimize them much better. (Think about it: This is true even of people you know very well. Otherwise, why would we ever need to ask our spouses one of the most common questions in any marriage: “Honey, what do you want for dinner tonight?”)
  • Self-interest allows for more efficient economic incentives, and thus higher overall productivity.

Of course, total selfishness is clearly not optimal—that way lies psychopathy. But some degree of selfishness might actually be better for long-term aggregate outcomes than complete altruism, and this is to some extent an empirical question.

Personal liberty solves a lot of these problems: Since people are best at knowing their own preferences, let people figure out on their own what’s good for them. Give them the freedom to live the kind of life they want to live, within certain reasonable constraints to prevent them from causing great harm to others or suffering some kind of unrecoverable mistake.

This isn’t exactly a new idea; it’s basically the core message of John Stuart Mill’s On Liberty (which I consider a good candidate for the best book ever written—seriously, it beats the Bible by a light-year). But by putting it in more modern language, I hope to show that deontology and consequentialism aren’t really so different after all.

And indeed, for all its many and obvious flaws, freedom seems to work pretty well—at least as well as anything we’ve tried.