Alien invasions: Could they happen, and could we survive?

July 30, JDN 2457600


It’s not actually the top-grossing film in the US right now (that would be The Secret Life of Pets), but Independence Day: Resurgence made a quite respectable gross of $343 million worldwide, giving it an ROI of 108% over its budget of $165 million. It speaks to something deep in our minds—and since most of the money came from outside the US, apparently not just Americans, though it is a deeply American film—about the fear, but perhaps also the excitement, of a possible alien invasion.
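That ROI figure is just simple arithmetic on the numbers above; a quick sketch in Python:

```python
def roi(gross, budget):
    """Return on investment, expressed as a fraction of the budget."""
    return (gross - budget) / budget

# Figures from the text: $343 million worldwide gross on a $165 million budget.
print(f"{roi(343e6, 165e6):.0%}")  # roughly 108%
```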

So, how likely are alien invasions anyway?

Well, first of all, how likely are aliens?

One of the great mysteries of astronomy is the Fermi Paradox: Everything we know about astronomy, biology, and probability tells us that there should be, somewhere out in the cosmos, a multitude of extraterrestrial species, and some of them should even be intelligent enough to form civilizations and invent technology. So why haven’t we found any clear evidence of any of them?

Indeed, the Fermi Paradox became even more baffling in just the last two years, as we found literally thousands of new extrasolar planets, many of them quite likely to be habitable. More extrasolar planets have been found since 2014 than in all previous years of human civilization. Perhaps this is less surprising when we remember that no extrasolar planets had ever been confirmed before 1992—but personally I think that just makes it that much more amazing that we are lucky enough to live in such a golden age of astronomy.

The Drake equation was supposed to tell us how probable it is that we should encounter an alien civilization, but the equation isn’t much use to us because so many of its terms are so wildly uncertain. Maybe we can pin down how many planets there are soon, but we still don’t know what proportion of planets can support life, what proportion of those actually have life, or above all what proportion of ecosystems ever manage to evolve a technological civilization or how long such a civilization is likely to last. All possibilities from “they’re everywhere but we just don’t notice or they actively hide from us” to “we are actually the only ones in the last million years” remain on the table.
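The equation itself is trivial—just a product of seven factors; the trouble is the factors. Here is a sketch in Python, with two parameter sets that are pure illustration (not serious estimates) to show how many orders of magnitude the answer can swing:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicating civilizations in the galaxy:
    star formation rate x fraction of stars with planets x habitable
    planets per system x fraction developing life x fraction developing
    intelligence x fraction developing detectable technology
    x average civilization lifetime (years)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Purely illustrative parameter choices, not serious estimates:
optimistic = drake(R_star=7, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.5, f_c=0.5, L=1e6)
pessimistic = drake(R_star=1, f_p=0.2, n_e=0.05, f_l=0.01, f_i=0.001, f_c=0.01, L=100)
print(optimistic, pessimistic)  # hundreds of thousands vs. effectively zero
```

With plausible-sounding inputs the answer ranges from a galaxy teeming with civilizations to one where we are effectively alone—which is exactly why the equation settles nothing.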

But let’s suppose that aliens do exist, and indeed have technology sufficient to reach our solar system. Faster-than-light capability would certainly do it, but it isn’t strictly necessary; with long lifespans, cryonic hibernation, or relativistic propulsion aliens could reasonably expect to travel at least between nearby stars within their lifetimes. The Independence Day aliens appear to have FTL travel, but interestingly it makes the most sense if they do not have FTL communication—it took them 20 years to get the distress call because it was sent at lightspeed. (Or perhaps the ansible was damaged in the war, and they fell back to a lightspeed emergency system?) Otherwise I don’t quite get why it would take the Queen 20 years to deploy her personal battlecruiser after the expeditionary force she sent was destroyed—maybe she was just too busy elsewhere to bother with our backwater planet? What did she want from our planet again?

That brings me to my next point: Just what motivation would aliens have for attacking us? We often take it for granted that if aliens exist, and have the capability to attack us, they would do so. But that really doesn’t make much sense. Do they just enjoy bombarding primitive planets? I guess it’s possible they’re all sadistic psychopaths, but it seems like any civilization stable enough to invent interstellar travel has got to have some kind of ethical norms. Maybe they see us as savages or even animals, and are therefore willing to kill us—but that still means they need a reason.

Another idea, taken seriously in V and less so in Cowboys & Aliens, is that there is some sort of resource we have that they want, and they’re willing to kill us to get it. This is probably such a common trope because it has been a common part of human existence; we are very familiar with people killing other people in order to secure natural resources such as gold, spices, or oil. (Indeed, to some extent it continues to this day.)

But this actually doesn’t make a lot of sense on an interstellar scale. Certainly water (V) and gold (Cowboys & Aliens) are not things they would have even the slightest reason to try to claim from an inhabited planet, as comets are a better source of water and asteroids are a better source of gold. Indeed, almost nothing inorganic could really be cost-effective to obtain from an inhabited planet; far easier to just grab it from somewhere that won’t fight back, and may even have richer veins and lower gravity.

It’s possible they would want something organic—lumber or spices, I guess. But I’m not sure why they’d want those things, and it seems kind of baffling that they wouldn’t just trade if they really want them. I’m sure we’d gladly give up a great deal of oregano and white pine in exchange for nanotechnology and FTL. I guess I could see this happening because they assume we’re too stupid to be worth trading with, or they can’t establish reliable means of communication. But one of the reasons why globalization has succeeded where colonialism failed is that trade is a lot more efficient than theft, and I find it unlikely that aliens this advanced would have failed to learn that lesson.

Media that imagines they’d enslave us makes even less sense; slavery is wildly inefficient, and they probably have such ludicrously high productivity that they are already coping with a massive labor glut. (I suppose maybe they send off unemployed youths to go conquer random planets just to give them something to do with their time? Helps with overpopulation too.)

I actually thought Independence Day: Resurgence did a fairly good job of finding a resource that is scarce enough to be worth fighting over while also not being something we would willingly trade. Spoiler alert, I suppose:

Molten cores. Now, I haven’t the foggiest what one does with molten planet cores that somehow justifies the expenditure of all that energy flying between solar systems and digging halfway through planets with gigantic plasma drills, but hey, maybe they are actually tremendously useful somehow. They certainly do contain huge amounts of energy, provided you can extract it efficiently. Moreover, they are scarce; most of the planets we know about do not have molten cores. Earth, Venus, and Mercury do, and we think Mars once did; but none of the gas giants do, and even if they did, it’s quite plausible that the Queen’s planet-cracker drill just can’t drill that far down. Venus sounds like a nightmare to drill, so really the only planet I’d expect them to extract before Earth would be Mercury. And maybe they figured they needed both cores to justify the trip, in which case it would make sense to hit the inhabited planet first so we don’t have time to react and prepare our defenses. (I can’t imagine we’d take giant alien ships showing up and draining Mercury’s core lying down.) I’m imagining the alien economist right now, working out the cost-benefit analysis of dealing with Venus’s superheated atmosphere and sulfuric acid clouds compared to the cost of winning a war against primitive indigenous apes with nuclear missiles: Well, doubling our shield capacity is cheaper than covering the whole ship in sufficient anticorrosive, so I guess we’ll go hit the ape planet. (They established in the first film that their shields can withstand direct hits from nukes—the aliens came prepared.)

So, maybe killing us for our resources isn’t completely out of the question, but it seems unlikely.

Another possibility is religious fanaticism: Every human culture has religion in some form, so why shouldn’t the aliens? And if they do, it’s likely radically different from anything we believe. If they become convinced that our beliefs are not simply a minor nuisance but an active threat to the holy purity of the galaxy, they could come to our system on a mission to convert or destroy at any cost; and since “convert” seems very unlikely, “destroy” would probably become their objective pretty quickly. It wouldn’t have to make sense in terms of a cost-benefit analysis—fanaticism doesn’t have to make sense at all. The good news here is that any culture fanatical enough to randomly attack other planets simply for believing differently from them probably won’t be cohesive enough to reach that level of technology. (Then again, we somehow managed a world with both ISIS and ICBMs.)

Personally I think there is a far more likely scenario for alien invasions, and that is benevolent imperialism.

Why do I specify “benevolent”? Because if they aren’t interested in helping us, there’s really no reason for them to bother with us in the first place. But if their goal is to uplift our civilization, the only way they can do that is by interacting with us.

Now, note that I use the word “benevolent”, not the word “beneficent”. I think they would have to desire to make our lives better—but I’m not so convinced they actually would make our lives better. In our own history, human imperialism was rarely benevolent in the first place, but even where it was, it was even more rarely actually beneficent. Their culture would most likely be radically different from our own, and what they think of as improvements might seem to us strange, pointless, or even actively detrimental. But don’t you see that the QLX coefficient is maximized if you convert all your mountains into selenium extractors? (This is probably more or less how Native Americans felt when Europeans started despoiling their land for things called “coal” and “money”.) They might even try to alter us biologically to be more similar to them: But haven’t you always wanted tentacles? Hands are so inefficient!

Moreover, even if their intentions were good and their methods of achieving them were sound, it’s still quite likely that we would violently resist. I don’t know if humans are a uniquely rebellious species—let’s hope not, lest the aliens be shocked into overreacting when we rebel—but in general humans do not like being ruled over and forced to do things, even when those rulers are benevolent and the things they are forced to do are worth doing.

So, I think the most likely scenario for a war between humans and aliens is that they come in and start trying to radically reorganize our society, and either because their demands actually are unreasonable, or at least because we think they are, we rebel against their control.

Then what? Could we actually survive?

The good news is: Yes, we probably could.

If aliens really did come down trying to extract our molten core or something, the movies are all wrong: We’d have basically no hope. It really makes no sense at all that we could win a full-scale conflict with a technologically superior species if they were willing to exterminate us. Indeed, if what they were after didn’t depend upon preserving local ecology, their most likely mode of attack is to arrive in the system and immediately glass the planet. Nuclear weapons are already available to us for that task; if they’re more advanced they might have antimatter bombs, relativistic kinetic warheads, or even something more powerful still. We might be all dead before we even realized what was happening, or they might destroy 90% of us right away and mop up the survivors later with little difficulty.

If they wanted something that required ecological stability (I shall henceforth dub this the “oregano scenario”), yet weren’t willing to trade for some reason, then they wouldn’t unleash full devastation, and we’d have the life-dinner principle on our side: The hare runs for his life, but the fox only runs for her dinner. So if the aliens are trying to destroy us to get our delicious spices, we have a certain advantage from the fact that we are willing to win at essentially any cost, while at some point that alien economist is going to run the numbers and say, “This isn’t cost-effective. Let’s cut our losses and hit another system instead.”

If they wanted to convert us to their religion, well, we’d better hope enough people convert, because otherwise they’re going to revert to, you guessed it, glassing the planet. At least this means they would probably try to communicate first, so we’d have some time to prepare; but it’s unlikely that even if their missionaries spent decades trying to convert us we could seriously reduce our disadvantage in military technology during that time. So really, our best bet is to adopt the alien religion. I guess what I’m really trying to say here is “All Hail Xemu.”

But in the most likely scenario that their goal is actually to make our lives better, or at least better as they see it, they will not be willing to utilize their full military capability against us. They might use some lethal force, especially if they haven’t found reliable means of nonlethal force on sufficient scale; but they aren’t going to try to slaughter us outright. Maybe they kill a few dissenters to set an example, or fire into a crowd to disperse a riot. But they are unlikely to level a city, and they certainly wouldn’t glass the entire planet.

Our best bet would probably actually be nonviolent resistance, as this has a much better track record against benevolent imperialism. Gandhi probably couldn’t have won a war against Britain, but he achieved India’s independence because he was smart enough to fight on the front of public opinion. Likewise, even with one tentacle tied behind their backs by their benevolence, the aliens would still probably be able to win any full-scale direct conflict; but if our nonviolent resistance grew strong enough, they might finally take the hint and realize we don’t want their so-called “help”.

So, how about someone makes that movie? Aliens come to our planet, not to kill us, but to change us, make us “better” according to their standards. QLX coefficients are maximized, and an intrepid few even get their tentacles installed. But the Resistance arises, and splits into two factions: One tries to use violence, and is rapidly crushed by overwhelming firepower, while the other uses nonviolent resistance. Ultimately the Resistance grows strong enough to overthrow the alien provisional government, and they decide to cut their losses and leave our planet. Then, decades later, we go back to normal, and wonder if we made the right decision, or if maybe QLX coefficients really were the most important thing after all.


The only thing necessary for the triumph of evil is that good people refuse to do cost-benefit analysis

July 27, JDN 2457597

My title is based on a famous quote often attributed to Edmund Burke, but which we have no record of him actually saying:

The only thing necessary for the triumph of evil is that good men do nothing.

The closest he actually appears to have written is this:

When bad men combine, the good must associate; else they will fall one by one, an unpitied sacrifice in a contemptible struggle.

Burke’s intended message was about the need for cooperation and avoiding diffusion of responsibility; then his words were distorted into a duty to act against evil in general.

But my point today is going to be a little bit more specific: A great deal of real-world evils would be eliminated if good people were more willing to engage in cost-benefit analysis.

As discussed on Less Wrong a while back, there is a common “moral” saying which comes from the Talmud (if not earlier; and of course it’s hardly unique to Judaism), which gives people a great warm and fuzzy glow whenever they say it:

Whoever saves a single life, it is as if he had saved the whole world.

Yet this is in fact the exact opposite of moral. It is a fundamental, insane perversion of morality. It amounts to saying that “saving a life” is just a binary activity, either done or not, and once you’ve done it once, congratulations, you’re off the hook for the other 7 billion. All those other lives mean literally nothing, once you’ve “done your duty”.

Indeed, it would seem to imply that you can be a mass murderer, as long as you save someone else somewhere along the line. If Mao Tse-tung at some point stopped someone from being run over by a car, it’s okay that his policies killed more people than the population of Greater Los Angeles.

Conversely, if anything you have ever done has resulted in someone’s death, you’re just as bad as Mao; in fact if you haven’t also saved someone somewhere along the line and he has, you’re worse.

Maybe this is how you get otherwise-intelligent people saying such insanely ridiculous things as “George W. Bush’s crimes are uncontroversially worse than Osama bin Laden’s.” (No, probably not, since Chomsky at least feigns something like cost-benefit analysis. I’m not sure what his failure mode is, but it’s probably not this one in particular. “Uncontroversially”… you keep using that word…)

Cost-benefit analysis is actually a very simple concept (though applying it in practice can be mind-bogglingly difficult): Try to maximize the good things minus the bad things. If an action would increase good things more than bad things, do it; if it would increase bad things more than good things, don’t do it.
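The decision rule in the paragraph above really is that simple; a minimal sketch:

```python
def worth_doing(benefits, costs):
    """Cost-benefit rule: act iff the total good outweighs the total bad.
    Both arguments are lists of magnitudes on some common scale."""
    return sum(benefits) > sum(costs)

# A policy with a real downside can still be clearly worth it:
print(worth_doing(benefits=[1000, 50], costs=[200]))  # True
```

The hard part, of course, is not the comparison but getting the lists and their magnitudes right in the first place.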

What it replaces is simplistic deontological reasoning about “X is always bad” or “Y is always good”; that’s almost never true. Even great evils can be justified by greater goods, and many goods are not worth having because of the evils they would require to achieve. We seem to want all our decisions to have no downside, perhaps because that would resolve our cognitive dissonance most easily; but in the real world, most decisions have an upside and a downside, and it’s a question of which is larger.

Why is it that so many people—especially good people—have such an aversion to cost-benefit analysis?

I gained some insight into this by watching a video discussion from an online Harvard course taught by Michael Sandel (which is free, by the way, if you’d like to try it out). He was leading the discussion Socratically, which is in general a good method of teaching—but like anything else can be used to teach things that are wrong, and is in some ways more effective at doing so because it has a way of making students think they came up with the answers on their own. He says something like, “Do we really want our moral judgments to be based on cost-benefit analysis?” and gives some examples where people made judgments using cost-benefit analysis to support his suggestion that this is something bad.

But of course his examples are very specific: They all involve corporations using cost-benefit analysis to maximize profits. One of them is the Ford Pinto case, where Ford estimated the cost to them of a successful lawsuit, multiplied by the probability of such lawsuits, and then compared that with the cost of a total recall. Finding that the lawsuits were projected to be cheaper, they opted for that result, and thereby allowed several people to be killed by their known defective product.

Now, it later emerged that Ford Pintos were not actually especially dangerous, and in fact Ford didn’t just include lawsuits but also a standard estimate of the “value of a statistical human life”, and as a result of that their refusal to do the recall was probably the completely correct decision—but why let facts get in the way of a good argument?
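The structure of that calculation is just expected value; here it is with purely made-up numbers for illustration (the actual figures Ford used were different, and are still disputed):

```python
def expected_liability(p_failure, p_lawsuit_given_failure, cost_per_lawsuit, units):
    """Expected total lawsuit cost across all units sold."""
    return units * p_failure * p_lawsuit_given_failure * cost_per_lawsuit

# Hypothetical illustration only; these are NOT Ford's actual numbers.
liability = expected_liability(p_failure=1e-5, p_lawsuit_given_failure=0.5,
                               cost_per_lawsuit=2e5, units=1.5e6)
recall_cost = 11.0 * 1.5e6  # hypothetical $11 per unit to fix
print("recall" if recall_cost < liability else "no recall")
```

Note that nothing in this arithmetic is evil per se; the moral question is entirely about what goes into `cost_per_lawsuit`—whether human deaths are priced honestly or merely as legal exposure.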

But let’s suppose that all the facts had been as people thought they were—the product was unsafe and the company was only interested in their own profits. We don’t need to imagine this hypothetically; this is clearly what actually happened with the tobacco industry, and indeed with the oil industry. Is that evil? Of course it is. But not because it’s cost-benefit analysis.

Indeed, the reason this is evil is the same reason most things are evil: They are psychopathically selfish. They advance the interests of those who do them, while causing egregious harms to others.

Exxon is apparently prepared to sacrifice millions of lives to further their own interests, which makes them literally no better than Mao, as opposed to this bizarre “no better than Mao” that we would all be if the number of lives saved versus killed didn’t matter. Let me be absolutely clear; I am not speaking in hyperbole when I say that the board of directors of Exxon is morally no better than Mao. No, I mean they literally are willing to murder 20 million people to serve their own interests—more precisely 10 to 100 million, by WHO estimates. Maybe it matters a little bit that these people will be killed by droughts and hurricanes rather than by knives and guns; but then, most of the people Mao killed died of starvation, and plenty of the people killed by Exxon will too. But this statement wouldn’t have the force it does if I could not speak in terms of quantitative cost-benefit analysis. Killing people is one thing, and most industries would have to own up to it; being literally willing to kill as many people as history’s greatest mass murderers is quite another; and yet it is true of Exxon.

But I can understand why people would tend to associate cost-benefit analysis with psychopaths maximizing their profits; there are two reasons for this.

First, most neoclassical economists appear to believe in both cost-benefit analysis and psychopathic profit maximization. They don’t even clearly distinguish their concept of “rational” from the concept of total psychopathic selfishness—hence why I originally titled this blog “infinite identical psychopaths”. The people arguing for cost-benefit analysis are usually economists, and economists are usually neoclassical, so most of the time you hear arguments for cost-benefit analysis they are also linked with arguments for horrifically extreme levels of selfishness.

Second, most people are uncomfortable with cost-benefit analysis, and as a result don’t use it. So, most of the cost-benefit analysis you’re likely to hear is done by terrible human beings, typically at the reins of multinational corporations. This becomes self-reinforcing, as all the good people don’t do cost-benefit analysis, so they don’t see good people doing it, so they don’t do it, and so on.

Therefore, let me present you with some clear-cut cases where cost-benefit analysis can save millions of lives, and perhaps even save the world.

Imagine if our terrorism policy used cost-benefit analysis; we wouldn’t kill 100,000 innocent people and sacrifice 4,400 soldiers fighting a war that didn’t have any appreciable benefit as a bizarre form of vengeance for 3,000 innocent people being killed. Moreover, we wouldn’t sacrifice core civil liberties to prevent a cause of death that’s 300 times rarer than car accidents.

Imagine if our healthcare policy used cost-benefit analysis; we would direct research funding to maximize our chances of saving lives, not toward the form of cancer that is quite literally the sexiest. We would go to a universal healthcare system like the rest of the First World, and thereby save thousands of additional lives while spending less on healthcare.

With cost-benefit analysis, we would reform our system of taxes and subsidies to internalize the cost of carbon emissions, most likely resulting in a precipitous decline of the oil and coal industries and the rapid rise of solar and nuclear power, and thereby save millions of lives. Without cost-benefit analysis, we instead get unemployed coal miners appearing on TV to grill politicians about how awful it is to lose your job even though that job is decades obsolete and poisoning our entire planet. Would eliminating coal hurt coal miners? Yes, it would, at least in the short run. It’s also completely, totally worth it, by at least a thousandfold.

We would invest heavily in improving our transit systems, with automated cars or expanded rail networks, thereby preventing thousands of deaths per year—instead of being shocked and outraged when an automated car finally kills one person, while manual vehicles in their place would have killed half a dozen by now.

We would disarm all of our nuclear weapons, because the risk of a total nuclear apocalypse is not worth it to provide some small increment in national security above our already overwhelming conventional military. While we’re at it, we would downsize that military in order to save enough money to end world hunger.

And oh by the way, we would end world hunger. The benefits of doing so are enormous; the costs are remarkably small. We’ve actually been making a great deal of progress lately—largely due to the work of development economists, and lots and lots of cost-benefit analysis. This process involves causing a lot of economic disruption, making people unemployed, taking riches away from some people and giving them to others; if we weren’t prepared to bear those costs, we would never get these benefits.

Could we do all these things without cost-benefit analysis? I suppose so, if we go through the usual process of covering our ears whenever a downside is presented and amplifying whenever an upside is presented, until we can more or less convince ourselves that there is no downside even though there always is. We can continue having arguments where one side presents only downsides, the other side presents only upsides, and then eventually one side prevails by sheer numbers, and it could turn out to be the upside team (or should I say “tribe”?).

But I think we’d progress a lot faster if we were honest about upsides and downsides, and had the courage to stand up and say, “Yes, that downside is real; but it’s worth it.” I realize it’s not easy to tell a coal miner to his face that his job is obsolete and killing people, and I don’t really blame Hillary Clinton for being wishy-washy about it; but the truth is, we need to start doing that. If we accept that costs are real, we may be able to mitigate them (as Hillary plans to do with a $30 billion investment in coal mining communities, by the way); if we pretend they don’t exist, people will still get hurt but we will be blind to their suffering. Or worse, we will do nothing—and evil will triumph.

Lukewarm support is a lot better than opposition

July 23, JDN 2457593

Depending on your preconceptions, this statement may seem either eminently trivial or offensively wrong: Lukewarm support is a lot better than opposition.

I’ve always been in the “trivial” camp, so it has taken me a while to really understand where people are coming from when they say things like the following.

From a civil rights activist blogger (“POC” being “person of color” in case you didn’t know):

Many of my POC friends would actually prefer to hang out with an Archie Bunker-type who spits flagrantly offensive opinions, rather than a colorblind liberal whose insidious paternalism, dehumanizing tokenism, and cognitive indoctrination ooze out between superficially progressive words.

From the Daily Kos:

Right-wing racists are much more honest, and thus easier to deal with, than liberal racists.

From a Libertarian blogger:

I can deal with someone opposing me because of my politics. I can deal with someone who attacks me because of my religious beliefs. I can deal with open hostility. I know where I stand with people like that.

They hate me or my actions for (insert reason here). Fine, that is their choice. Let’s move onto the next bit. I’m willing to live and let live if they are.

But I don’t like someone buttering me up because they need my support, only to drop me the first chance they get. I don’t need sweet talk to distract me from the knife at my back. I don’t need someone promising the world just so they can get a boost up.

In each of these cases, people are expressing a preference for dealing with someone who actively opposes them, rather than someone who mostly supports them. That’s really weird.

That lukewarm support is better than opposition is basically a mathematical theorem. In a democracy or anything resembling one, if you have the majority of the population supporting you, even if they are all lukewarm, you win; if you have the majority of the population opposing you, even if the remaining minority is extremely committed to your cause, you lose.
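In a simple referendum this is literally true, because a ballot records direction but not intensity; a toy sketch:

```python
def referendum(voters):
    """Each voter is a (supports: bool, intensity: float) pair.
    Only the direction of the vote counts; intensity is ignored."""
    yes = sum(1 for supports, _ in voters if supports)
    return yes > len(voters) - yes

# A lukewarm majority beats a fiercely committed minority:
lukewarm_majority = [(True, 0.1)] * 51 + [(False, 1.0)] * 49
print(referendum(lukewarm_majority))  # True
```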

Yes, okay, it does get slightly more complicated than that, as in most real-world democracies small but committed interest groups actually can pressure policy more than lukewarm majorities (the special interest effect); but even then, you are talking about the choice between no special interests and a special interest actively against you.

There is a valid question of whether it is more worthwhile to get a small, committed coalition, or a large, lukewarm coalition; but at the individual level, it is absolutely undeniable that supporting you is better for you than opposing you, full stop. I mean that in the same sense that the Pythagorean theorem is undeniable; it’s a theorem, it has to be true.

If you had the opportunity to immediately replace every single person who opposes you with someone who supports you but is lukewarm about it, you’d be insane not to take it. Indeed, this is basically how all social change actually happens: Committed supporters persuade committed opponents to become lukewarm supporters, until they get a majority and start winning policy votes.

If this is indeed so obvious and undeniable, why are there so many people… trying to deny it?

I came to realize that there is a deep psychological effect at work here. I could find very little in the literature describing this effect, which I’m going to call heretic effect (though the literature on betrayal aversion, several examples of which are linked in this sentence, is at least somewhat related).

Heretic effect is the deeply-ingrained sense human beings tend to have (as part of the overall tribal paradigm) that one of the worst things you can possibly do is betray your tribe. It is worse than being in an enemy tribe, worse even than murdering someone. The one absolutely inviolable principle is that you must side with your tribe.

This is one of the biggest barriers to police reform, by the way: The Blue Wall of Silence is the result of police officers identifying themselves as a tight-knit tribe and refusing to betray one of their own for anything. I think the best option for convincing police officers to support reform is to reframe violations of police conduct as themselves betrayals—the betrayal is not the IA taking away your badge, the betrayal is you shooting an unarmed man because he was Black.

Heretic effect is a particular form of betrayal aversion, where we treat those who are similar to our tribe but not quite part of it as the very worst sort of people, worse than even our enemies, because at least our enemies are not betrayers. In fact it isn’t really betrayal, but it feels like betrayal.

I call it “heretic effect” because of the way that exclusivist religions (including all the Abrahamic religions, and especially Christianity and Islam) focus so much of their energy on rooting out “heretics”, people who almost believe the same as you do but not quite. The Spanish Inquisition wasn’t targeted at Buddhists or even Muslims; it was targeted at Christians who slightly disagreed with Catholicism. Why? Because while Buddhists might be the enemy, Protestants were betrayers. You can still see this in the way that Muslim societies treat “apostates”, those who once believed in Islam but don’t anymore. Indeed, the very fact that Christianity and Islam are at each other’s throats, rather than Hinduism and atheism, shows that it’s the people who almost agree with you that really draw your hatred, not the people whose worldview is radically distinct.

This is the effect that makes people dislike lukewarm supporters; like heresy, lukewarm support feels like betrayal. You can clearly hear that in the last quote: “I don’t need sweet talk to distract me from the knife at my back.” Believe it or not, Libertarians, my support for replacing the social welfare state with a basic income, decriminalizing drugs, and dramatically reducing our incarceration rate is not deception. Nor do I think I’ve been particularly secretive about my desire to make taxes more progressive and environmental regulations stronger, the things you absolutely don’t agree with. Agreeing with you on some things but not on other things is not in fact the same thing as lying to you about my beliefs or infiltrating and betraying your tribe.

That said, I do sort of understand why it feels that way. When I agree with you on one thing (decriminalizing cannabis, for instance), it sends you a signal: “This person thinks like me.” You may even subconsciously tag me as a fellow Libertarian. But then I go and disagree with you on something else that’s just as important (strengthening environmental regulations), and it feels to you like I have worn your Libertarian badge only to stab you in the back with my treasonous environmentalism. I thought you were one of us!

Similarly, if you are a social justice activist who knows all the proper lingo and is constantly aware of “checking your privilege”, and I start by saying, yes, racism is real and terrible, and we should definitely be working to fight it, but then I question something about your language and approach, that feels like a betrayal. At least if I’d come in wearing a Trump hat you could have known which side I was really on. (And indeed, I have had people unfriend me or launch into furious rants at me for questioning the orthodoxy in this way. And sure, it’s not as bad as actually being harassed on the street by bigots—a thing that has actually happened to me, by the way—but it’s still bad.)

But if you can resist this deep-seated impulse and really think carefully about what’s happening here, agreeing with you partially clearly is much better than not agreeing with you at all. Indeed, there’s a fairly smooth function there, wherein the more I agree with your goals the more our interests are aligned and the better we should get along. It’s not completely smooth, because certain things are sort of package deals: I wouldn’t want to eliminate the social welfare system without replacing it with a basic income, whereas many Libertarians would. I wouldn’t want to ban fracking unless we had established a strong nuclear infrastructure, but many environmentalists would. But on the whole, more agreement is better than less agreement—and really, even these examples are actually surface-level results of deeper disagreement.

Getting this reaction from social justice activists is particularly frustrating, because I am on your side. Bigotry corrupts our society at a deep level and holds back untold human potential, and I want to do my part to undermine and hopefully one day destroy it. When I say that maybe “privilege” isn’t the best word to use and warn you about not implicitly ascribing moral responsibility across generations, this is not me being a heretic against your tribe; this is a strategic policy critique. If you are writing a letter to the world, I’m telling you to leave out paragraph 2 and correcting your punctuation errors, not crumpling up the paper and throwing it into a fire. I’m doing this because I want you to win, and I think that your current approach isn’t working as well as it should. Maybe I’m wrong about that—maybe paragraph 2 really needs to be there, and you put that semicolon there on purpose—in which case, go ahead and say so. If you argue well enough, you may even convince me; if not, this is the sort of situation where we can respectfully agree to disagree. But please, for the love of all that is good in the world, stop saying that I’m worse than the guys in the KKK hoods. Resist that feeling of betrayal so that we can have a constructive critique of our strategy. Don’t do it for me; do it for the cause.

Expensive cheap things, cheap expensive things

July 20, JDN 2457590

My posts recently have been fairly theoretical and mathematically intensive, so I thought I’d take a break from that today and offer you a much simpler, more practical post that you could use right away to improve your own finances.

Cognitive economists are so accustomed to using the word “heuristic” in contrast with words like “optimal” and “rational” that we tend to treat them as something bad. If only we didn’t have these darn heuristics, we could be those perfect rational agents the neoclassicists keep telling us about!

But in fact this is almost completely backwards: Heuristics are the reason human beings are capable of rational thought, unlike, well, anything else in the known universe. To be fair, many animals are capable of some limited rationality, often more than most people realize, but still far less than our own—and what rationality they have is born of the same evolutionary heuristics we use. Computers and robots are now approaching something that could be called rationality, but they still have a long way to go before they’ll really be acting rationally rather than perfectly following precise instructions—and of course we made them, modeled after our own thought processes. Current robots are logical, but not rational. The difference between logic and rationality is rather like that between intelligence and wisdom. Logic dictates that coffee is a berry; rationality says you may not enjoy it in your fruit salad. Robots are still at the point where they’d put coffee in our fruit salads if we told them to include a random mix of berries.

Heuristics are what allows us to make rational decisions 90% of the time. We might wish for something that would make us rational 100% of the time, but no known method exists; the best we can do is learn better heuristics to raise our percentage to perhaps 92% or 95%. With no heuristics at all, we would be 0% rational, not 100%.

So today I’m going to offer you a new heuristic, which I think might help you give your choices that little 2% boost. Expensive cheap things, cheap expensive things.

This is a little mantra to repeat to yourself whenever you have a purchasing decision to make—which, in a consumerist economy like ours, is surely several times a day. The precise definition of “cheap” and “expensive” will vary according to your income (to a billionaire, my lifetime income is a pittance; to someone at the UN poverty level, my annual income is an unimaginable bounty of riches). But for a typical middle-class American, “cheap” can be approximately defined by a Jackson heuristic—anything less than $20 is cheap—and “expensive” by a Benjamin heuristic—anything over $100 is expensive. It doesn’t need to be hard-edged either; you should apply this heuristic more thoroughly for purchases of $10,000 (e.g. cars) than you do for purchases of $1,000, and still more so for purchases of $100,000 (houses).

Expensive cheap things, cheap expensive things; what do I mean by that?

If you are going to buy something cheap, you can choose the expensive variety if you like. If you have the choice of a $1 toothbrush, a $5 toothbrush, and a $10 toothbrush, and you really do like the $10 toothbrush, don’t agonize over it—just buy the damn $10 toothbrush. Obviously there’s no reason to do that if the $1 toothbrush is really just as good for your needs; but if there’s any difference in quality you care about, it is almost certainly worth it to buy the better one.

If you are going to buy something expensive, you should choose the cheap variety if you can. If you have the choice of a $14,000 car, a $15,000 car, and a $16,000 car, you should buy the $14,000 car, unless the other cars are massively superior. You should basically be aiming for the cheapest bare-minimum choice that allows you to meet your needs. (I should be careful using cars as my example, because many old used cars that seem “cheap” are actually more expensive to fuel and maintain than it would cost to simply buy a newer model—but assume you’ve factored in a good estimate of the maintenance cost. You should almost never buy cars that aren’t at least a year old, however—first-year depreciation is huge. Let someone else lease it for a year before you buy it.)

Why do I say this? Many people find the result counter-intuitive: I just told you to spend 900% more on toothbrushes, but insisted that you scrounge to save 12.5% on a car. Even if we adjust for the asymmetry using log points, I told you to indulge in 230 log points of toothbrush for a tiny gain, while insisting you bear the no-frills bare minimum to save 13 log points of car.

I have also saved you $1,991. That’s why.

Intuitively we tend to think in terms of proportional prices—this car is 12.5% cheaper than that car, this toothbrush is 900% more expensive than that toothbrush. But you don’t spend money in proportions. You spend it in absolute amounts. So when you decide to make a purchase, you need to train yourself to think in terms of the absolute difference in price—paying $9 more versus paying $2000 more.
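The toothbrush and car numbers above can be checked with a small sketch (purely illustrative; the function name is mine, not anything from the post):

```python
import math

def upgrade_comparison(base_price, upgrade_price):
    """Compare an upgrade both ways: the proportional difference our
    intuition latches onto, and the absolute difference we actually pay."""
    absolute = upgrade_price - base_price
    proportional = absolute / base_price  # 9.0 means "900% more expensive"
    log_points = 100 * math.log(upgrade_price / base_price)
    return absolute, proportional, log_points

# The toothbrush: a huge proportional jump, a trivial absolute one.
tb_abs, tb_prop, tb_log = upgrade_comparison(1, 10)  # $9 more, 900%, ~230 log points

# The car: a modest proportional jump, a large absolute one.
car_abs, car_prop, car_log = upgrade_comparison(14_000, 16_000)  # $2,000 more, ~13 log points

# Net effect of "expensive cheap things, cheap expensive things":
savings = car_abs - tb_abs  # $2,000 saved on the car minus $9 spent on the toothbrush
```

Run it and the asymmetry is stark: the indulgent toothbrush costs $9, the frugal car choice saves $2,000.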

Businesses are counting on you not to think this way; that car dealer is surely going to point out that the $16,000 model has a sunroof and upgraded tire rims and whatever, and it’s only 14% more! But unless you would seriously be willing to pay $2,000 to get a sunroof and upgraded tire rims installed later, you should not upgrade to the $16,000 model. Don’t let them bamboozle you with “it’s a $5,000 value!”; it might well cost $5,000 to have those things done elsewhere, but that’s not the same thing. Only you can decide whether it’s of sufficient value to you.

There’s another reason this heuristic can be useful, which is that it will tend to pressure you into buying experiences instead of objects—and it is a well-established pattern in cognitive economics that experiences are a more cost-effective source of happiness than objects. “Expensive cheap things, cheap expensive things” doesn’t necessarily pressure toward buying experiences, as one could certainly load up on useless $20 gadgets or spend $5,000 on a luxurious vacation to Paris. But as a general pattern (and heuristics are all about general patterns!) you’re more likely to spend $20 on a dinner or $5,000 on a car. Some of the cheapest things people buy, like dining out with friends, are some of the greatest sources of happiness—you are, in a real sense, buying friendship. Some of the most expensive things people buy, like real estate, are precisely the sort of thing you should be willing to skimp on, because they really won’t bring you happiness. Larger houses are not statistically associated with higher happiness.

Indeed, part of the great crisis of real estate prices (which is a phenomenon across all First World cities, and surprisingly worse in Canada than the US, though worse still in California in particular) probably comes from people not applying this sort of heuristic. “This house is $240,000, but that one is only 10% more and look how much nicer it is!” That’s $24,000. You can buy that nicer house, or you can buy a second car. Or you can have an extra year of your child’s college fund. That is what that 10% actually means. I’m sure this isn’t the primary reason why housing in the US is so ludicrously expensive, but it may be a contributing factor. (Krugman argued similarly during the housing crash.)

Like any heuristic, “Expensive cheap things, cheap expensive things” will sometimes fail you, and if you think carefully you can probably outperform it. But I’ve found it’s a good habit to get into; it has helped me save money more than just about anything else I’ve tried.

If we had range voting, who would win this election?

July 16, JDN 2457586

The nomination of Donald Trump is truly a terrible outcome, and may be unprecedented in American history. One theory of its causation, taken by many policy elites (reviewed here by the Brookings Institution), is that this is a sign of “too much democracy”, a sentiment such elites often turn to, as The Economist did in the wake of the Great Recession. Even Salon has published such a theory. Yet as Michael Lind of the New York Times recognized, the problem is clearly not too much democracy but too little. “Too much democracy” is not an outright incoherent notion—it is something that I think in principle could exist—but I have never encountered it. Every time someone claims a system is too democratic, I have found that deeper digging shows that what they really mean is that it doesn’t privilege their interests enough.

Part of the problem, I think, is that even democracy as we know it in the real world is really not all that democratic, especially not in the United States, where it is totally dominated by a plurality vote system that forces us to choose between two parties. Most of the real decision-making happens in Senate committees, and when votes are important they are really most important in primaries. To be clear, I’m not saying that votes don’t count in the US or you shouldn’t vote; they do count, and you should vote. But anyone saying this system is “too democratic” clearly has no idea just how much more democratic it could be.

Indeed, there is one simple change that would greatly expand democracy, weaken the two-party system, and undermine Trump all in one fell swoop, and it is called range voting. I’ve sung the praises of range voting many times before, but some anvils need to be dropped; I guess it’s just this thing I have when a system is mathematically proven superior.

Today I’d like to run a little thought experiment: What would happen if we had used range voting this election? I’m going to use actual poll data, rather than making up hypotheticals like The New York Times did when they tried to make this same argument using Condorcet voting. (Condorcet voting is basically range voting lite, for people who don’t believe in cardinal utility.)

Of course, no actual range voting has been conducted, so I have to extrapolate. So here’s my simple, but I think reasonably reliable, methodology: I’m going to use aggregated favorability ratings from Real Clear Politics (except for Donald Trump, whom Real Clear Politics didn’t include for some reason; for him I’m using Washington Post poll numbers, whose figures for Clinton are comparable to RCP’s). Sadly, I couldn’t find good favorability ratings for Jill Stein and Gary Johnson, though I’d very much like to, so I had to exclude them. Had I included them, it’s quite possible one of them could have won, which would make my point even more strongly.

I score the ratings as follows: Every “unfavorable” rating counts as a 0. Every “favorable” rating counts as a 1. Other ratings will be ignored, and I’ll add 10% “unfavorable” ratings to every candidate as a “soft quorum” (here’s an explanation of why we want to do this). Technically this is really approval voting, which is a special case of range voting where you can only vote 0 or 1.
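This scoring rule is simple enough to write down in a few lines (my own reconstruction in code; the function name is mine):

```python
def approval_score(favorable, unfavorable, quorum=10.0):
    """Soft-quorum approval score: favorables count as 1, unfavorables as 0,
    and an extra 10 points of 'unfavorable' is added as a soft quorum."""
    return 100 * favorable / (favorable + unfavorable + quorum)

# Reproducing two rows of the table below (percent favorable, percent unfavorable):
trump = approval_score(29.0, 70.0)  # -> 26.6
webb = approval_score(10.3, 15.0)   # -> 29.2
```

The soft quorum is what keeps obscure candidates with tiny but adoring followings (like Jim Webb) from running away with the election on a handful of ratings.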

All right, here goes.

Candidate Favorable Unfavorable Overall score
Bernie Sanders 48.4% 37.9% 50.5%
Joe Biden 47.4% 36.6% 50.4%
Elizabeth Warren 36.0% 32.0% 46.2%
Ben Carson 37.8% 42.0% 42.1%
Marco Rubio 36.3% 40.3% 41.9%
Hillary Clinton 39.6% 55.3% 37.7%
Scott Walker 23.5% 29.3% 37.4%
Chris Christie 29.8% 44.5% 35.3%
Mike Huckabee 27.0% 40.7% 34.7%
Rand Paul 25.7% 41.0% 33.5%
Jeb Bush 30.8% 52.4% 33.0%
Martin O’Malley 17.5% 27.0% 32.1%
Bobby Jindal 18.7% 30.3% 31.7%
Rick Santorum 24.0% 42.0% 31.6%
Rick Perry 21.0% 39.3% 29.9%
Jim Webb 10.3% 15.0% 29.2%
Donald Trump 29.0% 70.0% 26.6%

Joe Biden and Elizabeth Warren aren’t actually running, but it would be great if they did (and of course people like them, what’s not to like?). Ben Carson does surprisingly well, which I confess is baffling; he’s a nice enough guy, I guess, but he’s also crazypants. Hopefully if he’d campaigned longer, his approval ratings would have fallen as people heard him talk, much like Sarah Palin and for the same reasons—but note that even if this didn’t happen, he still wouldn’t have won. Marco Rubio was always the least-scary Republican option, so it’s nice to see him come up next. And then of course we have Hillary Clinton, who will actually be our next President. (6th place ain’t so bad?)

But look, there, who is that up at the top? Why, it’s Bernie Sanders.

Let me be clear about this: Using our current poll numbers—I’m not assuming that people become more aware of him, or more favorable to him, I’m just using the actual figures we have from polls of the general American population right now—if we had approval voting, and probably if we had more expressive range voting, Bernie Sanders would win the election.

Moreover, where is Donald Trump? The very bottom. He is literally the most hated candidate, and couldn’t even beat Jim Webb or Rick Perry under approval voting.

Trump didn’t win the hearts and minds of the American people, he knew how to work the system. He knew how to rally the far-right base of the Republican Party in order to secure the nomination, and he knew that the Republican leadership would fall in line and continue their 25-year-long assault on Hillary Clinton’s character once he had.

This disaster was created by our plurality voting system. If we’d had a more democratic voting system, Bernie Sanders would be narrowly beating Joe Biden. But instead Hillary Clinton is narrowly beating Donald Trump.

Trump is not the product of too much democracy, but too little.

“The cake is a lie”: The fundamental distortions of inequality

July 13, JDN 2457583

Inequality of wealth and income, especially when it is very large, fundamentally and radically distorts outcomes in a capitalist market. I’ve already alluded to this matter in previous posts on externalities and marginal utility of wealth, but it is so important I think it deserves to have its own post. In many ways this marks a paradigm shift: You can’t think about economics the same way once you realize it is true.

To motivate what I’m getting at, I’ll expand upon an example from a previous post.

Suppose there are only two goods in the world; let’s call them “cake” (K) and “money” (M). Then suppose there are three people, Baker, who makes cakes, Richie, who is very rich, and Hungry, who is very poor. Furthermore, suppose that Baker, Richie and Hungry all have exactly the same utility function, which exhibits diminishing marginal utility in cake and money. To make it more concrete, let’s suppose that this utility function is logarithmic, specifically: U = 10*ln(K+1) + ln(M+1)

The only difference between them is in their initial endowments: Baker starts with 10 cakes, Richie starts with $100,000, and Hungry starts with $10.

Therefore their starting utilities are:

U(B) = 10*ln(10+1)= 23.98

U(R) = ln(100,000+1) = 11.51

U(H) = ln(10+1) = 2.40

Thus, the total happiness is the sum of these: U = 37.89
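As a quick numerical check (the function and variable names here are mine, not part of the derivation), these starting utilities can be computed directly:

```python
import math

def utility(cake, money):
    """The shared utility function U = 10*ln(K+1) + ln(M+1)."""
    return 10 * math.log(cake + 1) + math.log(money + 1)

# Initial endowments: Baker has 10 cakes, Richie $100,000, Hungry $10.
u_baker = utility(10, 0)        # 23.98
u_richie = utility(0, 100_000)  # 11.51
u_hungry = utility(0, 10)       # 2.40
total_initial = u_baker + u_richie + u_hungry  # 37.89
```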

Now let’s ask two very simple questions:

1. What redistribution would maximize overall happiness?
2. What redistribution will actually occur if the three agents trade rationally?

If multiple agents have the same diminishing marginal utility function, it’s actually a simple and deep theorem that the total will be maximized if they split the wealth exactly evenly. In the following blockquote I’ll prove the simplest case, which is two agents and one good; it’s an incredibly elegant proof:

Given: for all x, f(x) > 0, f'(x) > 0, f''(x) < 0.

Maximize: f(x) + f(A - x) for fixed A

Setting the derivative with respect to x to zero:

f'(x) - f'(A - x) = 0

f'(x) = f'(A - x)

Since f''(x) < 0, this critical point is a maximum.

Also since f''(x) < 0, f' is strictly decreasing; therefore f' is injective, and we can equate its arguments:

x = A - x

x = A/2


This can be generalized to any number of agents, and for multiple goods. Thus, in this case overall happiness is maximized if the cakes and money are both evenly distributed, so that each person gets 3 1/3 cakes and $33,336.66.

The total utility in that case is:

3 * (10 ln(10/3+1) + ln(33,336.66+1)) = 3 * (14.66 + 10.414) = 3 * 25.074 = 75.22
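Computing the same total without truncating the intermediate values (a quick check, not part of the original derivation) gives essentially the same answer; the truncated values above give 75.22, full precision about 75.23:

```python
import math

def utility(cake, money):
    """The shared utility function U = 10*ln(K+1) + ln(M+1)."""
    return 10 * math.log(cake + 1) + math.log(money + 1)

# Even split: 10 cakes and $100,010 divided three ways.
cake_each = 10 / 3
money_each = 100_010 / 3
total_even = 3 * utility(cake_each, money_each)  # about 75.23
```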

That’s considerably better than our initial distribution (almost twice as good). Now, how close do we get by rational trade?

Each person is willing to trade up until the point where their marginal utility of cake is equal to their marginal utility of money. The price of cake will be set by the respective marginal utilities.

In particular, let’s look at the trade that will occur between Baker and Richie. They will trade until their marginal rate of substitution is the same.

The actual algebra involved is obnoxious (if you’re really curious, here are some solved exercises of similar trade problems), so let’s just skip to the end. (I rushed through, so I’m not actually totally sure I got it right, but to make my point the precise numbers aren’t important.)

Basically what happens is that Richie pays an exorbitant price of $10,000 per cake, buying half the cakes with half of his money.

Baker’s new utility and Richie’s new utility are thus the same:

U(R) = U(B) = 10*ln(5+1) + ln(50,000+1) = 17.92 + 10.82 = 28.74

What about Hungry? Yeah, well, he doesn’t have $10,000. If cakes are infinitely divisible, he can buy up to 1/1000 of a cake. But it turns out that even that isn’t worth doing (it would cost too much for what he gains from it), so he may as well buy nothing, and his utility remains 2.40.

Hungry wanted cake just as much as Richie, and because Richie has so much more Hungry would have gotten more happiness from each new bite. Neoclassical economists promised him that markets were efficient and optimal, and so he thought he’d get the cake he needs—but the cake is a lie.

The total utility is therefore:

U = U(B) + U(R) + U(H)

U = 28.74 + 28.74 + 2.40

U = 59.88

Note three things about this result: First, it is more than where we started at 37.89—trade increases utility. Second, both Richie and Baker are better off than they were—trade is Pareto-improving. Third, the total is less than the optimal value of 75.22—trade is not utility-maximizing in the presence of inequality. This is a general theorem that I could prove formally, if I wanted to bore and confuse all my readers. (Perhaps someday I will try to publish a paper doing that.)
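The post-trade totals can be verified the same way (again, the function and variable names are mine); this sketch also records why Hungry sits the trade out:

```python
import math

def utility(cake, money):
    """The shared utility function U = 10*ln(K+1) + ln(M+1)."""
    return 10 * math.log(cake + 1) + math.log(money + 1)

# After trade: Richie buys 5 cakes from Baker at $10,000 each.
u_baker = utility(5, 50_000)   # 28.74
u_richie = utility(5, 50_000)  # 28.74
u_hungry = utility(0, 10)      # 2.40 -- trading isn't worth it for him:
# a thousandth of a cake would gain him 10*ln(1.001), about 0.01 utils,
# but cost him all his money, ln(11), about 2.40 utils.
total_trade = u_baker + u_richie + u_hungry  # about 59.87
```

So the chain of inequalities in the three observations above checks out: 37.89 < 59.87 < 75.22.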

This result is incredibly radical—it basically goes against the core of neoclassical welfare theory, or at least of all its applications to real-world policy—so let me be absolutely clear about what I’m saying, and what assumptions I had to make to get there.

I am saying that if people start with different amounts of wealth, the trades they would willfully engage in, acting purely in their own self-interest, would not maximize the total happiness of the population. Redistribution of wealth toward equality would increase total happiness.

First, I had to assume that we could simply redistribute goods however we like without affecting the total amount of goods. This is wildly unrealistic, which is why I’m not actually saying we should reduce inequality to zero (as would follow if you took this result completely literally). Ironically, this is an assumption that most neoclassical welfare theory agrees with—the Second Welfare Theorem only makes any sense in a world where wealth can be magically redistributed between people without any harmful economic effects. If you weaken this assumption, what you find is basically that we should redistribute wealth toward equality, but beware of the tradeoff between too much redistribution and too little.

Second, I had to assume that there’s such a thing as “utility”—specifically, interpersonally comparable cardinal utility. In other words, I had to assume that there’s some way of measuring how much happiness each person has, and meaningfully comparing them so that I can say whether taking something from one person and giving it to someone else is good or bad in any given circumstance.

This is the assumption neoclassical welfare theory generally does not accept; instead they use ordinal utility, on which we can only say whether things are better or worse, but never by how much. Thus, their only way of determining whether a situation is better or worse is Pareto efficiency, which I discussed in a post a couple years ago. The change from the situation where Baker and Richie trade and Hungry is left in the lurch to the situation where all share cake and money equally in socialist utopia is not a Pareto-improvement. Richie and Baker are slightly worse off with 25.07 utilons in the latter scenario, while they had 28.74 utilons in the former.

Third, I had to assume selfishness—which is again fairly unrealistic, but again not something neoclassical theory disagrees with. If you weaken this assumption and say that people are at least partially altruistic, you can get the result where instead of buying things for themselves, people donate money to help others out, and eventually the whole system achieves optimal utility by willful actions. (It depends just how altruistic people are, as well as how unequal the initial endowments are.) This actually is basically what I’m trying to make happen in the real world—I want to show people that markets won’t do it on their own, but we have the chance to do it ourselves. But even then, it would go a lot faster if we used the power of government instead of waiting on private donations.

Also, I’m ignoring externalities, which are a different type of market failure which in no way conflicts with this type of failure. Indeed, there are three basic functions of government in my view: One is to maintain security. The second is to cancel externalities. The third is to redistribute wealth. The DOD, the EPA, and the SSA, basically. One could also add macroeconomic stability as a fourth core function—the Fed.

One way to escape my theorem would be to deny interpersonally comparable utility, but this makes measuring welfare in any way (including the usual methods of consumer surplus and GDP) meaningless, and furthermore results in the ridiculous claim that we have no way of being sure whether Bill Gates is happier than a child starving and dying of malaria in Burkina Faso, because they are two different people and we can’t compare different people. Far more reasonable is not to believe in cardinal utility, meaning that we can say an extra dollar makes you better off, but we can’t put a number on how much.

And indeed, the difficulty of even finding a unit of measure for utility would seem to support this view: Should I use QALY? DALY? A Likert scale from 0 to 10? There is no known measure of utility that is without serious flaws and limitations.

But it’s important to understand just how strong your denial of cardinal utility needs to be in order for this theorem to fail. It’s not enough that we can’t measure precisely; it’s not even enough that we can’t measure with current knowledge and technology. It must be fundamentally impossible to measure. It must be literally meaningless to say that taking a dollar from Bill Gates and giving it to the starving Burkinabe would do more good than harm, as if you were asserting that triangles are greener than schadenfreude.

Indeed, the whole project of welfare theory doesn’t make a whole lot of sense if all you have to work with is ordinal utility. Yes, in principle there are policy changes that could make absolutely everyone better off, or make some better off while harming absolutely no one; and the Pareto criterion can indeed tell you that those would be good things to do.

But in reality, such policies almost never exist. In the real world, almost anything you do is going to harm someone. The Nuremberg trials harmed Nazi war criminals. The invention of the automobile harmed horse trainers. The discovery of scientific medicine took jobs away from witch doctors. Inversely, almost any policy is going to benefit someone. The Great Leap Forward was a pretty good deal for Mao. The purges advanced the self-interest of Stalin. Slavery was profitable for plantation owners. So if you can only evaluate policy outcomes based on the Pareto criterion, you are literally committed to saying that there is no difference in welfare between the Great Leap Forward and the invention of the polio vaccine.

One way around it (that might actually be a good kludge for now, until we get better at measuring utility) is to broaden the Pareto criterion: We could use a majoritarian criterion, where you care about the number of people benefited versus harmed, without worrying about magnitudes—but this can lead to Tyranny of the Majority. Or you could use the Difference Principle developed by Rawls: find an ordering where we can say that some people are better or worse off than others, and then make the system so that the worst-off people are benefited as much as possible. I can think of a few cases where I wouldn’t want to apply this criterion (essentially they are circumstances where autonomy and consent are vital), but in general it’s a very good approach.

Neither of these depends upon cardinal utility, so have you escaped my theorem? Well, no, actually. You’ve weakened it, to be sure—it is no longer a statement about the fundamental impossibility of welfare-maximizing markets. But applied to the real world, people in Third World poverty are obviously the worst off, and therefore worthy of our help by the Difference Principle; and there are an awful lot of them and very few billionaires, so majority rule says take from the billionaires. The basic conclusion that it is a moral imperative to dramatically reduce global inequality remains—as does the realization that the “efficiency” and “optimality” of unregulated capitalism is a chimera.

Asymmetric nominal rigidity, or why everything is always “on sale”

July 9, JDN 2457579

The next time you’re watching television or shopping, I want you to count the number of items that are listed as “on sale” versus the number that aren’t. (Also, be careful to distinguish labels like “Low Price!” and “Great Value!” that are dressed up like “on sale” labels but actually indicate the usual price.) While “on sale” is presented as though it’s something rare and special, in reality anywhere from a third to half of all products are on sale at any given time. At some retailers (such as Art Van Furniture and Jos. A. Bank clothing), literally almost everything is almost always on sale.

There is a very good explanation for this in terms of cognitive economics. It is a special case of a more general phenomenon of asymmetric nominal rigidity. Asymmetric nominal rigidity is the tendency of human beings to be highly resistant to (rigidity) changes in actual (nominal) dollar prices, but only in the direction that hurts them (asymmetric). Ultimately this is an expression of the far deeper phenomenon of loss aversion, where losses are felt much more than gains.

Usually we actually talk about downward nominal wage rigidity, which is often cited as a reason why depressions can get so bad. People are extremely resistant to having their wages cut, even if there is a perfectly good reason to do so, and even if the economy is under deflation so that their real wage is not actually falling. It doesn’t just feel unpleasant; it feels unjust. People feel betrayed when they see the numbers on their paycheck go down, and they are willing to bear substantial costs to retaliate against that injustice—typically, they quit or go on strike. This reduces spending, which then exacerbates the deflation, which requires more wage cuts—and down we go into the spiral of depression, unless the government intervenes with monetary and fiscal policy.

But what does this have to do with everything being on sale? Well, for every downward wage rigidity, there is an upward price rigidity. When things become more expensive, people stop buying them—even if they could still afford them, and often even if the price increase is quite small. Again, they feel in some sense betrayed by the rising price (though not to the same degree as they feel betrayed by falling wages, due to their closer relationship to their employer). Responses to price increases are about twice as strong as responses to price decreases, just as losses are felt about twice as much as gains.

Businesses have figured this out—in some ways faster than economists did—and use it to their advantage; and thus so many things are “on sale”.

Actually, “on sale” serves two functions, which can be distinguished according to their marketing strategies. Businesses like Jos. A. Bank where almost everything is on sale are primarily exploiting anchoring—they want people to think of the listed “retail price” as the default price, and then the “sale price” that everyone actually pays feels lower as a result. If they “drop” the price of something from $300 to $150, it feels like the company is doing you a favor; whereas if they had just priced it at $150 to begin with, you wouldn’t get any warm fuzzy feelings from that. This works especially well for products that people don’t purchase very often and aren’t accustomed to comparing—which is why you see it in furniture stores and high-end clothing retailers, not in grocery stores and pharmacies.

But even when people are accustomed to shopping around and are familiar with what the price ordinarily would be, sales serve a second function, because of asymmetric nominal rigidity: They escape that feeling of betrayal that comes from raising prices.

Here’s how it works: Due to the thousand natural shocks that flesh is heir to, there will always be some uncertainty in the prices you will want to set in the future. Future prices may go up, they may go down; and people spend their lives trying to predict this sort of thing and rarely outperform chance. But if you just raise and lower your prices as the winds blow (as most neoclassical economists generally assume you will), you will alienate your customers. Just as a ratchet works by turning the bolt more in one direction than the other, this sort of roller-coaster pricing would attract a small number of customers with each price decrease, then repel a larger number with each increase, until after a few cycles of rise and fall you would run out of customers. This is the real source of price rigidities, not that silly nonsense about “menu costs”. Especially in the Information Age, it costs almost nothing to change the number on the label—but change it wrong and it may cost you the customer.
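To see how that ratchet grinds a seller down, here is a back-of-the-envelope simulation with invented numbers, where each price cut attracts a few customers and each raise repels roughly twice as many:

```python
# Toy simulation of the ratchet effect: oscillating prices bleed customers,
# because each raise repels about twice as many people as each cut attracts.
# All numbers are illustrative assumptions, not empirical estimates.

GAIN_PER_CUT = 20    # customers attracted by each price decrease (assumed)
LOSS_PER_RAISE = 40  # customers repelled by each increase (about 2x, per loss aversion)

customers = 500
for cycle in range(5):           # five full rise-and-fall cycles
    customers += GAIN_PER_CUT    # price drops: small inflow
    customers -= LOSS_PER_RAISE  # price rises back: larger outflow

print(customers)  # 400 -- down 100 customers after five cycles
```

Each full cycle ends with a net loss, even though the price itself ends up right back where it started.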

One response would simply be to set your price at a reasonable estimate of the long-term optimal average price, but this leaves a lot of money on the table: sometimes it will be too low (your inventory sells out and you make less profit than you could have), and even worse, other times it will be too high (customers refuse to buy your product). If only there were a way to change prices without customers feeling so betrayed!

Well, it turns out, there is, and it’s called “on sale”. You have a new product that you want to sell. You start by setting the price of the product at about the highest price you would ever need to sell it in the foreseeable future. Then, unless right now happens to be a time when demand is high and prices should also be high, you immediately put it on sale, and have the marketing team drum up some excuse about wanting to draw attention to your exciting new product. You put a deadline on that sale, which may be explicit (“Ends July 30”) or vague (“For a Limited Time!” which is technically always true—you merely promise that your sale will not last until the heat death of the universe), but clearly indicates to customers that you are not promising to keep this price forever.

Then, when demand picks up and you want to raise the price, you can! All you have to do is end the sale, which if you left the deadline vague can be done whenever you like. Even if you set explicit deadlines (which will make customers even more comfortable with the changes, and also give them a sense of urgency that may lead to more impulse buying), you can just implement a new sale each time the last one runs out, varying the discount according to market conditions. Customers won’t retaliate, because they won’t feel betrayed; you said fair and square the sale wouldn’t last forever. They will still buy somewhat less, of course; that’s the Law of Demand. But they won’t overcompensate out of spite and outrage; they’ll just buy the amount that is their new optimal purchase amount at this new price.
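The whole strategy boils down to something like this sketch (the list price and discount schedule are made-up numbers): fix a high list price once, then move the effective price by varying the discount, so the listed price never visibly rises.

```python
# Sketch of the sale-based pricing strategy described above. The list price is
# set near the highest price you'd ever need; the "sale" discount then does all
# the adjusting. Numbers and the discount schedule are illustrative assumptions.

LIST_PRICE = 200.0  # about the highest price you'd ever need (assumed)

def sale_price(demand_level):
    """Map current demand ('low', 'normal', 'high') to an effective price."""
    discounts = {"low": 0.40, "normal": 0.25, "high": 0.10}  # assumed schedule
    return LIST_PRICE * (1 - discounts[demand_level])

# The effective price moves with demand, but only the discount ever changes:
print(sale_price("low"))   # 120.0
print(sale_price("high"))  # 180.0
```

Ending one sale and starting a smaller one raises the price customers pay, without a single listed price ever going up.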

Coupons are a lot like sales, but they’re actually even more devious; they allow for a perfectly legal form of price discrimination. Businesses know that only certain types of people clip coupons; roughly speaking, people who are either very poor or very frugal—either way, people who are very responsive to prices. Coupons allow them to set a lower price for those groups of people, while setting a higher price for other people whose demand is more inelastic. A similar phenomenon is going on with student and senior discounts; students and seniors get lower prices because they typically have less income than other adults (though why there is so rarely a youth discount, only a student discount, I’m actually not sure—controlling for demographics, students are in general richer than non-students).
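A stylized two-segment example (segment sizes and willingness-to-pay entirely invented) shows why this is so profitable: with coupons, the seller collects more than any single price could earn.

```python
# Illustrative price discrimination via coupons, with two customer segments.
# Frugal shoppers clip the coupon and pay less; inelastic shoppers pay full
# price. All segment sizes and reservation prices are invented for the example.

FULL_PRICE = 10.0
COUPON_PRICE = 7.0

frugal = (100, 7.0)       # (segment size, highest price they'll pay); clips coupons
inelastic = (100, 10.0)   # buys at full price, never clips

def revenue_single(price):
    """Revenue if everyone faces one price: only willing segments buy."""
    return sum(n * price for n, willing in (frugal, inelastic) if willing >= price)

def revenue_coupon():
    """Revenue when coupons split the market into two effective prices."""
    return frugal[0] * COUPON_PRICE + inelastic[0] * FULL_PRICE

print(revenue_single(10.0))  # 1000.0 -- frugal shoppers priced out
print(revenue_single(7.0))   # 1400.0 -- everyone pays the low price
print(revenue_coupon())      # 1700.0 -- coupons capture both segments
```

The coupon lets the seller charge each segment close to what it is willing to pay, which no single price can do.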

Once you realize this is what’s happening, what should you do as a customer? Basically, try to ignore whether or not a label says “on sale”. Look at the actual number of the price, and try to compare it to prices you’ve paid in the past for that product, as well as, of course, how much the product is worth to you. If indeed this is a particularly low price and the product is durable, you may well be wise to purchase more and stock up for the future. But you should try to train yourself to react the same way to “On sale, now $49.99” as you would to simply “$49.99”. (Making your reaction exactly the same is probably impossible, but the closer you can get the better off you are likely to be.) Always compare prices from multiple sources for any major purchase (Amazon makes this easier than ever before), and compare actual prices you would pay—with discounts, after taxes, including shipping. The rest is window dressing.
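That comparison is simple enough to write down; here is a small helper (the tax rate and shipping figures are purely illustrative) that reduces any offer to the price you would actually pay:

```python
# Small helper for the advice above: compare the price you'd actually pay,
# after discounts, tax, and shipping, and ignore whether anything is "on sale".
# The tax rate and shipping costs below are illustrative assumptions.

def effective_price(list_price, discount=0.0, tax_rate=0.0, shipping=0.0):
    """Actual out-of-pocket cost of an offer."""
    return list_price * (1 - discount) * (1 + tax_rate) + shipping

# "On sale, now $49.99" plus shipping vs. a plain $52 with free shipping:
a = effective_price(49.99, tax_rate=0.08, shipping=5.99)
b = effective_price(52.00, tax_rate=0.08)
print(round(a, 2), round(b, 2))  # 59.98 56.16 -- the "sale" is the worse deal
```

Once everything is converted to the bottom-line number, the “sale” label carries no information at all.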

If you get coupons or special discounts, of course use them—but only if you were going to make the purchase anyway, or were just barely on the fence about it. Rarely is it actually rational for you to buy something you wouldn’t have bought just because it’s on sale for 50% off, let alone 10% off. It’s far more likely that you’d either want to buy it anyway, or still have no reason to buy it even at the new price. Businesses are of course hoping you’ll overcompensate for the discount and buy more than you would have otherwise. Foil their plans, and thereby make your life better and our economy more efficient.