How we sold our privacy piecemeal

Apr 2, JDN 2457846

The US Senate just narrowly voted to remove restrictions on the sale of user information by Internet Service Providers. Right now, your ISP can basically sell your information to whomever they like without even telling you. The new rule that the Senate struck down would have required them to at least make you sign a form with some fine print on it, which you probably would have signed without reading. So in practical terms, maybe it makes no difference.

…or does it? Maybe that’s really the mistake we’ve been making all along.

In cognitive science we have a concept called the just-noticeable difference (JND); it is basically what it sounds like. If you have two stimuli—two colors, say, or sounds of two different pitches—that differ by an amount smaller than the JND, people will not notice it. But if they differ by more than the JND, people will notice. (In practice it’s a bit more complicated than that, as different people have different JND thresholds and even within a person they can vary from case to case based on attention or other factors. But there’s usually a relatively narrow range of JND values, such that anything below that is noticed by no one and anything above that is noticed by almost everyone.)

The JND seems like an intuitively obvious concept—of course you can’t tell the difference between a color of 432.78 nanometers and 432.79 nanometers!—but it actually has profound implications. In particular it undermines the possibility of having truly transitive preferences. If you prefer some colors to others—which most of us do—but you have a nonzero JND in color wavelengths—as we all do—then I can do the following: Find one color you like (for concreteness, say you like blue of 475 nm), and another color you don’t (say green of 510 nm). Let you choose between the blue you like and another blue, 475.01 nm. Will you prefer one to the other? Of course not, the difference is within your JND. So now compare 475.01 nm and 475.02 nm; which do you prefer? Again, you’re indifferent. And I can go on and on this way a few thousand times, until finally I get to 510 nanometers, the green you didn’t like. I have just found a chain of your preferences that is intransitive; you said A = B = C = D… all the way down the line to X = Y = Z… but then at the end you said A > Z. Your preferences aren’t transitive, and therefore aren’t well-defined rational preferences. And you could do the same to me, so neither are mine.
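The chain argument is easy to make concrete. Here is a short sketch, assuming a purely illustrative JND of 0.05 nm (a made-up threshold; real JNDs vary by person and context):

```python
# A concrete sketch of the intransitive chain, assuming an illustrative
# JND of 0.05 nm (hypothetical; real thresholds vary by person and context).
JND = 0.05   # just-noticeable difference in wavelength, assumed
STEP = 0.01  # step between successive colors, well below the JND

def indifferent(a_nm, b_nm, jnd=JND):
    """You report indifference whenever two stimuli differ by less than the JND."""
    return abs(a_nm - b_nm) < jnd

# Walk from the liked blue (475 nm) to the disliked green (510 nm)
# in 3,500 steps too small to notice.
wavelengths = [475.0 + STEP * k for k in range(3501)]  # 475.00, 475.01, ..., 510.00

every_link_indifferent = all(
    indifferent(a, b) for a, b in zip(wavelengths, wavelengths[1:])
)
endpoints_indifferent = indifferent(wavelengths[0], wavelengths[-1])

print(every_link_indifferent)  # True: each adjacent pair reads as "A = B"
print(endpoints_indifferent)   # False: yet 475 nm and 510 nm plainly differ
```

Every adjacent comparison comes back "indifferent", yet the endpoints plainly differ; transitivity of indifference is exactly what the JND breaks.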

Part of the reason we’ve so willingly given up our privacy in the last generation or so is our paranoid fear of terrorism, which no doubt triggers deep instincts about tribal warfare. Depressingly, a plurality of Americans think that our government has not gone far enough in its obvious overreaches of the Constitution in the name of defending us from a threat that has killed fewer Americans in my lifetime than die in car accidents each month.

But that doesn’t explain why we—and I do mean we, for I am as guilty as most—have so willingly sold our relationships to Facebook and our schedules to Google. Google isn’t promising to save me from the threat of foreign fanatics; they’re merely offering me a more convenient way to plan my activities. Why, then, am I so cavalier about entrusting them with so much personal data?

 

Well, I didn’t start by giving them my whole life. I created an email account, which I used on occasion. I tried out their calendar app and used it to remind myself when my classes were. And so on, and so forth, until now Google knows almost as much about me as I know about myself.

At each step, it didn’t feel like I was doing anything of significance; perhaps indeed it was below my JND. Each bit of information I was giving didn’t seem important, and perhaps it wasn’t. But all together, our combined information allows Google to make enormous amounts of money without charging most of its users a cent.

The process goes something like this. Imagine someone offering you a penny in exchange for telling them how many times you made left turns last week. You’d probably take it, right? Who cares how many left turns you made last week? But then they offer another penny in exchange for telling them how many miles you drove on Tuesday. And another penny for telling them the average speed you drive during the afternoon. This process continues hundreds of times, until they’ve finally given you say $5.00—and they know exactly where you live, where you work, and where most of your friends live, because all that information was encoded in the list of driving patterns you gave them, piece by piece.
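The aggregation step is easy to sketch. The trip log and location names below are entirely hypothetical, but the inference is the standard one: wherever the car sits overnight is almost certainly home, and wherever it sits during working hours is probably work.

```python
from collections import Counter

# Hypothetical trip log: (hour_of_day, rounded location), each entry the kind
# of trivial datum you might sell for a penny.
trips = [
    (8,  "Maple St"), (9, "Office Park"),  (18, "Office Park"), (19, "Maple St"),
    (23, "Maple St"), (2, "Maple St"),     (8,  "Maple St"),    (9, "Office Park"),
    (13, "Cafe Row"), (18, "Office Park"), (19, "Maple St"),    (23, "Maple St"),
]

# Where does the car sit overnight? That location is almost certainly "home".
overnight = Counter(loc for hour, loc in trips if hour >= 22 or hour <= 5)
home = overnight.most_common(1)[0][0]

# Where does it sit during working hours? That location is probably "work".
daytime = Counter(loc for hour, loc in trips if 9 <= hour <= 17)
work = daytime.most_common(1)[0][0]

print(home, work)  # Maple St Office Park
```

No single row in the log is worth protecting; the combination is.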

Consider instead how you’d react if someone had offered, “Tell me where you live and work and I’ll give you $5.00.” You’d be pretty suspicious, wouldn’t you? What are they going to do with that information? And $5.00 really isn’t very much money. Maybe there’s a price at which you’d part with that information to a random suspicious stranger—but it’s probably at least $50 or even more like $500, not $5.00. But by asking it in 500 different questions for a penny each, they can obtain that information from you at a bargain price.

If you work out how much money Facebook and Google make from each user, it’s actually pitiful. Facebook has been increasing their revenue lately, but it’s still less than $20 per user per year. The stranger asks, “Tell me who all your friends are, where you live, where you were born, where you work, and what your political views are, and I’ll give you $20.” Do you take that deal? Apparently, we do. Polls find that most Americans are willing to exchange privacy for valuable services, often quite cheaply.

 

Of course, there isn’t actually an alternative social network that doesn’t sell data and instead just charges a subscription fee. I don’t think this is a fundamentally unfeasible business model, but it hasn’t succeeded so far, and it will have an uphill battle for two reasons.

The first is the obvious one: It would have to compete with Facebook and Google, who already have the enormous advantage of a built-in user base of hundreds of millions of people.

The second one is what this post is about: The social network based on conventional economics rather than selling people’s privacy can’t take advantage of the JND.

I suppose they could try—charge $0.01 per month at first, then after a while raise it to $0.02, $0.03 and so on until they’re charging $2.00 per month and actually making a profit—but that would be much harder to pull off, and it would provide the least revenue when it is needed most, at the early phase when the up-front costs of establishing a network are highest. Moreover, people would still feel that; it’s a good feature of our monetary system that you can’t break money into small enough denominations to really consistently hide under the JND. But information can be broken down into very tiny pieces indeed. Much of the revenue earned by these corporate giants is actually based upon indexing the keywords of the text we write; we literally sell off our privacy word by word.
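Some rough arithmetic shows how far below any monetary denomination this slicing goes. Both numbers here are assumptions for illustration, not measured figures:

```python
# Illustrative only: both inputs are assumptions, not measured figures.
annual_revenue_per_user = 20.00   # rough ad revenue per user per year, assumed
words_written_per_year = 20_000   # hypothetical volume of indexed text per user

revenue_per_word = annual_revenue_per_user / words_written_per_year
smallest_coin = 0.01              # one cent: the floor for monetary pricing

print(f"${revenue_per_word:.4f} per word")  # $0.0010 per word
print(revenue_per_word < smallest_coin)     # True: a tenth of the smallest coin
```

A tenth of a cent per word is a price no subscription model could ever charge, but an indexing model collects it invisibly.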

 

What should we do about this? Honestly, I’m not sure. Facebook and Google do in fact provide valuable services, without which we would be worse off. I would be willing to pay them their $20 per year, if I could ensure that they’d stop selling my secrets to advertisers. But as long as their current business model keeps working, they have little incentive to change. There is in fact a huge industry of data brokering, corporations you’ve probably never heard of that make their revenue entirely from selling your secrets.

In a rare moment of actual journalism, TIME ran an article about a year ago arguing that we need new government policy to protect us from this kind of predation upon our privacy. But they had little to offer in the way of concrete proposals.

The ACLU does better: They have specific proposals for regulations that should be made to protect our information from the most harmful prying eyes. But as we can see, the current administration has no particular interest in pursuing such policies—if anything they seem to do the opposite.

Caught between nepotism and credentialism

Feb 19, JDN 2457804

One of the more legitimate criticisms out there of us “urban elites” is our credentialism: our tendency to decide a person’s value as an employee or even as a human being based solely upon their formal credentials. Randall Collins, an American sociologist, wrote a book called The Credential Society arguing that much of the class stratification in the United States is traceable to this credentialism—upper-middle-class White Anglo-Saxon Protestants go to the good high schools to get into the good colleges to get the good careers, and all along the way maintain subtle but significant barriers to keep everyone else out.

A related concern is that of credential inflation, where more and more people get a given credential (such as a high school diploma or a college degree), and it begins to lose value as a signal of status. It is often noted that a bachelor’s degree today “gets” you the same jobs that a high school diploma did two generations ago, and two generations hence you may need a master’s or even a PhD.

I consider this concern wildly overblown, however. First of all, they’re not actually the same jobs at all. Even our “menial” jobs of today require skills that most people didn’t have two generations ago—not simply those involving electronics and computers, but even quite basic literacy and numeracy. Yes, you could be a banker in the 1920s with a high school diploma, but plenty of bankers in the 1920s didn’t know algebra. What, you think they were arbitraging derivatives based on the Black-Scholes model?

The primary purpose of education should be to actually improve students’ abilities, not to signal their superior status. More people getting educated is good, not bad. If we really do need signals, we can devise better ones than making people pay tens of thousands of dollars in tuition and spending years taking classes. An expenditure of that magnitude should be accomplishing something, not just signaling. (And given the overwhelming positive correlation between a country’s educational attainment and its economic development, clearly education is actually accomplishing something.) Our higher educational standards are directly tied to higher technology and higher productivity. If indeed you need a PhD to be a janitor in 2050, it will be because in 2050 a “janitor” is actually the expert artificial intelligence engineer who commands an army of cleaning robots, not because credentials have “inflated”. Thinking that credentials “inflate” requires thinking that business managers must be very stupid—that they would exclude whole swaths of qualified candidates whom they could pay less to do the same work. Only a complete moron would require a PhD to hire you for wielding a mop.

No, what concerns me is an over-emphasis on prestigious credentials over genuine competence. This is definitely a real issue in our society: Almost every US President went to an Ivy League university, yet several of them (George W. Bush, anyone?) clearly would not actually have been selected by such a university if their families had not been wealthy and well-connected. (Harvard’s application literally contains a question asking whether you are a “lineal or collateral descendant” of one of a handful of super-wealthy families.) Papers that contain errors so basic that I would probably get a failing grade as a grad student for them become internationally influential because they were written by famous economists with fancy degrees.

Ironically, it may be precisely because elite universities try not to give grades or special honors that so many of their students try so desperately to latch onto any bits of social status they can get their hands on. In this blog post, a former Yale law student comments on how, without grades or cum laude to define themselves, Yale students became fiercely competitive in the pettiest ways imaginable. Or it might just be a selection effect; to get into Yale you’ve probably got to be pretty competitive, so even if they don’t give out grades once you get there, you can take the student out of the honors track, but you can’t take the honors track out of the student.

But perhaps the biggest problem with credentialism is… I don’t see any viable alternatives!

We have to decide who is going to be hired for technical and professional positions somehow. It almost certainly can’t be everyone. And the most sensible way to do it would be to have a process people go through to get trained and evaluated on their skills in that profession—that is, a credential.

What else would we do? We could decide randomly, I suppose; well, good luck with that. Or we could try to pick people who don’t have qualifications (“anti-credentialism” I suppose), which would be systematically wrong. Or individual employers could hire individuals they know and trust on a personal level, which doesn’t seem quite so ridiculous—but we have a name for that too, and it’s nepotism.

Even anti-credentialism does exist, bafflingly enough. Many people voted for George W. Bush because they said he was “the kind of guy you can have a beer with”. That wasn’t true, of course; he was the spoiled child of a wealthy and powerful political family, a man who had never really worked a day in his life. But even if it had been true, so what? How is that a qualification to be the leader of the free world? And how many people voted for Trump precisely because he had no experience in government? This made sense to them somehow. (And, shockingly, he has no idea what he’s doing. Actually what is shocking is that he admits that.)

Nepotism of course happens all the time. In fact, nepotism is probably the default state for humans. The continual re-emergence of hereditary monarchy and feudalism around the world suggests that this is some sort of attractor state for human societies, that in the absence of strong institutional pressures toward some other system this is what people will generally settle into. And feudalism is nothing if not nepotistic; your position in life is almost entirely determined by your father’s position, and his father’s before that.

Formal credentials can put a stop to that. Of course, your ability to obtain the credential often depends upon your income and social status. But if you can get past those barriers and actually get the credential, you now have a way of pushing past at least some of the competitors who would have otherwise been hired on their family connections alone. The rise in college enrollments—and women actually now exceeding men in college enrollment rates—is one of the biggest reasons why the gender pay gap is rapidly closing among young workers. Nepotism and sexism that would otherwise have hired unqualified men is now overtaken by the superior credentials of qualified women.

Credentialism does still seem suboptimal… but from where I’m sitting, it seems like a second-best solution. We can’t actually observe people’s competence and ability directly, so we need credentials to provide an approximate measurement. We can certainly work to improve credentials—and for example, I am fiercely opposed to multiple-choice testing because it produces such meaningless credentials—but ultimately I don’t see any alternative to credentials.

Is intellectual property justified?

Feb 12, JDN 2457797

I had hoped to make this week’s post more comprehensive, but as I’ve spent the last week suffering from viral bronchitis I think I will keep this one short and revisit the topic in a few weeks.

Intellectual property underlies an increasingly large proportion of the world’s economic activity, more so now than ever before. We don’t just patent machines anymore; we patent drugs, and software programs, and even plants. Compared to that, copyrights on books, music, and movies seem downright pedestrian.

Though surely not the only cause, this is almost certainly contributing to the winner-takes-all effect; if you own the patent to something important, you can appropriate a huge amount of wealth to yourself with very little effort.

Moreover, this is not something that happened automatically as a natural result of market forces or autonomous human behavior. This is a policy, one that requires large investments in surveillance and enforcement to maintain. Intellectual property is probably the single largest market intervention that our government makes, and it is in a very strange direction: With antitrust law, the government seeks to undermine monopolies; but with intellectual property, the government seeks to protect monopolies.

So it’s important to ask: What is the justification for intellectual property? Do we actually have a good reason for doing this?

The basic argument goes something like this:

Many intellectual endeavors, such as research, invention, and the creation of art, require a large up-front investment of resources to complete, but once completed it costs almost nothing to disseminate the results. There is a very large fixed cost that makes it difficult to create these goods at all, but once they exist, the marginal cost of producing more of them is minimal.

If we didn’t have any intellectual property, once someone created an invention or a work of art, someone else could simply copy it and sell it at a much lower price. If enough competition emerged to drive price down to marginal cost, the original creator of the good would not only not profit, but would actually take an enormous loss, as they paid that large fixed cost but none of their competitors did.

Thus, knowing that they will take a loss if they do, individuals will not create inventions or works of art in the first place. Without intellectual property, all research, invention, and art would grind to a halt.
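The argument above can be put in toy numbers. Everything below is assumed for illustration; the point is only the sign of the creator’s profit in each regime:

```python
# Toy model of the standard argument for IP; every number here is an assumption.
fixed_cost = 1_000_000   # up-front cost to create the work (research, writing, etc.)
marginal_cost = 1.00     # cost to produce one more copy
quantity_sold = 200_000  # copies sold in either regime

# Without IP: competitors who paid no fixed cost drive price down to marginal cost.
price_competitive = marginal_cost
creator_profit_no_ip = (price_competitive - marginal_cost) * quantity_sold - fixed_cost

# With IP: the creator holds a monopoly and can price above marginal cost.
price_monopoly = 10.00
creator_profit_ip = (price_monopoly - marginal_cost) * quantity_sold - fixed_cost

print(creator_profit_no_ip)  # -1000000.0: the creator eats the entire fixed cost
print(creator_profit_ip)     # 800000.0: the monopoly markup recoups it
```

The toy model captures the argument’s logic, not its truth: the question the rest of the post raises is whether creation really collapses without that monopoly markup.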

 

That last sentence sounds terrible, right? What would we do without research, invention, or art? But then if you stop and think about it for a minute, it becomes clear that this can’t possibly be the outcome of eliminating intellectual property. Most societies throughout the history of human civilization have not had a system of intellectual property, and yet they have all had art, and most of them have had research and invention as well.

If intellectual property is to be defended, it can’t be because we would have none of these things without it—it must be that we would have less, and so much less that it offsets the obvious harms of concentrating so much wealth and power in a handful of individuals.

I had hoped to get into the empirical results of different intellectual property regimes, but due to my illness I’m going to save that for another day.

Instead I’m just going to try to articulate what the burden of proof here really needs to be.

First of all, showing that we spend a lot of money on patents contributes absolutely nothing useful to defending them. Yes, we all know patents are expensive. The question is whether they are worth it. To show that this is not a strawman, here’s an article by IP Watchdog that treats a new study (claiming that academic patent licensing contributed more than $1 trillion to the U.S. economy over eighteen years) as some kind of knockdown argument in favor of patents. If you actually showed that this economic activity would not exist without patents, then that would be an argument for patents. But all this study actually does is show that we spend that much on patents, which says nothing about whether this is a good use of resources. It’s like when people try to defend the F-35 boondoggle by saying “it supports thousands of jobs!”; well, yes, but what about the millions of jobs we could be supporting instead if we used that money for something more efficient? (And indeed, the evidence is quite clear that spending on the F-35 destroys more jobs than it creates.) So any serious estimate of the economic benefits of intellectual property must also come with an estimate of the economic cost of intellectual property, or it is just propaganda.

It’s not enough to show some non-negligible (much less merely “statistically significant”) increase in innovation as a result of intellectual property. The effect size is critical; the increase in innovation needs to be large enough to justify having world-spanning monopolies that concentrate the world’s wealth in the hands of a few individuals. And we already know that intellectual property concentrates wealth: patents and copyrights are monopolies, and monopolies concentrate wealth. It’s not enough to show that there is a benefit; that benefit must be greater than the cost, and there must be no alternative methods that would achieve a greater net benefit.

It’s also important to be clear what we mean by “innovation”, which can be a very difficult thing to measure. In principle what we really want to know is whether we are supporting important innovation—whether we will get more Mona Lisas and more polio vaccines, not simply more Twilight and more Viagra. And one of the key problems with intellectual property as a method of funding innovation is that there is only a vague link between the profits that can be extracted and the benefits of the innovation. (Though to be fair, this is a more general problem; it is literally a mathematical theorem that competitive markets only maximize utility if you weight rich people more, in inverse proportion to their marginal utility of wealth.)

Innovation is certainly important. Indeed, it is no exaggeration to say that innovation is the foundation of economic development and civilization itself. Defenders of intellectual property often want you to stop the conversation there: “Innovation is important!” Don’t let them. It’s not enough to say that innovation is important; intellectual property must also be the best way of achieving that innovation.

Is it? Well, in a few weeks I’ll get back to what the data actually says on this. There is some evidence supporting intellectual property—but the case is a lot weaker than you have probably been led to believe.

Bigotry is more powerful than the market

Nov 20, JDN 2457683

If there’s one message we can take from the election of Donald Trump, it is that bigotry remains a powerful force in our society. A lot of autoflagellating liberals have been trying to explain how this election result really reflects our failure to help people displaced by technology and globalization (despite the fact that personal income and local unemployment had negligible correlation with voting for Trump), or Hillary Clinton’s “bad campaign” that nonetheless managed the same proportion of Democrat turnout that re-elected her husband in 1996.

No, overwhelmingly, the strongest predictor of voting for Trump was being White, and living in an area where most people are White. (Well, actually, that’s if you exclude authoritarianism as an explanatory variable—but really I think that’s part of what we’re trying to explain.) Trump voters were actually concentrated in areas less affected by immigration and globalization. Indeed, there is evidence that these people aren’t racist because they have anxiety about the economy—they are anxious about the economy because they are racist. How does that work? Obama. They can’t believe that the economy is doing well when a Black man is in charge. So all the statistics and even personal experiences mean nothing to them. They know in their hearts that unemployment is rising, even as the BLS data clearly shows it’s falling.

The wide prevalence and enormous power of bigotry should be obvious. But economists rarely talk about it, and I think I know why: Their models say it shouldn’t exist. The free market is supposed to automatically eliminate all forms of bigotry, because they are inefficient.

The argument for why this is supposed to happen actually makes a great deal of sense: If a company has the choice of hiring a White man or a Black woman to do the same job, but they know that the market wage for Black women is lower than the market wage for White men (which it most certainly is), and they will do the same quality and quantity of work, why wouldn’t they hire the Black woman? And indeed, if human beings were rational profit-maximizers, this is probably how they would think.

More recently some neoclassical models have been developed to try to “explain” this behavior, but always without daring to give up the precious assumption of perfect rationality. So instead we get the two leading neoclassical theories of discrimination, which are statistical discrimination and taste-based discrimination.

Statistical discrimination is the idea that under asymmetric information (and we surely have that), features such as race and gender can act as signals of quality because they are correlated with actual quality for various reasons (usually left unspecified), so it is not irrational after all to choose based upon them, since they’re the best you have.

Taste-based discrimination is the idea that people are rationally maximizing preferences that simply aren’t oriented toward maximizing profit or well-being. Instead, they have this extra term in their utility function that says they should also treat White men better than women or Black people. It’s just this extra thing they have.
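This “extra term” has a standard formalization: in Becker-style models of taste-based discrimination, the employer acts as if a disfavored worker’s wage were inflated by a discrimination coefficient d. A minimal sketch, with every number assumed for illustration:

```python
# Becker-style taste-based discrimination, minimal sketch (all numbers assumed).
def perceived_cost(wage, disfavored, d=0.25):
    """The employer acts as if a disfavored worker costs (1 + d) times their wage;
    d is the 'discrimination coefficient' baked into their utility function."""
    return wage * (1 + d) if disfavored else wage

wage_favored = 90.0      # market wage for the favored group, assumed
wage_disfavored = 80.0   # lower market wage for the disfavored group, assumed

# A pure profit-maximizer hires whoever actually costs less:
print(wage_disfavored < wage_favored)  # True: the disfavored worker is cheaper

# The discriminating employer compares *perceived* costs instead:
print(perceived_cost(wage_disfavored, True) < perceived_cost(wage_favored, False))
# False: 80 * 1.25 = 100 > 90, so the favored worker is hired despite costing more
```

Notice what the model does: the discriminatory hire is made “rational” by construction, because the taste for discrimination was written directly into the objective function.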

A small number of studies have been done trying to discern which of these is at work.

The correct answer, of course, is neither.

Statistical discrimination, at least, could be part of what’s going on. Knowing that Black people are less likely to be highly educated than Asians (as they definitely are) might actually be useful information in some circumstances… then again, you list your degree on your resume, don’t you? Knowing that women are more likely to drop out of the workforce after having a child could rationally (if coldly) affect your assessment of future productivity. But shouldn’t the fact that women CEOs outperform men CEOs be incentivizing shareholders to elect women CEOs? Yet that doesn’t seem to happen. Also, in general, people seem to be pretty bad at statistics.

The bigger problem with statistical discrimination as a theory is that it’s really only part of a theory. It explains why not all of the discrimination has to be irrational, but some of it still does. You need to explain why there are these huge disparities between groups in the first place, and statistical discrimination is unable to do that. In order for the statistics to differ this much, you need a past history of discrimination that wasn’t purely statistical.

Taste-based discrimination, on the other hand, is not a theory at all. It’s special pleading. Rather than admit that people are failing to rationally maximize their utility, we just redefine their utility so that whatever they happen to be doing now “maximizes” it.

This is really what makes the Axiom of Revealed Preference so insidious; if you really take it seriously, it says that whatever you do, must by definition be what you preferred. You can’t possibly be irrational, you can’t possibly be making mistakes of judgment, because by definition whatever you did must be what you wanted. Maybe you enjoy bashing your head into a wall, who am I to judge?

I mean, on some level taste-based discrimination is what’s happening; people think that the world is a better place if they put women and Black people in their place. So in that sense, they are trying to “maximize” some “utility function”. (By the way, most human beings behave in ways that are provably inconsistent with maximizing any fixed expected-utility function; the Allais Paradox is a classic example.) But the whole framework of calling it “taste-based” is a way of running away from the real explanation. If it’s just “taste”, well, it’s an unexplainable brute fact of the universe, and we just need to accept it. If people are happier being racist, what can you do, eh?
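For the Allais Paradox specifically, the inconsistency can be checked directly. In the classic setup, most people prefer a sure $1M (A) to a 10% shot at $5M with a 1% risk of nothing (B), yet also prefer a 10% shot at $5M (D) to an 11% shot at $1M (C). Algebraically, A over B requires 0.11·u($1M) > 0.10·u($5M) + 0.01·u($0), while D over C requires exactly the reverse, so no expected-utility function can produce both choices. The sketch below just confirms this for a family of candidate utilities:

```python
import math

# The classic Allais gambles (outcomes in millions of dollars):
# A: $1M for sure               B: 10% $5M, 89% $1M, 1% $0
# C: 11% $1M, 89% $0            D: 10% $5M, 90% $0
# Most people choose A over B, and D over C.

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

A = [(1.00, 1)]
B = [(0.10, 5), (0.89, 1), (0.01, 0)]
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

# Sweep a broad family of increasing utility functions, from nearly flat
# (very risk-averse) to convex (risk-loving), plus a logarithmic one.
candidates = [lambda x, a=a: x ** a for a in [0.03, 0.1, 0.3, 0.5, 0.7, 1.0, 2.0]]
candidates.append(lambda x: math.log(1 + x))

# Does any candidate rationalize the modal pattern (A over B AND D over C)?
rationalized = any(
    expected_utility(A, u) > expected_utility(B, u)
    and expected_utility(D, u) > expected_utility(C, u)
    for u in candidates
)
print(rationalized)  # False: no candidate utility fits both choices
```

The sweep is only illustrative; the algebra above shows the result holds for every possible utility function, not just these candidates.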

So I think it’s high time to start calling it what it is. This is not a question of taste. This is a question of tribal instinct. This is the product of millions of years of evolution optimizing the human brain to act in the perceived interest of whatever it defines as its “tribe”. It could be yourself, your family, your village, your town, your religion, your nation, your race, your gender, or even the whole of humanity or beyond into all sentient beings. But whatever it is, the fundamental tribe is the one thing you care most about. It is what you would sacrifice anything else for.

And what we learned on November 9 this year is that an awful lot of Americans define their tribe in very narrow terms. Nationalistic and xenophobic at best, racist and misogynistic at worst.

But I suppose this really isn’t so surprising, if you look at the history of our nation and the world. Segregation was not outlawed in US schools until 1954, and there are women who voted in this election who were born before American women got the right to vote in 1920. The nationalistic backlash against sending jobs to China (which was one of the chief ways that we reduced global poverty to its lowest level ever, by the way) really shouldn’t seem so strange when we remember that over 100,000 Japanese-Americans were literally forcibly relocated into camps as recently as 1942. The fact that so many White Americans seem all right with the biases against Black people in our justice system may not seem so strange when we recall that systemic lynching of Black people in the US didn’t end until the 1960s.

The wonder, in fact, is that we have made as much progress as we have. Tribal instinct is not a strange aberration of human behavior; it is our evolutionary default setting.

Indeed, perhaps it is unreasonable of me to ask humanity to change its ways so fast! We had millions of years to learn how to live the wrong way, and I’m giving you only a few centuries to learn the right way?

The problem, of course, is that the pace of technological change leaves us with no choice. It might be better if we could wait a thousand years for people to gradually adjust to globalization and become cosmopolitan; but climate change won’t wait a hundred, and nuclear weapons won’t wait at all. We are thrust into a world that is changing very fast indeed, and I understand that it is hard to keep up; but there is no way to turn back that tide of change.

Yet “turn back the tide” does seem to be part of the core message of the Trump voter, once you get past the racial slurs and sexist slogans. People are afraid of what the world is becoming. They feel that it is leaving them behind. Coal miners fret that we are leaving them behind by cutting coal consumption. Factory workers fear that we are leaving them behind by moving the factory to China or inventing robots to do the work in half the time for half the price.

And truth be told, they are not wrong about this. We are leaving them behind. Because we have to. Because coal is polluting our air and destroying our climate, we must stop using it. Moving the factories to China has raised them out of the most dire poverty, and given us a fighting chance toward ending world hunger. Inventing the robots is only the next logical step in the process that has carried humanity forward from the squalor and suffering of primitive life to the security and prosperity of modern society—and it is a step we must take, for the progress of civilization is not yet complete.

They wouldn’t have to let themselves be left behind, if they were willing to accept our help and learn to adapt. That carbon tax that closes your coal mine could also pay for your basic income and your job-matching program. The increased efficiency from the automated factories could provide an abundance of wealth that we could redistribute and share with you.

But this would require them to rethink their view of the world. They would have to accept that climate change is a real threat, and not a hoax created by… uh… never was clear on that point actually… the Chinese maybe? But 45% of Trump supporters don’t believe in climate change (and that’s actually not as bad as I’d have thought). They would have to accept that what they call “socialism” (which really is more precisely described as social democracy, or tax-and-transfer redistribution of wealth) is actually something they themselves need, and will need even more in the future. But despite rising inequality, redistribution of wealth remains fairly unpopular in the US, especially among Republicans.

Above all, it would require them to redefine their tribe, and start listening to—and valuing the lives of—people that they currently do not.

Perhaps we need to redefine our tribe as well; many liberals have argued that we mistakenly—and dangerously—did not include people like Trump voters in our tribe. But to be honest, that rings a little hollow to me: We aren’t the ones threatening to deport people or ban them from entering our borders. We aren’t the ones who want to build a wall (though some have in fact joked about building a wall to separate the West Coast from the rest of the country, I don’t think many people really want to do that). Perhaps we live in a bubble of liberal media? But I make a point of reading outlets like The American Conservative and The National Review for other perspectives (I usually disagree, but I do at least read them); how many Trump voters do you think have ever read the New York Times, let alone Huffington Post? Cosmopolitans almost by definition have the more inclusive tribe, the more open perspective on the world (in fact, do I even need the “almost”?).

Nor do I think we are actually ignoring their interests. We want to help them. We offer to help them. In fact, I want to give these people free money—that’s what a basic income would do, it would take money from people like me and give it to people like them—and they won’t let us, because that’s “socialism”! Rather, we are simply refusing to accept their offered solutions, because those so-called “solutions” are beyond unworkable; they are absurd, immoral and insane. We can’t bring back the coal mining jobs, unless we want Florida underwater in 50 years. We can’t reinstate the trade tariffs, unless we want millions of people in China to starve. We can’t tear down all the robots and force factories to use manual labor, unless we want to trigger a national—and then global—economic collapse. We can’t do it their way. So we’re trying to offer them another way, a better way, and they’re refusing to take it. So who here is ignoring the concerns of whom?

Of course, the fact that it’s really their fault doesn’t solve the problem. We do need to take it upon ourselves to do whatever we can, because, regardless of whose fault it is, the world will still suffer if we fail. And that presents us with our most difficult task of all, a task that I fully expect to spend a career trying to do and yet still probably failing: We must understand the human tribal instinct well enough that we can finally begin to change it. We must know enough about how human beings form their mental tribes that we can actually begin to shift those parameters. We must, in other words, cure bigotry—and we must do it now, for we are running out of time.

Why are movies so expensive? Did they used to be? Do they need to be?

August 10, JDN 2457611

One of the better arguments in favor of copyright involves film production. Films are extraordinarily expensive to produce; without copyright, how would they recover their costs? $100 million is a common budget these days.

It is commonly thought that film budgets used to be much smaller, so I looked at some data from The Numbers on over 5,000 films going back to 1915, and inflation-adjusted the budgets using the CPI. (I learned some interesting LibreOffice Calc functions in the process of merging the data; LibreOffice also crashed a few times trying to make the graphs, so that’s fun. I finally realized it had copied over all 10,000 hyperlinks from the HTML data set.)
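The adjustment itself amounts to one multiplication per film; here is a minimal Python sketch of it (the CPI values below are approximate annual averages I am plugging in for illustration, not the exact series I used):

```python
# Inflation-adjust a nominal dollar figure to 2015 dollars using the CPI:
# real = nominal * (CPI in base year / CPI in source year)

CPI_2015 = 237.0  # approximate annual-average CPI-U for 2015
cpi = {1939: 13.9, 1963: 30.6, 2009: 214.5}  # approximate annual averages

def to_2015_dollars(nominal, year):
    """Convert a nominal figure from `year` into 2015 dollars."""
    return nominal * CPI_2015 / cpi[year]

# Gone with the Wind's roughly $3.9 million 1939 budget, in 2015 dollars:
print(round(to_2015_dollars(3.9e6, 1939) / 1e6, 1))  # ~66.5 (million)
```

That roughly $66 million figure is the real budget quoted for Gone with the Wind later in this post.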

If you just look at the nominal figures, there does seem to be some sort of upward trend:

[Figure: movie budgets over time, nominal dollars]

But once you do the proper inflation adjustment, this trend basically disappears:

[Figure: movie budgets over time, inflation-adjusted to 2015 dollars]

In real terms, the grosses of some early movies are quite large. Adjusted to 2015 dollars, Gone with the Wind grossed $6.659 billion—still the highest ever. In 1937, Snow White and the Seven Dwarfs grossed over $3.043 billion in 2015 dollars. In 1950, Cinderella made it to $2.592 billion in today’s money. (Horrifyingly, The Birth of a Nation grossed $258 million in today’s money.)

Nor is there any evidence that movie production has gotten more expensive. The linear trend is actually negative, though with a very small slope that is not statistically significant. On average, the real budget of a movie falls by $1,752 per year.

[Figure: inflation-adjusted movie budgets with linear trend line]

While the two most expensive movies came out recently (Pirates of the Caribbean: At World’s End and Avatar), the third most expensive was released in 1963 (Cleopatra). The really hugely expensive movies do seem to cluster relatively recently—but then so do the really cheap films, some of which have budgets under $10,000. It may just be that more movies are produced in general, and overall the cost of producing a film doesn’t seem to have changed in real terms. The best return on investment is My Date with Drew, released in 2005, which had a budget of $1,100 but grossed $181,000, giving it an ROI of 16,358%. The highest real profit was of course Gone with the Wind, which made an astonishing $6.592 billion, though Titanic, Avatar, Aliens and Terminator 2 combined actually beat it with a total profit of $6.651 billion, which may explain why James Cameron can now basically make any movie he wants and already has four sequels lined up for Avatar.

The biggest real loss was 1970’s Waterloo, which made back only $18 million of its $153 million budget, losing $135 million and having an ROI of -87.7%. This was not quite as bad an ROI as 2002’s The Adventures of Pluto Nash, which had an ROI of -92.91%.
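The ROI figures are simple arithmetic. Here is a quick sketch, using the rounded grosses and budgets quoted above, so the outputs differ slightly from the figures in the text, which I computed from unrounded data:

```python
def roi(gross, budget):
    """Return on investment, as a percentage of the budget."""
    return 100 * (gross - budget) / budget

# My Date with Drew: $1,100 budget, roughly $181,000 gross
print(round(roi(181_000, 1_100)))   # ~16,355%

# Waterloo, in real dollars: $153 million budget, $18 million gross
print(round(roi(18e6, 153e6), 1))   # ~-88.2%
```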

But making movies has always been expensive, at least for big blockbusters. (The $8,900 budget of Primer is something I could probably put on credit cards if I had to.) It’s nothing new to spend $100 million in today’s money.

When considering the ethics and economics of copyright, it’s useful to think about what Michele Boldrin calls “pizzaright”: you can’t copy my pizza, or you are guilty of pizzaright infringement. Many of the arguments for copyright are so general—this is a valuable service, it carries some risk of failure, it wouldn’t be as profitable without the monopoly, so fewer companies might enter the business—that they would also apply to pizza. Yet somehow nobody thinks that pizzaright should be a thing. If there is a justification for copyrights, it must come from the special circumstances of works of art (broadly conceived, including writing, film, music, etc.), and the only one that really seems strong enough is the high upfront cost of certain types of art—and indeed, the only ones that really seem to fit that are films and video games.

Painting, writing, and music just aren’t that expensive. People are willing to create these things for very little money, and can do so more or less on their own, especially nowadays. If the prices are reasonable, people will still want to buy from the creators directly—and sure enough, widespread music piracy hasn’t killed music, it has only killed the corporate record industry. But movies and video games really can easily cost $100 million to make, so there’s a serious concern of what might happen if they couldn’t use copyright to recover their costs.

The question for me is, did we really need copyright to fund these budgets?

Let’s take a look at how Star Wars made its money. $6.249 billion came from box office revenue, while $873 million came from VHS and DVD sales; those would probably be substantially reduced if not for copyright. But even before The Force Awakens was released, the Star Wars franchise had already made some $12 billion in toy sales alone. “Merchandizing, merchandizing, where the real money from the movie is made!”

Did they need intellectual property to do that? Well, yes—but all they needed was trademark. Defenders of “intellectual property” like to use that term because it elides fundamental distinctions between the three types: trademark, copyright, and patent.

Trademark is unproblematic. You can’t lie about who you are or where your products came from when you’re selling something. So if you are claiming to sell official Star Wars merchandise, you’d better be selling official Star Wars merchandise, and trademark protects that.

Copyright is problematic, but may be necessary in some cases. Copyright protects the content of the movies from being copied or modified without Lucasfilm’s permission. So now rather than simply protecting against the claim that you represent Lucasfilm, we are protecting against people buying the movie, copying it, and reselling the copies—even though that is a real economic service they are providing, and is in no way fraudulent as long as they are clear about the fact that they made the copies.

Patent is, frankly, ridiculous. The concept of “owning” ideas is absurd. You came up with a good way to do something? Great! Go do it then. But don’t expect other people to pay you simply for the privilege of hearing your good idea. Of course I want to financially support researchers, but there are much, much better ways of doing that, like government grants and universities. Patents only raise revenue for research that sells, first of all—so vaccines and basic research can’t be funded that way, even though they are the most important research by far. Furthermore, there’s nothing to guarantee that the person who actually invented the idea is the one who makes the profit from it—and in our current system where corporations can own patents (and do own almost 90% of patents), it typically isn’t. Even if it were, the whole concept of owning ideas is nonsensical, and it has driven us to the insane extremes of corporations owning patents on human DNA. The best argument I’ve heard for patents is that they are a second-best solution that incentivizes transparency and keeps trade secrets from becoming commonplace; but in that case they should definitely be short, and we should never extend them. Companies should not be able to make basically cosmetic modifications and renew the patent, and expiring patents should be a cause for celebration.

Hollywood actually formed in Los Angeles precisely to escape patents, but of course the studios love copyright and trademark. So do they really support “intellectual property”, or only the parts of it that serve them?

Could blockbuster films be produced profitably using only trademark, in the absence of copyright?

Clearly Star Wars would have still turned a profit. But not every movie can do such merchandizing, and when movies start getting written purely for merchandizing it can be painful to watch.

The real question is whether a film like Gone with the Wind or Avatar could still be made, and make a reasonable profit (if a much smaller one).

Well, there’s always porn. Porn raises over $400 million per year in revenue, despite having essentially unenforceable copyright. They too are outraged over piracy, yet somehow I don’t think porn will ever cease to exist. A top porn star can make over $200,000 per year.

Then there are of course independent films that never turn a profit at all, yet people keep making them.

So clearly it is possible to make some films without copyright protection, and something like Gone with the Wind needn’t cost $100 million to make. The only reason it cost as much as it did (about $66 million in today’s money) was that movie stars could command huge winner-takes-all salaries, which would no longer be true if copyright went away. And don’t tell me people wouldn’t be willing to be movie stars for $200,000 a year instead of $1.8 million (what Clark Gable made for Gone with the Wind, adjusted for inflation).

Yet some Hollywood blockbuster budgets are genuinely necessary. The real question is whether we could have Avatar without copyright. Not having films like Avatar is something I would count as a substantial loss to our society; we would lose important pieces of our art and culture.

So, where did all that money go? I don’t have a breakdown for Avatar in particular, but I do have a full budget breakdown for The Village. Of its $71.7 million, $33.5 million was “above the line”, which basically means the winner-takes-all superstar salaries for the director, producer, and cast. That amount could be dramatically reduced with no real cost to society—let’s drop it to say $3 million. Shooting costs were $28.8 million, post-production was $8.4 million, and miscellaneous expenses added about $1 million; all of those would be much harder to reduce (they mainly go to technical staff who make reasonable salaries, not to superstars), so let’s assume the full amount is necessary. That’s about $41 million in real cost to produce. Avatar had a lot more (and better) post-production, so let’s go ahead and multiply the post-production budget by an order of magnitude to $84 million. Our new total budget is $116.8 million.

That sounds like a lot, and it is; but this could be made back without copyright. Avatar sold over 14.5 million DVDs and over 8 million Blu-Rays. Conservatively assuming that the price elasticity of demand is zero (which is ridiculous—assuming the monopoly pricing is optimal it should be -1), if those DVDs were sold for $2 each and the Blu-Rays were sold for $5 each, with 50% of those prices being profit, this would yield a total profit of $14.5 million from DVDs and $20 million from Blu-Rays. That’s already $34.5 million. With realistic assumptions about elasticity of demand, cutting the prices this much (DVDs down from an average of $16, Blu-Rays down from an average of $20) would multiply the number of DVDs sold by at least 5 and the number of Blu-Rays sold by at least 3, which would get us all the way up to $132 million—enough to cover our new budget. (Of course this is much less than they actually made, which is why they set the prices they did—but that doesn’t mean it’s optimal from society’s perspective.)
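Here is the back-of-the-envelope arithmetic behind those numbers, as a sketch; the unit sales are the figures quoted above, while the slashed prices and the 50% margin are the assumptions made in the text:

```python
# Cut-price home video profit under the assumptions in the text.
dvd_units, bluray_units = 14.5e6, 8e6   # units sold at the actual retail prices
dvd_price, bluray_price = 2.0, 5.0      # hypothetical slashed prices
margin = 0.5                            # assume half the sale price is profit

def profit(units, price):
    return units * price * margin

# Zero-elasticity case: unit sales unchanged despite the price cut
base = profit(dvd_units, dvd_price) + profit(bluray_units, bluray_price)
print(base / 1e6)      # 34.5 (million dollars)

# More realistic demand response: 5x DVD units, 3x Blu-Ray units
boosted = profit(5 * dvd_units, dvd_price) + profit(3 * bluray_units, bluray_price)
print(boosted / 1e6)   # 132.5 -- enough to cover the revised budget
```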

But okay, suppose I’m wrong about the elasticity, and dropping the price from $16 to $2 for a DVD somehow wouldn’t actually increase the number purchased. What other sources of revenue would they have? Well, box office tickets would still be a thing. They’d have to come down in price, but given the high-quality high-fidelity versions that cinemas require—making them quite hard to pirate—they would still get decent money from each cinema. Let’s say the price drops by 90%—all cinemas are now $1 cinemas!—and the sales again somehow remain exactly the same (rather than dramatically increasing as they actually would). What would Avatar’s worldwide box office gross be then? $278 million. They could give the DVDs away for free and still turn a profit.

And that’s Avatar, one of the most expensive movies ever made. By cutting out the winner-takes-all salaries and huge corporate profits, the budget can be substantially reduced, and then what real costs remain can be quite well covered by box office and DVD sales at reasonable prices. If you imagine that piracy somehow undercuts everything until you have to give away things for free, you might think this is impossible; but in reality pirated versions are of unreliable quality, people do want to support artists and they are willing to pay something for their entertainment. They’re just tired of paying monopoly prices to benefit the shareholders of Viacom.

Would this end the era of the multi-millionaire movie star? Yes, I suppose it might. But it would also put about $10 billion per year back in the pockets of American consumers—and there’s little reason to think it would take away future Avatars, much less future Gone with the Winds.

“The cake is a lie”: The fundamental distortions of inequality

July 13, JDN 2457583

Inequality of wealth and income, especially when it is very large, fundamentally and radically distorts outcomes in a capitalist market. I’ve already alluded to this matter in previous posts on externalities and marginal utility of wealth, but it is so important I think it deserves to have its own post. In many ways this marks a paradigm shift: You can’t think about economics the same way once you realize it is true.

To motivate what I’m getting at, I’ll expand upon an example from a previous post.

Suppose there are only two goods in the world; let’s call them “cake” (K) and “money” (M). Then suppose there are three people, Baker, who makes cakes, Richie, who is very rich, and Hungry, who is very poor. Furthermore, suppose that Baker, Richie and Hungry all have exactly the same utility function, which exhibits diminishing marginal utility in cake and money. To make it more concrete, let’s suppose that this utility function is logarithmic, specifically: U = 10*ln(K+1) + ln(M+1)

The only difference between them is in their initial endowments: Baker starts with 10 cakes, Richie starts with $100,000, and Hungry starts with $10.

Therefore their starting utilities are:

U(B) = 10*ln(10+1) + ln(0+1) = 23.98

U(R) = 10*ln(0+1) + ln(100,000+1) = 11.51

U(H) = 10*ln(0+1) + ln(10+1) = 2.40

Thus, the total happiness is the sum of these: U = 37.89

Now let’s ask two very simple questions:

1. What redistribution would maximize overall happiness?
2. What redistribution will actually occur if the three agents trade rationally?

If multiple agents have the same diminishing marginal utility function, it’s actually a simple and deep theorem that the total will be maximized if they split the wealth exactly evenly. In the following blockquote I’ll prove the simplest case, which is two agents and one good; it’s an incredibly elegant proof:

Given: for all x, f(x) > 0, f'(x) > 0, f''(x) < 0.

Maximize: f(x) + f(A − x) for fixed A

Setting the derivative with respect to x to zero:

f'(x) − f'(A − x) = 0

f'(x) = f'(A − x)

Since f''(x) < 0 everywhere, the second derivative of the objective, f''(x) + f''(A − x), is negative, so this critical point is a maximum.

Since f''(x) < 0, f' is strictly decreasing, and therefore injective.

x = A − x

x = A/2

QED
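If you don’t trust the calculus, here is a quick numerical sanity check, using the concrete concave function f(x) = ln(1 + x), which satisfies all three conditions for x > 0:

```python
import math

# Split a fixed total A between two agents with identical concave utility f.
# The sum f(x) + f(A - x) should peak at the even split x = A/2.
f = lambda x: math.log(1 + x)
A = 10.0

# Brute-force search over a grid of possible splits
best_x = max((i * A / 1000 for i in range(1001)),
             key=lambda x: f(x) + f(A - x))
print(best_x)   # 5.0, i.e. A/2
```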

This can be generalized to any number of agents, and for multiple goods. Thus, in this case overall happiness is maximized if the cakes and money are both evenly distributed, so that each person gets 3 1/3 cakes and $33,336.66.

The total utility in that case is:

3 * (10*ln(10/3 + 1) + ln(33,336.66 + 1)) = 3 * (14.66 + 10.41) ≈ 75.22

That’s considerably better than our initial distribution (almost twice as good). Now, how close do we get by rational trade?

Each person is willing to trade up until the point where their marginal utility of cake is equal to their marginal utility of money. The price of cake will be set by the respective marginal utilities.

In particular, let’s look at the trade that will occur between Baker and Richie. They will trade until their marginal rate of substitution is the same.

The actual algebra involved is obnoxious (if you’re really curious, here are some solved exercises of similar trade problems), so let’s just skip to the end. (I rushed through, so I’m not actually totally sure I got it right, but to make my point the precise numbers aren’t important.)

Basically what happens is that Richie pays an exorbitant price of $10,000 per cake, buying half the cakes with half of his money.

Baker’s new utility and Richie’s new utility are thus the same:

U(R) = U(B) = 10*ln(5+1) + ln(50,000+1) = 17.92 + 10.82 = 28.74

What about Hungry? Yeah, well, he doesn’t have $10,000. If cakes are infinitely divisible, he can buy up to 1/1000 of a cake. But it turns out that even that isn’t worth doing (it would cost too much for what he gains from it), so he may as well buy nothing, and his utility remains 2.40.

Hungry wanted cake just as much as Richie, and because Richie has so much more money, Hungry would have gotten more happiness from each new bite. Neoclassical economists promised him that markets were efficient and optimal, and so he thought he’d get the cake he needs—but the cake is a lie.

The total utility is therefore:

U = U(B) + U(R) + U(H)

U = 28.74 + 28.74 + 2.40

U = 59.88

Note three things about this result: First, it is more than where we started at 37.89—trade increases utility. Second, both Richie and Baker are better off than they were—trade is Pareto-improving. Third, the total is less than the optimal value of 75.22—trade is not utility-maximizing in the presence of inequality. This is a general theorem that I could prove formally, if I wanted to bore and confuse all my readers. (Perhaps someday I will try to publish a paper doing that.)
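All three totals can be checked directly from the utility function defined above (the small discrepancies in the second decimal place come from rounding the intermediate terms in the text):

```python
import math

# U = 10*ln(K+1) + ln(M+1), as defined above.
def U(cake, money):
    return 10 * math.log(cake + 1) + math.log(money + 1)

# Initial endowments: Baker has 10 cakes, Richie $100,000, Hungry $10.
initial = U(10, 0) + U(0, 100_000) + U(0, 10)
print(round(initial, 2))   # 37.89

# Optimal: split the 10 cakes and $100,010 evenly three ways.
optimal = 3 * U(10 / 3, 100_010 / 3)
print(round(optimal, 2))   # 75.23

# After trade: Baker and Richie each hold 5 cakes and $50,000;
# Hungry keeps his $10 (buying 1/1000 of a cake would lower his utility).
traded = 2 * U(5, 50_000) + U(0, 10)
print(round(traded, 2))    # 59.87
```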

This result is incredibly radical—it basically goes against the core of neoclassical welfare theory, or at least of all its applications to real-world policy—so let me be absolutely clear about what I’m saying, and what assumptions I had to make to get there.

I am saying that if people start with different amounts of wealth, the trades they would willfully engage in, acting purely under their own self interest, would not maximize the total happiness of the population. Redistribution of wealth toward equality would increase total happiness.

First, I had to assume that we could simply redistribute goods however we like without affecting the total amount of goods. This is wildly unrealistic, which is why I’m not actually saying we should reduce inequality to zero (as would follow if you took this result completely literally). Ironically, this is an assumption that most neoclassical welfare theory agrees with—the Second Welfare Theorem only makes any sense in a world where wealth can be magically redistributed between people without any harmful economic effects. If you weaken this assumption, what you find is basically that we should redistribute wealth toward equality, but beware of the tradeoff between too much redistribution and too little.

Second, I had to assume that there’s such a thing as “utility”—specifically, interpersonally comparable cardinal utility. In other words, I had to assume that there’s some way of measuring how much happiness each person has, and meaningfully comparing them so that I can say whether taking something from one person and giving it to someone else is good or bad in any given circumstance.

This is the assumption neoclassical welfare theory generally does not accept; instead they use ordinal utility, on which we can only say whether things are better or worse, but never by how much. Thus, their only way of determining whether a situation is better or worse is Pareto efficiency, which I discussed in a post a couple years ago. The change from the situation where Baker and Richie trade and Hungry is left in the lurch to the situation where all share cake and money equally in socialist utopia is not a Pareto-improvement. Richie and Baker are slightly worse off with 25.07 utilons in the latter scenario, while they had 28.74 utilons in the former.

Third, I had to assume selfishness—which is again fairly unrealistic, but again not something neoclassical theory disagrees with. If you weaken this assumption and say that people are at least partially altruistic, you can get the result where instead of buying things for themselves, people donate money to help others out, and eventually the whole system achieves optimal utility by willful actions. (It depends just how altruistic people are, as well as how unequal the initial endowments are.) This actually is basically what I’m trying to make happen in the real world—I want to show people that markets won’t do it on their own, but we have the chance to do it ourselves. But even then, it would go a lot faster if we used the power of government instead of waiting on private donations.

Also, I’m ignoring externalities, which are a different type of market failure which in no way conflicts with this type of failure. Indeed, there are three basic functions of government in my view: One is to maintain security. The second is to cancel externalities. The third is to redistribute wealth. The DOD, the EPA, and the SSA, basically. One could also add macroeconomic stability as a fourth core function—the Fed.

One way to escape my theorem would be to deny interpersonally comparable utility, but this makes measuring welfare in any way (including the usual methods of consumer surplus and GDP) meaningless, and furthermore results in the ridiculous claim that we have no way of being sure whether Bill Gates is happier than a child starving and dying of malaria in Burkina Faso, because they are two different people and we can’t compare different people. Far more reasonable is not to believe in cardinal utility, meaning that we can say an extra dollar makes you better off, but we can’t put a number on how much.

And indeed, the difficulty of even finding a unit of measure for utility would seem to support this view: Should I use QALY? DALY? A Likert scale from 0 to 10? There is no known measure of utility that is without serious flaws and limitations.

But it’s important to understand just how strong your denial of cardinal utility needs to be in order for this theorem to fail. It’s not enough that we can’t measure precisely; it’s not even enough that we can’t measure with current knowledge and technology. It must be fundamentally impossible to measure. It must be literally meaningless to say that taking a dollar from Bill Gates and giving it to the starving Burkinabe would do more good than harm, as if you were asserting that triangles are greener than schadenfreude.

Indeed, the whole project of welfare theory doesn’t make a whole lot of sense if all you have to work with is ordinal utility. Yes, in principle there are policy changes that could make absolutely everyone better off, or make some better off while harming absolutely no one; and the Pareto criterion can indeed tell you that those would be good things to do.

But in reality, such policies almost never exist. In the real world, almost anything you do is going to harm someone. The Nuremberg trials harmed Nazi war criminals. The invention of the automobile harmed horse trainers. The discovery of scientific medicine took jobs away from witch doctors. Inversely, almost any policy is going to benefit someone. The Great Leap Forward was a pretty good deal for Mao. The purges advanced the self-interest of Stalin. Slavery was profitable for plantation owners. So if you can only evaluate policy outcomes based on the Pareto criterion, you are literally committed to saying that there is no difference in welfare between the Great Leap Forward and the invention of the polio vaccine.

One way around it (that might actually be a good kludge for now, until we get better at measuring utility) is to broaden the Pareto criterion: We could use a majoritarian criterion, where you care about the number of people benefited versus harmed, without worrying about magnitudes—but this can lead to Tyranny of the Majority. Or you could use the Difference Principle developed by Rawls: find an ordering where we can say that some people are better or worse off than others, and then make the system so that the worst-off people are benefited as much as possible. I can think of a few cases where I wouldn’t want to apply this criterion (essentially they are circumstances where autonomy and consent are vital), but in general it’s a very good approach.

Neither of these depends upon cardinal utility, so have you escaped my theorem? Well, no, actually. You’ve weakened it, to be sure—it is no longer a statement about the fundamental impossibility of welfare-maximizing markets. But applied to the real world, people in Third World poverty are obviously the worst off, and therefore worthy of our help by the Difference Principle; and there are an awful lot of them and very few billionaires, so majority rule says take from the billionaires. The basic conclusion that it is a moral imperative to dramatically reduce global inequality remains—as does the realization that the “efficiency” and “optimality” of unregulated capitalism is a chimera.

Asymmetric nominal rigidity, or why everything is always “on sale”

July 9, JDN 2457579

The next time you’re watching television or shopping, I want you to count the number of items that are listed as “on sale” versus the number that aren’t. (Also, be careful to distinguish labels like “Low Price!” and “Great Value!” that are dressed up like “on sale” labels but actually indicate the usual price.) While “on sale” is presented as though it’s something rare and special, in reality anywhere from a third to half of all products are on sale at any given time. At some retailers (such as Art Van Furniture and Jos. A. Bank clothing), literally almost everything is almost always on sale.

There is a very good explanation for this in terms of cognitive economics. It is a special case of a more general phenomenon of asymmetric nominal rigidity. Asymmetric nominal rigidity is the tendency of human beings to be highly resistant to (rigidity) changes in actual (nominal) dollar prices, but only in the direction that hurts them (asymmetric). Ultimately this is an expression of the far deeper phenomenon of loss aversion, where losses are felt much more than gains.

Usually we actually talk about downward nominal wage rigidity, which is often cited as a reason why depressions can get so bad. People are extremely resistant to having their wages cut, even if there is a perfectly good reason to do so, and even if the economy is under deflation so that their real wage is not actually falling. It doesn’t just feel unpleasant; it feels unjust. People feel betrayed when they see the numbers on their paycheck go down, and they are willing to bear substantial costs to retaliate against that injustice—typically, they quit or go on strike. This reduces spending, which then exacerbates the deflation, which requires more wage cuts—and down we go into the spiral of depression, unless the government intervenes with monetary and fiscal policy.

But what does this have to do with everything being on sale? Well, for every downward wage rigidity, there is an upward price rigidity. When things become more expensive, people stop buying them—even if they could still afford them, and often even if the price increase is quite small. Again, they feel in some sense betrayed by the rising price (though not to the same degree as they feel betrayed by falling wages, due to their closer relationship to their employer). Responses to price increases are about twice as strong as responses to price decreases, just as losses are felt about twice as much as gains.

Businesses have figured this out—in some ways faster than economists did—and use it to their advantage; and thus so many things are “on sale”.

Actually, “on sale” serves two functions, which can be distinguished according to their marketing strategies. Businesses like Jos. A. Bank where almost everything is on sale are primarily exploiting anchoring—they want people to think of the listed “retail price” as the default price, and then the “sale price” that everyone actually pays feels lower as a result. “Dropping” the price of something from $300 to $150 feels like the company is doing you a favor; whereas if they had just priced it at $150 to begin with, you wouldn’t get any warm fuzzy feelings from that. This works especially well for products that people don’t purchase very often and aren’t accustomed to comparing—which is why you see it in furniture stores and high-end clothing retailers, not in grocery stores and pharmacies.

But even when people are accustomed to shopping around and are familiar with what the price ordinarily would be, sales serve a second function, because of asymmetric nominal rigidity: They escape that feeling of betrayal that comes from raising prices.

Here’s how it works: Due to the thousand natural shocks that flesh is heir to, there will always be some uncertainty in the prices you will want to set in the future. Future prices may go up, they may go down; and people spend their lives trying to predict this sort of thing and rarely outperform chance. But if you just raise and lower your prices as the winds blow (as most neoclassical economists generally assume you will), you will alienate your customers. Just as a ratchet works by turning the bolt more in one direction than the other, this sort of roller-coaster pricing would attract a small number of customers with each price decrease, then repel a larger number with each increase, until after a few cycles of rise and fall you would run out of customers. This is the real source of price rigidities, not that silly nonsense about “menu costs”. Especially in the Information Age, it costs almost nothing to change the number on the label—but change it wrong and it may cost you the customer.

One response would simply be to set your price at a reasonable estimate of the long-term optimal average price, but this leaves a lot of money on the table, as sometimes it will be too low (your inventory sells out and you make less profit than you could have), and even worse, other times it will be too high (customers refuse to buy your product). If only there were a way to change prices without customers feeling so betrayed!

Well, it turns out, there is, and it’s called “on sale”. You have a new product that you want to sell. You start by setting the price of the product at about the highest price you would ever need to sell it in the foreseeable future. Then, unless right now happens to be a time when demand is high and prices should also be high, you immediately put it on sale, and have the marketing team drum up some excuse about wanting to draw attention to your exciting new product. You put a deadline on that sale, which may be explicit (“Ends July 30”) or vague (“For a Limited Time!” which is technically always true—you merely promise that your sale will not last until the heat death of the universe), but clearly indicates to customers that you are not promising to keep this price forever.

Then, when demand picks up and you want to raise the price, you can! All you have to do is end the sale, which if you left the deadline vague can be done whenever you like. Even if you set explicit deadlines (which will make customers even more comfortable with the changes, and also give them a sense of urgency that may lead to more impulse buying), you can just implement a new sale each time the last one runs out, varying the discount according to market conditions. Customers won’t retaliate, because they won’t feel betrayed; you said fair and square the sale wouldn’t last forever. They will still buy somewhat less, of course; that’s the Law of Demand. But they won’t overcompensate out of spite and outrage; they’ll just buy the amount that is their new optimal purchase amount at this new price.
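As a sketch (with hypothetical numbers), the whole scheme amounts to fixing the list price once, at the high end, and then only ever varying the discount:

```python
# Sketch of the "permanent sale" pricing scheme. The list price is set
# once, at roughly the highest price you expect to need; thereafter you
# never "raise the price" -- you only end one sale and start another.

LIST_PRICE = 300.00  # hypothetical; set at the high end of the expected range

def effective_price(discount):
    """Price the customer actually pays at a given discount rate."""
    return round(LIST_PRICE * (1 - discount), 2)

# Weak demand: deep sale.
print(effective_price(0.50))  # 150.0
# Demand picks up: the old sale "ends" and a smaller one begins.
print(effective_price(0.20))  # 240.0
# The customer saw two sales and no price hike -- yet the price rose.
```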

Coupons are a lot like sales, but they’re actually even more devious; they allow for a perfectly legal form of price discrimination. Businesses know that only certain types of people clip coupons; roughly speaking, people who are either very poor or very frugal—either way, people who are very responsive to prices. Coupons allow them to set a lower price for those groups of people, while setting a higher price for other people whose demand is more inelastic. A similar phenomenon is going on with student and senior discounts; students and seniors get lower prices because they typically have less income than other adults (though why there is so rarely a youth discount, only a student discount, I’m actually not sure—controlling for demographics, students are in general richer than non-students).
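The arithmetic behind coupon-based price discrimination can be shown with a stylized example; the segment sizes and willingness-to-pay figures below are invented, and costs are assumed to be zero:

```python
# Stylized price-discrimination example. Hypothetical market:
# 100 price-insensitive customers will pay up to $10 each;
# 100 coupon-clippers will pay at most $6 each; costs are zero.

SEGMENTS = [(100, 10), (100, 6)]  # (group size, maximum willingness to pay)

def revenue_single_price(price):
    """Everyone faces the same price; a segment buys iff price <= its cap."""
    return sum(size * price for size, cap in SEGMENTS if price <= cap)

# Best single price: either $10 (only the first group buys) or $6 (both buy).
print(revenue_single_price(10))  # 1000
print(revenue_single_price(6))   # 1200

# With a $4-off coupon that only the price-sensitive group bothers to clip,
# each segment ends up paying its own maximum:
revenue_with_coupon = 100 * 10 + 100 * (10 - 4)
print(revenue_with_coupon)       # 1600
```

No single price can do better than $1,200 here, but the coupon extracts $1,600, which is why the legality of this kind of price discrimination matters to businesses.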

Once you realize this is what’s happening, what should you do as a customer? Basically, try to ignore whether or not a label says “on sale”. Look at the actual number of the price, and try to compare it to prices you’ve paid in the past for that product, as well as, of course, how much the product is worth to you. If indeed this is a particularly low price and the product is durable, you may well be wise to purchase more and stock up for the future. But you should try to train yourself to react the same way to “On sale, now $49.99” as you would to simply “$49.99”. (Making your reaction exactly the same is probably impossible, but the closer you can get the better off you are likely to be.) Always compare prices from multiple sources for any major purchase (Amazon makes this easier than ever before), and compare actual prices you would pay—with discounts, after taxes, including shipping. The rest is window dressing.
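For the “compare actual prices you would pay” step, a small helper makes the point that the all-in price is what matters (the parameter names and sample figures are hypothetical):

```python
# All-in price comparison: discounted sticker price, plus tax, plus shipping.

def total_cost(sticker, discount=0.0, tax_rate=0.0, shipping=0.0):
    """What you actually pay, after the discount, tax, and shipping."""
    return round(sticker * (1 - discount) * (1 + tax_rate) + shipping, 2)

# "On sale, now $49.99" with 6% tax and $5.99 shipping...
a = total_cost(49.99, tax_rate=0.06, shipping=5.99)
# ...versus a plain $47.50 with the same tax and free shipping:
b = total_cost(47.50, tax_rate=0.06)
print(a, b)  # 58.98 50.35 -- the unadvertised price is the better deal
```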

If you get coupons or special discounts, of course use them—but only if you were going to make the purchase anyway, or were just barely on the fence about it. Rarely is it actually rational for you to buy something you wouldn’t have bought just because it’s on sale for 50% off, let alone 10% off. It’s far more likely that you’d either want to buy it anyway, or still have no reason to buy it even at the new price. Businesses are of course hoping you’ll overcompensate for the discount and buy more than you would have otherwise. Foil their plans, and thereby make your life better and our economy more efficient.