If you really want grad students to have better mental health, remove all the high-stakes checkpoints

Post 260: Oct 14 JDN 2458406

A study was recently published in Nature Biotechnology showing clear evidence of a mental health crisis among graduate students (no, I don’t know why they picked the biotechnology imprint—I guess it wasn’t good enough for Nature proper?). This is only the most recent of several studies showing exceptionally high rates of mental health issues among graduate students.

I’ve seen universities do a lot of public hand-wringing and lip service about this issue—but I haven’t seen any that were seriously willing to do what it takes to actually solve the problem.

I think this fact became clearest to me when I was required to fill out an official “Individual Development Plan” form as a prerequisite for my advancement to candidacy, which included one question about “What are you doing to support your own mental health and work/life balance?”

The irony here is absolutely excruciating, because advancement to candidacy has been overwhelmingly my leading source of mental health stress for at least the last six months. And it is only one of several different high-stakes checkpoints that grad students are expected to complete, always threatened with defunding or outright expulsion from the graduate program if the checkpoint is not met by a certain arbitrary deadline.

The first of these was the qualifying exams. Then comes advancement to candidacy. Then I have to complete and defend a second-year paper, then a third-year paper. Finally I have to complete and defend a dissertation, and then go on to the job market and run a gauntlet of applications and interviews. I can’t think of any other time in my life when I was under this much academic and career pressure this consistently—even finishing high school and applying to college wasn’t like this.

If universities really wanted to improve my mental health, they would find a way to get rid of all that.

Granted, a single university does not have total control over all this: There are coordination problems between universities regarding qualifying exams, advancement, and dissertation requirements. One university that unilaterally tried to remove all these would rapidly lose prestige, as it would not be regarded as “rigorous” to reduce the pressure on your grad students. But that itself is precisely the problem—we have equated “rigor” with pressuring grad students until they are on the verge of emotional collapse. Universities don’t seem to know how to make graduate school difficult in the ways that would actually encourage excellence in research and teaching; they simply know how to make it difficult in ways that destroy their students psychologically.

The job market is even more complicated; in the current funding environment, it would be prohibitively expensive to open up enough faculty positions to actually accept even half of all graduating PhDs to tenure-track jobs. Probably the best answer here is to refocus graduate programs on supporting employment outside academia, recognizing both that PhD-level skills are valuable in many workplaces and that not every grad student really wants to become a professor.

But there are clearly ways that universities could mitigate these effects, and they don’t seem genuinely interested in doing so. They could remove the advancement exam, for example; you could simply advance to candidacy as a formality when your advisor decides you are ready, never needing to actually perform a high-stakes presentation before a committee—because what the hell does that accomplish anyway? Speaking of advisors, they could have a formalized matching process that starts with interviewing several different professors and being matched to the one that best fits your goals and interests, instead of expecting you to reach out on your own and hope for the best. They could have you write a dissertation, but not perform a “dissertation defense”—because, again, what can they possibly learn from forcing you to present in a high-stakes environment that they couldn’t have learned from reading your paper and talking with you about it over several months?

They could adjust or even remove funding deadlines—especially for international students. Here at UCI at least, once you are accepted to the program, you are ostensibly guaranteed funding for as long as you maintain reasonable academic progress—but then they define “reasonable progress” in such a way that you have to form an advancement committee, fill out forms, write a paper, and present before a committee all by a certain date or your funding is in jeopardy. Residents of California (which includes all US students who successfully established residency after a full year) are given more time if we need it—but international students aren’t. How is that fair?

The unwillingness of universities to take such actions clearly shows that their commitment to improving students’ mental health is paper-thin. They are only willing to help their students improve their work-life balance as long as it doesn’t require changing anything about the graduate program. They will provide us with counseling services and free yoga classes, but they won’t seriously reduce the pressure they put on us at every step of the way.

I understand that universities are concerned about protecting their prestige, but I ask them this: Does this really improve the quality of your research or teaching output? Do you actually graduate better students by selecting only the ones who can survive being emotionally crushed? Do all these arbitrary high-stakes performances actually result in greater advancement of human knowledge?

Or is it perhaps that you yourselves were put through such hazing rituals years ago, and now your cognitive dissonance won’t let you admit that it was all for naught? “This must be worth doing, or else they wouldn’t have put me through so much suffering!” Are you trying to transfer your own psychological pain onto your students, lest you be forced to face it yourself?

MSRP is tacit collusion

Oct 7 JDN 2458399

It’s been a little while since I’ve done a really straightforward economic post. It feels good to get back to that.

You are no doubt familiar with the “Manufacturer’s Suggested Retail Price” or MSRP. It can be found on everything from books to dishwashers to video games.

The MSRP is a very simple concept: The manufacturer suggests that all retailers sell the product (at least the initial run) at precisely this price.

Why would they want to do that? There is basically only one possible reason: They are trying to sustain tacit collusion.

The game theory of this is rather subtle: It requires that both manufacturers and retailers engage in long-term relationships with one another, and can pick and choose who to work with based on the history of past behavior. Both of these conditions hold in most real-world situations—indeed, the fact that they don’t hold very well in the agriculture industry is probably why we don’t see MSRP on produce.

If pricing were decided by random matching with no long-term relationships or past history, MSRP would be useless. Each firm would have little choice but to set their own optimal price, probably just slightly over their own marginal cost. Even if the manufacturer suggested an MSRP, retailers would promptly and thoroughly ignore it.

This is because the one-shot Bertrand pricing game has a unique Nash equilibrium, at pricing just above marginal cost. The basic argument is as follows: If I price cheaper than you, I can claim the whole market. As long as it’s profitable for me to do that, I will. The only time it’s not profitable for me to undercut you in this way is if we are both charging just slightly above marginal cost—so that is what we shall do, in Nash equilibrium. Human beings don’t always play according to the Nash equilibrium, but for-profit corporations do so quite consistently. Humans have limited attention and moral values; corporations have accounting departments and a fanatical devotion to the One True Profit.
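That undercutting race can be made concrete with a small simulation. This is a sketch under assumed numbers (two identical firms, $10 marginal cost, a one-cent price grid), not a model of any real market: each firm repeatedly best-responds to the other’s last posted price, and prices race downward until undercutting stops paying.

```python
# A minimal simulation of one-shot Bertrand best-response dynamics.
# Prices are in integer cents; marginal cost and demand are
# illustrative assumptions, not data about any real market.

MC = 1000       # marginal cost: $10.00 (assumed)
DEMAND = 100    # units sold to whichever firm is cheapest (assumed)

def profit(my_price, rival_price):
    """Profit in the one-shot Bertrand game with identical goods."""
    if my_price < rival_price:
        return (my_price - MC) * DEMAND        # undercutting wins the whole market
    if my_price == rival_price:
        return (my_price - MC) * DEMAND // 2   # tie: split the market
    return 0                                   # priced out entirely

def best_response(rival_price):
    """Undercut by one cent whenever that beats splitting the market."""
    if profit(rival_price - 1, rival_price) > profit(rival_price, rival_price):
        return rival_price - 1
    return rival_price

p1 = p2 = 5000  # both firms start at $50.00
while True:
    new_p1 = best_response(p2)
    new_p2 = best_response(new_p1)
    if (new_p1, new_p2) == (p1, p2):
        break
    p1, p2 = new_p1, new_p2

print(p1 / 100, p2 / 100)  # both settle within a couple cents of marginal cost
```

On this penny grid the race halts just above marginal cost (at $10.02, where a one-cent undercut no longer beats splitting the market), which is the “just slightly above marginal cost” equilibrium described above.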

But the iterated Bertrand pricing game is quite different. If instead of making only one pricing decision, we make many pricing decisions over time, always with a high probability of encountering the same buyers and sellers again in the future, then I may not want to undercut your price, for fear of triggering a price war that will hurt both of our firms.

Much like how the Iterated Prisoner’s Dilemma can sustain cooperation in Nash equilibrium while the one-shot Prisoner’s Dilemma cannot, the iterated Bertrand game can sustain collusion as a Nash equilibrium.
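The standard way to formalize this is a grim-trigger strategy: collude until anyone undercuts, then revert to marginal-cost pricing forever. Here is a back-of-the-envelope sketch; the profit figure and the zero-profit punishment phase are simplifying assumptions, not claims about any actual industry.

```python
# Grim-trigger condition for collusion in an iterated Bertrand duopoly.
# PI is the per-period industry profit at the collusive price (assumed);
# after any deviation, a price war drives everyone's profit to zero forever.

PI = 1000.0  # illustrative collusive profit per period

def collusion_value(delta):
    """Present value of splitting PI forever, discounting by delta per period."""
    return (PI / 2) / (1 - delta)

def deviation_value():
    """Undercut once, grab (essentially) all of PI, then earn zero."""
    return PI

def collusion_sustainable(delta):
    """Collusion is an equilibrium when patience makes cooperating pay."""
    return collusion_value(delta) >= deviation_value()

# (PI/2)/(1 - delta) >= PI simplifies to delta >= 1/2:
print(collusion_sustainable(0.4))  # False: too impatient, the price war wins
print(collusion_sustainable(0.6))  # True: the long-term relationship sustains collusion
```

The threshold works out to a discount factor of at least 1/2: the more the firms expect to keep meeting each other, the easier collusion is to sustain, which is exactly why long-term relationships between manufacturers and retailers matter here.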

There is in fact a vast number of possible equilibria in the iterated Bertrand game. If prices were infinitely divisible, there would be an infinite number of equilibria. In reality, there are hundreds or thousands of equilibria, depending on how finely divisible the price may be.

This makes the iterated Bertrand game a coordination game: there are many possible equilibria, and our task is to figure out which one to coordinate on.

If we had perfect information, we could deduce what the monopoly price would be, and then all choose the monopoly price; this would be what we call “payoff dominant”, and it’s often what people actually try to choose in real-world coordination games.

But in reality, the monopoly price is a subtle and complicated thing, and might not even be the same between different retailers. So if we each try to compute a monopoly price, we may end up with different results, and then we could trigger a price war and end up driving all of our profits down. If only there were some way to communicate with one another, and say what price we all want to set?

Ah, but there is: The MSRP. Most other forms of price communication are illegal: We certainly couldn’t send each other emails and say “Let’s all charge $59.99, okay?” (When banks tried to do that with the LIBOR, it was the largest white-collar crime in history.) But for some reason economists (particularly, I note, the supposed “free market” believers of the University of Chicago) have convinced antitrust courts that MSRP is somehow different. Yet it’s obviously hardly different at all: You’ve just made the communication one-way from manufacturers to retailers, which makes it a little less reliable, but otherwise exactly the same thing.

There are all sorts of subtler arguments about how MSRP is justifiable, but as far as I can tell they all fall flat. If you’re worried about retailers not promoting your product enough, enter into a contract requiring them to promote. Proposing a suggested price is clearly nothing but an attempt to coordinate tacit—frankly not even that tacit—collusion.

MSRP also probably serves another, equally suspect, function, which is to manipulate consumers using the anchoring heuristic: If the MSRP is $59.99, then when it does go on sale for $49.99 you feel like you are getting a good deal; whereas, if it had just been priced at $49.99 to begin with, you might still have felt that it was too expensive. I see no reason why this sort of crass manipulation of consumers should be protected under the law either, especially when it would be so easy to avoid.

There are all sorts of ways for firms to tacitly collude with one another, and we may not be able to regulate them all. But the MSRP is literally printed on the box. It’s so utterly blatant that we could very easily make it illegal with hardly any effort at all. The fact that we allow such overt price communication makes a mockery of our antitrust law.

The asymmetric impact of housing prices

Jul 22 JDN 2458323

In several previous posts I’ve talked about the international crisis of high housing prices. Today, I want to talk about some features of housing that make high housing prices particularly terrible, in a way that other high prices would not be.

First, there is the fact that some amount of housing is a basic necessity, and houses are not easily divisible. So even if the houses being built are bigger than you need, you still need some kind of house, and you can’t buy half a house; the best you could really do would be to share it with someone else, and that introduces all sorts of other complications.

Second, there is a deep asymmetry here. While rising housing prices definitely hurt people who want to buy houses, they benefit hardly anyone.

If you bought a house for $200,000 and then all housing prices doubled so it would now sell for $400,000, are you richer? You might feel richer. You might even have access to home equity loans that would give you more real liquidity. But are you actually richer?

I contend you are not, because the only way for you to access that wealth would be to sell your home, and then you’d need to buy another home, and that other home would also be twice as expensive. The amount of money you can get for your house may have increased, but the amount of house you can get for your house is exactly the same.

Conversely, suppose that housing prices fell by half, and now that house only sells for $100,000. Are you poorer? You still have your house. Even if your mortgage isn’t paid off, it’s still the same mortgage. Your payments haven’t changed. And once again, the amount of house you can get for your house will remain the same. In fact, if you are willing to accept a deed in lieu of foreclosure (it’s bad for your credit, of course), you can walk away from that underwater house and buy a new one that’s just as good with lower payments than what you are currently making. You may actually be richer because the price of your house fell.

Relative housing prices matter, certainly. If you own a $400,000 house and move to a city where housing prices have fallen to $100,000, you are definitely richer. And if you own a $100,000 house and move to a city where housing prices have risen to $400,000, you are definitely poorer. These two effects necessarily cancel out in the aggregate.

But do absolute housing prices matter for homeowners? It really seems to me that they don’t. The people who care about absolute housing prices are not homeowners; they are people trying to enter the market for the first time.

And this means that lower housing prices are almost always better. If you could buy a house for $1,000, we would live in a paradise where it was basically impossible to be homeless. (When social workers encountered someone who was genuinely homeless, they could just buy them a house then and there.) If every home cost $10 million, those who bought homes before the price surge would be little better off than they are, but the rest of us would live on the streets.

Psychologically, people very strongly resist falling housing prices. Even in very weak housing markets, most people will flatly refuse to sell their house for less than they paid for it. As a result, housing prices usually rise with inflation, but don’t usually fall in response to deflation. Rents also display similar rigidity over time. But in reality, lower prices are almost always better for almost everyone.

There is a group of people who are harmed by low housing prices, but it is a very small group of people, most of whom are already disgustingly rich: The real estate industry. Yes, if you build new housing, or flip houses, or buy and sell houses on speculation, you will be harmed by lower housing prices. Of these, literally the only one I care about even slightly is developers; and I only care about developers insofar as they are actually doing their job building housing that people need. If falling prices hurt developers, it would be because the supply of housing was so great that everyone who needs a house could have one.

There is a subtler nuance here, which is that some people may be buying more expensive housing as a speculative saving vehicle, hoping that they can cash out on their house when they retire. To that, I really only have one word of advice: Don’t. Don’t contribute to another speculative housing bubble that could cause another Great Recession. A house is not even a particularly safe investment, because it’s completely undiversified. Buy stocks. Buy all the stocks. Buy a house because you want that house, not because you hope to make money off of it.

And if the price of your house does fall someday? Don’t panic. You may be no worse off, and other people are probably much better off.

Fake skepticism

Jun 3 JDN 2458273

“You trust the mainstream media?” “Wake up, sheeple!” “Don’t listen to what so-called scientists say; do your own research!”

These kinds of statements have become quite ubiquitous lately (though perhaps the attitudes were always there, and we only began to hear them because of the Internet and social media), and are often used to defend the most extreme and bizarre conspiracy theories, from moon-landing denial to flat Earth. The amazing thing about these kinds of statements is that they can be used to defend literally anything, as long as you can find some source with less than 100% credibility that disagrees with it. (And what source has 100% credibility?)

And that, I think, should tell you something. An argument that can prove anything is an argument that proves nothing.

Reversed stupidity is not intelligence. The fact that the mainstream media, or the government, or the pharmaceutical industry, or the oil industry, or even gangsters, fanatics, or terrorists believes something does not make it less likely to be true.

In fact, the vast majority of beliefs held by basically everyone—including the most fanatical extremists—are true. I could list such consensus true beliefs for hours: “The sky is blue.” “2+2=4.” “Ice is colder than fire.”

Even if a belief is characteristic of a specifically evil or corrupt organization, that does not necessarily make it false (though it usually is evidence of falsehood in a Bayesian sense). If only terrible people believe X, then maybe you shouldn’t believe X. But if both good and bad people believe X, the fact that bad people believe X really shouldn’t matter to you.

People who use this kind of argument often present themselves as being “skeptics”. They imagine that they have seen through the veil of deception that blinds others.

In fact, quite the opposite is the case: This is fake skepticism. These people are not uniquely skeptical; they are uniquely credulous. If you think the Earth is flat because you don’t trust the mainstream scientific community, that means you do trust someone far less credible than the mainstream scientific community.

Real skepticism is difficult. It requires concerted effort and investigation, and typically takes years. To really seriously challenge the expert consensus in a field, you need to become an expert in that field. Ideally, you should get a graduate degree in that field and actually start publishing your heterodox views. Failing that, you should at least be spending hundreds or thousands of hours doing independent research. If you are unwilling or unable to do that, you are not qualified to assess the validity of the expert consensus.

This does not mean the expert consensus is always right—remarkably often, it isn’t. But it means you aren’t allowed to say it’s wrong, because you don’t know enough to assess that.

This is not elitism. This is not an argument from authority. This is a basic respect for the effort and knowledge that experts spend their lives acquiring.

People don’t like being told that they are not as smart as other people—even though, with any variation at all, that’s got to be true for a certain proportion of people. But I’m not even saying experts are smarter than you. I’m saying they know more about their particular field of expertise.

Do you walk up to construction workers on the street and critique how they lay concrete? When you step on an airplane, do you explain to the captain how to read an altimeter? When you hire a plumber, do you insist on using the snake yourself?

Probably not. And why not? Because you know these people have training; they do this for a living. Yeah, well, scientists do this for a living too—and our training is much longer. To be a plumber, you need a high school diploma and an apprenticeship that usually lasts about four years. To be a scientist, you need a PhD, which means four years of college plus an additional five or six years of graduate school.

To be clear, I’m not saying you should listen to experts speaking outside their expertise. Some of the most idiotic, arrogant things ever said by human beings have been said by physicists opining on biology or economists ranting about politics. Even within a field, some people have such narrow expertise that you can’t really trust them even on things that seem related—like macroeconomists with idiotic views on trade, or ecologists who clearly don’t understand evolution.

This is also why one of the great challenges of being a good interdisciplinary scientist is actually obtaining enough expertise in both fields you’re working in; it isn’t literally twice the work (since there is overlap—or you wouldn’t be doing it—and you do specialize in particular interdisciplinary subfields), but it’s definitely more work, and there are definitely a lot of people on each side of the fence who may never take you seriously no matter what you do.

How do you tell who to trust? This is why I keep coming back to the matter of expert consensus. The world is much too complicated for anyone, much less everyone, to understand it all. We must be willing to trust the work of others. The best way we have found to decide which work is trustworthy is by the norms and institutions of the scientific community itself. Since 97% of climatologists say that climate change is caused by humans, they’re probably right. Since 99% of biologists believe humans evolved by natural selection, that’s probably what happened. Since 87% of economists oppose tariffs, tariffs probably aren’t a good idea.

Can we be certain that the consensus is right? No. There is precious little in this universe that we can be certain about. But as in any game of chance, you need to play the best odds, and my money will always be on the scientific consensus.

Are some ideas too ridiculous to bother with?

Apr 22 JDN 2458231

Flat Earth. Young-Earth Creationism. Reptilians. 9/11 “Truth”. Rothschild conspiracies.

There are an astonishing number of ideas that satisfy two apparently-contrary conditions:

  1. They are so obviously ridiculous that even a few minutes of honest, rational consideration of evidence that is almost universally available will immediately refute them;
  2. They are believed by tens or hundreds of millions of otherwise-intelligent people.

Young-Earth Creationism is probably the most alarming, seeing as it grips the minds of some 38% of Americans.

What should we do when faced with such ideas? This is something I’ve struggled with before.

I’ve spent a lot of time and effort trying to actively address and refute them—but I don’t think I’ve even once actually persuaded someone who believes these ideas to change their mind. This doesn’t mean my time and effort were entirely wasted; it’s possible that I managed to convince bystanders, or gained some useful understanding, or simply improved my argumentation skills. But it does seem likely that my time and effort were mostly wasted.

It’s tempting, therefore, to give up entirely, and just let people go on believing whatever nonsense they want to believe. But there’s a rather serious downside to that as well: Thirty-eight percent of Americans.

These people vote. They participate in community decisions. They make choices that affect the rest of our lives. Nearly all of those Creationists are Evangelical Christians—and White Evangelical Christians voted overwhelmingly in favor of Donald Trump. I can’t be sure that changing their minds about the age of the Earth would also change their minds about voting for Trump, but I can say this: If all the Creationists in the US had simply not voted, Hillary Clinton would have won the election.

And let’s not leave the left wing off the hook either. Jill Stein is a 9/11 “Truther”, and pulled a lot of fellow “Truthers” to her cause in the election as well. Had all of Jill Stein’s votes gone to Hillary Clinton instead, again Hillary would have won, even if all the votes for Trump had remained the same. (That said, there is reason to think that if Stein had dropped out, most of those folks wouldn’t have voted at all.)

Therefore, I don’t think it is safe to simply ignore these ridiculous beliefs. We need to do something; the question is what.

We could try to censor them, but first of all that violates basic human rights—which should be a sufficient reason not to do it—and second, it probably wouldn’t even work. Censorship typically leads to radicalization, not assimilation.

We could try to argue against them. Ideally this would be the best option, but it has not shown much effect so far. The kind of person who sincerely believes that the Earth is 6,000 years old (let alone that governments are secretly ruled by reptilian alien invaders) isn’t the kind of person who is highly responsive to evidence and rational argument.

In fact, there is reason to think that these people don’t actually believe what they say the same way that you and I believe things. I’m not saying they’re lying, exactly. They think they believe it; they want to believe it. They believe in believing it. But they don’t actually believe it—not the way that I believe that cyanide is poisonous or the way I believe the sun will rise tomorrow. It isn’t fully integrated into the way that they anticipate outcomes and choose behaviors. It’s more of a free-floating sort of belief, where professing a particular belief allows them to feel good about themselves, or represent their status in a community.

To be clear, it isn’t that these beliefs are unimportant to them; on the contrary, they are in some sense more important. Creationism isn’t really about the age of the Earth; it’s about who you are and where you belong. A conventional belief can be changed by evidence about the world because it is about the world; a belief-in-belief can’t be changed by evidence because it was never really about that.

But if someone’s ridiculous belief is really about their identity, how do we deal with that? I can’t refute an identity. If your identity is tied to a particular social group, maybe they could ostracize you and cause you to lose the identity; but an outsider has no power to do that. (Even then, I strongly suspect that, for instance, most excommunicated Catholics still see themselves as Catholic.) And if it’s a personal identity not tied to a particular group, even that option is unavailable.

Where, then, does that leave us? It would seem that we can’t change their minds—but we also can’t afford not to change their minds. We are caught in a terrible dilemma.

I think there might be a way out. It’s a bit counter-intuitive, but I think what we need to do is stop taking them seriously as beliefs, and start treating them purely as announcements of identity.

So when someone says something like, “The Rothschilds run everything!”, instead of responding as though this were a coherent proposition being asserted, treat it as if someone had announced, “Boo! I hate the Red Sox!” Belief in the Rothschild conspiracies isn’t a well-defined set of propositions about the world; it’s an assertion of membership in a particular sort of political sect that is vaguely left-wing and anarchist. You don’t really think the Rothschilds rule everything. You just want to express your (quite justifiable) anger at how our current political system privileges the rich.

Likewise, when someone says they think the Earth is 6,000 years old, you could try to present the overwhelming scientific evidence that they are wrong—but it might be more productive, and it is certainly easier, to just think of this as a funny way of saying “I’m an Evangelical Christian”.

Will this eliminate the ridiculous beliefs? Not immediately. But it might ultimately do so, in the following way: By openly acknowledging the belief-in-belief as a signaling mechanism, we can open opportunities for people to develop new, less pathological methods of signaling. (Instead of saying you think the Earth is 6,000 years old, maybe you could wear a funny hat, like Orthodox Jews do. Funny hats don’t hurt anybody. Everyone loves funny hats.) People will always want to signal their identity, and there are fundamental reasons why such signals will typically be costly for those who use them; but we can try to make them not so costly for everyone else.

This also makes arguments a lot less frustrating, at least at your end. It might make them more frustrating at the other end, because people want their belief-in-belief to be treated like proper belief, and you’ll be refusing them that opportunity. But this is not such a bad thing; if we make it more frustrating to express ridiculous beliefs in public, we might manage to reduce the frequency of such expression.

Reasonableness and public goods games

Apr 1 JDN 2458210

There’s a very common economics experiment called a public goods game, often used to study cooperation and altruistic behavior. I’m actually planning on running a variant of such an experiment for my second-year paper.

The game is quite simple, which is part of why it is used so frequently: You are placed into a group of people (usually about four), and given a little bit of money (say $10). Then you are offered a choice: You can keep the money, or you can donate some of it to a group fund. Money in the group fund will be multiplied by some factor (usually about two) and then redistributed evenly to everyone in the group. So for example if you donate $5, that will become $10, split four ways, so you’ll get back $2.50.

Donating more to the group will benefit everyone else, but at a cost to yourself. The game is usually set up so that the best outcome for everyone is if everyone donates the maximum amount, but the best outcome for you, holding everyone else’s choices constant, is to donate nothing and keep it all.
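The payoff structure is simple enough to write down explicitly. Here is a minimal sketch using the numbers from the description above (4 players, a $10 endowment, and a multiplier of 2):

```python
# Payoffs in a standard linear public goods game.
# Parameters match the example in the text: 4 players, $10 endowment,
# donations doubled and split evenly among the group.

ENDOWMENT = 10.0
MULTIPLIER = 2.0

def payoffs(donations):
    """Each player keeps whatever they didn't donate, plus an equal
    share of the multiplied group fund."""
    n = len(donations)
    share = MULTIPLIER * sum(donations) / n
    return [ENDOWMENT - d + share for d in donations]

# The example from the text: you donate $5, everyone else donates $0.
print(payoffs([5, 0, 0, 0]))   # [7.5, 12.5, 12.5, 12.5]

# If everyone donates everything, everyone doubles their money:
print(payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
```

Donating alone leaves you with $7.50 while the free-riders walk away with $12.50 each; full cooperation gives everyone $20. That gap between the group optimum and the private incentive is exactly the tension the game is built around.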

Yet it is a very robust finding that most people do neither of those things. There’s still a good deal of uncertainty surrounding what motivates people to donate what they do, but certain patterns have emerged:

  1. Most people donate something, but hardly anyone donates everything.
  2. Increasing the multiplier tends to smoothly increase how much people donate.
  3. The number of people in the group isn’t very important, though very small groups (e.g. 2) behave differently from very large groups (e.g. 50).
  4. Letting people talk to each other tends to increase the rate of donations.
  5. Repetition of the game, or experience from previous games, tends to result in decreasing donation over time.
  6. Economists donate less than other people.

Number 6 is unfortunate, but easy to explain: Indoctrination into game theory and neoclassical economics has taught economists that selfish behavior is efficient and optimal, so they behave selfishly.

Number 3 is also fairly easy to explain: Very small groups allow opportunities for punishment and coordination that don’t exist in large groups. Think about how you would respond when faced with 2 defectors in a group of 4 as opposed to 10 defectors in a group of 50. You could punish the 2 by giving less next round; but punishing the 10 would end up punishing 40 others who had contributed like they were supposed to.

Number 4 is a very interesting finding. Game theory says that communication shouldn’t matter, because there is a unique Nash equilibrium: Donate nothing. All the promises in the world can’t change what is the optimal response in the game. But in fact, human beings don’t like to break their promises, and so when you get a bunch of people together and they all agree to donate, most of them will carry through on that agreement most of the time.

Number 5 is on the frontier of research right now. There are various theoretical accounts for why it might occur, but none of the models proposed so far have much predictive power.

But my focus today will be on findings 1 and 2.

If you’re not familiar with the underlying game theory, finding 2 may seem obvious to you: Well, of course if you increase the payoff for donating, people will donate more! It’s precisely that sense of obviousness which I am going to appeal to in a moment.

In fact, the game theory makes a very sharp prediction: For N players, if the multiplier is less than N, you should always contribute nothing. Only if the multiplier becomes larger than N should you donate—and at that point you should donate everything. The game theory prediction is not a smooth increase; it’s all-or-nothing. The only case where game theory predicts intermediate amounts is the knife-edge where the multiplier exactly equals N, at which point each player is indifferent between donating and not donating.
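That all-or-nothing prediction is easy to verify numerically. A minimal sketch (the function and parameter names here are mine, purely illustrative):

```python
# Linear public goods game: donations are multiplied by m and split
# evenly among n players.

def payoff(my_donation, others_total, endowment, m, n):
    """What you keep, plus your equal share of the multiplied pot."""
    pot = m * (my_donation + others_total)
    return endowment - my_donation + pot / n

endowment, n = 10, 4
for m in (3, 5):
    # Marginal return on your own dollar is m/n - 1, whatever others do.
    marginal = payoff(1, 0, endowment, m, n) - payoff(0, 0, endowment, m, n)
    print(m, round(marginal, 2))
# m = 3 < n: each donated dollar loses you 0.25 on net, so donate nothing;
# m = 5 > n: each donated dollar gains you 0.25, so donate everything.
```

Because the marginal return doesn't depend on what anyone else donates, the self-interested optimum jumps straight from zero to everything as m crosses n.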

But it feels reasonable that increasing the multiplier should increase donation, doesn’t it? It’s a “safer bet” in some sense to donate $1 if the payoff to everyone is $3 and the payoff to yourself is $0.75 than if the payoff to everyone is $1.04 and the payoff to yourself is $0.26. The cost-benefit analysis comes out better: In the former case, you can gain up to $2 if everyone donates, but would only lose $0.25 if you donate alone; but in the latter case, you would only gain $0.04 if everyone donates, and would lose $0.74 if you donate alone.
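Those dollar figures check out, assuming (as the per-dollar shares imply) a 4-player group and $1 donations:

```python
# Net outcomes for $1 donations in a 4-player group, for the two
# multipliers compared above (3 and 1.04).

def net_outcomes(m, n=4):
    gain_if_all = m - 1        # everyone's $1 returns as an $m share each
    loss_if_alone = m / n - 1  # you get back only m/n of your own dollar
    return gain_if_all, loss_if_alone

print(net_outcomes(3))                 # (2, -0.25)
gain, loss = net_outcomes(1.04)
print(round(gain, 2), round(loss, 2)) # 0.04 -0.74
```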

I think this notion of “reasonableness” is a deep principle that underlies a great deal of human thought. This is something that is sorely lacking from artificial intelligence: The same AI that tells you the precise width of the English Channel to the nearest foot may also tell you that the Earth is 14 feet in diameter, because the former was in its database and the latter wasn’t. Yes, Watson may have won at Jeopardy, but it (he?) also gave a nonsensical response to the Final Jeopardy clue.

Human beings like to “sanity-check” our results against prior knowledge, making sure that everything fits together. And, of particular note for public goods games, human beings like to “hedge our bets”; we don’t like to over-commit to a single belief in the face of uncertainty.

I think this is what best explains findings 1 and 2. We don’t donate everything, because that requires committing totally to the belief that contributing is always better. We also don’t donate nothing, because that requires committing totally to the belief that contributing is always worse.

And of course we donate more as the payoffs to donating more increase; that also just seems reasonable. If something is better, you do more of it!

These choices could be modeled formally by assigning some sort of probability distribution over others’ choices, but in a rather unconventional way. We can’t simply assume that other people will randomly choose some decision and then optimize accordingly—that just gives you back the game theory prediction. We have to assume that our behavior and the behavior of others is in some sense correlated: if we decide to donate, we reason that others are more likely to donate as well.
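As a toy illustration of what such correlated beliefs might look like (this is my own sketch, not a model from the literature): let your own choice shift your forecast of how many others donate.

```python
# Toy "correlated beliefs" model: your own choice shifts your forecast
# of others' behavior. All parameter values are illustrative assumptions.

def expected_payoff(donate, others_rate, endowment=10, m=3, n=4):
    """Expected payoff of donating $1 (or not), if you expect a fraction
    `others_rate` of the other n-1 players to donate $1 each."""
    mine = 1 if donate else 0
    expected_pot = m * (mine + (n - 1) * others_rate)
    return endowment - mine + expected_pot / n

# If donating makes you expect 80% of others to donate, while defecting
# makes you expect only 30%, donating comes out ahead even though m < n:
print(round(expected_payoff(True, 0.8), 2))
print(round(expected_payoff(False, 0.3), 2))
```

The game theory objection, of course, is that your choice can't causally change anyone else's; the empirical observation is that people who reason this way nonetheless walk away with more money.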

Stated like that, this sounds irrational; some economists have taken to calling it “magical thinking”. Yet, as I always like to point out to such economists: On average, people who do that make more money in the games. Economists playing other economists always make very little money in these games, because they turn on each other immediately. So who is “irrational” now?

Indeed, if you ask people to predict how others will behave in these games, they generally do better than the game theory prediction: They say, correctly, that some people will give nothing, most will give something, and hardly any will give everything. The same “reasonableness” that they use to motivate their own decisions, they also accurately apply to forecasting the decisions of others.

Of course, to say that something is “reasonable” may be ultimately to say that it conforms to our heuristics well. To really have a theory, I need to specify exactly what those heuristics are.

“Don’t put all your eggs in one basket” seems to be one, but it’s probably not the only one that matters; my guess is that there are circumstances in which people would actually choose all-or-nothing, like if we said that the multiplier was 0.5 (so everyone giving to the group would make everyone worse off) or 10 (so that giving to the group makes you and everyone else way better off).

“Higher payoffs are better” is probably one as well, but precisely formulating that is actually surprisingly difficult. Higher payoffs for you? For the group? Conditional on what? Do you hold others’ behavior constant, or assume it is somehow affected by your own choices?

And of course, the theory wouldn’t be much good if it only worked on public goods games (though even that would be a substantial advance at this point). We want a theory that explains a broad class of human behavior; we can start with simple economics experiments, but ultimately we want to extend it to real-world choices.

Hyperbolic discounting: Why we procrastinate

Mar 25 JDN 2458203

Lately I’ve been so occupied by Trump and politics and various ideas from environmentalists that I haven’t really written much about the cognitive economics that was originally planned to be the core of this blog. So, I thought that this week I would take a step out of the political fray and go back to those core topics.

Why do we procrastinate? Why do we overeat? Why do we fail to exercise? It’s quite mysterious, from the perspective of neoclassical economic theory. We know these things are bad for us in the long run, and yet we do them anyway.

The reason has to do with the way our brains deal with time. We value the future less than the present—but that’s not actually the problem. The problem is that we do so inconsistently.

A perfectly-rational neoclassical agent would use time-consistent discounting: the value placed on a given delay doesn’t depend on when the comparison is made or on the size of the stakes. If having $100 in 2019 is as good as having $110 in 2020, then having $1000 in 2019 is as good as having $1100 in 2020; and if I ask you again in 2019, you’ll still agree that having $100 in 2019 is as good as having $110 in 2020. A perfectly-rational individual would have a certain discount rate (in this case, 10% per year), and would apply it consistently at all times to all things.
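That consistency is easy to verify mechanically with a fixed 10% annual rate:

```python
# Exponential (time-consistent) discounting at 10% per year.

def present_value(amount, years_away, rate=0.10):
    return amount / (1 + rate) ** years_away

# From 2018: $100 one year out vs $110 two years out — equivalent.
print(round(present_value(100, 1), 2), round(present_value(110, 2), 2))
# From 2019: still equivalent, so no preference reversal.
print(round(present_value(100, 0), 2), round(present_value(110, 1), 2))
# Tenfold stakes change nothing.
print(round(present_value(1000, 1), 2), round(present_value(1100, 2), 2))
```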

This is of course not how human beings behave at all.

A much more likely pattern is that you would agree, in 2018, that having $100 in 2019 is as good as having $110 in 2020 (a discount rate of 10%). But then if I wait until 2019, and then offer you the choice between $100 immediately and $120 in a year, you’ll probably take the $100 immediately—even though a year ago, you told me you wouldn’t. Your discount rate rose from 10% to at least 20% in the intervening time.

The leading model in cognitive economics right now to explain this is called hyperbolic discounting. The precise functional form of a hyperbola has been called into question by recent research, but the general pattern is definitely right: We act as though time matters a great deal when discussing time intervals that are close to us, but treat time as unimportant when discussing time intervals that are far away.
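The qualitative difference is easy to see numerically. A common hyperbolic form puts a weight of 1/(1 + kt) on a payoff t days away; here is a quick comparison with exponential weights (the value of k and the daily factor are arbitrary choices for illustration):

```python
# Weight placed on a payoff t days away, under each discounting scheme.

def exponential(t, daily_factor=0.9):
    return daily_factor ** t

def hyperbolic(t, k=1.0):
    return 1 / (1 + k * t)

for t in (0, 1, 2, 10, 11):
    print(t, round(exponential(t), 3), round(hyperbolic(t), 3))
# Exponential weights fall by the same 10% each day. Hyperbolic weights
# are cut in half between today and tomorrow, yet barely change between
# day 10 and day 11: near delays loom large, distant ones blur together.
```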

How does this explain procrastination and other failures of self-control over time? Let’s try an example.

Let’s say that you have a project you need to finish by the end of the day Friday, which has a benefit to you, received on Saturday, that I will arbitrarily scale at 1000 utilons.

Then, let’s say it’s Monday. You have five days to work on it, and each day of work costs you 100 utilons. If you work all five days, the project will get done.

If you skip a day of work, you will need to work so much harder that one of the following days your cost of work will be 300 utilons instead of 100. If you skip two days, you’ll have to pay 300 utilons twice. And if you skip three or more days, the project will not be finished and it will all be for naught.

If you don’t discount time at all (which, over a week, is probably close to optimal), the answer is obvious: Work all five days. Pay 100+100+100+100+100 = 500, receive 1000. Net benefit: 500.

But even if you discount time, as long as you do so consistently, you still wouldn’t procrastinate.

Let’s say your discount rate is extremely high (maybe you’re dying or something), so that each day is only worth 80% as much as the previous. Benefit that’s worth 1 on Monday is worth 0.8 if it comes on Tuesday, 0.64 if it comes on Wednesday, 0.512 if it comes on Thursday, 0.4096 if it comes on Friday, and 0.32768 if it comes on Saturday. Then instead of paying 100+100+100+100+100 to get 1000, you’re paying 100+80+64+51+41=336 to get 328. It’s not worth doing the project; you should just enjoy your last few days on Earth. That’s not procrastinating; that’s rationally choosing not to undertake a project that isn’t worthwhile under your circumstances.
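The arithmetic for this steep-but-consistent discounter can be checked directly:

```python
# Work Monday-Friday (days 0-4), each day worth 80% of the one before;
# the 1000-utilon benefit arrives Saturday (day 5).
d = 0.8
cost = sum(100 * d ** t for t in range(5))
benefit = 1000 * d ** 5
print(round(cost, 2), round(benefit, 2))  # about 336 vs 328: not worth it
```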

Procrastinating would look more like this: You skip the first two days, then work 100 the third day, then work 300 each of the last two days, finishing the project. If you didn’t discount at all, you would pay 100+300+300=700 to get 1000, so your net benefit has been reduced to 300.

There’s no consistent discount rate that would make this rational. For it to be worth paying an extra 200 on each of Thursday and Friday in order to avoid 100 on each of Monday and Tuesday, you’d have to be discounting at least about 21% per day. But if you’re discounting that much, you shouldn’t bother with the project at all.

There is, however, an inconsistent discounting scheme under which it makes perfect sense. Suppose that instead of consistently discounting some percentage each day, psychologically it feels like this: The value is divided by one plus the number of days of delay (that inverse relationship is what makes it hyperbolic). So the same amount of benefit which is worth 1 on Monday is only worth 1/2 if it comes on Tuesday, 1/3 if on Wednesday, 1/4 if on Thursday, and 1/5 if on Friday.

So, when thinking about your weekly schedule on Monday, you realize that by pushing Monday’s work back to Thursday, you can gain 100 today at an extra cost of only 200/4 = 50, since Thursday’s make-up work (300 instead of 100) is weighted by 1/4. And by pushing Tuesday’s work back to Friday, you can gain 100/2 = 50 at an extra cost of only 200/5 = 40. So now it makes perfect sense to have fun on Monday and Tuesday, start working on Wednesday, and cram the biggest work into Thursday and Friday: viewed from Monday, the crammed plan costs 100/3 + 300/4 + 300/5 = 168, far less than the 100 + 100/2 + 100/3 + 100/4 + 100/5 = 228 that working all five days would cost. (Against the benefit of 1000/6 = 167 the crammed plan is roughly a wash, but today’s question is only whether to work today, and skipping clearly wins.)

But now think about what happens when you come to Wednesday. The work today costs 100. The make-up work on Thursday costs 300/2 = 150, and on Friday 300/3 = 100. The benefit of completing the project will be 1000/4 = 250. So you are paying 100+150+100 = 350 to get a benefit of only 250. It’s not worth it anymore! You’ve changed your mind. So you don’t work Wednesday.

At that point, it’s too late, so you don’t work Thursday, you don’t work Friday, and the project doesn’t get done. You have procrastinated away the benefits you could have gotten from doing this project. If only you could have done the work on Monday and Tuesday, then on Wednesday it would have been worthwhile to continue: 100/1+100/2+100/3 = 183 is less than the benefit of 250.
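The whole reversal can be reproduced in a few lines from the setup above (100 utilons per workday, 300 on a make-up day, 1000 arriving Saturday, weights of 1/(delay + 1)); the function name is my own:

```python
# Reproducing the preference reversal. Days: Mon=0 ... Fri=4; the
# 1000-utilon benefit arrives Saturday (day 5). A payoff `delay` days
# ahead is weighted by 1/(delay + 1).

def plan_value(work_by_day, today):
    """Net value, seen from `today`, of the work still ahead plus the benefit."""
    weight = lambda day: 1.0 / (day - today + 1)
    cost = sum(c * weight(day) for day, c in work_by_day.items() if day >= today)
    return 1000 * weight(5) - cost

diligent = {0: 100, 1: 100, 2: 100, 3: 100, 4: 100}
crammed = {2: 100, 3: 300, 4: 300}   # skip Mon/Tue, make it up Thu/Fri

# On Monday, cramming looks far better than working every day...
print(round(plan_value(diligent, 0), 1), round(plan_value(crammed, 0), 1))
# ...but on Wednesday the remaining crammed work no longer justifies the
# benefit (350 against 250), while it still would had Mon/Tue been worked:
print(round(plan_value(crammed, 2), 1))    # abandon
print(round(plan_value(diligent, 2), 1))   # continue
```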

What went wrong? The key event was the preference reversal: While on Monday you preferred having fun on Monday and working on Thursday to working on both days, when the time came you changed your mind. Someone with time-consistent discounting would never do that; they would either prefer one or the other, and never change their mind.

One way to think about this is to imagine future versions of yourself as different people, who agree with you on most things, but not on everything. They’re like friends or family; you want the best for them, but you don’t always see eye-to-eye.

Generally we find that our future selves are less rational about choices than we are. To be clear, this doesn’t mean that we’re all declining in rationality over time. Rather, any given decision is always closer in time to the future self who faces it than it is to our current self, and the closer a decision gets, the more our irrational short-term discounting takes over.

This is why it’s useful to plan and make commitments. If starting on Monday you committed yourself to working every single day, you’d get the project done on time and everything would work out fine. Better yet, if you committed yourself last week to starting work on Monday, you wouldn’t even feel conflicted; you would be entirely willing to pay a cost of 100/8+100/9+100/10+100/11+100/12=51 to get a benefit of 1000/13=77. So you could set up some sort of scheme where you tell your friends ahead of time that you can’t go out that week, or you turn off access to social media sites (there are apps that will do this for you), or you set up a donation to an “anti-charity” you don’t like that will trigger if you fail to complete the project on time (there are websites to do that for you).
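The precommitment arithmetic also checks out: from a vantage point a week early, Monday is seven days away (weight 1/8) and the payoff twelve days away (weight 1/13).

```python
# Viewed a week in advance with 1/(delay + 1) weighting: workdays are
# 7-11 days away, the 1000-utilon payoff is 12 days away.
cost = sum(100 / (delay + 1) for delay in range(7, 12))
benefit = 1000 / 13
print(round(cost), round(benefit))  # 51 and 77: well worth committing
```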

There is an even simpler way: Make a promise to yourself. This one can be tricky to follow through on, but if you can train yourself to do it, it is extraordinarily powerful, and it doesn’t carry the extra costs that many other commitment devices involve. If you can make yourself feel as bad about breaking a promise to yourself as you would about breaking a promise to someone else, then you can dramatically increase your own self-control at very little cost. The challenge lies in actually cultivating that attitude, and then in making only promises you can keep and actually keeping them. This, too, is a delicate balance; it is dangerous to over-commit to promises to yourself and feel too much pain when you fail to meet them.

But given the strong correlation between self-control and long-term success, training yourself to be even a little better at it can provide enormous benefits.

If you ever get around to it, that is.