Are some ideas too ridiculous to bother with?

Apr 22 JDN 2458231

Flat Earth. Young-Earth Creationism. Reptilians. 9/11 “Truth”. Rothschild conspiracies.

There are an astonishing number of ideas that satisfy two apparently-contrary conditions:

  1. They are so obviously ridiculous that even a few minutes of honest, rational consideration of evidence that is almost universally available will immediately refute them;
  2. They are believed by tens or hundreds of millions of otherwise-intelligent people.

Young-Earth Creationism is probably the most alarming, seeing as it grips the minds of some 38% of Americans.

What should we do when faced with such ideas? This is something I’ve struggled with before.

I’ve spent a lot of time and effort trying to actively address and refute them—but I don’t think I’ve even once actually persuaded someone who believes these ideas to change their mind. This doesn’t mean my time and effort were entirely wasted; it’s possible that I managed to convince bystanders, or gained some useful understanding, or simply improved my argumentation skills. But it does seem likely that my time and effort were mostly wasted.

It’s tempting, therefore, to give up entirely, and just let people go on believing whatever nonsense they want to believe. But there’s a rather serious downside to that as well: Thirty-eight percent of Americans.

These people vote. They participate in community decisions. They make choices that affect the rest of our lives. Nearly all of those Creationists are Evangelical Christians—and White Evangelical Christians voted overwhelmingly in favor of Donald Trump. I can’t be sure that changing their minds about the age of the Earth would also change their minds about voting for Trump, but I can say this: If all the Creationists in the US had simply not voted, Hillary Clinton would have won the election.

And let’s not leave the left wing off the hook either. Jill Stein is a 9/11 “Truther”, and pulled a lot of fellow “Truthers” to her cause in the election as well. Had all of Jill Stein’s votes gone to Hillary Clinton instead, again Hillary would have won, even if all the votes for Trump had remained the same. (That said, there is reason to think that if Stein had dropped out, most of those folks wouldn’t have voted at all.)

Therefore, I don’t think it is safe to simply ignore these ridiculous beliefs. We need to do something; the question is what.

We could try to censor them, but first of all that violates basic human rights—which should be a sufficient reason not to do it—and second, it probably wouldn’t even work. Censorship typically leads to radicalization, not assimilation.

We could try to argue against them. Ideally this would be the best option, but it has not shown much effect so far. The kind of person who sincerely believes that the Earth is 6,000 years old (let alone that governments are secretly ruled by reptilian alien invaders) isn’t the kind of person who is highly responsive to evidence and rational argument.

In fact, there is reason to think that these people don’t actually believe what they say the same way that you and I believe things. I’m not saying they’re lying, exactly. They think they believe it; they want to believe it. They believe in believing it. But they don’t actually believe it—not the way that I believe that cyanide is poisonous or the way I believe the sun will rise tomorrow. It isn’t fully integrated into the way that they anticipate outcomes and choose behaviors. It’s more of a free-floating sort of belief, where professing a particular belief allows them to feel good about themselves, or represent their status in a community.

To be clear, it isn’t that these beliefs are unimportant to them; on the contrary, they are in some sense more important. Creationism isn’t really about the age of the Earth; it’s about who you are and where you belong. A conventional belief can be changed by evidence about the world because it is about the world; a belief-in-belief can’t be changed by evidence because it was never really about that.

But if someone’s ridiculous belief is really about their identity, how do we deal with that? I can’t refute an identity. If your identity is tied to a particular social group, maybe they could ostracize you and cause you to lose the identity; but an outsider has no power to do that. (Even then, I strongly suspect that, for instance, most excommunicated Catholics still see themselves as Catholic.) And if it’s a personal identity not tied to a particular group, even that option is unavailable.

Where, then, does that leave us? It would seem that we can’t change their minds—but we also can’t afford not to change their minds. We are caught in a terrible dilemma.

I think there might be a way out. It’s a bit counter-intuitive, but I think what we need to do is stop taking them seriously as beliefs, and start treating them purely as announcements of identity.

So when someone says something like, “The Rothschilds run everything!”, instead of responding as though this were a coherent proposition being asserted, treat it as if someone had announced, “Boo! I hate the Red Sox!” Belief in the Rothschild conspiracies isn’t a well-defined set of propositions about the world; it’s an assertion of membership in a particular sort of political sect that is vaguely left-wing and anarchist. You don’t really think the Rothschilds rule everything. You just want to express your (quite justifiable) anger at how our current political system privileges the rich.

Likewise, when someone says they think the Earth is 6,000 years old, you could try to present the overwhelming scientific evidence that they are wrong—but it might be more productive, and it is certainly easier, to just think of this as a funny way of saying “I’m an Evangelical Christian”.

Will this eliminate the ridiculous beliefs? Not immediately. But it might ultimately do so, in the following way: By openly acknowledging the belief-in-belief as a signaling mechanism, we can open opportunities for people to develop new, less pathological methods of signaling. (Instead of saying you think the Earth is 6,000 years old, maybe you could wear a funny hat, like Orthodox Jews do. Funny hats don’t hurt anybody. Everyone loves funny hats.) People will always want to signal their identity, and there are fundamental reasons why such signals will typically be costly for those who use them; but we can try to make them not so costly for everyone else.

This also makes arguments a lot less frustrating, at least at your end. It might make them more frustrating at the other end, because people want their belief-in-belief to be treated like proper belief, and you’ll be refusing them that opportunity. But this is not such a bad thing; if we make it more frustrating to express ridiculous beliefs in public, we might manage to reduce the frequency of such expression.

Reasonableness and public goods games

Apr 1 JDN 2458210

There’s a very common economics experiment called a public goods game, often used to study cooperation and altruistic behavior. I’m actually planning on running a variant of such an experiment for my second-year paper.

The game is quite simple, which is part of why it is used so frequently: You are placed into a group of people (usually about four), and given a little bit of money (say $10). Then you are offered a choice: You can keep the money, or you can donate some of it to a group fund. Money in the group fund will be multiplied by some factor (usually about two) and then redistributed evenly to everyone in the group. So for example if you donate $5, that will become $10, split four ways, so you’ll get back $2.50.
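
For concreteness, here is a minimal sketch in Python of the payoff arithmetic just described; the endowment, group size, and multiplier are only the illustrative values from this paragraph, not the parameters of any particular experiment.

    # Payoff arithmetic for a one-shot public goods game:
    # donations are pooled, multiplied, and split evenly among the group.

    def payoff(my_donation, others_donations, endowment=10, multiplier=2):
        """Money I end up with, given my donation and everyone else's donations."""
        group_fund = (my_donation + sum(others_donations)) * multiplier
        share = group_fund / (1 + len(others_donations))
        return endowment - my_donation + share

    # The example from the text: I donate $5 in a group of 4 with a 2x multiplier.
    # My $5 becomes $10, split four ways, so I get $2.50 back from my own donation.
    print(payoff(5, [0, 0, 0]))      # 10 - 5 + 2.50 = 7.50
    # If everyone donates everything, we each end up with double the endowment.
    print(payoff(10, [10, 10, 10]))  # 0 + 80/4 = 20.0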

Donating more to the group will benefit everyone else, but at a cost to yourself. The game is usually set up so that the best outcome for everyone is if everyone donates the maximum amount, but the best outcome for you, holding everyone else’s choices constant, is to donate nothing and keep it all.

Yet it is a very robust finding that most people do neither of those things. There’s still a good deal of uncertainty surrounding what motivates people to donate what they do, but certain patterns have emerged:

  1. Most people donate something, but hardly anyone donates everything.
  2. Increasing the multiplier tends to smoothly increase how much people donate.
  3. The number of people in the group isn’t very important, though very small groups (e.g. 2) behave differently from very large groups (e.g. 50).
  4. Letting people talk to each other tends to increase the rate of donations.
  5. Repetition of the game, or experience from previous games, tends to result in decreasing donation over time.
  6. Economists donate less than other people.

Number 6 is unfortunate, but easy to explain: Indoctrination into game theory and neoclassical economics has taught economists that selfish behavior is efficient and optimal, so they behave selfishly.

Number 3 is also fairly easy to explain: Very small groups allow opportunities for punishment and coordination that don’t exist in large groups. Think about how you would respond when faced with 2 defectors in a group of 4 as opposed to 10 defectors in a group of 50. You could punish the 2 by giving less next round; but punishing the 10 would end up punishing 40 others who had contributed like they were supposed to.

Number 4 is a very interesting finding. Game theory says that communication shouldn’t matter, because there is a unique Nash equilibrium: Donate nothing. All the promises in the world can’t change what is the optimal response in the game. But in fact, human beings don’t like to break their promises, and so when you get a bunch of people together and they all agree to donate, most of them will carry through on that agreement most of the time.

Number 5 is on the frontier of research right now. There are various theoretical accounts for why it might occur, but none of the models proposed so far have much predictive power.

But my focus today will be on findings 1 and 2.

If you’re not familiar with the underlying game theory, finding 2 may seem obvious to you: Well, of course if you increase the payoff for donating, people will donate more! It’s precisely that sense of obviousness which I am going to appeal to in a moment.

In fact, the game theory makes a very sharp prediction: For N players, if the multiplier is less than N, you should always contribute nothing. Only if the multiplier becomes larger than N should you donate—and at that point you should donate everything. The game theory prediction is not a smooth increase; it’s all-or-nothing. The only time game theory predicts intermediate amounts is on the knife-edge where the multiplier exactly equals N, at which point each player is indifferent between donating and not donating.

But it feels reasonable that increasing the multiplier should increase donation, doesn’t it? It’s a “safer bet” in some sense to donate $1 if the payoff to everyone is $3 and the payoff to yourself is $0.75 than if the payoff to everyone is $1.04 and the payoff to yourself is $0.26. The cost-benefit analysis comes out better: In the former case, you can gain up to $2 if everyone donates, but would only lose $0.25 if you donate alone; but in the latter case, you would only gain $0.04 if everyone donates, and would lose $0.74 if you donate alone.
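
Here is the same cost-benefit arithmetic worked out in a short Python sketch (the multipliers 3 and 1.04 and the group size of 4 are the ones from the example above). Note that the sign of the payoff from donating alone flips only when the multiplier exceeds the group size, which is exactly the all-or-nothing game-theory prediction.

    # Cost-benefit of donating $1 in a group of 4, for the two multipliers above.

    def gain_if_everyone_donates(donation, n, multiplier):
        # Everyone donates the same amount; this is each player's net gain.
        return donation * multiplier - donation

    def gain_if_donating_alone(donation, n, multiplier):
        # Only I donate; I get back just my 1/n share of my multiplied donation.
        return donation * multiplier / n - donation

    for multiplier in (3.0, 1.04):
        print(multiplier,
              round(gain_if_everyone_donates(1, 4, multiplier), 2),  # +2.0 vs +0.04
              round(gain_if_donating_alone(1, 4, multiplier), 2))    # -0.25 vs -0.74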

I think this notion of “reasonableness” is a deep principle that underlies a great deal of human thought. This is something that is sorely lacking from artificial intelligence: The same AI that tells you the precise width of the English Channel to the nearest foot may also tell you that the Earth is 14 feet in diameter, because the former was in its database and the latter wasn’t. Yes, Watson may have won on Jeopardy, but it (he?) also made a nonsensical response to the Final Jeopardy question.

Human beings like to “sanity-check” our results against prior knowledge, making sure that everything fits together. And, of particular note for public goods games, human beings like to “hedge our bets”; we don’t like to over-commit to a single belief in the face of uncertainty.

I think this is what best explains findings 1 and 2. We don’t donate everything, because that requires committing totally to the belief that contributing is always better. We also don’t donate nothing, because that requires committing totally to the belief that contributing is always worse.

And of course we donate more as the payoffs to donating more increase; that also just seems reasonable. If something is better, you do more of it!

These choices could be modeled formally by assigning some sort of probability distribution over others’ choices, but in a rather unconventional way. We can’t simply assume that other people will randomly choose some decision and then optimize accordingly—that just gives you back the game theory prediction. We have to assume that our behavior and the behavior of others is in some sense correlated; if we decide to donate, we reason that others are more likely to donate as well.

Stated like that, this sounds irrational; some economists have taken to calling it “magical thinking”. Yet, as I always like to point out to such economists: On average, people who do that make more money in the games. Economists playing other economists always make very little money in these games, because they turn on each other immediately. So who is “irrational” now?
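
To illustrate that point, here is a toy simulation of my own construction (not a model from the experimental literature; the endowment, multiplier, and group size are just the earlier illustrative values): when cooperative types mostly end up grouped with other cooperative types, the way economists end up playing other economists, the cooperators earn more on average than the defectors, even though defecting is the better reply within any single group.

    # Toy illustration: perfectly assortative matching in a public goods game.

    ENDOWMENT, MULTIPLIER, GROUP_SIZE = 10, 2, 4

    def play(group):
        """Payoffs for one group, where cooperators donate everything."""
        donations = [ENDOWMENT if kind == "cooperator" else 0 for kind in group]
        share = sum(donations) * MULTIPLIER / GROUP_SIZE
        return [ENDOWMENT - d + share for d in donations]

    # 25 all-cooperator groups and 25 all-defector groups.
    groups = [["cooperator"] * GROUP_SIZE] * 25 + [["defector"] * GROUP_SIZE] * 25

    earnings = {"cooperator": [], "defector": []}
    for group in groups:
        for kind, pay in zip(group, play(group)):
            earnings[kind].append(pay)

    for kind, pays in earnings.items():
        print(kind, sum(pays) / len(pays))  # cooperators average 20, defectors 10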

Indeed, if you ask people to predict how others will behave in these games, they generally do better than the game theory prediction: They say, correctly, that some people will give nothing, most will give something, and hardly any will give everything. The same “reasonableness” that they use to motivate their own decisions, they also accurately apply to forecasting the decisions of others.

Of course, to say that something is “reasonable” may be ultimately to say that it conforms to our heuristics well. To really have a theory, I need to specify exactly what those heuristics are.

“Don’t put all your eggs in one basket” seems to be one, but it’s probably not the only one that matters; my guess is that there are circumstances in which people would actually choose all-or-nothing, like if we said that the multiplier was 0.5 (so everyone giving to the group would make everyone worse off) or 10 (so that giving to the group makes you and everyone else way better off).

“Higher payoffs are better” is probably one as well, but precisely formulating that is actually surprisingly difficult. Higher payoffs for you? For the group? Conditional on what? Do you hold others’ behavior constant, or assume it is somehow affected by your own choices?

And of course, the theory wouldn’t be much good if it only worked on public goods games (though even that would be a substantial advance at this point). We want a theory that explains a broad class of human behavior; we can start with simple economics experiments, but ultimately we want to extend it to real-world choices.

Hyperbolic discounting: Why we procrastinate

Mar 25 JDN 2458203

Lately I’ve been so occupied by Trump and politics and various ideas from environmentalists that I haven’t really written much about the cognitive economics that was originally planned to be the core of this blog. So, I thought that this week I would take a step out of the political fray and go back to those core topics.

Why do we procrastinate? Why do we overeat? Why do we fail to exercise? It’s quite mysterious, from the perspective of neoclassical economic theory. We know these things are bad for us in the long run, and yet we do them anyway.

The reason has to do with the way our brains deal with time. We value the future less than the present—but that’s not actually the problem. The problem is that we do so inconsistently.

A perfectly-rational neoclassical agent would use time-consistent discounting: the trade-off between two dates depends only on the interval between them, not on when you are asked or on the stakes involved. If having $100 in 2019 is as good as having $110 in 2020, then having $1000 in 2019 is as good as having $1100 in 2020; and if I ask you again in 2019, you’ll still agree that having $100 in 2019 is as good as having $110 in 2020. A perfectly-rational individual would have a certain discount rate (in this case, 10% per year), and would apply it consistently at all times on all things.

This is of course not how human beings behave at all.

A much more likely pattern is that you would agree, in 2018, that having $100 in 2019 is as good as having $110 in 2020 (a discount rate of 10%). But then if I wait until 2019, and then offer you the choice between $100 immediately and $120 in a year, you’ll probably take the $100 immediately—even though a year ago, you told me you wouldn’t. Your discount rate rose from 10% to at least 20% in the intervening time.

The leading model in cognitive economics right now to explain this is called hyperbolic discounting. The precise functional form of a hyperbola has been called into question by recent research, but the general pattern is definitely right: We act as though time matters a great deal when discussing time intervals that are close to us, but treat time as unimportant when discussing time intervals that are far away.
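
A minimal sketch of the contrast (the 10% rate and the simple 1/(1+kt) hyperbola are illustrative assumptions, not estimates from that research): under exponential discounting the one-period discount factor is the same no matter how far away the period is, while under a hyperbola it is harsh for delays near the present and nearly negligible for delays far away.

    # Exponential vs. simple hyperbolic discounting of a reward t periods away.

    def exponential(t, rate=0.10):
        return 1 / (1 + rate) ** t

    def hyperbolic(t, k=1.0):
        return 1 / (1 + k * t)

    for t in range(6):
        step_exp = exponential(t + 1) / exponential(t)  # constant: ~0.909
        step_hyp = hyperbolic(t + 1) / hyperbolic(t)    # 0.50, 0.67, 0.75, 0.80, ...
        print(t, round(step_exp, 3), round(step_hyp, 3))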

How does this explain procrastination and other failures of self-control over time? Let’s try an example.

Let’s say that you have a project you need to finish by the end of the day Friday, which has a benefit to you, received on Saturday, that I will arbitrarily scale at 1000 utilons.

Then, let’s say it’s Monday. You have five days to work on it, and each day of work costs you 100 utilons. If you work all five days, the project will get done.

If you skip a day of work, you will need to work so much harder that one of the following days your cost of work will be 300 utilons instead of 100. If you skip two days, you’ll have to pay 300 utilons twice. And if you skip three or more days, the project will not be finished and it will all be for naught.

If you don’t discount time at all (which, over a week, is probably close to optimal), the answer is obvious: Work all five days. Pay 100+100+100+100+100 = 500, receive 1000. Net benefit: 500.

But even if you discount time, as long as you do so consistently, you still wouldn’t procrastinate.

Let’s say your discount rate is extremely high (maybe you’re dying or something), so that each day is only worth 80% as much as the previous. Benefit that’s worth 1 on Monday is worth 0.8 if it comes on Tuesday, 0.64 if it comes on Wednesday, 0.512 if it comes on Thursday, 0.4096 if it comes on Friday, and 0.32768 if it comes on Saturday. Then instead of paying 100+100+100+100+100 to get 1000, you’re paying 100+80+64+51+41=336 to get 328. It’s not worth doing the project; you should just enjoy your last few days on Earth. That’s not procrastinating; that’s rationally choosing not to undertake a project that isn’t worthwhile under your circumstances.

Procrastinating would look more like this: You skip the first two days, then work 100 the third day, then work 300 each of the last two days, finishing the project. If you didn’t discount at all, you would pay 100+300+300=700 to get 1000, so your net benefit has been reduced to 300.

There’s no consistent discount rate that would make this rational. If it was worth giving up 200 on Thursday and Friday to get 100 on Monday and Tuesday, you must be discounting at least 26% per day. But if you’re discounting that much, you shouldn’t bother with the project at all.

There is however an inconsistent discounting by which it makes perfect sense. Suppose that instead of consistently discounting by some percentage each day, psychologically it feels like this: The value is inversely proportional to how far out it is, counting today as day one (that’s roughly what it means to be hyperbolic). So the same amount of benefit on Monday which is worth 1 is only worth 1/2 if it comes on Tuesday, 1/3 if on Wednesday, 1/4 if on Thursday, and 1/5 if on Friday.

So, when thinking about your weekly schedule on Monday, you realize that by pushing back Monday’s work to Thursday, you can gain 100 today at a cost of only 200/4 = 50, since Thursday is four days out on this scale. And by pushing back Tuesday’s work to Friday, you can gain 100/2 = 50 today at a cost of only 200/5 = 40. So now it makes perfect sense to have fun on Monday and Tuesday, start working on Wednesday, and cram the biggest work into Thursday and Friday. Worked out in full, that deferred plan costs 100/3 + 300/4 + 300/5 = 168 as seen from Monday, far less than the 100 + 100/2 + 100/3 + 100/4 + 100/5 = 228 that working every day would cost, and close enough to the 1000/6 = 166 benefit of finishing that the project still seems worth attempting.

But now think about what happens when you come to Wednesday. The work today costs 100. The work on Thursday costs 300/2 = 150. The work on Friday costs 300/3 = 100. The benefit of completing the project will be 1000/4 = 250. So you are paying 100+150+100=350 to get a benefit of only 250. It’s not worth it anymore! You’ve changed your mind. So you don’t work Wednesday.

At that point, it’s too late, so you don’t work Thursday, you don’t work Friday, and the project doesn’t get done. You have procrastinated away the benefits you could have gotten from doing this project. If only you could have done the work on Monday and Tuesday, then on Wednesday it would have been worthwhile to continue: 100/1+100/2+100/3 = 183 is less than the benefit of 250.
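
Here is the whole worked example as a short Python sketch, using the same 1/n weighting (Monday is day 1, Saturday is day 6); the printed comparisons are the ones from the paragraphs above, with values rounded.

    # The procrastination example: a naive planner who re-evaluates each morning.

    BENEFIT = 1000           # received Saturday (day 6) if the project is finished
    NORMAL, CRAMMED = 100, 300

    def value(amount, day, today):
        """Present value of an amount arriving on `day`, as seen from `today`."""
        return amount / (day - today + 1)

    # Monday (day 1): each deferral looks like a good trade.
    print(value(100, 1, 1), "vs", value(200, 4, 1))  # 100.0 vs 50.0 -> push Monday's work to Thursday
    print(value(100, 2, 1), "vs", value(200, 5, 1))  # 50.0 vs 40.0  -> push Tuesday's work to Friday

    # Wednesday (day 3): the remaining plan is 100 today, 300 Thursday, 300 Friday.
    remaining = value(NORMAL, 3, 3) + value(CRAMMED, 4, 3) + value(CRAMMED, 5, 3)
    print(round(remaining), "vs", round(value(BENEFIT, 6, 3)))  # 350 vs 250 -> give up

    # Counterfactual: had you worked Monday and Tuesday, Wednesday would compare
    # 100 + 100/2 + 100/3 = 183 against the same 250 benefit -> keep going.
    print(round(sum(value(NORMAL, d, 3) for d in (3, 4, 5))),
          "vs", round(value(BENEFIT, 6, 3)))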

What went wrong? The key event was the preference reversal: While on Monday you preferred having fun on Monday and working on Thursday to working on both days, when the time came you changed your mind. Someone with time-consistent discounting would never do that; they would either prefer one or the other, and never change their mind.

One way to think about this is to imagine future versions of yourself as different people, who agree with you on most things, but not on everything. They’re like friends or family; you want the best for them, but you don’t always see eye-to-eye.

Generally we find that our future selves are less rational about choices than we are. To be clear, this doesn’t mean that we’re all declining in rationality over time. Rather, it comes from the fact that future decisions are inherently closer to our future selves than they are to our current selves, and the closer a decision gets the more likely we are to use irrational time discounting.

This is why it’s useful to plan and make commitments. If starting on Monday you committed yourself to working every single day, you’d get the project done on time and everything would work out fine. Better yet, if you committed yourself last week to starting work on Monday, you wouldn’t even feel conflicted; you would be entirely willing to pay a cost of 100/8+100/9+100/10+100/11+100/12=51 to get a benefit of 1000/13=77. So you could set up some sort of scheme where you tell your friends ahead of time that you can’t go out that week, or you turn off access to social media sites (there are apps that will do this for you), or you set up a donation to an “anti-charity” you don’t like that will trigger if you fail to complete the project on time (there are websites to do that for you).
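
A quick check of that arithmetic, with last Monday counted as day 1 so that the work falls on days 8 through 12 and the payoff arrives on day 13 (that indexing is my reading of the figures above):

    # Committing a week ahead: every cost and the benefit are far away, so they
    # are all heavily and nearly equally discounted under the 1/n weighting.
    work_cost = sum(100 / day for day in range(8, 13))
    benefit = 1000 / 13
    print(round(work_cost), round(benefit))  # 51 77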

There is even a simpler way: Make a promise to yourself. This one can be tricky to follow through on, but if you can train yourself to do it, it is extraordinarily powerful and doesn’t come with the additional costs that a lot of other commitment devices involve. If you can really make yourself feel as bad about breaking a promise to yourself as you would about breaking a promise to someone else, then you can dramatically increase your own self-control with very little cost. The challenge lies in actually cultivating that sort of attitude, and then in following through with making only promises you can keep and actually keeping them. This, too, can be a delicate balance; it is dangerous to over-commit to promises to yourself and feel too much pain when you fail to meet them.
But given the strong correlations between self-control and long-term success, trying to train yourself to be a little better at it can provide enormous benefits.
If you ever get around to it, that is.

Is grade inflation a real problem?

Mar 4 JDN 2458182

You can’t spend much time teaching at the university level and not hear someone complain about “grade inflation”. Almost every professor seems to believe in it, and yet they must all be participating in it, if it’s really such a widespread problem.

This could be explained as a collective action problem, a Tragedy of the Commons: If the incentives are always to have the students with the highest grades—perhaps because of administrative pressure, or in order to get better reviews from students—then even if all professors would prefer a harsher grading scheme, no individual professor can afford to deviate from the prevailing norms.

But in fact I think there is a much simpler explanation: Grade inflation doesn’t exist.

In economic growth theory, economists make a sharp distinction between inflation—increase in prices without change in underlying fundamentals—and growth—increase in the real value of output. I contend that there is no such thing as grade inflation—what we are in fact observing is grade growth.
Am I saying that students are actually smarter now than they were 30 years ago?

Yes. That’s exactly what I’m saying.

But don’t take it from me. Take it from the decades of research on the Flynn Effect: IQ scores have been rising worldwide at a rate of about 0.3 IQ points per year for as long as we’ve been keeping good records. Students today are about 10 IQ points smarter than students 30 years ago—a 2018 IQ score of 95 is equivalent to a 1988 score of 105, which is equivalent to a 1958 score of 115. There is reason to think this trend won’t continue indefinitely, since the effect is mainly concentrated at the bottom end of the distribution; but it has continued for quite some time already.
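
The conversion is simple arithmetic; here is a back-of-the-envelope sketch using the 0.3-points-per-year figure cited above (the exact outputs differ slightly from the rounded numbers in the text):

    # Re-norming an IQ score across years using the ~0.3 points/year Flynn effect.
    RATE = 0.3  # IQ points per year

    def equivalent_score(score, scored_year, norm_year):
        """What a score earned in scored_year corresponds to under norm_year norms."""
        return score + RATE * (scored_year - norm_year)

    print(equivalent_score(95, 2018, 1988))  # 104.0, roughly the "105" above
    print(equivalent_score(95, 2018, 1958))  # 113.0, roughly the "115" above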

This by itself would probably be enough to explain the observed increase in grades, but there’s more: College students are also a self-selected sample, admitted precisely because they were believed to be the smartest individuals in the application pool. Rising grades at top institutions are easily explained by rising selectivity at top schools: Harvard now accepts 5.6% of applicants. In 1942, Harvard accepted 92% of applicants. The odds of getting in have fallen from roughly 11:1 in favor to 17:1 against. Today, you need a 4.0 GPA, a 36 ACT in every category, glowing letters of recommendation, and hundreds of hours of extracurricular activities (or a family member who donated millions of dollars, of course) to get into Harvard. In the 1940s, you needed a high school diploma and a B average.

In fact, when educational researchers have tried to quantitatively study the phenomenon of “grade inflation”, they usually come back with the result that they simply can’t find it. The US Department of Education conducted a study in 1995 showing that average university grades had declined since 1965. Given that the Flynn effect raised IQ by almost 10 points during that time, maybe we should be panicking about grade deflation.

It really wouldn’t be hard to make that case: “Back in my day, you could get an A just by knowing basic algebra! Now they want these kids to take partial derivatives?” “We used to just memorize facts to ace the exam; but now teachers keep asking for reasoning and critical thinking?”

More recently, a study in 2013 found that grades rose at the high school level, but fell at the college level, and showed no evidence of losing any informativeness as a signaling mechanism. The only recent study I could find showing genuinely compelling evidence for grade inflation was a 2017 study of UK students estimating that grades are growing about twice as fast as the Flynn effect alone would predict. Most studies don’t even consider the possibility that students are smarter than they used to be—they just take it for granted that any increase in average grades constitutes grade inflation. Many of them don’t even control for the increase in selectivity—here’s one using the fact that Harvard’s average rose from 2.7 to 3.4 from 1960 to 2000 as evidence of “grade inflation” when Harvard’s acceptance rate fell from almost 30% to only 10% during that period.

Indeed, the real mystery is why so many professors believe in grade inflation, when the evidence for it is so astonishingly weak.

I think it’s the availability heuristic. Who are professors? They are the cream of the crop. They aced their way through high school, college, and graduate school, then got hired and earned tenure—they were one of a handful of individuals who won a fierce competition with hundreds of competitors at each stage. There are over 320 million people in the US, and only 1.3 million college faculty. This means that college faculty make up only about 0.4% of the population, drawn overwhelmingly from the very top of the academic distribution.

Combine that with the fact that human beings assort positively (we like to spend time with people who are similar to us) and use availability heuristic (we judge how likely something is based on how many times we have seen it).

Thus, when a professor compares her students to her own experience of college, she is remembering her fellow top-scoring students at elite educational institutions. She is recalling the extreme intellectual demands she had to meet to get where she is today, and erroneously assuming that these are representative of most of the population of her generation. She probably went to school at one of a handful of elite institutions, even if she now teaches at a mid-level community college: three quarters of college faculty come from the top one quarter of graduate schools.

And now she compares that memory to the students she has to teach, most of whom would not be able to meet such demands—but of course most people in her generation couldn’t either. She frets for the future of humanity only because not everyone is a genius like her.

Throw in the Curse of Knowledge: The professor doesn’t remember how hard it was to learn what she has learned so far, and so the fact that it seems easy now makes her think it was easy all along. “How can they not know how to take partial derivatives!?” Well, let’s see… were you born knowing how to take partial derivatives?

Giving a student an A for work far inferior to what you’d have done in their place isn’t unfair. Indeed, it would clearly be unfair to do anything less. You have years if not decades more education than they do, and you are from a self-selected elite sample of highly intelligent individuals. Expecting everyone to perform as well as you would is simply setting up most of the population for failure.

There are potential incentives for grade inflation that do concern me: In particular, a lot of international student visas and scholarship programs insist upon maintaining a B or even A- average to continue. Professors are understandably loath to condemn a student to having to drop out or return to their home country just because they scored 81% instead of 84% on the final exam. If we really intend to make C the average score, then students shouldn’t lose funding or visas just for scoring a B-. Indeed, I have trouble defending any threshold above outright failing—which is to say, a minimum score of D-. If you pass your classes, that should be good enough to keep your funding.

Yet apparently even this isn’t creating too much upward bias, as students who are 10 IQ points smarter are still getting about the same scores as their forebears. We should be celebrating that our population is getting smarter, but instead we’re panicking over “easy grading”.

But kids these days, am I right?

You know what? Let’s repeal Obamacare. Here’s my replacement.

Feb 18 JDN 2458168

By all reasonable measures, Obamacare has been a success. Healthcare costs are down and coverage rates are up. It reduced both the federal deficit and after-tax income inequality.

But Republicans have hated it the whole time, and in particular the individual mandate provision has always been unpopular. Under the Trump administration, the individual mandate has now been repealed.

By itself, this can only be disastrous. It threatens to undermine all the successes of the entire Obamacare system. Without the individual mandate, covering pre-existing conditions means that people can simply wait to get insurance until they need it—at which point it’s not insurance anymore. The risks stop being shared and end up concentrated on whoever gets sick, then we go back to people going bankrupt because they were unlucky enough to get cancer. The individual mandate was vital to making Obamacare work.

But I do actually understand why the individual mandate is unpopular: Nobody likes being forced into buying anything.

John Roberts ruled that the individual mandate was Constitutional on the grounds that it is economically equivalent to a tax. This is absolutely correct, and I applaud his sound reasoning.

That said, the individual mandate is not in fact psychologically equivalent to a tax.

Psychologically, being forced to specifically buy something or face punishment feels a lot more coercive than simply owing a certain amount of money that the government will use to buy something. Roberts is right; economically, these two things are equivalent. The same real goods get purchased, at the same people’s expense; the accounts balance in the same way. But it feels different.

And it would feel different to me too, if I were required to actually shop for that particular avionic component on that Apache helicopter my taxes paid for, or if I had to write a check for that particular section of Highway 405 that my taxes helped maintain. Yes, I know that I give the government a certain amount of money that they spent on salaries for US military personnel; but I’d find it pretty weird if they required me to actually hand over the money in cash to some specific Marine. (On the other hand, this sort of thing might actually give people a more visceral feel for the benefits of taxes, much as microfinance agencies like to show you the faces of particular people as you give them loans, whether or not those people are actually the ones getting your money.)

There’s another reason it feels different as well: We have framed the individual mandate as a penalty, as a loss. Human beings are loss averse; losing $10 feels about twice as bad as not getting $10. That makes the mandate more unpleasant, hence more unpopular.

What could we do instead? Well, obviously, we could implement a single-payer healthcare system like we already have in Medicare, like they have in Canada and the UK, or like they have in Scandinavia (#ScandinaviaIsBetter). And that’s really what we should do.

But since that doesn’t seem to be on the table right now, here’s my compromise proposal. Okay, yes, let’s repeal Obamacare. No more individual mandate. No fines for not having health insurance.

Here’s what we would do instead: You get a bonus refundable tax credit for having health insurance.

We adjust income tax rates upward just enough that total revenue ends up the same.

Say goodbye to the “individual mandate” and welcome the “health care bonus rebate”.

Most of you reading this are economically savvy enough to realize that’s the same thing. If I tax you $100, then refund $100 if you have health insurance, that’s completely equivalent to charging you a fine of $100 if you don’t have health insurance.
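
Here is that accounting identity as a tiny Python sketch (the $100 figures are the ones from this paragraph; the income level is arbitrary):

    # A $100 tax plus a $100 refundable credit for the insured is the same,
    # dollar for dollar, as a $100 penalty for the uninsured.

    def net_income_with_penalty(income, insured, penalty=100):
        return income - (0 if insured else penalty)

    def net_income_with_credit(income, insured, extra_tax=100, credit=100):
        return income - extra_tax + (credit if insured else 0)

    for insured in (True, False):
        print(insured,
              net_income_with_penalty(50_000, insured),
              net_income_with_credit(50_000, insured))  # identical either way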

But it doesn’t feel the same to most people. A fine feels like a punishment, like a loss. It hurts more than a mere foregone bonus, and it contains an element of disapproval and public shame.

Whereas, we forgo refundable tax credits all the time. You’ve probably forgone dozens of refundable tax credits you could have gotten, either because you didn’t know about them or because you realized they weren’t worth it to you.

Now instead of the government punishing you for such a petty crime as not having health insurance, the government is rewarding you for the responsible civic choice of having health insurance. We have replaced a mean, vindictive government with a friendly, supportive government.

Positive reinforcement is more reliable anyway. (Any child psychologist will tell you that while punishment is largely ineffective and corporal punishment is outright counterproductive, reward systems absolutely do work.) Uptake of health insurance should be at least as good as before, but the policy will be much more popular.

It’s a very simple change to make. It could be done in a single tax bill. Economically, it makes no difference at all. But psychologically—and politically—it could make all the difference in the world.

The right (and wrong) way to buy stocks

July 9, JDN 2457944

Most people don’t buy stocks at all. Stock equity is the quintessential form of financial wealth, and 42% of financial net wealth in the United States is held by the top 1%, while the bottom 80% owns essentially none.

Half of American households do not have any private retirement savings at all, and are depending either on employee pensions or Social Security for their retirement plans.

This is not necessarily irrational. In order to save for retirement, one must first have sufficient income to live on. Indeed, I got very annoyed at a “financial planning seminar” for grad students I attended recently, trying to scare us about the fact that almost none of us had any meaningful retirement savings. No, we shouldn’t have meaningful retirement savings, because our income is currently much lower than what we can expect to get once we graduate and enter our professions. It doesn’t make sense for someone scraping by on a $20,000 per year graduate student stipend to be saving up for retirement, when they can quite reasonably expect to be making $70,000-$100,000 per year once they finally get that PhD and become a professional economist (or sociologist, or psychologist or physicist or statistician or political scientist or material, mechanical, chemical, or aerospace engineer, or college professor in general, etc.). Even social workers, historians, and archaeologists make a lot more money than grad students. If you are already in the workforce and only expect to be getting small raises in the future, maybe you should start saving for retirement in your 20s. If you’re a grad student, don’t bother. It’ll be a lot easier to save once your income triples after graduation. (Personally, I keep about $700 in stocks mostly to get a feel for owning and trading stocks that I can apply later, not out of any serious expectation of supporting a retirement fund. Even at Warren Buffett-level returns I wouldn’t make more than $200 a year this way.)

Total US retirement savings are over $25 trillion, which… does actually sound low to me. In a country with a GDP now over $19 trillion, that means we’ve only saved a year and change of total income. If we had a rapidly growing population this might be fine, but we don’t; our population is fairly stable. People seem to be relying on economic growth to provide for their retirement, and since we are almost certainly at steady-state capital stock and fairly near full employment, that means waiting for technological advancement.

So basically people are hoping that we get to the Wall-E future where the robots will provide for us. And hey, maybe we will; but assuming that we haven’t abandoned capitalism by then (as they certainly haven’t in Wall-E), maybe you should try to make sure you own some assets to pay for robots with?

But okay, let’s set all that aside, and say you do actually want to save for retirement. How should you go about doing it?

Stocks are clearly the way to go. A certain proportion of government bonds also makes sense as a hedge against risk, and maybe you should even throw in the occasional commodity future. I wouldn’t recommend oil or coal at this point—either we do something about climate change and those prices plummet, or we don’t and we’ve got bigger problems—but it’s hard to go wrong with corn or steel, and for this one purpose it also can make sense to buy gold as well. Gold is not a magical panacea or the foundation of all wealth, but its price does tend to correlate negatively with stock returns, so it’s not a bad risk hedge.

Don’t buy exotic derivatives unless you really know what you’re doing—they can make a lot of money, but they can lose it just as fast—and never buy non-portfolio assets as a financial investment. If your goal is to buy something to make money, make it something you can trade at the click of a button. Buy a house because you want to live in that house. Buy wine because you like drinking wine. Don’t buy a house in the hopes of making a financial return—you’ll have leveraged your entire portfolio 10 to 1 while leaving it completely undiversified. And the problem with investing in wine, ironically, is its lack of liquidity.

The core of your investment portfolio should definitely be stocks. The biggest reason for this is the equity premium; equities—that is, stocks—get returns so much higher than other assets that it’s actually baffling to most economists. Bond returns are currently terrible, while stock returns are currently fantastic. The former is currently near 0% in inflation-adjusted terms, while the latter is closer to 16%. If this continues for the next 10 years, that means that $1000 put in bonds would be worth… $1000, while $1000 put in stocks would be worth $4400. So, do you want to keep the same amount of money, or quadruple your money? It’s up to you.
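
The compounding behind those figures, as a two-line sketch (the 0% and 16% rates are just the current figures quoted above, not a forecast):

    # Ten years of compounding at the two rates quoted above.
    def future_value(principal, annual_return, years):
        return principal * (1 + annual_return) ** years

    print(round(future_value(1000, 0.00, 10)))  # bonds at ~0% real:  1000
    print(round(future_value(1000, 0.16, 10)))  # stocks at ~16%:     4411 (the "$4400" above)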

Higher risk is generally associated with higher return, because rational investors will only accept additional risk when they get some additional benefit from it; and stocks are indeed riskier than most other assets, but not that much riskier. For this to be rational, people would need to be extremely risk-averse, to the point where they should never drive a car or eat a cheeseburger. (Of course, human beings are terrible at assessing risk, so what I really think is going on is that people wildly underestimate the risk of driving a car and wildly overestimate the risk of buying stocks.)

Next, you may be asking: How does one buy stocks? This doesn’t seem to be something people teach in school.

You will need a brokerage of some sort. There are many such brokerages, but they are basically all equivalent except for the fees they charge. Some of them will try to offer you various bells and whistles to justify whatever additional cut they get of your trades, but they are almost never worth it. You should choose one that has as low a trade fee as possible, because even a few dollars here and there can add up surprisingly quickly.

Fortunately, there is now at least one well-established reliable stock brokerage available to almost anyone that has a standard trade fee of zero. They are called Robinhood, and I highly recommend them. If they have any downside, it is ironically that they make trading too easy, so you can be tempted to do it too often. Learn to resist that urge, and they will serve you well and cost you nothing.

Now, which stocks should you buy? There are a lot of them out there. The answer I’m going to give may sound strange: All of them. You should buy all the stocks.

All of them? How can you buy all of them? Wouldn’t that be ludicrously expensive?

No, it’s quite affordable in fact. In my little $700 portfolio, I own every single stock in the S&P 500 and the NASDAQ. If I get a little extra money to save, I may expand to own every stock in Europe and China as well.

How? A clever little arrangement called an exchange-traded fund, or ETF for short. An ETF is actually a form of mutual fund, where the fund purchases shares in a huge array of stocks, and adjusts what it owns to precisely track the behavior of an entire stock market (such as the S&P 500). Then what you can buy is shares in that mutual fund, which are usually priced somewhere between $100 and $300 each. As the price of stocks in the market rises, the price of shares in the mutual fund rises to match, and you can reap the same capital gains they do.

A major advantage of this arrangement, especially for a typical person who isn’t well-versed in stock markets, is that it requires almost no attention at your end. You can buy into a few ETFs and then leave your money to sit there, knowing that it will grow as long as the overall stock market grows.

But there is an even more important advantage, which is that it maximizes your diversification. I said earlier that you shouldn’t buy a house as an investment, because it’s not at all diversified. What I mean by this is that the price of that house depends only on one thing—that house itself. If the price of that house changes, the full change is reflected immediately in the value of your asset. In fact, if you have 10% down on a mortgage, the full change is reflected ten times over in your net wealth, because you are leveraged 10 to 1.

An ETF is basically the opposite of that. Instead of its price depending on only one thing, it depends on a vast array of things, averaging over the prices of literally hundreds or thousands of different corporations. When some fall, others will rise. On average, as long as the economy continues to grow, they will rise.

The result is that you can get the same average return you would from owning stocks, while dramatically reducing the risk you bear.
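
Here is a toy Monte Carlo of my own construction that illustrates the point (the return distribution is an arbitrary assumption, and for simplicity the stocks are treated as independent, which real stocks are not; that is exactly why market risk never diversifies away):

    import random
    import statistics

    # Toy illustration: averaging over many stocks keeps the mean return
    # but shrinks the spread of outcomes.

    random.seed(1)
    MEAN, SPREAD, TRIALS, INDEX_SIZE = 0.08, 0.30, 2_000, 500

    def stock_return():
        return random.gauss(MEAN, SPREAD)

    single = [stock_return() for _ in range(TRIALS)]
    index = [statistics.fmean(stock_return() for _ in range(INDEX_SIZE))
             for _ in range(TRIALS)]

    for name, returns in (("single stock", single), ("500-stock basket", index)):
        print(name,
              round(statistics.fmean(returns), 3),   # average return: ~0.08 in both cases
              round(statistics.stdev(returns), 3),   # spread: ~0.30 vs ~0.013
              round(sum(r < 0 for r in returns) / TRIALS, 2))  # chance of a loss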

To see how this works, consider the past year’s performance of Apple (AAPL), which has done very well, versus Fitbit (FIT), which has done very poorly, compared with the NASDAQ as a whole, of which they are both part.

AAPL has grown over 50% (40 log points) in the last year; so if you’d bought $1000 of their stock a year ago it would be worth $1500. FIT has fallen over 60% (92 log points) in the same time, so if you’d bought $1000 of their stock instead, it would be worth only $400. That’s the risk you’re taking by buying individual stocks.

Whereas, if you had simply bought a NASDAQ ETF a year ago, your return would be 35%, so that $1000 would be worth $1350.

Of course, that does mean you don’t get as high a return as you would if you had managed to choose the highest-performing stock on that index. But you’re unlikely to be able to do that, as even professional financial forecasters are worse than random chance. So, would you rather take a 50-50 shot between gaining $500 and losing $600, or would you prefer a guaranteed $350?
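
For the curious, here is the arithmetic behind those figures (the returns are the ones quoted above; “log points” just means 100 times the natural log of the gross return):

    import math

    # Log points for the three outcomes above, plus the expected value of the
    # 50-50 gamble versus the certain index return.

    def log_points(gross_return):
        return 100 * math.log(gross_return)

    print(round(log_points(1.50)))  # AAPL, +50%:        ~41 log points
    print(round(log_points(0.40)))  # FIT,  -60%:        ~-92 log points
    print(round(log_points(1.35)))  # NASDAQ ETF, +35%:  ~30 log points

    print(0.5 * 500 + 0.5 * (-600), "vs", 350)  # -50.0 expected vs 350 for sure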

If higher return is not your only goal, and you want to be socially responsible in your investments, there are ETFs for that too. Instead of buying the whole stock market, these funds buy only a section of the market that is associated with some social benefit, such as lower carbon emissions or better representation of women in management. On average, you can expect a slightly lower return this way; but you are also helping to make a better world. And still your average return is generally going to be better than it would be if you tried to pick individual stocks yourself. In fact, certain classes of socially-responsible funds—particularly green tech and women’s representation—actually perform better than conventional ETFs, probably because most investors undervalue renewable energy and, well, also undervalue women. Companies led by women CEOs perform better and trade at lower prices; why would you not want to buy them?

In fact ETFs are not literally guaranteed—the market as a whole does move up and down, so it is possible to lose money even by buying ETFs. But because the risk is so much lower, your odds of losing money are considerably reduced. And on average, an ETF will, by construction, perform exactly as well as the average performance of a randomly-chosen stock from that market.

Indeed, I am quite convinced that most people don’t take enough risk on their investment portfolios, because they confuse two very different types of risk.

The kind you should be worried about is idiosyncratic risk, which is risk tied to a particular investment—the risk of having chosen the Fitbit instead of Apple. But a lot of the time people seem to be avoiding market risk, which is the risk tied to changes in the market as a whole. Avoiding market risk does reduce your chances of losing money, but it does so at the cost of reducing your chances of making money even more.

Idiosyncratic risk is basically all downside. Yeah, you could get lucky; but you could just as well get unlucky. Far better if you could somehow average over that risk and get the average return. But with diversification, that is exactly what you can do. Then you are left only with market risk, which is the kind of risk that is directly tied to higher average returns.

Young people should especially be willing to take more risk in their portfolios. As you get closer to retirement, it becomes important to have more certainty about how much money will really be available to you once you retire. But if retirement is still 30 years away, the thing you should care most about is maximizing your average return. That means taking on a lot of market risk, which is then less risky overall if you diversify away the idiosyncratic risk.

I hope I have now convinced you to avoid buying individual stocks. For most people most of the time, this is the advice you need to hear. Don’t try to forecast the market, don’t try to outperform the indexes; just buy and hold some ETFs and leave your money alone to grow.

But if you really must buy individual stocks, either because you think you are savvy enough to beat the forecasters or because you enjoy the gamble, here’s some additional advice I have for you.

My first piece of advice is that you should still buy ETFs. Even if you’re willing to risk some of your wealth on greater gambles, don’t risk all of it that way.

My second piece of advice is to buy primarily large, well-established companies (like Apple or Microsoft or Ford or General Electric). Their stocks certainly do rise and fall, but they are unlikely to completely crash and burn the way that young companies like Fitbit can.

My third piece of advice is to watch the price-earnings ratio (P/E for short). Roughly speaking, this is the number of years it would take for the profits of this corporation to pay off the value of its stock. If they pay most of their profits in dividends, it is approximately how many years you’d need to hold the stock in order to get as much in dividends as you paid for the shares.

Do you want P/E to be large or small? You want it to be small. This is called value investing, but it really should just be called “investing”. The alternatives to value investing are actually not investment but speculation and arbitrage. If you are actually investing, you are buying into companies that are currently undervalued; you want them to be cheap.

Of course, it is not always easy to tell whether a company is undervalued. A common rule-of-thumb is that you should aim for a P/E around 20 (20 years to pay off means about 5% return in dividends); if the P/E is below 10, it’s a fantastic deal, and if it is above 30, it might not be worth the price. But reality is of course more complicated than this. You don’t actually care about current earnings, you care about future earnings, and it could be that a company which is earning very little now will earn more later, or vice-versa. The more you can learn about a company, the better judgment you can make about their future profitability; this is another reason why it makes sense to buy large, well-known companies rather than tiny startups.
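
A small sketch of that rule of thumb (the thresholds are just the rough ones quoted above, and a real judgment would look at expected future earnings, not current ones):

    # P/E rule of thumb: the earnings yield is roughly 1 / (P/E).

    def earnings_yield(pe_ratio):
        return 1 / pe_ratio

    def rough_verdict(pe_ratio):
        if pe_ratio < 10:
            return "fantastic deal"
        if pe_ratio <= 30:
            return "reasonable"
        return "might not be worth the price"

    for pe in (8, 20, 35):
        print(pe, f"{earnings_yield(pe):.1%}", rough_verdict(pe))  # 20 -> 5.0% reasonable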

My final piece of advice is not to trade too frequently. Especially with something like Robinhood where trades are instant and free, it can be tempting to try to ride every little ripple in the market. Up 0.5%? Sell! Down 0.3%? Buy! And yes, in principle, if you could perfectly forecast every such fluctuation, this would be optimal—and make you an almost obscene amount of money. But you can’t. We know you can’t. You need to remember that you can’t. You should only trade if one of two things happens: Either your situation changes, or the company’s situation changes. If you need the money, sell, to get the money. If you have extra savings, buy, to give those savings a good return. If something bad happened to the company and their profits are going to fall, sell. If something good happened to the company and their profits are going to rise, buy. Otherwise, hold. In the long run, those who hold stocks longer are better off.

Argumentum ab scientia is not argumentum baculo: The difference between authority and expertise

May 7, JDN 2457881

Americans are, on the whole, suspicious of authority. This is a very good thing; it shields us against authoritarianism. But it comes with a major downside, which is a tendency to forget the distinction between authority and expertise.

Argument from authority is an informal fallacy, argumentum baculo. The fact that something was said by the Pope, or the President, or the General Secretary of the UN, doesn’t make it true. (Aside: You’re probably more familiar with the phrase argumentum ad baculum, which is terrible Latin. That would mean “argument toward a stick”, when clearly the intended meaning was “argument by means of a stick”, which is argumentum baculo.)

But argument from expertise, argumentum ab scientia, is something quite different. The world is much too complicated for any one person to know everything about everything, so we have no choice but to specialize our knowledge, each of us becoming an expert in only a few things. So if you are not an expert in a subject, when someone who is an expert in that subject tells you something about that subject, you should probably believe them.

You should especially be prepared to believe them when the entire community of experts is in consensus or near-consensus on a topic. The scientific consensus on climate change is absolutely overwhelming. Is this a reason to believe in climate change? You’re damn right it is. Unless you have years of education and experience in understanding climate models and atmospheric data, you have no basis for challenging the expert consensus on this issue.

This confusion has created a deep current of anti-intellectualism in our culture, as Isaac Asimov famously recognized:

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge.”

This is also important to understand if you have heterodox views on any scientific topic. The fact that the whole field disagrees with you does not prove that you are wrong—but it does make it quite likely that you are wrong. Cranks often want to compare themselves to Galileo or Einstein, but here’s the thing: Galileo and Einstein didn’t act like cranks. They didn’t expect the scientific community to respect their ideas before they had gathered compelling evidence in their favor.

When behavioral economists found that neoclassical models of human behavior didn’t stand up to scrutiny, did they shout from the rooftops that economics is all a lie? No, they published their research in peer-reviewed journals, and talked with economists about the implications of their results. There may have been times when they felt ignored or disrespected by the mainstream, but they pressed on, because the data was on their side. And ultimately, the mainstream gave in: Daniel Kahneman won the Nobel Prize in Economics.

Experts are not always right, that is true. But they are usually right, and if you think they are wrong you’d better have a good reason to think so. The best reasons are the sort that come about when you yourself have spent the time and effort to become an expert, able to challenge the consensus on its own terms.

Admittedly, that is a very difficult thing to do—and more difficult than it should be. I have seen firsthand how difficult and painful the slow grind toward a PhD can be, and how many obstacles will get thrown in your way, ranging from nepotism and interdepartmental politics, to discrimination against women and minorities, to mismatches of interest between students and faculty, all the way to illness, mental health problems, and the slings and arrows of outrageous fortune in general. If you have particularly heterodox ideas, you may face particularly harsh barriers, and sometimes it behooves you to hold your tongue and toe the line awhile.

But this is no excuse not to gain expertise. Even if academia itself is not available to you, we live in an age of unprecedented availability of information—it’s not called the Information Age for nothing. A sufficiently talented and dedicated autodidact can challenge the mainstream, if their ideas are truly good enough. (Perhaps the best example of this is the mathematician savant Srinivasa Ramanujan. But he’s… something else. I think he is about as far from the average genius as the average genius is from the average person.) No, that won’t be easy either. But if you are really serious about advancing human understanding rather than just rooting for your political team (read: tribe), you should be prepared to either take up the academic route or attack it as an autodidact from the outside.

In fact, most scientific fields are actually quite good about admitting what they don’t know. A total consensus that turns out to be wrong is actually a very rare phenomenon; much more common is a clash of multiple competing paradigms where one ultimately wins out, or they end up replaced by a totally new paradigm or some sort of synthesis. In almost all cases, the new paradigm wins not because it becomes fashionable or the ancien regime dies out (as Planck cynically claimed) but because overwhelming evidence is observed in its favor, often in the form of explaining some phenomenon that was previously impossible to understand. If your heterodox theory doesn’t do that, then it probably won’t win, because it doesn’t deserve to.

(Right now you might think of challenging me: Does my heterodox theory do that? Does the tribal paradigm explain things that either total selfishness or total altruism cannot? I think it’s pretty obvious that it does. I mean, you are familiar with a little thing called “racism”, aren’t you? There is no explanation for racism in neoclassical economics; to understand it at all you have to just impose it as an arbitrary term on the utility function. But at that point, why not throw in whatever you please? Maybe some people enjoy bashing their heads against walls, and other people take great pleasure in the taste of arsenic. Why would this particular self- (not to mention other-) destroying behavior be universal to all human societies?)

In practice, I think most people who challenge the mainstream consensus aren’t genuinely interested in finding out the truth—certainly not enough to actually go through the work of doing it. It’s a pattern you can see in a wide range of fringe views: Anti-vaxxers, 9/11 truthers, climate denialists, they all think the same way. The mainstream disagrees with my preconceived ideology, therefore the mainstream is some kind of global conspiracy to deceive us. The overwhelming evidence that vaccination is safe and (wildly) cost-effective, 9/11 was indeed perpetrated by Al Qaeda and neither planned nor anticipated by anyone in the US government, and the global climate is being changed by human greenhouse gas emissions—these things simply don’t matter to them, because it was never really about the truth. They knew the answer before they asked the question. Because their identity is wrapped up in that political ideology, they know it couldn’t possibly be otherwise, and no amount of evidence will change their mind.

How do we reach such people? That, I don’t know. I wish I did. But I can say this much: We can stop taking them seriously when they say that the overwhelming scientific consensus against them is just another “appeal to authority”. It’s not. It never was. It’s an argument from expertise—there are people who know this a lot better than you, and they think you’re wrong, so you’re probably wrong.