What would a new macroeconomics look like?

Dec 9 JDN 2458462

In previous posts I have extensively criticized the current paradigm of macroeconomics. But it’s always easier to tear the old edifice down than to build a better one in its place. So in this post I thought I’d try to be more constructive: What sort of new directions could macroeconomics take?

The most important change we need to make is to abandon the assumption of dynamic optimization. This will be a very hard sell, as most macroeconomists have become convinced that the Lucas Critique means we need to always base everything on the dynamic optimization of a single representative agent. I don’t think this was actually what Lucas meant (though maybe we should ask him; he’s still at Chicago), and I certainly don’t think it is what he should have meant. He had a legitimate point about the way macroeconomics was operating at that time: It was ignoring the feedback loops that occur when we start trying to change policies.

Goodhart’s Law is probably a better formulation: Once you make an indicator into a target, you make it less effective as an indicator. So while inflation does seem to be negatively correlated with unemployment, that doesn’t mean we should try to increase inflation to extreme levels in order to get rid of unemployment; sooner or later the economy is going to adapt and we’ll just have both inflation and unemployment at the same time. (Campbell’s Law provides a specific example that I wish more people in the US understood: Test scores would be a good measure of education if we didn’t use them to target educational resources.)
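To see why that adaptation happens, it helps to write down the textbook expectations-augmented Phillips curve (standard notation, nothing specific to this post):

```latex
% Expectations-augmented Phillips curve (textbook form):
\[
  \pi_t = \pi_t^{e} - \beta\,(u_t - u^{*}), \qquad \beta > 0
\]
% \pi_t: inflation; \pi_t^e: expected inflation;
% u_t: unemployment; u^*: the "natural" rate.
```

The negative correlation between inflation and unemployment only holds while expected inflation lags behind actual inflation. Once policymakers systematically target the relationship, expectations catch up, so that the expected and actual inflation rates converge and unemployment drifts back toward its natural rate no matter how high inflation goes: high inflation and high unemployment at the same time.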

The reason we must get rid of dynamic optimization is quite simple: No one behaves that way.

It’s often computationally intractable even in our wildly oversimplified models that experts spend years working on; now you’re imagining that everyone does this constantly?

The most fundamental part of almost every DSGE model is the Euler equation; this equation comes directly from the dynamic optimization. It’s supposed to predict how people will choose to spend and save based upon their plans for an infinite sequence of future income and spending—and if this sounds utterly impossible, that’s because it is. Euler equations don’t fit the data at all, and even extreme attempts to save them by adding a proliferation of additional terms have failed. (It reminds me very much of the epicycles that astronomers used to add to the geocentric model of the universe to try to squeeze in weird results like the retrograde motion of Mars, before they had the heliocentric model.)
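For readers who haven’t seen one, here is the consumption Euler equation in its simplest textbook form (CRRA utility, standard notation; individual DSGE papers dress it up much further):

```latex
% Consumption Euler equation under CRRA utility (textbook form):
\[
  c_t^{-\gamma} \;=\; \beta\, E_t\!\left[(1 + r_{t+1})\, c_{t+1}^{-\gamma}\right]
\]
% c_t: consumption; \gamma: coefficient of relative risk aversion;
% \beta: discount factor; r_{t+1}: real return on saving.
```

The household is assumed to satisfy this at every date, forever, given correct expectations over the entire future path of income and interest rates; that is the impossible forecasting problem described above.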

We should instead start over: How do people actually choose their spending? Well, first of all, it’s not completely rational. But it’s also not totally random. People spend on necessities before luxuries; they try to live within their means; they shop for bargains. There is a great deal of data from behavioral economics that could be brought to bear on understanding the actual heuristics people use in deciding how to spend and save. There have already been successful policy interventions using this knowledge, like Save More Tomorrow.

The best thing about this is that it should make our models simpler. We’re no longer asking each agent in the model to solve an impossible problem. However people actually make these decisions, we know it can be done, because it is being done. Most people don’t really think that hard, even when they probably should; so the heuristics really can’t be that complicated. My guess is that you can get a good fit—certainly better than an Euler equation—just by assuming that people set a target for how much they’re going to save (which is also probably pretty small for most people), and then spend the rest.
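Here is a minimal sketch of what such a heuristic consumption rule might look like, just to show how little machinery it requires; the 5% saving target, the necessities floor, and the income path are all made-up illustrative numbers, not estimates of anything:

```python
# A toy "target saver" rule: set a saving target, spend the rest, and dip
# into savings only when income can't cover basic necessities.
# The 5% target, the $1,200 necessities floor, and the income path are
# all made-up illustrative numbers.

def consume(income, savings, saving_target_rate=0.05, necessities=1200.0):
    """Return (consumption, new_savings) for one period."""
    planned_saving = saving_target_rate * income
    consumption = income - planned_saving
    if consumption < necessities:
        # Cover necessities first, drawing down savings if there are any.
        shortfall = necessities - consumption
        draw = min(shortfall, savings + planned_saving)
        consumption += draw
        planned_saving -= draw
    return consumption, savings + planned_saving

savings = 0.0
for month, income in enumerate([2000, 2000, 900, 2500, 2000], start=1):
    spent, savings = consume(income, savings)
    print(f"Month {month}: income {income:7.2f}, spent {spent:7.2f}, savings {savings:7.2f}")
```

Nothing in this rule requires forecasting an infinite future; it only needs a target and a floor, which is much closer to how people actually describe their own budgeting.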

The second most important thing we need to add is inequality. Some people are much richer than others; this is a very important fact about economics that we need to understand. Yet it has taken the economics profession decades to figure this out, and even now I’m only aware of one class of macroeconomic models that seriously involves inequality: the Heterogeneous Agent New Keynesian (HANK) models, which didn’t emerge until the last few years (the earliest publication I can find is 2016!). And these models are monsters; they are almost always computationally intractable and have a huge number of parameters to estimate.

Understanding inequality will require more parameters, that much is true. But if we abandon dynamic optimization, we won’t need as many as the HANK models have, and most of the new parameters are actually things we can observe, like the distribution of wages and years of schooling.

Observability of parameters is a big deal. Another problem with the way the Lucas Critique has been used is that we’ve been told we need to be using “deep structural parameters” like the intertemporal elasticity of substitution and the coefficient of relative risk aversion—but we have no idea what those actually are. We can’t observe them, and all of our attempts to measure them indirectly have yielded inconclusive or even inconsistent results. This is probably because these parameters are based on assumptions about human rationality that are simply not realistic. Most people probably don’t have a well-defined intertemporal elasticity of substitution, because their day-to-day decisions simply aren’t consistent enough over time for that to make sense. Sometimes they eat salad and exercise; sometimes they loaf on the couch and drink milkshakes. Likewise with risk aversion: many moons ago I wrote about how people will buy both insurance and lottery tickets, which no one with a consistent coefficient of relative risk aversion would ever do.
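Here is a quick numerical illustration of the insurance-and-lottery problem under plain expected utility with a constant coefficient of relative risk aversion; the wealth level, prices, and probabilities are made-up illustrative numbers:

```python
# Under constant relative risk aversion (CRRA) and plain expected utility,
# no single curvature parameter rationalizes buying both an actuarially
# unfair lottery ticket and actuarially unfair insurance.
# All wealth levels, prices, and probabilities below are illustrative.
import math

def crra_utility(c, gamma):
    """CRRA utility; gamma > 0 is risk-averse, gamma < 0 is risk-loving."""
    if gamma == 1:
        return math.log(c)
    return c ** (1 - gamma) / (1 - gamma)

def expected_utility(outcomes, gamma):
    """outcomes: list of (probability, final wealth) pairs."""
    return sum(p * crra_utility(w, gamma) for p, w in outcomes)

wealth = 50_000.0

# Lottery: $2 ticket, 1-in-1,000,000 chance of $1,000,000 (worth ~$1 on average).
lottery_buy = [(1e-6, wealth - 2 + 1_000_000), (1 - 1e-6, wealth - 2)]
lottery_skip = [(1.0, wealth)]

# Insurance: 1% chance of a $20,000 loss; premium $250 (expected loss only $200).
insurance_buy = [(1.0, wealth - 250)]
insurance_skip = [(0.01, wealth - 20_000), (0.99, wealth)]

for gamma in (2.0, -1.0):  # risk-averse vs. risk-loving
    lottery = expected_utility(lottery_buy, gamma) > expected_utility(lottery_skip, gamma)
    insurance = expected_utility(insurance_buy, gamma) > expected_utility(insurance_skip, gamma)
    print(f"gamma = {gamma:+.1f}: buys lottery? {lottery}; buys insurance? {insurance}")
```

With these (unfair) terms, the risk-averse parameterization buys the insurance but never the lottery, and the risk-loving one does the reverse. Someone who buys both is not maximizing expected utility with any single CRRA parameter, which is part of why attempts to measure “the” coefficient keep giving inconsistent answers.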

So if we are interested in deep structural parameters, we need to base those parameters on behavioral experiments so that we can understand actual human behavior. And frankly I don’t think we need deep structural parameters; I think this is a form of greedy reductionism, where we assume that the way to understand something is always to look at smaller pieces. Sometimes the whole is more than the sum of its parts. Economists obviously feel a lot of envy for physics; but they don’t seem to understand that aerodynamics would never have (ahem) gotten off the ground if we had first waited for an exact quantum mechanical solution of the oxygen atom (which we still don’t have, by the way). Macroeconomics may not actually need “microfoundations” in the strong sense that most economists intend; it needs to be consistent with small-scale behavior, but it doesn’t need to be derived from small-scale behavior.

This means that the new paradigm in macroeconomics does not need to be computationally intractable. Using heuristics instead of dynamic optimization and worrying less about microfoundations will make the models simpler; adding inequality need not make them so much more complicated.

Fighting the zero-sum paradigm

Dec 2 JDN 2458455

It should be obvious at this point that there are deep, perhaps even fundamental, divides between the attitudes and beliefs of different political factions. It can be very difficult to even understand, much less sympathize with, the concerns of people who are racist, misogynistic, homophobic, xenophobic, and authoritarian.

But at the end of the day we still have to live in the same country as these people, so we’d better try to understand how they think. And maybe, just maybe, that understanding will help us to change them.

There is one fundamental belief system that I believe underlies almost all forms of extremism. Right now right-wing extremism is the major threat to global democracy, but left-wing extremism subscribes to the same core paradigm (consistent with Horseshoe Theory).

I think the best term for this is the zero-sum paradigm. The idea is quite simple: There is a certain amount of valuable “stuff” (money, goods, land, status, happiness) in the world, and the only political question is who gets how much.

Thus, any improvement in anyone’s life must, necessarily, come at someone else’s expense. If I become richer, you become poorer. If I become stronger, you become weaker. Any improvement in my standard of living is a threat to your status.

If this belief were true, it would justify, or at least rationalize, all sorts of destructive behavior: Any harm I can inflict upon someone else will yield a benefit for me, by some fundamental conservation law of the universe.

Viewed in this light, beliefs like patriarchy and White supremacy suddenly become much more comprehensible: Why would you want to spend so much effort hurting women and Black people? Because, by the fundamental law of zero-sum, any harm to women is a benefit to men, and any harm to Black people is a benefit to White people. The world is made of “teams”, and you are fighting for your own against all the others.

And I can even see why such an attitude is seductive: It’s simple and easy to understand. And there are many circumstances where it can be approximately true.

When you are bargaining with your boss over a wage, one dollar more for you is one dollar less for your boss.

When your factory outsources production to China, one more job for China is one less job for you.

When we vote for President, one more vote for the Democrats is one less vote for the Republicans.

But of course the world is not actually zero-sum. Both you and your boss would be worse off if your job were to disappear; they need your work and you need their money. For every job that is outsourced to China, another job is created in the United States. And democracy itself is such a profound public good that it basically overwhelms all others.

In fact, it is precisely when a system is running well that the zero-sum paradigm becomes closest to true. In the space of all possible allocations, it is the efficient ones that behave in something like a zero-sum way, because when the system is efficient, we are already producing as much as we can.
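Put a bit more formally (this is just my own way of stating the point, not a result from any particular model):

```latex
% If total output is at capacity \bar{Y}, the allocations must satisfy
\[
  \sum_{i=1}^{n} c_i = \bar{Y},
\]
% so giving person j more means taking exactly that much from everyone else:
\[
  \Delta c_j = -\sum_{i \neq j} \Delta c_i .
\]
% Below capacity (\sum_i c_i < \bar{Y}), a gain for one person
% need not come at anyone else's expense.
```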

This may be part of why populist extremism always seems to assert itself during periods of global prosperity, as in the 1920s and today: It is precisely when the world is running at its full capacity that it feels most like someone else’s gain must come at your loss.

Yet if we live according to the zero-sum paradigm, we will rapidly destroy the prosperity that made that paradigm seem plausible. A trade war between the US and China would put millions out of work in both countries. A real war with conventional weapons would kill millions. A nuclear war would kill billions.

This is what we must convey: We must show people just how good things are right now.

This is not an easy task; when people want to believe the world is falling apart, they can very easily find excuses to do so. You can point to the statistics showing a global decline in homicide, but one dramatic shooting on the TV news will wipe that all away. You can show the worldwide rise in real incomes across the board, but that won’t console someone who just lost their job and blames outsourcing or immigrants.

Indeed, many people will be offended by the attempt—the mere suggestion that the world is actually in very good shape and overall getting better will be perceived as an attempt to deny or dismiss the problems and injustices that still exist.

I encounter this especially from the left: Simply pointing out the objective fact that the wealth gap between White and Black households is slowly closing is often taken as a claim that racism no longer exists or doesn’t matter. Congratulating the meteoric rise in women’s empowerment around the world is often paradoxically viewed as dismissing feminism instead of lauding it.

I think the best case against progress can be made with regard to global climate change: Carbon emissions are not falling nearly fast enough, and the world is getting closer to the brink of truly catastrophic ecological damage. Yet even here the zero-sum paradigm is clearly holding us back; workers in fossil-fuel industries think that the only way to reduce carbon emissions is to make their families suffer, but that’s simply not true. We can make them better off too.

Talking about injustice feels righteous. Talking about progress doesn’t. Yet I think what the world needs most right now—the one thing that might actually pull us back from the brink of fascism or even war—is people talking about progress.

If people think that the world is full of failure and suffering and injustice, they will want to tear down the whole system and start over with something else. In a world that is largely democratic, that very likely means switching to authoritarianism. If people think that this is as bad as it gets, they will be willing to accept or even instigate violence in order to change to almost anything else.

But if people realize that in fact the world is full of success and prosperity and progress, that things are right now quite literally better in almost every way for almost every person in almost every country than they were a hundred—or even fifty—years ago, they will not be so eager to tear the system down and start anew. Centrism is often mocked (partly because it is confused with false equivalence), but in a world where life is improving this quickly for this many people, “stay the course” sounds awfully attractive to me.

That doesn’t mean we should ignore the real problems and injustices that still exist, of course. There is still a great deal of progress left to be made. But I believe we are more likely to make progress if we acknowledge and seek to continue the progress we have already made, than if we allow ourselves to fall into despair as if that progress did not exist.

If you really want grad students to have better mental health, remove all the high-stakes checkpoints

Post 260: Oct 14 JDN 2458406

A study was recently published in Nature Biotechnology showing clear evidence of a mental health crisis among graduate students (no, I don’t know why they picked the biotechnology imprint—I guess it wasn’t good enough for Nature proper?). This is only the most recent of several studies showing exceptionally high rates of mental health issues among graduate students.

I’ve seen universities do a lot of public hand-wringing and lip service about this issue—but I haven’t seen any that were seriously willing to do what it takes to actually solve the problem.

I think this fact became clearest to me when I was required to fill out an official “Individual Development Plan” form as a prerequisite for my advancement to candidacy, which included a question asking, “What are you doing to support your own mental health and work/life balance?”

The irony here is absolutely excruciating, because advancement to candidacy has been overwhelmingly my leading source of mental health stress for at least the last six months. And it is only one of several different high-stakes checkpoints that grad students are expected to complete, always threatened with defunding or outright expulsion from the graduate program if the checkpoint is not met by a certain arbitrary deadline.

The first of these was the qualifying exams. Then comes advancement to candidacy. Then I have to complete and defend a second-year paper, then a third-year paper. Finally I have to complete and defend a dissertation, and then go onto the job market and go through a gauntlet of applications and interviews. I can’t think of any other time in my life when I was under this much academic and career pressure this consistently—even finishing high school and applying to college wasn’t like this.

If universities really wanted to improve my mental health, they would find a way to get rid of all that.

Granted, a single university does not have total control over all this: There are coordination problems between universities regarding qualifying exams, advancement, and dissertation requirements. One university that unilaterally tried to remove all these would rapidly lose prestige, as it would not be regarded as “rigorous” to reduce the pressure on your grad students. But that itself is precisely the problem—we have equated “rigor” with pressuring grad students until they are on the verge of emotional collapse. Universities don’t seem to know how to make graduate school difficult in the ways that would actually encourage excellence in research and teaching; they simply know how to make it difficult in ways that destroy their students psychologically.

The job market is even more complicated; in the current funding environment, it would be prohibitively expensive to open up enough faculty positions to actually accept even half of all graduating PhDs to tenure-track jobs. Probably the best answer here is to refocus graduate programs on supporting employment outside academia, recognizing both that PhD-level skills are valuable in many workplaces and that not every grad student really wants to become a professor.

But there are clearly ways that universities could mitigate these effects, and they don’t seem genuinely interested in doing so. They could remove the advancement exam, for example; you could simply advance to candidacy as a formality when your advisor decides you are ready, never needing to actually perform a high-stakes presentation before a committee—because what the hell does that accomplish anyway? Speaking of advisors, they could have a formalized matching process that starts with interviewing several different professors and being matched to the one that best fits your goals and interests, instead of expecting you to reach out on your own and hope for the best. They could have you write a dissertation, but not perform a “dissertation defense”—because, again, what can they possibly learn from forcing you to present in a high-stakes environment that they couldn’t have learned from reading your paper and talking with you about it over several months?

They could adjust or even remove funding deadlines—especially for international students. Here at UCI at least, once you are accepted to the program, you are ostensibly guaranteed funding for as long as you maintain reasonable academic progress—but then they define “reasonable progress” in such a way that you have to form an advancement committee, fill out forms, write a paper, and present before a committee all by a certain date or your funding is in jeopardy. Residents of California (which includes all US students who successfully established residency after a full year) are given more time if we need it—but international students aren’t. How is that fair?

The unwillingness of universities to take such actions clearly shows that their commitment to improving students’ mental health is paper-thin. They are only willing to help their students improve their work-life balance as long as it doesn’t require changing anything about the graduate program. They will provide us with counseling services and free yoga classes, but they won’t seriously reduce the pressure they put on us at every step of the way.

I understand that universities are concerned about protecting their prestige, but I ask them this: Does this really improve the quality of your research or teaching output? Do you actually graduate better students by selecting only the ones who can survive being emotionally crushed? Do all these arbitrary high-stakes performances actually result in greater advancement of human knowledge?

Or is it perhaps that you yourselves were put through such hazing rituals years ago, and now your cognitive dissonance won’t let you admit that it was all for naught? “This must be worth doing, or else they wouldn’t have put me through so much suffering!” Are you trying to transfer your own psychological pain onto your students, lest you be forced to face it yourself?

MSRP is tacit collusion

Oct 7 JDN 2458399

It’s been a little while since I’ve done a really straightforward economic post. It feels good to get back to that.

You are no doubt familiar with the “Manufacturer’s Suggested Retail Price” or MSRP. It can be found on everything from books to dishwashers to video games.

The MSRP is a very simple concept: The manufacturer suggests that all retailers sell the product (at least the initial run) at precisely this price.

Why would they want to do that? There is basically only one possible reason: They are trying to sustain tacit collusion.

The game theory of this is rather subtle: It requires that both manufacturers and retailers engage in long-term relationships with one another, and can pick and choose who to work with based on the history of past behavior. Both of these conditions hold in most real-world situations—indeed, the fact that they don’t hold very well in the agriculture industry is probably why we don’t see MSRP on produce.

If pricing were decided by random matching with no long-term relationships or past history, MSRP would be useless. Each firm would have little choice but to set their own optimal price, probably just slightly over their own marginal cost. Even if the manufacturer suggested an MSRP, retailers would promptly and thoroughly ignore it.

This is because the one-shot Bertrand pricing game has a unique Nash equilibrium, at pricing just above marginal cost. The basic argument is as follows: If I price cheaper than you, I can claim the whole market. As long as it’s profitable for me to do that, I will. The only time it’s not profitable for me to undercut you in this way is if we are both charging just slightly above marginal cost—so that is what we shall do, in Nash equilibrium. Human beings don’t always play according to the Nash equilibrium, but for-profit corporations do so quite consistently. Humans have limited attention and moral values; corporations have accounting departments and a fanatical devotion to the One True Profit.
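Here is a toy simulation of that undercutting logic, with two identical firms taking turns best-responding on a penny price grid; the cost, demand, and starting price are arbitrary illustrative numbers:

```python
# Toy one-shot Bertrand logic via alternating best responses:
# whoever is undercut loses the whole market, so each firm keeps shaving
# a penny off the rival's price until price hits marginal cost.
# All numbers are illustrative.

MARGINAL_COST = 10.00   # cost per unit
TICK = 0.01             # smallest price increment
MARKET_DEMAND = 1000    # units sold to the cheapest firm (inelastic, for simplicity)

def best_response(rival_price):
    """Undercut by one tick if that is still profitable; otherwise stay just above cost."""
    undercut = round(rival_price - TICK, 2)
    if undercut > MARGINAL_COST:
        return undercut
    return round(MARGINAL_COST + TICK, 2)  # can't profitably go lower than this

price_a, price_b = 59.99, 59.99   # start at the would-be collusive (MSRP-like) price
for step in range(6000):
    new_a = best_response(price_b)
    new_b = best_response(new_a)
    if (new_a, new_b) == (price_a, price_b):
        break
    price_a, price_b = new_a, new_b

profit_each = (price_a - MARGINAL_COST) * MARKET_DEMAND / 2
print(f"Converged after {step} rounds: both firms charge ${price_a:.2f}, "
      f"earning ${profit_each:.2f} each.")
```

The only resting point is a price one tick above marginal cost; starting both firms at a nice round $59.99 does nothing to hold prices up once each firm best-responds on its own.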

But the iterated Bertrand pricing game is quite different. If instead of making only one pricing decision, we make many pricing decisions over time, always with a high probability of encountering the same buyers and sellers again in the future, then I may not want to undercut your price, for fear of triggering a price war that will hurt both of our firms.

Much like how the Iterated Prisoner’s Dilemma can sustain cooperation in Nash equilibrium while the one-shot Prisoner’s Dilemma cannot, the iterated Bertrand game can sustain collusion as a Nash equilibrium.
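The textbook condition for sustaining this (under “grim trigger” strategies, where any undercutting triggers permanent reversion to marginal-cost pricing) is worth writing out; this is the standard repeated-game result, not anything specific to MSRP:

```latex
% n firms split the monopoly profit \pi_m each period while colluding;
% a deviator grabs (almost) all of \pi_m once, then earns zero forever.
% Collusion is an equilibrium of the repeated game whenever
\[
  \frac{1}{1-\delta}\cdot\frac{\pi_m}{n} \;\ge\; \pi_m
  \quad\Longleftrightarrow\quad
  \delta \;\ge\; 1 - \frac{1}{n},
\]
% where \delta is the weight firms place on future periods (discounting
% times the probability of meeting again). With two firms, \delta \ge 1/2.
```

So long as firms expect to keep dealing with each other, even fairly impatient ones can sustain the collusive price.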

There is in fact a vast number of possible equilibria in the iterated Bertrand game. If prices were infinitely divisible, there would be an infinite number of equilibria. In reality, there are hundreds or thousands of equilibria, depending on how finely divisible the price may be.

This makes the iterated Bertrand game a coordination game: there are many possible equilibria, and our task is to figure out which one to coordinate on.

If we had perfect information, we could deduce what the monopoly price would be, and then all choose the monopoly price; this would be what we call “payoff dominant”, and it’s often what people actually try to choose in real-world coordination games.

But in reality, the monopoly price is a subtle and complicated thing, and might not even be the same between different retailers. So if we each try to compute a monopoly price, we may end up with different results, and then we could trigger a price war and end up driving all of our profits down. If only there were some way to communicate with one another, and say what price we all want to set?

Ah, but there is: The MSRP. Most other forms of price communication are illegal: We certainly couldn’t send each other emails and say “Let’s all charge $59.99, okay?” (When banks tried to do that with the LIBOR, it was the largest white-collar crime in history.) But for some reason economists (particularly, I note, the supposed “free market” believers of the University of Chicago) have convinced antitrust courts that MSRP is somehow different. Yet it’s obviously hardly different at all: You’ve just made the communication one-way from manufacturers to retailers, which makes it a little less reliable, but otherwise exactly the same thing.

There are all sorts of subtler arguments about how MSRP is justifiable, but as far as I can tell they all fall flat. If you’re worried about retailers not promoting your product enough, enter into a contract requiring them to promote. Proposing a suggested price is clearly nothing but an attempt to coordinate tacit—frankly not even that tacit—collusion.

MSRP also probably serves another, equally suspect, function, which is to manipulate consumers using the anchoring heuristic: If the MSRP is $59.99, then when it does go on sale for $49.99 you feel like you are getting a good deal; whereas, if it had just been priced at $49.99 to begin with, you might still have felt that it was too expensive. I see no reason why this sort of crass manipulation of consumers should be protected under the law either, especially when it would be so easy to avoid.

There are all sorts of ways for firms to tacitly collude with one another, and we may not be able to regulate them all. But the MSRP is literally printed on the box. It’s so utterly blatant that we could very easily make it illegal with hardly any effort at all. The fact that we allow such overt price communication makes a mockery of our antitrust law.

The asymmetric impact of housing prices

Jul 22 JDN 2458323

In several previous posts I’ve talked about the international crisis of high housing prices. Today, I want to talk about some features of housing that make high housing prices particularly terrible, in a way that other high prices would not be.

First, there is the fact that some amount of housing is a basic necessity, and houses are not easily divisible. So even if the houses being built are bigger than you need, you still need some kind of house, and you can’t buy half a house; the best you could really do would be to share it with someone else, and that introduces all sorts of other complications.

Second, there is a deep asymmetry here. While rising housing prices definitely hurt people who want to buy houses, they benefit hardly anyone.

If you bought a house for $200,000 and then all housing prices doubled so it would now sell for $400,000, are you richer? You might feel richer. You might even have access to home equity loans that would give you more real liquidity. But are you actually richer?

I contend you are not, because the only way for you to access that wealth would be to sell your home, and then you’d need to buy another home, and that other home would also be twice as expensive. The amount of money you can get for your house may have increased, but the amount of house you can get for your house is exactly the same.

Conversely, suppose that housing prices fell by half, and now that house only sells for $100,000. Are you poorer? You still have your house. Even if your mortgage isn’t paid off, it’s still the same mortgage. Your payments haven’t changed. And once again, the amount of house you can get for your house will remain the same. In fact, if you are willing to accept a deed in lieu of foreclosure (it’s bad for your credit, of course), you can walk away from that underwater house and buy a new one that’s just as good with lower payments than what you are currently making. You may actually be richer because the price of your house fell.

Relative housing prices matter, certainly. If you own a $400,000 house and move to a city where housing prices have fallen to $100,000, you are definitely richer. And if you own a $100,000 house and move to a city where housing prices have risen to $400,000, you are definitely poorer. These two effects necessarily cancel out in the aggregate.

But do absolute housing prices matter for homeowners? It really seems to me that they don’t. The people who care about absolute housing prices are not homeowners; they are people trying to enter the market for the first time.

And this means that lower housing prices are almost always better. If you could buy a house for $1,000, we would live in a paradise where it was basically impossible to be homeless. (When social workers encountered someone who was genuinely homeless, they could just buy them a house then and there.) If every home cost $10 million, those who bought homes before the price surge would be little better off than they are, but the rest of us would live on the streets.
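A small numerical illustration of that asymmetry, measured in “houses” rather than dollars (the prices and the savings figure are just illustrative):

```python
# Measure positions in houses rather than dollars.
# A homeowner who sells at the going price and rebuys an equivalent house
# always ends up with exactly one house, whatever the price level;
# a first-time buyer's savings do not scale with prices.

renter_savings = 50_000.0  # illustrative down-payment fund

for house_price in (200_000, 400_000, 100_000):
    homeowner_houses = 1.0                        # sell, rebuy: still one house
    renter_houses = renter_savings / house_price  # fraction of a house affordable
    print(f"House price ${house_price:>7,}: homeowner holds {homeowner_houses:.2f} houses, "
          f"first-time buyer can afford {renter_houses:.2f} of one")
```

Only the buyer’s line moves: homeowners are roughly indifferent to the absolute price level, while people trying to enter the market are much better off when prices are low.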

Psychologically, people very strongly resist falling housing prices. Even in very weak housing markets, most people will flatly refuse to sell their house for less than they paid for it. As a result, housing prices usually rise with inflation, but don’t usually fall in response to deflation. Rents also display similar rigidity over time. But in reality, lower prices are almost always better for almost everyone.

There is a group of people who are harmed by low housing prices, but it is a very small group of people, most of whom are already disgustingly rich: The real estate industry. Yes, if you build new housing, or flip houses, or buy and sell houses on speculation, you will be harmed by lower housing prices. Of these, literally the only one I care about even slightly is developers; and I only care about developers insofar as they are actually doing their job building housing that people need. If falling prices hurt developers, it would be because the supply of housing was so great that everyone who needs a house could have one.

There is a subtler nuance here, which is that some people may be buying more expensive housing as a speculative saving vehicle, hoping that they can cash out on their house when they retire. To that, I really only have one word of advice: Don’t. Don’t contribute to another speculative housing bubble that could cause another Great Recession. A house is not even a particularly safe investment, because it’s completely undiversified. Buy stocks. Buy all the stocks. Buy a house because you want that house, not because you hope to make money off of it.

And if the price of your house does fall someday? Don’t panic. You may be no worse off, and other people are probably much better off.

Fake skepticism

Jun 3 JDN 2458273

“You trust the mainstream media?” “Wake up, sheeple!” “Don’t listen to what so-called scientists say; do your own research!”

These kinds of statements have become quite ubiquitous lately (though perhaps the attitudes were always there, and we only began to hear them because of the Internet and social media), and are often used to defend the most extreme and bizarre conspiracy theories, from moon-landing denial to flat Earth. The amazing thing about these kinds of statements is that they can be used to defend literally anything, as long as you can find some source with less than 100% credibility that disagrees with it. (And what source has 100% credibility?)

And that, I think, should tell you something. An argument that can prove anything is an argument that proves nothing.

Reversed stupidity is not intelligence. The fact that the mainstream media, or the government, or the pharmaceutical industry, or the oil industry, or even gangsters, fanatics, or terrorists believes something does not make it less likely to be true.

In fact, the vast majority of beliefs held by basically everyone—including the most fanatical extremists—are true. I could list such consensus true beliefs for hours: “The sky is blue.” “2+2=4.” “Ice is colder than fire.”

Even if a belief is characteristic of a specifically evil or corrupt organization, that does not necessarily make it false (though it usually is evidence of falsehood in a Bayesian sense). If only terrible people believe X, then maybe you shouldn’t believe X. But if both good and bad people believe X, the fact that bad people believe X really shouldn’t matter to you.
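To unpack that parenthetical, here is the Bayesian bookkeeping (standard Bayes’ rule, nothing more):

```latex
% Let B = "this particular (bad) organization endorses X".
% If bad organizations are somewhat more likely to endorse false claims,
% so that P(B | X false) > P(B | X true), then Bayes' rule gives
\[
  \frac{P(X\,\text{true} \mid B)}{P(X\,\text{false} \mid B)}
  \;=\;
  \frac{P(B \mid X\,\text{true})}{P(B \mid X\,\text{false})}
  \cdot
  \frac{P(X\,\text{true})}{P(X\,\text{false})}
  \;<\;
  \frac{P(X\,\text{true})}{P(X\,\text{false})}.
\]
% The posterior odds fall, but only by the likelihood ratio, which for
% most mundane claims is close to 1: weak evidence, not a refutation.
```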

People who use this kind of argument often present themselves as being “skeptics”. They imagine that they have seen through the veil of deception that blinds others.

In fact, quite the opposite is the case: This is fake skepticism. These people are not uniquely skeptical; they are uniquely credulous. If you think the Earth is flat because you don’t trust the mainstream scientific community, that means you do trust someone far less credible than the mainstream scientific community.

Real skepticism is difficult. It requires concerted effort and investigation, and typically takes years. To really seriously challenge the expert consensus in a field, you need to become an expert in that field. Ideally, you should get a graduate degree in that field and actually start publishing your heterodox views. Failing that, you should at least be spending hundreds or thousands of hours doing independent research. If you are unwilling or unable to do that, you are not qualified to assess the validity of the expert consensus.

This does not mean the expert consensus is always right—remarkably often, it isn’t. But it means you aren’t allowed to say it’s wrong, because you don’t know enough to assess that.

This is not elitism. This is not an argument from authority. This is a basic respect for the effort and knowledge that experts spend their lives acquiring.

People don’t like being told that they are not as smart as other people—even though, with any variation at all, that’s got to be true for a certain proportion of people. But I’m not even saying experts are smarter than you. I’m saying they know more about their particular field of expertise.

Do you walk up to construction workers on the street and critique how they lay concrete? When you step on an airplane, do you explain to the captain how to read an altimeter? When you hire a plumber, do you insist on using the snake yourself?

Probably not. And why not? Because you know these people have training; they do this for a living. Yeah, well, scientists do this for a living too—and our training is much longer. To be a plumber, you need a high school diploma and an apprenticeship that usually lasts about four years. To be a scientist, you need a PhD, which means four years of college plus an additional five or six years of graduate school.

To be clear, I’m not saying you should listen to experts speaking outside their expertise. Some of the most idiotic, arrogant things ever said by human beings have been said by physicists opining on biology or economists ranting about politics. Even within a field, some people have such narrow expertise that you can’t really trust them even on things that seem related—like macroeconomists with idiotic views on trade, or ecologists who clearly don’t understand evolution.

This is also why one of the great challenges of being a good interdisciplinary scientist is actually obtaining enough expertise in both fields you’re working in; it isn’t literally twice the work (since there is overlap—or you wouldn’t be doing it—and you do specialize in particular interdisciplinary subfields), but it’s definitely more work, and there are definitely a lot of people on each side of the fence who may never take you seriously no matter what you do.

How do you tell who to trust? This is why I keep coming back to the matter of expert consensus. The world is much too complicated for anyone, much less everyone, to understand it all. We must be willing to trust the work of others. The best way we have found to decide which work is trustworthy is by the norms and institutions of the scientific community itself. Since 97% of climatologists say that climate change is caused by humans, they’re probably right. Since 99% of biologists believe humans evolved by natural selection, that’s probably what happened. Since 87% of economists oppose tariffs, tariffs probably aren’t a good idea.

Can we be certain that the consensus is right? No. There is precious little in this universe that we can be certain about. But as in any game of chance, you need to play the best odds, and my money will always be on the scientific consensus.

Are some ideas too ridiculous to bother with?

Apr 22 JDN 2458231

Flat Earth. Young-Earth Creationism. Reptilians. 9/11 “Truth”. Rothschild conspiracies.

There are an astonishing number of ideas that satisfy two apparently-contrary conditions:

  1. They are so obviously ridiculous that even a few minutes of honest, rational consideration of evidence that is almost universally available will immediately refute them;
  2. They are believed by tens or hundreds of millions of otherwise-intelligent people.

Young-Earth Creationism is probably the most alarming, seeing as it grips the minds of some 38% of Americans.

What should we do when faced with such ideas? This is something I’ve struggled with before.

I’ve spent a lot of time and effort trying to actively address and refute them—but I don’t think I’ve even once actually persuaded someone who believes these ideas to change their mind. This doesn’t mean my time and effort were entirely wasted; it’s possible that I managed to convince bystanders, or gained some useful understanding, or simply improved my argumentation skills. But it does seem likely that my time and effort were mostly wasted.

It’s tempting, therefore, to give up entirely, and just let people go on believing whatever nonsense they want to believe. But there’s a rather serious downside to that as well: Thirty-eight percent of Americans.

These people vote. They participate in community decisions. They make choices that affect the rest of our lives. Nearly all of those Creationists are Evangelical Christians—and White Evangelical Christians voted overwhelmingly in favor of Donald Trump. I can’t be sure that changing their minds about the age of the Earth would also change their minds about voting for Trump, but I can say this: If all the Creationists in the US had simply not voted, Hillary Clinton would have won the election.

And let’s not leave the left wing off the hook either. Jill Stein is a 9/11 “Truther”, and pulled a lot of fellow “Truthers” to her cause in the election as well. Had all of Jill Stein’s votes gone to Hillary Clinton instead, again Hillary would have won, even if all the votes for Trump had remained the same. (That said, there is reason to think that if Stein had dropped out, most of those folks wouldn’t have voted at all.)

Therefore, I don’t think it is safe to simply ignore these ridiculous beliefs. We need to do something; the question is what.

We could try to censor them, but first of all that violates basic human rights—which should be a sufficient reason not to do it—and second, it probably wouldn’t even work. Censorship typically leads to radicalization, not assimilation.

We could try to argue against them. Ideally this would be the best option, but it has not shown much effect so far. The kind of person who sincerely believes that the Earth is 6,000 years old (let alone that governments are secretly ruled by reptilian alien invaders) isn’t the kind of person who is highly responsive to evidence and rational argument.

In fact, there is reason to think that these people don’t actually believe what they say the same way that you and I believe things. I’m not saying they’re lying, exactly. They think they believe it; they want to believe it. They believe in believing it. But they don’t actually believe it—not the way that I believe that cyanide is poisonous or the way I believe the sun will rise tomorrow. It isn’t fully integrated into the way that they anticipate outcomes and choose behaviors. It’s more of a free-floating sort of belief, where professing a particular belief allows them to feel good about themselves, or represent their status in a community.

To be clear, it isn’t that these beliefs are unimportant to them; on the contrary, they are in some sense more important. Creationism isn’t really about the age of the Earth; it’s about who you are and where you belong. A conventional belief can be changed by evidence about the world because it is about the world; a belief-in-belief can’t be changed by evidence because it was never really about that.

But if someone’s ridiculous belief is really about their identity, how do we deal with that? I can’t refute an identity. If your identity is tied to a particular social group, maybe they could ostracize you and cause you to lose the identity; but an outsider has no power to do that. (Even then, I strongly suspect that, for instance, most excommunicated Catholics still see themselves as Catholic.) And if it’s a personal identity not tied to a particular group, even that option is unavailable.

Where, then, does that leave us? It would seem that we can’t change their minds—but we also can’t afford not to change their minds. We are caught in a terrible dilemma.

I think there might be a way out. It’s a bit counter-intuitive, but I think what we need to do is stop taking them seriously as beliefs, and start treating them purely as announcements of identity.

So when someone says something like, “The Rothschilds run everything!”, instead of responding as though this were a coherent proposition being asserted, treat it as if someone had announced, “Boo! I hate the Red Sox!” Belief in the Rothschild conspiracies isn’t a well-defined set of propositions about the world; it’s an assertion of membership in a particular sort of political sect that is vaguely left-wing and anarchist. You don’t really think the Rothschilds rule everything. You just want to express your (quite justifiable) anger at how our current political system privileges the rich.

Likewise, when someone says they think the Earth is 6,000 years old, you could try to present the overwhelming scientific evidence that they are wrong—but it might be more productive, and it is certainly easier, to just think of this as a funny way of saying “I’m an Evangelical Christian”.

Will this eliminate the ridiculous beliefs? Not immediately. But it might ultimately do so, in the following way: By openly acknowledging the belief-in-belief as a signaling mechanism, we can open opportunities for people to develop new, less pathological methods of signaling. (Instead of saying you think the Earth is 6,000 years old, maybe you could wear a funny hat, like Orthodox Jews do. Funny hats don’t hurt anybody. Everyone loves funny hats.) People will always want to signal their identity, and there are fundamental reasons why such signals will typically be costly for those who use them; but we can try to make them not so costly for everyone else.

This also makes arguments a lot less frustrating, at least at your end. It might make them more frustrating at the other end, because people want their belief-in-belief to be treated like proper belief, and you’ll be refusing them that opportunity. But this is not such a bad thing; if we make it more frustrating to express ridiculous beliefs in public, we might manage to reduce the frequency of such expression.