Downsides of rent control

May 13 JDN 2458252

One of the largest ideological divides between economists and the rest of the population concerns rent control.

Rent control is very popular among the general population, especially in California—with support hovering around 60% in Orange County, San Diego County, and across California in general. About 60% of people in the UK and over 50% in Ontario, Canada also support rent control.

Meanwhile, economists overwhelmingly oppose rent control: When evaluating the statement “A ceiling on rents reduces the quantity and quality of housing available,” over 76% of economists agreed outright, and another 16% agreed with qualifications. For the record, I would be an “agree with qualifications” as well (as they say, there are few one-handed economists).

There is evidence of some benefits of rent control, at least for the small number of people who can actually manage to stay in rent-controlled units. People who live in rent-controlled units are about 15% more likely to stay where they are, even in places as expensive as San Francisco, which could be considered a good thing (though I’m not convinced it always is; mobility is one of the key forces driving the dynamism of the US economy).

But there are winners and losers. Landlords whose properties are rent-controlled decrease their supply of housing by an average of 15%, via a combination of converting units to condos, removing them from the market, or demolishing the buildings outright. As a result, rent control raises a city’s average rent by about 5%. One of the most effective ways to get out of rent control is to remove a building from the market entirely; this allows you to evict all of your tenants with very little notice, and it is responsible for thousands of tenants being evicted every year in Los Angeles.

Rent control disincentivizes both new housing construction and the proper maintenance of existing housing. The quality of rent-controlled homes is systematically lower than the quality of other homes.

The benefits of rent control mainly fall upon the upper-middle class, not the poor. Rent control can make an area more racially diverse—but it benefits middle-class members of racial minorities, not poor members. Most of the benefits of rent control go to older families who have lived in a city for a long time—which makes them a transfer of wealth away from young people.

Cities such as Chicago without rent control systematically have lower rents, not higher; partly this is a cause, rather than an effect, as tenants are less likely to panic and demand rent control when rents are not high. But it’s also an effect, as rent control holds down the price in part of the market but ends up driving it up in the rest. Over 40% of San Francisco’s apartments are rent-controlled, and the city has some of the highest rents in the world.

Rent control also contributes to the tendency toward building high-end luxury apartments; if you know that you will never be able to raise the rent on your existing buildings, and may end up being stuck with whatever rent you charge the first year on your new buildings, you have a strong reason to want to charge as much as possible the first year you build new apartments. Rent control also creates subtler distortions in the size and location of apartment construction. The effects of rent control even spill over into other housing markets, such as owner-occupied homes and mobile homes.

Because it locks people into place and reduces the construction of new homes near city centers, rent control increases commute times and carbon emissions. This is probably something we should especially point out to people in California, as the two things Californians hate most are environmental degradation and traffic congestion. (Then again, the third is high rent.) California is good at avoiding the first one—our GDP/carbon emission ratio is near the best in the US. The other two? Not so much.

Of course, simply removing rent control would not immediately solve the housing shortage; while it would probably have benefits in the long run, during the transition period a lot of people currently protected by rent control would lose their homes. Even in the long run, it would probably not be enough to actually make rent affordable in the largest coastal cities.

But it’s vital not to confuse “lower rent” with “rent control”; there are much, much better ways to reduce rent prices than simply enforcing arbitrary caps on them.

We have learned not to use price controls in other markets, but for some reason not in housing. Think about the gasoline market, for example. High gas prices are very politically unpopular (though frankly I never quite understood why; gasoline is a tiny fraction of consumption expenditure, and if we ever want to make a dent in our carbon emissions we need to make our gas prices much higher), but imagine how ridiculous it would seem for a politician to propose an arbitrary cap that says you aren’t allowed to sell gasoline for more than $2.50 per gallon in a particular city. The obvious outcome would be for most gas stations in that city to immediately close, and for everyone to end up buying their gas at the new gas stations that spring up just outside the city limits charging $4.00 per gallon. This is basically what happens in the housing market: Rent-controlled apartments are taken off the market, and the new housing that is built ends up even more expensive.

In a future post, I’ll discuss things we can do instead of rent control that would reliably make housing more affordable. Most of these would involve additional government spending; but there are two things I’d like to say about that. First, we are already spending this money, we just don’t see it, because it comes in the form of inefficiencies and market distortions instead of a direct expenditure. Second, do we really care about making housing affordable, or not? If we really care, we should be willing to spend money on it. If we aren’t willing to spend money on it, then we must not really care.

Sympathy for the incel

Post 237: May 6 JDN 2458245

If you’ve been following the news surrounding the recent terrorist attack in Toronto, you may have encountered the word “incel” for the first time via articles in NPR, Vox, USA Today, or other sources linking the attack to the incel community.

If this was indeed your first exposure to the concept of “incel”, I think you are getting a distorted picture of the community, which is actually a surprisingly large Internet subculture. Finding out about incels this way would be like finding out about Islam from 9/11. (Actually, I’m fairly sure a lot of Americans did learn about it that way, which is awful.) The incel community is a remarkably large one—hundreds of thousands of members at least, and quite likely millions.

While a large proportion subscribe to a toxic and misogynistic ideology, a similarly large proportion do not; while the ideology has contributed to terrorism and other violence, the vast majority of members of the community are not violent.

Note that the latter sentence is also entirely true of Islam. So if you are sympathetic toward Muslims and want to protect them from abuse and misunderstanding, I maintain that you should want to do the same for incels, and for basically the same reasons.

I want to make something abundantly clear at the outset:

This attack was terrorism. I am in no way excusing or defending the use of terrorism. Once someone crosses the line and starts attacking random civilians, I don’t care what their grievances were; the best response to their behavior involves snipers on rooftops. I frankly don’t even understand the risks police are willing to take in order to capture these people alive—especially considering how trigger-happy they are when it comes to random Black men. If you start shooting (or bombing, or crashing vehicles into) civilians, the police should shoot you. It’s that simple.

I do not want to evoke sympathy for incel-motivated terrorism. I want to evoke sympathy for the hundreds of thousands of incels who would never support terrorism and are now being publicly demonized.

I also want to make it clear that I am not throwing in my lot with the likes of Robin Hanson (who is also well-known as a behavioral economist, blogger, science fiction fan, Less Wrong devotee, and techno-utopian—so I feel a particular need to clarify my differences with him) when he defends something he calls, in purposefully cold language, “redistribution of sex” (that post came right after the attack, but he has made the same argument before, in previous blog posts).

Hanson has drunk Robert Nozick’s Kool-Aid, and thinks that redistribution of wealth via taxation is morally equivalent to theft or even slavery. He is fond of making comparisons between redistribution of wealth and other forms of “redistribution” that obviously would be tantamount to theft and slavery, and asking “What’s the difference?” when in fact the difference is glaringly obvious to everyone but him. He is also fond of saying that “inequality between households within a nation” is a small portion of inequality, and then wondering aloud why we make such a big deal out of it. The answer here is also quite obvious: First of all, it’s not that small a portion of inequality—it’s a third of global income inequality by most measures, it’s increasing while across-nation inequality is decreasing, and the absolute magnitude of within-nation inequality is staggering: there are households with incomes over one million times that of other households within the same nation. (Where are the people who have had sex one hundred billion times, let alone the ones who had sex forty billion times in one year? Because here’s the man who has one hundred billion dollars and made almost $40 billion in one year.) Second, within-nation inequality is extremely simple to fix by public policy; just change a few numbers in the tax code—in fact, just change them back to what they were in the 1950s. Cross-national inequality is much more complicated (though I believe it can be solved, eventually) and some forms of what he’s calling “inequality” (like “inequality across periods of human history” or “inequality of innate talent”) don’t seem amenable to correction under any conceivable circumstances.

Hanson has lots of just-so stories about the evolutionary psychology of why “we don’t care” about cross-national inequality (gee, I thought maybe devoting my career to it was a pretty good signal otherwise?) or inequality in access to sex (which is thousands of times smaller than income inequality), but no clear policy suggestions for how these other forms of inequality could be in any way addressed. This whole idea of “redistribution of sex”: what does that mean, exactly? Legalized or even subsidized prostitution or sex robots would be one thing; I can see pros and cons there at least. But without clarification, it sounds like he’s endorsing the most extremist misogynist incels who think that women should be rightfully compelled to have sex with sexually frustrated men—which would be quite literally state-sanctioned rape. I really think Hanson isn’t all that interested in incels, and just wants to make fun of silly “socialists” who would dare suppose that maybe Jeff Bezos doesn’t need his 120 billion dollars as badly as some of the starving children in Africa need it, or that maybe having a tax system similar to Sweden or Denmark (which consistently rate as some of the happiest, most prosperous nations on Earth) sounds like a good idea. He takes things that are obviously much worse than redistributive taxation, and compares them to redistributive taxation to make taxation seem worse than it is.

No, I do not support “redistribution of sex”. I might be able to support legalized prostitution, but I’m concerned about the empirical data suggesting that legalized prostitution correlates with increased human sex trafficking. I think I would also support legalized sex robots, but for reasons that will become clear shortly, I strongly suspect they would do little to solve the problem, even if they weren’t ridiculously expensive. Beyond that, I’ve said enough about Hanson; Lawyers, Guns & Money nicely skewers Hanson’s argument, so I’ll not bother with it any further.

Instead, I want to talk about the average incel: one man among the hundreds of thousands, if not millions, who feel cast aside by society because they are socially awkward and can’t get laid. I want to talk about him because I used to be very much like him (though I never specifically identified as “incel”), and I want to talk about him because I think that he is genuinely suffering and needs help.

There is a moderate wing of the incel community, just as there is a moderate wing of the Muslim community. The moderate wing of incels is represented by sites like Love-Shy.com that try to reach out to people (mostly, but not exclusively young heterosexual men) who are lonely and sexually frustrated and often suffering from social anxiety or other mood disorders. Though they can be casually sexist (particularly when it comes to stereotypes about differences between men and women), they are not virulently misogynistic and they would never support violence. Moreover, they provide a valuable service in offering social support to men who otherwise feel ostracized by society. I disagree with a lot of things these groups say, but they are providing valuable benefits to their members and aren’t hurting anyone else. Taking out your anger against incel terrorists on Love-Shy.com is like painting graffiti on a mosque in response to 9/11 (which, of course, people did).

To some extent, I can even understand the more misogynistic (but still non-violent) wings of the incel community. I don’t want to defend their misogyny, but I can sort of understand where it might come from.

You see, men in our society (and most societies) are taught from a very young age that their moral worth as human beings is based primarily on one thing in particular: Sexual prowess. If you are having a lot of sex with a lot of women, you are a good and worthy man. If you are not, you are broken and defective. (Donald Trump has clearly internalized this narrative quite thoroughly—as have a shockingly large number of his supporters.)

This narrative is so strong and so universal, in fact, that I wouldn’t be surprised if it has a genetic component. It actually makes sense as a matter of evolutionary psychology that males would evolve to think this way; in an evolutionary sense it’s true that a male’s ultimate worth—that is, fitness, the one thing natural selection cares about—is defined by mating with a maximal number of females. But even if it has a genetic component, there is enough variation in this belief that I am confident that social norms can exaggerate or suppress it. One thing I can’t stand about popular accounts of evolutionary psychology is how they leap from “plausible evolutionary account” to “obviously genetic trait” all the way to “therefore impossible to change or compensate for”. My myopia and astigmatism are absolutely genetic; we can point to some of the specific genes. And yet my glasses compensate for them perfectly, and for a bit more money I could instead get LASIK surgery that would correct them permanently. Never think for a moment that “genetic” implies “immutable”.

Because of this powerful narrative, men who are sexually frustrated get treated like garbage by other men and even women. They feel ostracized and degraded. Often, they even feel worthless. If your worth as a human being is defined by how many women you have sex with, and you aren’t having sex with any, it follows that your worth is zero. No wonder, then, that so many become overcome with despair.

The incel community provides an opportunity to escape that despair. If you are told that you are not defective, but instead there is something wrong with society that keeps you down, you no longer have to feel worthless. It’s not that you don’t deserve to have sex; it’s that you’ve been denied what you deserve. When the only other narrative you’ve been given is that you are broken and worthless, I can see why “society is screwing you over” is an appealing counter-narrative. Indeed, it’s not even that far off from the truth.

The moderate wing of the incel community even offers some constructive solutions: They offer support to help men improve themselves, overcome their own social anxiety, and ultimately build fulfilling sexual relationships.

The extremist wing gets this all wrong: Instead of blaming the narrative that sex equals worth, they blame women—often, all women—for somehow colluding to deny them access to the sex they so justly deserve. They often link themselves to the “pick-up artist” community who try to manipulate women into having sex.

And then in the most extreme cases, they may even decide to turn their anger into violence.

But really I don’t think most of these men actually want sex at all, which is part of why I don’t think sex robots would be particularly effective.

Rather, to clarify: They want sex, as most of us do—but that’s not what they need. A simple lack of sex can be compensated reasonably well by pornography and masturbation. (Let me state this outright: Pornography and masturbation are fundamental human rights. Porn is free speech, and masturbation is part of the fundamental right of bodily autonomy. The fact that increased access to porn reduces incidence of sexual assault is nice, but secondary; porn is freedom.) Obviously it would be more satisfying to have a real sexual relationship, but with such substitutes available, a mere lack of sex does not cause suffering.

The need that these men are feeling is companionship. It is love. It is understanding. These are things that can’t be replaced, even partially, by sex robots or Internet porn.

Why do they conflate the two? Again, because society has taught them to do so. This one is clearly cultural, as it varies quite considerably between nations; it’s not nearly as bad in Southern Europe for example.

In American society (and many, but not all, others), men are taught three things: First, expression of any emotion except possibly anger, and especially expression of affection, is inherently erotic. Second, emotional vulnerability jeopardizes masculinity. Third, erotic expression must occur only between men and women in a heterosexual relationship.

In principle, it might be enough to simply drop the third proposition: This is essentially what happens in the LGBT community. Gay men still generally suffer from the suspicion that all emotional expression is erotic, but have long since abandoned their fears of expressing eroticism with other men. Often they’ve also given up on trying to sustain norms of masculinity. So gay men can hug each other and cry in front of each other, for example, without breaking norms within the LGBT community; the sexual subtext is often still there, but it’s considered unproblematic. (Gay men typically aren’t even as concerned about sexual infidelity as straight men; over 40% of gay couples are to some degree polyamorous, compared to 5% of straight couples.) It may also be seen as a loss of masculinity, but this too is considered unproblematic in most cases. There is a notable exception, which is the substantial segment of gay men who pride themselves on hypermasculinity (generally abbreviated “masc”); and indeed, within that subcommunity you often see a lot of the same toxic masculinity norms that are found in society at large.

That is also what happened in Classical Greece and Rome, I think: These societies were certainly virulently misogynistic in their own way, but their willingness to accept erotic expression between men opened them to accepting certain kinds of emotional expression between men as well, as long as it was not perceived as a threat to masculinity per se.

But when all three of those norms are in place, men find that the only emotional outlet they are even permitted to have while remaining within socially normative masculinity is a woman who is a romantic partner. Family members are allowed certain minimal types of affection—you can hug your mom, as long as you don’t seem too eager—but there is only one person in the world that you are allowed to express genuine emotional vulnerability toward, and that is your girlfriend. If you don’t have one? Get one. If you can’t get one? Well, sorry, pal, you’re just out of luck. Deal with it, or you’re not a real man.

But really what I’d like to get rid of is the first two propositions: Emotional expression should not be considered inherently sexual. Expressing emotional vulnerability should not be taken as a capitulation of your masculinity—and if I really had my druthers, the whole idea of “masculinity” would disappear or become irrelevant. This is the way that society is actually holding incels down: Not by denying them access to sex—the right to refuse sex is also a fundamental human right—but by denying them access to emotional expression and treating them like garbage because they are unable to have sex.

My sense is that what most incels are really feeling is not a dearth of sexual expression; it’s a dearth of emotional expression. But precisely because social norms have forced them into getting the two from the same place, they have conflated them. Further evidence in favor of this proposition? A substantial proportion of men who hire prostitutes spend a lot of the time they paid for simply talking.

I think what most of these men really need is psychotherapy. I’m not saying that to disparage them; I myself am a regular consumer of psychotherapy, which is one of the most cost-effective medical interventions known to humanity. I feel a need to clarify this because there is so much stigma on mental illness that saying someone is mentally ill and needs therapy can be taken as an insult; but I literally mean that a lot of these men are mentally ill and need therapy. Many of them exhibit significant signs of social anxiety, depression, or bipolar disorder.

Even for those who aren’t outright mentally ill, psychotherapy might be able to help them sort out some of these toxic narratives they’ve been fed by society, get them to think a little more carefully about what it means to be a good man and whether the “man” part is even so important. A good therapist could tease out the fabric of their tangled cognition and point out that when they say they want sex, it really sounds like they want self-worth, and when they say they want a girlfriend it really sounds like they want someone to talk to.

Such a solution won’t work on everyone, and it won’t work overnight on anyone. But the incel community did not emerge from a vacuum; it was catalyzed by a great deal of genuine suffering. Remove some of that suffering, and we might just undermine the most dangerous parts of the incel community and prevent at least some future violence.

No one owes sex to anyone. But maybe we do, as a society, owe these men a little more sympathy?

Why are humans so bad with probability?

Apr 29 JDN 2458238

In previous posts on deviations from expected utility and cumulative prospect theory, I’ve detailed some of the myriad ways in which human beings deviate from optimal rational behavior when it comes to probability.

This post is going to be a bit different: Yes, we behave irrationally when it comes to probability. Why?

Why aren’t we optimal expected utility maximizers?

This question is not as simple as it sounds. Some of the ways that human beings deviate from neoclassical behavior occur simply because neoclassical theory requires levels of knowledge and intelligence far beyond what human beings are capable of; basically anything requiring “perfect information” qualifies, as does any game theory prediction that involves solving extensive-form games with infinite strategy spaces by backward induction. (Don’t feel bad if you have no idea what that means; that’s kind of my point. Solving infinite extensive-form games by backward induction is an unsolved problem in game theory; just this past week I saw a new paper presented that offered a partial potential solution—and yet we expect people to do it optimally every time?)

I’m also not going to include questions of fundamental uncertainty, like “Will Apple stock rise or fall tomorrow?” or “Will the US go to war with North Korea in the next ten years?” where it isn’t even clear how we would assign a probability. (Though I will get back to them, for reasons that will become clear.)

No, let’s just look at the absolute simplest cases, where the probabilities are all well-defined and completely transparent: Lotteries and casino games. Why are we so bad at that?

Lotteries are not a computationally complex problem. You figure out how much the prize is worth to you, multiply it by the probability of winning—which is clearly spelled out for you—and compare that to how much the ticket price is worth to you. The most challenging part lies in specifying your marginal utility of wealth—the “how much it’s worth to you” part—but that’s something you basically had to do anyway, to make any kind of trade-offs on how to spend your time and money. Maybe you didn’t need to compute it quite so precisely over that particular range of parameters, but you need at least some idea how much $1 versus $10,000 is worth to you in order to get by in a market economy.
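The calculation described above can be sketched directly. This is a minimal illustration, not a model of any real lottery: the wealth level, ticket price, prize, odds, and the logarithmic utility function are all assumptions chosen for the example.

```python
import math

def utility(wealth):
    """Log utility: a simple stand-in for diminishing marginal utility of wealth."""
    return math.log(wealth)

# Illustrative assumptions, not real lottery figures:
wealth = 50_000          # current wealth in dollars
ticket_price = 2
prize = 10_000_000
p_win = 1 / 300_000_000  # the probability, clearly spelled out for you

# Expected utility of buying the ticket versus skipping it
eu_buy = (p_win * utility(wealth - ticket_price + prize)
          + (1 - p_win) * utility(wealth - ticket_price))
eu_skip = utility(wealth)

print("buy the ticket" if eu_buy > eu_skip else "skip the ticket")
```

With these (hypothetical) numbers, the tiny chance of the prize doesn’t come close to offsetting the certain loss of the ticket price, so the comparison comes out against buying—which is the whole point: the computation itself is easy.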

Casino games are a bit more complicated, but not much, and most of the work has been done for you; you can look on the Internet and find tables of probability calculations for poker, blackjack, roulette, craps and more. Memorizing all those probabilities might take some doing, but human memory is astonishingly capacious, and part of being an expert card player, especially in blackjack, seems to involve memorizing a lot of those probabilities.

Furthermore, by any plausible expected utility calculation, lotteries and casino games are a bad deal. Unless you’re an expert poker player or blackjack card-counter, your expected income from playing at a casino is always negative—and the casino set it up that way on purpose.
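As a concrete instance of that negative expectation, take the simplest casino bet there is: an even-money bet on red in American roulette, where 18 of the 38 pockets are red.

```python
# Even-money bet of $1 on red in American roulette:
# 18 red pockets win $1; the other 20 (18 black, plus 0 and 00) lose $1.
p_win = 18 / 38
ev = p_win * 1 + (1 - p_win) * (-1)

print(f"expected value per $1 bet: ${ev:.4f}")  # about -$0.0526
```

That 5.26% house edge is exactly the kind of arithmetic the casino has already done, and done in its own favor.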

Why, then, can lotteries and casinos stay in business? Why are we so bad at such a simple problem?

Clearly we are using some sort of heuristic judgment in order to save computing power, and the people who make lotteries and casinos have designed formal models that can exploit those heuristics to pump money from us. (Shame on them, really; I don’t fully understand why this sort of thing is legal.)

In another previous post I proposed what I call “categorical prospect theory”, which I think is a decently accurate description of the heuristics people use when assessing probability (though I’ve not yet had the chance to test it experimentally).

But why use this particular heuristic? Indeed, why use a heuristic at all for such a simple problem?

I think it’s helpful to keep in mind that these simple problems are weird; they are absolutely not the sort of thing a tribe of hunter-gatherers is likely to encounter on the savannah. It doesn’t make sense for our brains to be optimized to solve poker or roulette.

The sort of problems that our ancestors encountered—indeed, the sort of problems that we encounter, most of the time—were not problems of calculable risk; they were problems of fundamental uncertainty. And they were frequently matters of life or death (which is why we’d expect them to be highly evolutionarily optimized): “Was that sound a lion, or just the wind?” “Is this mushroom safe to eat?” “Is that meat spoiled?”

In fact, many of the uncertainties most important to our ancestors are still important today: “Will these new strangers be friendly, or dangerous?” “Is that person attracted to me, or am I just projecting my own feelings?” “Can I trust you to keep your promise?” These sorts of social uncertainties are even deeper; it’s not clear that any finite being could ever totally resolve its uncertainty surrounding the behavior of other beings with the same level of intelligence, as the cognitive arms race continues indefinitely. The better I understand you, the better you understand me—and if you’re trying to deceive me, as I get better at detecting deception, you’ll get better at deceiving.

Personally, I think that it was precisely this sort of feedback loop that resulted in human beings getting such ridiculously huge brains in the first place. Chimpanzees are pretty good at dealing with the natural environment, maybe even better than we are; but even young children can outsmart them in social tasks any day. And once you start evolving for social cognition, it’s very hard to stop; basically you need to be constrained by something very fundamental, like, say, maximum caloric intake or the shape of the birth canal. Where chimpanzees look like their brains were what we call an “interior solution”, where evolution optimized toward a particular balance between cost and benefit, human brains look more like a “corner solution”, where the evolutionary pressure was entirely in one direction until we hit up against a hard constraint. That’s exactly what one would expect to happen if we were caught in a cognitive arms race.

What sort of heuristic makes sense for dealing with fundamental uncertainty—as opposed to precisely calculable probability? Well, you don’t want to compute a utility function and multiply it by a probability, because that adds all sorts of extra computation, and you have no idea what probability to assign anyway. But you’ve got to do something like that in some sense, because that really is the optimal way to respond.

So here’s a heuristic you might try: Separate events into some broad categories based on how frequently they seem to occur, and what sort of response would be necessary.

Some things, like the sun rising each morning, seem to always happen. So you should act as if those things are going to happen pretty much always, because they do happen… pretty much always.

Other things, like rain, seem to happen frequently but not always. So you should look for signs that those things might happen, and prepare for them when the signs point in that direction.

Still other things, like being attacked by lions, happen very rarely, but are a really big deal when they do. You can’t go around expecting those to happen all the time, that would be crazy; but you need to be vigilant, and if you see any sign that they might be happening, even if you’re pretty sure they’re not, you may need to respond as if they were actually happening, just in case. The cost of a false positive is much lower than the cost of a false negative.

And still other things, like people sprouting wings and flying, never seem to happen. So you should act as if those things are never going to happen, and you don’t have to worry about them.

This heuristic is quite simple to apply once set up: You can simply slot in memories of when things did and didn’t happen in order to decide which category an event goes in—i.e., the availability heuristic. If you can remember a lot of examples of something you filed under “almost never”, maybe you should move it to “unlikely” instead. If you accumulate a really large number of examples, you might even want to move it all the way to “likely”.
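The category-slotting idea can be sketched in a few lines of code. The category names, the thresholds, and the toy event frequencies below are all my own illustrative assumptions, not part of the theory:

```python
from collections import Counter

def categorize(happened, total):
    """Slot an event into a coarse frequency category from remembered outcomes.
    The 5% and 95% thresholds are illustrative guesses, not fitted values."""
    if happened == 0:
        return "never"
    rate = happened / total
    if rate < 0.05:
        return "rare"
    if rate < 0.95:
        return "sometimes"
    return "always"

# Toy "memories": (event, outcome) -> how many times it was observed
memories = Counter()
for day in range(100):
    memories[("sunrise", True)] += 1           # happens every day
    memories[("rain", day % 3 == 0)] += 1      # rains roughly 1 day in 3
    memories[("lion attack", day == 50)] += 1  # happened exactly once

for event in ("sunrise", "rain", "lion attack"):
    happened = memories[(event, True)]
    total = happened + memories[(event, False)]
    print(event, "->", categorize(happened, total))
```

Note how the availability mechanism falls out for free: each new memory updates the counts, and an event migrates between categories as the remembered examples accumulate.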

Another large advantage of this heuristic is that by combining utility and probability into one metric—we might call it “importance”, though Bayesian econometricians might complain about that—we can save on memory space and computing power. I don’t need to separately compute a utility and a probability; I just need to figure out how much effort I should put into dealing with this situation. A high probability of a small cost and a low probability of a large cost may be equally worth my time.
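A toy version of that single “importance” number: probability and cost collapsed into one score. The numbers below are contrived (powers of two, chosen so the products come out exactly equal) purely to illustrate the point that a likely nuisance and a rare catastrophe can deserve equal attention:

```python
# Hypothetical "importance" score: probability times cost in one number.
def importance(probability, cost):
    return probability * cost

rain = importance(0.5, 4)          # likely, but only mildly costly
lion = importance(1 / 1024, 2048)  # very unlikely, but catastrophic

print(rain, lion)  # both come out to 2.0: equally worth preparing for
```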

How might these heuristics go wrong? Well, if your environment changes sufficiently, the probabilities could shift and what seemed certain no longer is. For most of human history, “people walking on the Moon” would seem about as plausible as sprouting wings and flying away, and yet it has happened. Being attacked by lions is now exceedingly rare except in very specific places, but we still harbor a certain awe and fear before lions. And of course availability heuristic can be greatly distorted by mass media, which makes people feel like terrorist attacks and nuclear meltdowns are common and deaths by car accidents and influenza are rare—when exactly the opposite is true.

How many categories should you set, and what frequencies should they be associated with? This part I'm still struggling with, and it's an important piece of the puzzle I will need before I can take this theory to experiment. There is probably a trade-off: more categories give you more precision in tailoring your optimal behavior, but cost more cognitive resources to maintain. Is the optimal number 3? 4? 7? 10? I really don't know. Even if I could specify the number of categories, I'd still need to figure out precisely what categories to assign.

Are some ideas too ridiculous to bother with?

Apr 22 JDN 2458231

Flat Earth. Young-Earth Creationism. Reptilians. 9/11 “Truth”. Rothschild conspiracies.

There are an astonishing number of ideas that satisfy two apparently-contrary conditions:

  1. They are so obviously ridiculous that even a few minutes of honest, rational consideration of evidence that is almost universally available will immediately refute them;
  2. They are believed by tens or hundreds of millions of otherwise-intelligent people.

Young-Earth Creationism is probably the most alarming, seeing as it grips the minds of some 38% of Americans.

What should we do when faced with such ideas? This is something I’ve struggled with before.

I’ve spent a lot of time and effort trying to actively address and refute them—but I don’t think I’ve even once actually persuaded someone who believes these ideas to change their mind. This doesn’t mean my time and effort were entirely wasted; it’s possible that I managed to convince bystanders, or gained some useful understanding, or simply improved my argumentation skills. But it does seem likely that my time and effort were mostly wasted.

It’s tempting, therefore, to give up entirely, and just let people go on believing whatever nonsense they want to believe. But there’s a rather serious downside to that as well: Thirty-eight percent of Americans.

These people vote. They participate in community decisions. They make choices that affect the rest of our lives. Nearly all of those Creationists are Evangelical Christians—and White Evangelical Christians voted overwhelmingly in favor of Donald Trump. I can’t be sure that changing their minds about the age of the Earth would also change their minds about voting for Trump, but I can say this: If all the Creationists in the US had simply not voted, Hillary Clinton would have won the election.

And let’s not leave the left wing off the hook either. Jill Stein is a 9/11 “Truther”, and pulled a lot of fellow “Truthers” to her cause in the election as well. Had all of Jill Stein’s votes gone to Hillary Clinton instead, again Hillary would have won, even if all the votes for Trump had remained the same. (That said, there is reason to think that if Stein had dropped out, most of those folks wouldn’t have voted at all.)

Therefore, I don’t think it is safe to simply ignore these ridiculous beliefs. We need to do something; the question is what.

We could try to censor them, but first of all that violates basic human rights—which should be a sufficient reason not to do it—and second, it probably wouldn’t even work. Censorship typically leads to radicalization, not assimilation.

We could try to argue against them. Ideally this would be the best option, but it has not shown much effect so far. The kind of person who sincerely believes that the Earth is 6,000 years old (let alone that governments are secretly ruled by reptilian alien invaders) isn’t the kind of person who is highly responsive to evidence and rational argument.

In fact, there is reason to think that these people don’t actually believe what they say the same way that you and I believe things. I’m not saying they’re lying, exactly. They think they believe it; they want to believe it. They believe in believing it. But they don’t actually believe it—not the way that I believe that cyanide is poisonous or the way I believe the sun will rise tomorrow. It isn’t fully integrated into the way that they anticipate outcomes and choose behaviors. It’s more of a free-floating sort of belief, where professing a particular belief allows them to feel good about themselves, or represent their status in a community.

To be clear, it isn’t that these beliefs are unimportant to them; on the contrary, they are in some sense more important. Creationism isn’t really about the age of the Earth; it’s about who you are and where you belong. A conventional belief can be changed by evidence about the world because it is about the world; a belief-in-belief can’t be changed by evidence because it was never really about that.

But if someone’s ridiculous belief is really about their identity, how do we deal with that? I can’t refute an identity. If your identity is tied to a particular social group, maybe they could ostracize you and cause you to lose the identity; but an outsider has no power to do that. (Even then, I strongly suspect that, for instance, most excommunicated Catholics still see themselves as Catholic.) And if it’s a personal identity not tied to a particular group, even that option is unavailable.

Where, then, does that leave us? It would seem that we can’t change their minds—but we also can’t afford not to change their minds. We are caught in a terrible dilemma.

I think there might be a way out. It’s a bit counter-intuitive, but I think what we need to do is stop taking them seriously as beliefs, and start treating them purely as announcements of identity.

So when someone says something like, “The Rothschilds run everything!”, instead of responding as though this were a coherent proposition being asserted, treat it as if someone had announced, “Boo! I hate the Red Sox!” Belief in the Rothschild conspiracies isn’t a well-defined set of propositions about the world; it’s an assertion of membership in a particular sort of political sect that is vaguely left-wing and anarchist. You don’t really think the Rothschilds rule everything. You just want to express your (quite justifiable) anger at how our current political system privileges the rich.

Likewise, when someone says they think the Earth is 6,000 years old, you could try to present the overwhelming scientific evidence that they are wrong—but it might be more productive, and it is certainly easier, to just think of this as a funny way of saying “I’m an Evangelical Christian”.

Will this eliminate the ridiculous beliefs? Not immediately. But it might ultimately do so, in the following way: By openly acknowledging the belief-in-belief as a signaling mechanism, we can open opportunities for people to develop new, less pathological methods of signaling. (Instead of saying you think the Earth is 6,000 years old, maybe you could wear a funny hat, like Orthodox Jews do. Funny hats don’t hurt anybody. Everyone loves funny hats.) People will always want to signal their identity, and there are fundamental reasons why such signals will typically be costly for those who use them; but we can try to make them not so costly for everyone else.

This also makes arguments a lot less frustrating, at least at your end. It might make them more frustrating at the other end, because people want their belief-in-belief to be treated like proper belief, and you’ll be refusing them that opportunity. But this is not such a bad thing; if we make it more frustrating to express ridiculous beliefs in public, we might manage to reduce the frequency of such expression.

Today would be my father’s birthday.

Apr 15 JDN 2458224

When this post goes live, it will be April 15, 2018. My father was born April 15, 1954 and died August 31, 2017, so this is the first time we will be celebrating his birthday without him.

I’m not sure that grief ever really goes away. The shock of the unexpected death fades eventually, and at last you can accept that this has really happened and make it a part of your life. But the sum total of all missed opportunities for life events you could have had together only continues to increase.

There are many cliches about this sort of thing: “Death is a part of life.” “Everything happens for a reason.” It’s all making excuses for the dragon. If we could find a way to make people stop dying, we ought to do it. The other consequences are things we could figure out later.

But, alas, we can’t, at least not in general. We have managed to cure or vaccinate against a wide variety of diseases, and as a result people do, on average, live longer than ever before in human history. But none of us live “on average”—and sometimes you get a very unlucky draw.

Yet somehow, we do learn to go on. I’m not sure how. I guess it’s a kind of desensitization: Right after my father’s death, any reminder of him was painful. But over time, that pain began to lessen. Each new reminder hurts a little less than the last, until eventually the pain is mild enough that it can mostly be ignored. It never really goes away, I think; but eventually it is below your just-noticeable-difference.

I had hoped to do more with this post. I had hoped that reflecting on the grief I’ve felt for the last several months would allow me to find some greater insight that I could share. Instead, I find myself re-writing the same sentences over and over again, trying in vain to express something that might help me, or help someone else who is going through similar grief. I keep looking for ways to distract myself, other things to think about—anything but this. Maybe there are no simple insights, no way for words to shorten the process that everyone must go through.

The extreme efficiency of environmental regulation—and the extreme inefficiency of war

Apr 8 JDN 2458217

Insofar as there has been any coherent policy strategy for the Trump administration, it has largely involved three things:

  1. Increase investment in military, incarceration, and immigration enforcement
  2. Redistribute wealth from the poor and middle class to the rich
  3. Remove regulations that affect business, particularly environmental regulations

The human cost of such a policy strategy is difficult to overstate. Literally millions of people will die around the world if such policies continue. This is almost the exact opposite of what our government should be doing.

This is because military is one of the most wasteful and destructive forms of government investment, while environmental regulation is one of the most efficient and beneficial. The magnitude of these differences is staggering.

First of all, it is not clear that the majority of US military spending provides any marginal benefit. It could quite literally be zero. The US spends more on military than the next ten countries combined.

I think it’s quite reasonable to say that the additional defense benefit becomes negligible once you exceed the sum of spending from all plausible enemies. China, Russia, and Saudi Arabia together add up to about $350 billion per year. Current US spending is $610 billion per year. (And this calculation, by the way, requires them all to band together, while simultaneously all our NATO allies completely abandon us.) That means we could probably cut $260 billion per year without losing anything.

What about the remaining $350 billion? I could be extremely generous here, and assume that nuclear weapons, alliances, economic ties, and diplomacy all have absolutely no effect, so that without our military spending we would be invaded and immediately lose, and that if we did lose a war with China or Russia it would be utterly catastrophic and result in the deaths of 10% of the US population. Since in this hypothetical scenario we are only preventing the war by the barest margin, each year of spending only adds 1 year to the lives of the war’s potential victims. That means we are paying some $350 billion per year to add 1 year to the lives of 32 million people. That is a cost of about $11,000 per QALY. If it really is saving us from being invaded, that doesn’t sound all that unreasonable. And indeed, I don’t favor eliminating all military spending.

Of course, the marginal benefit of additional spending is still negligible—and UN peacekeeping is about twice as cost-effective as US military action, even if we had to foot the entire bill ourselves.

Alternatively, I could consider only the actual, documented results of our recent military action, which has resulted in over 280,000 deaths in Iraq and 110,000 in Afghanistan, all for little or no apparent gain. Life expectancy in these countries is about 70 in Iraq and 60 in Afghanistan. Quality of life there is pretty awful, but people are also greatly harmed by war without actually dying in it, so I think a fair conversion factor is about 60 QALY per death. That’s a loss of 23.4 MQALY. The cost of the Iraq War was about $1.1 trillion, while the cost of the Afghanistan War was about a further $1.1 trillion. This means that we paid $94,000 per lost QALY. If this is right, we paid enormous amounts to destroy lives and accomplished nothing at all.

Somewhere in between, we could assume that cutting the military budget greatly would result in the US being harmed in a manner similar to World War 2, which killed about 500,000 Americans. Paying $350 billion per year to gain 500,000 QALY per year is a price of $700,000 per QALY. I think this is about right; we are getting some benefit, but we are spending an enormous amount to get it.

Now let’s compare that to the cost-effectiveness of environmental regulation.

Since 1990, the total cost of implementing the regulations in the Clean Air Act was about $65 billion. That’s over 28 years, so less than $2.5 billion per year. Compare that to the $610 billion per year we spend on the military.

Yet the Clean Air Act saves over 160,000 lives every single year. And these aren’t lives extended one more year as they were in the hypothetical scenario where we are just barely preventing a catastrophic war; most of these people are old, but go on to live another 20 years or more. That means we are gaining 3.2 MQALY for a price of $2.5 billion. This is a price of only $800 per QALY.

From 1970 to 1990, the Clean Air Act cost more to implement: about $520 billion (so, you know, less than one year of military spending). But its estimated benefit was to save over 180,000 lives per year, and its estimated economic benefit was $22 trillion.

Look at those figures again, please. Even under very pessimistic assumptions where we would be on the verge of war if not for our enormous spending, we’re spending at least $11,000 and probably more like $700,000 on the military for each QALY gained. But environmental regulation only costs us about $800 per QALY. That’s a factor of at least 14 and more likely 1000. Environmental regulation is probably about one thousand times as cost-effective as military spending.
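The comparison can be reproduced in a few lines; every input below is one of the rough estimates quoted above, so treat the outputs as order-of-magnitude figures:

```python
def cost_per_qaly(dollars, qalys):
    """Dollars spent (or destroyed) per quality-adjusted life-year."""
    return dollars / qalys

# Pessimistic deterrence scenario: $350B/year buys one extra year of life
# for 32 million potential war victims.
deterrence = cost_per_qaly(350e9, 32e6)          # ~ $11,000 per QALY

# Iraq and Afghanistan: ~$2.2 trillion spent, ~390,000 deaths,
# ~60 QALY lost per death.
wars = cost_per_qaly(2.2e12, 390_000 * 60)       # ~ $94,000 per QALY destroyed

# WW2-scale counterfactual: $350B/year to save 500,000 QALY per year.
ww2_scale = cost_per_qaly(350e9, 500_000)        # = $700,000 per QALY

# Clean Air Act since 1990: ~$2.5B/year, ~160,000 lives saved per year,
# ~20 QALY gained per life.
clean_air = cost_per_qaly(2.5e9, 160_000 * 20)   # ~ $780 per QALY
```

The ratio between the WW2-scale military figure and the Clean Air Act figure is where the "about one thousand times" claim comes from.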

And I haven’t even included the fact that there is a direct substitution here: Climate change is predicted to trigger thousands if not millions of deaths due to military conflict. Even if national security were literally the only thing we cared about, it would probably still be more cost-effective to invest in carbon emission reduction rather than building yet another aircraft carrier. And if, like me, you think that a child who dies from asthma is just as important as one who gets bombed by China, then the cost-benefit analysis is absolutely overwhelming; every $60,000 spent on war instead of environmental protection is a statistical murder.

This is not even particularly controversial among economists. There is disagreement about specific environmental regulations, but the general benefits of fighting climate change and keeping air and water clean are universally acknowledged. There is disagreement about exactly how much military spending is necessary, but you’d be hard-pressed to find an economist who doesn’t think we could cut our military substantially with little or no risk to security.

Reasonableness and public goods games

Apr 1 JDN 2458210

There’s a very common economics experiment called a public goods game, often used to study cooperation and altruistic behavior. I’m actually planning on running a variant of such an experiment for my second-year paper.

The game is quite simple, which is part of why it is used so frequently: You are placed into a group of people (usually about four), and given a little bit of money (say $10). Then you are offered a choice: You can keep the money, or you can donate some of it to a group fund. Money in the group fund will be multiplied by some factor (usually about two) and then redistributed evenly to everyone in the group. So for example if you donate $5, that will become $10, split four ways, so you’ll get back $2.50.
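The mechanics are simple enough to state as code; this sketch just encodes the rules described above:

```python
def payoffs(endowment, donations, multiplier=2.0):
    """Each player keeps whatever they didn't donate, plus an equal
    share of the multiplied group fund."""
    fund = sum(donations) * multiplier
    share = fund / len(donations)
    return [endowment - d + share for d in donations]

# The example from the text: you donate $5 of your $10 in a group of four
# (here the other three donate nothing, so your $5 becomes $10, split four ways).
print(payoffs(10, [5, 0, 0, 0]))  # [7.5, 12.5, 12.5, 12.5]
```

Notice the structure of the dilemma: the lone donor ends up with $7.50 while the free-riders each walk away with $12.50.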

Donating more to the group will benefit everyone else, but at a cost to yourself. The game is usually set up so that the best outcome for everyone is if everyone donates the maximum amount, but the best outcome for you, holding everyone else’s choices constant, is to donate nothing and keep it all.

Yet it is a very robust finding that most people do neither of those things. There's still a good deal of uncertainty surrounding what motivates people to donate what they do, but certain patterns have emerged:

  1. Most people donate something, but hardly anyone donates everything.
  2. Increasing the multiplier tends to smoothly increase how much people donate.
  3. The number of people in the group isn’t very important, though very small groups (e.g. 2) behave differently from very large groups (e.g. 50).
  4. Letting people talk to each other tends to increase the rate of donations.
  5. Repetition of the game, or experience from previous games, tends to result in decreasing donation over time.
  6. Economists donate less than other people.

Number 6 is unfortunate, but easy to explain: Indoctrination into game theory and neoclassical economics has taught economists that selfish behavior is efficient and optimal, so they behave selfishly.

Number 3 is also fairly easy to explain: Very small groups allow opportunities for punishment and coordination that don’t exist in large groups. Think about how you would respond when faced with 2 defectors in a group of 4 as opposed to 10 defectors in a group of 50. You could punish the 2 by giving less next round; but punishing the 10 would end up punishing 40 others who had contributed like they were supposed to.

Number 4 is a very interesting finding. Game theory says that communication shouldn’t matter, because there is a unique Nash equilibrium: Donate nothing. All the promises in the world can’t change what is the optimal response in the game. But in fact, human beings don’t like to break their promises, and so when you get a bunch of people together and they all agree to donate, most of them will carry through on that agreement most of the time.

Number 5 is on the frontier of research right now. There are various theoretical accounts for why it might occur, but none of the models proposed so far have much predictive power.

But my focus today will be on findings 1 and 2.

If you’re not familiar with the underlying game theory, finding 2 may seem obvious to you: Well, of course if you increase the payoff for donating, people will donate more! It’s precisely that sense of obviousness which I am going to appeal to in a moment.

In fact, the game theory makes a very sharp prediction: For N players, if the multiplier is less than N, you should always contribute nothing. Only if the multiplier becomes larger than N should you donate—and at that point you should donate everything. The game theory prediction is not a smooth increase; it’s all-or-nothing. The only time game theory predicts intermediate amounts is on the knife-edge at exactly equal to N, where each player would be indifferent between donating and not donating.
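The reason the prediction is all-or-nothing: each dollar you donate comes back to you as only multiplier/N dollars, so the selfish payoff is linear in your donation. A sketch (setting aside the knife-edge case where the multiplier exactly equals N):

```python
def selfish_best_response(endowment, multiplier, n_players):
    # Each donated dollar returns multiplier / n_players to the donor;
    # a purely selfish player donates everything iff that exceeds the
    # dollar they would otherwise have kept.
    return endowment if multiplier > n_players else 0

print(selfish_best_response(10, 2, 4))  # 0: the usual setup, donate nothing
print(selfish_best_response(10, 5, 4))  # 10: multiplier > N, donate it all
```

No intermediate donation is ever a best response, which is exactly what real subjects refuse to do.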

But it feels reasonable that increasing the multiplier should increase donation, doesn’t it? It’s a “safer bet” in some sense to donate $1 if the payoff to everyone is $3 and the payoff to yourself is $0.75 than if the payoff to everyone is $1.04 and the payoff to yourself is $0.26. The cost-benefit analysis comes out better: In the former case, you can gain up to $2 if everyone donates, but would only lose $0.25 if you donate alone; but in the latter case, you would only gain $0.04 if everyone donates, and would lose $0.74 if you donate alone.

I think this notion of "reasonableness" is a deep principle that underlies a great deal of human thought. This is something that is sorely lacking from artificial intelligence: The same AI that tells you the precise width of the English Channel to the nearest foot may also tell you that the Earth is 14 feet in diameter, because the former was in its database and the latter wasn't. Yes, Watson may have won at Jeopardy!, but it (he?) also gave a nonsensical response to a Final Jeopardy clue.

Human beings like to “sanity-check” our results against prior knowledge, making sure that everything fits together. And, of particular note for public goods games, human beings like to “hedge our bets”; we don’t like to over-commit to a single belief in the face of uncertainty.

I think this is what best explains findings 1 and 2. We don’t donate everything, because that requires committing totally to the belief that contributing is always better. We also don’t donate nothing, because that requires committing totally to the belief that contributing is always worse.

And of course we donate more as the payoffs to donating more increase; that also just seems reasonable. If something is better, you do more of it!

These choices could be modeled formally by assigning some sort of probability distribution over others' choices, but in a rather unconventional way. We can't simply assume that other people will randomly choose some decision and then optimize accordingly—that just gives you back the game theory prediction. We have to assume that our behavior and the behavior of others is in some sense correlated; if we decide to donate, we reason that others are more likely to donate as well.

Stated like that, this sounds irrational; some economists have taken to calling it “magical thinking”. Yet, as I always like to point out to such economists: On average, people who do that make more money in the games. Economists playing other economists always make very little money in these games, because they turn on each other immediately. So who is “irrational” now?

Indeed, if you ask people to predict how others will behave in these games, they generally do better than the game theory prediction: They say, correctly, that some people will give nothing, most will give something, and hardly any will give everything. The same “reasonableness” that they use to motivate their own decisions, they also accurately apply to forecasting the decisions of others.

Of course, to say that something is “reasonable” may be ultimately to say that it conforms to our heuristics well. To really have a theory, I need to specify exactly what those heuristics are.

“Don’t put all your eggs in one basket” seems to be one, but it’s probably not the only one that matters; my guess is that there are circumstances in which people would actually choose all-or-nothing, like if we said that the multiplier was 0.5 (so everyone giving to the group would make everyone worse off) or 10 (so that giving to the group makes you and everyone else way better off).

“Higher payoffs are better” is probably one as well, but precisely formulating that is actually surprisingly difficult. Higher payoffs for you? For the group? Conditional on what? Do you hold others’ behavior constant, or assume it is somehow affected by your own choices?

And of course, the theory wouldn’t be much good if it only worked on public goods games (though even that would be a substantial advance at this point). We want a theory that explains a broad class of human behavior; we can start with simple economics experiments, but ultimately we want to extend it to real-world choices.

Hyperbolic discounting: Why we procrastinate

Mar 25 JDN 2458203

Lately I’ve been so occupied by Trump and politics and various ideas from environmentalists that I haven’t really written much about the cognitive economics that was originally planned to be the core of this blog. So, I thought that this week I would take a step out of the political fray and go back to those core topics.

Why do we procrastinate? Why do we overeat? Why do we fail to exercise? It’s quite mysterious, from the perspective of neoclassical economic theory. We know these things are bad for us in the long run, and yet we do them anyway.

The reason has to do with the way our brains deal with time. We value the future less than the present—but that’s not actually the problem. The problem is that we do so inconsistently.

A perfectly-rational neoclassical agent would use time-consistent discounting: the discount applied to a given time interval doesn't depend on when that interval occurs or on the stakes involved. If having $100 in 2019 is as good as having $110 in 2020, then having $1000 in 2019 is as good as having $1100 in 2020; and if I ask you again in 2019, you'll still agree that having $100 in 2019 is as good as having $110 in 2020. A perfectly-rational individual would have a certain discount rate (in this case, 10% per year), and would apply it consistently at all times to all things.

This is of course not how human beings behave at all.

A much more likely pattern is that you would agree, in 2018, that having $100 in 2019 is as good as having $110 in 2020 (a discount rate of 10%). But then if I wait until 2019, and then offer you the choice between $100 immediately and $120 in a year, you’ll probably take the $100 immediately—even though a year ago, you told me you wouldn’t. Your discount rate rose from 10% to at least 20% in the intervening time.

The leading model in cognitive economics right now to explain this is called hyperbolic discounting. The precise functional form of a hyperbola has been called into question by recent research, but the general pattern is definitely right: We act as though time matters a great deal when discussing time intervals that are close to us, but treat time as unimportant when discussing time intervals that are far away.
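The signature of hyperbolic discounting is the preference reversal itself. Under a simple hyperbolic form 1/(1 + k·t)—the k = 1 per year here is purely illustrative, and as noted, the exact functional form is contested—a choice between $100 sooner and $110 a year after that flips as the dates draw near, whereas exponential (time-consistent) discounting never flips:

```python
def hyperbolic(amount, delay_years, k=1.0):
    # Value falls off as 1 / (1 + k * delay): steep up close, flat far away.
    return amount / (1 + k * delay_years)

def exponential(amount, delay_years, d=0.9):
    # Time-consistent discounting: the same factor applies to every year.
    return amount * d ** delay_years

for t in (0, 10):
    sooner = hyperbolic(100, t)      # $100 after t years
    later = hyperbolic(110, t + 1)   # $110 one year after that
    print(t, "sooner" if sooner > later else "later")
# prints "0 sooner" then "10 later": the hyperbolic discounter reverses.
```

Run the same comparison through `exponential` and the ranking is the same at t = 0 and t = 10; only the hyperbolic agent changes its mind as the payoff dates approach.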

How does this explain procrastination and other failures of self-control over time? Let’s try an example.

Let’s say that you have a project you need to finish by the end of the day Friday, which has a benefit to you, received on Saturday, that I will arbitrarily scale at 1000 utilons.

Then, let’s say it’s Monday. You have five days to work on it, and each day of work costs you 100 utilons. If you work all five days, the project will get done.

If you skip a day of work, you will need to work so much harder that one of the following days your cost of work will be 300 utilons instead of 100. If you skip two days, you’ll have to pay 300 utilons twice. And if you skip three or more days, the project will not be finished and it will all be for naught.

If you don’t discount time at all (which, over a week, is probably close to optimal), the answer is obvious: Work all five days. Pay 100+100+100+100+100 = 500, receive 1000. Net benefit: 500.

But even if you discount time, as long as you do so consistently, you still wouldn’t procrastinate.

Let’s say your discount rate is extremely high (maybe you’re dying or something), so that each day is only worth 80% as much as the previous. Benefit that’s worth 1 on Monday is worth 0.8 if it comes on Tuesday, 0.64 if it comes on Wednesday, 0.512 if it comes on Thursday, 0.4096 if it comes on Friday,a and 0.32768 if it comes on Saturday. Then instead of paying 100+100+100+100+100 to get 1000, you’re paying 100+80+64+51+41=336 to get 328. It’s not worth doing the project; you should just enjoy your last few days on Earth. That’s not procrastinating; that’s rationally choosing not to undertake a project that isn’t worthwhile under your circumstances.

Procrastinating would look more like this: You skip the first two days, then work 100 the third day, then work 300 each of the last two days, finishing the project. If you didn’t discount at all, you would pay 100+300+300=700 to get 1000, so your net benefit has been reduced to 300.

There’s no consistent discount rate that would make this rational. If it was worth giving up 200 on Thursday and Friday to get 100 on Monday and Tuesday, you must be discounting at least 26% per day. But if you’re discounting that much, you shouldn’t bother with the project at all.

There is however an inconsistent discounting by which it makes perfect sense. Suppose that instead of consistently discounting some percentage each day, psychologically it feels like this: The value is the inverse of the length of time (that’s what it means to be hyperbolic). So the same amount of benefit on Monday which is worth 1 is only worth 1/2 if it comes on Tuesday, 1/3 if on Wednesday, 1/4 if on Thursday, and 1/5 if on Friday.

So, when thinking about your weekly schedule, you realize that by pushing back Monday's work to Thursday, you can gain 100 today at a cost of only 200/4 = 50, since Thursday carries a divisor of 4 (counting Monday as 1). And by pushing back Tuesday's work to Friday, you can gain 100/2=50 today at a cost of only 200/5=40. So now it makes perfect sense to have fun on Monday and Tuesday, start working on Wednesday, and cram the biggest work into Thursday and Friday. And yes, it still makes sense to do the project, because the benefit of 1000/6 = 166 is more than the 100/3+200/4+200/5 = 123 it will cost to do the work.

But now think about what happens when you come to Wednesday. The work today costs 100. The work on Thursday costs 200/2 = 100. The work on Friday costs 200/3 = 66. The benefit of completing the project will be 1000/4 = 250. So you are paying 100+100+66=266 to get a benefit of only 250. It’s not worth it anymore! You’ve changed your mind. So you don’t work Wednesday.

At that point, it’s too late, so you don’t work Thursday, you don’t work Friday, and the project doesn’t get done. You have procrastinated away the benefits you could have gotten from doing this project. If only you could have done the work on Monday and Tuesday, then on Wednesday it would have been worthwhile to continue: 100/1+100/2+100/3 = 183 is less than the benefit of 250.

What went wrong? The key event was the preference reversal: While on Monday you preferred having fun on Monday and working on Thursday to working on both days, when the time came you changed your mind. Someone with time-consistent discounting would never do that; they would either prefer one or the other, and never change their mind.

One way to think about this is to imagine future versions of yourself as different people, who agree with you on most things, but not on everything. They’re like friends or family; you want the best for them, but you don’t always see eye-to-eye.

Generally we find that our future selves are less rational about choices than we are. To be clear, this doesn’t mean that we’re all declining in rationality over time. Rather, it comes from the fact that future decisions are inherently closer to our future selves than they are to our current selves, and the closer a decision gets the more likely we are to use irrational time discounting.

This is why it’s useful to plan and make commitments. If starting on Monday you committed yourself to working every single day, you’d get the project done on time and everything would work out fine. Better yet, if you committed yourself last week to starting work on Monday, you wouldn’t even feel conflicted; you would be entirely willing to pay a cost of 100/8+100/9+100/10+100/11+100/12=51 to get a benefit of 1000/13=77. So you could set up some sort of scheme where you tell your friends ahead of time that you can’t go out that week, or you turn off access to social media sites (there are apps that will do this for you), or you set up a donation to an “anti-charity” you don’t like that will trigger if you fail to complete the project on time (there are websites to do that for you).

There is even a simpler way: Make a promise to yourself. This one can be tricky to follow through on, but if you can train yourself to do it, it is extraordinarily powerful and doesn’t come with the additional costs that a lot of other commitment devices involve. If you can really make yourself feel as bad about breaking a promise to yourself as you would about breaking a promise to someone else, then you can dramatically increase your own self-control with very little cost. The challenge lies in actually cultivating that sort of attitude, and then in following through with making only promises you can keep and actually keeping them. This, too, can be a delicate balance; it is dangerous to over-commit to promises to yourself and feel too much pain when you fail to meet them.

But given the strong correlations between self-control and long-term success, trying to train yourself to be a little better at it can provide enormous benefits.

If you ever get around to it, that is.

And so begins the trade war Trump promised us.

Mar 18 JDN 2458196

President Trump (a phrase I will never quite feel comfortable saying) has used an obscure loophole in US trade law to impose huge tariffs on steel and aluminum. The loophole is based on the idea that certain goods are vital for national security, and therefore imposing tariffs on them is in some sense the proper role of the Commander in Chief. It’s a pretty flimsy justification in general (if it’s really so important, why can’t Congress do it?), and particularly so in this case: Most of our steel and aluminum comes from Canada, and we are still totally dependent on imports for bauxite to make aluminum. Trump did finally relent and allow NAFTA members to be exempt, so Canada won’t be paying the tariff. The only country that could plausibly be considered an enemy that will be meaningfully affected by the tariffs is (ironically) Russia.

The European Union has threatened to respond with their own comparable tariffs—meaning that a trade war has officially begun. The last time the US started a major trade war was in 1930—which you may recognize as the start of the Great Depression. There’s a meme going around saying that 1928 was the last time the Republican Party controlled the whole US government; that isn’t actually true: Republicans controlled all three branches as recently as 2006. Of course, the late 2000s weren’t a great time for the US economy either, so make of that what you will.

Does this mean we’re headed toward another Great Depression? I don’t think so. Our monetary policy is vastly better now than it was then. But are we headed toward another recession? That seems quite likely. By standard measures, the stock market is overvalued. The unemployment rate is now at 4%. We are basically at the ceiling right now; the only place to go is down.

Of course, maybe we will stay here awhile. We don’t have to go down, necessarily. If Obama were still President and Yellen were still Fed Chair, I might believe that. But the level of corruption, incompetence, and ideological rigidity in Trump’s economic policy is something I’ve not seen in the United States within my lifetime.

Peter Navarro, Trump’s Director of the White House National Trade Council, has described his own role in an incredibly chilling way:

“This is the president’s vision. My function, really, as an economist is to try to provide the underlying analytics that confirm his intuition. And his intuition is always right in these matters. […] The owner, the coach, and the quarterback are all the president. The rest of us are all interchangeable parts.”

Well, there you have it. It’s just as the saying goes: There are liberal professional economists, conservative professional economists, and professional conservative economists. Peter Navarro has officially and proudly declared himself a professional conservative economist. He seems proud to admit that his only function is to rationalize what Trump already believes.

We really shouldn’t be surprised that Trump brought us into a trade war. Frankly, it was one of his campaign promises. When he was announcing the tariffs, he declared, “Trade wars are good, and easy to win.” In fact, trade war is much like real war, in that the only winning move is not to play.

What really worries me about all this isn’t how it will affect the US. Maybe it’ll trigger another recession, sure; but we’ve had lots of those, and we make it through eventually. (Recession might even be good for our carbon emissions, as we’re well above the Wedge.) The US economy is very strong, and can withstand a lot of mistakes. Even on a bad day we’re still the richest country in the world.

What worries me is how it will affect other countries. It’ll start with countries that export steel and aluminum, like India, China, and Brazil. But as tariffs and counter-tariffs proliferate, more and more exports will be brought into the trade war. Trade is one of the most powerful tools we have for fighting global poverty, and we are now pulling the plug.

Of course, hurting China was part of Trump’s goal, so I doubt he’ll feel much remorse if the trade war results in millions of people in China thrown back into poverty. People who voted for him on the grounds that he would keep the dirty foreigners down may well be celebrating such an outcome.

There will be pain. But most of it will be felt elsewhere, not here. “But those were Foreign children and it didn’t really matter.”

Forget the Doughnut. Meet the Wedge.

Mar 11 JDN 2458189

I just finished reading Kate Raworth’s book Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist; Raworth also has a whole website dedicated to the concept of the Doughnut as a way of rethinking economics.

The book is very easy to read, and manages to be open to a wide audience with only basic economics knowledge without feeling patronizing or condescending. Most of the core ideas are fundamentally sound, though Raworth has a way of making it sound like she is being revolutionary even when most mainstream economists already agree with the core ideas.

For example, she makes it sound like it is some sort of dogma among neoclassical economists that GDP growth must continue at the same pace forever. As I discussed in an earlier post, the idea that growth will slow down is not radical in economics—it is basically taken for granted in the standard neoclassical growth models.

Even the core concept of the Doughnut isn’t all that radical. It’s based on the recognition that economic development is necessary to end poverty, but resources are not unlimited. Combine that with two key assumptions: GDP growth requires growth in energy consumption, and growth in energy consumption requires increased carbon emissions. The goal, then, should be to stay within a certain range: We want to be high enough to not have poverty, but low enough to not exceed our carbon budget.

Why a doughnut? That’s… actually a really good question. The concept Raworth presents is a fundamentally one-dimensional object; there’s no reason for it to be doughnut-shaped. She could just as well have drawn it on a single continuum, with poverty at one end, unsustainability at the other end, and a sweet spot in the middle. The doughnut shape adds some visual appeal, but no real information.

But the fundamental assumptions that GDP requires energy and energy requires carbon emissions are simply false—especially the second one. Always keep one thing in mind whenever you’re reading something by environmentalists telling you we need to reduce economic output to save the Earth: Nuclear power does not produce carbon emissions.

This is how the environmentalist movement has shot itself—and the world—in the foot for the last 50 years. They continually refuse to admit that nuclear power is the best hope we have for achieving both economic development and ecological sustainability. They have let their political biases cloud their judgment on what is actually best for humanity’s future.

I will give Raworth some credit for not buying into the pipe dream that we can somehow transition rapidly to an entirely solar and wind-based power grid—renewables only produce 6% of world energy (the most they ever have), while nuclear produces 10%. And nuclear power certainly has its downsides, particularly in its high cost of construction. It may in fact be the case that we need to reduce economic output somewhat, particularly in the very richest countries, and if so, we need to find a way to do that without causing social and political collapse.

The Doughnut is a one-dimensional object glorified by a two-dimensional diagram.

So let me present you with an actual two-dimensional object, which I call the Wedge.

On this graph, the orange dots plot actual GDP per capita (at purchasing power parity) on the X-axis against actual CO2 emissions per capita on the Y-axis. The green horizontal line is a CO2 emission target of 3 tonnes per person per year, based on reports from the Intergovernmental Panel on Climate Change.

[Figure: Wedge_full]

As you can see, most countries are above the green line. That’s bad. We need the whole world below that green line. The countries that are below the line are largely poor countries, with a handful of middle-income countries mixed in.

But it’s the blue diagonal line that really makes this graph significant: it’s what makes this the Wedge. That line uses Switzerland’s level of efficiency to estimate a frontier of what’s possible. Switzerland’s ratio of GDP to CO2 is the best in the world, among countries where the data actually looks reliable. A handful of other countries do better in the data, but for some (Macau) it’s obviously due to poor counting of indirect emissions, and for others (Rwanda, Chad, Burundi) we just don’t have good data at all. I think Switzerland’s efficiency level of $12,000 per tonne of CO2 is about as good as can reasonably be expected for most countries over the long run.

Our goal should be to move as far right on the graph as we can (toward higher levels of economic development), but always staying inside this Wedge: Above the green line, our CO2 emissions are too high. Below the blue line may not be technologically feasible (though of course it’s worth a try). We want to aim for the point of the Wedge, where GDP is as high as possible but emissions are still below safe targets.
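The two boundaries can be written down directly. Here is a minimal sketch using the rounded figures above (a 3-tonne target for the green line, $12,000 of GDP per tonne for the blue line); the function and its output labels are my own, not from the original graph:

```python
CO2_TARGET = 3.0        # green line: tonnes CO2 per person per year
GDP_PER_TONNE = 12_000  # blue line: Switzerland's efficiency frontier, $ of GDP per tonne

def locate(gdp_per_capita, co2_per_capita):
    # Position of a country relative to the Wedge.
    # The blue line gives the lowest plausible emissions at a given GDP.
    frontier = gdp_per_capita / GDP_PER_TONNE
    if co2_per_capita > CO2_TARGET:
        return "above the green line: emissions too high"
    if co2_per_capita < frontier:
        return "below the blue line: likely not feasible (suspect data)"
    return "inside the Wedge"

# Uruguay's figures from the text: $22,400 and 2.2 tonnes per person per year
print(locate(22_400, 2.2))   # inside the Wedge

# Mongolia's figures: $12,500 and 14.5 tonnes per person per year
print(locate(12_500, 14.5))  # above the green line: emissions too high
```

The point of the Wedge is simply where the two lines cross, i.e. where the frontier emissions equal the target.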

Zooming in on the graph gives a better view of the Wedge.

[Figure: Wedge_zoomed]

The point of the Wedge is about $38,000 per person per year. This is not as rich as the US, but it’s definitely within the range of highly-developed countries. This is about the same standard of living as Italy, Spain, or South Korea. In fact, all three of these countries exceed their targets; the closest I was able to find to a country actually hitting the point of the Wedge was Latvia, at $27,300 and 3.5 tonnes per person per year. Uruguay also does quite well at $22,400 and 2.2 tonnes per person per year.

Some countries are within the Wedge: a few, like Uruguay, are quite close to the point, and many, like Colombia and Bangladesh, sit below and to the left. For these countries, a “stay the course” policy is the way to go: If they keep up what they are doing, they can continue to experience economic growth without exceeding their emission targets.

But the most important thing about the graph is not actually the Wedge itself: It’s all the countries outside the Wedge, and where they are outside the Wedge.

There are some countries, like Sweden, France, and Switzerland, that are close to the blue line but still outside the Wedge because they are too far to the right. These are countries for whom “degrowth” policies might actually make sense: They are being as efficient in their use of resources as may be technologically feasible, but are simply producing too much output. They need to find a way to scale back their economies without causing social and political collapse. My suggestion, for what it’s worth, is progressive taxation. In addition to carbon taxes (which are a no-brainer), make income taxes so high that they start actually reducing GDP, and do so without fear, since that’s part of the point; then redistribute all the income as evenly as possible so that lower total income comes with much lower inequality and the eradication of poverty. Most of the country will then be no worse off than they were, so social and political unrest seems unlikely. Call it “socialism” if you like, but I’m not suggesting collectivization of industry or the uprising of the proletariat; I just want everyone to adopt the income tax rates the US had in the 1950s.

But most countries are not even close to the blue line; they are well above it. In all these countries, the goal should not be to reduce economic output, but to increase the carbon efficiency of that output. Increased efficiency has no downside (other than the transition cost of implementing it): It makes you better off ecologically without making you worse off economically. Bahrain has about the same GDP per capita as Sweden but produces over five times the per-capita carbon emissions. Simply by copying Sweden they could reduce their emissions by almost 19 tonnes per person per year, which is more than the entire per-capita emissions of the US (and we’re hardly models of efficiency)—at absolutely no cost in GDP.
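The Bahrain arithmetic works like this; the specific tonnage figures below are assumed for illustration (chosen to be consistent with the “over five times” and “almost 19 tonnes” claims above, not exact data):

```python
# Illustrative per-capita emissions, tonnes CO2 per person per year (assumed):
sweden_co2 = 4.5
bahrain_co2 = 23.4  # over five times Sweden's, per the claim in the text

# Bahrain and Sweden have roughly equal GDP per capita, so if Bahrain
# matched Sweden's carbon intensity, GDP would be unchanged and the
# per-capita emissions saving would be the difference:
saving = bahrain_co2 - sweden_co2
print(f"{saving:.1f} tonnes per person per year")  # 18.9: almost 19
```

Because the two countries' GDP per capita is about the same, the whole saving comes from efficiency, not from producing less.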

Then there are countries like Mongolia, which produces only $12,500 in GDP but 14.5 tonnes of CO2 per person per year. Mongolia is far above and to the left of the point of the Wedge, meaning that they could both increase their GDP and decrease their emissions by adopting the model of more efficient countries. Telling these countries that “degrowth” is the answer is beyond perverse—cut Mongolia’s GDP by 2/3 and you would throw them into poverty without even bringing carbon emissions down to target.

We don’t need to overthrow capitalism or even give up on GDP growth in general. We need to focus on carbon, carbon, carbon: All economic policy from this point forward should be made with CO2 reduction in mind. If that means reducing GDP, we may have to accept that; but often it won’t. Switching to nuclear power and public transit would dramatically reduce emissions but need have no harmful effect on economic output—in fact, the large investment required could pull a country out of recession.

Don’t worry about the Doughnut. Aim for the point of the Wedge.