Men and violence

Apr 4 JDN 2459309

Content warning: In this post, I’m going to be talking about violence, including sexual violence. April is Sexual Assault Awareness and Prevention Month. I won’t go into any explicit detail, but I understand that discussion of such topics can still be very upsetting for many people.

After short posts for the past two weeks, get ready for a fairly long post. This is a difficult and complicated topic, and I want to make sure that I state things very clearly and with all necessary nuance.

While the overall level of violence between human societies varies tremendously, one thing is astonishingly consistent: Violence is usually committed by men.

In fact, violence is usually suffered by men as well—with the quite glaring exception of sexual violence. This is why I am particularly offended by claims like “All men benefit from male violence”; no, men who were murdered by other men did not benefit from male violence, and it is frankly appalling to say otherwise. Most men would be better off if male violence were somehow eliminated from the world. (Most women would be much better off as well, of course.)

I therefore consider it a matter of both moral obligation and self-interest to endeavor to reduce the amount of male violence in the world, which is almost coextensive with reducing the amount of violence in general.

On the other hand, ought implies can, and despite significant efforts I have made to seek out recommendations for concrete actions I could be taking… I haven’t been able to find very many.

The good news is that we appear to be doing something right—overall rates of violent crime have declined by nearly half since 1990. The decline in rape has been slower, only about 25% since 1990, though this is a bit misleading since the legal definition of rape has been expanded during that interval. The causes of this decline in violence are unclear: Some of the most important factors seem to be changes in policing, economic growth, and reductions in lead pollution. For whatever reason, Millennials just don’t seem to commit crimes at the same rates that Gen-X-ers or Boomers did. We are also substantially more feminist, so maybe that’s an important factor too; the truth is, we really don’t know.

But all of this still leaves me asking: What should I be doing?

When I searched for an answer to this question, a significant fraction of the answers I got from various feminist sources were some variation on “ruminate on your own complicity in male violence”. I tried it; it was painful, difficult—and basically useless. I think this is particularly bad advice for someone like me who has a history of depression.

When you ruminate on your own life, it’s easy to find mistakes; but how important were those mistakes? How harmful were they? I can’t say that I’ve never done anything in my whole life that hurt anyone emotionally (can anyone?), but I can only think of a few times I’ve harmed someone physically (mostly by accident, once in self-defense). I’ve definitely never raped or murdered anyone, and as far as I can tell I’ve never done anything that would have meaningfully contributed to anyone getting raped or murdered. If you were to somehow replace every other man in the world with a copy of me, maybe that wouldn’t immediately bring about a utopian paradise—but I’m pretty sure that rates of violence would be a lot lower. (And in this world ruled by my clones, we’d have more progressive taxes! Less military spending! A basic income! A global democratic federation! Greater investment in space travel! Hey, this sounds pretty good, actually… though inbreeding would be a definite concern.) So, okay, I’m no angel; but I don’t think it’s really fair to say that I’m complicit in something that would radically decrease if everyone behaved as I do.

The really interesting thing is, I think this is true of most men. A typical man commits less than the average amount of violence—because there is great skew in the distribution, with most men committing little or no violence and a small number of men committing lots of violence. Truly staggering amounts of violence are committed by those at the very top of the distribution—that would be mass murderers like Hitler and Stalin. It sounds strange, but if all men in the world were replaced by a typical man, the world would surely be better off. The loss of the very best men would be more than compensated by the removal of the very worst. In fact, since most men are not rapists or murderers, replacing every man in the world with the median man would automatically bring the rates of rape and murder to zero. I know that feminists don’t like to hear #NotAllMen; but it’s not even most men. Maybe the reason that the “not all men” argument keeps coming up is… it’s actually kind of true? Maybe it’s not so unreasonable for men to resent the implication that we are complicit in acts we abhor that we have never done and would never do? Maybe this whole concept that an entire sex of people, literally almost half the human race, can share responsibility for violent crimes—is wrong?

I know that most women face a nearly constant bombardment of sexual harassment, and feel pressured to remain constantly vigilant in order to protect themselves against being raped. I know that victims of sexual violence are often blamed for their victimization (though this happens in a lot of crimes, not just sex crimes). I know that #YesAllWomen is true—basically all women have been in some way harmed or threatened by sexual violence. But the fact remains that most men are already not committing sexual violence. Many people seem to confuse the fact that most women are harmed by men with the claim that most men harm women; these are not at all equivalent. As long as one man can harm many women, there don’t need to be very many harmful men for all women to be affected.

Plausible guesses would be that about 20-25% of women suffer sexual assault, committed by about 4% or 5% of men, each of whom commits an average of 4 to 6 assaults—and some of whom commit far more. If these figures are right, then 95% of men are not guilty of sexual assault. The highest plausible estimate I’ve seen is from a study which found that 11% of men had committed rape. Since it’s only one study and its sample size was pretty small, I’m actually inclined to think that this is an overestimate which got excessive attention because it was so shocking. Larger studies rarely find a number above 5%.
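As a rough consistency check on how those figures fit together, here is a back-of-envelope sketch; the specific inputs are illustrative midpoints of the ranges quoted above, and assuming each assault has a distinct victim gives only an upper bound on prevalence.

```python
# Back-of-envelope check on the prevalence figures quoted above.
# Inputs are illustrative midpoints of the quoted ranges, not exact data.
women = men = 1_000_000                # assume equal-sized populations
perpetrator_share = 0.045              # roughly 4-5% of men
assaults_per_perpetrator = 5           # roughly 4-6 assaults each, on average

total_assaults = men * perpetrator_share * assaults_per_perpetrator

# Upper bound on victims: every assault has a distinct victim.
share_of_women_victimized = total_assaults / women
print(f"Total assaults: {total_assaults:,.0f}")                                      # ~225,000
print(f"Share of women victimized (upper bound): {share_of_women_victimized:.0%}")   # ~22%
# Consistent with the 20-25% prevalence estimate, even though ~95% of men
# commit no sexual assault at all.
```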

But even if we suppose that it’s really 11%, that leaves 89%; in what sense is 89% not “most men”? I saw some feminist sites responding to this result by saying things like “We can’t imprison 11% of men!” but, uh, we almost do already. About 9% of American men will go to prison in their lifetimes. This is probably higher than it should be—it’s definitely higher than in any other country—but if those convictions were all for rape, I’d honestly have trouble seeing the problem. (In fact only about 10% of US prisoners are incarcerated for rape.) If the US were the incarceration capital of the world simply because we investigated and prosecuted rape more reliably, that would be a point of national pride, not shame. In fact, the American conservatives who don’t see the problem with our high incarceration rate probably do think that we’re mostly incarcerating people for things like rape and murder—when in fact large portions of our inmates are incarcerated for drug possession, “public order” crimes, or pretrial detention.

Even if that 11% figure is right, “If you know 10 men, one is probably a rapist” is wrong. The people you know are not a random sample. If you don’t know any men who have been to prison, then you likely don’t know any men who are rapists. 37% of prosecuted rapists have prior criminal convictions, and 60% will be convicted of another crime within 5 years. (Of course, most rapes are never even reported; but where would we get statistics on those rapists?) Rapists are not typical men. They may seem like typical men—it may be hard to tell the difference at a glance, or even after knowing someone for a long time. But the fact that narcissists and psychopaths may hide among us does not mean that all of us are complicit in the crimes of narcissists and psychopaths. If you can’t tell who is a psychopath, you may have no choice but to be wary; but telling every man to search his heart is worthless, because the only ones who will listen are the ones who aren’t psychopaths.

That, I think, is the key disagreement here: Where the standard feminist line is “any man could be a rapist, and every man should search his heart”, I believe the truth is much more like, “monsters hide among us, and we should do everything in our power to stop them”. The monsters may look like us, they may often act like us—but they are not us. Maybe there are some men who would commit rapes but can be persuaded out of it—but this is not at all the typical case. Most rapes are committed by hardened, violent criminals and all we can really do is lock them up. (And for the love of all that is good in the world, test all the rape kits!)

It may be that sexual harassment of various degrees is more spread throughout the male population; perhaps the median man indeed commits some harassment at some point in his life. But even then, I think it’s pretty clear that the really awful kinds of harassment are largely committed by a small fraction of serial offenders. Indeed, there is a strong correlation between propensity toward sexual harassment and various measures of narcissism and psychopathy. So, if most men look closely enough, maybe they can think of a few things that they do occasionally that might make women uncomfortable; okay, stop doing those things. (Hint: Do not send unsolicited dick pics. Ever. Just don’t. Anyone who wants to see your genitals will ask first.) But it isn’t going to make a huge difference in anyone’s life. As long as the serial offenders continue, women will still feel utterly bombarded.

There are other kinds of sexual violations that more men commit—being too aggressive, or persisting too much after the first rejection, or sending unsolicited sexual messages or images. I’ve had people—mostly, but not only, men—do things like that to me; but it would be obviously unfair to both these people and actual rape victims to say I’d ever been raped. I’ve been groped a few times, but it seems like quite a stretch to call it “sexual assault”. I’ve had experiences that were uncomfortable, awkward, frustrating, annoying, occasionally creepy—but never traumatic. Never violence. Teaching men (and women! There is evidence that women are not much less likely than men to commit this sort of non-violent sexual violation) not to do these things is worthwhile and valuable in itself—but it’s not going to do much to prevent rape or murder.

Thus, whatever responsibility men have in reducing sexual violence, it isn’t simply to stop; you can’t stop doing what you already aren’t doing.

After pushing through all that noise, at last I found a feminist site making a more concrete suggestion: They recommended that I read a book by Jackson Katz on the subject entitled The Macho Paradox: Why Some Men Hurt Women and How All Men Can Help.

First of all, I must say I can’t remember any other time I’ve read a book that was so poorly titled. The only mention of the phrase “macho paradox” is in a brief preface added to the most recent edition to explain what the term is supposed to mean; it occurs nowhere else in the book. And in all its nearly 300 pages, the book has almost nothing that seriously addresses either the motivations underlying sexual violence or concrete actions that most men could take in order to reduce it.

As far as concrete actions (“How all men can help”), the clearest, most consistent advice the book seems to offer that would apply to most men is “stop consuming pornography” (something like 90% of men and 60% of women regularly consume porn), when in fact there is a strong negative correlation between consumption of pornography and real-world sexual violence. (Perhaps Millennials are less likely to commit rape and murder because we are so into porn and video games!) This advice is literally worse than nothing.

The sex industry exists on a continuum from the adult-only but otherwise innocuous (smutty drawings and erotic novels), through the legal but often problematic (mainstream porn, stripping), to the usually illegal but defensible (consensual sex work), all the way to the utterly horrific and appalling (the sexual exploitation of children). I am well aware that there are many deep problems with the mainstream porn industry, but I confess I’ve never quite seen how these problems are specific to porn rather than endemic to media or even capitalism more generally. Particularly with regard to the above-board sex industry in places like Nevada or the Netherlands, it’s not obvious to me that a prostitute is more exploited than a coal miner, a sweatshop worker, or a sharecropper—indeed, given the choice between those four careers, I’d without hesitation choose to be a prostitute in Amsterdam. Many sex workers resent the paternalistic insistence by anti-porn feminists that their work is inherently degrading and exploitative. Overall, sex workers report job satisfaction not statistically different from the average for all jobs. A multitude of misleading statistics are often reported about the sex industry, making matters seem far worse than they are.

Katz (all-too) vividly describes the depiction of various violent or degrading sex acts in mainstream porn, but he seems unwilling to admit that any other forms of porn do or even could exist—and worse, like far too many anti-porn feminists, he seems to willfully elide vital distinctions, effectively equating fantasy depiction with genuine violence and consensual kinks with sexual abuse. I like to watch action movies and play FPS video games; does that mean I believe it’s okay to shoot people with machine guns? I know the sophisticated claim is that it somehow “desensitizes” us (whatever that means), but there’s not much evidence of that either. Given that porn and video games are negatively correlated with actual violence, it may in fact be that depicting the fantasy provides an outlet for such urges and helps prevent them from becoming reality. Or, it may simply be that keeping a bunch of young men at home in front of their computers keeps them from going out and getting into trouble. (Then again, homicides actually increased during the COVID pandemic—though most other forms of crime decreased.) But whatever the cause, the evidence is clear that porn and video games don’t increase actual violence—they decrease it.

At the very end of the book, Katz hints at a few other things men might be able to do, or at least certain groups of men: Challenge sexism in sports, the military, and similar male-dominated spaces (you know, if you have clout in such spaces, which I really don’t—I’m an effete liberal intellectual, a paradigmatic “soy boy”; do you think football players or soldiers are likely to listen to me?); educate boys with more positive concepts of masculinity (if you are in a position to do so, e.g. as a teacher or parent); or, the very best advice in the entire book, worth more than the rest of the book combined: Donate to charities that support survivors of sexual violence. Katz doesn’t give any specific recommendations, but here are a few for you: RAINN, NAESV and NSVRC.

Honestly, I’m more impressed by Upworthy’s bulleted list of things men can do, though they’re mostly things that conscientious men do anyway, and even if 90% of men did them, it probably wouldn’t greatly reduce actual violence.

As far as motivations (“Why some men hurt women”), the book does at least manage to avoid the mindless slogan “rape is about power, not sex” (there is considerable evidence that this slogan is false or at least greatly overstated). Still, Katz insists upon collective responsibility, attributing what are in fact typically individual crimes, committed mainly by psychopaths, motivated primarily by anger or sexual desire, to some kind of institutionalized system of patriarchal control that somehow permeates all of society. The fact that violence is ubiquitous does not imply that it is coordinated. It’s very much the same cognitive error as “murderism”.

I agree that sexism exists, is harmful, and may contribute to the prevalence of rape. I agree that there are many widespread misconceptions about rape. I also agree that reducing sexism and toxic masculinity are worthwhile endeavors in themselves, with numerous benefits for both women and men. But I’m just not convinced that reducing sexism or toxic masculinity would do very much to reduce the rates of rape or other forms of violence. In fact, despite widely reported success of campaigns like the “Don’t Be That Guy” campaign, the best empirical research on the subject suggests that such campaigns actually tend to do more harm than good. The few programs that seem to work are those that focus on bystander interventions—getting men who are not rapists to recognize rapists and stop them. Basically nothing has ever been shown to convince actual rapists; all we can do is deny them opportunities—and while bystander intervention can do that, the most reliable method is probably incarceration. Trying to change their sexist attitudes may be worse than useless.

Indeed, I am increasingly convinced that much—not all, but much—of what is called “sexism” is actually toxic expressions of heterosexuality. Why do most creepy male bosses only ever hit on their female secretaries? Well, maybe because they’re straight? This is not hard to explain. It’s a fair question why there are so many creepy male bosses, but one need not posit any particular misogyny to explain why their targets would usually be women. I guess it’s a bit hard to disentangle; if an incel hates women because he perceives them as universally refusing to sleep with him, is that sexism? What if he’s a gay incel (yes they exist) and this drives him to hate men instead?

In fact, I happen to know of a particular gay boss who has quite a few rumors surrounding him regarding his sexual harassment of male employees. Or you could look at Kevin Spacey, who (allegedly) sexually abused teenage boys. You could tell a complicated story about how this is some kind of projection of misogynistic attitudes onto other men (perhaps for being too “femme” or something)—or you could tell a really simple story about how this man is only sexually abusive toward other men because that’s the gender of people he’s sexually attracted to. Occam’s Razor strongly favors the latter.

Indeed, what are we to make of the occasional sexual harasser who targets men and women equally? On the theory that abuse is caused by patriarchy, that seems pretty hard to explain. On the theory that abusive people sometimes happen to be bisexual, it’s not much of a mystery. (Though I would like to take a moment to debunk the stereotype of the “depraved bisexual”: Bisexuals are no more likely to commit sexual violence, but are far more likely to suffer it—more likely than either straight or gay people, independently of gender. Trans people face even higher risk; the acronym LGBT is in increasing order of danger of violence.)

Does this excuse such behavior? Absolutely not. Sexual harassment and sexual assault are definitely wrong, definitely harmful, and rightfully illegal. But when trying to explain why the victims are overwhelmingly female, the fact that roughly 90% of people are heterosexual is surely relevant. The key explanandum here is not why the victims are usually female, but rather why the perpetrators are usually male.

That, indeed, requires explanation; but such an explanation is really not so hard to come by. Why is it that, in nearly every human society, for nearly every form of violence, the vast majority of that violence is committed by men? It sure looks genetic to me.

Indeed, in any other context aside from gender or race, we would almost certainly reject any explanation other than genetics for such a consistent pattern. Why is it that, in nearly every human society, about 10% of people are LGBT? Probably genetics. Why is it that, in nearly every human society, about 10% of people are left-handed? Genetics. Why, in nearly every human society, do smiles indicate happiness, children fear loud noises, and adults fear snakes? Genetics. Why, in nearly every human society, are men on average much taller and stronger than women? Genetics. Why, in nearly every human society, is about 90% of violence, including sexual violence, committed by men? Clearly, it’s patriarchy.

A massive body of scientific evidence from multiple sources shows a clear causal relationship between increased testosterone and increased aggression. The correlation is moderate, only about 0.38—but it’s definitely real. And men have a lot more testosterone than women: While testosterone varies a frankly astonishing amount between men and over time—including up to a 2-fold difference even over the same day—a typical adult man has about 250 to 950 ng/dL of blood testosterone, while a typical adult woman has only 8 to 60 ng/dL. (An adolescent boy can have as much as 1200 ng/dL!) This is a difference ranging from a minimum of 4-fold to a maximum of over 100-fold, with a typical value of about 20-fold. It would be astonishing if that didn’t have some effect on behavior.

This is of course far from a complete explanation: With a correlation of 0.38, we’ve only explained about 14% of the variance, so what’s the other 86%? Well, first of all, testosterone isn’t the only biological difference between men and women. It’s difficult to identify any particular genes with strong effects on aggression—but the same is true of height, and nobody disputes that the height difference between men and women is genetic.
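To make the arithmetic explicit, here is a minimal sketch using only the figures quoted above; the midpoint comparison is my own simplification, and it lands near (slightly below) the roughly 20-fold typical difference cited.

```python
# Fold differences implied by the quoted testosterone ranges (ng/dL),
# plus the share of variance associated with a correlation of r = 0.38.
men_lo, men_hi = 250, 950
women_lo, women_hi = 8, 60

min_ratio = men_lo / women_hi                                        # ~4-fold (least extreme)
max_ratio = men_hi / women_lo                                        # ~119-fold (most extreme)
mid_ratio = ((men_lo + men_hi) / 2) / ((women_lo + women_hi) / 2)    # ~18-fold (midpoints)

r = 0.38
explained = r ** 2                                                   # coefficient of determination

print(f"Fold difference: {min_ratio:.0f}x to {max_ratio:.0f}x, ~{mid_ratio:.0f}x at the midpoints")
print(f"Variance explained: {explained:.0%}; unexplained: {1 - explained:.0%}")   # ~14% vs ~86%
```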

Clearly societal factors do matter a great deal, or we couldn’t possibly explain why homicide rates vary between countries from less than 3 per million per year in Japan to nearly 400 per million per year in Honduras: a full two orders of magnitude! But gender inequality does not appear to strongly predict homicide rates. Japan is not a very feminist place (in fact, surveys suggest that, after Spain, Japan is the second-worst highly-developed country for women). Sweden is quite feminist, and their homicide rate is relatively low; but it’s still 4 times as high as Japan’s. The US doesn’t strike me as much more sexist than Canada (admittedly subjective—surveys do suggest at least some difference, and in the expected direction), and yet our homicide rate is nearly 3 times as high. Also, I think it’s worth noting that while overall homicide rates vary enormously across societies, the fact that roughly 90% of homicides are committed by men does not. Through some combination of culture and policy, societies can greatly reduce the overall level of violence—but no society has yet managed to change the fact that men are more violent than women.

I would like to do a similar analysis of sexual assault rates across countries, but unfortunately I really can’t, because different countries have such different laws and different rates of reporting that the figures really aren’t comparable. Sweden infamously has a very high rate of reported sex crimes, but this is largely because they have very broad definitions of sex crimes and very high rates of reporting. The best I can really say for now is there is no obvious pattern of more feminist countries having lower rates of sex crimes. Maybe there really is such a pattern; but the data isn’t clear.

Yet if biology contributes anything to the causation of violence—and at this point I think the evidence for that is utterly overwhelming—then mainstream feminism has done the world a grave disservice by insisting upon only social and cultural causes. Maybe it’s the case that our best options for intervention are social or cultural, but that doesn’t mean we can simply ignore biology. And then again, maybe it’s not the case at all: A neurological treatment to cure psychopathy could cut almost all forms of violence in half.

I want to be completely clear that a biological cause is not a justification or an excuse: literally billions of men manage to have high testosterone levels, and experience plenty of anger and sexual desire, without ever raping or murdering anyone. The fact that men appear to be innately predisposed toward violence does not excuse actual violence, and the fact that rape is typically motivated at least in part by sexual desire is no excuse for committing rape.

In fact, I’m quite worried about the opposite: that the notion that sexual violence is always motivated by a desire to oppress and subjugate women will be used to excuse rape, because men who know that their motivation was not oppression will therefore be convinced that what they did wasn’t rape. If rape is always motivated by a desire to oppress women, and his desire was only to get laid, then clearly, what he did can’t be rape, right? The logic here actually makes sense. If we are to reject this argument—as we must—then we must reject the first premise, that all rape is motivated by a desire to oppress and subjugate women. I’m not saying that’s never a motivation—I’m simply saying we can’t assume it is always.

The truth is, I don’t know how to end violence, and sexual violence may be the most difficult form of violence to eliminate. I’m not even sure what most of us can do to make any difference at all. For now, the best thing to do is probably to donate money to organizations like RAINN, NAESV and NSVRC. Even $10 to one of these organizations will do more to help survivors of sexual violence than hours of ruminating on your own complicity—and cost you a lot less.

Good news for a change

Mar 28 JDN 2459302

When President Biden made his promise to deliver 100 million vaccine doses to Americans within his first 100 days, many were skeptical. Perhaps we had grown accustomed to the anti-scientific attitudes and utter incompetence of Trump’s administration, and no longer believed that the US federal government could do anything right.

The skeptics were wrong. For the promise has not only been kept, it has been greatly exceeded. As of this writing, Biden has been President for 60 days and we have already administered 121 million vaccine doses. If we continue at the current rate, it is likely that we will have administered over 200 million vaccine doses and fully vaccinated over 100 million Americans by Biden’s promised 100-day timeline—twice as fast as what was originally promised. Biden has made another bold promise: Every adult in the United States vaccinated by the end of May. I admit I’m not confident it can be done—but I wasn’t confident we’d hit 100 million by now either.
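A quick linear extrapolation from those figures; the constant-pace assumption is mine, and since the rollout was still accelerating it is, if anything, conservative.

```python
# Linear extrapolation of the vaccination pace described above.
doses_so_far = 121_000_000     # doses administered in Biden's first 60 days
days_so_far = 60
target_days = 100

daily_rate = doses_so_far / days_so_far          # ~2.0 million doses per day
projected = daily_rate * target_days             # ~202 million doses by day 100

print(f"Current pace: {daily_rate / 1e6:.1f} million doses/day")
print(f"Projected by day {target_days}: {projected / 1e6:.0f} million doses")
# Roughly double the original 100-million promise, if the pace merely holds.
```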

In fact, the US now has one of the best rates of COVID vaccination in the world, with the proportion of our population vaccinated far above the world average and below only Israel, UAE, Chile, the UK, and Bahrain (plus some tiny countries like Monaco). In fact, we actually have the largest absolute number of vaccinated individuals in the world, surpassing even China and India.

It turns out that the now-infamous map saying that the US and UK were among the countries best-prepared for a pandemic wasn’t so wrong after all; it’s just that having such awful administration for four years made our otherwise excellent preparedness fail. Put someone good in charge, and yes, indeed, it turns out that the US can deal with pandemics quite well.

The overall rate of new COVID cases in the US began to plummet right around the time the vaccination program gained steam, and has plateaued around 50,000 per day for the past few weeks. This is still much too high, but it is a vast improvement over the 200,000 cases per day we had in early January. Our death rate due to COVID now hovers around 1,500 people per day—that’s still a 9/11 every two days. But this is half what our death rate was at its worst. And since our baseline death rate is 7,500 deaths per day, 1,800 of them by heart disease, this now means that COVID is no longer the leading cause of death in the United States; heart disease has once again reclaimed its throne. Of course, people dying from heart disease is still a bad thing; but it’s at least a sign of returning to normalcy.

Worldwide, the pandemic is slowing down, but still by no means defeated, with over 400,000 new cases and 7,500 deaths every day. The US rate of 17 new cases per 100,000 people per day is about 3 times the world average, but comparable to Germany (17) and Norway (18), and nowhere near as bad as Chile (30), Brazil (35), France (37), or Sweden (45), let alone the very hardest-hit places like Serbia (71), Hungary (78), Jordan (83), Czechia (90), and Estonia (110). (That big gap between Norway and Sweden? It’s because Sweden resisted using lockdowns.) And there is cause for optimism even in these places, as vaccination rates already exceed total COVID cases.

I can see a few patterns in the rate of vaccination by state: very isolated states have managed to vaccinate their population fastest—Hawaii and Alaska have done very well, and even most of the territories have done quite well (though notably not Puerto Rico). The south has done poorly (for obvious reasons), but not as poorly as I might have feared; even Texas and Mississippi have given at least one dose to 21% of their population. New England has been prioritizing getting as many people with at least one dose as possible, rather than trying to fully vaccinate each person; I think this is the right strategy.

We must continue to stay home when we can and wear masks when we go out. This will definitely continue for at least a few more months, and the vaccine rollout may not even be finished in many countries by the end of the year. In the worst-case scenario, COVID may become an endemic virus that we can’t fully eradicate and we’ll have to keep getting vaccinated every year like we do for influenza (the good news there is that it likely wouldn’t be much more dangerous than influenza at that point—though another influenza is nothing to, er, sneeze at).

Yet there is hope at last. Things are finally getting better.

What if everyone owned their own home?

Mar 14 JDN 2459288

In last week’s post I suggested that if we are to use the term “gentrification”, it should specifically apply to the practice of buying homes for the purpose of renting them out.

But don’t people need to be able to rent homes? Surely we couldn’t have a system where everyone always owned their own home?

Or could we?

The usual argument for why renting is necessary is that people don’t want to commit to living in one spot for 15 or 30 years, the length of a mortgage. And this is quite reasonable; very few careers today offer the kind of stability that lets you commit in advance to 15 or more years of working in the same place. (Tenured professors are one of the few exceptions, and I dare say this has given academic economists some severe blind spots regarding the costs and risks involved in changing jobs.)

But how much does renting really help with this? One does not rent a home for a few days or even a few weeks at a time. If you are staying somewhere for an interval that short, you generally room with a friend or pay for a hotel. (Or get an AirBNB, which is sort of intermediate between the two.)

One only rents housing for months at a time—in fact, most leases are 12-month leases. But since the average time to sell a house is 60-90 days, in what sense is renting actually less of a commitment than buying? It feels like less of a commitment to most people—but I’m not sure it really is less of a commitment.

There is a certainty that comes with renting—you know that once your lease is up you’re free to leave, whereas selling your house will on average take two or three months, but could very well be faster or slower than that.

Another potential advantage of renting is that you have a landlord who is responsible for maintaining the property. But this advantage is greatly overstated: First of all, if they don’t do it (and many surely don’t), you actually have very little recourse in practice. Moreover, if you own your own home, you don’t actually have to do all the work yourself; you could pay carpenters and plumbers and electricians to do it for you—which is all that most landlords were going to do anyway.

All of the “additional costs” of owning over renting such as maintenance and property taxes are going to be factored into your rent in the first place. This is a good argument for recognizing that a $1000 mortgage payment is not equivalent to a $1000 rent payment—the rent payment is all-inclusive in a way the mortgage is not. But it isn’t a good argument for renting over buying in general.

Being foreclosed on a mortgage is a terrible experience—but surely no worse than being evicted from a rental. If anything, foreclosure is probably not as bad, because you can essentially only be foreclosed for nonpayment, since the bank only owns the loan; landlords can and do evict people for all sorts of reasons, because they own the home. In particular, you can’t be foreclosed for annoying your neighbors or damaging the property. If you own your home, you can cut a hole in a wall any time you like. (Not saying you should necessarily—just that you can, and nobody can take your home away for doing so.)

I think the primary reason that people rent instead of buying is the cost of a down payment. For some reason, we have decided as a society that you should be expected to pay 10%-20% of the cost of a home up front, or else you never deserve to earn any equity in your home whatsoever. This is one of many ways that being rich makes it easier to get richer—but it is probably the most important one holding back most of the middle class of the First World.

And make no mistake, that’s what this is: It’s a social norm. There is no deep economic reason why a down payment needs to be anything in particular—or even why down payments in general are necessary.

There is some evidence that higher down payments are associated with less risk of default, but it’s not as strong as many people seem to think. The big HUD study on the subject found that one percentage point of down payment reduces default risk by about as much as 5 points of credit rating: So you should prefer to offer a mortgage to someone with an 800 rating and no down payment over someone with a 650 rating and a 20% down payment.
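As a small illustration of that rule of thumb: the 5-points-per-percentage-point conversion is the one described above, and the two borrower profiles are hypothetical.

```python
# Compare two hypothetical borrowers using the rule of thumb quoted above:
# 1 percentage point of down payment offsets default risk about as much as
# 5 points of credit score.
POINTS_PER_PCT_DOWN = 5

def equivalent_score(credit_score: int, down_payment_pct: float) -> float:
    """Credit score plus the credit-score-equivalent of the down payment (higher = safer)."""
    return credit_score + POINTS_PER_PCT_DOWN * down_payment_pct

print(equivalent_score(credit_score=800, down_payment_pct=0))    # 800.0
print(equivalent_score(credit_score=650, down_payment_pct=20))   # 750.0
# The 800-score borrower with no down payment still looks like the safer bet.
```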

Also, it’s not as if mortgage lenders are unprotected from default (unlike, say, credit card lenders). Above all, they can foreclose on the house. So why is it so important to reduce the risk of default in the first place? Why do you need extra collateral in the form of a down payment, when you’ve already got an entire house of collateral?

It may be that this is actually a good opportunity for financial innovation, a phrase that should in general strike terror in one’s heart. Most of the time “financial innovation” means “clever ways of disguising fraud”. Previous attempts at “innovating” mortgages have resulted in such monstrosities as “interest-only mortgages” (a literal oxymoron, since by definition a mortgage must have a termination date—a date at which the debt “dies”), “balloon payments”, and “adjustable rate mortgages”—all of which increase risk of default while as far as I can tell accomplishing absolutely nothing. “Subprime” lending created many excuses for irresponsible or outright predatory lending—and then, above all, securitization of mortgages allowed banks to offload the risk they had taken on to third parties who typically had no idea what they were getting.

Volcker was too generous when he said that the last great financial innovation was the ATM; no, that was an innovation in electronics (and we’ve had plenty of those). The last great financial innovation I can think of is the joint-stock corporation in the 1550s. But I think a new type of mortgage contract that minimizes default risk without requiring large up-front payments might actually qualify as a useful form of financial innovation.

It would also be useful to have mortgages that make it easier to move, perhaps by putting payments on hold while the home is up for sale. That way people wouldn’t have to make two mortgage payments at once as they move from one place to another, and the bank will see that money eventually—paid for by the new buyer and their mortgage.

Indeed, ideally I’d like to eliminate foreclosure as well, so that no one has to be kicked out of their homes. How might we do that?

Well, as a pandemic response measure, we should have simply instituted a freeze on all evictions and foreclosures for the duration of the pandemic. Some states did, in fact—but many didn’t, and the federal moratoria on evictions were limited. This is the kind of emergency power that government should have, to protect people from a disaster. So far it appears that the number of evictions was effectively reduced from tens of millions to tens of thousands by these measures—but evicting anyone during a pandemic is a human rights violation.

But as a long-term policy, simply banning evictions wouldn’t work. No one would want to lend out mortgages, knowing that they had no recourse if the debtor stopped paying. Even buyers with good credit might get excluded from the market, since once they actually received the house they’d have very little incentive to actually make their payments on time.

But if there are no down payments and no foreclosures, that means mortgage lenders have no collateral. How are they supposed to avoid defaults?

One option would be wage garnishment. If you have the money and are simply refusing to pay it, the courts could simply require your employer to send the money directly to your creditors. If you have other assets, those could be garnished as well.

And what if you don’t have the money, perhaps because you’re unemployed? Well, then, this isn’t really a problem of incentives at all. It isn’t that you’re choosing not to pay, it’s that you can’t pay. Taking away such people’s homes would protect banks financially, but at a grave human cost.

One option would be to simply say that the banks should have to bear the risk: That’s part of what their huge profits are supposed to be compensating them for, the willingness to take on risks others won’t. The main downside here is the fact that it would probably make it more difficult to get a mortgage and raise the interest rates that you would need to pay once you do.

Another option would be some sort of government program to make up the difference, by offering grants or guaranteed loans to homeowners who can’t afford to pay their mortgages. Since most such instances are likely to be temporary, the government wouldn’t be on the hook forever—just long enough for people to get back on their feet. Here the downside would be the same as any government spending: higher taxes or larger budget deficits. But honestly it probably wouldn’t take all that much; while the total value of all mortgages is very large, only a small portion are in default at any given time. Typically only about 2-4% of all mortgages in the US are in default. Even 4% of the $10 trillion total value of all US mortgages is about $400 billion, which sounds like a lot—but the government wouldn’t owe that full amount, just whatever portion is actually late. I couldn’t easily find figures on that, but I’d be surprised if it’s more than 10% of the total value of these mortgages that would need to be paid by the government. $40 billion is about 1% of the annual federal budget.
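Putting the post's round numbers together; the roughly $4 trillion annual federal budget is my approximation (implied by the 1% claim above), and the 10% share of defaulted balances actually past due is the guess stated in the text.

```python
# Back-of-envelope cost of a government backstop for late mortgage payments.
total_mortgage_value = 10e12   # ~$10 trillion in outstanding US mortgages
default_rate = 0.04            # ~2-4% of mortgages in default; take the high end
late_share = 0.10              # assume ~10% of defaulted balances are actually past due
federal_budget = 4e12          # ~$4 trillion annual federal budget (approximate)

in_default = total_mortgage_value * default_rate    # ~$400 billion
exposure = in_default * late_share                  # ~$40 billion

print(f"Mortgages in default: ${in_default / 1e9:,.0f} billion")
print(f"Government exposure:  ${exposure / 1e9:,.0f} billion")
print(f"Share of federal budget: {exposure / federal_budget:.0%}")   # ~1%
```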

Reforms to our healthcare system would also help tremendously, as medical expenses are a leading cause of foreclosure in the United States (and literally nowhere else—every other country with the medical technology to make medicine this expensive also has a healthcare system that shares the burden). Here there is virtually no downside: Our healthcare system is ludicrously expensive without producing outcomes any better than the much cheaper single-payer systems in Canada, the UK, and France.

All of this sounds difficult and complicated, I suppose. Some may think that it’s not worth it. But I believe that there is a very strong moral argument for universal homeownership and ending eviction: Your home is your own, and no one else’s. No one has a right to take your home away from you.

This is also fundamentally capitalist: It is the private ownership of capital by its users, the acquisition of wealth through ownership of assets. The system of landlords and renters honestly doesn’t seem so much capitalist as it does feudal: We even call them “lords”, for goodness’ sake!

As an added bonus, if everyone owned their own homes, then perhaps we wouldn’t have to worry about “gentrification”, since rising property values would always benefit residents.

In search of reasonable conservatism

Feb 21 JDN 2459267

This is a very tumultuous time for American politics. Donald Trump was impeached not once but twice—giving him the dubious title of having been impeached as many times as the previous 45 US Presidents combined. He was not convicted either time, not because the evidence for his crimes was lacking—it was in fact utterly overwhelming—but because of obvious partisan bias: Republican Senators didn’t want to vote against a Republican President. All 50 of the Democratic Senators, but only 7 of the 50 Republican Senators, voted to convict Trump. The required number of votes to convict was 67.

Some degree of partisan bias is to be expected. Indeed, the votes looked an awful lot like Bill Clinton’s impeachment, in which all Democrats and only a handful of Republicans voted to acquit. But Bill Clinton’s impeachment trial was nowhere near as open-and-shut as Donald Trump’s. He was being tried for perjury and obstruction of justice, over lies he told about acts that were unethical, but not illegal or un-Constitutional. I’m a little disappointed that no Democrats voted against him, but I think acquittal was probably the right verdict. There’s something very odd about being tried for perjury because you lied about something that wasn’t even a crime. Ironically, had it been illegal, he could have invoked the Fifth Amendment instead of lying and they wouldn’t have been able to touch him. So the only reason the perjury charge could stick at all was that the underlying act wasn’t illegal. But that isn’t what perjury is supposed to be about: It’s supposed to be used for things like false accusations and planted evidence. Refusing to admit that you had an affair that’s honestly no one’s business but your family’s really shouldn’t be a crime, regardless of your station.

So let us not imagine an equivalency here: Bill Clinton was being tried for crimes that were only crimes because he lied about something that wasn’t a crime. Donald Trump was being tried for manipulating other countries to interfere in our elections, obstructing investigations by Congress, and above all attempting to incite a coup. Partisan bias was evident in all three trials, but only Trump’s trials were about sedition against the United States.

That is to say, I expect to see partisan bias; it would be unrealistic not to. But I expect that bias to be limited. I expect there to be lines beyond which partisans will refuse to go. The Republican Party in the United States today has shown us that they have no such lines. (Or if there are, they are drawn far too high. What would he have to do, bomb an American city? He incited an invasion of the Capitol Building, for goodness’ sake! And that was after so terribly mishandling a pandemic that he caused roughly 200,000 excess American deaths!)

Temperamentally, I like to compromise. I want as many people to be happy as possible, even if that means not always getting exactly what I would personally prefer. I wanted to believe that there were reasonable conservatives in our government, professional statespersons with principles who simply had honest disagreements about various matters of policy. I can now confirm that there are at most 7 such persons in the US Senate, and at most 10 such persons in the US House of Representatives. So of the 261 Republicans in Congress, no more than 17 are actually reasonable statespersons who do not let partisan bias override their most basic principles of justice and democracy.

And even these 17 are by no means certain: There were good strategic reasons to vote against Trump, even if the actual justice meant nothing to you. Trump’s net disapproval rating was nearly the highest of any US President ever. Carter and Bush I had periods where they fared worse, but overall fared better. Johnson, Ford, Reagan, Obama, Clinton, Bush II, and even Nixon were consistently more approved than Trump. Kennedy and Eisenhower completely blew him out of the water—at their worst, Kennedy and Eisenhower were nearly 30 percentage points above Trump at his best. With Trump this unpopular, cutting ties with him would make sense for the same reason rats desert a sinking ship. And yet somehow partisan loyalty won out for 94% of Republicans in Congress.

Politics is the mind-killer, and I fear that this sort of extreme depravity on the part of Republicans in Congress will make it all too easy to dismiss conservatism as a philosophy in general. I actually worry about that; not all conservative ideas are wrong! Low corporate taxes actually make a lot of sense. Minimum wage isn’t that harmful, but it’s also not that beneficial. Climate change is a very serious threat, but it’s simply not realistic to jump directly to fully renewable energy—we need something for the transition, probably nuclear energy. Capitalism is overall the best economic system, and isn’t particularly bad for the environment. Industrial capitalism has brought us a golden age. Rent control is a really bad idea. Fighting racism is important, but there are ways in which woke culture has clearly gone too far. Indeed, perhaps the worst thing about woke culture is the way it denies past successes for civil rights and numbs us with hopelessness.

Above all, groupthink is incredibly dangerous. Once we become convinced that any deviation from the views of the group constitutes immorality or even treason, we become incapable of accepting new information and improving our own beliefs. We may start with ideas that are basically true and good, but we are not omniscient, and even the best ideas can be improved upon. Also, the world changes, and ideas that were good a generation ago may no longer be applicable to the current circumstances. The only way—the only way—to solve that problem is to always remain open to new ideas and new evidence.

Therefore my lament is not just for conservatives, who now find themselves represented by craven ideologues; it is also for liberals, who no longer have an opposition party worth listening to. Indeed, it’s a little hard to feel bad for the conservatives, because they voted for these maniacs. Maybe they didn’t know what they were getting? But they’ve had chances to remove most of them, and didn’t do so. At best I’d say I pity them for being so deluded by propaganda that they can’t see the harm their votes have done.

But I’m actually quite worried that the ideologues on the left will now feel vindicated; their caricatured view of Republicans as moustache-twirling cartoon villains turned out to be remarkably accurate, at least for Trump himself. Indeed, it was hard not to think of the ridiculous “destroying the environment for its own sake” of Captain Planet villains when Trump insisted on subsidizing coal power—which by the way didn’t even work.

The key, I think, is to recognize that reasonable conservatives do exist—there just aren’t very many of them in Congress right now. A significant number of Americans want low taxes, deregulation, and free markets but are horrified by Trump and what the Republican Party has become—indeed, at least a few write for the National Review.

The mere fact that an idea comes from Republicans is not a sufficient reason to dismiss that idea. Indeed, I’m going to say something even stronger: The mere fact that an idea comes from a racist or a bigot is not a sufficient reason to dismiss that idea. If the idea itself is racist or bigoted, yes, that’s a reason to think it is wrong. But even bad people sometimes have good ideas.

The reasonable conservatives seem to be in hiding at the moment; I’ve searched for them, and had difficulty finding more than a handful. Yet we must not give up the search. Politics should not appear one-sided.

Love in a time of quarantine

Feb 14 JDN 2459260

This is our first Valentine’s Day of quarantine—and hopefully our last. With Biden now already taking action and the vaccine rollout proceeding more or less on schedule, there is good reason to think that this pandemic will be behind us by the end of this year.

Yet for now we remain isolated from one another, attempting to substitute superficial digital interactions for the authentic comforts of real face-to-face contact. And anyone who is single, or forced to live away from their loved ones, during quarantine is surely having an especially hard time right now.

I have been quite fortunate in this regard: My fiancé and I have lived together for several years, and during this long period of isolation we’ve at least had each other—if basically no one else.

But even I have felt a strong difference, considerably stronger than I expected it would be: Despite many of my interactions already being conducted via the Internet, needing to do so with all interactions feels deeply constraining. Nearly all of my work can be done remotely—but not quite all, and even what can be done remotely doesn’t always work as well remotely. I am moderately introverted, and I still feel substantially deprived; I can only imagine how awful it must be for the strongly extraverted.

As awkward as face-to-face interactions can be, and as much as I hate making phone calls, somehow Zoom video calls are even worse than either. Being unable to visit someone’s house for dinner and games, or go out to dinner and actually sit inside a restaurant, leaves a surprisingly large emotional void. Nothing in particular feels radically different, but the sum of so many small differences adds up to a rather large one. I think I felt it the most when we were forced to cancel our usual travel back to Michigan over the holiday season.

Make no mistake: Social interaction is not simply something humans enjoy, or are good at. Social interaction is a human need. We need social interaction in much the same way that we need food or sleep. The United Nations considers solitary confinement for more than two weeks to be torture. Long periods in solitary confinement are strongly correlated with suicide—so in that sense, isolation can kill you. Think about the incredibly poor quality of social interactions that goes on in most prisons: Endless conflict, abuse, racism, frequent violence—and then consider that the one thing that inmates find most frightening is to be deprived of that social contact. This is not unlike being fed nothing but stale bread and water, and then suddenly having even that taken away from you.

Even less extreme forms of social isolation—like most of us are feeling right now—have as detrimental an effect on health as smoking or alcoholism, and considerably worse than obesity. Long-term social isolation increases overall mortality risk by more than one-fourth. Robust social interaction is critical for long-term health, both physically and mentally.

This does not mean that the quarantines were a bad idea—on the contrary, we should have enforced them more aggressively, so as to contain the pandemic faster and ultimately need less time in quarantine. Timing is critical here: Successfully containing the pandemic early is much easier than trying to bring it back under control once it has already spread. When the pandemic began, lockdown might have been able to stop the spread. At this point, vaccines are really our only hope of containment.

But it does mean that if you feel terrible lately, there is a very good reason for this, and you are not alone. Due to forces much larger than any of us can control, forces that even the world’s most powerful governments are struggling to contain, you are currently being deprived of a basic human need.

And especially if you are on your own this Valentine’s Day, remember that there are people who love you, even if they can’t be there with you right now.

What happened with GameStop?

Feb 7 JDN 2459253

No doubt by now you’ve heard about the recent bubble in GameStop stock that triggered several trading stops, nearly destroyed a hedge fund, and launched a thousand memes. What really strikes me about this whole thing is how ordinary it is: This is basically the sort of thing that happens in our financial markets all the time. So why are so many people suddenly paying so much attention to it?

There are a few important ways this is unusual: Most importantly, the bubble was triggered by a large number of middle-class people investing small amounts, rather than by a handful of billionaires or hedge funds. It’s also more explicitly collusive than usual, with public statements in writing about what stocks are being manipulated rather than hushed whispers between executives at golf courses. Partly as a consequence of these, the response from the government and the financial industry has been quite different as well, trying to halt trading and block transactions in a way that they would never do if the crisis had been caused by large financial institutions.

If you’re interested in the technical details of what happened, what a short squeeze is and how it can make a hedge fund lose enormous amounts of money unexpectedly, I recommend this summary by KQED. But the gist of it is simple enough: Melvin Capital placed huge bets that GameStop stock would fall in price, and a coalition of middle-class traders coordinated on Reddit to screw them over by buying a bunch of GameStop stock and driving up the price. It worked, and now Melvin Capital lost something on the order of $3-5 billion in just a few days.

The particular kind of bet they placed is called a short, and it’s a completely routine practice on Wall Street despite the fact that I could never quite understand why it is a thing that should be allowed.

The essence of a short is quite simple: When you short, you are selling something you don’t own. You “borrow” it (it isn’t really even borrowing), and then sell it to someone else, promising to buy it back and return it to where you borrowed it from at some point in the future. This amounts to a bet that the price will decline, so that the price at which you buy it is lower than the price at which you sold it.
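To make the mechanics concrete, here is a minimal sketch of the payoff from a short position, with purely illustrative prices; real short sales also involve margin requirements, borrowing fees, and the broker's ability to force a buy-back, none of which are modeled here.

```python
def short_profit(sell_price: float, buyback_price: float, shares: int = 100) -> float:
    """Profit from a short: sell borrowed shares now, buy them back later to return them."""
    return (sell_price - buyback_price) * shares

# If the price falls, the short seller profits:
print(short_profit(sell_price=20.00, buyback_price=5.00))     # +1500.0

# If the price rises instead, losses grow without bound, because there is
# no ceiling on how high the buyback price can go:
print(short_profit(sell_price=20.00, buyback_price=300.00))   # -28000.0
```

The second case is what a short squeeze exploits: buyers who drive the price up force short sellers to buy back at ever higher prices, which is how Melvin Capital's losses piled up so quickly.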

Doesn’t that seem like an odd thing to be allowed to do? Normally you can’t sell something you have merely borrowed. I can’t borrow a car and then sell it; car title in fact exists precisely to prevent this from happening. If I were to borrow your coat and then sell it to a thrift store, I’d have committed larceny. It’s really quite immaterial whether I plan to buy it back afterward; in general we do not allow people to sell things that they do not own.

Now perhaps the problem is that when I borrow your coat or your car, you expect me to return that precise object—not a similar coat or a car of equivalent Blue Book value, but your coat or your car. When I borrow a share of GameStop stock, no one really cares whether it is that specific share which I return—indeed, it would be almost impossible to even know whether it was. So in that way it’s a bit like borrowing money: If I borrow $20 from you, you don’t expect me to pay back that precise $20 bill. Indeed you’d be shocked if I did, since presumably I borrowed it in order to spend it or invest it, so how would I ever get it back?

But you also don’t sell money, generally speaking. Yes, there are currency exchanges and money-market accounts; but these are rather exceptional cases. In general, money is not bought and sold the way coats or cars are.

What about consumable commodities? You probably don’t care too much about any particular banana, sandwich, or gallon of gasoline. Perhaps in some circumstances we might “loan” someone a gallon of gasoline, intending them to repay us at some later time with a different gallon of gasoline. But far more likely, I think, would be simply giving a friend a gallon of gasoline and then not expecting any particular repayment except perhaps a vague offer of providing a similar favor in the future. I have in fact heard someone say the sentence “Can I borrow your sandwich?”, but it felt very odd when I heard it. (Indeed, I responded something like, “No, you can keep it.”)

And in order to actually be shorting gasoline (which is a thing that you, too, can do, perhaps even right now, if you have a margin account on a commodities exchange), it isn’t enough to borrow a gallon with the expectation of repaying a different gallon; you must also sell that gallon you borrowed. And now it seems very odd indeed to say to a friend, “Hey, can I borrow a gallon of gasoline so that I can sell it to someone for a profit?”

The usual arguments for why shorting should be allowed are much like the arguments for exotic financial instruments in general: “Increase liquidity”, “promote efficient markets”. These arguments are so general and so ubiquitous that they essentially amount to the strongest form of laissez-faire: Whatever Wall Street bankers feel like doing is fine and good and part of what makes American capitalism great.

In fact, I was never quite clear on why margin accounts are something we decided to allow; margin trading is inherently high-leverage and thus inherently high-risk. Borrowing money in order to arbitrage financial assets doesn’t just seem like a very risky thing to do; it has been implicated, one way or another, in virtually every financial crisis that has ever occurred. It would be an exaggeration to say that leveraged arbitrage is the one single cause of financial crises, but it would be a shockingly small exaggeration. I think it absolutely is fair to say that if leveraged arbitrage did not exist, financial crises would be far fewer and farther between.
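To see just how quickly leverage amplifies risk, here’s a toy calculation (the numbers are mine, purely for illustration):

```python
# A toy illustration (my own numbers): why leverage makes margin trading risky.

equity = 10_000            # your own money
leverage = 10              # borrow enough to control 10x your equity
position = equity * leverage

for price_move in (0.05, -0.05, -0.10):
    gain = position * price_move
    print(f"{price_move:+.0%} move -> {gain / equity:+.0%} return on equity")
# +5% move -> +50% return on equity
# -5% move -> -50% return on equity
# -10% move -> -100% return on equity (wiped out)
```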

Indeed, I am increasingly dubious of the whole idea of allowing arbitrage in general. Some amount of arbitrage may be unavoidable; there may always be people who see that prices are different for the same item in two different markets, and then exploit that difference before anyone can stop them. But this is a bit like saying that theft is probably inevitable: Yes, every human society that has had a system of property ownership (which is most of them—even communal hunter-gatherers have rules about personal property) has had some amount of theft. That doesn’t mean there is nothing we can do to reduce theft, or that we should simply allow theft wherever it occurs.

The moral argument against arbitrage is straightforward enough: You’re not doing anything. No good is produced; no service is provided. You are making money without actually contributing any real value to anyone. You just make money by having money. This is what people in the Middle Ages found suspicious about lending money at interest; but lending money actually is doing something—sometimes people need more money than they have, and lending it to them is providing a useful service for which you deserve some compensation.

A common argument economists make is that arbitrage will make prices more “efficient”, but when you ask them what they mean by “efficient”, the answer they give is that it removes arbitrage opportunities! So the good thing about arbitrage is that it stops you from doing more arbitrage?

And what if it doesn’t stop you? Many of the ways to exploit price gaps (particularly the simplest ones like “where it’s cheap, buy it; where it’s expensive, sell it”) will automatically close those gaps, but it’s not at all clear to me that all the ways to exploit price gaps will necessarily do so. And even if it’s a small minority of market manipulation strategies that exploit gaps without closing them, those are precisely the strategies that will be most profitable in the long run, because they don’t undermine their own success. Then, left to their own devices, markets will evolve to use such strategies more and more, because those are the strategies that work.

That is, in order for arbitrage to be beneficial, it must always be beneficial; there must be no way to exploit price gaps without inevitably closing those price gaps. If that is not the case, then evolutionary pressure will push more and more of the financial system toward using methods of arbitrage that don’t close gaps—or even exacerbate them. And indeed, when you look at how ludicrously volatile and crisis-prone our financial system has become, it sure looks an awful lot like an evolutionary equilibrium where harmful arbitrage strategies have evolved to dominate.

A world where arbitrage actually led to efficient pricing would be a world where the S&P 500 rises a steady 0.02% per day, each and every day. Maybe you’d see a big move when there was actually a major event, like the start of a war or the invention of a vaccine for a pandemic. You’d probably see a jump up or down of a percentage point or two with each quarterly Fed announcement. But daily moves of even five or six percentage points would be a very rare occurrence—because the real expected long-run aggregate value of the 500 largest publicly-traded corporations in America is what the S&P 500 is supposed to represent, and that is not a number that should change very much very often. The fact that I couldn’t really tell you what that number is without multi-trillion-dollar error bars is so much the worse for anyone who thinks that financial markets can somehow get it exactly right every minute of every day.
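As a rough sanity check on that number (my own back-of-the-envelope arithmetic, assuming roughly 252 trading days in a year), a steady 0.02% daily rise compounds to around 5% per year, which is at least the right order of magnitude for long-run growth in real value:

```python
# My own back-of-the-envelope arithmetic, assuming ~252 trading days per year.

daily_return = 0.0002           # a steady 0.02% rise per day
trading_days = 252

annual_return = (1 + daily_return) ** trading_days - 1
print(f"{annual_return:.1%}")   # about 5.2% per year
```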

Moreover, it’s not hard to imagine how we might close price gaps without simply allowing people to exploit them. There could be a bunch of economists at the Federal Reserve whose job it is to locate markets where there are arbitrage opportunities, and then a bundle of government funds that they can allocate to buying and selling assets in order to close those price gaps. Any profits made are received by the treasury; any losses taken are borne by the treasury. The economists would get paid a comfortable salary, and perhaps get bonuses based on doing a good job in closing large or important price gaps; but there is no need to give them even a substantial fraction of the proceeds, much less all of it. This is already how our money supply is managed, and it works quite well, indeed obviously much better than an alternative with “skin in the game”: Can you imagine the dystopian nightmare we’d live in if the Chair of the Federal Reserve actually received even a 1% share of the US money supply? (Actually I think that’s basically what happened in Zimbabwe: The people who decided how much money to print got to keep a chunk of the money that was printed.)

I don’t actually think this GameStop bubble is all that important in itself. A decade from now, it may be no more memorable than Left Shark or the Macarena. But what is really striking about it is how little it differs from business-as-usual on Wall Street. The fact that a few million Redditors can gather together to buy a stock “for the lulz” or to “stick it to the Man” and thereby bring hedge funds to their knees is not such a big deal in itself, but it is symptomatic of much deeper structural flaws in our financial system.

On the accuracy of testing

Jan 31 JDN 2459246

One of the most important tools we have for controlling the spread of a pandemic is testing to see who is infected. But no test is perfectly reliable. Currently we have tests that are about 80% accurate. But what does it mean to say that a test is “80% accurate”? Many people get this wrong.

First of all, it certainly does not mean that if you have a positive result, you have an 80% chance of having the virus. Yet this is probably what most people think when they hear “80% accurate”.

So I thought it was worthwhile to demystify this a little bit, and explain just what we are talking about when we discuss the accuracy of a test—which turns out to have deep implications not only for pandemics, but for knowledge in general.

There are really two key measures of a test’s accuracy, called sensitivity and specificity. The sensitivity is the probability that, if the true answer is positive (you have the virus), the test result will be positive. This is the sense in which our tests are 80% accurate. The specificity is the probability that, if the true answer is negative (you don’t have the virus), the test result is negative. The terms make sense: A test is sensitive if it always picks up what’s there, and specific if it doesn’t pick up what isn’t there.

These two measures need not be the same, and typically are quite different. In fact, there is often a tradeoff between them: Increasing the sensitivity will often decrease the specificity.

This is easiest to see with an extreme example: I can create a COVID test that has “100% accuracy” in the sense of sensitivity. How do I accomplish this miracle? I simply assume that everyone in the world has COVID. Then it is absolutely guaranteed that I will have zero false negatives.

I will of course have many false positives—indeed the vast majority of my “positive results” will be me assuming that COVID is present without any evidence. But I can guarantee a 100% true positive rate, so long as I am prepared to accept a 0% true negative rate.

It’s also possible to combine tests, and which rate improves depends on how you combine them.

For example, suppose test A has a sensitivity of 70% and a specificity of 90%, while test B has the reverse, and suppose we count the combined result as positive whenever either test comes back positive.

Then, if the true answer is positive, test A will catch it 70% of the time, and test B will catch 90% of the cases that test A missed. So there is a 70% + (30%)(90%) = 97% chance of getting a positive result on the combined test.

But the price is paid in specificity: if the true answer is negative, the combined result is negative only when both tests come back negative, which happens (90%)(70%) = 63% of the time. Flip the rule, counting the result as positive only when both tests are positive, and you get the mirror image: 63% sensitivity and 97% specificity. Combining tests doesn’t give you something for nothing, but it does let you push whichever error rate you care most about very low; this is the logic behind screening with one test and confirming with another.
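Here’s a quick sketch of that calculation (the function names and the either/both rules are my own framing, and I’m assuming the two tests’ errors are independent):

```python
# Combining two independent binary tests under an "either positive" (OR) rule
# and a "both positive" (AND) rule, using the hypothetical tests from above.

def or_rule(sens_a, spec_a, sens_b, spec_b):
    """Combined test is positive if either test is positive."""
    sens = 1 - (1 - sens_a) * (1 - sens_b)   # missed only if both tests miss
    spec = spec_a * spec_b                   # cleared only if both tests are negative
    return sens, spec

def and_rule(sens_a, spec_a, sens_b, spec_b):
    """Combined test is positive only if both tests are positive."""
    sens = sens_a * sens_b                   # detected only if both tests detect
    spec = 1 - (1 - spec_a) * (1 - spec_b)   # false alarm only if both false-alarm
    return sens, spec

# Test A: 70% sensitivity, 90% specificity; test B: the reverse.
print(or_rule(0.70, 0.90, 0.90, 0.70))   # -> approximately (0.97, 0.63)
print(and_rule(0.70, 0.90, 0.90, 0.70))  # -> approximately (0.63, 0.97)
```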

Actually if we are going to specify the accuracy of a test in a single number, I think it would be better to use a much more obscure term, the informedness. Informedness is sensitivity plus specificity, minus one. It ranges between -1 and 1, where 1 is a perfect test, and 0 is a test that tells you absolutely nothing. -1 isn’t the worst possible test; it’s a test that’s simply calibrated backwards! Re-label it, and you’ve got a perfect test. So really maybe we should talk about the absolute value of the informedness.

It’s much harder to play tricks with informedness: My “miracle test” that just assumes everyone has the virus actually has an informedness of zero. This makes sense: The “test” actually provides no information you didn’t already have.

Surprisingly, I was not able to quickly find any references to this really neat mathematical result for informedness, but I find it unlikely that I am the only one who came up with it: The informedness of a test is the non-unit eigenvalue of a Markov matrix representing the test. (If you don’t know what all that means, don’t worry about it; it’s not important for this post. I just found it a rather satisfying mathematical result that I couldn’t find anyone else talking about.)
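If you want to check that claim numerically, here’s a quick sketch (the matrix construction is mine: columns give the distribution of test results conditional on the true state, so each column sums to 1):

```python
import numpy as np

sens, spec = 0.90, 0.95

# Columns: true state (positive, negative); rows: test result (positive, negative).
# Each column sums to 1, so this is a column-stochastic Markov matrix.
M = np.array([[sens,     1 - spec],
              [1 - sens, spec    ]])

print(np.sort(np.linalg.eigvals(M)))  # -> approximately [0.85, 1.0]
print(sens + spec - 1)                # -> 0.85 (up to rounding): the informedness
```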

But there’s another problem as well: Even if we know everything about the accuracy of a test, we still can’t infer the probability of actually having the virus from the test result. For that, we need to know the baseline prevalence. Failing to account for that is the very common base rate fallacy.

Here’s a quick example to help you see what the problem is. Suppose that 1% of the population has the virus. And suppose that the tests have 90% sensitivity and 95% specificity. If I get a positive result, what is the probability I have the virus?

If you guessed something like 90%, you have committed the base rate fallacy. It’s actually much smaller than that. In fact, the true probability you have the virus is only 15%.

In a population of 10000 people, 100 (1%) will have the virus while 9900 (99%) will not. Of the 100 who have the virus, 90 (90%) will test positive and 10 (10%) will test negative. Of the 9900 who do not have the virus, 495 (5%) will test positive and 9405 (95%) will test negative.

This means that out of 585 positive test results, only 90 will actually be true positives!

If we wanted to improve the test so that we could say that someone who tests positive is probably actually positive, would it be better to increase sensitivity or specificity? Well, let’s see.

If we increased the sensitivity to 95% and left the specificity at 95%, we’d get 95 true positives and 495 false positives. This raises the probability to only 16%.

But if we increased the specificity to 97% and left the sensitivity at 90%, we’d get 90 true positives and 297 false positives. This raises the probability all the way to 23%.

But suppose instead we care about the probability that you don’t have the virus, given that you test negative. Our original test had 9405 true negatives and only 10 false negatives, so it was quite good in this regard; if you test negative, you only have about a 0.1% chance of having the virus.
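All the numbers in this example follow from Bayes’ rule, and you can verify them with a few lines of code (a sketch of my own; the function is just for illustration):

```python
def ppv_npv(prevalence, sensitivity, specificity):
    """Positive and negative predictive values via Bayes' rule."""
    p_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    ppv = prevalence * sensitivity / p_pos
    p_neg = prevalence * (1 - sensitivity) + (1 - prevalence) * specificity
    npv = (1 - prevalence) * specificity / p_neg
    return ppv, npv

# Original example: 1% prevalence, 90% sensitivity, 95% specificity.
print(ppv_npv(0.01, 0.90, 0.95))  # PPV ~ 0.154, NPV ~ 0.999
# Raise sensitivity to 95%:
print(ppv_npv(0.01, 0.95, 0.95))  # PPV ~ 0.161
# Instead raise specificity to 97%:
print(ppv_npv(0.01, 0.90, 0.97))  # PPV ~ 0.233
```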

Which approach is better really depends on what we care about. When dealing with a pandemic, false negatives are much worse than false positives, so we care most about sensitivity. (Though my example should show why specificity also matters.) But there are other contexts in which false positives are more harmful—such as convicting a defendant in a court of law—and then we want to choose a test which has a high true negative rate, even if it means accepting a low true positive rate.

In science in general, we seem to care a lot about false positives; the significance threshold for a p-value is simply one minus the specificity of the statistical test, and as we all know, low p-values are highly sought after. But the sensitivity of statistical tests is often quite unclear. This means that we can be reasonably confident of our positive results (provided the baseline probability wasn’t too low, the statistics weren’t p-hacked, etc.); but we really don’t know how confident to be in our negative results. Personally I think negative results are undervalued, and part of how we got a replication crisis and p-hacking was by undervaluing those negative results. I think it would be better in general for us to report 95% confidence intervals (or better yet, 95% Bayesian prediction intervals) for all of our effects, rather than worrying about whether they meet some arbitrary threshold probability of not being exactly zero. Nobody really cares whether the effect is exactly zero (and it almost never is!); we care how big the effect is. I think the long-run trend has been toward this kind of analysis, but it’s still far from the norm in the social sciences. We’ve become utterly obsessed with specificity, and basically forgotten that sensitivity exists.

Above all, be careful when you encounter a statement like “the test is 80% accurate”; what does that mean? 80% sensitivity? 80% specificity? 80% informedness? 80% probability that an observed positive is true? These are all different things, and the difference can matter a great deal.

A new chapter in my life, hopefully

Jan 17 JDN 2459232

My birthday is coming up soon, and each year around this time I try to step back and reflect on how the previous year has gone and what I can expect from the next one.

Needless to say, 2020 was not a great year for me. The pandemic and its consequences made this quite a bad year for almost everyone. Months of isolation and fear have made us all stressed and miserable, and even with the vaccines coming out the end is still all too far away. Honestly I think I was luckier than most: My work could be almost entirely done remotely, and my income is a fixed stipend, so financially I faced no hardship at all. But isolation still wreaks its toll.

Most of my energy this past year has been spent on the job market. I applied to over 70 different job postings, and from that I received 6 interviews, all but one of which I’ve already finished. If they liked how I did in those interviews, I’ll be invited to another phase, which in normal times would be a flyout where candidates visit the campus; but due to COVID it’s all being done remotely now. And then, finally, I may actually get some job offers. Statistically I think I will probably get some kind of offer at this point, but I can’t be sure—and that uncertainty is quite nerve-wracking. I may get a job and move somewhere new, or I may not and have to stay here for another year and try again. Both outcomes are still quite probable, and I really can’t plan on either one.

If I do actually get a job, this will open a new chapter in my life—and perhaps I will finally be able to settle down with a permanent career, buy a house, start a family. One downside of graduate school I hadn’t really anticipated is how it delays adulthood: You don’t really feel like you are a proper adult, because you are still in the role of a student for several additional years. I am all too ready to be done with being a student. I feel as though I’ve spent all my life preparing to do things instead of actually doing them, and I am now so very tired of preparing.

I don’t even know for sure what I want to do—I feel disillusioned with academia, I haven’t been able to snare any opportunities in government or nonprofits, and I need more financial security than I could get if I leapt headlong into full-time writing. But I am quite certain that I want to actually do something, and no longer simply be trained and prepared (and continually evaluated on that training and preparation).

I’m even reluctant to do a postdoc, because that also likely means packing up and moving again in a few years (though I would prefer it to remaining here another year).

I have to keep reminding myself that all of this is temporary: The pandemic will eventually be quelled by vaccines, and quarantine procedures will end, and life for most of us will return to normal. Even if I don’t get a job I like this year, I probably will next year; and then I can finally tie off my education with a bow and move on. Even if the first job isn’t permanent, eventually one will be, and at last I’ll be able to settle into a stable adult life.

Much of this has already dragged on longer than I thought it would. Not the job market, which has gone more or less as expected. (More accurately, my level of optimism has jumped up and down like a roller coaster, and on average what I thought would happen has been something like what actually happened so far.) But the pandemic certainly has; the early attempts at lockdown were ineffective, the virus kept spreading worse and worse, and now there are more COVID cases in the US than ever before. Southern California in particular has been hit especially hard, and hospitals here are now overwhelmed just as we feared they might be.

Even the removal of Trump has been far more arduous than I expected. First there was the slow counting of ballots because so many people had (wisely) voted absentee. Then there were the frivolous challenges to the counts—and yes, I mean frivolous in a legal sense, as 61 out of 62 lawsuits were thrown out immediately and the 1 that made it through was a minor technical issue.

And then there was an event so extreme I can barely even fathom that it actually happened: An armed mob stormed the Capitol building, forced Congress to evacuate, and made it inside with minimal resistance from the police. The stark difference in how the police reacted to this attempted insurrection and how they have responded to the Black Lives Matter protests underscores the message of Black Lives Matter better than they ever could have by themselves.

In one sense it feels like so much has happened: We have borne witness to historic events in real-time. But in another sense it feels like so little has happened: Staying home all the time under lockdown has meant that days are always much the same, and each day blends into the next. I feel somehow unhinged from time, at once marveling that a year has passed already, and marveling that so much happened in only a year.

I should soon hear back from these job interviews and have a better idea what the next chapter of my life will be. But I know for sure that I’ll be relieved once this one is over.

2020 is almost over

Dec27 JDN 2459211

I don’t think there are many people who would say that 2020 was their favorite year. Even if everything else had gone right, the 1.7 million deaths from the COVID pandemic would already make this a very bad year.

As if that weren’t bad enough, shutdowns in response to the pandemic, resulting unemployment, and inadequate fiscal policy responses have in a single year thrown nearly 150 million people back into extreme poverty. Unemployment in the US this year spiked to nearly 15%, its highest level since World War 2. Things haven’t been this bad for the US economy since the Great Depression.

And this Christmas season certainly felt quite different, with most of us unable to safely travel and forced to interact with our families only via video calls. New Year’s this year won’t feel like a celebration of a successful year so much as relief that we finally made it through.

Many of us have lost loved ones. Fortunately none of my immediate friends and family have died of COVID, but I can now count half a dozen acquaintances, friends-of-friends or distant relatives who are no longer with us. And I’ve been relatively lucky overall; both I and my partner work in jobs that are easy to do remotely, so our lives haven’t had to change all that much.

Yet 2020 is nearly over, and already there are signs that things really will get better in 2021. There are many good reasons for hope.


Joe Biden won the election by a substantial margin in both the popular vote and the Electoral College.

There are now multiple vaccines for COVID that have been successfully fast-tracked, and they are proving to be remarkably effective. Current forecasts suggest that we’ll have most of the US population vaccinated by the end of next summer.

Maybe the success of this vaccine will finally convince some of the folks who have been doubting the safety and effectiveness of vaccines in general. (Or maybe not; it’s too soon to tell.)

Perhaps the greatest reason to be hopeful about the future is the fact that 2020 is a sharp deviation from the long-term trend toward a better world. That 150 million people thrown back into extreme poverty needs to be compared against the over 1 billion people who have been lifted out of extreme poverty in just the last 30 years.

Those 1.7 million deaths need to be compared against the fact that global life expectancy has increased from 45 to 73 since 1950. The world population is 7.8 billion people. The global death rate has fallen from over 20 deaths per 1000 people per year to only 7.6 deaths per 1000 people per year. Multiplied over 7.8 billion people, that’s nearly 100 million lives saved every single year by advances in medicine and overall economic development. Indeed, if we were to sustain our current death rate indefinitely, our life expectancy would rise to over 130 (in a stationary population, life expectancy is roughly the reciprocal of the crude death rate, and 1/0.0076 is about 130 years). There are various reasons to think that probably won’t happen, mostly related to age demographics, but in fact there are medical breakthroughs we might make that would make it possible. Even according to current forecasts, world life expectancy is expected to exceed 80 years by the end of the 21st century.

There have also been some significant environmental milestones this year: Global carbon emissions fell an astonishing 7% in 2020, though much of that was from reduced economic activity in response to the pandemic. (If we could sustain that, we’d cut global emissions in half each decade!) But many other milestones were the product of hard work, not silver linings of a global disaster: Whales returned to the Hudson river, Sweden officially terminated their last coal power plant, and the Great Barrier Reef is showing signs of recovery.

Yes, it’s been a bad year for most of us—most of the world, in fact. But there are many reasons to think that next year will be much better.

Adversity is not a gift

Nov 29 JDN 2459183

For the last several weeks I’ve been participating in a program called “positive intelligence” (which they abbreviate “PQ” even though that doesn’t make sense); it’s basically a self-help program that is designed to improve mood and increase productivity. I am generally skeptical of such things, and I could tell from the start that it was being massively oversold, but I had the opportunity to participate for free, and I looked into the techniques involved and most of them seem to be borrowed from cognitive-behavioral therapy and mindfulness meditation.

Overall, I would say that the program has had small but genuine benefits for me. I think the most helpful part was actually getting the chance to participate in group sessions (via Zoom of course) with others also going through the program. That kind of mutual social support can make a big difference. The group I joined was made up entirely of fellow economists (some other grad students, some faculty), so we had a lot of shared experiences.

Some of the techniques feel very foolish, and others just don’t seem to work for me; but I did find at least some of the meditation techniques (which they annoyingly insist on calling by the silly name “PQ reps”) have helped me relax.

But there’s one part of the PQ program in particular that I just can’t buy into, and this is the idea that adversity is a gift and an opportunity.

They call it the “Sage perspective”: You observe the world without judging what is good or bad, and any time you think something is bad, you find a way to transform it into a gift and an opportunity. The claim is that everything—or nearly everything—that happens to you can make you better off. There’s a lot of overlap here with the attitude “Everything happens for a reason”.

I don’t doubt that sincerely believing this would make you happier. Nevertheless, it is obviously false.

If indeed adversity were a gift, we would seek it out. If getting fired or going bankrupt or getting sick were a gift and an opportunity, we’d work to make these things happen.

Yes, it’s true that sometimes an event which seems bad at the time can turn out to have good consequences in the long run. This is simply because we are unable to foresee all future ramifications. Sometimes things turn out differently than you think they will. But most of the time, when something seems bad, it is actually bad.

There might be some small amount of discomfort or risk that would be preferable to a life of complete safety and complacency; but we are perfectly capable of seeking out whatever discomfort or risk we choose. Most of us live with far more discomfort and risk than we would prefer, and simply have no choice in the matter.

If adversity were a gift, people would thank you for giving it to them. “Thanks for dumping me!” “Thanks for firing me!” “Thanks for punching me!” These aren’t the sort of thing we hear very often (at least not sincerely).

I think this is fairly obvious, honestly, so I won’t belabor it any further. But it raises a question: Is there a way to salvage the mental health benefits of this attitude while abandoning its obvious falsehood?

“Everything happens for a reason” doesn’t work; we live in a universe of deep randomness, ruled by the blind idiot gods of natural law.

“Every cloud has a silver lining” is better; but clearly not every bad thing has an upside, or if it does the upside can be so small as to be utterly negligible. (What was the upside of the Rwandan genocide?) Restricted to ordinary events like getting fired, this one works pretty well; but it obviously fails for the most extreme traumas, and doesn’t seem particularly helpful for the death of a loved one either.

“What doesn’t kill me makes me stronger” is better still, but clearly not true in every case; some bad events that don’t actually kill us can traumatize us and make the rest of our lives harder. Perhaps “What doesn’t permanently damage me makes me stronger”?

I think the version of this attitude that I have found closest to the truth is “Everything is raw material”. Sometimes bad things just happen: Bad luck, or bad actions, can harm just about anyone at just about any time. But it is within our power to decide how we will respond to what happens to us, and wallowing in despair is almost never the best response.

Thus, while it is foolish to see adversity as a gift, it is not so foolish to see it as an opportunity. Don’t try to pretend that bad things aren’t bad. There’s no sense in denying that we would prefer some outcomes over others, and we feel hurt or disappointed when things don’t turn out how we wanted. Yet even what is bad can still contain within it chances to learn or make things better.