Defending yourself defends others

Mar 10 JDN 2458553

There’s a meme going around the feminist community that is very well-intentioned, but dangerously misguided. I first encountered it as a tweet, though it may have originated elsewhere:

If you’re promoting changes to women’s behaviour to “prevent” rape, you’re really saying “make sure he rapes the other girl”.

The good intention here is that we need to stop blaming victims. Victim-blaming is ubiquitous, and especially common and harmful in the case of sexual assault. If someone assaults you—or robs you, or abuses you—it is never your fault.

But I fear that there is a baby being thrown out with this bathwater: While failing to defend yourself doesn’t make it your fault, being able to defend yourself can still make you safer.

And, just as importantly, it can make others safer too. The game theory behind that is the subject of this post.

For purposes of the theory, it doesn’t matter what the crime is. So let’s set aside the intense emotional implications of sexual assault and suppose the crime is grand theft auto.

Some cars are defended—they have a LoJack system installed that will allow them to be recovered and the thieves to be prosecuted. (Don’t suppose it’s a car alarm; those don’t work.)

Other cars are not defended—once stolen, they may not be recovered.

There are two cases to consider: Defense that is visible, and defense that is invisible.

Let’s start by assuming that the defense is visible: When choosing which car to try to steal, the thieves can intentionally pick one that doesn’t have a LoJack installed. (This doesn’t work well for car theft, but it’s worth considering for the general question of self-defense. The kind of clothes you wear, the way you carry yourself, how many people are with you, and overall just how big and strong you look are visible signs of a capacity for self-defense.)

In that case, the game is one of perfect information: First each car owner chooses whether or not to install a LoJack at some cost L (in real life, about $700); then the thieves see which cars are equipped and choose which car to steal.

Let’s say the probability that a stolen car is recovered and the thief prosecuted is p if the car is defended, and q if it is not, with p > q. In the real world, about half of stolen cars are recovered—but over 90% of LoJack-equipped vehicles are recovered, so let’s take p = 0.9 and q = 0.5.

Then let’s say the expected cost to the thief of the car being recovered is C. This is presumably quite high: If you get convicted, you could spend time in prison. On the other hand, the car might be recovered without the thief ever being convicted. Let’s ballpark C at about $30,000.

Finally, the value of successfully stealing a car is V. The average price of a used car in the US is about $20,000, so V is probably close to that.

If no cars are defended, what will the thieves choose? Assuming they are risk-neutral (car thieves don’t seem like very risk-averse folks, in general), the expected benefit of stealing a car is V – q C. With the parameters above, that’s (20000)-(0.5)(30000) = $5,000. The thieves will choose a car at random and steal it.

If some cars are defended and some are not, what will the thieves choose? They will avoid the defended cars and steal one of the undefended cars.

But what if all cars are defended? Now the expected benefit is V – p C, which is (20000)-(0.9)(30000) = -$7,000. The thieves will not steal any cars at all. (This is actually the unique subgame-perfect equilibrium: Everyone installs a LoJack and no cars get stolen. Of course, that assumes perfect rationality.)
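To make the arithmetic concrete, here’s a minimal sketch (in Python, using the ballpark figures above; the variable names are just mine) of the thief’s expected payoff in this perfect-information case, with no cars defended versus all cars defended:

```python
# Ballpark parameters from the text
V = 20_000   # value of a successfully stolen car
C = 30_000   # expected cost to the thief if the car is recovered
p = 0.9      # recovery probability for a LoJack-equipped car
q = 0.5      # recovery probability for an undefended car

# Risk-neutral thief's expected payoff from stealing one car
payoff_no_defense  = V - q * C   # no cars defended
payoff_all_defense = V - p * C   # all cars defended

print(f"Undefended car: {payoff_no_defense:+,.0f}")   # +5,000
print(f"Defended car:   {payoff_all_defense:+,.0f}")  # -7,000
```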

Yet that isn’t so impressive: if everyone defends themselves, then of course everyone is defended. That sounds nearly tautological. And expecting everyone to successfully defend themselves all the time is quite unreasonable. This might be what people have in mind when they say things like the quote above: It’s impossible for everyone to be defended always.

But it turns out that we don’t actually need that. Things get a lot more interesting when we assume that self-defense can be invisible. It would be very hard to know whether a car has a LoJack installed without actually opening it up, and there are many other ways to defend yourself that are not so visible—such as knowing techniques of martial arts or using a self-defense phone app.

Now the game has imperfect information. The thieves don’t know whether you have chosen to defend your car or not.

We need to add a couple more parameters. First is the number of cars per thief n. Then we need the proportion of cars that are defended. Let’s call it d. Then with probability d a given car is defended, and with probability 1-d it is not.

The expected value of stealing a car for the thieves is now this: V – p d C – q (1-d) C. If this is positive, they will steal a car; if it is negative, they will not.

Knowing this, should you install a LoJack? Remember that it costs you L to do so.

What’s the probability your car will be stolen? If the thieves are stealing cars at all, the probability that your car is the one stolen is 1/n. If that happens, you will have an expected loss of (1-p)V if you have a LoJack, or (1-q)V if you don’t. The difference between those is (p-q)V.

So your expected benefit of having a LoJack is (p-q)V/n – L. With the parameters above, that comes to: (0.9-0.5)(20000)/n – (700) = 8000/n – 700. So if there are no more than 11 cars per thief, this is positive and you should buy a LoJack. If there are 12 or more cars per thief, you’re better off taking your chances.
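As a quick sketch of that decision (same ballpark parameters; nothing here beyond the arithmetic in the last paragraph), here is the owner’s expected net benefit of a LoJack as a function of the number of cars per thief n:

```python
V, C, L = 20_000, 30_000, 700   # car value, thief's cost if caught, LoJack cost
p, q = 0.9, 0.5                 # recovery probabilities with / without a LoJack

def lojack_net_benefit(n):
    """Owner's expected benefit of installing a LoJack with n cars per thief."""
    return (p - q) * V / n - L

for n in (5, 11, 12, 30):
    print(n, round(lojack_net_benefit(n)))
# Positive for n <= 11, negative for n >= 12.
```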

This only applies if the thieves are willing to steal at all. And then the interesting question is whether V – p d C – q (1-d) C is positive. For these parameters, that’s (20000) – (0.9)(30000)d – (0.5)(30000) + (0.5)(30000)d = 5000 – 12000 d. Notice that if we substitute in d=0 we get back $5,000, and at d=1 we get back -$7,000, just as before. There is a critical value of d at which the thieves aren’t sure whether to try or not: d* = 5/12 = 0.42.
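Solving V – p d C – q (1-d) C = 0 for d gives d* = (V – qC) / ((p – q) C). Here is a minimal sketch of that calculation, again with the same numbers (the function name is just mine):

```python
V, C = 20_000, 30_000
p, q = 0.9, 0.5

def thief_payoff(d):
    """Thief's expected payoff when a proportion d of cars carry a hidden LoJack."""
    return V - (p * d + q * (1 - d)) * C

d_star = (V - q * C) / ((p - q) * C)     # proportion at which thieves are indifferent
print(d_star)                            # 0.4166... = 5/12
print(thief_payoff(0), thief_payoff(1))  # 5000.0  -7000.0
print(round(thief_payoff(d_star)))       # 0
```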

Assuming that a given car is worth defending if it would otherwise be stolen (n <= 11), the equilibrium is actually when precisely a proportion d* of the cars are defended and 1-d* are not. Any less than this, and there is an undefended car that would be worth defending. Any more than this, and the thieves aren’t going to try to steal anything, so why bother defending?

Of course this is a very stylized model: In particular, we assumed that all cars are equally valuable and equally easy to steal, which is surely not true in real life.

Yet this model is still enough to make the most important point: Since presumably we do not value the welfare of the car thieves, it could happen that people choosing on their own would not defend their cars, but society as a whole would be better off if they did.

The net loss to society from a stolen car is (1-q)V if the car was not defended, or (1-p)V if it was. But if the thieves don’t steal any cars at all, the net loss to society is zero. The cost of defending a proportion d* of all cars is n d* L.

So if we are currently at d = 0, society is currently losing (1-q)V. We could eliminate this cost entirely by paying n d* L to defend a sufficient number of cars. Suppose n = 30. Then this total cost is (30)(5/12)(700) = $8,750. The loss from cars being stolen was (0.5)(20000) = $10,000. So it would be worth it, from society’s perspective, to randomly install LoJack systems in 42% of cars.

But for any given car owner, it would not be worth it; the expected benefit is 8000/30 – 700 = -$433. (I guess we could ask how much you’re willing to pay for “peace of mind”.)
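Putting the pieces together for the n = 30 example, here is that comparison as a short sketch: society’s loss from theft when no cars are defended, the total cost of defending a proportion d* of the cars, and the (negative) private benefit to any single owner.

```python
V, C, L = 20_000, 30_000, 700
p, q = 0.9, 0.5
n = 30

d_star = (V - q * C) / ((p - q) * C)      # 5/12 of cars need defending

loss_if_undeterred = (1 - q) * V          # $10,000 lost to theft when d = 0
cost_to_deter      = n * d_star * L       # $8,750 to defend d* of the n cars
private_benefit    = (p - q) * V / n - L  # about -$433 for any single owner

print(loss_if_undeterred, round(cost_to_deter), round(private_benefit))
# 10000.0 8750 -433
```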

Where does the extra benefit go? To all the other car owners. By defending your car, you are raising d and thereby lowering the expected payoff for a car thief. There is a positive externality; this is a public good. You get some of that benefit yourself, but others also share in that benefit.

This brings me at last to the core message of this post:

Self-defense is a public good.

The better each person defends themselves, the riskier it becomes for criminals to try to victimize anyone. Never feel guilty for trying to defend yourself; you are defending everyone else at the same time. In fact, you should consider taking actions to defend yourself even when you aren’t sure it’s worth it for you personally: That positive externality may be large enough to make your actions worthwhile for society as a whole.

Again, this does not mean we should blame victims when they are unable to defend themselves. Self-defense is easier for some people than others, and everyone is bound to slip up on occasion. (Also, eternal vigilance can quickly shade over into paranoia.) It is always the perpetrator’s fault.

The “market for love” is a bad metaphor

Feb 14 JDN 2458529

Valentine’s Day was this past week, so let’s talk a bit about love.

Economists would never be accused of being excessively romantic. To most neoclassical economists, just about everything is a market transaction. Love is no exception.

There are all sorts of articles and books, and an even larger number of research papers going back decades and continuing to this day, that use the metaphor of the “marriage market”.

In a few places, marriage does actually function something like a market: In China, there are places where your parents will hire brokers and matchmakers to select a spouse for you. But even this isn’t really a market for love or marriage. It’s a market for matchmaking services. The high-tech version of this is dating sites like OkCupid.

And of course sex work actually occurs on markets; there is buying and selling of services at monetary prices. There is of course a great deal worth saying on that subject, but it’s not my topic for today.

But in general, love is really nothing like a market. First of all, there is no price. This alone should be sufficient reason to say that we’re not actually dealing with a market. The whole mechanism that makes a market a market is the use of prices to achieve equilibrium between supply and demand.

A price doesn’t necessarily have to be monetary; you can barter apples for bananas, or trade in one used video game for another, and we can still legitimately call that a market transaction with a price.

But love isn’t like that either. If your relationship with someone is so transactional that you’re actually keeping a ledger of each thing they do for you and each thing you do for them so that you could compute a price for services, that isn’t love. It’s not even friendship. If you really care about someone, you set such calculations aside. You view their interests and yours as in some sense shared, aligned toward common goals. You stop thinking in terms of “me” and “you” and start thinking in terms of “us”. You don’t think “I’ll scratch your back if you scratch mine.” You think “We’re scratching each other’s backs today.”

This is of course not to say that love never involves conflict. On the contrary, love always involves conflict. Successful relationships aren’t those where conflict never happens, they are those where conflict is effectively and responsibly resolved. Your interests and your loved ones’ are never completely aligned; there will always be some residual disagreement. But the key is to realize that your interests are still mostly aligned; those small vectors of disagreement should be outweighed by the much larger vector of your relationship.

And of course, there can come a time when that is no longer the case. Obviously, there is domestic abuse, which should absolutely be a deal-breaker for anyone. But there are other reasons why you may find that a relationship ultimately isn’t working, that your interests just aren’t as aligned as you thought they were. Eventually those disagreement vectors just get too large to cancel out. This is painful, but unavoidable. But if you reach the point where you are keeping track of actions on a ledger, that relationship is already dead. Sooner or later, someone is going to have to pull the plug.

Very little of what I’ve said in the preceding paragraphs is likely to be controversial. Why, then, would economists think that it makes sense to treat love as a market?

I think this comes down to a motte and bailey doctrine. A more detailed explanation can be found at that link, but the basic idea of a motte and bailey is this: You have a core set of propositions that is highly defensible but not that interesting (the “motte”), and a broader set of propositions that are very interesting, but not as defensible (the “bailey”). The terms are related to a medieval defensive strategy, in which there was a small, heavily fortified tower called a motte, surrounded by fertile, useful land, the bailey. The bailey is where you actually want to live, but it’s hard to defend; so if the need arises, you can pull everyone back into the motte to fight off attacks. But nobody wants to live in the motte; it’s just a cramped stone tower. There’s nothing to eat or enjoy there.

The motte is composed of ideas that almost everyone agrees with. The bailey is the real point of contention, the thing you are trying to argue for—which, by construction, other people must not already agree with.

Here are some examples, which I have intentionally chosen from groups I agree with:

Feminism can be a motte and bailey doctrine. The motte is “women are people”; the bailey is abortion rights, affirmative consent and equal pay legislation.

Rationalism can be a motte and bailey doctrine. The motte is “rationality is good”; the bailey is atheism, transhumanism, and Bayesian statistics.

Anti-fascism can be a motte and bailey doctrine. The motte is “fascists are bad”; the bailey is black bloc Antifa and punching Nazis.

Even democracy can be a motte and bailey doctrine. The motte is “people should vote for their leaders”; my personal bailey is abolition of the Electoral College, a younger voting age, and range voting.

Using a motte and bailey doctrine does not necessarily make you wrong. But it’s something to be careful about, because as a strategy it can be disingenuous. Even if you think that the propositions in the bailey all follow logically from the propositions in the motte, the people you’re talking to may not think so, and in fact you could simply be wrong. At the very least, you should be taking the time to explain how one follows from the other; and really, you should consider whether the connection is actually as tight as you thought, or if perhaps one can believe that rationality is good without being Bayesian or believe that women are people without supporting abortion rights.

I think when economists describe love or marriage as a “market”, they are applying a motte and bailey doctrine. They may actually be doing something even worse than that, by equivocating on the meaning of “market”. But even if any given economist uses the word “market” totally consistently, the fact that different economists of the same broad political alignment use the word differently adds up to a motte and bailey doctrine.

The doctrine is this: “There have always been markets.”

The motte is something like this: “Humans have always engaged in interaction for mutual benefit.”

This is undeniably true. In fact, it’s not even uninteresting. As mottes go, it’s a pretty nice one; it’s worth spending some time there. In the endless quest for an elusive “human nature”, I think you could do worse than to focus on our universal tendency to engage in interaction for mutual benefit. (Don’t other species do it too? Yes, but that’s just it—they are precisely the ones that seem most human.)

And if you want to define any mutually-beneficial interaction as a “market trade”, I guess it’s your right to do that. I think this is foolish and confusing, but legislating language has always been a fool’s errand.

But of course the more standard meaning of the word “market” implies buyers and sellers exchanging goods and services for monetary prices. You can extend it a little to include bartering, various forms of financial intermediation, and the like; but basically you’re still buying and selling.

That makes this the bailey: “Humans have always engaged in buying and selling of goods and services at prices.”

And that, dear readers, is ahistorical nonsense. We’ve only been using money for a few thousand years, and it wasn’t until the Industrial Revolution that we actually started getting the majority of our goods and services via market trades. Economists like to tell a story where bartering preceded the invention of money, but there’s basically no evidence of that. Bartering seems to be what people do when they know how money works but don’t have any money to work with.

Before there was money, there were fundamentally different modes of interaction: Sharing, ritual, debts of honor, common property, and, yes, love.

These were not markets. They perhaps shared some very broad features of markets—such as the interaction for mutual benefit—but they lacked the defining attributes that make a market a market.

Why is this important? Because this doctrine is used to transform more and more of our lives into actual markets, on the grounds that they were already “markets”, and we’re just using “more efficient” kinds of markets. But in fact what’s happening is we are trading one fundamental mode of human interaction for another: Where we used to rely upon norms or trust or mutual affection, we instead rely upon buying and selling at prices.

In some cases, this actually is a good thing: Markets can be very powerful, and are often our best tool when we really need something done. In particular, it’s clear at this point that norms and trust are not sufficient to protect us against climate change. All the “Reduce, Reuse, Recycle” PSAs in the world won’t do as much as a carbon tax. When millions of lives are at stake, we can’t trust people to do the right thing; we need to twist their arms however we can.

But markets are in some sense a brute-force last-resort solution; they commodify and alienate (Marx wasn’t wrong about that), and despite our greatly elevated standard of living, the alienation and competitive pressure of markets seem to be keeping most of us from really achieving happiness.

This is why it’s extremely dangerous to talk about a “market for love”. Love is perhaps the last bastion of our lives that has not been commodified into a true market, and if it goes, we’ll have nothing left. If sexual relationships built on mutual affection were to disappear in favor of apps that will summon a prostitute or a sex robot at the push of a button, I would count that as a great loss for human civilization. (How we should regulate prostitution or sex robots is a different question, which I said I’d leave aside for this post.) A “market for love” is in fact a world with no love at all.

Moral luck: How it matters, and how it doesn’t

Feb 10 JDN 2458525

The concept of moral luck is now relatively familiar to most philosophers, but I imagine most other people haven’t heard it before. It sounds like a contradiction, which is probably why it drew so much attention.

The term “moral luck” seems to have originated in an essay by Thomas Nagel, but the intuition is much older, dating at least back to Greek philosophy (and really probably older than that; we just don’t have good records that far back).

The basic argument is this:

Most people would say that if you have no control over something, you can’t be held morally responsible for it. It’s just luck.

But if you look closely, everything we do—including things we would conventionally regard as moral actions—depends heavily on things we don’t have control over.

Therefore, either we can be held responsible for things we have no control over, or we can’t be held responsible for anything at all!

Neither approach seems very satisfying; hence the conundrum.

For example, consider four drivers:

Anna is driving normally, and nothing of note happens.

Bob is driving recklessly, but nothing of note happens.

Carla is driving normally, but a child stumbles out into the street and she runs the child over.

Dan is driving recklessly, and a child stumbles out into the street and he runs the child over.

The presence or absence of a child in the street was not in the control of any of the four drivers. Yet I think most people would agree that Dan should be held more morally responsible than Bob, and Carla should be held more morally responsible than Anna. (Whether Bob should be held more morally responsible than Carla is not as clear.) Yet both Bob and Dan were driving recklessly, and both Anna and Carla were driving normally. The moral evaluation seems to depend upon the presence of the child, which was not under the drivers’ control.

Other philosophers have argued that the difference is an epistemic one: We know the moral character of someone who drove recklessly and ran over a child better than the moral character of someone who drove recklessly and didn’t run over a child. But do we, really?

Another response is simply to deny that we should treat Bob and Dan any differently, and say that reckless driving is reckless driving, and safe driving is safe driving. For this particular example, maybe that works. But it’s not hard to come up with better examples where that doesn’t work:

Ted is a psychopathic serial killer. He kidnaps, rapes, and murders people. Maybe he can control whether or not he rapes and murders someone. But the reason he rapes and murders someone is that he is a psychopath. And he can’t control that he is a psychopath. So how can we say that his actions are morally wrong?

Obviously, we want to say that his actions are morally wrong.

I have heard one alternative, which is to consider psychopaths as morally equivalent to viruses: Zero culpability, zero moral value, something morally neutral but dangerous that we should contain or eradicate as swiftly as possible. HIV isn’t evil; it’s just harmful. We should kill it not because it deserves to die, but because it will kill us if we don’t. On this theory, Ted doesn’t deserve to be executed; it’s just that we must execute him in order to protect ourselves from the danger he poses.

But this quickly becomes unsatisfactory as well:

Jonas is a medical researcher whose work has saved millions of lives. Maybe he can control the research he works on, but he only works on medical research because he was born with a high IQ and strong feelings of compassion. He can’t control that he was born with a high IQ and strong feelings of compassion. So how can we say his actions are morally right?

This is the line of reasoning that quickly leads to saying that all actions are outside our control, and therefore morally neutral; and then the whole concept of morality falls apart.

So we need to draw the line somewhere; there has to be a space of things that aren’t in our control, but nonetheless carry moral weight. That’s moral luck.

Philosophers have actually identified four types of moral luck, which turns out to be tremendously useful in drawing that line.

Resultant luck is luck that determines the consequences of your actions, how things “turn out”. Happening to run over the child because you couldn’t swerve fast enough is resultant luck.

Circumstantial luck is luck that determines the sorts of situations you are in, and what moral decisions you have to make. A child happening to stumble across the street is circumstantial luck.

Constitutive luck is luck that determines who you are, your own capabilities, virtues, intentions and so on. Having a high IQ and strong feelings of compassion is constitutive luck.

Causal luck is the inherent luck written into the fabric of the universe that determines all events according to the fundamental laws of physics. Causal luck is everything and everywhere; it is written into the universal wavefunction.

I have a very strong intuition that this list is ordered; going from top to bottom makes things “less luck” in a vital sense.

Resultant luck is pure luck, what we originally meant when we said the word “luck”. It’s the roll of the dice.

Circumstantial luck is still mostly luck, but maybe not entirely; there are some aspects of it that do seem to be under our control.

Constitutive luck is maybe luck, sort of, but not really. Yes, “You’re lucky to be so smart” makes sense, but “You’re lucky to not be a psychopath” already sounds pretty weird. We’re entering territory here where our ordinary notions of luck and responsibility really don’t seem to apply.

Causal luck is not luck at all. Causal luck is really the opposite of luck: Without a universe with fundamental laws of physics to maintain causal order, none of our actions would have any meaning at all. They wouldn’t even really be actions; they’d just be events. You can’t do something in a world of pure chaos; things only happen. And being made of physical particles doesn’t make you any less what you are; a table made of wood is still a table, and a rocket made of steel is still a rocket. Thou art physics.

And that, my dear reader, is the solution to the problem of moral luck. Forget “causal luck”, which isn’t luck at all. Then, draw a hard line at constitutive luck: regardless of how you became who you are, you are responsible for what you do.

You don’t need to have control over who you are (what would that even mean!?).

You merely need to have control over what you do.

This is how the word “control” is normally used, by the way; when we say that a manufacturing process is “under control” or a pilot “has control” of an airplane, we aren’t asserting some grand metaphysical claim of ultimate causation. We’re merely saying that the system is working as it’s supposed to; the outputs coming out are within the intended parameters. This is all we need for moral responsibility as well.

In some cases, maybe people’s brains really are so messed up that we can’t hold them morally responsible; they aren’t “under control”. Okay, we’re back to the virus argument then: Contain or eradicate. If a brain tumor makes you so dangerous that we can’t trust you around sharp objects, unless we can take out that tumor, we’ll need to lock you up somewhere where you can’t get any sharp objects. Sorry. Maybe you don’t deserve that in some ultimate sense, but it’s still obviously what we have to do. And this is obviously quite exceptional; most people are not suffering from brain tumors that radically alter their personalities—and even most psychopaths are otherwise neurologically normal.

Ironically, it’s probably my fellow social scientists who will scoff the most at this answer. “But so much of what we are is determined by our neurochemistry/cultural norms/social circumstances/political institutions/economic incentives!” Yes, that’s true. And if we want to change those things to make us and others better, I’m all for it. (Well, neurochemistry is a bit problematic, so let’s focus on the others first—but if you can make a pill that cures psychopathy, I would support mandatory administration of that pill to psychopaths in positions of power.)

When you make a moral choice, we have to hold you responsible for that choice.

Maybe Ted is psychopathic and sadistic because there was too much lead in his water as a child. That’s a good reason to stop putting lead in people’s water (like we didn’t already have plenty!); but it’s not a good reason to let Ted off the hook for all those rapes and murders.

Maybe Jonas is intelligent and compassionate because his parents were wealthy and well-educated. That’s a good reason to make sure people are financially secure and well-educated (again, did we need more?); but it’s not a good reason to deny Jonas his Nobel Prize for saving millions of lives.

Yes, “personal responsibility” has been used by conservatives as an excuse to not solve various social and economic problems (indeed, it has specifically been used to stop regulations on lead in water and public funding for education). But that’s not actually anything wrong with personal responsibility. We should hold those conservatives personally responsible for abusing the term in support of their destructive social and economic policies. No moral freedom is lost by preventing lead from turning children into psychopaths. No personal liberty is destroyed by ensuring that everyone has access to a good education.

In fact, there is evidence that telling people who are suffering from poverty or oppression that they should take personal responsibility for their choices benefits them. Self-perceived victimhood is linked to all sorts of destructive behaviors, even controlling for prior life circumstances. Feminist theorists have written about how taking responsibility even when you are oppressed can empower you to make your life better. Yes, obviously, we should be helping people when we can. But telling them that they are hopeless unless we come in to rescue them isn’t helping them.

This way of thinking may require a delicate balance at times, but it’s not inconsistent. You can both fight against lead pollution and support the criminal justice system. You can believe in both public education and the Nobel Prize. We should be working toward a world where people are constituted with more virtue for reasons beyond their control, and where people are held responsible for the actions they take that are under their control.

We can continue to talk about “moral luck” referring to constitutive luck, I suppose, but I think the term obscures more than it illuminates. The “luck” that made you a good or a bad person is very different from the “luck” that decides how things happen to turn out.