Bayesian updating with irrational belief change

Jul 27 JDN 2460884

For the last few weeks I’ve been working at a golf course. (It’s a bit of an odd situation: I’m not actually employed by the golf course; I’m contracted by a nonprofit to be a “job coach” for a group of youths who are part of a work program that involves them working at the golf course.)

I hate golf. I have always hated golf. I find it boring and pointless—which, to be fair, is my reaction to most sports—and also an enormous waste of land and water. A golf course is also a great place for oligarchs to arrange collusion.

But I noticed something about being on the golf course every day, seeing people playing and working there: I feel like I hate it a bit less now.

This is almost certainly a mere-exposure effect: Simply being exposed to something many times makes it feel familiar, and that tends to make you like it more, or at least dislike it less. (There are some exceptions: repeated exposure to trauma can actually make you more sensitive to it, hating it even more.)

I kinda thought this would happen. I didn’t really want it to happen, but I thought it would.

This is very interesting from the perspective of Bayesian reasoning, because it is a theorem of Bayesian logic (though I cannot seem to find anyone naming it; it’s something like a folk theorem, I guess?) that the following is true:

The prior expectation of the posterior is the expectation of the prior.

The prior is what you believe before observing the evidence; the posterior is what you believe afterward. This theorem describes a relationship that holds between them—it is essentially the law of iterated expectations, applied to your own future beliefs.

This theorem means that, if I am being optimally rational, I should take into account all expected future evidence, not just evidence I have already seen. I should not expect, on balance, that future evidence will shift my beliefs in any particular direction—if I did expect such a shift, I should change my beliefs right now!

This might be easier to grasp with an example.

Suppose I am trying to predict whether it will rain at 5:00 pm tomorrow, and I currently estimate that the probability of rain is 30%. This is my prior probability.

What will actually happen tomorrow is that it will rain or it won’t; so my posterior probability will either be 100% (if it rains) or 0% (if it doesn’t). But I had better assign a 30% chance to the event that will make me 100% certain it rains (namely, I see rain), and a 70% chance to the event that will make me 100% certain it doesn’t rain (namely, I see no rain); if I were to assign any other probabilities, then I must not really think the probability of rain at 5:00 pm tomorrow is 30%.

(The keen Bayesian will notice that the expected variance of my posterior need not be the variance of my prior: My initial variance is relatively high (it’s actually 0.3*0.7 = 0.21, because this is a Bernoulli distribution), because I don’t know whether it will rain or not; but my posterior variance will be 0, because I’ll know the answer once it rains or doesn’t.)
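
Here is a minimal sketch in Python of the rain example, using only the numbers above; it just checks that the probability-weighted average of the two possible posteriors recovers the prior, while the expected posterior variance drops to zero:

```python
# Rain example from the post: prior probability of rain at 5:00 pm is 30%.
prior = 0.30

# Two possible observations: it rains (posterior 1.0) or it doesn't (posterior 0.0).
outcomes = [
    (prior,     1.0),  # P(see rain) = 0.30 -> posterior = 100%
    (1 - prior, 0.0),  # P(see no rain) = 0.70 -> posterior = 0%
]

# Prior expectation of the posterior: 0.3*1.0 + 0.7*0.0 = 0.3, the prior itself.
expected_posterior = sum(p * post for p, post in outcomes)

# Variance of the prior (a Bernoulli): 0.3 * 0.7 = 0.21.
prior_variance = prior * (1 - prior)

# Expected variance of the posterior: once I observe, the variance is 0 either way.
expected_posterior_variance = sum(p * post * (1 - post) for p, post in outcomes)

print(expected_posterior)              # 0.3
print(round(prior_variance, 2))        # 0.21
print(expected_posterior_variance)     # 0.0
```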

It’s a bit trickier to analyze, but this also works even if the evidence won’t make me certain. Suppose I am trying to determine the probability that some hypothesis is true. If I expect to see any evidence that might change my beliefs at all, then I should, on average, expect to see just as much evidence making me believe the hypothesis more as I see evidence that will make me believe the hypothesis less. If that is not what I expect, I should really change how much I believe the hypothesis right now!
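
To see that the balance holds even for imperfect evidence, here is a short sketch with made-up likelihoods (the 0.8 and 0.2 below are purely illustrative, not from anything above): whatever the likelihoods are, the probability-weighted average of the “belief goes up” posterior and the “belief goes down” posterior lands exactly on the prior.

```python
prior = 0.30              # P(hypothesis) before seeing anything
p_e_given_h = 0.80        # hypothetical likelihood: P(evidence | hypothesis true)
p_e_given_not_h = 0.20    # hypothetical likelihood: P(evidence | hypothesis false)

# Probability of actually seeing the evidence.
p_evidence = prior * p_e_given_h + (1 - prior) * p_e_given_not_h

# Posterior by Bayes' rule, in the two possible worlds: evidence seen or not seen.
post_if_seen = prior * p_e_given_h / p_evidence
post_if_not_seen = prior * (1 - p_e_given_h) / (1 - p_evidence)

# The prior expectation of the posterior is back at the prior.
expected_posterior = p_evidence * post_if_seen + (1 - p_evidence) * post_if_not_seen

print(round(post_if_seen, 3), round(post_if_not_seen, 3))  # 0.632 up, 0.097 down
print(round(expected_posterior, 3))                        # 0.3
```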

So what does this mean for the golf example?

Was I wrong to hate golf quite so much before, because I knew that spending time on a golf course might make me hate it less?

I don’t think so.

See, the thing is: I know I’m not perfectly rational.

If I were indeed perfectly rational, then anything I expect to change my beliefs is a rational Bayesian update, and I should indeed factor it into my prior beliefs.

But if I know for a fact that I am not perfectly rational, that there are things which will change my beliefs in ways that make them deviate from rational Bayesian updating, then in fact I should not take those expected belief changes into account in my prior beliefs—since I expect to be wrong later, updating on that would just make me wrong now as well. I should only update on the expected belief changes that I believe will be rational.

This is something that a boundedly-rational person should do that neither a perfectly-rational nor perfectly-irrational person would ever do!

But maybe you don’t find the golf example convincing. Maybe you think I shouldn’t hate golf so much, and it’s not irrational for me to change my beliefs in that direction.

Very well. Let me give you a thought experiment which provides a very clear example of a time when you definitely would think your belief change was irrational.

To be clear, I’m not suggesting the two situations are in any way comparable; the golf thing is pretty minor, and for the thought experiment I’m intentionally choosing something quite extreme.

Here’s the thought experiment.

A mad scientist offers you a deal: Take this pill and you will receive $50 million. Naturally, you ask what the catch is. The catch, he explains, is that taking the pill will make you staunchly believe that the Holocaust didn’t happen. Take this pill, and you’ll be rich, but you’ll become a Holocaust denier. (I have no idea if making such a pill is even possible, but it’s a thought experiment, so bear with me. It’s certainly far less implausible than Swampman.)

I will assume that you are not, and do not want to become, a Holocaust denier. (If not, I really don’t know what else to say to you right now. It happened.) So if you take this pill, your beliefs will change in a clearly irrational way.

But I still think it’s probably justifiable to take the pill. This is absolutely life-changing money, for one thing, and being a random person who is a Holocaust denier isn’t that bad in the scheme of things. (Maybe it would be worse if you were in a position to have some kind of major impact on policy.) In fact, before taking the pill, you could write out a contract with a trusted friend that will force you to donate some of the $50 million to high-impact charities—and perhaps some of it to organizations that specifically fight Holocaust denial—thus ensuring that the net benefit to humanity is positive. Once you take the pill, you may be mad about the contract, but you’ll still have to follow it, and the net benefit to humanity will still be positive as reckoned by your prior, more correct, self.

It’s certainly not irrational to take the pill. There are perfectly-reasonable preferences you could have (indeed, likely do have) that would say that getting $50 million is more important than having incorrect beliefs about a major historical event.

And if it’s rational to take the pill, and you intend to take the pill, then of course it’s rational to believe that in the future, you will have taken the pill and you will become a Holocaust denier.

But it would be absolutely irrational for you to become a Holocaust denier right now because of that. The pill isn’t going to provide evidence that the Holocaust didn’t happen (for no such evidence exists); it’s just going to alter your brain chemistry in such a way as to make you believe that the Holocaust didn’t happen.

So here we have a clear example where you expect to be more wrong in the future.

Of course, if this really only happens in weird thought experiments about mad scientists, then it doesn’t really matter very much. But I contend it happens in reality all the time:

  • You know that by hanging around people with an extremist ideology, you’re likely to adopt some of that ideology, even if you really didn’t want to.
  • You know that if you experience a traumatic event, it is likely to make you anxious and fearful in the future, even when you have little reason to be.
  • You know that if you have a mental illness, you’re likely to form harmful, irrational beliefs about yourself and others whenever you have an episode of that mental illness.

Now, all of these belief changes are things you would likely try to guard against: If you are a researcher studying extremists, you might make a point of taking frequent vacations to talk with regular people and help yourself re-calibrate your beliefs back to normal. Nobody wants to experience trauma, and if you do, you’ll likely seek out therapy or other support to help heal yourself from that trauma. And one of the most important things they teach you in cognitive-behavioral therapy is how to challenge and modify harmful, irrational beliefs when they are triggered by your mental illness.

But these guarding actions only make sense precisely because the anticipated belief change is irrational. If you anticipate a rational change in your beliefs, you shouldn’t try to guard against it; you should factor it into what you already believe.

This also gives me a little more sympathy for Evangelical Christians who try to keep their children from being exposed to secular viewpoints. I think we both agree that having more contact with atheists will make their children more likely to become atheists—but we view this expected outcome differently.

From my perspective, this is a rational change, and it’s a good thing, and I wish they’d factor it into their current beliefs already. (Like hey, maybe if talking to a bunch of smart people and reading a bunch of books on science and philosophy makes you think there’s no God… that might be because… there’s no God?)

But I think, from their perspective, this is an irrational change, it’s a bad thing, the children have been “tempted by Satan” or something, and thus it is their duty to protect their children from this harmful change.

Of course, I am not a subjectivist. I believe there’s a right answer here, and in this case I’m pretty sure it’s mine. (Wouldn’t I always say that? No, not necessarily; there are lots of matters for which I believe that there are experts who know better than I do—that’s what experts are for, really—and thus if I find myself disagreeing with those experts, I try to educate myself more and update my beliefs toward theirs, rather than just assuming they’re wrong. I will admit, however, that a lot of people don’t seem to do this!)

But this does change how I might tend to approach the situation of exposing their children to secular viewpoints. I now understand better why they would see that exposure as a harmful thing, and thus be resistant to actions that otherwise seem obviously beneficial, like teaching kids science and encouraging them to read books. In order to get them to stop “protecting” their kids from the free exchange of ideas, I might first need to persuade them that introducing some doubt into their children’s minds about God isn’t such a terrible thing. That sounds really hard, but it at least clearly explains why they are willing to fight so hard against things that, from my perspective, seem good. (I could also try to convince them that exposure to secular viewpoints won’t make their kids doubt God, but the thing is… that isn’t true. I’d be lying.)

That is, Evangelical Christians are not simply incomprehensibly evil authoritarians who hate truth and knowledge; they quite reasonably want to protect their children from things that will harm them, and they firmly believe that being taught about evolution and the Big Bang will make their children more likely to suffer great harm—indeed, the greatest harm imaginable, the horror of an eternity in Hell. Convincing them that this is not the case—indeed, ideally, that there is no such place as Hell—sounds like a very tall order; but I can at least more keenly grasp the equilibrium they’ve found themselves in, where they believe that anything that challenges their current beliefs poses a literally existential threat. (Honestly, as a memetic adaptation, this is brilliant. Like a turtle, the meme has grown itself a nigh-impenetrable shell. No wonder it has managed to spread throughout the world.)

The problem with “human capital”

Dec 3 JDN 2460282

By now, human capital is a standard part of the economic jargon lexicon. It has even begun to filter down into society at large. Business executives talk frequently about “investing in their employees”. Politicians describe their education policies as “investing in our children”.

The good news: This gives businesses a reason to train their employees, and governments a reason to support education.

The bad news: This is clearly the wrong reason, and it is inherently dehumanizing.

The notion of human capital means treating human beings as if they were a special case of machinery. It says that a business may own and value many forms of productive capital: Land, factories, vehicles, robots, patents, employees.

But wait: Employees?

Businesses don’t own their employees. They didn’t buy them. They can’t sell them. They couldn’t make more of them in another factory. They can’t recycle them when they are no longer profitable to maintain.

And the problem is precisely that they would if they could.

Indeed, they used to. Slavery pre-dates capitalism by millennia, but the two quite successfully coexisted for hundreds of years. From the dawn of civilization up until all too recently, people literally were capital assets—and we now remember it as one of the greatest horrors human beings have ever inflicted upon one another.

Nor is slavery truly defeated; it has merely been weakened and banished to the shadows. The percentage of the world’s population currently enslaved is as low as it has ever been, but there are still millions of people enslaved. In Mauritania, slavery wasn’t even illegal until 1981, and those laws weren’t strictly enforced until 2007. (I had graduated from high school!) One of the most shocking things about modern slavery is how cheaply human beings are willing to sell other human beings; I have bought sandwiches that cost more than some people have paid for other people.

The notion of “human capital” basically says that slavery is the correct attitude to have toward people. It says that we should value human beings for their usefulness, their productivity, their profitability.

Business executives are quite happy to see the world in that way. It makes the way they have spent their lives seem worthwhile—perhaps even best—while allowing them to turn a blind eye to the suffering they have neglected or even caused along the way.

I’m not saying that most economists believe in slavery; on the contrary, economists led the charge of abolitionism, and the reason we wear the phrase “the dismal science” like a badge is that the accusation was first leveled at us for our skepticism toward slavery.

Rather, I’m saying that jargon is not ethically neutral. The names we use for things have power; they affect how people view the world.

This is why I endeavor to always speak of net wealth rather than net worth—because a billionaire is not worth more than other people. I’m not even sure you should speak of the net worth of Tesla Incorporated; perhaps it would be better to simply speak of its net asset value or market capitalization. But at least Tesla is something you can buy and sell (piece by piece). Elon Musk is not.

Likewise, I think we need a new term for the knowledge, skills, training, and expertise that human beings bring to their work. It is clearly extremely important; in fact in some sense it’s the most important economic asset, as it’s the only one that can substitute for literally all the others—and the one that others can least substitute for.

Human ingenuity can’t substitute for air, you say? Tell that to Buzz Aldrin—or the people who were once babies that breathed liquid for their first months of life. Yes, it’s true, you need something for human ingenuity to work with; but it turns out that with enough ingenuity, you may not need much, or even anything in particular. One day we may manufacture the air, water and food we need to live from pure energy—or we may embody our minds in machines that no longer need those things.

Indeed, it is the expansion of human know-how and technology that has been responsible for the vast majority of economic growth. We may work a little harder than many of our ancestors (depending on which ancestors you have in mind), but we accomplish with that work far more than they ever could have, because we know so many things they did not.

All that capital we have now is the work of that ingenuity: Machines, factories, vehicles—even land, if you consider all the ways that we have intentionally reshaped the landscape.

Perhaps, then, what we really need to do is invert the expression:

Humans are not machines. Machines are embodied ingenuity.

We should not think of human beings as capital. We should think of capital as the creation of human beings.

Marx described capital as “embodied labor”, but that’s really less accurate: What makes a robot a robot is much less about the hours spent building it, than the centuries of scientific advancement needed to understand how to make it in the first place. Indeed, if that robot is made by another robot, no human need ever have done any labor on it at all. And its value comes not from the work put into it, but the work that comes out of it.

Like so much of neoliberal ideology, the notion of human capital seems to treat profit and economic growth as inherent ends in themselves. Human beings only become valued insofar as we advance the will of the almighty dollar. We forget that the whole reason we should care about economic growth in the first place is that it benefits people. Money is the means, not the end; people are the end, not the means.

We should not think in terms of “investing in children”, as if they were an asset that was meant to yield a return. We should think of enriching our children—of building a better world for them to live in.

We should not speak of “investing in employees”, as though they were just another asset. We should instead respect employees and seek to treat them with fairness and justice.

That would still give us plenty of reason to support education and training. But it would also give us a much better outlook on the world and our place in it.

You are worth more than your money or your job.

The economy exists for people, not the reverse.

Don’t ever forget that.

We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.

E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we will soon have the power to modify ourselves or we’ll all be replaced by machines?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien from those of our forebears, and we have reason to suspect that our descendants’ values will be no more different from ours than ours are from theirs.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the world catch up to what the best of us already believe and strive for—hardly something to fear. The values that I believe in are surely not the values we as a civilization act upon, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as a collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that he takes it to a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come where stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

Building a wider tent is not compromising on your principles

August 20, JDN 2457986

After humiliating defeats in the last election, the Democratic Party is now debating how to recover and win future elections. One particularly hotly contested question is whether to include candidates who agree with the Democratic Party on most things, but still oppose abortion.

This would almost certainly improve the chances of winning seats in Congress, particularly in the South. But many have argued that this is a bridge too far, it amounts to compromising on fundamental principles, and the sort of DINO (Democrat-In-Name-Only) we’d end up with are no better than no Democrats at all.

I consider this view deeply misguided; indeed, I think it’s a good portion of the reason why we got so close to winning the culture wars and yet suddenly there are literal Nazis marching in the streets. Insisting upon ideological purity on every issue is a fantastic way to amplify the backlash against you and ensure that you will always lose.

To show why, I offer you a simple formal model. Let’s make it as abstract as possible, and say there are five different issues, A, B, C, D, and E, and on each of them you can either choose Yes or No.

Furthermore, let’s suppose that on every single issue, the opinion of a 60% majority is “Yes”. If you are a political party that wants to support “Yes” on every issue, which of these options should you choose:

Option 1: Only run candidates who support “Yes” on every single issue

Option 2: Only run candidates who support “Yes” on at least 4 out of 5 issues

Option 3: Only run candidates who support “Yes” on at least 3 out of 5 issues

For now, let’s assume that people’s beliefs within a district are very strongly correlated (people believe what their friends, family, colleagues, and neighbors believe). Then assume that each district’s belief on each issue is independently and identically distributed (essentially, each district flips a weighted coin to decide its belief on each issue). These are of course wildly oversimplified assumptions, but they keep the problem simple, and I can relax them a little in a moment.

Suppose there are 100 districts up for grabs (like, say, the US Senate). Then there will be:

(0.6)^5*100 = 8 districts that support “Yes” on every single issue.

5*(0.6)^4*(0.4)*100 = 26 districts that support “Yes” on 4 out of 5 issues.

10*(0.6)^3*(0.4)^2*100 = 34 districts that support “Yes” on 3 out of 5 issues.

10*(0.6)^2*(0.4)^3*100 = 23 districts that support “Yes” on 2 out of 5 issues.

5*(0.6)^1*(0.4)^4*100 = 8 districts that support “Yes” on 1 out of 5 issues.

(0.4)^5*100 = 1 district that doesn’t support “Yes” on any issues.

The ideological purists want us to choose option 1, so let’s start with that. If you only run candidates who support “Yes” on every single issue, you will win only eight districts. Your party will lose 92 out of 100 seats. You will become a minor, irrelevant party of purists with no actual power—despite the fact that the majority of the population agrees with you on any given issue.

If you choose option 2, and run candidates who differ at most by one issue, you will still lose, but not by nearly as much. You’ll claim a total of 34 seats. That might at least be enough to win some votes or drive some committees.

If you want a majority, you need to go with option 3, and run candidates who agree on at least 3 out of 5 issues. Only then will you win 68 seats and be able to drive legislative outcomes.
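
If you like, here is a minimal sketch of that arithmetic in Python, assuming the i.i.d. 60% model exactly as stated above (the only liberty is letting Python do the rounding, which nudges the 3-out-of-5 count to 35 rather than 34):

```python
from math import comb

p, n_issues, n_districts = 0.6, 5, 100

# Expected number of districts whose voters say "Yes" on exactly k of the 5 issues.
districts = {k: comb(n_issues, k) * p**k * (1 - p)**(n_issues - k) * n_districts
             for k in range(n_issues, -1, -1)}
print({k: round(v) for k, v in districts.items()})
# {5: 8, 4: 26, 3: 35, 2: 23, 1: 8, 0: 1}   (the post rounds 34.56 down to 34)

# Seats winnable under each option: a party that only runs candidates agreeing on
# at least m issues can only win districts that agree on at least m issues.
for option, m in [(1, 5), (2, 4), (3, 3)]:
    seats = sum(v for k, v in districts.items() if k >= m)
    print(f"Option {option}: about {round(seats)} seats")
# Option 1: about 8 seats / Option 2: about 34 seats / Option 3: about 68 seats
```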

But wait! you may be thinking. You only won in that case by including people who don’t agree with your core platform; so what use is it to win the seats? You could win every seat by including every possible candidate, and then accomplish absolutely nothing!

Yet notice that even under option 3, you’re still only including people who agree with the majority of your platform. You aren’t including absolutely everyone. Indeed, once you parse out all the combinations, it becomes clear that by running these candidates, you will win the vote on almost every issue.

8 of your candidates are A1, B1, C1, D1, E1, perfect partisans; they’ll support you every time.

6 of your candidates are A1, B1, C1, D1, E0, disagreeing only on issue E.

5 of your candidates are A1, B1, C1, D0, E1, disagreeing only on issue D.

5 of your candidates are A1, B1, C0, D1, E1, disagreeing only on issue C.

5 of your candidates are A1, B0, C1, D1, E1, disagreeing only on issue B.

5 of your candidates are A0, B1, C1, D1, E1, disagreeing only on issue A.

4 of your candidates are A1, B1, C1, D0, E0, disagreeing on issues D and E.

4 of your candidates are A0, B1, C1, D1, E0, disagreeing on issues E and A.

4 of your candidates are A0, B0, C1, D1, E1, disagreeing on issues B and A.

4 of your candidates are A1, B0, C1, D1, E0, disagreeing on issues E and B.

3 of your candidates are A1, B1, C0, D0, E1, disagreeing on issues D and C.

3 of your candidates are A1, B0, C0, D1, E1, disagreeing on issues C and B.

3 of your candidates are A0, B1, C1, D0, E1, disagreeing on issues D and A.

3 of your candidates are A0, B1, C0, D1, E1, disagreeing on issues C and A.

3 of your candidates are A1, B0, C1, D0, E1, disagreeing on issues D and B.

3 of your candidates are A1, B1, C0, D1, E0, disagreeing on issues C and E.

I took the liberty of rounding up or down as needed to make the numbers add up to 68. I biased toward rounding up on issue E, to concentrate all the dissent on one particular issue. This is sort of a worst-case scenario.

Since 60% of the population agrees with you on any given issue, the opposing parties couldn’t have only chosen pure partisans; they had to cast some kind of big tent as well. So I’m going to assume that the opposing candidates look like this:

8 of their candidates are A1, B0, C0, D0, E0, agreeing with you only on issue A.

8 of their candidates are A0, B1, C0, D0, E0, agreeing with you only on issue B.

8 of their candidates are A0, B0, C1, D0, E0, agreeing with you only on issue C.

8 of their candidates are A0, B0, C0, D1, E0, agreeing with you only on issue D.

This is actually very conservative; despite the fact that there should be only 9 districts that disagree with you on 4 or more issues, they somehow managed to win 32 districts with such candidates. Let’s say it was gerrymandering or something.

Now, let’s take a look at the voting results, shall we?

A vote for “Yes” on issue A will have 8 + 6 + 3*5 + 2*4 + 4*3 + 8 = 57 votes.

A vote for “Yes” on issue B will have 8 + 6 + 3*5 + 2*4 + 4*3 + 8 = 57 votes.

A vote for “Yes” on issue C will have 8 + 6 + 3*5 + 4*4 + 2*3 + 8 = 59 votes.

A vote for “Yes” on issue D will have 8 + 6 + 3*5 + 3*4 + 3*3 + 8 = 58 votes.

A vote for “Yes” on issue E will have 8 + 0 + 4*5 + 1*4 + 5*3 = 47 votes.

Final results? You win on issues A, B, C, and D, and lose very narrowly on issue E. Even if the other party somehow managed to maintain total ideological compliance and you couldn’t get a single vote from them, you’d still win on issue C and tie on issue D. If on the other hand your party can convince just 4 of your own anti-E candidates to vote in favor of E for the good of the party, you can win on E as well.
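
For anyone who wants to check the tallies, here is a short sketch that simply encodes the candidate breakdown above and re-counts the “Yes” votes on each issue (nothing new is assumed; it is just bookkeeping):

```python
# Each entry is (number of members, (A, B, C, D, E)) with 1 = "Yes", 0 = "No".
your_party = [
    (8, (1, 1, 1, 1, 1)),                                             # perfect partisans
    (6, (1, 1, 1, 1, 0)), (5, (1, 1, 1, 0, 1)), (5, (1, 1, 0, 1, 1)),
    (5, (1, 0, 1, 1, 1)), (5, (0, 1, 1, 1, 1)),                       # one disagreement
    (4, (1, 1, 1, 0, 0)), (4, (0, 1, 1, 1, 0)), (4, (0, 0, 1, 1, 1)),
    (4, (1, 0, 1, 1, 0)), (3, (1, 1, 0, 0, 1)), (3, (1, 0, 0, 1, 1)),
    (3, (0, 1, 1, 0, 1)), (3, (0, 1, 0, 1, 1)), (3, (1, 0, 1, 0, 1)),
    (3, (1, 1, 0, 1, 0)),                                             # two disagreements
]
other_party = [
    (8, (1, 0, 0, 0, 0)), (8, (0, 1, 0, 0, 0)),
    (8, (0, 0, 1, 0, 0)), (8, (0, 0, 0, 1, 0)),
]

assert sum(n for n, _ in your_party) == 68
assert sum(n for n, _ in other_party) == 32

chamber = your_party + other_party
for i, issue in enumerate("ABCDE"):
    yes_votes = sum(n for n, votes in chamber if votes[i] == 1)
    print(issue, yes_votes)   # A 57, B 57, C 59, D 58, E 47
```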

Of course, in all of the above I assumed that districts are homogeneous and independently and identically distributed. Neither of those things is true.

The homogeneity assumption actually turns out to be pretty innocuous; if each district elects a candidate by plurality vote from two major parties, the Median Voter Theorem applies and the result is as if there were a single representative median voter making the decision.

The independence assumption is not innocuous, however. In reality, there will be strong correlations between the views of different people in different districts, and strong correlations across issues among individual voters. It is in fact quite likely that people who believe A1, B1, C1, D1 are more likely to believe E1 than people who believe A0, B0, C0, D0.

Given that, all the numbers above would shift, in the following way: There would be a larger proportion of pure partisans, and a smaller proportion of moderates with totally mixed views.

Does this undermine the argument? Not really. You need an awful lot of pure partisanship to make that a viable electoral strategy. I won’t go through all the cases again because it’s a mess, but let’s just look at those voting numbers again.

Suppose that instead of it being an even 60% regardless of your other beliefs, your probability of a “Yes” belief on a given issue is 80% if the majority of your previous beliefs are “Yes”, and a probability of 40% if the majority of your previous beliefs are “No”.

Then out of 100 districts:

(0.6)^3(0.8)^2*100 = 14 will be A1, B1, C1, D1, E1 partisans.

Fourteen. Better than eight, I suppose; but not much.

Okay, let’s try even stronger partisan loyalty. Suppose that your belief on A is randomly chosen with 60% probability, but every belief thereafter is 90% “Yes” if you are A1 and 30% “Yes” if you are A0.

Then out of 100 districts:

(0.6)(0.9)^4*100 = 39 will be A1, B1, C1, D1, E1 partisans.

You will still not be able to win a majority of seats using only hardcore partisans.

Of course, you could assume even higher partisanship rates, but then it really wasn’t fair to assume that there are only five issues to choose from. Even with 95% partisanship on each issue, if there are 20 issues:

(0.95)^20*100 = 36
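
That last calculation generalizes: under the simplifying assumption that agreement on each issue is independent with probability p, the pure-partisan base is just p^n, and it shrinks quickly as the number of live issues n grows. A quick sketch (the 0.8 and 0.9 rows are illustrative values of mine, not anything from the model above):

```python
def pure_partisan_share(p: float, n_issues: int) -> float:
    """Share of districts agreeing with the party on every one of n_issues,
    assuming independent agreement with probability p on each issue."""
    return p ** n_issues

for p, n in [(0.6, 5), (0.8, 5), (0.9, 5), (0.95, 20)]:
    print(f"p={p}, issues={n}: about {pure_partisan_share(p, n) * 100:.0f} districts out of 100")
# p=0.6, issues=5: about 8 / p=0.8, issues=5: about 33
# p=0.9, issues=5: about 59 / p=0.95, issues=20: about 36
```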

The moral of the story is that if there is any heterogeneity across districts at all, any meaningful deviation from the party lines, you will only be able to reliably win a majority of the legislature if you cast a big tent. Even if the vast majority of people agree with you on any given issue, odds are that the vast majority of people don’t agree with you on everything.

Moreover, you are not sacrificing your principles by accepting these candidates, as you are still only accepting people who mostly agree with you into your party. Furthermore, you will still win votes on most issues—even those you felt like you were compromising on.

I therefore hope the Democratic Party makes the right choice and allows anti-abortion candidates into the party. It’s our best chance of actually winning a majority and driving the legislative agenda, including the legislative agenda on abortion.