This is a battle for the soul of America

July 9, JDN 2457944

At the time of writing, I just got back from a protest march against President Trump in Santa Ana (the featured photo is one I took at the march). I had intended to go to the much larger sister protest in Los Angeles, but the logistics were too daunting. On the upside, presumably the marginal impact of my attendance was higher at the smaller event.

Protest marches are not a common pastime of mine; I am much more of an ivory-tower policy wonk than a boots-on-the-ground political activist. The way that other people seem to be allergic to statistics, I am allergic to broad claims made without them. Even when I basically agree with everything being said, I still feel vaguely uncomfortable marching and chanting in unison (and constantly reminded of that scene from Life of Brian). But I made an exception for this one, because Trump represents a threat to the soul of American democracy.

We have had bad leaders many times before—even awful leaders, even leaders whose bad decisions resulted in the needless deaths of thousands. But not since the end of the Civil War have we had leaders who so directly threatened the core institutions of America itself.

We must keep reminding ourselves: This is not normal. This is not normal! Donald Trump’s casual corruption, overwhelming narcissism, authoritarianism, greed, and utter incompetence (not to mention his taste in decor) make him more like Idi Amin or Hugo Chavez than like George H.W. Bush or Ronald Reagan. (Even the comparison with Vladimir Putin would be too flattering to Trump; Putin at least is competent.) He has personally publicly insulted over 300 people, places, and things—and counting.

Trump lies almost constantly, surrounds himself with family members and sycophants, refuses to listen to intelligence briefings, and personally demeans and even threatens journalists who criticize him. Every day it seems like there is a new scandal, more outrageous than the last; and after so long, this almost seems like a strategy. Every day he finds some new way to offend and undermine the basic norms of our society, and eventually he hopes to wear us down until we give up fighting.

It is certainly an exaggeration, and perhaps a dangerous one, to say that Donald Trump is the next Adolf Hitler. But there are important historical parallels between the rise of Trump and the rise of many other populist authoritarian demagogues. He casually violates democratic norms of civility, honesty, and transparency, and incentivizes the rest of us to do the same—a temptation we must resist. Political scientists and economists are now issuing public warnings that our democratic institutions are not as strong as we may think (though, to be fair, others argue that they are indeed strong enough).

It was an agonizingly close Presidential election. Even the tiniest differences could have flipped enough states to change the outcome. If we’d had a better voting system, it would not have happened; a simple nationwide plurality vote would have elected Hillary Clinton, and as I argued in a previous post, range voting would probably have chosen Bernie Sanders. Therefore, we must not take this result as a complete indictment of American society or a complete failure of American democracy. But let it shake us out of our complacency; democracy is only as strong as the will of its citizens to defend it.

How we sold our privacy piecemeal

Apr 2, JDN 2457846

The US Senate just narrowly voted to remove restrictions on the sale of user information by Internet Service Providers. Right now, your ISP can basically sell your information to whomever they like without even telling you. The new rule that the Senate struck down would have required them to at least make you sign a form with some fine print on it, which you probably would sign without reading it. So in practical terms maybe it makes no difference.

…or does it? Maybe that’s really the mistake we’ve been making all along.

In cognitive science we have a concept called the just-noticeable difference (JND); it is basically what it sounds like. If you have two stimuli—two colors, say, or sounds of two different pitches—that differ by an amount smaller than the JND, people will not notice it. But if they differ by more than the JND, people will notice. (In practice it’s a bit more complicated than that, as different people have different JND thresholds and even within a person they can vary from case to case based on attention or other factors. But there’s usually a relatively narrow range of JND values, such that anything below that is noticed by no one and anything above that is noticed by almost everyone.)

The JND seems like an intuitively obvious concept—of course you can’t tell the difference between a color of 432.78 nanometers and 432.79 nanometers!—but it actually has profound implications. In particular, it undermines the possibility of having truly transitive preferences. If you prefer some colors to others—which most of us do—but you have a nonzero JND in color wavelengths—as we all do—then I can do the following: Find one color you like (for concreteness, say you like blue of 475 nm), and another color you don’t (say green of 510 nm). Now I ask you to choose between the blue you like and another blue, 475.01 nm. Will you prefer one to the other? Of course not; the difference is within your JND. So now compare 475.01 nm and 475.02 nm; which do you prefer? Again, you’re indifferent. And I can go on this way a few thousand times, until finally I get to 510 nanometers, the green you didn’t like. I have just found a chain of your preferences that is intransitive; you said A = B = C = D… all the way down the line to X = Y = Z… but then at the end you said A > Z. Your preferences aren’t transitive, and therefore aren’t well-defined rational preferences. And you could do the same to me, so neither are mine.
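
Since the argument is essentially mechanical, it can be run as a simulation. Here is a minimal sketch in Python; the utility function, the 0.5 nm JND, and the 0.01 nm step are made-up illustrative numbers, not empirical psychophysics:

```python
# A minimal simulation of the intransitive chain described above.
JND_NM = 0.5    # wavelength differences smaller than this are imperceptible
STEP_NM = 0.01  # each comparison in the chain moves well under the JND

def utility(wavelength_nm):
    """True underlying preference: bluer is better in this range."""
    return -(wavelength_nm - 475.0)

def stated_preference(a_nm, b_nm):
    """What the person reports when shown two colors side by side."""
    if abs(a_nm - b_nm) < JND_NM:
        return "indifferent"  # can't perceive any difference
    return "prefers A" if utility(a_nm) > utility(b_nm) else "prefers B"

# Every adjacent link in the chain is reported as indifference...
steps = round((510.0 - 475.0) / STEP_NM)  # 3,500 comparisons
for i in range(steps):
    a = 475.0 + i * STEP_NM
    b = 475.0 + (i + 1) * STEP_NM
    assert stated_preference(a, b) == "indifferent"

# ...yet the endpoints are strictly ranked: A = B = ... = Z, but A > Z.
print(stated_preference(475.0, 510.0))  # prefers A
```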

Part of the reason we’ve so willingly given up our privacy in the last generation or so is our paranoid fear of terrorism, which no doubt triggers deep instincts about tribal warfare. Depressingly, the plurality of Americans think that our government has not gone far enough in its obvious overreaches of the Constitution in the name of defending us from a threat that has killed fewer Americans in my lifetime than die from car accidents each month.

But that doesn’t explain why we—and I do mean we, for I am as guilty as most—have so willingly sold our relationships to Facebook and our schedules to Google. Google isn’t promising to save me from the threat of foreign fanatics; they’re merely offering me a more convenient way to plan my activities. Why, then, am I so cavalier about entrusting them with so much personal data?


Well, I didn’t start by giving them my whole life. I created an email account, which I used on occasion. I tried out their calendar app and used it to remind myself when my classes were. And so on, and so forth, until now Google knows almost as much about me as I know about myself.

At each step, it didn’t feel like I was doing anything of significance; perhaps indeed it was below my JND. Each bit of information I was giving didn’t seem important, and perhaps it wasn’t. But all together, our combined information allows Google to make enormous amounts of money without charging most of its users a cent.

The process goes something like this. Imagine someone offering you a penny in exchange for telling them how many times you made left turns last week. You’d probably take it, right? Who cares how many left turns you made last week? But then they offer another penny in exchange for telling them how many miles you drove on Tuesday. And another penny for telling them the average speed you drive during the afternoon. This process continues hundreds of times, until they’ve finally given you, say, $5.00—and they know exactly where you live, where you work, and where most of your friends live, because all that information was encoded in the list of driving patterns you gave them, piece by piece.

Consider instead how you’d react if someone had offered, “Tell me where you live and work and I’ll give you $5.00.” You’d be pretty suspicious, wouldn’t you? What are they going to do with that information? And $5.00 really isn’t very much money. Maybe there’s a price at which you’d part with that information to a random suspicious stranger—but it’s probably at least $50 or even more like $500, not $5.00. But by asking it in 500 different questions for a penny each, they can obtain that information from you at a bargain price.
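
Here is a toy sketch of how those penny-sized answers combine. The trip records, addresses, and inference rules are entirely invented for illustration; real data brokers have far richer data, but the aggregation principle is the same:

```python
from collections import Counter

# Each record alone is a penny's worth of trivia: one day, one hour,
# one destination. Combined, crude rules recover home, work, and friends.
trips = [
    ("Mon", 8, "411 Commerce Blvd"), ("Mon", 18, "12 Elm St"),
    ("Tue", 8, "411 Commerce Blvd"), ("Tue", 22, "12 Elm St"),
    ("Wed", 8, "411 Commerce Blvd"), ("Wed", 19, "88 Oak Ave"),
    ("Wed", 23, "12 Elm St"),        ("Sat", 20, "88 Oak Ave"),
    ("Sun", 1, "12 Elm St"),
]

# Home: the most common evening/overnight destination.
home = Counter(d for _, h, d in trips if h >= 18 or h <= 4).most_common(1)[0][0]
# Work: the most common weekday-morning destination.
work = Counter(d for day, h, d in trips
               if day not in ("Sat", "Sun") and 6 <= h <= 10).most_common(1)[0][0]
# Friends: wherever else you keep turning up.
social = Counter(d for _, _, d in trips if d not in (home, work)).most_common(1)[0][0]

print("home:", home)             # 12 Elm St
print("work:", work)             # 411 Commerce Blvd
print("likely friend:", social)  # 88 Oak Ave
```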

If you work out how much money Facebook and Google make from each user, it’s actually pitiful. Facebook has been increasing their revenue lately, but it’s still less than $20 per user per year. The stranger asks, “Tell me who all your friends are, where you live, where you were born, where you work, and what your political views are, and I’ll give you $20.” Do you take that deal? Apparently, we do. Polls find that most Americans are willing to exchange privacy for valuable services, often quite cheaply.


Of course, there isn’t actually an alternative social network that doesn’t sell data and instead just charges a subscription fee. I don’t think this is a fundamentally infeasible business model, but it hasn’t succeeded so far, and it will face an uphill battle for two reasons.

The first is the obvious one: It would have to compete with Facebook and Google, who already have the enormous advantage of a built-in user base of hundreds of millions of people.

The second one is what this post is about: The social network based on conventional economics rather than selling people’s privacy can’t take advantage of the JND.

I suppose they could try—charge $0.01 per month at first, then after a while raise it to $0.02, $0.03, and so on until they’re charging $2.00 per month and actually making a profit—but that would be much harder to pull off, and it would provide the least revenue when it is needed most, at the early phase when the up-front costs of establishing a network are highest. Moreover, people would still feel it; it’s a good feature of our monetary system that you can’t break money into denominations small enough to consistently hide under the JND. But information can be broken down into very tiny pieces indeed. Much of the revenue earned by these corporate giants is actually based upon indexing the keywords of the text we write; we literally sell off our privacy word by word.
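
To see how little the ramp would earn in the crucial early years, here is that schedule sketched out (assuming, purely for illustration, one extra cent per month):

```python
# Pricing ramp: raise the monthly price by one cent each month until it
# reaches $2.00, then compare with simply charging $2.00 from the start.

def ramp_price(month):
    return min(0.01 * (month + 1), 2.00)

for year in range(1, 6):
    earned = sum(ramp_price(m) for m in range(12 * year))
    print(f"year {year}: ${earned:.2f} cumulative per user")
# year 1: $0.78, year 2: $3.00, ..., year 5: $18.30

flat_total = 2.00 * 60
print(f"flat $2.00/month over 5 years: ${flat_total:.2f}")  # $120.00
```

The ramp earns seventy-eight cents per user in the first year, exactly when the network's fixed costs are highest.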


What should we do about this? Honestly, I’m not sure. Facebook and Google do in fact provide valuable services, without which we would be worse off. I would be willing to pay them their $20 per year, if I could ensure that they’d stop selling my secrets to advertisers. But as long as their current business model keeps working, they have little incentive to change. There is in fact a huge industry of data brokering, corporations you’ve probably never heard of that make their revenue entirely from selling your secrets.

In a rare moment of actual journalism, TIME ran an article about a year ago arguing that we need new government policy to protect us from this kind of predation of our privacy. But they had little to offer in the way of concrete proposals.

The ACLU does better: They have specific proposals for regulations that should be made to protect our information from the most harmful prying eyes. But as we can see, the current administration has no particular interest in pursuing such policies—if anything they seem to do the opposite.

In defense of slacktivism

Jan 22, JDN 2457776

It’s one of those awkward portmanteaus that people often make to try to express a concept in fewer syllables, while also implicitly saying that the phenomenon is specific enough to deserve its own word: “Slacktivism”, made of “slacker” and “activism”, not unlike the way “mansplain” is made of “man” and “explain”, “edutainment” was made of “education” and “entertainment”—or indeed “gerrymander” was made of “Gerry” (as in Elbridge Gerry) and “salamander”. The term seems to be particularly popular on Huffington Post, which has a whole category on slacktivism. There is a particular subcategory of slacktivism that is ironically directed against other slacktivism, which has been dubbed “snarktivism”.

It’s almost always used as a pejorative; very few people self-identify as “slacktivists” (though once I get through this post, you may see why I’m considering it myself). “Slacktivism” is activism that “isn’t real” somehow, activism that “doesn’t count”.

Of course, that raises the question: What “counts” as legitimate activism? Is it only protest marches and sit-ins? Then very few people have ever been or will ever be activists. Surely donations should count, at least? Those have a direct, measurable impact. What about calling your Congressman, or letter-writing campaigns? These have been staples of activism for decades.

If the term “slacktivism” means anything at all, it seems to point to activities surrounding raising awareness, where the goal is not to enact a particular policy or support a particular NGO but simply to get as much public attention to a topic as possible. It seems to be particularly targeted at blogging and social media—and that’s important, for reasons I’ll get to shortly. If you gather a group of people in your community and give a speech about LGBT rights, you’re an activist. If you send out the exact same speech on Facebook, you’re a slacktivist.

One of the arguments against “slacktivism” is that it can be used to funnel resources at the wrong things; this blog post makes a good point that the Kony 2012 campaign doesn’t appear to have actually accomplished anything except profits for the filmmakers behind it. (Then again: A blog post against slacktivism? Are you sure you’re not doing the very thing you’re arguing against?) But is this problem unique to slacktivism, or is it a more general phenomenon that people simply aren’t all that informed about how to have the most impact? There are an awful lot of inefficient charities out there, and in fact the most important waste of charitable funds involves people giving to their local churches. Fortunately, this is changing as people become more secularized; churches used to account for over half of US donations, and now they account for less than a third. (Naturally, Christian organizations are pulling out their hair over this.) The 60 million Americans who voted for Trump made a horrible mistake and will cause enormous global damage; but they weren’t slacktivists, were they?

Studies do suggest that traditionally “slacktivist” activities like Facebook likes aren’t a very strong predictor of future, larger actions, while more private modes of support (like donations and calling your Congressman) tend to be stronger predictors. But so what? In order for slacktivism to be a bad thing, such activities would have to be negative predictors. They would have to substitute for more effective activism, and there’s no evidence that this happens.

In fact, there’s even some evidence that slacktivism has a positive effect (normally I wouldn’t cite Fox News, but I think in this case we should expect a bias in the opposite direction, and you can read the full Georgetown study if you want):

A study from Georgetown University in November entitled “Dynamics of Cause Engagement” looked at how Americans learned about and interacted with causes and other social issues, and discovered some surprising findings on Slacktivism.

While the traditional forms of activism like donating money or volunteering far outpace slacktivism, those who engage in social issues online are twice as likely as their traditional counterparts to volunteer and participate in events. In other words, slacktivists often graduate to full-blown activism.

At worst, most slacktivists are doing nothing for positive social change, and that’s what the vast majority of people have been doing for the entirety of human history. We can bemoan this fact, but that won’t change it. Most people are simply too uninformed to know what’s going on in the world, and too broke and too busy to do anything about it.

Indeed, slacktivism may be the one thing they can do—which is why I think it’s worth defending.

From an economist’s perspective, there’s something quite odd about how people’s objections to slacktivism are almost always formulated. The rational, sensible objection would be to its small benefits—this isn’t accomplishing enough, you should do something more effective. But in fact, almost all the objections to slacktivism I have ever read focus on its small costs—you’re not a “real activist” because you don’t make sacrifices like I do.

Yet it is a basic principle of economic rationality that, all other things equal, lower cost is better. Indeed, this is one of the few principles of economic rationality that I really do think is unassailable; perfect information is unrealistic and total selfishness makes no sense at all. But cost minimization is really very hard to argue with—why pay more, when you can pay less and get the same benefit?

From an economist’s perspective, the most important thing about an activity is its cost-effectiveness, measured either by net benefit (benefit minus cost) or rate of return (benefit divided by cost). But in both cases, a lower cost is always better; and in fact slacktivism has an astonishing rate of return, precisely because its cost is so small.

Suppose that a campaign of 10 million Facebook likes actually does have a 1% chance of changing a policy in a way that would save 10,000 lives, with a life expectancy of 50 years each. Surely this is conservative, right? I’m only giving it a 1% chance of success, on a policy with a relatively small impact (10,000 lives could be a single clause in an EPA regulatory standard), with a large number of slacktivist participants (10 million is more people than the entire population of Switzerland). Yet because clicking “like” and “share” only costs you maybe 10 seconds, we’re talking about an expected cost of (10 million)(10/86,400/365) ≈ 3.2 QALY for an expected benefit of (10,000)(0.01)(50) = 5,000 QALY. That is a rate of return of roughly 160,000%.

Let’s compare this to the rate of return on donating to a top charity like UNICEF, Oxfam, the Against Malaria Foundation, or the Schistosomiasis Control Initiative, for which donating about $300 would save the life of 1 child, adding about 50 QALY. That $300 most likely cost you about 0.01 QALY (assuming an annual income of $30,000), so we’re looking at a return of 500,000%. Now, keep in mind that this is a huge rate of return, far beyond what you can ordinarily achieve, and that donating $300 to UNICEF is probably one of the best things you could possibly be doing with that money—and yet slacktivism comes within an order of magnitude of it in efficiency. Maybe slacktivism doesn’t sound so bad after all?
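
For anyone who wants to check the arithmetic, here it is spelled out; every input is one of the hypothetical figures from the text above, not measured data:

```python
# The back-of-envelope comparison, with all the hypothetical inputs explicit.
SECONDS_PER_YEAR = 86_400 * 365

# Slacktivism campaign
participants  = 10_000_000
seconds_each  = 10
p_success     = 0.01
lives_saved   = 10_000
qaly_per_life = 50

cost_qaly    = participants * seconds_each / SECONDS_PER_YEAR  # ~3.2 QALY
benefit_qaly = lives_saved * p_success * qaly_per_life         # 5,000 QALY
print(f"slacktivism return: {benefit_qaly / cost_qaly:,.0f}x")  # 1,577x, i.e. ~160,000%

# Donating $300 to a top charity
donation, income = 300, 30_000
donor_cost_qaly  = donation / income  # ~0.01 QALY of your time/income
print(f"donation return: {qaly_per_life / donor_cost_qaly:,.0f}x")  # 5,000x, i.e. 500,000%
```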

Of course, the net benefit of your participation is higher in the case of donation; you yourself contribute 50 QALY instead of only contributing 0.0005 QALY. Ultimately net benefit is what matters; rate of return is a way of estimating what the net benefit would be when comparing different ways of spending the same amount of time or money. But from the figures I just calculated, it begins to seem like clicking “like” and “share” on Facebook posts that raise awareness of policies of global importance is among the most efficient things you can do with your time. Now, you have to include all that extra time spent poring over other Facebook posts, and consider that you may not be qualified to assess the most important issues, and there’s a lot of uncertainty involved in what sort of impact you yourself will have… but it’s almost certainly not the worst thing you could be doing with your time, and frankly running these numbers has made me feel a lot better about all the hours I have actually spent doing this sort of thing. It’s a small benefit, yes—but it’s an even smaller cost.

Indeed, the fact that so many people treat low cost as bad, when it is almost by definition good, and the fact that they also target their ire so heavily at blogging and social media, says to me that what they are really trying to accomplish here has nothing to do with actually helping people in the most efficient way possible.

Rather, it’s two things.

The obvious one is generational—it’s yet another chorus in the unending refrain that is “kids these days”. Facebook is new, therefore it is suspicious. Adults have been complaining about their descendants since time immemorial; some of the oldest written works we have are of ancient Babylonians complaining that their kids are lazy and selfish. Either human beings have been getting lazier and more selfish for thousands of years, or, you know, kids are always a bit more lazy and selfish than their parents or at least seem so from afar.

The one that’s more interesting for an economist is signaling. By complaining that other people aren’t paying enough cost for something, what you’re really doing is complaining that they aren’t signaling like you are. The costly signal has been made too cheap, so now it’s no good as a signal anymore.

“Anyone can click a button!” you say. Yes, and? Isn’t it wonderful that now anyone with a smartphone (and there are more people with access to smartphones than toilets, because #WeLiveInTheFuture) can contribute, at least in some small way, to improving the world? But if anyone can do it, then you can’t signal your status by doing it. If your goal was to make yourself look better, I can see why this would bother you; all these other people doing things that look just as good as what you do! How will you ever distinguish yourself from the riffraff now?

This is also likely what’s going on as people fret that “a college degree’s not worth anything anymore” because so many people are getting them now; well, as a signal, maybe not. But if it’s just a signal, why are we spending so much money on it? Surely we can find a more efficient way to rank people by their intellect. I thought it was supposed to be an education—in which case the meteoric rise in global college enrollments should be cause for celebration. (In reality of course a college degree can serve both roles, and it remains an open question among labor economists as to which effect is stronger and by how much. But the signaling role is almost pure waste from the perspective of social welfare; we should be trying to maximize the proportion of real value added.)

For this reason, I think I’m actually prepared to call myself a slacktivist. I aim for cost-effective awareness-raising; I want to spread the best ideas to the most people for the lowest cost. Why, would you prefer I waste more effort, to signal my own righteousness?

There is no problem of free will, just a lot of really confused people

Jan 15, JDN 2457769

I was hoping for some sort of news item to use as a segue, but none in particular emerged, so I decided to go on with it anyway. I haven’t done any cognitive science posts in a while, and this is one I’ve been meaning to write for a long time—actually it’s the sort of thing that even a remarkable number of cognitive scientists frequently get wrong, perhaps because the structure of human personality makes cognitive science inherently difficult.

Do we have free will?

The question has been asked so many times by so many people it is now a whole topic in philosophy. The Stanford Encyclopedia of Philosophy has an entire article on free will. The Information Philosopher has a gateway page “The Problem of Free Will” linking to a variety of subpages. There are even YouTube videos about “the problem of free will”.

The constant arguing back and forth about this would be problematic enough, but what really grates on me are the many, many people who write “bold” articles and books about how “free will does not exist”. Examples include Sam Harris and Jerry Coyne, and such pieces have been published in everything from Psychology Today to the Chronicle of Higher Education. There’s even a TED talk.

The worst ones are those that follow with “but you should believe in it anyway”. In The Atlantic we have “Free will does not exist. But we’re better off believing in it anyway.” Scientific American offers a similar view, “Scientists say free will probably doesn’t exist, but urge: “Don’t stop believing!””

This is a mind-bogglingly stupid approach. First of all, if you want someone to believe in something, you don’t tell them it doesn’t exist. Second, if something doesn’t exist, that is generally considered a pretty compelling reason not to believe in it. You’d need a really compelling counter-argument, and frankly I’m not even sure the whole idea is logically coherent. How can I believe in something if I know it doesn’t exist? Am I supposed to delude myself somehow?

But the really sad part is that it’s totally unnecessary. There is no problem of free will. There are just an awful lot of really, really confused people. (Fortunately not everyone is confused; there are those, such as Daniel Dennett, who actually understand what’s going on.)

The most important confusion is over what you mean by the phrase “free will”. There are really two core meanings here, and the conflation of them is about 90% of the problem.

1. Moral responsibility: We have “free will” if and only if we are morally responsible for our actions.

2. Noncausality: We have “free will” if and only if our actions are not caused by the laws of nature.

Basically, every debate over “free will” boils down to someone pointing out that noncausality doesn’t exist, and then arguing that this means that moral responsibility doesn’t exist. Then someone comes back and says that moral responsibility does exist, and then infers that this means noncausality must exist. Or someone points out that noncausality doesn’t exist, and then they realize how horrible it would be if moral responsibility didn’t exist, and then tells people they should go on believing in noncausality so that they don’t have to give up moral responsibility.

Let me be absolutely clear here: Noncausality could not possibly exist.

Noncausality isn’t even a coherent concept. Actions, insofar as they are actions, must, necessarily, by definition, be caused by the laws of nature.

I can sort of imagine an event not being caused; perhaps virtual electron-positron pairs can really pop into existence without ever being caused. (Even then I’m not entirely convinced; I think quantum mechanics might actually be deterministic at the most fundamental level.)

But an action isn’t just a particle popping into existence. It requires the coordinated behavior of some 10^26 or more particles, all in a precisely organized, unified way, structured so as to move some other similarly large quantity of particles through space in a precise way so as to change the universe from one state to another state according to some system of objectives. Typically, it involves human muscles acting on human beings or inanimate objects. (Recently, a rather large share of the time, it specifically involves human fingers on computer keyboards!) If what you do is an action—not a muscle spasm, not a seizure, not a slip or a trip, but something you did on purpose—then it must be caused. And if something is caused, it must be caused according to the laws of nature, because the laws of nature are the laws underlying all causality in the universe!

And once you realize that, the “problem of free will” should strike you as one of the stupidest “problems” ever proposed. Of course our actions are caused by the laws of nature! Why in the world would you think otherwise?

If you think that noncausality is necessary—or even useful—for free will, what kind of universe do you think you live in? What kind of universe could someone live in, that would fit your idea of what free will is supposed to be?

It’s like I said in that much earlier post about The Basic Fact of Cognitive Science (we are our brains): If you don’t think a mind can be made of matter, what do you think minds are made of? What sort of magical invisible fairy dust would satisfy you? If you can’t even imagine something that would satisfy the constraints you’ve imposed, did it maybe occur to you that your constraints are too strong?

Noncausality isn’t worth fretting over for the same reason that you shouldn’t fret over the fact that pi is irrational and you can’t make a square circle. There is no possible universe in which that isn’t true. So if it bothers you, it’s not that there’s something wrong with the universe—it’s clearly that there’s something wrong with you. Your thinking on the matter must be too confused, too dependent on unquestioned intuitions, if you think that murder can’t be wrong unless 2+2=5.

In philosophical jargon I am called a “compatibilist” because I maintain that free will and determinism are “compatible”. But this is much too weak a term. I much prefer Eliezer Yudkowsky’s “requiredism”, which he explains in one of the greatest blog posts of all time (seriously, read it immediately if you haven’t before—I’m okay with you cutting off my blog post here and reading his instead, because it truly is that brilliant), entitled simply “Thou Art Physics”. This quote sums it up briefly:

My position might perhaps be called “Requiredism.” When agency, choice, control, and moral responsibility are cashed out in a sensible way, they require determinism—at least some patches of determinism within the universe. If you choose, and plan, and act, and bring some future into being, in accordance with your desire, then all this requires a lawful sort of reality; you cannot do it amid utter chaos. There must be order over at least those parts of reality that are being controlled by you. You are within physics, and so you/physics have determined the future. If it were not determined by physics, it could not be determined by you.

Free will requires a certain minimum level of determinism in the universe, because the universe must be orderly enough that actions make sense and there isn’t simply an endless succession of random events. Call me a “requiredist” if you need to call me something. I’d prefer you just realize the whole debate is silly because moral responsibility exists and noncausality couldn’t possibly.

We could of course use different terms besides “free will”. “Moral responsibility” is certainly a good one, but it is missing one key piece, which is the issue of why we can assign moral responsibility to human beings and a few other entities (animals, perhaps robots) and not to the vast majority of entities (trees, rocks, planets, tables), and why we are sometimes willing to say that even a human being does not have moral responsibility (infancy, duress, impairment).

This is why my favored term is actually “rational volition”. The characteristic that human beings have (at least most of us, most of the time), which many animals and possibly some robots share (if not now, then soon enough), and which justifies our moral responsibility, is precisely our capacity to reason. Things don’t just happen to us the way they do to some 99.999999999% of the universe; we do things. We experience the world through our senses, have goals we want to achieve, and act in ways that are planned to make the world move closer to achieving those goals. We have causes, sure enough; but not just any causes. We have a specific class of causes, which are related to our desires and intentions—we call these causes reasons.

So if you want to say that we don’t have “free will” because that implies some mysterious nonsensical noncausality, sure; that’s fine. But then don’t go telling us that this means we don’t have moral responsibility, or that we should somehow try to delude ourselves into believing otherwise in order to preserve moral responsibility. Just recognize that we do have rational volition.

How do I know we have rational volition? That’s the best part, really: Experiments. While you’re off in la-la land imagining fanciful universes where somehow causes aren’t really causes even though they are, I can point to not only centuries of human experience but decades of direct, controlled experiments in operant conditioning. Human beings and most other animals behave quite differently in behavioral experiments than, say, plants or coffee tables. Indeed, it is precisely because of this radical difference that it seems foolish to even speak of a “behavioral experiment” about coffee tables—because coffee tables don’t behave, they just are. Coffee tables don’t learn. They don’t decide. They don’t plan or consider or hope or seek.

Japanese, as it turns out, may be a uniquely good language for cognitive science, because it has two fundamentally different verbs for “to be” depending on whether an entity is sentient. Humans and animals imasu, while inanimate objects merely arimasu. We have free will because and insofar as we imasu.

Once you get past that most basic confusion of moral responsibility with noncausality, there are a few other confusions you might run into as well. Another one is two senses of “reductionism”, which Dennett refers to as “ordinary” and “greedy”:

1. Ordinary reductionism: All systems in the universe are ultimately made up of components that always and everywhere obey the laws of nature.

2. Greedy reductionism: All systems in the universe just are their components, and have no existence, structure, or meaning aside from those components.

I actually had trouble formulating greedy reductionism as a coherent statement, because it’s such a nonsensical notion. Does anyone really think that a pile of two-by-fours is the same thing as a house? But people do speak as though they think this about human brains, when they say that “love is just dopamine” or “happiness is just serotonin”. But dopamine in a petri dish isn’t love, any more than a pile of two-by-fours is a house; and what I really can’t quite grok is why anyone would think otherwise.

Maybe they’re simply too baffled by the fact that love is made of dopamine (among other things)? They can’t quite visualize how that would work (nor can I, nor, I think, can anyone in the world at this level of scientific knowledge). You can see how the two-by-fours get nailed together and assembled into the house, but you can’t see how dopamine and action potentials would somehow combine into love.

But isn’t that a reason to say that love isn’t the same thing as dopamine, rather than that it is? I can understand why some people are still dualists who think that consciousness is somehow separate from the functioning of the brain. That’s wrong—totally, utterly, ridiculously wrong—but I can at least appreciate the intuition that underlies it. What I can’t quite grasp is why someone would go so far the other way and say that the consciousness they are currently experiencing does not exist.

Another thing that might confuse people is the fact that minds, as far as we know, are platform-independent; that is, your mind could most likely be created out of a variety of different materials, from the gelatinous brain it currently is to some sort of silicon supercomputer, to perhaps something even more exotic. This independence follows from the widely-believed Church-Turing thesis, which essentially says that all computation is computation, regardless of how it is done. This may not actually be right, but I see many reasons to think that it is, and if so, this means that minds aren’t really what they are made of at all—they could be made of lots of things. What makes a mind a mind is how it is structured and above all what it does.

If this is baffling to you, let me show you how platform-independence works on a much simpler concept: Tables. Tables are also in fact platform-independent. You can make a table out of wood, or steel, or plastic, or ice, or bone. You could take out literally every single atom of a table and replace it with a completely different atom of a completely different element—carbon for iron, for example—and still end up with a table. You could conceivably even do so without changing the table’s weight, strength, size, etc., though that would be considerably more difficult.

Does this mean that tables somehow exist “beyond” their constituent matter? In some very basic sense, I suppose so—they are, again, platform-independent. But not in any deep, mysterious sense. Start with a wooden table, take away all the wood, and you no longer have a table. Take apart the table and you have a bunch of wood, which you could use to build something else. There is no “essence” comprising the table. There is no “table soul” that would persist when the table is deconstructed.

And—now for the hard part—so it is with minds. Your mind is your brain. The constituent atoms of your brain are gradually being replaced, day by day, but your mind is the same, because it exists in the arrangement and behavior, not the atoms themselves. Yet there is nothing “extra” or “beyond” that makes up your mind. You have no “soul” that lies beyond your brain. If your brain is destroyed, your mind will also be destroyed. If your brain could be copied, your mind would also be copied. And one day it may even be possible to construct your mind in some other medium—some complex computer made of silicon and tantalum, most likely—and it would still be a mind, and in all its thoughts, feelings and behaviors your mind, if not numerically identical to you.

Thus, when we engage in rational volition—when we use our “free will” if you like that term—there is no special “extra” process beyond what’s going on in our brains, but there doesn’t have to be. Those particular configurations of action potentials and neurotransmitters are our thoughts, desires, plans, intentions, hopes, fears, goals, beliefs. These mental concepts are not in addition to the physical material; they are made of that physical material. Your soul is made of gelatin.

Again, this is not some deep mystery. There is no “paradox” here. We don’t actually know the details of how it works, but that makes this no different from a Homo erectus who doesn’t know how fire works. Maybe he thinks there needs to be some extra “fire soul” that makes it burn, but we know better; and in far fewer centuries than separate that Homo erectus from us, our descendants will know precisely how the brain creates the mind.

Until then, simply remember that any mystery here lies in us—in our ignorance—and not in the universe. And take heart that the kind of “free will” that matters—moral responsibility—has absolutely no need for the kind of “free will” that doesn’t exist—noncausality. They’re totally different things.

Why New Year’s resolutions fail

Jan 1, JDN 2457755

Last week’s post was on Christmas, so by construction this week’s post will be on New Year’s Day.

It is a tradition in many cultures, especially in the US and Europe, to start every new year with a New Year’s resolution, a promise to ourselves to change our behavior in some positive way.

Yet, over 80% of these resolutions fail. Why is this?

If we are honest, most of us would agree that there is something about our own behavior that could stand to be improved. So why do we so rarely succeed in actually making such improvements?

One possibility, which I’m guessing most neoclassical economists would favor, is to say that we don’t actually want to. We may pretend that we do in order to appease others, but ultimately our rational optimization has already chosen that we won’t actually bear the cost to make the improvement.

I think this is actually quite rare. I’ve seen too many people with resolutions they didn’t share with anyone, for example, to think that it’s all about social pressure. And I’ve seen far too many people try very hard to achieve their resolutions, day after day, and yet still fail.

Sometimes we make resolutions that are not entirely within our control, such as “get a better job” or “find a girlfriend” (last year I made a resolution to publish a work of commercial fiction or a peer-reviewed article—and alas, failed at that task, unless I somehow manage it in the next few days). Such resolutions may actually be unwise to make in the first place, as it can feel like breaking a promise to yourself when you’ve actually done all you possibly could.

So let’s set those aside and talk only about things we should be in control over, like “lose weight” or “save more money”. Even these kinds of resolutions typically fail; why? What is this “weakness of will”? How is it possible to really want something that you are in full control over, and yet still fail to accomplish it?

Well, first of all, I should be clear what I mean by “in full control over”. In some sense you’re not in full control, which is exactly the problem. Your conscious mind is not actually an absolute tyrant over your entire body; you’re more like an elected president who has to deal with a legislature in order to enact policy.

You do have a great deal of power over your own behavior, and you can learn to improve this control (much as real executive power in presidential democracies has expanded over the last century!); but there are fundamental limits to just how well you can actually consciously will your body to do anything, limits imposed by billions of years of evolution that established most of the traits of your body and nervous system millions of generations before there even was such a thing as rational conscious reasoning.

One thing that makes a surprisingly large difference lies in whether your goals are reduced to specific, actionable objectives. “Lose weight” is almost guaranteed to fail. “Lose 30 pounds” is still unlikely to succeed. “Work out for 2 hours per week,” on the other hand, might have a chance. “Save money” is never going to make it, but “move to a smaller apartment and set aside $200 per month” just might.

I think the government metaphor is helpful here; if you’re President of the United States and you want something done, do you state some vague, broad goal like “Improve the economy”? No, you make a specific, actionable demand that allows you to enforce compliance, like “increase infrastructure spending by 24% over the next 5 years”. Even then it is possible to fail if you can’t push it through the legislature (in the metaphor, the “legislature” is your habits, instincts, and other subconscious processes), but you’re much more likely to succeed if you have a detailed plan.

Another technique that helps is to visualize the benefits of succeeding and the costs of failing, and keep these in your mind. This counteracts the tendency for the costs of succeeding and the benefits of giving up to be more salient—losing 30 pounds sounds nice in theory, but that treadmill is so much work right now!

This salience effect has a lot to do with the fact that human beings are terrible at dealing with the future.

Rationally, we are supposed to use exponential discounting; each successive moment is supposed to be worth less to us than the previous by a fixed proportion, say 5% per year. This is actually a mathematical theorem; if you don’t discount this way, your choices will be inconsistent over time, and therefore systematically irrational.

And yet… we don’t discount that way. Some behavioral economists argue that we use hyperbolic discounting, in which instead of discounting time by a fixed proportion, we use a different formula that drops off too quickly early on and not quickly enough later on.

But I am increasingly convinced that human beings don’t actually use discounting at all. We have a series of rough-and-ready heuristics for making future judgments, which can sort of act like discounting, but require far less computation than actually calculating a proper discount rate. (Recent empirical evidence seems to be tilting this direction.)

In any case, whatever we do is clearly not a proper rational discount rate. And this means that our behavior can be time-inconsistent; a choice that seems rational at one time can cease to seem rational at a later time. When we’re planning out our year and saying we will hit the treadmill more, it seems like a good idea; but when we actually get to the gym and feel our legs ache as we start running, we begin to regret our decision.
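
To see the reversal mechanically, here is a quick sketch comparing the two discount rules; the payoffs, dates, and parameters are arbitrary illustrations:

```python
# Why hyperbolic discounting is time-inconsistent while exponential is not:
# a small reward at day 100 (skipping the workout) vs. a larger reward at
# day 130 (the health benefit of going).

def exponential(t, r=0.01):
    return (1 - r) ** t     # worth a fixed proportion less each day

def hyperbolic(t, k=0.05):
    return 1 / (1 + k * t)  # drops too fast early, too slowly later

small, t_small = 50, 100
large, t_large = 100, 130

for name, D in (("exponential", exponential), ("hyperbolic", hyperbolic)):
    plans_small   = small * D(t_small) > large * D(t_large)      # judged today
    chooses_small = small * D(0) > large * D(t_large - t_small)  # judged at day 100
    print(f"{name}: plans small-sooner={plans_small}, "
          f"chooses small-sooner={chooses_small}")

# exponential: False, False -- the plan and the choice agree
# hyperbolic:  False, True  -- reversal: the plan said wait, the moment says don't
```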

The challenge, really, is determining which “version” of us is correct! A priori, we don’t actually know whether the view of our distant self contemplating the future or the view of our current self making the choice in the moment is the right one. Actually, when I frame it this way, it almost seems like the self that’s closer to the choice should have better information—and yet typically we think the exact opposite, that it is our past self making plans that really knows what’s best for us.

So where does that come from? Why do we think, at least in most cases, that the “me” which makes a plan a year in advance is the smart one, and the “me” that actually decides in the moment is untrustworthy?

Kahneman has a good explanation for this, in his model of System 1 and System 2. System 1 is simple and fast, but often gets the wrong answer. System 2 usually gets the right answer, but it is complex and slow. When we are making plans, we have a lot of time to think, and we can afford to expend the extra effort to engage the full power of System 2. But when we are living in the moment, choosing what to do right now, we don’t have that luxury of time, and we are forced to fall back on System 1. System 1 is easier—but it’s also much more likely to be wrong.

How, then, do we resolve this conflict? Commitment. (Perhaps that’s why it’s called a New Year’s resolution!)

We make promises to ourselves, commitments that we will feel bad about not following through.

If we rationally discounted, this would be a baffling thing to do; we’re just imposing costs on ourselves for no reason. But because we don’t discount rationally, commitments allow us to change the calculation for our future selves.

This brings me to one last strategy to use when making your resolutions: Include punishment.

“I will work out at least 2 hours per week, and if I don’t, I’m not allowed to watch TV all weekend.” Now that is a resolution you are actually likely to keep.

To see why, consider the decision problem for your System 2 self today versus your System 1 self throughout the year.

Your System 2 self has done the cost-benefit analysis and ruled that working out 2 hours per week is worthwhile for its health benefits.

If you left it at that, your System 1 self would each day find an excuse to procrastinate the workouts, because at least from where they’re sitting, working out for 2 hours looks a lot more painful than the marginal loss in health from missing just this one week. And of course this will keep happening, week after week—and then 52 weeks go by and you’ve had few if any workouts.

But by adding the punishment of “no TV”, you have imposed an additional cost on your System 1 self, something that they care about. Suddenly the calculation changes; it’s not just 2 hours of workout weighed against vague long-run health benefits, but 2 hours of workout weighed against no TV all weekend. That punishment is surely too much to bear; so you’d best do the workout after all.
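
Here is that calculation as a toy decision problem; the utility numbers are purely illustrative:

```python
# The workout decision as System 1 sees it. System 1 only weighs what is
# salient right now; the long-run health benefit barely registers, but a
# concrete punishment does.

workout_effort    = -10  # two hours on the treadmill, felt immediately
salient_health    = +1   # this week's marginal health benefit, barely felt
no_tv_all_weekend = -15  # the self-imposed punishment for skipping

def system1_works_out(punishment_active):
    payoff_workout = workout_effort + salient_health  # -9
    payoff_skip    = no_tv_all_weekend if punishment_active else 0
    return payoff_workout > payoff_skip

print(system1_works_out(punishment_active=False))  # False: skip, week after week
print(system1_works_out(punishment_active=True))   # True: -9 beats -15
```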

Do it right, and you will rarely if ever have to impose the punishment. But don’t make it too large, or then it will seem unreasonable and you won’t want to enforce it if you ever actually need to. Your System 1 self will then know this, and treat the punishment as nonexistent. (Formally the equilibrium is not subgame perfect; I am gravely concerned that our nuclear deterrence policy suffers from precisely this flaw.) “If I don’t work out, I’ll kill myself” is a recipe for depression, not healthy exercise habits.

But if you set clear, actionable objectives and sufficient but reasonable punishments, there’s at least a good chance you will be in the minority of people who actually succeed in keeping their New Year’s resolution.

And if not, there’s always next year.

How personality makes cognitive science hard

August 13, JDN 2457614

Why is cognitive science so difficult? First of all, let’s acknowledge that it is difficult—that even those of us who understand it better than most are still quite baffled by it in quite fundamental ways. The Hard Problem still looms large over us all, and while I know that the Chinese Room Argument is wrong, I cannot precisely pin down why.

The recursive, reflexive character of cognitive science is part of the problem; can a thing understand itself without understanding understanding itself, understanding understanding understanding itself, and on in an infinite regress? But this recursiveness applies just as much to economics and sociology, and honestly to physics and biology as well. We are physical biological systems in an economic and social system, yet most people at least understand these sciences at the most basic level—which is simply not true of cognitive science.

One of the most basic facts of cognitive science (indeed I am fond of calling it The Basic Fact of Cognitive Science) is that we are our brains, that everything human consciousness does is done by and within the brain. Yet the majority of humans believe in souls (including the majority of Americans and even the majority of Brits), and just yesterday I saw a news anchor say “Based on a new study, that feeling may originate in your brain!” He seriously said “may”. “May”? Why, next you’ll tell me that when my arms lift things, maybe they do it with muscles! Other scientists are often annoyed by how many misconceptions the general public has about science, but this is roughly the equivalent of a news anchor saying, “Based on a new study, human bodies may be made of cells!” or “Based on a new study, diamonds may be made of carbon atoms!” The misunderstanding of many sciences is widespread, but the misunderstanding of cognitive science is fundamental.

So what makes cognitive science so much harder? I have come to realize that there is a deep feature of human personality that makes cognitive science inherently difficult in a way other sciences are not.

Decades of research have uncovered a number of consistent patterns in human personality, where people’s traits tend to lie along a continuum from one extreme to another, and usually cluster near either end. Most people are familiar with a few of these, such as introversion/extraversion and optimism/pessimism; but the one that turns out to be important here is empathizing/systematizing.

Empathizers view the world as composed of sentient beings, living agents with thoughts, feelings, and desires. They are good at understanding other people and providing social support. Poets are typically empathizers.

Systematizers view the world as composed of interacting parts, interlocking components that have complex inner workings which can be analyzed and understood. They are good at solving math problems and tinkering with machines. Engineers are typically systematizers.

Most people cluster near one end of the continuum or the other; they are either strong empathizers or strong systematizers. (If you’re curious, there’s an online test you can take to find out which you are.)

But a rare few of us, perhaps as few as 2% and no more than 10%, are both; we are empathizer-systematizers, strong on both traits (showing that it’s not really a continuum between two extremes after all, and only seemed to be one because the two traits are negatively correlated). A comparable number are also low on both traits, which must quite frankly make the world a baffling place in general.

Empathizer-systematizers understand the world as it truly is: Composed of sentient beings that are made of interacting parts.

The very title of this blog shows I am among this group: “human” for the empathizer, “economics” for the systematizer!

We empathizer-systematizers can intuitively grasp that there is no contradiction in saying that a person is sad because he lost his job and he is sad because serotonin levels in his cingulate gyrus are low—because it was losing his job that triggered other thoughts and memories that lowered serotonin levels in his cingulate gyrus and thereby made him sad. No one fully understands the details of how low serotonin comes to feel like sadness—hence the Hard Problem—but most people can’t even seem to grasp the connection at all. How can something as complex and beautiful as a human mind be made of… sparking gelatin?

Well, what would you prefer it to be made of? Silicon chips? We’re working on that. Something else? Magical fairy dust, perhaps? Pray tell, what material could the human mind be constructed from that wouldn’t bother you on a deep level?

No, what really seems to bother people is the very idea that a human mind can be constructed from material, that thoughts and feelings can be divisible into their constituent parts.

This leads people to adopt one of two extreme positions on cognitive science, both of which are quite absurd—frankly I’m not sure they are even coherent.

Pure empathizers often become dualists, saying that the mind cannot be divisible, cannot be made of material, but must be… something else, somehow, outside the material universe—whatever that means.

Pure systematizers instead often become eliminativists, acknowledging the functioning of the brain and then declaring proudly that the mind does not exist—that consciousness, emotion, and experience are all simply illusions that advanced science will one day dispense with—again, whatever that means.

I can at least imagine what a universe would be like if eliminativism were true and there were no such thing as consciousness—just a vast expanse of stars and rocks and dust, lifeless and empty. Of course, I know that I’m not in such a universe, because I am experiencing consciousness right now, and the illusion of consciousness is… consciousness. (You are not experiencing what you are experiencing right now, I say!) But I can at least visualize what such a universe would be like, and indeed it probably was our universe (or at least our solar system) up until about a billion years ago when the first sentient animals began to evolve.

Dualists, on the other hand, are speaking words, structured into grammatical sentences, but I’m not even sure they are forming coherent assertions. Sure, you can sort of imagine our souls being floating wisps of light and energy (à la the “ascended beings”, my least-favorite part of the Stargate series, which I otherwise love), but ultimately those have to be made of something, because nothing can be both fundamental and complex. Moreover, the fact that they interact with ordinary matter strongly suggests that they are made of ordinary matter (and to be fair to Stargate, at one point in the series Rodney, with his already-great intelligence vastly increased, declares confidently that ascended beings are indeed nothing more than “protons and electrons, protons and electrons”). Even if they were made of some different kind of matter like dark matter, they would need to obey a common system of physical laws, and ultimately we would come to think of them as matter. Otherwise, how do the two interact? If we are made of soul-stuff which is fundamentally different from other stuff, then how do we even know that other stuff exists? If we are not our bodies, then how do we experience pain when they are damaged and control them with our volition? The most coherent theory of dualism is probably Malebranche’s, which is quite literally “God did it”.

Epiphenomenalism, which says that thoughts are just sort of an extra thing that also happens but has no effect (an “epiphenomenon”) on the physical brain, is also quite popular for some reason. People don’t quite seem to understand that the Law of Conservation of Energy directly forbids an “epiphenomenon” in this sense, because anything that happens involves energy, and that energy (unlike, say, money) can’t be created out of nothing; it has to come from somewhere. Analogies are often used: The whistle of a train, the smoke of a flame. But the whistle of a train is a pressure wave that vibrates the train; the smoke from a flame is made of particulates that could be used to smother the flame. At best, there are some phenomena that don’t affect each other very much—but any causal interaction at all makes dualism break down.

How can highly intelligent, highly educated philosophers and scientists make such basic errors? I think it has to be personality. They have deep, built-in (quite likely genetic) intuitions about the structure of the universe, and they just can’t shake them.

And I confess, it’s very hard for me to figure out what to say in order to break those intuitions, because my deep intuitions are so different. Just as it seems obvious to them that the world cannot be this way, it seems obvious to me that it is. It’s a bit like living in a world where 45% of people can see red but not blue and insist the American Flag is red and white, another 45% of people can see blue but not red and insist the flag is blue and white, and I’m here in the 10% who can see all colors and I’m trying to explain that the flag is red, white, and blue.

The best I can come up with is to use analogies, and computers make for quite good analogies, not least because their functioning is modeled on our thinking.

Is this word processor program (LibreOffice Writer, as it turns out) really here, or is it merely an illusion? Clearly it’s really here, right? I’m using it. It’s doing things right now. Parts of it are sort of illusions—it looks like a blank page, but it’s actually an LCD screen lit up all the way; it looks like ink, but it’s actually where the LCD turns off. But there is clearly something here, an actual entity worth talking about which has properties that are usefully described without trying to reduce them to the constituent interactions of subatomic particles.

On the other hand, can it be reduced to the interactions of subatomic particles? Absolutely. A brief sketch is something like this: It’s a software program, running on an operating system, and these in turn are represented in the physical hardware as long binary sequences, stored by ever-so-slightly higher or lower voltages in particular hardware components, which in turn are due to electrons being moved from one valence to another. Those electrons move in precise accordance with the laws of quantum mechanics, I assure you; yet this in no way changes the fact that I’m typing a blog post on a word processor.
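To make that sketch concrete, here is a toy illustration of my own (not anything from the original post) showing the same text existing at several levels of description at once; the example string and the UTF-8 encoding are arbitrary choices:

```python
# A toy illustration (my own sketch, not from the original post) of
# "levels of description": the same text exists simultaneously as
# characters, as bytes, and as bits.
text = "typing a blog post"                      # the software level
raw_bytes = text.encode("utf-8")                 # the storage level
bits = "".join(f"{b:08b}" for b in raw_bytes)    # the level the voltages implement

print(text)               # what the word processor shows
print(raw_bytes)          # what the memory holds
print(bits[:32] + "...")  # (a slice of) what the hardware physically encodes
```

None of these three printouts is more “real” than the others; they are the same thing described at different levels, which is the whole point.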

Indeed, it’s not even particularly useful to know that the electrons are obeying the laws of quantum mechanics, and quite literally no possible computer that could be constructed in our universe could ever be large enough to fully simulate all these quantum interactions within the amount of time since the dawn of the universe. If we are to understand it at all, it must be at a much higher level—and the “software program” level really does seem to be the best one for most circumstances. The vast majority of problems I’m likely to encounter are either at the software level or the macro hardware level; it’s conceivable that a race condition could emerge in the processor cache, or that the voltage could suddenly spike, or even that a cosmic ray could randomly ionize a single vital electron, but these scenarios are far less likely to affect my life than, say, my accidentally deleting the wrong file or the battery running out of charge because I forgot to plug it in.
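To get a feel for why a full quantum simulation is out of the question, here is a rough back-of-envelope calculation; the specific numbers are my own illustration, not the post’s:

```latex
% A rough back-of-envelope (illustrative numbers of my own, not the post's):
% exactly simulating $n$ interacting two-state quantum systems requires
% tracking on the order of $2^n$ complex amplitudes. Already at $n = 300$,
\[
  2^{300} \approx 2 \times 10^{90} \gg 10^{80}
  \quad\text{(roughly the number of atoms in the observable universe)},
\]
% and a laptop contains vastly more than 300 interacting particles, so no
% physically constructible memory could even store the full quantum state.
```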

Likewise, when dealing with a relationship problem, or mediating a conflict between two friends, it’s rarely relevant that some particular neuron is firing in someone’s nucleus accumbens, or that one of my friends is very low on dopamine in his mesolimbic system today. It could be, particularly if some sort of mental or neurological illness is involved, but in most cases the real issues are better understood as higher-level phenomena—people being angry, or tired, or sad. These emotions are ultimately constructed of action potentials and neurotransmitters, but that doesn’t make them any less real, nor does it change the fact that it is at the emotional level that most human matters are best understood.

Perhaps part of the problem is that human emotions take on moral significance, which other higher-level entities generally do not? But they sort of do, really, in a more indirect way. It matters a great deal morally whether or not climate change is a real phenomenon caused by carbon emissions (it is). Ultimately this moral significance can be tied to human experiences, so everything rests upon human experiences being real; but they are real, in much the same way that rocks and trees and carbon emissions are real. No amount of neuroscience will ever change that, just as no amount of biological science would disprove the existence of trees.

Indeed, some of the world’s greatest moral problems could be better solved if people were better at systematizing as well as empathizing, and thus more willing to do cost-benefit analysis.

Lukewarm support is a lot better than opposition

July 23, JDN 2457593

Depending on your preconceptions, this statement may seem either eminently trivial or offensively wrong: Lukewarm support is a lot better than opposition.

I’ve always been in the “trivial” camp, so it has taken me a while to really understand where people are coming from when they say things like the following.

From a civil rights activist blogger (“POC” being “person of color” in case you didn’t know):

Many of my POC friends would actually prefer to hang out with an Archie Bunker-type who spits flagrantly offensive opinions, rather than a colorblind liberal whose insidious paternalism, dehumanizing tokenism, and cognitive indoctrination ooze out between superficially progressive words.

From the Daily Kos:

Right-wing racists are much more honest, and thus easier to deal with, than liberal racists.

From a Libertarian blogger:

I can deal with someone opposing me because of my politics. I can deal with someone who attacks me because of my religious beliefs. I can deal with open hostility. I know where I stand with people like that.

They hate me or my actions for (insert reason here). Fine, that is their choice. Let’s move onto the next bit. I’m willing to live and let live if they are.

But I don’t like someone buttering me up because they need my support, only to drop me the first chance they get. I don’t need sweet talk to distract me from the knife at my back. I don’t need someone promising the world just so they can get a boost up.

In each of these cases, people are expressing a preference for dealing with someone who actively opposes them, rather than someone who mostly supports them. That’s really weird.

That lukewarm support is better than opposition is basically a mathematical theorem. In a democracy or anything resembling one, if you have the majority of the population supporting you, even if they are all lukewarm, you win; if you have the majority of the population opposing you, even if the remaining minority is extremely committed to your cause, you lose.
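To spell out the arithmetic behind that claim (a minimal formalization of my own, not taken from the post):

```latex
% Majority rule with one person, one vote (my own minimal formalization):
% with $N$ voters, of whom $S$ support you, the outcome depends only on the
% count of supporters, never on their enthusiasm:
\[
  \text{you win} \iff \frac{S}{N} > \frac{1}{2}.
\]
% Intensity appears nowhere in the condition, so converting one opponent
% into even the most lukewarm supporter (raising $S$ by one) can only help.
```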

Yes, okay, it does get slightly more complicated than that, as in most real-world democracies small but committed interest groups can actually influence policy more than lukewarm majorities can (the special interest effect); but even then, the relevant comparison is between no special interest at all and a special interest actively working against you.

There is a valid question of whether it is more worthwhile to build a small, committed coalition or a large, lukewarm one; but at the individual level, it is absolutely undeniable that someone supporting you is better for you than that same person opposing you, full stop. I mean that in the same sense that the Pythagorean theorem is undeniable; it’s a theorem, it has to be true.

If you had the opportunity to immediately replace every single person who opposes you with someone who supports you but is lukewarm about it, you’d be insane not to take it. Indeed, this is basically how all social change actually happens: Committed supporters persuade committed opponents to become lukewarm supporters, until they get a majority and start winning policy votes.

If this is indeed so obvious and undeniable, why are there so many people… trying to deny it?

I came to realize that there is a deep psychological effect at work here. I could find very little in the literature describing this effect, which I’m going to call heretic effect (though the literature on betrayal aversion, several examples of which are linked in this sentence, is at least somewhat related).

Heretic effect is the deeply-ingrained sense human beings tend to have (as part of the overall tribal paradigm) that one of the worst things you can possibly do is betray your tribe. It is worse than being in an enemy tribe, worse even than murdering someone. The one absolutely inviolable principle is that you must side with your tribe.

This is one of the biggest barriers to police reform, by the way: the Blue Wall of Silence is the result of police officers identifying themselves as a tight-knit tribe and refusing to betray one of their own for anything. I think the best option for convincing police officers to support reform is to reframe violations of police conduct as themselves betrayals—the betrayal is not Internal Affairs taking away your badge; the betrayal is you shooting an unarmed man because he was Black.

Heretic effect is a particular form of betrayal aversion, where we treat those who are similar to our tribe but not quite part of it as the very worst sort of people, worse than even our enemies, because at least our enemies are not betrayers. In fact it isn’t really betrayal, but it feels like betrayal.

I call it “heretic effect” because of the way that exclusivist religions (including all the Abrahamic religions, and especially Christianity and Islam) focus so much of their energy on rooting out “heretics”, people who almost believe the same as you do but not quite. The Spanish Inquisition wasn’t targeted at Buddhists or even Muslims; it was targeted at Christians who slightly disagreed with Catholicism. Why? Because while Buddhists might be the enemy, Protestants were betrayers. You can still see this in the way that Muslim societies treat “apostates”, those who once believed in Islam but don’t anymore. Indeed, the very fact that it is Christianity and Islam that are at each other’s throats (rather than, say, Christianity and Hinduism, or Islam and atheism) shows that it’s the people who almost agree with you who really draw your hatred, not the people whose worldview is radically distinct.

This is the effect that makes people dislike lukewarm supporters; like heresy, lukewarm support feels like betrayal. You can clearly hear that in the last quote: “I don’t need sweet talk to distract me from the knife at my back.” Believe it or not, Libertarians, my support for replacing the social welfare state with a basic income, decriminalizing drugs, and dramatically reducing our incarceration rate is not deception. Nor do I think I’ve been particularly secretive about my desire to make taxes more progressive and environmental regulations stronger, the things you absolutely don’t agree with. Agreeing with you on some things but not on other things is not in fact the same thing as lying to you about my beliefs or infiltrating and betraying your tribe.

That said, I do sort of understand why it feels that way. When I agree with you on one thing (decriminalizing cannabis, for instance), it sends you a signal: “This person thinks like me.” You may even subconsciously tag me as a fellow Libertarian. But then I go and disagree with you on something else that’s just as important (strengthening environmental regulations), and it feels to you like I have worn your Libertarian badge only to stab you in the back with my treasonous environmentalism. I thought you were one of us!

Similarly, if you are a social justice activist who knows all the proper lingo and is constantly aware of “checking your privilege”, and I start by saying, yes, racism is real and terrible, and we should definitely be working to fight it, but then I question something about your language and approach, that feels like a betrayal. At least if I’d come in wearing a Trump hat you could have known which side I was really on. (And indeed, I have had people unfriend me or launch into furious rants at me for questioning the orthodoxy in this way. And sure, it’s not as bad as actually being harassed on the street by bigots—a thing that has actually happened to me, by the way—but it’s still bad.)

But if you can resist this deep-seated impulse and really think carefully about what’s happening here, agreeing with you partially clearly is much better than not agreeing with you at all. Indeed, there’s a fairly smooth function there, wherein the more I agree with your goals the more our interests are aligned and the better we should get along. It’s not completely smooth, because certain things are sort of package deals: I wouldn’t want to eliminate the social welfare system without replacing it with a basic income, whereas many Libertarians would. I wouldn’t want to ban fracking unless we had established a strong nuclear infrastructure, but many environmentalists would. But on the whole, more agreement is better than less agreement—and really, even these examples are actually surface-level results of deeper disagreement.

Getting this reaction from social justice activists is particularly frustrating, because I am on your side. Bigotry corrupts our society at a deep level and holds back untold human potential, and I want to do my part to undermine and hopefully one day destroy it. When I say that maybe “privilege” isn’t the best word to use, or warn you against implicitly ascribing moral responsibility across generations, this is not me being a heretic against your tribe; this is a strategic policy critique. If you are writing a letter to the world, I’m telling you to leave out paragraph 2 and correcting your punctuation errors, not crumpling up the paper and throwing it into a fire. I’m doing this because I want you to win, and I think your current approach isn’t working as well as it should. Maybe I’m wrong about that—maybe paragraph 2 really does need to be there, and you put that semicolon there on purpose—in which case, go ahead and say so. If you argue well enough, you may even convince me; if not, this is the sort of situation where we can respectfully agree to disagree. But please, for the love of all that is good in the world, stop saying that I’m worse than the guys in the KKK hoods. Resist that feeling of betrayal so that we can have a constructive critique of our strategy. Don’t do it for me; do it for the cause.