# What about a tax on political contributions?

Jan 7, JDN 2458126

In my previous post, I argued that an advertising tax could reduce advertising, raise revenue, and produce almost no real economic distortion. Now I’m going to generalize this idea to an even bolder proposal: What if we tax political contributions?

Donations to political campaigns are very similar to advertising. A contest function framework also makes a lot of sense: Increased spending improves your odds of winning, but it doesn’t actually produce any real goods.

Suppose there’s some benefit B that I get if a given politician wins an election. That benefit could include direct benefits to me, as well as altruistic benefits to other citizens I care about, or even my concern for the world as a whole. But presumably, I do benefit in some fashion from my favored politician winning—otherwise, why are they my favored politician?

In this very simple model, let’s assume that there are only two parties and two donors (obviously in the real world there are more parties and vastly more donors; but it doesn’t fundamentally change the argument). Say I will donate x and the other side will donate y.

If donations were all that mattered, the probability that my party wins the election would be x/(x+y).

Fortunately that isn’t the case. A lot of things matter, some that should (policy platforms, experience, qualifications, character) and some that shouldn’t (race, gender, age, height: part of why Trump won may in fact be that he is tall; he’s about 6’1”). So let’s put all the other factors that affect elections into a package and call that F.

The probability that my candidate wins is then x/(x+y) + F, where F can be positive or negative. If F is positive, it means that my candidate is more likely to win, while if it’s negative, it means my candidate is less likely to win. (If you want to be pedantic, the probability of winning has to be capped at 0 and 1, but this doesn’t fundamentally change the argument, and only matters for candidates that are obvious winners or obvious losers regardless of how much anyone donates.)

The donation costs me money, x. The cost in utility of that money depends on my utility function, so for now I’ll just call it a cost function C(x). Then my net benefit is:

B*[x/(x+y)+F] – C(x)

I can maximize this by taking the first-order condition with respect to x. Notice how the F just drops out: I like F to be large, but it doesn’t affect my choice of x.

B*y/(x+y)^2 = C'(x)

Turning that into an exact value requires knowing my cost function and my opponent’s cost function (which need not be the same, in general; unlike the advertising case, it’s not a matter of splitting fungible profits between us), but it’s actually possible to stop here. We can already tell that there is a well-defined solution: There’s a certain amount of donation x that maximizes my expected utility, given the amount y that the other side has donated. Moreover, with a little bit of calculus you can show that the optimal amount of x is strictly increasing in y, which makes intuitive sense: The more they give, the more you need to give in order to keep up. Since x is increasing in y and y is increasing in x, there is a Nash equilibrium: At some amount x and y we each are giving the optimal amount from our perspective.

We can get a precise answer if we assume that the amount of the donations is small compared to my overall wealth, so I will be approximately risk-neutral; then we can just say C(x) = x, and C'(x) = 1:

B*y/(x+y)^2 = 1

By symmetry the equilibrium has x = y, so B*y/(2y)^2 = B/(4y) = 1, and we get essentially the same result we did for the advertising:

x = y = B/4

According to this, I should be willing to donate up to one-fourth of the benefit I’d get from my candidate winning. This actually sounds quite high; I think once you take into account the fact that lots of other people are donating and political contributions aren’t that effective at winning elections, the optimal donation is actually quite a bit smaller—though perhaps still larger than most people give.
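
The equilibrium can be checked numerically by iterating best responses. This is a minimal sketch assuming the risk-neutral cost C(x) = x from above; the function names and starting guess are mine:

```python
import math

def best_response(other, B):
    """Donation x that maximizes B*x/(x+y) - x given the other side's y.
    From the first-order condition B*y/(x+y)^2 = 1, i.e. x = sqrt(B*y) - y."""
    return max(math.sqrt(B * other) - other, 0.0)

def nash_donations(B, y0=1.0, iters=100):
    """Iterate best responses until the donation pair settles down."""
    x, y = y0, y0
    for _ in range(iters):
        x = best_response(y, B)
        y = best_response(x, B)
    return x, y

x, y = nash_donations(B=100.0)
print(x, y)  # both approach B/4 = 25.0
```

Starting from any positive guess, both donations converge to B/4, matching the algebra above.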

If we impose a tax rate r on political contributions, nothing changes. The cost to me of donating is still the same, and as long as the tax is proportional, the ratio x/(x+y) and the probability x/(x+y) + F will remain exactly the same as before. Therefore, I will continue to donate the same amount, as will my opponent, and each candidate will have the same probability of winning as before. The only difference is that some of the money (a fraction r of it, to be precise) will go to the government instead of the politicians.

The total amount of donations will not change. The probability of each candidate winning will not change. All that will happen is money will be transferred from politicians to the government. If this tax revenue is earmarked for some socially beneficial function, this will obviously be an improvement in welfare.
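
The invariance is easy to make concrete. This is a sketch under the same risk-neutral assumption (each side gives B/4), with my own function name; the point is that a proportional tax scales both campaigns' after-tax war chests by the same factor, leaving the odds untouched while the government collects r*(x+y):

```python
def outcome(B, r):
    """Equilibrium under a proportional contribution tax at rate r,
    assuming risk-neutral donors (so each side still gives B/4)."""
    x = y = B / 4
    received_x = (1 - r) * x   # what my candidate actually gets
    received_y = (1 - r) * y   # what the opponent actually gets
    p_win = received_x / (received_x + received_y)  # = x/(x+y), for any r < 1
    revenue = r * (x + y)
    return {"x": x, "y": y, "p_win": p_win, "revenue": revenue}

print(outcome(B=100.0, r=0.5))
# p_win stays 0.5 no matter what r is; revenue = r * B/2
```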

The revenue gained is not nearly as large an amount of money as is spent on advertising (which tells you something about American society), but it’s still quite a bit: Since we currently spend about \$5 billion per year on federal elections, a tax rate of 50% could raise about \$2.5 billion.

But in fact this seriously underestimates the benefits of such a tax. This simple model assumes that political contributions only change which candidate wins; but that’s actually not the main concern. (If F is large enough, it can offset any possible donations.)

The real concern is how political contributions affect the choices politicians make once they get into office. While outright quid-pro-quo bribery is illegal, it’s well-known that many corporations and wealthy individuals give campaign donations with the reasonable expectation of influencing what sort of policies will be made.

You don’t think Goldman Sachs gives millions of dollars each election out of the goodness of their hearts, do you? And they give to both major parties, which really only makes sense if their goal is not to make a particular candidate win, but to make sure that whoever wins feels indebted to Goldman Sachs. (I guess it could also be to prevent third parties from winning—but they hardly ever win anyway, so that wouldn’t be a smart investment from the bank’s perspective.)

Lynda Powell at the University of Rochester has documented the many subtle but significant ways that these donations have influenced policy. Campaign donations aren’t as important as party platforms, but a lot of subtle changes across a wide variety of policies add up to large differences in outcomes.

A political contribution tax would reduce these influences. If politicians’ sole goal were to win, the tax would have no effect. But it seems quite likely that politicians enjoy various personal benefits from lobbying and campaign contributions: Fine dinners, luxurious vacations, and so on. And insofar as that is influencing politicians’ behavior, it is both obviously corrupt and clearly reduced by a political contribution tax. How large an effect this would be is difficult to say; but the direction of the effect is clearly the one we want.

Taxing donations would also allow us to protect the right to give to campaigns (which does seem to be a limited kind of civil liberty, even though the precise interpretation “money is speech” is Orwellian), while reducing corruption and allowing us to keep close track of donations that are made. Taxing a money stream, even at a small rate, is often one of the best ways to incentivize close monitoring of that money stream.

With a subtle change, the tax could even be made to bias in favor of populism: All you need to do is exempt small donations from the tax. If, say, the first \$1000 per person per year is exempt from taxation, then the imposition of the tax will reduce the effectiveness of million-dollar contributions from Goldman Sachs and the Koch brothers without having any effect on \$50 donations from people like you and me. That would technically be “distorting” elections—but it seems like it might be a distortion worth making.

Of course, this is probably even less likely to happen than the advertising tax.

# “DSGE or GTFO”: Macroeconomics took a wrong turn somewhere

Dec 31, JDN 2458119

“The state of macro is good,” wrote Olivier Blanchard—in August 2008. This is rather like the turkey who is so pleased with how the farmer has been feeding him lately, the day before Thanksgiving.

It’s not easy to say exactly where macroeconomics went wrong, but I think Paul Romer is right when he makes the analogy between DSGE (dynamic stochastic general equilibrium) models and string theory. They are mathematically complex and difficult to understand, and people can make their careers by being the only ones who grasp them; therefore they must be right! Never mind if they have no empirical support whatsoever.

To be fair, DSGE models are at least a little better than string theory; they can at least be fit to real-world data, which is more than string theory can say. But being fit to data and actually predicting data are fundamentally different things, and DSGE models typically forecast no better than far simpler models without their bold assumptions. You don’t need to assume all this stuff about a “representative agent” maximizing a well-defined utility function, or an Euler equation (that doesn’t even fit the data), or an ever-proliferating list of “random shocks” that end up taking up all the degrees of freedom your model was supposed to explain. Just regressing the variables on a few years of previous values of each other (a “vector autoregression” or VAR) generally gives you an equally good forecast. The fact that these models can be made to fit the data well if you add enough degrees of freedom doesn’t actually make them good models. As von Neumann warned us, with enough free parameters, you can fit an elephant.
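
To illustrate what “regressing the variables on previous values of each other” means, here is a toy sketch: simulate a two-variable VAR(1) process with a known coefficient matrix, then recover that matrix by ordinary least squares with plain numpy. (A real exercise would use more lags and compare out-of-sample forecast errors against a fitted DSGE model; the numbers here are made up.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a toy two-variable series from a known VAR(1): z_t = A z_{t-1} + noise.
A_true = np.array([[0.6, 0.2],
                   [0.1, 0.5]])
T = 500
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = A_true @ z[t - 1] + 0.1 * rng.standard_normal(2)

# "Regress the variables on previous values of each other": estimate A by
# ordinary least squares, then forecast one step ahead.
X, Y = z[:-1], z[1:]
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solves X @ B = Y, so B is A transposed
A_hat = B_ols.T

forecast = A_hat @ z[-1]
print(np.round(A_hat, 2))  # close to A_true
```

No utility functions, no Euler equations, no shock decomposition: just a linear regression, and it forecasts about as well as anything on data like this.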

But really what bothers me is not the DSGE but the GTFO (“get the [expletive] out”); it’s not that DSGE models are used, but that it’s almost impossible to get published as a macroeconomic theorist using anything else. Defenders of DSGE typically don’t even argue anymore that it is good; they argue that there are no credible alternatives. They characterize their opponents as “dilettantes” who aren’t opposing DSGE because we disagree with it; no, it must be because we don’t understand it. (Also, regarding that post, I’d just like to note that I now officially satisfy the Athreya Axiom of Absolute Arrogance: I have passed my qualifying exams in a top-50 economics PhD program. Yet my enmity toward DSGE has, if anything, only intensified.)

Of course, that argument only makes sense if you haven’t been actively suppressing all attempts to formulate an alternative, which is precisely what DSGE macroeconomists have been doing for the last two or three decades. And yet despite this suppression, there are alternatives emerging, particularly from the empirical side. There are now empirical approaches to macroeconomics that don’t use DSGE models. Regression discontinuity methods and other “natural experiment” designs—not to mention actual experiments—are quickly rising in popularity as economists realize that these methods allow us to actually empirically test our models instead of just adding more and more mathematical complexity to them.

But there still seems to be a lingering attitude that there is no other way to do macro theory. This is very frustrating for me personally, because deep down I think what I would like to do as a career is macro theory: By temperament I have always viewed the world through a very abstract, theoretical lens, and the issues I care most about—particularly inequality, development, and unemployment—are all fundamentally “macro” issues. I left physics when I realized I would be expected to do string theory. I don’t want to leave economics now that I’m expected to do DSGE. But I also definitely don’t want to do DSGE.

Fortunately with economics I have a backup plan: I can always be an “applied microeconomist” (rather the opposite of a theoretical macroeconomist I suppose), directly attached to the data in the form of empirical analyses or even direct, randomized controlled experiments. And there certainly is plenty of work to be done along the lines of Akerlof and Roth and Shiller and Kahneman and Thaler in cognitive and behavioral economics, which is also generally considered applied micro. I was never going to be an experimental physicist, but I can be an experimental economist. And I do get to use at least some theory: In particular, there’s an awful lot of game theory in experimental economics these days. Some of the most exciting stuff is actually in showing how human beings don’t behave the way classical game theory predicts (particularly in the Ultimatum Game and the Prisoner’s Dilemma), and trying to extend game theory into something that would fit our actual behavior. Cognitive science suggests that the result is going to end up looking quite different from game theory as we know it, and with my cognitive science background I may be particularly well-positioned to lead that charge.

Still, I don’t think I’ll be entirely satisfied if I can’t somehow bring my career back around to macroeconomic issues, and particularly the great elephant in the room of all economics, which is inequality. Underlying everything from Marxism to Trumpism, from the surging rents of Silicon Valley to the crushing poverty of Burkina Faso, to the Great Recession itself, is inequality. It is, in my view, the central question of economics: Who gets what, and why?

That is a fundamentally macro question, but you can’t even talk about that issue in DSGE as we know it; a “representative agent” inherently smooths over all inequality in the economy as though total GDP were all that mattered. A fundamentally new approach to macroeconomics is needed. Hopefully I can be part of that, but from my current position I don’t feel much empowered to fight this status quo. Maybe I need to spend at least a few more years doing something else, making a name for myself, and then I’ll be able to come back to this fight with a stronger position.

In the meantime, I guess there’s plenty of work to be done on cognitive biases and deviations from game theory.

# Doug Julius, in memoriam

Sep 10, JDN 2458007

Douglas Patrick Julius

April 15, 1954 to August 31, 2017

My father died suddenly and unexpectedly from a ruptured intracranial aneurysm. I received a call that he was in the hospital Wednesday morning at 11:30 AM PDT, took the first flight to Michigan I could find, and arrived around 10:30 PM EDT. By the time I got there, my father was already unconscious and under intensive care. I stayed up all night in the hospital. My father never regained consciousness. He was declared dead at 8:30 AM on Thursday morning.

In lieu of a proper blog post this week, I decided to post the eulogy I gave at my father’s funeral this past Sunday. It follows below.

What is a soul? What is it made of? Most people imagine a soul as something immaterial, something somehow “beyond” this physical world. But at its core, a soul is simply what makes us who we are. Today we have cognitive science, and now understand the human soul better than it was understood by all the billions of people in all the thousands of years of human civilization before us. Thanks to cognitive science, we now know what the soul is made of: It is made of information.

My father wasn’t made of some mysterious substance “beyond” our physical world, but neither was he just the molecules of his body you see here. My father was made of hopes and dreams, laughter and tears, words and ideas. He was made of James Joyce novels and Catullus poems, Spider-Man comics and Arnold Schwarzenegger movies, road trips across America, gazes over the Grand Canyon, spelunking in Carlsbad Caverns, walks on the beaches of the Gulf of Mexico and the Pacific, warm hugs, gentle smiles, sophisticated puns, obsessive organizing, and reading literally thousands of books, on everything from Celtic literature to quantum physics. (I think he knew the former a lot better than the latter, while for me, it is the reverse.)

And coffee. Lots and lots of coffee.

Most of what my father was is now gone, and I don’t think we should try to deny that. I don’t think it’s healthy—or even effective—to tell ourselves that he isn’t really gone or that he’s in some better place. Deep down we all know the loss we feel. We know the regrets we have of all the things we thought we’d get to do together, but now we know we never will.

There are three that are especially painful for me: My father will never get to see my PhD diploma, never know me as “Doctor Patrick Neal Russell Julius.” My father will never get to see my wedding. And above all, my father will never get to meet his grandchildren. If I had known, I could have tried to make these things happen sooner, so that my father would get to share them with me. I thought that I had 20 years left to do all these things with my father beside me—but the reality turned out differently. And one of the best definitions of reality is this: Reality, when you stop believing in it, doesn’t go away.

We grieve this loss for a reason. It hurts so much to lose my father because we know how much joy he once brought to our lives, and how much more he would have brought if he’d been allowed to go on living. A friend of mine offered me this aphorism: Grief is the price we pay for love.

But my father is not completely gone, either. Our souls are made of information too, and there are little fragments of my father’s soul in every one of us. Every memory we have of him, every time he touched our lives, a fragment of him was downloaded into each of us, and as long as we remember him, he will not be entirely gone.

There are a few memories in particular I’d like to share with you all now—back them up in the cloud if you will—so that the essence of who my father was will live on awhile longer. Human long-term memory is stored in the form of narrative, so I thought it best if I told a few stories.

The first story is about gentleness. We were driving through New Mexico. I had moved recently to Long Beach to study for my master’s degree at CSU; after coming back to Ann Arbor for a visit, Dad had driven with me in my little Smart car all the way across the country. We planned our route to pass the Very Large Array, a gigantic assembly of radio telescopes probably best known for being featured in the film Contact, one of my favorites, based on a Carl Sagan novel I love even more. I had wanted to see it for a long time, so Dad added a few hours to our trip so we could go past it.

When we arrived at the array, we could hardly find any people around. Instead what we found were bugs—grasshoppers I think, and millions of them. Everywhere. The ground was literally covered in them; there wasn’t even any room to walk. Most people would probably have just gone ahead and walked right on top of them, crushing them as they went—but not my father. His gentleness extended even to the lowliest of creatures, and he wanted to make sure we didn’t harm any of the bugs. So he found a way for us to creep, slowly, across the desert, shooing away the bugs at each step, so that they would give us room to pass. We didn’t step on a single grasshopper that day, and I finally got the chance to touch one of the radio telescopes.

That’s about all I have. Thank you for listening, and taking the time to be here today. The world lost a very good man this week, and I know he will be sorely missed by all of us. No words can fully capture our sorrow, but there are a few in particular I think my father would have appreciated, said always on such occasions by one of his favorite authors:

So it goes.

# This is a battle for the soul of America

July 9, JDN 2457944

At the time of writing, I just got back from a protest march against President Trump in Santa Ana (the featured photo is one I took at the march). I had intended to go to the much larger sister protest in Los Angeles, but the logistics were too daunting. On the upside, presumably the marginal impact of my attendance was higher at the smaller event.

Protest marches are not a common pastime of mine; I am much more of an ivory-tower policy wonk than a boots-on-the-ground political activist. Just as other people seem to be allergic to statistics, I am allergic to a lack of statistics when broad claims are made with minimal evidence. Even when I basically agree with everything being said, I still feel vaguely uncomfortable marching and chanting in unison (and constantly reminded of that scene from Life of Brian). But I made an exception for this one, because Trump represents a threat to the soul of American democracy.

We have had bad leaders many times before—even awful leaders, even leaders whose bad decisions resulted in the needless deaths of thousands. But not since the end of the Civil War have we had leaders who so directly threatened the core institutions of America itself.

We must keep reminding ourselves: This is not normal. This is not normal! Donald Trump’s casual corruption, overwhelming narcissism, authoritarianism, greed, and utter incompetence (not to mention his taste in decor) make him more like Idi Amin or Hugo Chavez than like George H.W. Bush or Ronald Reagan. (Even the comparison with Vladimir Putin would be too flattering to Trump; Putin at least is competent.) He has publicly insulted over 300 people, places, and things—and counting.

Trump lies almost constantly, surrounds himself with family members and sycophants, refuses to listen to intelligence briefings, and personally demeans and even threatens journalists who criticize him. Every day it seems like there is a new scandal, more outrageous than the last; and after so long, this almost seems like a strategy. Every day he finds some new way to offend and undermine the basic norms of our society, and eventually he hopes to wear us down until we give up fighting.

It is certainly an exaggeration, and perhaps a dangerous one, to say that Donald Trump is the next Adolf Hitler. But there are important historical parallels between the rise of Trump and the rise of many other populist authoritarian demagogues. He casually violates democratic norms of civility, honesty, and transparency, and incentivizes the rest of us to do the same—a temptation we must resist. Political scientists and economists are now issuing public warnings that our democratic institutions are not as strong as we may think (though, to be fair, others argue that they are indeed strong enough).

It was an agonizingly close Presidential election. Even the tiniest differences could have flipped enough states to change the outcome. If we’d had a better voting system, it would not have happened; a simple plurality vote would have elected Hillary Clinton, and as I argued in a previous post, range voting would probably have chosen Bernie Sanders. Therefore, we must not take this result as a complete indictment of American society or a complete failure of American democracy. But let it shake us out of our complacency; democracy is only as strong as the will of its citizens to defend it.

# How we sold our privacy piecemeal

Apr 2, JDN 2457846

The US Senate just narrowly voted to remove restrictions on the sale of user information by Internet Service Providers. Right now, your ISP can basically sell your information to whomever they like without even telling you. The new rule that the Senate struck down would have required them to at least make you sign a form with some fine print on it, which you probably would sign without reading it. So in practical terms maybe it makes no difference.

…or does it? Maybe that’s really the mistake we’ve been making all along.

In cognitive science we have a concept called the just-noticeable difference (JND); it is basically what it sounds like. If you have two stimuli—two colors, say, or sounds of two different pitches—that differ by an amount smaller than the JND, people will not notice it. But if they differ by more than the JND, people will notice. (In practice it’s a bit more complicated than that, as different people have different JND thresholds and even within a person they can vary from case to case based on attention or other factors. But there’s usually a relatively narrow range of JND values, such that anything below that is noticed by no one and anything above that is noticed by almost everyone.)

The JND seems like an intuitively obvious concept—of course you can’t tell the difference between a color of 432.78 nanometers and 432.79 nanometers!—but it actually has profound implications. In particular it undermines the possibility of having truly transitive preferences. If you prefer some colors to others—which most of us do—but you have a nonzero JND in color wavelengths—as we all do—then I can do the following: Find one color you like (for concreteness, say you like blue of 475 nm), and another color you don’t (say green of 510 nm). Let you choose between the blue you like and another blue, 475.01 nm. Will you prefer one to the other? Of course not, the difference is within your JND. So now compare 475.01 nm and 475.02 nm; which do you prefer? Again, you’re indifferent. And I can go on and on this way a few thousand times, until finally I get to 510 nanometers, the green you didn’t like. I have just found a chain of your preferences that is intransitive; you said A = B = C = D… all the way down the line to X = Y = Z… but then at the end you said A > Z. Your preferences aren’t transitive, and therefore aren’t well-defined rational preferences. And you could do the same to me, so neither are mine.
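
The chain argument can be simulated directly. This sketch uses a made-up utility function over wavelengths and a hypothetical JND of 0.05 nm; the names are mine:

```python
JND = 0.05  # nanometers; hypothetical indifference threshold

def utility(wavelength):
    """Made-up preferences: likes 475 nm (blue), dislikes 510 nm (green)."""
    return -abs(wavelength - 475.0)

def reported_preference(a, b):
    """What a person *says* when asked to compare two wavelengths."""
    if abs(a - b) < JND:
        return "indifferent"
    return "prefers a" if utility(a) > utility(b) else "prefers b"

# Walk from 475 nm to 510 nm in steps of 0.01 nm (well under the JND):
step = 0.01
w = 475.0
chain_all_indifferent = True
while w < 510.0:
    if reported_preference(w, w + step) != "indifferent":
        chain_all_indifferent = False
    w += step

print(chain_all_indifferent)              # True: every link in the chain looks equal
print(reported_preference(475.0, 510.0))  # "prefers a": yet the endpoints differ
```

Every pairwise comparison reports indifference, while the direct comparison of the endpoints reports a strict preference, which is exactly the intransitivity described above.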

Part of the reason we’ve so willingly given up our privacy in the last generation or so is our paranoid fear of terrorism, which no doubt triggers deep instincts about tribal warfare. Depressingly, a plurality of Americans think that our government has not gone far enough in its obvious overreaches of the Constitution in the name of defending us from a threat that has killed fewer Americans in my lifetime than die from car accidents each month.

But that doesn’t explain why we—and I do mean we, for I am as guilty as most—have so willingly sold our relationships to Facebook and our schedules to Google. Google isn’t promising to save me from the threat of foreign fanatics; they’re merely offering me a more convenient way to plan my activities. Why, then, am I so cavalier about entrusting them with so much personal data?

Well, I didn’t start by giving them my whole life. I created an email account, which I used on occasion. I tried out their calendar app and used it to remind myself when my classes were. And so on, and so forth, until now Google knows almost as much about me as I know about myself.

At each step, it didn’t feel like I was doing anything of significance; perhaps indeed it was below my JND. Each bit of information I was giving didn’t seem important, and perhaps it wasn’t. But all together, our combined information allows Google to make enormous amounts of money without charging most of its users a cent.

The process goes something like this. Imagine someone offering you a penny in exchange for telling them how many times you made left turns last week. You’d probably take it, right? Who cares how many left turns you made last week? But then they offer another penny in exchange for telling them how many miles you drove on Tuesday. And another penny for telling them the average speed you drive during the afternoon. This process continues hundreds of times, until they’ve finally given you, say, \$5.00—and they know exactly where you live, where you work, and where most of your friends live, because all that information was encoded in the list of driving patterns you gave them, piece by piece.

Consider instead how you’d react if someone had offered, “Tell me where you live and work and I’ll give you \$5.00.” You’d be pretty suspicious, wouldn’t you? What are they going to do with that information? And \$5.00 really isn’t very much money. Maybe there’s a price at which you’d part with that information to a random suspicious stranger—but it’s probably at least \$50 or even more like \$500, not \$5.00. But by asking it in 500 different questions for a penny each, they can obtain that information from you at a bargain price.
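
The aggregation trick is easy to demonstrate with a toy example. Each answer, taken alone, is consistent with many people; intersecting a handful of answers against a population database singles out one person. The database and the questions here are entirely made up:

```python
# Hypothetical population database; each record is one person's attributes.
population = [
    {"name": "Alice", "city": "Ann Arbor", "commute_miles": 12, "left_turns": 30},
    {"name": "Bob",   "city": "Ann Arbor", "commute_miles": 12, "left_turns": 45},
    {"name": "Carol", "city": "Detroit",   "commute_miles": 12, "left_turns": 30},
    {"name": "Dave",  "city": "Ann Arbor", "commute_miles": 25, "left_turns": 30},
]

def matches(answers):
    """People consistent with every 'penny question' answered so far."""
    return [p for p in population
            if all(p[k] == v for k, v in answers.items())]

answers = {}
for key, value in [("left_turns", 30), ("commute_miles", 12), ("city", "Ann Arbor")]:
    answers[key] = value
    print(key, "->", len(matches(answers)), "candidates remain")

# Each single answer leaves several candidates; all three together leave one.
print(matches(answers)[0]["name"])  # "Alice"
```

Each individual answer is below anyone's privacy JND, but the intersection identifies you exactly, which is what the data buyer was paying pennies for all along.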

If you work out how much money Facebook and Google make from each user, it’s actually pitiful. Facebook has been increasing their revenue lately, but it’s still less than \$20 per user per year. The stranger asks, “Tell me who all your friends are, where you live, where you were born, where you work, and what your political views are, and I’ll give you \$20.” Do you take that deal? Apparently, we do. Polls find that most Americans are willing to exchange privacy for valuable services, often quite cheaply.

Of course, there isn’t actually an alternative social network that doesn’t sell data and instead just charges a subscription fee. I don’t think this is a fundamentally unfeasible business model, but it hasn’t succeeded so far, and it will have an uphill battle for two reasons.

The first is the obvious one: It would have to compete with Facebook and Google, who already have the enormous advantage of a built-in user base of hundreds of millions of people.

The second one is what this post is about: The social network based on conventional economics rather than selling people’s privacy can’t take advantage of the JND.

I suppose they could try—charge \$0.01 per month at first, then after awhile raise it to \$0.02, \$0.03 and so on until they’re charging \$2.00 per month and actually making a profit—but that would be much harder to pull off, and it would provide the least revenue when it is needed most, at the early phase when the up-front costs of establishing a network are highest. Moreover, people would still feel that; it’s a good feature of our monetary system that you can’t break money into small enough denominations to really consistently hide under the JND. But information can be broken down into very tiny pieces indeed. Much of the revenue earned by these corporate giants is actually based upon indexing the keywords of the text we write; we literally sell off our privacy word by word.
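
For a sense of why the penny-ramp version fails as a business model, here is a back-of-the-envelope sketch with hypothetical numbers: a fee rising one cent per month toward a \$2.00 cap, versus a flat \$2.00 per month, over the first two years of a million-user network:

```python
def ramp_fee(month):
    """Monthly fee in dollars: one cent in month 1, rising one cent per
    month up to a $2.00 cap. Numbers are hypothetical."""
    return min(0.01 * month, 2.00)

users = 1_000_000
months = 24

ramp_revenue = sum(ramp_fee(m) * users for m in range(1, months + 1))
flat_revenue = 2.00 * users * months

print(f"ramp, first {months} months: ${ramp_revenue:,.0f}")
print(f"flat, first {months} months: ${flat_revenue:,.0f}")
# The ramp earns only a small fraction of the flat fee during exactly the
# period when the network's up-front costs are highest.
```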

What should we do about this? Honestly, I’m not sure. Facebook and Google do in fact provide valuable services, without which we would be worse off. I would be willing to pay them their \$20 per year, if I could ensure that they’d stop selling my secrets to advertisers. But as long as their current business model keeps working, they have little incentive to change. There is in fact a huge industry of data brokering, corporations you’ve probably never heard of that make their revenue entirely from selling your secrets.

In a rare moment of actual journalism, TIME ran an article about a year ago arguing that we need new government policy to protect us from this kind of predation of our privacy. But they had little to offer in the way of concrete proposals.

The ACLU does better: They have specific proposals for regulations that should be made to protect our information from the most harmful prying eyes. But as we can see, the current administration has no particular interest in pursuing such policies—if anything they seem to do the opposite.

# In defense of slacktivism

Jan 22, JDN 2457776

It’s one of those awkward portmanteaus that people often make to try to express a concept in fewer syllables, while also implicitly saying that the phenomenon is specific enough to deserve its own word: “Slacktivism”, made of “slacker” and “activism”, much as “mansplain” is made of “man” and “explain”, “edutainment” was made of “education” and “entertainment”—or indeed “gerrymander” was made of “Elbridge Gerry” and “salamander”. The term seems to be particularly popular on Huffington Post, which has a whole category on slacktivism. There is a particular subcategory of slacktivism that is ironically against other slacktivism, which has been dubbed “snarktivism”.

It’s almost always used as a pejorative; very few people self-identify as “slacktivists” (though once I get through this post, you may see why I’m considering it myself). “Slacktivism” is activism that “isn’t real” somehow, activism that “doesn’t count”.

Of course, that raises the question: What “counts” as legitimate activism? Is it only protest marches and sit-ins? Then very few people have ever been or will ever be activists. Surely donations should count, at least? Those have a direct, measurable impact. What about calling your Congressman, or letter-writing campaigns? These have been staples of activism for decades.

If the term “slacktivism” means anything at all, it seems to point to activities surrounding raising awareness, where the goal is not to enact a particular policy or support a particular NGO but to simply get as much public attention to a topic as possible. It seems to be particularly targeted at blogging and social media—and that’s important, for reasons I’ll get to shortly. If you gather a group of people in your community and give a speech about LGBT rights, you’re an activist. If you send out the exact same speech on Facebook, you’re a slacktivist.

One of the arguments against “slacktivism” is that it can be used to funnel resources at the wrong things; this blog post makes a good point that the Kony 2012 campaign doesn’t appear to have actually accomplished anything except profits for the filmmakers behind it. (Then again: A blog post against slacktivism? Are you sure you’re not doing the very thing you’re arguing against right now?) But is this problem unique to slacktivism, or is it a more general phenomenon that people simply aren’t all that informed about how to have the most impact? There are an awful lot of inefficient charities out there, and in fact the most important waste of charitable funds involves people giving to their local churches. Fortunately, this is changing, as people become more secularized; churches used to account for over half of US donations, and now they account for less than a third. (Naturally, Christian organizations are pulling out their hair over this.) The 60 million Americans who voted for Trump made a horrible mistake and will cause enormous global damage; but they weren’t slacktivists, were they?

Studies do suggest that traditionally “slacktivist” activities like Facebook likes aren’t a very strong predictor of future, larger actions, and more private modes of support (like donations and calling your Congressman) tend to be stronger predictors. But so what? In order for slacktivism to be a bad thing, they would have to be a negative predictor. They would have to substitute for more effective activism, and there’s no evidence that this happens.

In fact, there’s even some evidence that slacktivism has a positive effect (normally I wouldn’t cite Fox News, but I think in this case we should expect a bias in the opposite direction, and you can read the full Georgetown study if you want):

> A study from Georgetown University in November entitled “Dynamics of Cause Engagement” looked at how Americans learned about and interacted with causes and other social issues, and discovered some surprising findings on slacktivism.
>
> While the traditional forms of activism like donating money or volunteering far outpace slacktivism, those who engage in social issues online are twice as likely as their traditional counterparts to volunteer and participate in events. In other words, slacktivists often graduate to full-blown activism.

At worst, most slacktivists are doing nothing for positive social change, and that’s what the vast majority of people have been doing for the entirety of human history. We can bemoan this fact, but that won’t change it. Most people are simply too uninformed to know what’s going on in the world, and too broke and too busy to do anything about it.

Indeed, slacktivism may be the one thing they can do—which is why I think it’s worth defending.

From an economist’s perspective, there’s something quite odd about how people’s objections to slacktivism are almost always formulated. The rational, sensible objection would be to their small benefits—this isn’t accomplishing enough, you should do something more effective. But in fact, almost all the objections to slacktivism I have ever read focus on their small costs—you’re not a “real activist” because you don’t make sacrifices like I do.

Yet it is a basic principle of economic rationality that, all other things equal, lower cost is better. Indeed, this is one of the few principles of economic rationality that I really do think is unassailable; perfect information is unrealistic and total selfishness makes no sense at all. But cost minimization is really very hard to argue with—why pay more, when you can pay less and get the same benefit?

From an economist’s perspective, the most important thing about an activity is its cost-effectiveness, measured either by net benefit (benefit minus cost) or rate of return (benefit divided by cost). But in both cases, a lower cost is always better; and in fact slacktivism has an astonishing rate of return, precisely because its cost is so small.

Suppose that a campaign of 10 million Facebook likes actually does have a 1% chance of changing a policy in a way that would save 10,000 lives, with a life expectancy of 50 years each. Surely this is conservative, right? I’m only giving it a 1% chance of success, on a policy with a relatively small impact (10,000 lives could be a single clause in an EPA regulatory standard), with a large number of slacktivist participants (10 million is more people than the entire population of Switzerland). Yet because clicking “like” and “share” only costs you maybe 10 seconds, we’re talking about an expected cost of (10 million)(10/86,400/365) = 3.2 QALY for an expected benefit of (10,000)(0.01)(50) = 5000 QALY. That is a rate of return of roughly 160,000%.

Let’s compare this to the rate of return on donating to a top charity like UNICEF, Oxfam, the Against Malaria Foundation, or the Schistosomiasis Control Initiative, for which donating about \$300 would save the life of 1 child, adding about 50 QALY. That \$300 most likely cost you about 0.01 QALY (assuming an annual income of \$30,000), so we’re looking at a return of 500,000%. Now, keep in mind that this is a huge rate of return, far beyond what you can ordinarily achieve, that donating \$300 to UNICEF is probably one of the best things you could possibly be doing with that money—and yet slacktivism lands within the same order of magnitude of efficiency. Maybe slacktivism doesn’t sound so bad after all?
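These back-of-the-envelope figures are easy to check directly. Here is a quick sketch in Python, using only the illustrative assumptions above (10 million participants, 10 seconds per click, a 1% chance of saving 10,000 lives at 50 QALY each, and a \$300 donation valued at 0.01 QALY against a \$30,000 income); none of these numbers are measured data:

```python
# QALY accounting for the two scenarios above.
# All inputs are the post's illustrative assumptions, not measured data.

SECONDS_PER_YEAR = 86_400 * 365

# Slacktivism: 10 million people each spend 10 seconds clicking.
participants = 10_000_000
seconds_each = 10
p_success = 0.01          # chance the campaign changes policy
lives_saved = 10_000
qaly_per_life = 50

slack_cost = participants * seconds_each / SECONDS_PER_YEAR
slack_benefit = lives_saved * p_success * qaly_per_life
slack_return = slack_benefit / slack_cost * 100   # as a percentage

# Donation: $300 saves one child (~50 QALY), costing ~1% of a year's income.
donate_cost = 300 / 30_000                        # 0.01 QALY
donate_return = 50 / donate_cost * 100

print(f"slacktivism: cost {slack_cost:.1f} QALY, "
      f"benefit {slack_benefit:.0f} QALY, return {slack_return:,.0f}%")
print(f"donation:    cost {donate_cost:.2f} QALY, "
      f"benefit 50 QALY, return {donate_return:,.0f}%")
```

Run as-is, this prints a slacktivism return of about 157,680% against 500,000% for the donation: both enormous, and the slacktivist one achieved at almost no cost per person.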

Of course, the net benefit of your participation is higher in the case of donation; you yourself contribute 50 QALY instead of only contributing 0.0005 QALY. Ultimately net benefit is what matters; rate of return is a way of estimating what the net benefit would be when comparing different ways of spending the same amount of time or money. But from the figures I just calculated, it begins to seem like maybe the very best thing you could do with your time is clicking “like” and “share” on Facebook posts that will raise awareness of policies of global importance. Now, you have to include all that extra time spent poring through other Facebook posts, and consider that you may not be qualified to assess the most important issues, and there’s a lot of uncertainty involved in what sort of impact you yourself will have… but it’s almost certainly not the worst thing you could be doing with your time, and frankly running these numbers has made me feel a lot better about all the hours I have actually spent doing this sort of thing. It’s a small benefit, yes—but it’s an even smaller cost.

Indeed, the fact that so many people treat low cost as bad, when it is almost by definition good, and the fact that they also target their ire so heavily at blogging and social media, says to me that what they are really trying to accomplish here has nothing to do with actually helping people in the most efficient way possible.

Rather, it’s two things.

The obvious one is generational—it’s yet another chorus in the unending refrain that is “kids these days”. Facebook is new, therefore it is suspicious. Adults have been complaining about their descendants since time immemorial; some of the oldest written works we have are of ancient Babylonians complaining that their kids are lazy and selfish. Either human beings have been getting lazier and more selfish for thousands of years, or, you know, kids are always a bit more lazy and selfish than their parents or at least seem so from afar.

The one that’s more interesting for an economist is signaling. By complaining that other people aren’t paying enough cost for something, what you’re really doing is complaining that they aren’t signaling like you are. The costly signal has been made too cheap, so now it’s no good as a signal anymore.

“Anyone can click a button!” you say. Yes, and? Isn’t it wonderful that now anyone with a smartphone (and there are more people with access to smartphones than toilets, because #WeLiveInTheFuture) can contribute, at least in some small way, to improving the world? But if anyone can do it, then you can’t signal your status by doing it. If your goal was to make yourself look better, I can see why this would bother you; all these other people doing things that look just as good as what you do! How will you ever distinguish yourself from the riffraff now?

This is also likely what’s going on as people fret that “a college degree’s not worth anything anymore” because so many people are getting them now; well, as a signal, maybe not. But if it’s just a signal, why are we spending so much money on it? Surely we can find a more efficient way to rank people by their intellect. I thought it was supposed to be an education—in which case the meteoric rise in global college enrollments should be cause for celebration. (In reality of course a college degree can serve both roles, and it remains an open question among labor economists as to which effect is stronger and by how much. But the signaling role is almost pure waste from the perspective of social welfare; we should be trying to maximize the proportion of real value added.)

For this reason, I think I’m actually prepared to call myself a slacktivist. I aim for cost-effective awareness-raising; I want to spread the best ideas to the most people for the lowest cost. Why, would you prefer I waste more effort, to signal my own righteousness?

# There is no problem of free will, just a lot of really confused people

Jan 15, JDN 2457769

I was hoping for some sort of news item to use as a segue, but none in particular emerged, so I decided to go on with it anyway. I haven’t done any cognitive science posts in a while, and this is one I’ve been meaning to write for a long time—actually it’s the sort of thing that even a remarkable number of cognitive scientists frequently get wrong, perhaps because the structure of human personality makes cognitive science inherently difficult.

Do we have free will?

The question has been asked so many times by so many people it is now a whole topic in philosophy. The Stanford Encyclopedia of Philosophy has an entire article on free will. The Information Philosopher has a gateway page “The Problem of Free Will” linking to a variety of subpages. There are even YouTube videos about “the problem of free will”.

The constant arguing back and forth about this would be problematic enough, but what really grates on me are the many, many people who write “bold” articles and books about how “free will does not exist”. Examples include Sam Harris and Jerry Coyne, and such pieces have been published in everything from Psychology Today to the Chronicle of Higher Education. There’s even a TED talk.

The worst ones are those that follow with “but you should believe in it anyway”. In The Atlantic we have “Free will does not exist. But we’re better off believing in it anyway.” Scientific American offers a similar view, “Scientists say free will probably doesn’t exist, but urge: “Don’t stop believing!””

This is a mind-bogglingly stupid approach. First of all, if you want someone to believe in something, you don’t tell them it doesn’t exist. Second, if something doesn’t exist, that is generally considered a pretty compelling reason not to believe in it. You’d need a really compelling counter-argument, and frankly I’m not even sure the whole idea is logically coherent. How can I believe in something if I know it doesn’t exist? Am I supposed to delude myself somehow?

But the really sad part is that it’s totally unnecessary. There is no problem of free will. There are just an awful lot of really, really confused people. (Fortunately not everyone is confused; there are those, such as Daniel Dennett, who actually understand what’s going on.)

The most important confusion is over what you mean by the phrase “free will”. There are really two core meanings here, and the conflation of them is about 90% of the problem.

1. Moral responsibility: We have “free will” if and only if we are morally responsible for our actions.

2. Noncausality: We have “free will” if and only if our actions are not caused by the laws of nature.

Basically, every debate over “free will” boils down to someone pointing out that noncausality doesn’t exist, and then arguing that this means that moral responsibility doesn’t exist. Then someone comes back and says that moral responsibility does exist, and then infers that this means noncausality must exist. Or someone points out that noncausality doesn’t exist, and then they realize how horrible it would be if moral responsibility didn’t exist, and then tells people they should go on believing in noncausality so that they don’t have to give up moral responsibility.

Let me be absolutely clear here: Noncausality could not possibly exist.

Noncausality isn’t even a coherent concept. Actions, insofar as they are actions, must, necessarily, by definition, be caused by the laws of nature.

I can sort of imagine an event not being caused; perhaps virtual electron-positron pairs can really pop into existence without ever being caused. (Even then I’m not entirely convinced; I think quantum mechanics might actually be deterministic at the most fundamental level.)

But an action isn’t just a particle popping into existence. It requires the coordinated behavior of some 10^26 or more particles, all in a precisely organized, unified way, structured so as to move some other similarly large quantity of particles through space in a precise way so as to change the universe from one state to another state according to some system of objectives. Typically, it involves human muscles intervening on human beings or inanimate objects. (Recently it has come to mean human fingers on computer keyboards, a rather large fraction of the time!) If what you do is an action—not a muscle spasm, not a seizure, not a slip or a trip, but something you did on purpose—then it must be caused. And if something is caused, it must be caused according to the laws of nature, because the laws of nature are the laws underlying all causality in the universe!

And once you realize that, the “problem of free will” should strike you as one of the stupidest “problems” ever proposed. Of course our actions are caused by the laws of nature! Why in the world would you think otherwise?

If you think that noncausality is necessary—or even useful—for free will, what kind of universe do you think you live in? What kind of universe could someone live in, that would fit your idea of what free will is supposed to be?

It’s like I said in that much earlier post about The Basic Fact of Cognitive Science (we are our brains): If you don’t think a mind can be made of matter, what do you think minds are made of? What sort of magical invisible fairy dust would satisfy you? If you can’t even imagine something that would satisfy the constraints you’ve imposed, did it maybe occur to you that your constraints are too strong?

Noncausality isn’t worth fretting over for the same reason that you shouldn’t fret over the fact that pi is irrational and you can’t make a square circle. There is no possible universe in which that isn’t true. So if it bothers you, it’s not that there’s something wrong with the universe—it’s clearly that there’s something wrong with you. Your thinking on the matter must be too confused, too dependent on unquestioned intuitions, if you think that murder can’t be wrong unless 2+2=5.

In philosophical jargon I am called a “compatibilist” because I maintain that free will and determinism are “compatible”. But this is much too weak a term. I much prefer Eliezer Yudkowsky’s “requiredism”, which he explains in one of the greatest blog posts of all time (seriously, read it immediately if you haven’t before—I’m okay with you cutting off my blog post here and reading his instead, because it truly is that brilliant), entitled simply “Thou Art Physics”. This quote sums it up briefly:

> My position might perhaps be called “Requiredism.” When agency, choice, control, and moral responsibility are cashed out in a sensible way, they require determinism—at least some patches of determinism within the universe. If you choose, and plan, and act, and bring some future into being, in accordance with your desire, then all this requires a lawful sort of reality; you cannot do it amid utter chaos. There must be order over at least those parts of reality that are being controlled by you. You are within physics, and so you/physics have determined the future. If it were not determined by physics, it could not be determined by you.

Free will requires a certain minimum level of determinism in the universe, because the universe must be orderly enough that actions make sense and there isn’t simply an endless succession of random events. Call me a “requiredist” if you need to call me something. I’d prefer you just realize the whole debate is silly because moral responsibility exists and noncausality couldn’t possibly.

We could of course use different terms besides “free will”. “Moral responsibility” is certainly a good one, but it is missing one key piece, which is the issue of why we can assign moral responsibility to human beings and a few other entities (animals, perhaps robots) and not to the vast majority of entities (trees, rocks, planets, tables), and why we are sometimes willing to say that even a human being does not have moral responsibility (infancy, duress, impairment).

This is why my favored term is actually “rational volition”. The characteristic that human beings have (at least most of us, most of the time), which also many animals and possibly some robots share (if not now, then soon enough), which justifies our moral responsibility is precisely our capacity to reason. Things don’t just happen to us the way they do to some 99.999999999% of the universe; we do things. We experience the world through our senses, have goals we want to achieve, and act in ways that are planned to make the world move closer to achieving those goals. We have causes, sure enough; but not just any causes. We have a specific class of causes, which are related to our desires and intentions—we call these causes reasons.

So if you want to say that we don’t have “free will” because that implies some mysterious nonsensical noncausality, sure; that’s fine. But then don’t go telling us that this means we don’t have moral responsibility, or that we should somehow try to delude ourselves into believing otherwise in order to preserve moral responsibility. Just recognize that we do have rational volition.

How do I know we have rational volition? That’s the best part, really: Experiments. While you’re off in la-la land imagining fanciful universes where somehow causes aren’t really causes even though they are, I can point to not only centuries of human experience but decades of direct, controlled experiments in operant conditioning. Human beings and most other animals behave quite differently in behavioral experiments than, say, plants or coffee tables. Indeed, it is precisely because of this radical difference that it seems foolish to even speak of a “behavioral experiment” about coffee tables—because coffee tables don’t behave, they just are. Coffee tables don’t learn. They don’t decide. They don’t plan or consider or hope or seek.

Japanese, as it turns out, may be a uniquely good language for cognitive science, because it has two fundamentally different verbs for “to be” depending on whether an entity is sentient. Humans and animals imasu, while inanimate objects merely arimasu. We have free will because and insofar as we imasu.

Once you get past that most basic confusion of moral responsibility with noncausality, there are a few other confusions you might run into as well. Another one is two senses of “reductionism”, which Dennett refers to as “ordinary” and “greedy”:

1. Ordinary reductionism: All systems in the universe are ultimately made up of components that always and everywhere obey the laws of nature.

2. Greedy reductionism: All systems in the universe just are their components, and have no existence, structure, or meaning aside from those components.

I actually had trouble formulating greedy reductionism as a coherent statement, because it’s such a nonsensical notion. Does anyone really think that a pile of two-by-fours is the same thing as a house? But people do speak as though they think this about human brains, when they say that “love is just dopamine” or “happiness is just serotonin”. But dopamine in a petri dish isn’t love, any more than a pile of two-by-fours is a house; and what I really can’t quite grok is why anyone would think otherwise.

Maybe they’re simply too baffled by the fact that love is made of dopamine (among other things)? They can’t quite visualize how that would work (nor can I, nor, I think, can anyone in the world at this level of scientific knowledge). You can see how the two-by-fours get nailed together and assembled into the house, but you can’t see how dopamine and action potentials would somehow combine into love.

But isn’t that a reason to say that love isn’t the same thing as dopamine, rather than that it is? I can understand why some people are still dualists who think that consciousness is somehow separate from the functioning of the brain. That’s wrong—totally, utterly, ridiculously wrong—but I can at least appreciate the intuition that underlies it. What I can’t quite grasp is why someone would go so far the other way and say that the consciousness they are currently experiencing does not exist.

Another thing that might confuse people is the fact that minds, as far as we know, are platform independent; that is, your mind could most likely be created out of a variety of different materials, from the gelatinous brain it currently is to some sort of silicon supercomputer, to perhaps something even more exotic. This independence follows from the widely-believed Church-Turing thesis, which essentially says that all computation is computation, regardless of how it is done. This may not actually be right, but I see many reasons to think that it is, and if so, this means that minds aren’t really what they are made of at all—they could be made of lots of things. What makes a mind a mind is how it is structured and above all what it does.

If this is baffling to you, let me show you how platform-independence works on a much simpler concept: Tables. Tables are also in fact platform-independent. You can make a table out of wood, or steel, or plastic, or ice, or bone. You could take out literally every single atom of a table and replace it with a completely different atom of a completely different element—carbon for iron, for example—and still end up with a table. You could conceivably even do so without changing the table’s weight, strength, size, etc., though that would be considerably more difficult.

Does this mean that tables somehow exist “beyond” their constituent matter? In some very basic sense, I suppose so—they are, again, platform-independent. But not in any deep, mysterious sense. Start with a wooden table, take away all the wood, and you no longer have a table. Take apart the table and you have a bunch of wood, which you could use to build something else. There is no “essence” comprising the table. There is no “table soul” that would persist when the table is deconstructed.
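In programming terms this is just the familiar distinction between an interface and its implementation: the interface says what something does, the implementation says what it is made of. A minimal sketch in Python (the class and method names are mine, purely illustrative):

```python
# Platform independence, illustrated with tables: what makes something a
# table is what it does (its interface), not what it is made of (its
# implementation). Class and method names here are purely illustrative.

from abc import ABC, abstractmethod

class Table(ABC):
    """A table is defined by its function, not its substrate."""

    @abstractmethod
    def material(self) -> str:
        """What this particular table happens to be made of."""

    def support(self, obj: str) -> str:
        # Identical behavior regardless of substrate.
        return f"{obj} rests on a {self.material()} table"

class WoodenTable(Table):
    def material(self) -> str:
        return "wooden"

class SteelTable(Table):
    def material(self) -> str:
        return "steel"

# Swap out every "atom" (the implementation) and table-ness survives:
for table in (WoodenTable(), SteelTable()):
    print(table.support("a cup"))

# But there is no "table soul": remove the substrate entirely and nothing
# is left to do the supporting -- the abstract Table by itself cannot
# even be instantiated.
```

The analogy is imperfect (minds are vastly more complex than tables), but the structure is the same: the interface persists across substrates, yet it never exists without some substrate.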