Financial fraud is everywhere

Jun 4, JDN 2457909
When most people think of “crime”, they probably imagine petty thieves, pickpockets, drug dealers, street thugs. In short, we think of crime as something poor people do. And certainly, that kind of crime is more visible, and typically easier to investigate and prosecute. It may be more traumatic to be victimized by it (though I’ll get back to that in a moment).

The statistics on this matter are some of the fuzziest I’ve ever come across, so estimates could be off by as much as an order of magnitude. But there is some reason to believe that, within most highly-developed countries, financial fraud may actually be more common than any other type of crime. It is definitely among the most common, and the only serious contenders for exceeding it are other forms of property crime such as petty theft and robbery.

It also appears that financial fraud is the one type of crime that isn’t falling over time. Violent crime and property crime are both at record lows; the average American’s probability of being victimized by a thief or a robber in any given year has fallen from 35% to 11% in the last 25 years. But the rate of financial fraud appears to be roughly constant, and the rate of high-tech fraud in particular is definitely rising. (This isn’t too surprising, given that the technology required is becoming cheaper and more widely available.)

In the UK, the rate of credit card fraud rose during the Great Recession, fell a little during the recovery, and has been holding steady since 2010; it is estimated that about 5% of people in the UK suffer credit card fraud in any given year.

About 1% of US car loans are estimated to contain fraudulent information (such as overestimated income or assets). As there are over $1 trillion in outstanding US car loans, that amounts to about $5 billion in fraud losses every year.

Using DOJ data, Statistic Brain found that over 12 million Americans suffer credit card fraud in any given year; based on the UK data, this is probably an underestimate. They also found that higher household income only slightly increased the probability of suffering such fraud.

The Office for Victims of Crime estimates that total US losses due to financial fraud are between $40 billion and $50 billion per year—which is to say, the GDP of Honduras or the military budget of Japan. The National Center for Victims of Crime estimated that over 10% of Americans suffer some form of financial fraud in any given year.

Why is fraud so common? Well, first of all, it’s profitable. Indeed, it appears to be the only type of crime that is. Most drug dealers live near the poverty line. Most bank robberies make off with less than $10,000.

But Bernie Madoff made over $50 billion before he was caught. Of course he was an exceptional case; the median Ponzi scheme only makes off with… $2.1 million. That’s over 200 times the median bank robbery.

Second, I think financial fraud allows the perpetrator a certain psychological distance from their victims. Just as it’s much easier to push a button telling a drone to launch a missile than to stab someone to death, it’s much easier to move some numbers between accounts than to point a gun at someone’s head and demand their wallet. Construal level theory is all about how making something seem psychologically more “distant” can change our attitudes toward it; toward things we perceive as “distant”, we think more abstractly, we accept more risks, and we are more willing to engage in violence to advance a cause. (It also makes us care less about outcomes, which may be a contributing factor in the collective apathy toward climate change.)

Perhaps related to this psychological distance, we also generally have a sense that fraud is not as bad as violent crime. Even judges and juries often act as though white-collar criminals aren’t real criminals. Often the argument seems to be that the behavior involved in committing financial fraud is not so different, after all, from the behavior of for-profit business in general; are we not all out to make an easy buck?

But no, it is not the same. (And if it were, this would be more an indictment of capitalism than it is a justification for fraud. So this sort of argument makes a lot more sense coming from socialists than it does from capitalists.)

One of the central justifications for free markets lies in the assumption that all parties involved are free, autonomous individuals acting under conditions of informed consent. Under those conditions, it is indeed hard to see why we have a right to interfere, as long as no one else is being harmed. Even if I am acting entirely out of my own self-interest, as long as I represent myself honestly, it is hard to see what I could be doing that is morally wrong. But take that away, as fraud does, and the edifice collapses; there is no such thing as a “right to be deceived”. (Indeed, it is quite common for Libertarians to say they allow any activity “except by force or fraud”, never quite seeming to realize that without the force of government we would all be surrounded by unending and unstoppable fraud.)

Indeed, I would like to present to you for consideration the possibility that large-scale financial fraud is worse than most other forms of crime, that someone like Bernie Madoff should be viewed as on a par with a rapist or a murderer. (To its credit, our justice system agrees—Madoff was given the maximum sentence of 150 years in maximum security prison.)

Suppose you were given the following terrible choice: Either you will be physically assaulted and beaten until several bones are broken and you fall unconscious—or you will lose your home and all the money you put into it. If the choice were between death and losing your home, obviously, you’d lose your home. But when it is a question of injury, that decision isn’t so obvious to me. If there is a risk of being permanently disabled in some fashion—particularly mentally disabled, as I find that especially terrifying—then perhaps I accept losing my home. But if it’s just going to hurt a lot and I’ll eventually recover, I think I prefer the beating. (Of course, if you don’t have health insurance, recovering from a concussion and several broken bones might also mean losing your home—so in that case, the dilemma is a no-brainer.) So when someone commits financial fraud on the scale of hundreds of thousands of dollars, we should consider them as having done something morally comparable to beating someone until they have broken bones.

But now let’s scale things up. What if terrorist attacks, or acts of war by a foreign power, had destroyed over one million homes, killed tens of thousands of Americans in one way or another, and cut the wealth of the median American family in half? Would we not count that as one of the greatest acts of violence in our nation’s history? Would we not feel compelled to take some overwhelming response—even be tempted toward acts of brutal vengeance? Yet that is the scale of the damage done by the Great Recession—much, if not all, of it preventable if our regulatory agencies had not been asleep at the wheel, lulled into a false sense of security by the unending refrain of laissez-faire. Most of the harm was done by actions that weren’t illegal, yes; but some of it actually was illegal (20% of direct losses are attributable to fraud), and most of the rest should have been illegal but wasn’t. The repackaging and selling of worthless toxic assets as AAA bonds may not legally have been “fraud”, but morally I don’t see how it was different. With this in mind, the actions of our largest banks are not even comparable to murder—they are comparable to invasion or terrorism. No mere individual shooting here; this is mass murder.

I plan to make this a bit of a continuing series. I hope that by now I’ve at least convinced you that the problem of financial fraud is a large and important one; in later posts I’ll go into more detail about how it is done, who is doing it, and what perhaps can be done to stop them.

Why “marginal productivity” is no excuse for inequality

May 28, JDN 2457902

In most neoclassical models, workers are paid according to their marginal productivity—the additional (market) value of goods that a firm is able to produce by hiring that worker. This is often used as an excuse for inequality: If someone can produce more, why shouldn’t they be paid more?

The most extreme example of this is people like Maura Pennington writing for Forbes about how poor people just need to get off their butts and “do something”; but there is a whole literature in mainstream economics, particularly “optimal tax theory”, arguing based on marginal productivity that we should tax the very richest people the least and never tax capital income. The Chamley-Judd Theorem famously “shows” (by making heroic assumptions) that taxing capital just makes everyone worse off because it reduces everyone’s productivity.

The biggest reason this is wrong is that there are many, many reasons why someone would have a higher income without being any more productive. They could inherit wealth from their ancestors and get a return on that wealth; they could have a monopoly or some other form of market power; they could use bribery and corruption to tilt government policy in their favor. Indeed, most of the top 0.01% do literally all of these things.

But even if you assume that pay is related to productivity in competitive markets, the argument is not nearly as strong as it may at first appear. Here I have a simple little model to illustrate this.

Suppose there are 10 firms and 10 workers. Suppose that firm 1 has 1 unit of effective capital (capital adjusted for productivity), firm 2 has 2 units, and so on up to firm 10 which has 10 units. And suppose that worker 1 has 1 unit of so-called “human capital”, representing their overall level of skills and education, worker 2 has 2 units, and so on up to worker 10 with 10 units. Suppose each firm only needs one worker, so this is a matching problem.

Furthermore, suppose that productivity is equal to capital times human capital: That is, if firm 2 hired worker 7, they would make 2*7 = $14 of output.

What will happen in this market if it converges to equilibrium?

Well, first of all, the most productive firm is going to hire the most productive worker—so firm 10 will hire worker 10 and produce $100 of output. What wage will they pay? Well, they need a wage high enough to keep worker 10 from trying to go elsewhere. They should therefore pay a wage of $90—the next-most-productive firm’s capital (9) times the worker’s human capital (10). That’s the highest wage any other firm could credibly offer; so if they pay this wage, worker 10 will not have any reason to leave.

Now the problem has been reduced to matching 9 firms to 9 workers. Firm 9 will hire worker 9, making $81 of output, and paying $72 in wages.

And so on, until worker 1 at firm 1 produces $1 and receives… $0. Because there is no way for worker 1 to threaten to leave, in this model they actually get nothing. If I assume there’s some sort of social welfare system providing say $0.50, then at least worker 1 can get that $0.50 by threatening to leave and go on welfare. (This, by the way, is probably the real reason firms hate social welfare spending; it gives their workers more bargaining power and raises wages.) Or maybe they have to pay that $0.50 just to keep the worker from starving to death.
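The equilibrium the last few paragraphs walk through can be checked in a few lines of code. This is my own sketch of the model, not from the original post; the $0.50 welfare floor is the hypothetical one assumed above.

```python
# Sketch of the matching model above: firm k has k units of capital,
# worker k has k units of human capital, and output = capital * human capital.
# In the assortative equilibrium, firm k hires worker k and pays the best
# outside offer: firm (k-1)'s capital times worker k's human capital.
# Worker 1's only outside option is the hypothetical $0.50 welfare floor.

def equilibrium(n=10, welfare_floor=0.50):
    results = {}
    for k in range(1, n + 1):
        output = k * k
        wage = max((k - 1) * k, welfare_floor)
        results[k] = {"output": output, "wage": wage, "profit": output - wage}
    return results

eq = equilibrium()
print(eq[10]["wage"])                  # 90
print(eq[1]["wage"])                   # 0.5
print(eq[10]["wage"] / eq[1]["wage"])  # 180.0
print(eq[10]["profit"], eq[9]["profit"], eq[1]["profit"])  # 10 9 0.5
```

Note how a 10-to-1 spread in both capital and skill produces a 180-to-1 spread in wages, while profits scale almost exactly linearly, which is the result discussed in the paragraphs that follow.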

What does inequality look like in this society?

Well, the most-productive firm only has 10 times as much capital as the least-productive firm, and the most-educated worker only has 10 times as much skill as the least-educated worker, so we might think that incomes would vary only by a factor of 10.

But in fact they vary by a factor of over 100.

The richest worker makes $90, while the poorest worker makes $0.50. That’s a ratio of 180. (Still lower than the ratio of average CEO pay to average employee pay in the US, by the way.) The worker is 10 times as productive, but receives 180 times as much income.

The firm profits vary along a more reasonable scale in this case; firm 1 makes a profit of $0.50 while firm 10 makes a profit of $10. Indeed, except for firm 1, firm n always makes a profit of $n. So that’s very nearly a linear scaling in productivity.

Where did this result come from? Why is it so different from the usual assumptions? All I did was change one thing: I allowed for increasing returns to scale.

If you make the usual assumption of constant returns to scale, this result can’t happen. Multiplying all the inputs by 10 should just multiply the output by 10, by assumption—since that is the definition of constant returns to scale.

But if you look at the structure of real-world incomes, it’s pretty obvious that we don’t have constant returns to scale.

If we had constant returns to scale, we should expect wages for the same person to vary only slightly from one workplace to another. In particular, a 2-fold increase in wage for the same worker would require more than a 2-fold increase in capital.

This is a bit counter-intuitive, so let me explain a bit further. If a 2-fold increase in capital results in a 2-fold increase in wage for a given worker, that’s increasing returns to scale—indeed, it’s precisely the production function I assumed above.

If you had constant returns to scale, a 2-fold increase in wage would require something like an 8-fold increase in capital. This is because you should get a 2-fold increase in total production by doubling everything—capital, labor, human capital, whatever else. So doubling capital by itself should produce a much weaker effect. For technical reasons I’d rather not get into at the moment, usually it’s assumed that production is approximately proportional to capital to the one-third power—so to double production you need to multiply capital by 2^3 = 8.
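To see where the 2^3 = 8 figure comes from concretely, here is a small check (my own sketch; the one-third exponent is the conventional Cobb-Douglas capital share mentioned above):

```python
# With constant-returns Cobb-Douglas production Y = K^(1/3) * L^(2/3),
# the competitive wage (the marginal product of labor) scales as (K/L)^(1/3).
# So for a fixed worker, doubling the wage requires 2^3 = 8 times the capital.

def wage(K, L=1.0, alpha=1/3):
    # Marginal product of labor: dY/dL = (1 - alpha) * (K/L)^alpha
    return (1 - alpha) * (K / L) ** alpha

print(wage(8.0) / wage(1.0))  # ~2.0: an 8-fold capital increase only doubles the wage
```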

I wasn’t able to quickly find really good data on wages for the same workers across different countries, but this should at least give a rough idea. In Mumbai, the minimum monthly wage for a full-time worker is about $80. In Shanghai, it is about $250. If you multiply out the US federal minimum wage of $7.25 per hour by 40 hours by 4 weeks, that comes to $1160 per month.

Of course, these are not the same workers. Even an “unskilled” worker in the US has a lot more education and training than a minimum-wage worker in India or China. But it’s not that much more. Maybe if we normalize India to 1, China is 3 and the US is 10.

Likewise, these are not the same jobs. Even a minimum wage job in the US is much more capital-intensive and uses much higher technology than most jobs in India or China. But it’s not that much more. Again let’s say India is 1, China is 3 and the US is 10.

If we had constant returns to scale, what should the wages be? Well, for India at productivity 1, the wage is $80. So for China at productivity 3, the wage should be $240—it’s actually $250, close enough for this rough approximation. But the US wage should be $800—and it is in fact $1160, 45% larger than we would expect by constant returns to scale.
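That back-of-envelope comparison can be laid out explicitly. All figures, and the 1/3/10 productivity normalization, are the rough ones from the text, not independent estimates of mine:

```python
# Predicted wages under constant returns: wage proportional to productivity,
# anchored to India's $80/month at productivity 1.
productivity = {"India": 1, "China": 3, "US": 10}  # rough normalization above
actual = {"India": 80, "China": 250, "US": 1160}   # monthly minimum wages, USD

base = actual["India"] / productivity["India"]  # $80 per productivity unit
for country in ("China", "US"):
    predicted = base * productivity[country]
    gap = actual[country] / predicted - 1
    print(f"{country}: predicted ${predicted:.0f}, actual ${actual[country]}, gap {gap:+.0%}")
# China: predicted $240, actual $250, gap +4%
# US: predicted $800, actual $1160, gap +45%
```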

Let’s try comparing within a particular industry, where the differences in skill and technology should be far smaller. The median salary for a software engineer in India is about 430,000 INR, which comes to about $6,700. If that sounds rather low for a software engineer, you’re probably more accustomed to the figure for US software engineers, which is $74,000. That is a factor of 11 to 1. For the same job. Maybe US software engineers are better than Indian software engineers—but are they that much better? Yes, you can adjust for purchasing power and shrink the gap: Prices in the US are about 4 times as high as those in India, so the real gap might be 3 to 1. But these huge price differences themselves need to be explained somehow, and even 3 to 1 for the same job in the same industry is still probably too large to explain by differences in either capital or education, unless you allow for increasing returns to scale.

In most industries, we probably don’t have quite as much increasing returns to scale as I assumed in my simple model. Workers in the US don’t make 100 times as much as workers in India, despite plausibly having both 10 times as much physical capital and 10 times as much human capital.

But in some industries, this model might not even be enough! The most successful authors and filmmakers, for example, make literally thousands of times as much money as the average author or filmmaker in their own country. J.K. Rowling has almost $1 billion from writing the Harry Potter series; this is despite having literally the same amount of physical capital and probably not much more human capital than the average author in the UK who makes only about 11,000 GBP—which is about $14,000. Harry Potter and the Philosopher’s Stone is now almost exactly 20 years old, which means that Rowling made an average of $50 million per year, some 3500 times as much as the average British author. Is she better than the average British author? Sure. Is she three thousand times better? I don’t think so. And we can’t even make the argument that she has more capital and technology to work with, because she doesn’t! They’re typing on the same laptops and using the same printing presses. Either the return on human capital for British authors is astronomical, or something other than marginal productivity is at work here—and either way, we don’t have anything close to constant returns to scale.
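The Rowling arithmetic in the paragraph above, spelled out (figures as given in the text):

```python
rowling_total = 1e9         # roughly $1 billion over the series' ~20 years
years = 20
avg_author_annual = 14_000  # ~11,000 GBP, roughly $14,000 per year

rowling_annual = rowling_total / years
print(rowling_annual)                             # 50000000.0, i.e. $50 million/year
print(round(rowling_annual / avg_author_annual))  # 3571, "some 3500 times" the average
```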

What can we take away from this? Well, if we don’t have constant returns to scale, then even if wage rates are proportional to marginal productivity, they aren’t proportional to the component of marginal productivity that you yourself bring. The same software developer makes more at Microsoft than at some Indian software company, the same doctor makes more at a US hospital than a hospital in China, the same college professor makes more at Harvard than at a community college, and J.K. Rowling makes three thousand times as much as the average British author—therefore we can’t speak of marginal productivity as inhering in you as an individual. It is an emergent property of a production process that includes you as a part. So even if you’re entirely being paid according to “your” productivity, it’s not really your productivity—it’s the productivity of the production process you’re involved in. A myriad of other factors had to snap into place to make your productivity what it is, most of which you had no control over. So in what sense, then, can we say you earned your higher pay?

Moreover, this problem becomes most acute precisely when incomes diverge the most. The differential in wages between two welders at the same auto plant may well be largely due to their relative skill at welding. But there’s absolutely no way that the top athletes, authors, filmmakers, CEOs, or hedge fund managers could possibly make the incomes they do by being individually that much more productive.

Our government just voted to let thousands of people die for no reason

May 14, JDN 2457888

The US House of Representatives just voted to pass a bill that will let thousands of Americans die for no reason. At the time of writing it hasn’t yet passed the Senate, but it may yet do so. And if it does, there can be little doubt that President Trump (a phrase I still feel nauseous saying) will sign it.

Some already call it Trumpcare (or “Trump-doesn’t-care”); but officially they call it the American Health Care Act. I think we should use the formal name, because it is a name which is already beginning to take on a dark irony; yes, only in America would such a terrible health care act be considered. Every other highly-developed country has a universal healthcare system; most of them have single-payer systems (and this has been true for over two decades).

The Congressional Budget Office estimates that the AHCA will increase the number of uninsured Americans by 24 million. Of these, 14 million will be people near the poverty line who lose access to Medicaid.

In 2009, a Harvard study estimated that 45,000 Americans die each year because they don’t have health insurance. This is on the higher end; other studies have estimated more like 20,000. But based on the increases in health insurance rates under Obamacare, somewhere between 5,000 and 10,000 American lives have been saved each year since it was enacted. That reduction came from insuring about 10 million people who weren’t insured before.

Making a linear projection, we can roughly estimate the number of additional Americans who will die every year if this American Health Care Act is implemented. (24 million/10 million)(5,000 to 10,000) = 12,000 to 24,000 deaths per year. For comparison, there are about 14,000 total homicides in the United States each year (and we have an exceptionally high homicide rate for a highly-developed country).

Indeed, morally, it might make sense to count these deaths as homicides (by the principle of “depraved indifference”); Trump therefore intends to double our homicide rate.
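The linear projection above is just a ratio scaling; as a sketch, with every input being an estimate quoted in the text rather than my own:

```python
newly_uninsured = 24_000_000    # CBO estimate of coverage lost under the AHCA
aca_newly_insured = 10_000_000  # roughly how many people the ACA newly insured
lives_saved_range = (5_000, 10_000)  # annual deaths averted under the ACA, per the estimates above

# Scale the ACA's estimated lives saved by the ratio of coverage lost to coverage gained.
projected = tuple(x * newly_uninsured // aca_newly_insured for x in lives_saved_range)
print(projected)  # (12000, 24000) additional deaths per year
```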

Of course, it will not be prosecuted this way. And one can even make an ethical case for why it shouldn’t be, why it would be impossible to make policy if every lawmaker had to face the consequences of every policy choice. (Start a war? A hundred thousand deaths. Fail to start a war in response to a genocide? A different hundred thousand deaths.)

But for once, I might want to make an exception. Because these deaths will not be the result of a complex policy trade-off with merits and demerits on both sides. They will not be the result of honest mistakes or unforeseen disasters. These people will die out of pure depraved indifference.

We had a healthcare bill that was working. Indeed, Obamacare was remarkably successful. It increased insurance rates and reduced mortality rates while still managing to slow the growth in healthcare expenditure.

The only real cost was an increase in taxes on the top 5% (and particularly the top 1%) of the income distribution. But the Republican Party—and make no mistake, the vote was on almost completely partisan lines, and not a single Democrat supported it—has now made it a matter of official policy that they care more about cutting taxes on millionaires than they do about poor people dying from lack of healthcare.

Yet there may be a silver lining in all of this: Once people saw that Obamacare could work, the idea of universal healthcare in the United States began to seem like a serious political position. The Overton Window has grown. Indeed, it may even have shifted to the left for once; the responses to the American Health Care Act have consisted almost uniformly of shock and outrage, when really what the bill does is go back to the same awful system we had before. Going backward and letting thousands of people die for no reason should appall people—but I feared that it might not, because it would seem “normal”. We in America have grown very accustomed to letting poor people die in order to slightly increase the profits of billionaires, and I thought this time might be no different—but it was different. Once Obamacare actually passed and began to work, people really saw what was happening—that all this suffering and death wasn’t necessary, it wasn’t an inextricable part of having a functioning economy. And now that they see that, they aren’t willing to go back.

Can we have property rights without violence?

Apr 23, JDN 2457867

Most likely, you have by now heard of the incident on a United Airlines flight, where a man was beaten and dragged out of a plane because the airline decided that they needed more seats than they had. In case you somehow missed all the news articles and memes, the Wikipedia page on the incident is actually fairly good.

There is a lot of gossip about the passenger’s history, which the flight crew couldn’t possibly have known and is therefore irrelevant. By far the best take I’ve seen on the ethical and legal implications of the incident can be found on Naked Capitalism, so if you do want to know more about it I highly recommend starting there. Probably the worst take I’ve read is on The Pilot Wife Life, but I suppose if you want a counterpoint there you go.

I really have little to add on this particular incident; instead my goal here is to contextualize it in a broader discussion of property rights in general.

Despite the fact that what United’s employees and contractors did was obviously unethical and very likely illegal, there are still a large number of people defending their actions. Aiming for a Woodman if not an Ironman, the most coherent defense I’ve heard offered goes something like this:

Yes, what United did in this particular case was excessive. But it’s a mistake to try to make this illegal, because any regulation that did so would necessarily impose upon fundamental property rights. United owns the airplane; they can set the rules for who is allowed to be on that airplane. And once they set those rules, they need to be able to enforce them. Sometimes, however distasteful it may be, that enforcement will require violence. But property rights are too important to give up. Would you want to live in a society where anyone could just barge into your home and you were not allowed to use force to remove them?

Understood in this context, United contractors calling airport security to get a man dragged off of a plane isn’t an isolated act of violence for no reason; it is part of a broader conflict between the protection of property rights and the reduction of violence. “Stand your ground” laws, IMF “structural adjustment” policies, even Trump’s wall against immigrants can be understood as part of this broader conflict.

One very far-left approach to resolving such a conflict—as taken by the Paste editorial “You’re not mad at United Airlines; you’re mad at America”—is to fall entirely on the side of nonviolence, and say essentially that any system which allows the use of violence to protect property rights is fundamentally corrupt and illegitimate.

I can see why such a view is tempting. It’s simple, for one thing, and that’s always appealing. But if you stop and think carefully about the consequences of this hardline stance, it becomes clear that such a system would be unsustainable. If we could truly never use violence ever to protect any property rights, that would mean that property law in general could no longer be enforced. People could in fact literally break into your home and steal your furniture, and you’d have no recourse, because the only way to stop them would involve either using violence yourself or calling the police, who would end up using violence. Property itself would lose all its meaning—and for those on the far-left who think that sounds like a good thing, I want you to imagine what the world would look like if the only things you could ever use were the ones you could physically hold onto, where you’d leave home never knowing whether your clothes or your food would still be there when you came back. A world without property sounds good if you are imagining that the insane riches of corrupt billionaires would collapse; but if you stop and think about coming home to no food and no furniture, perhaps it doesn’t sound so great. And while it does sound nice to have a world where no one is homeless because they can always find a place to sleep, that may seem less appealing if your home is the one that a dozen homeless people decide to squat in.

The Tragedy of the Commons would completely destroy any such economic system; the only way to sustain it would be either to produce such an enormous abundance of wealth that no amount of greed could ever overtake it, or, more likely, somehow re-engineer human brains so that greed no longer exists. I’m not aware of any fundamental limits on greed; as long as social status increases monotonically with wealth, there will be people who try to amass as much wealth as they possibly can, far beyond what any human being could ever actually consume, much less need. How do I know this? Because they already exist; we call them “billionaires”. A billionaire, essentially by definition, is a hoarder of wealth who owns more than any human being could consume. If someone happens upon a billion dollars and immediately donates most of it to charity (as J.K. Rowling did), they can escape such a categorization; and if they use the wealth to achieve grand visionary ambitions—and I mean real visions, not like Steve Jobs but like Elon Musk—perhaps they can as well. Saving the world from climate change and colonizing Mars are the sort of projects that really do take many billions of dollars to achieve. (Then again, shouldn’t our government be doing these things?) And if they just hold onto the wealth or reinvest it to make even more, a billionaire is nothing less than a hoarder, seeking gratification and status via ownership itself.

Indeed, I think the maximum amount of wealth one could ever really need is probably around $10 million in today’s dollars; with that amount, even a very low-risk investment portfolio could supply enough income to live wherever you want, wear whatever you want, drive whatever you want, eat whatever you want, travel whenever you want. At even a 5% return, that’s $500,000 per year to spend without ever working or depleting your savings. At 10%, you’d get a million dollars a year for sitting there and doing nothing. And yet there are people with one thousand times as much wealth as this.

But not all property is of this form. I was about to say “the vast majority” is not, but actually that’s not true; a large proportion of wealth is in fact in the form of capital hoarded by the rich. Indeed, about 50% of the world’s wealth is owned by the richest 1%. (To be fair, the world’s top 1% is a broader category than one might think; the top 1% in the world is about the top 5% in the US; based on census data, that puts the cutoff at about $250,000 in net wealth.) But the majority of people have wealth in some form, and would stand to suffer if property rights were not enforced at all.

So we might be tempted to the other extreme, as the far-right seems to be, and say that any force is justified in the protection of fundamental property rights—that if vagrants step onto my land, I am well within my rights to get out my shotgun. (You know, hypothetically; not that I own a shotgun, or, for that matter, any land.) This seems to appeal especially to those who nostalgize the life on the frontier, “living off the land” (often losing family members to what now seem like trivial bacterial illnesses), “self-sufficient” (with generous government subsidies), in the “unspoiled wilderness” (from which the Army had forcibly removed Native Americans). Westerns have given us this sense that frontier life offers a kind of freedom and adventure that this urbane civilization lacks. And I suppose I am a fan of at least one Western, since one should probably count Firefly.

Yet of course this is madness; no civilization could survive if it really allowed people to just arbitrarily “defend” whatever property claims they decided to make. Indeed, it’s really just the flip side of the coin; as we’ve seen in Somalia (oh, by the way, we’re deploying troops there again), not protecting property and allowing universal violence to defend any perceived property largely amount to the same thing. If anything, the far-left fantasy seems more appealing; at least then we would not be subject to physical violence, and could call upon the authorities to protect us from that. In the far-right fantasy, we could accidentally step on what someone else claims to be his land and end up shot in the head.

So we need to have rules about who can use violence to defend what property and why. And that, of course, is complicated. We can start by having a government that defines property claims and places limits on their enforcement; but that still leaves the question of which sort of property claims and enforcement mechanisms the government should allow.

I think the principle should essentially be minimum force. We do need to protect property rights, yes; but if there is a way of doing so without committing violence, that’s the way we should do it. And if we do need to use violence, we should use as little as possible.

In theory we already do this: We have “rules of engagement” for the military and “codes of conduct” for police. But in practice, these rules are rarely enforced; they only get applied to really extreme violations, and sometimes not even then. The idea seems to be that enforcing strict rules on our soldiers and police officers constitutes disloyalty, even treason. We should “let them do their jobs”. This is the norm that must change. Those rules are their jobs. If they break those rules, they aren’t doing their jobs—they’re doing something else, something that endangers the safety and security of our society. The disloyalty is not in investigating and enforcing rules against police misconduct—the disloyalty is in police misconduct. If you want to be a cop but you’re not willing to follow the rules, you don’t actually want to be a cop—you want to be a bully with a gun and a badge.

And of course, one need not be a government agency in order to use excessive force. Many private corporations have security forces of their own, which frequently abuse and assault people. Most terrifying of all, there are whole corporations of “private military contractors”—let’s call them what they are: mercenaries—like Academi, formerly known as Blackwater. The whole reason these corporations even exist is to evade regulations on military conduct, and that is why they must be eliminated.

In the United case, there was obviously a nonviolent answer; all they had to do was offer to pay people to give up their seats, and bid up the price until enough people left. Someone would have left eventually; there clearly was a market-clearing price. That would have cost $2,000, maybe $5,000 at the most—a lot better than the $255 million lost in United’s stock value as a result of the bad PR.
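There is even a simple algorithm implicit here: keep raising the offer until enough passengers accept. A minimal sketch, with a hypothetical cabin and made-up walk-away prices (the `buyback_auction` helper and all dollar figures are illustrations, not any actual airline procedure):

```python
import random

def buyback_auction(walk_away_prices, seats_needed, increment=100):
    """Ascending-price buyback: raise the per-seat offer in fixed
    steps until enough passengers would accept it. Returns the
    final offer and the total payout."""
    offer = 0
    while True:
        offer += increment
        volunteers = sum(1 for p in walk_away_prices if p <= offer)
        if volunteers >= seats_needed:
            return offer, offer * seats_needed

# Hypothetical 70-passenger cabin with walk-away prices between
# $200 and $3,000; the airline needs 4 seats back.
random.seed(42)
cabin = [random.randint(200, 3000) for _ in range(70)]
offer, payout = buyback_auction(cabin, seats_needed=4)
```

The loop always terminates once the offer clears the highest walk-away price in the cabin, and the market-clearing offer is just the fourth-lowest walk-away price, rounded up to the next increment, which is exactly the sense in which "there clearly was a market-clearing price."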

If a homeless person decides to squat in your house, yes, perhaps you’d be justified in calling the police to remove them. Clearly you’re under no obligation to provide them room and board indefinitely. But there may be better solutions: Is there a homeless shelter in the area? Could you give them a ride there, or at least bus fare?

When immigrants cross our borders, may we turn them away? Now, here’s one where I’m pretty strongly tempted to go all the way and say we have no right whatsoever to stop them. There are no requirements for being born into citizenship, after all—so on what grounds do we add requirements to acquire citizenship? Is there something in the water of the Great Lakes and the Mississippi River that, when you drink it for 18 years (processed by municipal water systems of course; what are we, barbarians?), automatically makes you into a patriotic American? Does one become more law-abiding, or less capable of cruelty or fanaticism, by being brought into the world on one side of an imaginary line in the sand? If there are going to be requirements for citizenship, shouldn’t they be applied to everyone, and not just people who were born in the wrong place?

Yes, when we have no other choice, we must be prepared to use violence to defend property—because otherwise, there’s no such thing as property. But more often than not, we use violence when we didn’t need to, or use much more violence than was actually necessary. The principle that violence can be justified in defense of property does not entail that any violence is always justified in defense of property.

Unpaid work and the double burden

Apr 16, JDN 2457860

When we say the word “work”, what leaps to mind is usually paid work in the formal sector—the work people do for employers. When you “go to work” each morning, you are going to do your paid work in the formal sector.

But a large quantity of the world’s labor does not take this form. First, there is the informal sector—work done for cash “under the table”, where there is no formal employment structure and often no reporting or payment of taxes. Many economists estimate that the majority of the world’s workers are employed in the informal sector. The ILO found that informal employment comprises as much as 70% of employment in some countries. However, it depends how you count: A lot of self-employment could be considered either formal or informal. If you base it on whether you do any work outside an employer-employee relationship, informal sector work is highly prevalent around the world. If you base it on not reporting to the government to avoid taxes, informal sector work is less common. If it must be your primary source of income, whether or not you pay taxes, informal sector work is uncommon. And if you only include informal sector work when it is your primary income source and not reported to the government, informal sector work is relatively rare and largely restricted to underdeveloped countries.

But that’s not really my focus for today, because you at least get paid in the informal sector. Nor am I talking about forced labor—that is, slavery, essentially—which is a serious human rights violation that sadly still goes on in many countries.

No, the unpaid work I want to talk about today is work that people willingly do for free.

I’m also excluding internships and student work, where (at least in theory) the idea is that instead of getting paid you are doing the work in order to acquire skills and experience that will be valuable to you later on. I’m talking about work that you do for its own sake.

Such work can be divided into three major categories.
First there is vocation—the artist who would paint even if she never sold a single canvas; the author who is compelled to write day and night and would give the books away for free. Vocation is work that you do for fun, or because it is fulfilling. It doesn’t even feel like “work” in quite the same sense. For me, writing and research are vocation, at least in part; even if I had $5 million in stocks I would still do at least some writing and research as part of what gives my life meaning.

Second there is volunteering—the soup kitchen, the animal shelter, the protest march. Volunteering is work done out of altruism, to help other people or work toward some greater public goal. You don’t do it for yourself, you do it for others.

Third, and really my main focus for this post, is domestic labor—vacuuming the rug, mopping the floor, washing the dishes, fixing the broken faucet, changing the baby’s diapers. This is generally not work that anyone finds particularly meaningful or fulfilling, nor is it done out of any great sense of altruism (perhaps toward your own family, but that’s about the extent of it). But you also don’t get paid to do it. You do it because it must be done.

There is also considerable overlap, of course: Many people find meaning in their activism or charitable work, and part of what motivates artists and authors is a desire to change the world.

Vocation is ultimately what I would like to see the world move towards. One of the great promises of a basic income is that it might finally free us from the grind of conventional employment that has gripped us ever since we first managed to escape the limitations of subsistence farming—which in turn gripped us ever since we escaped the desperation of hunter-gatherer survival. The fourth great stage in human prosperity might finally be a world where we can work not for food or for pay, but for meaning. A world of musicians and painters, of authors and playwrights, of sculptors and woodcutters, yes; but also a world of cinematographers and video remixers, of 3D modelers and holographers, of VR designers and video game modders. If you ever fret that no work would be done without the constant pressure of the wage incentive, spend some time on Stack Overflow or the Steam Workshop. People will spend hundreds of person-hours at extremely high-skill tasks—I’m talking AI programming and 3D modeling here—not for money but for fun.

Volunteering is frankly kind of overrated; as the Effective Altruism community will eagerly explain to you any chance they get, it’s usually more efficient for you to give money rather than time, because money is fungible while giving your time only makes sense if your skills are actually the ones that the project needs. If this criticism of so much well-intentioned work sounds petty, note that literally thousands of lives would be saved each year if instead of volunteering people donated an equivalent amount of money so that charities could hire qualified workers instead. Unskilled volunteers and donations of useless goods after a disaster typically cause what aid professionals call the “second disaster”. Still, people do find meaning in volunteering, and there is value in that; and also there are times when you really are the best one to do it, particularly when it comes to local politics.

But what should we do with domestic labor?

Some of it can and will be automated away—the Parable of the Dishwasher with literal dishwashers. But it will be a while before it all can be, and right now it’s still a bit expensive. Maybe instead of vacuuming I should buy a Roomba—but $500 feels like a lot of money right now.

Much domestic labor we could hire out to someone else, but we simply choose not to. I could always hire someone to fix my computer, unclog my bathtub, or even mop my floors; I just don’t because it seems too expensive.
From the perspective of an economist, it’s actually a bit odd that it seems too expensive. I might have a comparative advantage in fixing my computer—it’s mine, after all, so I know its ins and outs, and while I’m no hotshot Google admin I am a reasonably competent programmer and debugger in my own right. And while for many people auto repair is a household chore, I do actually hire auto mechanics; I don’t even change my own oil, though partly that’s because my little Smart has an extremely compact design that makes it hard to work on. But I surely have no such comparative advantage in cleaning my floors or unclogging my pipes; so why doesn’t it seem worth it to hire someone else to do that?

Maybe I’m being irrational; hiring a cleaning service isn’t that expensive after all. I could hire a cleaning service to do my whole apartment for something like $80, and if I scheduled a regular maid it would probably be something like that per month. That’s what I would charge for two hours of tutoring, so maybe it would behoove me to hire a maid and spend that extra time tutoring or studying.
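That tradeoff is easy to put in back-of-envelope form. A minimal sketch, using the post’s $80 service price and implied $40/hour tutoring rate, plus an assumed three hours for doing the cleaning myself (that last figure is an illustration, not from the text):

```python
# Back-of-envelope check on hiring out a chore. The $80 service
# price and $40/hour tutoring rate come from the post; the three
# hours of cleaning time is an assumed, illustrative figure.

def net_gain_from_hiring(chore_hours, hourly_wage, service_price):
    """Value of the freed-up time (if actually spent earning),
    minus what the service costs. Positive means hiring out wins,
    ignoring transaction costs and social-norm discomfort."""
    return chore_hours * hourly_wage - service_price

gain = net_gain_from_hiring(chore_hours=3, hourly_wage=40, service_price=80)
# gain == 40: hiring the cleaner leaves you $40 ahead, but only if
# those three hours really do get filled with paid tutoring.
```

The caveat in the last comment is the whole argument of the next few paragraphs: the calculation only favors hiring out if the freed time is actually converted into earnings, and transaction costs and social norms eat into the margin.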

Or maybe it’s this grad student budget of mine; money is pretty tight at the moment, as I go through this strange societal ritual where young adults endure a period of near-poverty, overwhelming workload, and constant anxiety not in spite of being intelligent and hard-working but because of it. Perhaps if and when I get that $70,000 job as a professional economist my marginal utility of wealth will decrease and I will feel more inclined to hire maid services.

There are also transaction costs I save on by doing the work myself. A maid would have to commute here, first of all, reducing the efficiency gains from their comparative advantage in the work; but more than that, there’s a lot of effort I’d have to put in just to prepare for the maid and deal with any problems that might arise. There are scheduling issues, and the work probably wouldn’t get done as quickly unless I were to spend enough to hire a maid on a regular basis. There’s also a psychological cost in comfort and privacy to dealing with a stranger in one’s home, and a small but nontrivial risk that the maid might damage or steal something important.

But honestly it might be as simple as social norms (remember: to a first approximation, all human behavior is social norms). Regardless of whether or not it is affordable, it feels strange to hire a maid. That’s the sort of thing only rich, decadent people do. A responsible middle-class adult is supposed to mop their own floors and do their own laundry. Indeed, while hiring a plumber or an auto mechanic feels like paying for a service, hiring a maid crosses a line and feels like hiring a servant. (I honestly always feel a little awkward around the gardeners hired by our housing development for that reason. I’m only paying them indirectly, but there’s still this vague sense that they are somehow subservient—and surely, we are of quite distinct socioeconomic classes. Maybe it would help if I brushed up on my Spanish and got to know them better?)

And then there’s the gender factor. Being in a same-sex couple household changes the domestic labor dynamic quite a bit relative to the conventional opposite-sex couple household. Even in ostensibly liberal, feminist, egalitarian households, and even when both partners are employed full-time, it usually ends up being the woman who does most of the housework. This is true in the US; it is true in the UK; it is true in Europe; indeed it’s true in most if not all countries around the world, and, unsurprisingly, it is worst in India, where women spend a whopping five hours per day more on housework than men. (I was not surprised by the fact that Japan and China also do poorly, given their overall gender norms; but I’m a bit shocked at how badly Ireland and Italy do on this front.) And yes, while #ScandinaviaIsBetter, still in Sweden and Norway women spend half an hour to an hour more on housework on an average day than men.

Which, of course, supports the social norm theory. Any time you see both an overwhelming global trend against women and considerable cross-country variation within that trend, your first hypothesis should be sexism. Without the cross-country variation, maybe it could be biology—the sex differences in height and upper-body strength, for example, are pretty constant across countries. But women doing half an hour more in Norway but five hours more in India looks an awful lot like sexism.

This is called the double burden: To meet the social norms of being responsible middle-class adults, men are merely expected to work full-time at a high-paying job, but women are expected to do both the full effort of maintaining a household and the full effort of working at a full-time job. This is surely an improvement over the time when women were excluded from the formal workforce, not least because of the financial freedom that full-time work affords many women; but it would be very nice if we could also find a way to share some of that domestic burden as well. There has been some trend toward a less unequal share of housework as more women enter the workforce, but it still has a long way to go, even in highly-developed countries.

So, we can start by trying to shift the social norm that housework is gendered: Women clean the floors and change the diapers, while men fix the car and paint the walls. Childcare in particular is something that should be done equally by all parents, and while it’s plausible that one person may be better or worse at mopping or painting, it strains credulity to think that it’s always the woman who is better at mopping and the man who is better at painting.

Yet perhaps this is a good reason to try to shift away from another social norm as well, the one where only rich people hire maids and maids are servants. Unfortunately, it’s likely that most maids will continue to be women for the foreseeable future—cleaning services are gendered in much the same way that nursing and childcare are gendered. But at least by getting paid to clean, one can fulfill the “job” norm and the “housekeeping” norm in one fell swoop; and then women who are in other professions can carry only one burden instead of two. And if we can begin to think of cleaning services as more like plumbing and auto repair—buying a service, not hiring a servant—this is likely to improve the condition and social status of a great many maids. I doubt we’d ever get to the point where mopping floors is as prestigious as performing neurosurgery, but maybe we can at least get to the point where being a maid is as respectable as being a plumber. Cleaning needs to be done; it shouldn’t be shameful to be someone who is very good at doing it and gets paid to do so. (That is perhaps the most pernicious aspect of socioeconomic class, this idea that some jobs are “shameful” because they are done by workers with less education or involve more physical labor.)
This also makes good sense in terms of economic efficiency: Your comparative advantage is probably not in cleaning services, or if it is then perhaps you should do that as a career. So by selling your labor at whatever you are good at and then buying the services of someone who is especially good at cleaning, you should, at least in theory, be able to get the same cleaning done and maintain the same standard of living for yourself while also accomplishing more at whatever it is you do in your profession and providing income for whomever you hire to do the cleaning.

So, should I go hire a cleaning service after all? I don’t know, that still sounds pretty expensive.

Is intellectual property justified?

Feb 12, JDN 2457797

I had hoped to make this week’s post more comprehensive, but as I’ve spent the last week suffering from viral bronchitis I think I will keep this one short and revisit the topic in a few weeks.

Intellectual property underlies an increasingly large proportion of the world’s economic activity, more so now than ever before. We don’t just patent machines anymore; we patent drugs, and software programs, and even plants. Compared to that, copyrights on books, music, and movies seem downright pedestrian.

Though surely not the only cause, this is almost certainly contributing to the winner-takes-all effect; if you own the patent to something important, you can appropriate a huge amount of wealth to yourself with very little effort.

Moreover, this is not something that happened automatically as a natural result of market forces or autonomous human behavior. This is a policy, one that requires large investments in surveillance and enforcement to maintain. Intellectual property is probably the single largest market intervention that our government makes, and it is in a very strange direction: With antitrust law, the government seeks to undermine monopolies; but with intellectual property, the government seeks to protect monopolies.

So it’s important to ask: What is the justification for intellectual property? Do we actually have a good reason for doing this?

The basic argument goes something like this:

Many intellectual endeavors, such as research, invention, and the creation of art, require a large up-front investment of resources to complete, but once completed it costs almost nothing to disseminate the results. There is a very large fixed cost that makes it difficult to create these goods at all, but once they exist, the marginal cost of producing more of them is minimal.

If we didn’t have any intellectual property, once someone created an invention or a work of art, someone else could simply copy it and sell it at a much lower price. If enough competition emerged to drive price down to marginal cost, the original creator of the good would not only not profit, but would actually take an enormous loss, as they paid that large fixed cost but none of their competitors did.

Thus, knowing that they will take a loss if they do, individuals will not create inventions or works of art in the first place. Without intellectual property, all research, invention, and art would grind to a halt.

That last sentence sounds terrible, right? What would we do without research, invention, or art? But then if you stop and think about it for a minute, it becomes clear that this can’t possibly be the outcome of eliminating intellectual property. Most societies throughout the history of human civilization have not had a system of intellectual property, and yet they have all had art, and most of them have had research and invention as well.

If intellectual property is to be defended, it can’t be because we would have none of these things without it—it must be that we would have less, and so much less that it offsets the obvious harms of concentrating so much wealth and power in a handful of individuals.

I had hoped to get into the empirical results of different intellectual property regimes, but due to my illness I’m going to save that for another day.

Instead I’m just going to try to articulate what the burden of proof here really needs to be.

First of all, showing that we spend a lot of money on patents contributes absolutely nothing useful to defending them. Yes, we all know patents are expensive. The question is whether they are worth it. To show that this is not a strawman, here’s an article by IP Watchdog that treats a new study showing that “academic patent licensing contributed more than $1 trillion to the U.S. economy over eighteen years” as some kind of knockdown argument in favor of patents. If you actually showed that this economic activity would not exist without patents, then that would be an argument for patents. But all this study actually shows is that we spend that much on patents, which says nothing about whether this is a good use of resources. It’s like when people try to defend the F-35 boondoggle by saying “it supports thousands of jobs!”; well, yes, but what about the millions of jobs we could be supporting instead if we used that money for something more efficient? (And indeed, the evidence is quite clear that spending on the F-35 destroys more jobs than it creates.) So any serious estimate of the economic benefits of intellectual property must also come with an estimate of its economic costs, or it is just propaganda.
It’s not enough to show some non-negligible (much less “statistically significant”) increase in innovation as a result of intellectual property. The effect size is critical; the increase in innovation needs to be large enough that it justifies having world-spanning monopolies that concentrate the world’s wealth in the hands of a few individuals. Because we already know that intellectual property concentrates wealth: patents and copyrights are monopolies, and monopolies concentrate wealth. It’s not enough to show that there is a benefit; that benefit must be greater than the cost, and there must be no alternative method that would achieve a greater net benefit.
It’s also important to be clear what we mean by “innovation”; this can be a very difficult thing to measure. But in principle what we really want to know is whether we are supporting important innovation—whether we will get more Mona Lisas and more polio vaccines, not simply whether we will get more Twilight and more Viagra. And one of the key problems with intellectual property as a method of funding innovation is that there is only a vague link between the profits that can be extracted and the benefits of the innovation. (Though to be fair, this is actually a more general problem; it is literally a mathematical theorem that competitive markets only maximize utility if you value rich people more, in inverse proportion to their marginal utility of wealth.)
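The theorem alluded to in that parenthetical is, I believe, the standard Negishi-weight characterization of competitive equilibrium; in notation of my own choosing (none of these symbols appear in the post), a competitive allocation maximizes a weighted sum of utilities in which each person’s weight is the reciprocal of their marginal utility of wealth:

```latex
% Sketch of the result alluded to above. A competitive equilibrium
% allocation x* maximizes the weighted utilitarian objective
\[
  W(x) = \sum_i \lambda_i\, u_i(x_i),
  \qquad
  \lambda_i = \left(\frac{\partial u_i}{\partial m_i}\right)^{-1},
\]
% where u_i is agent i's utility and m_i their wealth. Since richer
% agents have lower marginal utility of wealth, they receive higher
% welfare weights: the market "values rich people more" in exactly
% inverse proportion to their marginal utility of wealth.
```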

Innovation is certainly important. Indeed, it is no exaggeration to say that innovation is the foundation of economic development and civilization itself. Defenders of intellectual property often want you to stop the conversation there: “Innovation is important!” Don’t let them. It’s not enough to say that innovation is important; intellectual property must also be the best way of achieving that innovation.

Is it? Well, in a few weeks I’ll get back to what the data actually says on this. There is some evidence supporting intellectual property—but the case is a lot weaker than you have probably been led to believe.

In defense of slacktivism

Jan 22, JDN 2457776

It’s one of those awkward portmanteaus that people often make to try to express a concept in fewer syllables, while also implicitly saying that the phenomenon is specific enough to deserve its own word: “slacktivism”, made of “slacker” and “activism”, much as “mansplain” is made of “man” and “explain”, “edutainment” of “education” and “entertainment”, and indeed “gerrymander” of “Elbridge Gerry” and “salamander”. The term seems to be particularly popular on Huffington Post, which has a whole category on slacktivism. There is a particular subcategory of slacktivism that is ironically against other slacktivism, which has been dubbed “snarktivism”.

It’s almost always used as a pejorative; very few people self-identify as “slacktivists” (though once I get through this post, you may see why I’m considering it myself). “Slacktivism” is activism that “isn’t real” somehow, activism that “doesn’t count”.

Of course, that raises the question: What “counts” as legitimate activism? Is it only protest marches and sit-ins? Then very few people have ever been or will ever be activists. Surely donations should count, at least? Those have a direct, measurable impact. What about calling your Congressman, or letter-writing campaigns? These have been staples of activism for decades.
If the term “slacktivism” means anything at all, it seems to point to activities surrounding raising awareness, where the goal is not to enact a particular policy or support a particular NGO but to simply get as much public attention to a topic as possible. It seems to be particularly targeted at blogging and social media—and that’s important, for reasons I’ll get to shortly. If you gather a group of people in your community and give a speech about LGBT rights, you’re an activist. If you send out the exact same speech on Facebook, you’re a slacktivist.

One of the arguments against “slacktivism” is that it can be used to funnel resources at the wrong things; this blog post makes a good point that the Kony 2012 campaign doesn’t appear to have actually accomplished anything except profits for the filmmakers behind it. (Then again: A blog post against slacktivism? Are you sure you’re not doing right now the thing you think you are against?) But is this problem unique to slacktivism, or is it a more general phenomenon that people simply aren’t all that informed about how to have the most impact? There are an awful lot of inefficient charities out there, and in fact the most important waste of charitable funds involves people giving to their local churches. Fortunately, this is changing, as people become more secularized; churches used to account for over half of US donations, and now they only account for less than a third. (Naturally, Christian organizations are pulling out their hair over this.) The 60 million Americans who voted for Trump made a horrible mistake and will cause enormous global damage; but they weren’t slacktivists, were they?

Studies do suggest that traditionally “slacktivist” activities like Facebook likes aren’t a very strong predictor of future, larger actions, and more private modes of support (like donations and calling your Congressman) tend to be stronger predictors. But so what? In order for slacktivism to be a bad thing, they would have to be a negative predictor. They would have to substitute for more effective activism, and there’s no evidence that this happens.

In fact, there’s even some evidence that slacktivism has a positive effect (normally I wouldn’t cite Fox News, but I think in this case we should expect a bias in the opposite direction, and you can read the full Georgetown study if you want):

A study from Georgetown University in November entitled “Dynamics of Cause Engagement” looked how Americans learned about and interacted with causes and other social issues, and discovered some surprising findings on Slacktivism.

While the traditional forms of activism like donating money or volunteering far outpaces slacktivism, those who engage in social issues online are twice as likely as their traditional counterparts to volunteer and participate in events. In other words, slacktivists often graduate to full-blown activism.

At worst, most slacktivists are doing nothing for positive social change, and that’s what the vast majority of people have been doing for the entirety of human history. We can bemoan this fact, but that won’t change it. Most people are simply too uninformed to know what’s going on in the world, and too broke and too busy to do anything about it.

Indeed, slacktivism may be the one thing they can do—which is why I think it’s worth defending.

From an economist’s perspective, there’s something quite odd about how people’s objections to slacktivism are almost always formulated. The rational, sensible objection would be to their small benefits—this isn’t accomplishing enough, you should do something more effective. But in fact, almost all the objections to slacktivism I have ever read focus on their small costs—you’re not a “real activist” because you don’t make sacrifices like I do.

Yet it is a basic principle of economic rationality that, all other things equal, lower cost is better. Indeed, this is one of the few principles of economic rationality that I really do think is unassailable; perfect information is unrealistic and total selfishness makes no sense at all. But cost minimization is really very hard to argue with—why pay more, when you can pay less and get the same benefit?

From an economist’s perspective, the most important thing about an activity is its cost-effectiveness, measured either by net benefit (benefit minus cost) or rate of return (benefit divided by cost). But in both cases, a lower cost is always better; and in fact slacktivism has an astonishing rate of return, precisely because its cost is so small.

Suppose that a campaign of 10 million Facebook likes actually does have a 1% chance of changing a policy in a way that would save 10,000 lives, with a life expectancy of 50 years each. Surely this is conservative, right? I’m only giving it a 1% chance of success, on a policy with a relatively small impact (10,000 lives could be a single clause in an EPA regulatory standard), with a large number of slacktivist participants (10 million is more people than the entire population of Switzerland). Yet because clicking “like” and “share” only costs you maybe 10 seconds, we’re talking about an expected cost of (10 million)(10/86,400/365) = 3.2 QALY for an expected benefit of (10,000)(0.01)(50) = 5,000 QALY. That is a rate of return of roughly 160,000%.

Let’s compare this to the rate of return on donating to a top charity like UNICEF, Oxfam, the Against Malaria Foundation, or the Schistosomiasis Control Initiative, for which donating about $300 would save the life of 1 child, adding about 50 QALY. That $300 most likely cost you about 0.01 QALY (assuming an annual income of $30,000), so we’re looking at a return of 500,000%. Now, keep in mind that this is a huge rate of return, far beyond what you can ordinarily achieve, and that donating $300 to UNICEF is probably one of the best things you could possibly be doing with that money; yet slacktivism comes within a factor of a few of even that efficiency. Maybe slacktivism doesn’t sound so bad after all?
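Since these figures are just arithmetic on stated assumptions, they are easy to recompute (the scenario numbers are the hypotheticals from the text, not real data):

```python
SECONDS_PER_YEAR = 86_400 * 365  # 31,536,000 seconds

# Slacktivism scenario from the text: 10 million people spend 10
# seconds each; 1% chance of saving 10,000 lives at 50 QALY apiece.
cost_qaly = 10_000_000 * 10 / SECONDS_PER_YEAR   # about 3.2 QALY
benefit_qaly = 10_000 * 0.01 * 50                # 5,000 QALY
slacktivism_return = benefit_qaly / cost_qaly    # roughly 1,600x

# Donation comparison: $300 out of a $30,000 annual income is 0.01
# QALY of earning time, and saves one child for about 50 QALY.
donation_return = 50 / (300 / 30_000)            # 5,000x
```

Both returns are enormous by any ordinary standard; the point is not which one narrowly wins but that the slacktivist’s return is in the same league despite the triviality of the action.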

Of course, the net benefit of your participation is higher in the case of donation; you yourself contribute 50 QALY instead of only contributing 0.0005 QALY. Ultimately net benefit is what matters; rate of return is a way of estimating what the net benefit would be when comparing different ways of spending the same amount of time or money. But from the figures I just calculated, it begins to seem like clicking “like” and “share” on Facebook posts that will raise awareness of policies of global importance may be among the most efficient things you can do with your time. Now, you have to include all that extra time spent poring through other Facebook posts, and consider that you may not be qualified to assess the most important issues, and there’s a lot of uncertainty involved in what sort of impact you yourself will have… but it’s almost certainly not the worst thing you could be doing with your time, and frankly running these numbers has made me feel a lot better about all the hours I have actually spent doing this sort of thing. It’s a small benefit, yes—but it’s an even smaller cost.

Indeed, the fact that so many people treat low cost as bad, when it is almost by definition good, and the fact that they also target their ire so heavily at blogging and social media, says to me that what they are really trying to accomplish here has nothing to do with actually helping people in the most efficient way possible.

Rather, it’s two things.

The obvious one is generational: it's yet another chorus in the unending refrain of "kids these days". Facebook is new, therefore it is suspicious. Adults have been complaining about their descendants since time immemorial; some of the oldest written works we have are complaints from ancient Babylonians that their kids are lazy and selfish. Either human beings have been getting lazier and more selfish for thousands of years, or, you know, kids are always a bit lazier and more selfish than their parents, or at least seem so from afar.

The one that’s more interesting for an economist is signaling. By complaining that other people aren’t paying enough cost for something, what you’re really doing is complaining that they aren’t signaling like you are. The costly signal has been made too cheap, so now it’s no good as a signal anymore.

“Anyone can click a button!” you say. Yes, and? Isn’t it wonderful that now anyone with a smartphone (and there are more people with access to smartphones than toilets, because #WeLiveInTheFuture) can contribute, at least in some small way, to improving the world? But if anyone can do it, then you can’t signal your status by doing it. If your goal was to make yourself look better, I can see why this would bother you; all these other people doing things that look just as good as what you do! How will you ever distinguish yourself from the riffraff now?

This is also likely what’s going on as people fret that “a college degree’s not worth anything anymore” because so many people are getting them now; well, as a signal, maybe not. But if it’s just a signal, why are we spending so much money on it? Surely we can find a more efficient way to rank people by their intellect. I thought it was supposed to be an education—in which case the meteoric rise in global college enrollments should be cause for celebration. (In reality of course a college degree can serve both roles, and it remains an open question among labor economists as to which effect is stronger and by how much. But the signaling role is almost pure waste from the perspective of social welfare; we should be trying to maximize the proportion of real value added.)

For this reason, I think I’m actually prepared to call myself a slacktivist. I aim for cost-effective awareness-raising; I want to spread the best ideas to the most people for the lowest cost. Why, would you prefer I waste more effort, to signal my own righteousness?