We are in a golden age of corporate profits

Sep 2 JDN 2458364

Take a good look at this graph, from the Federal Reserve Economic Database:

[Figure: corporate profits before and after tax, and federal corporate tax revenue (FRED)]
The red line is corporate profits before tax. It is, unsurprisingly, the largest. The purple line is corporate profits after tax, with the standard adjustments for inventory valuation and capital consumption. The green line is revenue from the federal corporate tax. Finally, I added a dashed blue line which multiplies before-tax profits by 30% to compare more directly with tax revenues. All these figures are annual, inflation-adjusted using the GDP deflator. The units are hundreds of billions of 2012 dollars.
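If you want to rebuild this chart yourself, here is a minimal sketch in Python using pandas_datareader. One caveat: the FRED series IDs below are my best guesses for the series described, so verify them on fred.stlouisfed.org before trusting the output.

```python
# A sketch of rebuilding this chart from FRED. Series IDs are my best
# guesses for the series described above -- verify before trusting.
import pandas_datareader.data as pdr

series = {
    "A053RC1Q027SBEA": "profits_pretax",   # corporate profits before tax
    "CP": "profits_aftertax",              # corporate profits after tax
    "FCTAX": "corp_tax_revenue",           # federal corporate tax receipts
    "GDPDEF": "gdp_deflator",              # GDP deflator (2012 = 100)
}
df = pdr.DataReader(list(series), "fred", start="1947-01-01")
df = df.rename(columns=series).resample("A").mean()

for col in ("profits_pretax", "profits_aftertax", "corp_tax_revenue"):
    df[col] /= df["gdp_deflator"] / 100    # convert to 2012 dollars

df["pretax_times_30pct"] = 0.30 * df["profits_pretax"]  # the dashed blue line
print(df.tail())
```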

The first thing you should notice is that the red and purple lines are near the highest they have ever been. Before-tax profits are over $2 trillion. After-tax profits are over $1.6 trillion.

Yet, corporate tax revenues are not the highest they have ever been. In 2006, they were over $400 billion; yet this year they don’t even reach $300 billion. The obvious reason for this is that we have been cutting corporate taxes. The more important reason is that corporations have gotten very good at avoiding whatever corporate taxes we charge.

On the books, we used to have a corporate tax rate of about 35%, which Trump just cut to 21%. But if you look at my dashed line, you can see that corporations haven’t actually paid more than 30% of their profits in taxes since 1970—and back then, the rate on the books was almost 50%.

Corporations have always avoided taxes. The effective tax rate—tax revenue divided by profits—is always much lower than the rate on the books. In 1951, the statutory tax rate was 50.75%; the effective rate was 47%. In 1970, the statutory rate was 49.2%; the effective rate was 31%. In 1993, the statutory rate was 35%; the effective rate was 26%. On average, corporations paid about 2/3 to 3/4 of what the statutory rate said.

[Figure: statutory vs. effective corporate tax rate]

You can even see how the effective rate trended steadily downward, much faster than the statutory rate. Corporations got better and better at finding and creating loopholes to let them avoid taxes. In 1950, the statutory rate was 38%—and sure enough, the effective rate was… 38%. Under Truman, corporations actually paid what they said they paid. Compare that to 1987, under Reagan, when the statutory rate was 40%—but the effective rate was only 26%.
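To make the pattern concrete, here is a quick Python check recomputing the effective-to-statutory ratios from the figures quoted above:

```python
# Effective rate as a share of the statutory rate, using the numbers above.
rates = {1950: (38.0, 38.0), 1951: (50.75, 47.0), 1970: (49.2, 31.0),
         1987: (40.0, 26.0), 1993: (35.0, 26.0)}
for year, (statutory, effective) in sorted(rates.items()):
    print(f"{year}: paid {effective / statutory:.0%} of the statutory rate")
# 1950: 100%, 1951: 93%, 1970: 63%, 1987: 65%, 1993: 74% --
# near parity under Truman, roughly 2/3 to 3/4 from the 1970s onward.
```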

Yet even with that downward trend, something happened under George W. Bush that widened the gap even further. While the statutory rate remained fixed at 35%, the effective rate plummeted from 26% in 2000 to 16% in 2002. The effective rate never again rose above 19%, and in 2009 it hit a minimum of just over 10%—less than one-third the statutory tax rate. It was trending upward, making it as “high” as 15%, until Trump’s tax cuts hit; in 2017 it was 13%, and it is projected to be even lower this year.

This is why it has always been disingenuous to compare our corporate tax rates with other countries and complain that they are too high. Our effective corporate tax rates have been in line with most other highly-developed countries for a long time now. The idea of “cutting rates and removing loopholes” sounds good in principle—but never actually seems to happen. George W. Bush’s “tax reforms”, which were supposed to do exactly that, instead added so many loopholes that the effective tax rate plummeted.

I’m actually fairly ambivalent about corporate taxes in general. Their incidence really isn’t well-understood, though as Krugman has pointed out, so much of corporate profit is now monopoly rent that we can reasonably expect most of the incidence to fall on shareholders. What I’d really like to see happen is a repeal of the corporate tax combined with an increase in capital gains taxes. But we haven’t been increasing capital gains taxes; we’ve just been cutting corporate taxes.

The result has been a golden age for corporate profits. Make higher profits than ever before, and keep almost all of them without paying taxes! Never mind that the deficit is exploding and our infrastructure is falling apart. America was founded in part on a hatred of taxes, so I guess we’re still carrying on that proud tradition.

What does a central bank actually do?

Aug 26 JDN 2458357

Though central banks are a cornerstone of the modern financial system, I don’t think most people have a clear understanding of how they actually function. (I think this may be by design; there are many ways we could make central banking more transparent, but policymakers seem reluctant to show their hand.)

I’ve even seen famous economists make really severe errors in their understanding of monetary policy, as John Taylor did when he characterized low-interest-rate policy as a “price ceiling”.

Central banks “print money” and “set interest rates”. But how exactly do they do these things, and what on Earth do they have to do with each other?

The first thing to understand is that most central banks don’t actually print money. In the US, cash is actually printed by the Department of the Treasury. But cash is only a small part of the money in circulation. The monetary base consists of currency in circulation plus the reserves banks hold at the Fed; the US monetary base is about $3.6 trillion. The money supply can be measured a few different ways, but the standard way is to include checking accounts, traveler’s checks, savings accounts, money market accounts, small-denomination certificates of deposit, and basically anything that can be easily withdrawn and spent as money. This is called the M2 money supply, and in the US it is currently over $14.1 trillion. That means that only about 25% of our money supply corresponds to base money—the rest is all digital bank-account money. This proportion is actually relatively high by historical standards, because the monetary base was greatly increased in response to the Great Recession. When we say that the Fed “prints money”, what we really mean is that they are increasing the money supply—but typically they do so in a way that involves little if any actual printing of cash.

The second thing to understand is that central banks don’t exactly set interest rates either. They target interest rates. What’s the difference, you ask?

Well, setting interest rates would mean that they made a law saying you have to charge exactly 2.7%, and you get fined or something if you don’t.

Targeting interest rates is a subtler art. The Federal Reserve decides what interest rates they want banks to charge, and then they engage in what are called open-market operations to try to make that happen. Banks hold reserves—money that they are required to keep on hand to back their deposits. Since we are in a fractional-reserve system, they are required to hold reserves equal to only a certain fraction of their deposits (usually about 10%). In open-market operations, the Fed buys and sells assets (usually US Treasury bonds) in order to either increase or decrease the amount of reserves available to banks, to try to get them to lend to each other at the targeted interest rates.
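Here’s a deliberately toy sketch of that process in Python; the demand curve and all the numbers are made up purely to illustrate the mechanism, not calibrated to anything real:

```python
# Toy model of interest-rate targeting via open-market operations.
def market_rate(reserves):
    """Stylized inverse demand for reserves: more reserves -> lower rate."""
    return max(0.0, 8.0 - 2.0 * reserves)  # rate in %, reserves in $ trillions

def open_market_operations(target_rate, reserves, step=0.01):
    """Buy bonds (adding reserves) or sell bonds (draining them)
    until the interbank rate sits at the target."""
    while abs(market_rate(reserves) - target_rate) > 0.01:
        if market_rate(reserves) > target_rate:
            reserves += step   # buy bonds: pay with newly created reserves
        else:
            reserves -= step   # sell bonds: the payment extinguishes reserves
    return reserves

reserves = open_market_operations(target_rate=2.0, reserves=1.0)
print(f"Reserves: ${reserves:.2f} trillion, rate: {market_rate(reserves):.2f}%")
```

Note that the Fed never decrees the 2% rate; it just keeps adjusting the quantity of reserves until the market produces it.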

Why not simply set the interest rate by law? Because then it wouldn’t be the market-clearing interest rate. There would be shortages or gluts of assets.

It might be easier to grasp this if we step away from money for a moment and just think about the market for some other good, like televisions.

Suppose that the government wants to set the price of a television in the market to a particular value, say $500. (Why? Who knows. Let’s just run with it for a minute.)

If they simply declared by law that the price of a television must be $500, here’s what would happen: Either that would be too low, in which case there would be a shortage of televisions as demand exceeded supply; or that would be too high, in which case there would be a glut of televisions as supply exceeded demand. Only if they got spectacularly lucky and the market price already was $500 per television would they not have to worry about such things (and then, why bother?).

But suppose the government had the power to create and destroy televisions virtually at will with minimal cost.

Now, they have a better way; they can target the price of a television, and buy and sell televisions as needed to bring the market price to that target. If the price is too low, the government can buy and destroy a lot of televisions, to bring the price up. If the price is too high, the government can make and sell a lot of televisions, to bring the price down.

Now, let’s go back to money. This power to create and destroy at will is hard to believe for televisions, but absolutely true for money. The government can create and destroy almost any amount of money at will—they are limited only by the very inflation and deflation the central bank is trying to affect.

This allows central banks to intervene in the market without creating shortages or gluts; even though they are effectively controlling the interest rate, they are doing so in a way that avoids having a lot of banks wanting to take loans they can’t get or wanting to give loans they can’t find anyone to take.

The goal of all this manipulation is ultimately to reduce inflation and unemployment. Unfortunately it’s basically impossible to eliminate both simultaneously; the Phillips curve describes the relationship generally found in the data, whereby decreased inflation usually comes with increased unemployment and vice-versa. But the basic idea is that we set reasonable targets for each (usually about 2% inflation and 5% unemployment; frankly I’d prefer we swap the two, which was more or less what we did in the 1950s), and then if inflation is too high we raise interest rate targets, while if unemployment is too high we lower interest rate targets.
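Stated as code, the policy logic looks something like the following stylized Taylor-type rule; the coefficients are textbook-style assumptions of mine, not the Fed’s actual procedure:

```python
# A stylized Taylor-type rule: raise rates when inflation runs hot,
# cut them when unemployment is high. Coefficients are illustrative.
def target_rate(inflation, unemployment,
                neutral_rate=2.0, inflation_target=2.0, natural_unemployment=5.0):
    return (neutral_rate + inflation
            + 0.5 * (inflation - inflation_target)
            - 1.0 * (unemployment - natural_unemployment))

print(target_rate(inflation=3.0, unemployment=4.0))  # overheating: 6.5%
print(target_rate(inflation=1.0, unemployment=8.0))  # slump: -0.5% (the zero lower bound bites)
```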

What if they’re both too high? Then we’re in trouble. This has happened; it is called stagflation. The money supply isn’t the only thing affecting inflation and unemployment, and sometimes we get hit with a bad shock that makes both of them high at once. In that situation, there isn’t much that monetary policy can do; we need to find other solutions.

But how does targeting interest rates lead to inflation? To be quite honest, we don’t actually know.

The basic idea is that lower interest rates should lead to more borrowing, which leads to more spending, which leads to more inflation. But beyond that, we don’t actually understand how interest rates translate into prices—this is the so-called transmission mechanism, which remains an unsolved problem in macroeconomics. Based on the empirical data, I lean toward the view that the mechanism is primarily via housing prices; lower interest rates lead to more mortgages, which raises the price of real estate, which raises the price of everything else. This also makes sense theoretically, as real estate consists of large, illiquid assets for which the long-term interest rate is very important. Your decision to buy an apple or even a television is probably not greatly affected by interest rates—but your decision to buy a house surely is.
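A quick calculation with the standard fixed-rate mortgage annuity formula shows why housing is the natural pressure point (the $400,000 loan is just an illustrative number):

```python
# Standard fixed-rate mortgage payment (annuity formula).
def monthly_payment(principal, annual_rate, years=30):
    r = annual_rate / 12       # monthly interest rate
    n = years * 12             # number of payments
    return principal * r / (1 - (1 + r) ** -n)

for rate in (0.03, 0.05, 0.07):
    print(f"{rate:.0%}: ${monthly_payment(400_000, rate):,.0f}/month")
# 3%: $1,686; 5%: $2,147; 7%: $2,661 -- the same house costs almost 60% more
# per month to finance at 7% than at 3%. No apple purchase works like that.
```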

If that is indeed the case, it’s worth thinking about whether this is really the right way to intervene on inflation and unemployment. High housing prices are an international crisis; maybe we need to be looking at ways to decrease unemployment without affecting housing prices. But that is a tale for another time.

Slides from my presentation at Worldcon

Whether you are a regular reader curious about my Worldcon talk, or a Worldcon visitor interested in seeing the slides: the slides from my presentation, “How do we get to the Federation from here?”, can be found here.

I will be presenting at Worldcon this year!

I interrupt my usual broadcast for this special report. I will be speaking at Worldcon 76 in San Jose this year. My talk, “How do we get to the Federation from here?”, is on world government, and will be held in room 212C of the convention center at 5:00 PM on Sunday, August 19. (Here is Worldcon’s complete program guide.)

In lieu of my regular blog post next week, I’ll be posting the slides from my talk.

What would a game with realistic markets look like?

Aug 12 JDN 2458343

From Pokemon to Dungeons & Dragons, Final Fantasy to Mass Effect, almost all role-playing games have some sort of market: Typically, you buy and sell equipment, and often can buy services such as sleeping at inns. Yet the way those markets work is extremely rigid and unrealistic.

(I’m of course excluding games like EVE Online that actually create real markets between players; those markets are so realistic I actually think they would provide a good opportunity for genuine controlled experiments in macroeconomics.)

The weirdest thing about in-game markets is the fact that items almost always come with a fixed price. Sometimes there is some opportunity for haggling, or some randomization between different merchants; but the notion always persists that the item has a “true price” that is being adjusted upward or downward. This is more or less the opposite of how prices actually work in real markets.

There is no “true price” of a car or a pizza. Prices are whatever buyers and sellers make them. There is a true value—the amount of real benefit that can be obtained from a good—but even this is something that varies between individuals and also changes based on the other goods being consumed. The value of a pizza is considerably higher for someone who hasn’t eaten in days than to someone who just finished eating another pizza.

There is also what is called “The Law of One Price”, but like all laws of economics, it’s like the Pirate Code, more what you’d call a “guideline”, and it only applies to a particular good in a particular market at a particular time. The Law of One Price doesn’t even say that a pizza should have the same price tomorrow as it does today, or that the same pizza can’t be sold to two different customers at two different prices; it only says that the same pizza shouldn’t have two different prices in the same place at the same time for the same customer. (It seems almost tautological, right? And yet it still fails empirically, and does so again and again. I have seen offers for the same book in the same condition posted on the same website that differed by as much as 50%.)

In well-developed capitalist markets in large First World countries, we can lull ourselves into the illusion that there is only one price for a good, because markets are highly liquid and either highly competitive or controlled by a strong and stable oligopoly that enforces a particular price across places and times. The McDonald’s Dollar Menu is a policy choice by a massive multinational corporation; it’s not what would occur naturally if those items were sold on a competitive market.

Even then, this illusion can be broken when we are faced with a large economic shock, such as the OPEC price shock in 1973 or a natural disaster like Hurricane Katrina. It also tends to be broken for illiquid goods such as real estate.

If we consider the environment in which most role-playing games take place, it’s usually a sort of quasi-medieval or quasi-Renaissance feudal society, where a given government controls only a small region and traveling between towns is difficult and dangerous. Not only should the prices of goods differ substantially between towns, the currency used should frequently differ as well. Yes, most places would accept gold and silver; but a kingdom with a stable government will generally have a currency of significant seigniorage, with coins worth considerably more than the gold used to mint them—yet the value of that seigniorage will drop off as you move further away from that kingdom and its sphere of influence.
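That last effect could be implemented with something as simple as a premium that decays with distance from the issuing kingdom; here is a hypothetical sketch, with made-up parameters:

```python
# Toy model: a coin's value is its metal content plus a seigniorage
# premium that fades with distance from the issuing kingdom.
def coin_value(gold_content, seigniorage, distance, decay=0.02):
    return gold_content + seigniorage * max(0.0, 1 - decay * distance)

for leagues in (0, 25, 50):
    print(f"{leagues} leagues out: {coin_value(10, 5, leagues):.1f} gold-equivalents")
# 0: 15.0, 25: 12.5, 50: 10.0 -- far from the kingdom, only the metal counts.
```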

Moreover, prices should be inconsistent even between traders in the same town, and extremely volatile. When a town is mostly self-sufficient and trade is only a small part of its economy, even a small shock such as a bad thunderstorm or a brief drought can yield massive shifts in prices. Shortages and gluts will be frequent, as both supply and demand are small and ever-changing.

This wouldn’t be that difficult to implement. The simplest way would just be to institute random shocks to prices that vary by place and time. A more sophisticated method would be to actually simulate supply and demand for different goods, and then have prices respond to realistic shocks (e.g. a drought makes wheat more expensive, and the price of swords suddenly skyrockets after news of an impending dragon attack). Experiments have shown that competitive market outcomes can be achieved by simulating even a dozen or so traders using very simple heuristics like “don’t pay more than you can afford” and “don’t charge less than it cost you”.
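The experiments I have in mind are in the spirit of Gode and Sunder’s “zero-intelligence” traders: agents who bid randomly, subject only to those two constraints, still produce near-competitive prices in a double auction. A minimal sketch, with purely illustrative numbers:

```python
# Zero-intelligence-constrained double auction (after Gode & Sunder):
# random bids and asks, constrained only by "don't pay more than your
# value" and "don't charge less than your cost".
import random

random.seed(42)
buyer_values = [random.uniform(50, 150) for _ in range(12)]  # willingness to pay
seller_costs = [random.uniform(50, 150) for _ in range(12)]  # willingness to accept

trades = []
for _ in range(200):                            # trading rounds
    if not buyer_values or not seller_costs:
        break
    b = random.randrange(len(buyer_values))
    s = random.randrange(len(seller_costs))
    bid = random.uniform(0, buyer_values[b])    # never bid above your value
    ask = random.uniform(seller_costs[s], 200)  # never ask below your cost
    if bid >= ask:
        trades.append((bid + ask) / 2)
        buyer_values.pop(b)                     # each trader transacts once
        seller_costs.pop(s)

print(f"{len(trades)} trades, average price {sum(trades) / len(trades):.1f}")
# With values and costs both drawn from [50, 150], prices cluster near 100,
# the competitive equilibrium -- no fixed "true price" tag required.
```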

Why don’t game designers implement this? I think there are two reasons.

The first is simply that it would be more complicated. This is a legitimate concern in many cases; I think Pokemon in particular can justify using a simple economy, given its target audience. I also agree that having more than a handful of currencies would be too much for players to keep track of; though perhaps having two or three (one for each major faction?) is still more interesting than only having one.

Also, tabletop games are inherently more limited in the amount of computation they can use, compared to video games. But for a game as complicated as, say, Skyrim, this really isn’t much of a defense. Skyrim actually simulated the daily routines of over a hundred different non-player characters; it could have been simulating markets in the background as well—in fact, it could have simply had those same non-player characters buy and sell goods with each other in a double-auction market that would automatically generate the prices that players face.

The more important reason, I think, is that game designers have a paralyzing fear of arbitrage.

I find it particularly aggravating how frequently games will set it up so that the price at which you buy and the price at which you sell are constrained so that the buying price is always higher, often as much as twice as high. This is not at all how markets work in the real world; frankly it’s only even close to true for goods like cars that rapidly depreciate. It makes sense that a given merchant will not sell you a good for less than what they would pay to buy it from you; but that only requires each individual merchant to have a well-defined willingness-to-pay and willingness-to-accept. It certainly does not require the arbitrary constraint that you can never sell something for more than what you bought it for.
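Deriving each merchant’s buy and sell prices from their own valuation gets you sensible spreads for free, with no global markup rule; here is a hypothetical sketch (the markup ranges are arbitrary):

```python
# Per-merchant bid/ask prices derived from individual valuations --
# no global "sell price is double the buy price" rule needed.
import random

random.seed(1)

class Merchant:
    def __init__(self, valuation):
        self.valuation = valuation                              # worth of the item to them
        self.buys_at = valuation * random.uniform(0.70, 0.95)   # willingness to pay
        self.sells_at = valuation * random.uniform(1.05, 1.30)  # willingness to accept

merchants = [Merchant(random.uniform(80, 120)) for _ in range(3)]
for i, m in enumerate(merchants):
    print(f"Merchant {i}: buys at {m.buys_at:.0f}, sells at {m.sells_at:.0f}")
# If one merchant's asking price is below another merchant's bid, a traveling
# player can profit from the difference -- which is just called being a merchant.
```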

In fact, I would probably even allow players who specialize in social skills to short-change and bamboozle merchants for profit, as this is absolutely something that happens in the real world, and was likely especially common under the very low levels of literacy and numeracy that prevailed in the Middle Ages.

To many game designers (and gamers), the ability to buy a good in one place, travel to another place, and sell that good for a higher price seems like cheating. But this practice is called being a merchant. That is literally what the entire retail industry does. The rules of your game should allow you to profit from activities that are in fact genuinely profitable real economic services in the real world.

I remember a similar complaint being raised against Skyrim shortly after its release, that one could acquire a pickaxe, collect iron ore, smelt it into steel, forge weapons out of it, and then sell the weapons for a sizeable profit. To some people, this sounded like cheating. To me, it sounds like being a blacksmith. This is especially true because Skyrim’s skill system allowed you to improve the quality of your smithed items over time, just like learning a trade through practice (though it ramped up too fast, as it didn’t take long to make yourself clearly the best blacksmith in all of Skyrim). Frankly, this makes far more sense than being able to acquire gold by adventuring through the countryside and slaughtering monsters or collecting lost items from caves. Blacksmiths were a large part of the medieval economy; spelunking adventurers were not. Indeed, it bothers me that there weren’t more opportunities like this; you couldn’t make your wealth by being a farmer, a vintner, or a carpenter, for instance.

Even if you managed to pull off pure arbitrage, providing no real services, such as by buying and selling between two merchants in the same town, or the same merchant on two consecutive days, that is also a highly profitable industry. Most of our financial system is built around it, frankly. If you manage to make your wealth selling wheat futures instead of slaying dragons, I say more power to you. After all, there were an awful lot of wheat-future traders in the Middle Ages, and to my knowledge no actually successful dragon-slayers.

Of course, if your game is about slaying dragons, it should include some slaying of dragons. And if you really don’t care about making a realistic market in your game, so be it. But I think that more realistic markets could actually offer a great deal of richness and immersion into a world without greatly increasing the difficulty or complexity of the game. A world where prices change in response to the events of the story just feels more real, more alive.

The ability to profit without violence might actually draw whole new modes of play to the game (as has indeed occurred with Skyrim, where a small but significant proportion of players have chosen to live out peaceful lives as traders or blacksmiths). It would also enrich the experience of more conventional players and help them recover from setbacks (if the only way to make money is to fight monsters and you keep getting killed by monsters, there isn’t much you can do; but if you have the option of working as a trader or a carpenter for awhile, you could save up for better equipment and try the fighting later).

And hey, game designers: If any of you are having trouble figuring out how to implement such a thing, my consulting fees are quite affordable.

Is a job guarantee better than a basic income?

Aug 5 JDN 2458336

In previous posts I’ve written about both the possibilities and challenges involved in creating a universal basic income. Today I’d like to address what I consider the most serious counter-argument against a basic income, an alternative proposal known as a job guarantee.

Whereas a basic income is literally just giving everyone free money, a job guarantee entails offering everyone who wants to work a job paid by the government. They’re not necessarily contradictory, but I’ve noticed a clear pattern: While basic income proponents are generally open to the idea of a job guarantee on the side, job guarantee proponents are often vociferously opposed to a basic income—even calling it “sinister”. I think the reason for this is that we see jobs as irrelevant, so we’re okay with throwing them in if you feel you must, while they see jobs as essential, so they meet any attempt to remove them with overwhelming resistance.

Where a basic income is extremely simple and could be implemented by a single act of the legislature, a job guarantee is considerably more complicated. The usual proposal for a job guarantee involves federal funding but local implementation, which is how most of our social welfare system is implemented—and why social welfare programs are so much better in liberal states like California than in conservative states like Mississippi, because California actually believes in what it’s implementing and Mississippi doesn’t. Anyone who wants a job guarantee needs to take that aspect seriously: In the places where poverty is worst, you’re offering control over the policy to the very governments that made poverty worst—and whether it is by malice or incompetence, what makes you think that won’t continue?

Another argument that I think job guarantee proponents don’t take seriously enough is the concern about “make-work”. They insist that a job guarantee is not “make-work”, but real work that’s just somehow not being done. They seem to think that there are a huge number of jobs that we could just create at the snap of a finger, which would be both necessary and useful on the one hand, and a perfect match for the existing skills of the unemployed population on the other hand. If that were the case, we would already be creating those jobs. It doesn’t even require a particularly strong faith in capitalism to understand this: If there is a profit to be made at hiring people to do something, there is probably already a business hiring people to do that. I don’t think of myself as someone with an overriding faith in capitalism, but a lot of the socialist arguments for job guarantees make me feel that way by comparison: They seem to think that there’s this huge untapped reserve of necessary work that the market is somehow failing to provide, and I’m just not seeing it.

There are public goods projects which aren’t profitable but would still be socially beneficial, like building rail lines and cleaning up rivers. But proponents of a job guarantee don’t seem to understand that these are almost all highly specialized jobs at our level of technology. We don’t need a bunch of people with shovels. We need engineers and welders and ecologists.

If you propose using people with shovels where engineers would be more efficient, that is make-work, whether you admit it or not. If you’re making people work in a less-efficient way in order to create jobs, then the jobs you are creating are fake jobs that aren’t worth creating. The line is often credited to Milton Friedman, but it was actually first said by William Aberhart in 1935:

Taking up the policy of a public works program as a solution for unemployment, it was criticized as a plan that took no account of the part that machinery played in modern construction, with a road-making machine instanced as an example. He saw, said Mr. Aberhart, work in progress at an airport and was told that the men were given picks and shovels in order to lengthen the work, to which he replied why not give them spoons and forks instead of picks and shovels if the object was to lengthen out the task.

I’m all for spending more on building rail lines and cleaning up rivers, but that’s not an anti-poverty program. The people who need the most help are precisely the ones who are least qualified to work on these projects: Children, old people, people with severe disabilities. Job guarantee proponents either don’t understand this fact or intentionally ignore it. If you aren’t finding jobs for 7-year-olds with autism and 70-year-olds with Parkinson’s disease, this program will not end poverty. And if you are, I find it really hard to believe that these are real, productive jobs and not useless “make-work”. A basic income would let the 7-year-olds stay in school and the 70-year-olds live in retirement homes—and keep them both out of poverty.

Another really baffling argument for a job guarantee over basic income is that a basic income would act as a wage subsidy, encouraging employers to reduce wages. That’s not how a basic income works. Not at all. A basic income would provide a pure income effect, necessarily increasing wage demands. People would not be as desperate for work, so they’d be more comfortable turning down unreasonable wage offers. A basic income would also incentivize some people to leave the labor force by retiring or going back to school; the reduction in labor supply would further increase wages. It is the Earned Income Tax Credit that is in many respects similar to a wage subsidy; while a basic income might look superficially similar, it would have the exact opposite effect.

One reasonable argument against a basic income is the possibility that it could cause inflation. This is something that can’t really be tested with small-scale experiments, so we really won’t know for sure until we try it. But there is reason to think that the inflation would be small, as the people removed from the labor force will largely be the ones who are least-productive to begin with. There is a growing body of empirical evidence suggesting that inflationary effects of a basic income would be small. For example, data on cash transfer programs in Mexico show only a small inflationary effect despite large reductions in poverty. The whole reason a basic income looks attractive is that automation technology is now so advanced that we really don’t need everyone to be working anymore. Productivity is so high now that a policy of universal 40-hour work weeks just doesn’t make sense in the 21st century.

Probably the best argument for a job guarantee over a basic income concerns cost. A basic income is very expensive, there’s no doubt about that; and a job guarantee could be much cheaper. That is something I take very seriously: Saving $1.5 trillion a year is absolutely a good reason. Indeed, I don’t really object to this argument; the calculations are correct. I merely think that a basic income is enough better that its higher cost is justifiable. A job guarantee can eliminate unemployment, but not poverty.

But the argument for a job guarantee that most people seem to find most compelling concerns meaning. The philosopher John Danaher expressed this one most cogently. Unemployment is an extremely painful experience for most people, far beyond what could be explained simply by their financial circumstances. Most people who win large sums of money in the lottery cut back their hours, but continue working—so work itself seems to have some value. What seems to happen is that when people lose the chance to work, they feel that they have lost a vital source of meaning in their lives.

Yet this raises two more questions:

First, would a job guarantee actually solve that problem?
Second, are there ways we could solve it under a basic income?

With regard to the first question, I want to re-emphasize the fact that a large proportion of these guaranteed jobs necessarily cannot be genuinely efficient production. If efficient production would have created these jobs, we would most likely already have created them. Our society does not suffer from an enormous quantity of necessary work that could be done with the skills already possessed by the unemployed population, which is somehow not getting done—indeed, it is essentially impossible for a capitalist economy with a highly-liquid financial system to suffer such a malady. If the work is so valuable, someone will probably take out a loan to hire someone to do it. If that’s not happening, either the unemployed people don’t have the necessary skills, or the work really can’t be all that productive. There are some public goods projects that would be beneficial but aren’t being done, but that’s a different problem, and the match between the public goods projects that need to be done and the skills of the unemployed population is extremely poor. Displaced coal miners aren’t useful for maintaining automated photovoltaic factories. Truckers who get replaced by robot trucks won’t be much good for building maglev rails.

With this in mind, it’s not clear to me that people would really be able to find much meaning in a guaranteed job. You can’t be fired, so the fact that you have the job doesn’t mean anyone is impressed by the quality of your work. Your work wasn’t actually necessary, or the private sector would already have hired someone to do it. The government went out of its way to find a job that precisely matched what you happen to be good at, regardless of whether that job was actually accomplishing anything to benefit society. How is that any better than not working at all? You are spending hours of drudgery to accomplish… what, exactly? If our goal was simply to occupy people’s time, we could do that with Netflix or video games.

With regard to the second question, note that a basic income is quite different from other social welfare programs in that everyone gets it. So it’s very difficult to attach a social stigma to receiving basic income payments—it would require attaching the stigma to literally everyone. Much of the lost meaning, I suspect, from being unemployed comes from the social stigma attached.

Now, it’s still possible to attach social stigma to people who only get the basic income—there isn’t much we can do to prevent that. But in the worst-case scenario, this means unemployed people get the same stigma as before but more money. Moreover, it’s much harder to detect a basic income recipient than, say, someone who eats at a soup kitchen or buys food using EBT; since it goes in your checking account, all everyone else sees is you spending money from your debit card, just like everyone else. People who know you personally would probably know; but people who know you personally are also less likely to destroy your well-being by imposing a high stigma. Maybe they’ll pressure you to get off the couch and get a job, but they’ll do so because they genuinely want to help you, not because they think you are “one of those lazy freeloaders”.

And, as BIEN points out, think about retired people: They don’t seem to be so unhappy. Being on basic income is more like being retired than like being unemployed. It’s something everyone gets, not some special handout for “those people”. It’s permanent, so it’s not like you need to scramble to get a job before it goes away. You just get money automatically, so you don’t have to navigate a complex bureaucracy to get it. Controlling for income, retired people don’t seem to be any less happy than working people—so maybe work doesn’t actually provide all that much meaning after all.

I guess I can’t rule out the possibility that people need jobs to find meaning in their lives, but I both hope and believe that this is not generally the case. You can find meaning in your family, your friends, your community, your hobbies. You can still work even if you don’t need to work for a living: Build a shed, mow your lawn, tune up your car, upgrade your computer, write a story, learn a musical instrument, or try your hand at painting.

If you need to be taking orders from a corporation five days a week in order to have meaning in your life, you have bigger problems. I think what has happened to many people is that employment has so drained their lives of the real sources of meaning that they cling to it as the only thing they have left. But in fact work is not the cure to your ennui—it is the cause of it. Finally being free of the endless toil that has plagued humanity since the dawn of our species will give you the chance to reconnect with what really matters in life. Show your children that you love them in person, to their faces, instead of in this painfully indirect way of “providing for” them by going to work every day. Find ways to apply your skills in volunteering or creating works of art, instead of in endless drudgery for the profit of some faceless corporation.

How (not) to destroy an immoral market

Jul 29 JDN 2458329

In this world there are people of primitive cultures, with a population that is slowly declining, trying to survive a constant threat of violence in the aftermath of colonialism. But you already knew that, of course.

What you may not have realized is that some of these people are actively hunted by other people, slaughtered so that their remains can be sold on the black market.

I am referring of course to elephants. Maybe those weren’t the people you first had in mind?

Elephants are not human in the sense of being Homo sapiens; but as far as I am concerned, they are people in a moral sense.

Elephants take as long to mature as humans, and spend most of their childhood learning. They are born with brains only 35% of the size of their adult brains, much as we are born with brains 28% the size of our adult brains. Their encephalization quotients range from about 1.5 to 2.4, comparable to chimpanzees.

Elephants have problem-solving intelligence comparable to chimpanzees, cetaceans, and corvids. Elephants can pass the “mirror test” of self-identification and self-awareness. Individual elephants exhibit clearly distinguishable personalities. They exhibit empathy toward humans and other elephants. They can think creatively and develop new tools.

Elephants distinguish individual humans or elephants by sight or by voice, comfort each other when distressed, and above all mourn their dead. The kind of mourning behaviors elephants exhibit toward the remains of their dead family members have only been observed in humans and chimpanzees.

On a darker note, elephants also seek revenge. In response to losing loved ones to poaching or collisions with trains, elephants have orchestrated organized counter-attacks against human towns. This is not a single animal defending itself, as almost any will do; this is a coordinated act of vengeance after the fact. Once again, we have only observed similar behaviors in humans, great apes, and cetaceans.

Huffington Post backed off and said “just kidding” after asserting that elephants are people—but I won’t. Elephants are people. They do not have an advanced civilization, to be sure. But as far as I am concerned they display all the necessary minimal conditions to be granted the fundamental rights of personhood. Killing an elephant is murder.

And yet, the ivory trade continues to be profitable. Most of this is black-market activity, though it was legal in some places until very recently; China only restored their ivory trade ban this year, and Hong Kong’s ban will not take full effect until 2021. Some places are backsliding: A proposal (currently on hold) by the US Fish and Wildlife Service under the Trump administration would also legalize some limited forms of ivory trade.

With this in mind, I can understand why people would support the practice of ivory-burning, symbolically and publicly destroying ivory by fire so that no one can buy it. Two years ago, Kenya organized a particularly large ivory-burning that set ablaze 105 tons of elephant tusk and 1.35 tons of rhino horn.

But as an economist, when I first learned about ivory-burning, it seemed like a really, really bad idea.

Why? Supply and demand. By destroying supply, you have just raised the market price of ivory. You have therefore increased the market incentives for poaching elephants and rhinos.

Yet it turns out I was wrong about this, as were many other economists. I looked at the empirical research, and changed my mind substantially. Ivory-burning is not such a bad idea after all.

Here was my reasoning before: If I want to reduce the incentives to produce something, what do I need to do? Lower the price. How do I do that? I need to increase the supply. Economists have made several proposals for how to do that, and until I looked at the data I would have expected them to work; but they haven’t.

The best way to increase supply is to create synthetic ivory that is cheap and very difficult to tell apart from the real thing. This has been done, but it didn’t work. For some reason, sellers try to hide the expensive real ivory in with the cheap synthetic ivory. I admit I actually have trouble understanding this; if you can’t sell it at full price, why even bother with the illegal real ivory? Maybe their customers have methods of distinguishing the two that the regulators don’t? If so, why aren’t the regulators using those methods? Another concern with increasing the supply of ivory is that it might reduce the stigma of consuming ivory, thereby also increasing the demand.

A similar problem has arisen with so-called “ghost ivory”; for obvious reasons, existing ivory products were excluded from the ban imposed in 1947, lest the government be forced to confiscate millions of billiard balls and thousands of pianos. Yet poachers have learned ways to hide new, illegal ivory and sell it as old, legal ivory.

Another proposal was to organize “sustainable ivory harvesting”, which based on past experience with similar regulations is unlikely to be enforceable. Moreover, this is not like sustainable wood harvesting, where our only concern is environmental. I for one care about the welfare of individual elephants, and I don’t think they would want to be “harvested”, sustainably or otherwise.

There is one way of doing “sustainable harvesting” that might not be so bad for the elephants, which would be to set up a protected colony of elephants, help them to increase their population, and then when elephants die of natural causes, take only the tusks and sell those as ivory, stamped with an official seal as “humanely and sustainably produced”. Even then, elephants are among a handful of species that would be offended by us taking their ancestors’ remains. But if it worked, it could save many elephant lives. The bigger problem is how expensive such a project would be, and how long it would take to show any benefit; elephant lifespans are about half as long as ours (except in zoos, where their mortality rate is much higher!), so a policy that might conceivably solve a problem in 30 to 40 years doesn’t really sound so great. More detailed theoretical and empirical analysis has made this clear: you just can’t get ivory fast enough to meet existing demand this way.

In any case, China’s ban on all ivory trade had an immediate effect at dropping the price of ivory, which synthetic ivory did not. Before that, strengthened regulations in the US (particularly in New York and California) had been effective at reducing ivory sales. The CITES treaty in 1989 that banned most international ivory trade was followed by an immediate increase in elephant populations.

The most effective response to ivory trade is an absolutely categorical ban with no loopholes. To fight “ghost ivory”, we should remove exceptions for old ivory, offering buybacks for any antiques with a verifiable pedigree and a brief period of no-penalty surrender for anything with no such records. The only legal ivory must be for medical and scientific purposes, and its sourcing records must be absolutely impeccable—just as we do with human remains.

Even synthetic ivory must be banned, at least if it’s convincing enough that real ivory could be hidden in it. You can make something you call “synthetic ivory” that serves a similar consumer function, but it must be different enough that it can be easily verified at customs inspections.

We must give no quarter to poachers; Kenya was right to impose a life sentence for aggravated poaching. The Tanzanian proposal to “shoot to kill” was too extreme; summary execution is never acceptable. But if indeed someone currently has a weapon pointed at an elephant and refuses to drop it, I consider it justifiable to shoot them, just as I would if that weapon were aimed at a human.

The need for a categorical ban is what makes the current US proposal dangerous. The particular exceptions it carves out are not all that large, but the fact that it carves out exceptions at all makes enforcement much more difficult. To his credit, Trump himself doesn’t seem very keen on the proposal, which may mean that it is dead in the water. I don’t get to say this often, but so far Trump seems to be making the right choice on this one.

Though the economic theory predicted otherwise, the empirical data is actually quite clear: The most effective way to save elephants from poaching is an absolutely categorical ban on ivory.

Ivory-burning is a signal of commitment to such a ban. Any ivory we find being sold, we will burn. Whoever was trying to sell it will lose their entire investment. Find more, and we will burn that too.

The housing shortage is an international phenomenon

Jul 1 JDN 2458301

My posts for the next couple of weeks are going to be shorter, since I am in Europe and will be either on vacation (at the time I write this) or busy with a conference and a workshop (by the time this post goes live).

For today, I’d just like to point out that the crisis of extremely high housing prices is not unique to California or even the United States. In some respects it may even be worse elsewhere.

San Francisco remains especially bad; the median price for a home in San Francisco is a horrifying $1.6 million.

But London (where I am at the time of writing) is also terrible; the median price for a home in London recently fell to 430,000 pounds (about $600,000 at current exchange rates). The most expensive flat—not house, flat—sold a couple years ago for the mind-boggling sum of 150 million pounds (about $200 million). If I had $200 million, I would definitely not use it to buy a flat. At that point it would literally be cheaper to buy a yacht with a helipad, park it in the harbor, and commute by helicopter. Here’s a yacht with a helipad for only $20 million, and a helicopter to go with it for $6 million. That leaves $174 million; keep $20 million in stocks to be independently wealthy for the rest of your life, and then donate the remaining $154 million to charity.

The median price of a house in Vancouver stands at 1.1 million Canadian dollars, about $830,000 US.

A global comparison finds that on a per-square-meter basis, the most expensive real estate in the world is in Monaco, where $1 million US will only buy you 15 square meters. The remaining cities in the top 10 are Hong Kong, London, Singapore, Geneva, New York, Sydney, Paris, Moscow, and Shanghai.

There is astonishing variation in the level of housing prices, even within countries. Some of the most affordable markets in the US (like San Antonio and Oklahoma City) cost as little as $80 per square foot; that means that $1 million would buy you 1,160 square meters. That’s not an error; real estate in Monaco is literally 77 times more expensive than real estate in Oklahoma City. 15 square meters is a studio apartment; 1,160 square meters is a small mansion. Just comparing within the US, the price per square foot in San Francisco is over $1,120, 14 times as high as Oklahoma City. $1 million in San Francisco will buy you about 80 square meters, which is at least a two or three-bedroom house.
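The conversion is easy enough to check; here is a quick sketch, using the prices per square foot quoted above:

```python
# How many square meters does $1 million buy at a given price per square foot?
SQFT_TO_M2 = 0.09290304   # one square foot in square meters

def m2_per_million(price_per_sqft):
    return 1_000_000 / price_per_sqft * SQFT_TO_M2

print(f"Oklahoma City (~$80/sq ft):    {m2_per_million(80):,.0f} m^2")
print(f"San Francisco (~$1,120/sq ft): {m2_per_million(1120):,.0f} m^2")
# ~1,161 m^2 versus ~83 m^2: a small mansion versus a modest house.
```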

This says to me that policy choices matter. It may not be possible to make San Francisco as cheap as Oklahoma City—most people would definitely rather live in San Francisco, so demand is always going to be higher there. But I don’t think it’s very plausible to say that housing in San Francisco is just inherently 14 times as expensive to construct as housing in Oklahoma City. If it’s really that much more expensive to construct (and that may not even be the issue—this could be more a matter of oligopoly than high costs), it must be at least in part because of something the local and state governments are doing differently. Cross-national comparisons underscore that point even further: The geography of Hong Kong and Taiwan is not that different, but housing prices in Taiwan are not nearly as high.

What exactly are different cities (and countries) doing differently that has such large effects on housing prices? That’s something I’ll try to figure out in future posts.

The inherent atrocity of “border security”

Jun 24 JDN 2458294

By now you are probably aware of the fact that a new “zero tolerance” border security policy under the Trump administration has resulted in 2,000 children being forcibly separated from their parents by US government agents. If you weren’t, here are a variety of different sources all telling the same basic story of large-scale state violence and terror.

Make no mistake: This is an atrocity. The United Nations has explicitly condemned this human rights violation—to which Trump responded by making an unprecedented threat of withdrawing unilaterally from the UN Human Rights Council.

#ThisIsNotNormal, and Trump was everything we feared—everything we warned—he would be: Corrupt, incompetent, cruel, and authoritarian.

Yet Trump’s border policy differs mainly in degree, not kind, from existing US border policy. There is much more continuity here than most of us would like to admit.

The Trump administration has dramatically increased “interior removals”, the most obviously cruel acts, where ICE agents break into the houses of people living in the US and take them away. Don’t let the cold language fool you; this is literally people with guns breaking into your home and kidnapping members of your family. This is characteristic of totalitarian governments, not liberal democracies.

And yet, the Obama administration actually holds the record for most deportations (though only because they included “at-border deportations” which other administrations did not). A major policy change by George W. Bush started this whole process of detaining people at the border instead of releasing them and requiring them to return for later court dates.

I could keep going back; US border enforcement has gotten more and more aggressive as time goes on. US border security staffing has quintupled since just 1990. There was a time when the United States was a land of opportunity that welcomed “your tired, your poor, your huddled masses”; but that time is long past.

And this, in itself, is a human rights violation. Indeed, I am convinced that border security itself is inherently a human rights violation, always and everywhere; future generations will not praise us for being more restrained than Trump’s abject and intentional cruelty, but condemn us for acting under the same basic moral framework that justified it.

There is an imaginary line in the sand just a hundred miles south of where I sit now. On one side of the line, a typical family makes $66,000 per year. On the other side, a typical family makes only $20,000. On one side of the line, life expectancy is 81 years; on the other, 77. This means that over their lifetime, someone on this side of the line can expect to make over one million dollars more than they would if they had lived on the other side. Step across this line, get a million dollars; it sounds ridiculous, but it’s an empirical fact.

This would be bizarre enough by itself; but now consider that on that line there are fences, guard towers, and soldiers who will keep you from crossing it. If you have appropriate papers, you can cross; but if you don’t, they will arrest and detain you, potentially for months. This is not how we treat you if you are carrying contraband or have a criminal record. This is how we treat you if you don’t have a passport.

How can we possibly reconcile this with the principles of liberal democracy? Philosophers have tried, to be sure. Yet they invariably rely upon some notion that the people who want to cross our border are coming from another country where they were already granted basic human rights and democratic representation—which is almost never the case. People who come here from the UK or the Netherlands generally have the proper visas. Even people who come here from China usually have visas—though China is by no means a liberal democracy. It’s people who come here from Haiti and Nicaragua who don’t—and these are some of the most corrupt and impoverished nations in the world.

As I said in an earlier post, I was not offended that Trump characterized countries like Haiti and Syria as “shitholes”. By any objective standard, that is accurate; these countries are terrible, terrible places to live. No, what offends me is that he thinks this gives us a right to turn these people away, as though the horrible conditions of their country somehow “rub off” on them and make them less worthy as human beings. On the contrary, we have a word for people who come from “shithole” countries seeking help, and that word is “refugee”.

Under international law, “refugee” has a very specific legal meaning, under which most immigrants do not qualify. But in a broader moral sense, almost every immigrant is a refugee. People don’t uproot themselves and travel thousands of miles on a whim. They are coming here because conditions in their home country are so bad that they simply cannot tolerate them anymore, and they come to us desperately seeking our help. They aren’t asking for handouts of free money—illegal immigrants are a net gain for our fiscal system, paying more in taxes than they receive in benefits. They are looking for jobs, and willing to accept much lower wages than the workers already here—because those wages are still dramatically higher than what they had where they came from.

Of course, that does potentially mean they are competing with local low-wage workers, doesn’t it? Yes—but not as much as you might think. There is only a very weak relationship between higher immigration and lower wages (some studies find none at all!); even at the largest plausible estimates, the gain in welfare for the immigrants is dramatically higher than the loss in welfare for the low-wage workers who are already here. It’s not even a question of valuing them equally; as long as you value an immigrant at least one tenth as much as a native-born citizen, the equation comes out favoring more immigration.

This is for two reasons: One, most native-born workers already are unwilling to do the jobs that most immigrants do, such as picking fruit and laying masonry; and two, increased spending by immigrants boosts the local economy enough to compensate for any job losses.
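Here is the back-of-the-envelope arithmetic behind that “one tenth” claim, using the income figures from earlier in this post; the assumed wage loss is a deliberately pessimistic assumption of mine, not an empirical estimate:

```python
# Crude welfare arithmetic behind the "one tenth" claim. The wage-loss
# figure is an assumption for illustration, not an estimate.
immigrant_gain = 66_000 - 20_000   # annual income gain from crossing the border
native_loss = 4_000                # assumed total annual wage loss imposed on natives

for weight in (1.0, 0.5, 0.1):
    net = weight * immigrant_gain - native_loss
    print(f"Valuing the immigrant at {weight:.0%}: net welfare {net:+,.0f} per year")
# Even at a 10% weight, the net effect remains positive (+600) under
# these assumptions.
```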


But even aside from the economic impacts, what is the moral case for border security?

I have heard many people argue that “It’s our home, we should be able to decide who lives here.” First of all, there are some major differences between letting someone live in your home and letting someone come into your country. I’m not saying we should allow immigrants to force themselves into people’s homes, only that we shouldn’t arrest them when they try to cross the border.

But even if I were to accept the analogy, if someone were fleeing oppression by an authoritarian government and asked to live in my home, I would let them. I would help hide them from the government if they were trying to escape persecution. I would even be willing to house people simply trying to escape poverty, as long as it were part of a well-organized program designed to ensure that everyone actually gets helped and the burden on homeowners and renters was not too great. I wouldn’t simply let homeless people come live here, because that creates all sorts of coordination problems (I can only fit so many, and how do I prioritize which ones?); but I’d absolutely participate in a program that coordinates placement of homeless families in apartments provided by volunteers. (In fact, maybe I should try to petition for such a program, as Southern California has a huge homelessness rate due to our ridiculous housing prices.)

Many people seem to fear that immigrants will bring crime, but actually they reduce crime rates. It’s really kind of astonishing how much less crime immigrants commit than locals. My hypothesis is that immigrants are a self-selected sample; the kind of person willing to move thousands of miles isn’t the kind of person who commits a lot of crimes.

I understand wanting to keep out terrorists and drug smugglers, but there are already plenty of terrorists and drug smugglers here in the US; if we are unwilling to set up border security between California and Nevada, I don’t see why we should be setting it up between California and Baja California. But okay, fine, we can keep the customs agents who inspect your belongings when you cross the border. If someone doesn’t have proper documentation, we can even detain and interrogate them—for a few hours, not a few months. The goal should be to detect dangerous criminals and nothing else. Once we are confident that you have not committed any felonies, we should let you through—frankly, we should give you a green card. We should only be willing to detain someone at the border for the same reasons we would be willing to detain a citizen who already lives here—that is, probable cause for an actual crime. (And no, you don’t get to count “illegal border crossing” as a crime, because that’s begging the question. By the same logic I could justify detaining people for jaywalking.)

A lot of people argue that restricting immigration is necessary to “preserve local culture”; but I’m not even sure that this is a goal sufficiently important to justify arresting and detaining people, and in any case, that’s really not how culture works. Culture is not advanced by purism and stagnation, but by openness and cross-pollination. From anime to pizza, many of our most valued cultural traditions would not exist without interaction across cultural boundaries. Introducing more Spanish speakers into the US may make us start saying no problemo and vamonos, but it’s not going to destroy liberal democracy. If you value culture, you should value interactions across different societies.

Most importantly, think about what you are trying to justify. Even if we stop doing Trump’s most extreme acts of cruelty, we are still talking about using military force to stop people from crossing an imaginary line. ICE basically treats people the same way the SS did. “Papers, please” isn’t something we associate with free societies—it’s characteristic of totalitarianism. We are so accustomed to border security (or so ignorant of its details) that we don’t see it for the atrocity it so obviously is.

National borders function very much like feudal privilege. We have our "birthright", which grants us all sorts of benefits and special privileges—literally tripling our incomes and extending our lives. We did nothing to earn this privilege. If anything, we show ourselves to be less deserving (e.g. by committing more crimes). And we use the government to defend our privilege by force.

Are people born on the other side of the line less human? Are they less morally worthy? On what grounds do we point guns at them and lock them away for the “crime” of wanting to live here?

What Trump is doing right now is horrific. But it is not that much more horrific than what we were already doing. My hope is that this will finally open our eyes to the horrors that we had been participating in all along.

What we could, what we should, and what we must

May 27 JDN 2458266

In one of the most famous essays in all of ethical philosophy, Peter Singer argued that we are morally obligated to give so much to charity that we would effectively reduce ourselves to a standard of living only slightly better than the poverty our donations seek to prevent. His argument is a surprisingly convincing one, especially for such a radical proposition. Indeed, one of the core activities of the Effective Altruism movement has basically been finding ways to moderate Singer's argument without giving up on its core principles, because it's so obvious both that we ought to do much more to help people around the world and that there's no way we're ever going to do what that argument actually asks of us.

The most cost-effective charities in the world can save a human life for an average cost of under $4,000. Singer's basic maneuver is quite simple: If you know that you could save someone's life for $4,000, you have $4,000 to spend, and instead you spend that $4,000 on something else, aren't you saying that whatever you did spend it on was more important than saving that person's life? And is that really something you believe?

But if you think a little more carefully, it becomes clear that things are not quite so simple. You aren’t being paid $4,000 to kill someone, first of all. If you were willing to accept $4,000 as sufficient payment to commit a murder, you would be, quite simply, a monster. Implicitly the “infinite identical psychopath” of neoclassical rational agent models would be willing to do such a thing, but very few actual human beings—even actual psychopaths—are that callous.

Obviously, we must refrain from murdering people, even for amounts far in excess of $4,000. If you were offered the chance to murder someone for $4 billion, I can understand why you would be tempted to do such a thing. Think of what you could do with all that money! Not only would you and everyone in your immediate family be independently wealthy for life, you could donate billions of dollars to charity and save as many as a million lives. What's one life for a million? Even then, I have a strong intuition that you shouldn't commit this murder—but I have never been able to find a compelling moral argument for why. The best I've been able to come up with is a sort of Kantian notion: What if everyone did this?

Since the most plausible scenario is that the $4 billion comes from existing wealth, all those murders would simply be transferring wealth around, from unknown sources. If you stipulate where the wealth comes from, the dilemma can change quite a bit.

Suppose for example the $4 billion is confiscated from Bashar Al-Assad. That would be in itself a good thing, lessening the power of a genocidal tyrant. So we need to add that to the positive side of the ledger. It is probably worth killing one innocent person just to undermine Al-Assad’s power; indeed, the US Air Force certainly seems to think so, as they average more than one civilian fatality every day in airstrikes.

Now suppose the wealth was extracted by clever financial machinations that took just a few dollars out of every bank account in America. This would be in itself a bad thing, but perhaps not a terrible thing, especially since we're planning on giving most of it to UNICEF. Those people should have given it anyway, right? This sounds like a pretty good movie, actually: basically a cyberpunk Robin Hood.

Next, suppose it was obtained by stealing the life savings of a million poor people in Africa. Now the method of obtaining the money is so terrible that it’s not clear that funneling it through UNICEF would compensate, even if you didn’t have to murder someone to get it.

Finally, suppose that the wealth is actually created anew—not printed money from the Federal Reserve, but some new technology that will increase the world’s wealth by billions of dollars yet requires the death of an innocent person to create. In this scenario, the murder has become something more like the inherent risk in human subjects biomedical research, and actually seems justifiable. And indeed, that fits with the Kantian answer, for if we all had the chance to kill one person in order to create something that would increase the wealth of the world by $4 billion, we could turn this planet into a post-scarcity utopia within a generation for fewer deaths than are currently caused by diabetes.

Anyway, my point here is that the detailed context of a decision actually matters a great deal. We can’t simply abstract away from everything else in the world and ask whether the money is worth the life.

When we consider this broader context with regard to the world's most cost-effective charities, it becomes apparent that a world in which a small proportion of very dedicated people give huge shares of their income to charity is not the kind of world we want to see.

If I actually gave so much that I equalized my marginal utility of wealth to that of a child dying of malaria in Ghana, I would have to donate over 95% of my income—and well before that point, I would be homeless and impoverished. This actually seems penny-wise and pound-foolish even from the perspective of total altruism: If I stop paying rent, it gets a lot harder for me to finish my doctorate and become a development economist. And even if I never donated another dollar, the world would be much better off with one more good development economist than with even another $23,000 to the Against Malaria Foundation. Once you factor in the higher income I’ll have (and proportionately higher donations I’ll make), it’s obviously the wrong decision for me to give 95% of $25,000 today rather than 10% of $70,000 every year for the next 20 years after I graduate.
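
To make this comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The figures are the ones in the paragraph above; the $70,000 post-graduation income and the 20-year horizon are the post's own illustrative assumptions, not forecasts.

```python
# Back-of-the-envelope comparison: give 95% of a grad stipend now,
# versus 10% of a higher income every year after graduating.
grad_income = 25_000       # current grad-student income ($/year)
future_income = 70_000     # assumed post-graduation income ($/year)
career_years = 20          # assumed giving horizon after graduation
cost_per_life = 4_000      # approximate cost per life saved (AMF estimate)

give_95_now = 0.95 * grad_income                      # one-time extreme donation
give_10_later = 0.10 * future_income * career_years   # sustained modest giving

print(f"95% of grad income now:  ${give_95_now:,.0f}  (~{give_95_now / cost_per_life:.0f} lives)")
print(f"10% for 20 years after:  ${give_10_later:,.0f} (~{give_10_later / cost_per_life:.0f} lives)")
# 95% of grad income now:  $23,750  (~6 lives)
# 10% for 20 years after:  $140,000 (~35 lives)
```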

But the optimal amount for me to donate from that perspective is whatever the maximum would be that I could give without jeopardizing my education and career prospects. This is almost certainly more than I am presently giving. Exactly how much more is actually not all that apparent: It’s not enough to say that I need to be able to pay rent, eat three meals a day, and own a laptop that’s good enough for programming and statistical analysis. There’s also a certain amount that I need for leisure, to keep myself at optimal cognitive functioning for the next several years. Do I need that specific video game, that specific movie? Surely not—but if I go the next ten years without ever watching another movie or playing another video game, I’m probably going to be in trouble psychologically. But what exactly is the minimum amount to keep me functioning well? And how much should I be willing to spend attending conferences? Those can be important career-building activities, but they can also be expensive wastes of time.

Singer acts as though jeopardizing your career prospects is no big deal, but this is clearly wrong: The harm isn’t just to your own well-being, but also to your productivity and earning power that could have allowed you to donate more later. You are a human capital asset, and you are right to invest in yourself. Exactly how much you should invest in yourself is a much harder question.

Such calculations are extremely difficult to do. There are all sorts of variables I simply don't know, and don't have any clear way of finding out. It's not a good sign for an ethical theory when even someone with years of education and expertise on specifically that topic still can't figure out the answer. Ethics is supposed to be something we can apply to everyone.

So I think it’s most helpful to think in those terms: What could we apply to everyone? What standard of donation would be high enough if we could get everyone on board?

World poverty is rapidly declining. The direct poverty gap at the UN poverty line of $1.90 per day is now only $80 billion. Realistically, we couldn’t simply close that gap precisely (there would also be all sorts of perverse incentives if we tried to do it that way). But the standard estimate that it would take about $300 billion per year in well-targeted spending to eliminate world hunger is looking very good.

How much would each person in the middle class or above in the US or the EU have to give in order to raise this much?

89% of US income is received by the top 60% of households (who I would say are unambiguously "middle class or above"). Income inequality is not as extreme within the EU, so the proportion of income received by the top 60% there is more like 75%.

89% of US GDP plus 75% of EU GDP is altogether about $29 trillion per year. This means that in order to raise $300 billion, each person in the middle class or above would need to donate just over one percent of their income.
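
The arithmetic is easy to check. Here is a minimal sketch in Python, using only the figures quoted above (the $29 trillion income base and the $300 billion target are the post's own estimates):

```python
# Rough check of the "just over 1%" claim, using the figures in the text.
middle_class_income = 29e12  # ~89% of US GDP + ~75% of EU GDP, dollars/year
target_spending = 300e9      # well-targeted anti-poverty spending, dollars/year

required_share = target_spending / middle_class_income
print(f"Required donation rate: {required_share:.2%}")  # -> 1.03%
```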

Not 95%. Not 25%. Not even 10%. Just 1%. That would be enough.

Of course, more is generally better—at least until you start jeopardizing your career prospects. So by all means, give 2% or 5% or even 10%. But I really don’t think it’s helpful to make people feel guilty about not giving 95% when all we really needed was for everyone to give 1%.

There is an important difference between what we could do, what we should do, and what we must do.

What we must do are moral obligations so strong they are essentially inviolable: We must not murder people. There may be extreme circumstances where exceptions can be made (such as collateral damage in war), and we can always come up with hypothetical scenarios that would justify almost anything, but for the vast majority of people the vast majority of time, these ethical rules are absolutely binding.

What we should do are moral obligations that are strong enough to be marks against your character if you break them, but not so absolutely binding that you have to be a monster not to follow them. This is where I put donating at least 1% of your income. (This is also where I put being vegetarian, but perhaps that is a topic for another time.) You really ought to do it, and you are doing something wrong if you don't—but most people don't, and failing to give doesn't make you a terrible person.

This latter category is in part socially constructed, based on the norms people actually follow. Today, slavery is obviously a grave crime, and to be a human trafficker who participates in it you must be a psychopath. But two hundred years ago, things were somewhat different: Slavery was still wrong, yes, but it was quite possible to be an ordinary person who was generally an upstanding citizen in most respects and yet still own slaves. I would still condemn people who owned slaves back then, but not nearly as forcefully as I would condemn someone who owned slaves today. Two hundred years from now, perhaps vegetarianism will move up a category: The norm will be that everyone eats only plants, and someone who went out of their way to kill and eat a pig would have to be a psychopath. Eating meat is already wrong today—but it will be more wrong in the future. I’d say the same about donating 1% of your income, but actually I’m hoping that by two hundred years from now there will be no more poverty left to eradicate, and donation will no longer be necessary.

Finally, there is what we could do—supererogatory, even heroic actions of self-sacrifice that would make the world a better place, but cannot be reasonably expected of us. This is where donating 95% or even 25% of your income would fall. Yes, absolutely, that would help more people than donating 1%; but you don’t owe the world that much. It’s not wrong for you to contribute less than this. You don’t need to feel guilty for not giving this much.

But I do want to make you feel guilty if you don't give at least 1%. Don't tell me you can't. You can. If your income is $30,000 per year, that's $300 per year. If you needed that much for a car repair, or dental work, or fixing your roof, you'd find a way to come up with it. No one in the First World middle class is that liquidity-constrained. It is true that half of Americans say they couldn't come up with $400 in an emergency, but I frankly don't believe it. (I believe it for the bottom 25% or so, who are actually poor; but not half of Americans.) If you have even one credit card that's not maxed out, you can do this—and even if a card is maxed out, you can probably call and get your limit raised. There is something you could cut out of your spending that would get you back 1% of your annual income. I don't know what it is, necessarily: Restaurants? Entertainment? Clothes? But I'm not asking you to give a third of your income—I'm asking you to give one penny out of every dollar.

I give considerably more than that; my current donation target is 8%, and I'm planning on raising it to 10% or more once I get a high-paying job. I live on a grad student salary, which is less than the median personal income in the US. So I know it can be done. But I am very intentionally not asking you to give this much; that would be above and beyond the call of duty. I'm only asking you to give 1%.