The Prisoner’s Dilemma

JDN 2457348
When this post officially goes live, it will have been one full week since I launched my Patreon, on which I’ve already received enough support to be more than halfway to my first funding goal. After this post, I will be far enough ahead in posting that I can release every post one full week ahead of time for my Patreon patrons (can I just call them Patreons?).

It’s actually fitting that today’s topic is the Prisoner’s Dilemma, for Patreon is a great example of how real human beings can find solutions to this problem even if infinite identical psychopaths could not.

The Prisoner’s Dilemma is the most fundamental problem in game theory—arguably the reason game theory is worth bothering with in the first place. There is a standard story that people generally tell to set up the dilemma, but honestly I find that it obscures more than it illuminates. You can find it in the Wikipedia article if you’re interested.

The basic idea of the Prisoner’s Dilemma is that there are many times in life when you have a choice: You can do the nice thing and cooperate, which costs you something, but benefits the other person more; or you can do the selfish thing and defect, which benefits you but harms the other person more.

The game can basically be defined as four possibilities: If you both cooperate, you each get 1 point. If you both defect, you each get 0 points. If you cooperate when the other player defects, you lose 1 point while the other player gets 2 points. If you defect when the other player cooperates, you get 2 points while the other player loses 1 point.

                 P2 Cooperate    P2 Defect
P1 Cooperate       +1, +1         -1, +2
P1 Defect          +2, -1          0, 0

These games are nonzero-sum, meaning that the total amount of benefit or harm incurred is not constant; it depends upon what players choose to do. In my example, the total benefit varies from +2 (both cooperate) to +1 (one cooperates, one defects) to 0 (both defect).

The answer which is “neat, plausible, and wrong” (to use Mencken’s oft-misquoted turn of phrase) is to reason this way: If the other player cooperates, I can get +1 if I cooperate, or +2 if I defect. So I should defect. If the other player defects, I can get -1 if I cooperate, or 0 if I defect. So I should defect. In either case I defect, therefore I should always defect.
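This dominance reasoning can be checked mechanically. Here is a minimal Python sketch using the payoff table from earlier (the dictionary encoding is just one convenient representation, not anything standard):

```python
# The flawed one-shot reasoning made explicit: whichever move the other
# player makes, defecting earns exactly one more point, so "D" is the
# best response either way.
PAYOFF = {("C", "C"): 1, ("C", "D"): -1,
          ("D", "C"): 2, ("D", "D"): 0}  # (my move, their move) -> my points

best_response = {their_move: max(("C", "D"),
                                 key=lambda my_move: PAYOFF[(my_move, their_move)])
                 for their_move in ("C", "D")}
print(best_response)  # defection is the best response to either move
```

Both entries come out "D", which is exactly the argument above: in the one-shot game, defection strictly dominates.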

The problem with this argument is that your behavior affects the other player. You can’t simply hold their behavior fixed when making your choice. If you always defect, the other player has no incentive to cooperate, so you both always defect and get 0. But if you credibly promise to cooperate every time they also cooperate, you create an incentive to cooperate that can get you both +1 instead.

If there were a fixed amount of benefit, the game would be zero-sum, and cooperation would always be damaging yourself. In zero-sum games, the assumption that acting selfishly maximizes your payoffs is correct; we could still debate whether it’s necessarily more rational (I don’t think it’s always irrational to harm yourself to benefit someone else an equal amount), but it definitely is what maximizes your money.

But in nonzero-sum games, that assumption no longer holds; we can both end up better off by cooperating than we would have been if we had both defected.
Below is a very simple zero-sum game (notice how indeed in each outcome, the payoffs sum to zero; any zero-sum game can be written so that this is so, hence the name):

                      Player 2 cooperates   Player 2 defects
Player 1 cooperates         0, 0                -1, +1
Player 1 defects           +1, -1                0, 0

In that game, there really is no reason for you to cooperate; you make yourself no better off if they cooperate, and you give them a strong incentive to defect and make you worse off. But that game is not a Prisoner’s Dilemma, even though it may look superficially similar.

The real world, however, is full of variations on the Prisoner’s Dilemma. This sort of situation is fundamental to our experience; it probably happens to you multiple times every single day.
When you finish eating at a restaurant, you could pay the bill (cooperate) or you could dine and dash (defect). When you are waiting in line, you could quietly take your place in the queue (cooperate) or you could cut ahead of people (defect). If you’re married, you could stay faithful to your spouse (cooperate) or you could cheat on them (defect). You could pay more for the shoes made in the USA (cooperate), or buy the cheap shoes that were made in a sweatshop (defect). You could pay more to buy a more fuel-efficient car (cooperate), or buy that cheap gas-guzzler even though you know how much it pollutes (defect). Most of us cooperate most of the time, but occasionally are tempted into defecting.

The “Prisoner’s Dilemma” is honestly not much of a dilemma. A lot of neoclassical economists really struggle with it; their model of rational behavior is so narrow that it keeps returning the result that rational agents should always defect, even though they know this leads to a bad outcome. More recently we’ve done experiments, and we find that very few people actually behave that way (though neoclassical economists themselves often do), and also that people end up making more money in these experimental games than they would if they behaved as neoclassical economics says is optimal.

Let me repeat that: People make more money than they would if they acted according to what’s supposed to be optimal according to neoclassical economists. I think that’s why it feels like such a paradox to them; their twin ideals of infinite identical psychopaths and maximizing the money you make have shown themselves to be at odds with one another.

But in fact, it’s really not that paradoxical: Rationality doesn’t mean being maximally selfish at every opportunity. It also doesn’t mean maximizing the money you make, but even if it did, it still wouldn’t mean being maximally selfish.

We have tested experimentally what sort of strategy is most effective at making the most money in the Prisoner’s Dilemma: basically, we have a bunch of competing computer programs play the game against one another for points, and tally up the points. When we do that, the winner is almost always a remarkably simple strategy, called “Tit for Tat”. If your opponent cooperated last time, cooperate. If your opponent defected last time, defect. Reward cooperation, punish defection.
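A minimal round-robin tournament in the spirit of these experiments can be sketched in a few dozen lines of Python. The four strategies and the round count here are my own illustrative choices, not the roster of any real tournament; the payoffs are the ones from the table above.

```python
PAYOFFS = {("C", "C"): 1, ("C", "D"): -1,
           ("D", "C"): 2, ("D", "D"): 0}  # (my move, their move) -> my points

def tit_for_tat(opponent_history):
    # Cooperate first; afterwards, copy the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def grudger(opponent_history):
    # Cooperate until the opponent defects even once, then defect forever.
    return "D" if "D" in opponent_history else "C"

def play(strategy1, strategy2, rounds=200):
    """Play an iterated game; return each player's total score."""
    seen_by_1, seen_by_2 = [], []  # each player's record of the other's moves
    score1 = score2 = 0
    for _ in range(rounds):
        move1, move2 = strategy1(seen_by_1), strategy2(seen_by_2)
        score1 += PAYOFFS[(move1, move2)]
        score2 += PAYOFFS[(move2, move1)]
        seen_by_1.append(move2)
        seen_by_2.append(move1)
    return score1, score2

strategies = {"TitForTat": tit_for_tat, "AlwaysDefect": always_defect,
              "AlwaysCooperate": always_cooperate, "Grudger": grudger}

# Round-robin (including self-play): credit each strategy its score as player 1.
totals = {name: 0 for name in strategies}
for name1, strategy1 in strategies.items():
    for name2, strategy2 in strategies.items():
        totals[name1] += play(strategy1, strategy2)[0]

print(sorted(totals.items(), key=lambda item: -item[1]))
```

Even in this tiny field, the strategies that reward cooperation and punish defection come out on top, while unconditional defection trails well behind.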

In more complex cases (such as allowing for random errors in behavior), some subtle variations on that strategy turn out to be better, but are still basically focused around rewarding cooperation and punishing defection.
This probably seems quite intuitive, yes? It may even be the strategy that it occurred to you to try when you first learned about the game. This strategy comes naturally to humans, not because it is actually obvious as a mathematical result (the obvious mathematical result is the neoclassical one that turns out to be wrong), but because it is effective—human beings evolved to think this way because it gave us the ability to form stable cooperative coalitions.

This is what gives us our enormous evolutionary advantage over just about everything else; we have transcended the limitations of a single individual and now work together in much larger groups. E.O. Wilson likes to call us “eusocial”, a term formally applied only to a very narrow range of species such as ants and bees (and for some reason, naked mole rats); but I don’t think this is actually strong enough, because human beings are social in a way that even ants are not.

We cooperate on the scale of millions of individuals, who are basically unrelated genetically (or very distantly related). That is what makes us the species who eradicate viruses and land robots on other planets. Much more so than intelligence per se, the human superpower is cooperation.

Indeed, it is not a great exaggeration to say that morality exists as a concept in the human mind because cooperation is optimal in many nonzero-sum games such as these. If the world were zero-sum, morality wouldn’t work; the immoral action would always make you better off, and the bad guys would always win. We probably would never even have evolved to think in moral terms, because any individual or species that started to go that direction would be rapidly outcompeted by those that remained steadfastly selfish.

Just give people money!

JDN 2457332 EDT 17:02.

Today is the Fifth of November, on which a bunch of people who liked a Hollywood movie post images in support of a fanatical religious terrorist who, a few centuries ago, plotted to destroy democracy in the United Kingdom. It’s really weird, but I’m not particularly interested in that.

Instead I’d like to talk about the solution to poverty, which we’ve known for a long time—in fact, it’s completely obvious—and yet have somehow failed to carry out. Many people doubt that it even works, not based on the empirical evidence, but because it just feels like it can’t be right, like it’s so obvious that surely it was tried and didn’t work and that’s why we moved on to other things. When you first tell a kindergartner that there are poor people in the world, that child will very likely ask: “Why don’t we just give them some money?”

Why not indeed?

Formally this is called a “direct cash transfer”, and it comes in many different variants, but basically they run along a continuum from unconditional—we just give it to everybody, no questions asked—to more and more conditional—you have to be below a certain income, or above a certain age, or have kids, or show up at our work program, or take a drug test, etc. The EU has a nice little fact sheet about the different types of cash transfer programs in use.

Actually, I’d argue that at the very far extreme is government salaries—the government will pay you $40,000 per year, provided that you teach high school every weekday. We don’t really think of that as a “conditional cash transfer” because it involves you providing a useful service (and is therefore more like an ordinary, private-sector salary), but many of the conditions imposed on cash transfers actually have this sort of character—we want people to do things that we think are useful to society, in order to justify us giving them the money. It really seems to be a continuum, from just giving money to everyone, to giving money to some people based on them doing certain things, to specifically hiring people to do something.

Social programs in different countries can be found at different places on this continuum. In the United States, our programs are extremely conditional, and also the total amount we give out is relatively small. In Europe, programs are not as conditional—though still conditional—and they give out more. And sure enough, after-tax poverty in Europe is considerably lower, even though before-tax poverty is about the same.

In fact, the most common way to make transfers conditional is to make them “in-kind”; instead of giving you money, we give you something—healthcare, housing, food. Sometimes this makes sense; actually I think for healthcare it makes the most sense, because price signals don’t work in a market as urgent and inelastic as healthcare (that is, you don’t shop around for an emergency room—in fact, people don’t even really shop around for a family doctor). But often it’s simply a condition we impose for political reasons; we don’t want those “lazy freeloaders” to do anything else with the money that we wouldn’t like, such as buying alcohol or gambling. Even poor people in India buy into this sort of reasoning. Never mind that they generally don’t do that, or that they could simply reallocate spending they would otherwise be making (warning: technical economics paper within) to do those things anyway—it’s the principle of the thing.

Direct cash transfers not only work—they work about as well as the best things we’ve tried. Spending on cash transfers is about as cost-effective as spending on medical aid and malaria nets.

Other than in experiments (the largest of which I’m aware of was a town in Canada, unless you count Alaska’s Permanent Fund Dividend, which is unconditional but quite small), we have never really tried implementing a fully unconditional cash transfer system. “Too expensive” is usually the complaint, and it would indeed be relatively expensive (probably greater than all of what we currently spend on Social Security and Medicare, which are two of our biggest government budget items). Implementing a program with a cost on the order of $2 trillion per year is surely not something to be done lightly. But it would have one quite substantial benefit: It would eliminate poverty in the United States immediately and forever.

This is why I really like the “abolish poverty” movement; we must recognize that at our current level of economic development, poverty is no longer a natural state or an intractable problem. It is a policy decision that we are making. We are saying, as a society, that we would rather continue to have poverty than spend that $2 trillion per year, about 12% of our $17.4 trillion GDP. We are saying that we’d rather have people who are homeless and starving than lose 12 cents of every dollar we make. (To be fair, if we include the dynamic economic impact of this tax-and-transfer system it might actually turn out to be more than that; but it could in fact be less—the increased spending would boost the economy, just as the increased taxes would restrain it—and it seems very unlikely to be more than 20% of GDP.)

For most of human history—and in most countries today—that is not the case. India could not abolish poverty immediately by a single tax policy; nor could China. Probably not Brazil either. Maybe Greece could do it, but then again maybe not. But Germany could; the United Kingdom could; France could; and we could in the United States. We have enough wealth now that with a moderate increase in government spending we could create an economic floor below which no person could fall. It is incumbent upon us at the very least to justify why we don’t.

I have heard it said that poverty is not a natural condition, but the result of human action. Even Nelson Mandela endorsed this view. This is false, actually. In general, poverty is the natural condition of all life forms on Earth (and probably all life forms in the universe). Natural selection evolves us toward fitting as many gene-packages into the environment as possible, not toward maximizing the happiness of the sentient beings those gene-packages may happen to be. To a first approximation, all life forms suffer in poverty.

We live at a unique time in human history; for no more than the last century—and perhaps not even that—we have actually had so much wealth that we could eliminate poverty by choice. For hundreds of thousands of years human beings toiled in poverty because there was no such choice. Perhaps good policy in Greece could end poverty today, but it couldn’t have during the reign of Pericles. Good policy in Italy could end poverty now, but not when Caesar was emperor. Good policy in the United Kingdom could easily end poverty immediately, but even under Queen Victoria that wasn’t feasible.

Maybe that’s why we aren’t doing it? Our cultural memory was forged in a time decades or centuries ago, before we had this much wealth to work with. We speak of “end world hunger” in the same breath as “cure cancer” or “conquer death”, a great dream that has always been impossible and perhaps still is—but in fact we should speak of it in the same breath as “split the atom” and “land on the Moon”, seminal achievements that our civilization is now capable of thanks to economic and technological revolution.

Capitalism also seems to have a certain momentum to it; once you implement a market economy that maximizes wealth by harnessing self-interest, people seem to forget that we are fundamentally altruistic beings. I may never forget that economist who sent me an email with “altruism” in scare quotes, as though it was foolish (or at best imprecise) to say that human beings care about one another. But in fact we are the most altruistic species on Earth, without question, in a sense so formal and scientific it can literally be measured quantitatively.

There are real advantages to harnessing self-interest—not least, I know my own interests considerably better than I know yours, no matter who you are—and that is part of how we have achieved this great level of wealth (though personally I think science, democracy, and the empowerment of women are the far greater causes of our prosperity). But we must not let it make us forget why we wanted to have wealth in the first place: Not to concentrate power in a handful of individuals who will pass it on to their heirs; not to “maximize work incentives”; not to give us the fanciest technological gadgets. The reason we wanted to have wealth was so that we could finally free ourselves from the endless toil that was our lot by birth and that of all other beings—to let us finally live, instead of merely survive. There is a peak to Maslow’s pyramid, and we could stand there now, together; but we must find the will to give up that 12 cents of every dollar.

Elasticity and the Law of Demand

JDN 2457289 EDT 21:04

This will be the second post in my new bite-size format, the first one that’s in the middle of the week.

I’ve alluded previously to the subject of demand elasticity, but I think it’s worth explaining in a little more detail. The basic concept is fairly straightforward: Demand is more elastic when the amount that people want to buy changes a large amount for a small change in price. The opposite is inelastic.

Apples are a relatively elastic good. If the price of apples goes up, people buy fewer apples. Maybe they buy other fruit instead, such as oranges or bananas; or maybe they give up on fruit and eat something else, like rice.

Salt is an extremely inelastic good. No matter what the price of salt is, at least within the range it has been for the last few centuries, people are going to continue to buy pretty much the same amount of salt. (In ancient times salt was actually expensive enough that people couldn’t afford enough of it, which was particularly harmful in desert regions. Mark Kurlansky’s book Salt on this subject is surprisingly compelling, given the topic.)
Specifically, the elasticity is equal to the proportional change in quantity demanded, divided by the proportional change in price.

For example, if the price of gas rises from $2 per gallon to $3 per gallon, that’s a 50% increase. If the quantity of gas purchased then falls from 100 billion gallons to 90 billion gallons, that’s a 10% decrease. If increasing the price by 50% decreased the quantity demanded by 10%, that would be a demand elasticity of -10%/50% = -1/5 = -0.2.
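That arithmetic is easy to mechanize. Here is a minimal sketch (the function name is my own; the figures are the hypothetical gas example above):

```python
def demand_elasticity(q_old, q_new, p_old, p_new):
    """Proportional change in quantity demanded divided by
    the proportional change in price."""
    return ((q_new - q_old) / q_old) / ((p_new - p_old) / p_old)

# Gas: $2 -> $3 per gallon (+50%), 100 -> 90 billion gallons (-10%)
gas_elasticity = demand_elasticity(100, 90, 2, 3)
print(gas_elasticity)  # -0.2
```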

In practice, measuring elasticity is more complicated than that, because supply and demand are both changing at the same time; so when we see a price change and a quantity change, it isn’t always clear how much of each change is due to supply and how much is due to demand. Sophisticated econometric techniques have been developed to try to separate these two effects (in future posts I plan to explain the basics of some of these techniques), but it’s difficult and not always successful.

In general, markets function better when supply and demand are more elastic. When shifts in price trigger large shifts in quantity, this creates pressure on the price to remain at a fixed level rather than jumping up and down. This in turn means that the market will generally be predictable and stable.

It’s also much harder to make monopoly profits in a market with elastic demand; even if you do have a monopoly, if demand is highly elastic then raising the price won’t make you any money, because whatever you gain in selling each gizmo for more, you’ll lose in selling fewer gizmos. In fact, the profit margin for a monopoly is inversely proportional to the elasticity of demand.
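That inverse relationship is known in economics as the Lerner index: at the profit-maximizing price, (P − MC)/P = −1/elasticity. A minimal sketch of the constant-elasticity case, with a hypothetical marginal cost of $10:

```python
def monopoly_price(marginal_cost, elasticity):
    """Profit-maximizing price under constant-elasticity demand
    (requires elastic demand, i.e. elasticity < -1)."""
    return marginal_cost * elasticity / (1 + elasticity)

price_elastic = monopoly_price(10, -5)       # highly elastic demand: $12.50
price_inelastic = monopoly_price(10, -1.25)  # barely elastic demand: $50.00

# Profit margins: (P - MC) / P = -1 / elasticity
margin_elastic = (price_elastic - 10) / price_elastic        # 1/5 = 0.2
margin_inelastic = (price_inelastic - 10) / price_inelastic  # 1/1.25 = 0.8
```

The more elastic the demand, the thinner the markup a monopolist can sustain; as elasticity approaches −1 from below, the formula blows up and the markup becomes enormous.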

Markets do not function well when supply and demand are highly inelastic. Monopolies can become very powerful and result in very large losses of human welfare. A particularly vivid example of this was in the news recently, when a company named Turing purchased the rights to a drug called Daraprim used primarily by AIDS patients, then hiked the price from $13.50 to $750. This made enough people mad that the CEO has since promised to bring it back down, though he hasn’t said how far.

That price change was only possible because Daraprim has highly inelastic demand—if you’ve got AIDS, you’re going to take AIDS medicine, as much as prescribed, provided only that it doesn’t drive you completely bankrupt. (Not an unreasonable fear, as medical costs are the leading cause of bankruptcy in the United States.) The new price would probably bankrupt a few people, but for the most part it wouldn’t affect the amount of drug sold; it would just funnel a huge amount of money from AIDS patients to the company. This is probably part of why it made people so mad; that, and the fact that a few people would probably die because they couldn’t afford the newly expensive medication.

Imagine if a company had tried to pull the same stunt for a more elastic good, like apples. “CEO buys up all apple farms, raises price of apples from $2 per pound to $100 per pound.” What’s going to happen then? People are not going to buy any apples. Perhaps a handful of the most die-hard apple lovers still would, but the rest of us are going to meet our fruit needs elsewhere.

For most goods most of the time, elasticity of demand is negative, meaning that as price increases, quantity demanded decreases. This is in fact called the Law of Demand; but as I’ve said, “laws” in economics are like the Pirate Code: They’re really more what you’d call “guidelines”.
There are three major exceptions to the Law of Demand. The first one is the one most economists talk about, and it almost never happens. The second one is talked about occasionally, and it’s quite common. The third one is almost never talked about, and yet it is by far the most common and one of the central driving forces in modern capitalism.
The exception that we usually talk about in economics is called the Giffen Effect. A Giffen good is a good that’s so cheap and such a bare necessity that when it becomes more expensive, you won’t be able to buy less of it; instead you’ll buy more of it, and buy less of other things with your reduced income.

It’s very hard to come up with empirical examples of Giffen goods, but it’s an easy theoretical argument to make. Suppose you’re buying grapes for a party, and you know you need 4 bags of grapes. You have $10 to spend. Suppose there are green grapes selling for $1 per bag and red grapes selling for $4 per bag, and suppose you like red grapes better. With your $10, you can buy 2 bags of green grapes and 2 bags of red grapes, and that’s the 4 bags you need. But now suppose that the price of green grapes rises to $2 per bag. In order to afford 4 bags of grapes, you now need to buy 3 bags of green grapes and only 1 bag of red grapes. Even though it was the price of green grapes that rose, you ended up buying more green grapes. In this scenario, green grapes are a Giffen good.
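The grape arithmetic can be checked by brute force. This sketch (the function and the preference encoding are my own) searches every affordable 4-bag bundle and, since you like red grapes better, keeps the one with the most red bags:

```python
def best_bundle(price_green, price_red=4, budget=10, bags_needed=4):
    """Return (green bags, red bags): the affordable 4-bag bundle
    with the most red grapes (the preferred kind)."""
    affordable = [(green, bags_needed - green)
                  for green in range(bags_needed + 1)
                  if green * price_green + (bags_needed - green) * price_red <= budget]
    return min(affordable)  # fewest green bags = most red bags

print(best_bundle(price_green=1))  # (2, 2): two bags of each
print(best_bundle(price_green=2))  # (3, 1): green got pricier, yet you buy MORE
```

When green grapes cost $1, the best affordable bundle is two of each; when they rise to $2, it becomes three green and one red, exactly the Giffen reversal described above.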

The exception that is talked about occasionally and occurs a lot in real life is the Veblen Effect. Whereas a Giffen good is a very cheap bare necessity, a Veblen good is a very expensive pure luxury.

The whole point of buying a Veblen good is to prove that you can. You don’t buy a Ferrari because a Ferrari is a particularly nice automobile (a Prius is probably better, and a Tesla certainly is); you buy a Ferrari to show off that you’re so rich you can buy a Ferrari.

On my previous post, jenszorn asked: “Much of consumer behavior is irrational by your standards. But people often like to spend money just for the sake of spending and for showing off. Why else does a Rolex carry a price tag for $10,000 for a Rolex watch when a $100 Seiko keeps better time and requires far less maintenance?” Veblen goods! It’s not strictly true that buying Veblen goods is irrational; any particular individual’s best interest can be served by buying Veblen goods in order to signal their status and reap the benefits of that higher status. However, it’s definitely true that Veblen goods are inefficient; because ostentatious displays of wealth are a zero-sum game (it’s not about what you have, it’s about what you have that others don’t), any resources spent on rich people proving how rich they are are resources that society could otherwise have used for, say, feeding the poor, curing diseases, building infrastructure, or colonizing other planets.

Veblen goods can also result in a violation of the Law of Demand, because raising the price of a Veblen good like Ferraris or Rolexes can make them even better at showing off how rich you are, and therefore more appealing to the kind of person who buys them. Conversely, lowering the price might not result in any more being purchased, because they wouldn’t seem as impressive anymore. Currently a Ferrari costs about $250,000; if they reduced that figure to $100,000, there aren’t a lot of people who would suddenly find it affordable, but many people who currently buy Ferraris might switch to Bugattis or Lamborghinis instead. There are limits to this, of course: If the price of a Ferrari dropped to $2,000, people wouldn’t buy them to show off anymore; but the far larger effect would be the millions of people buying them because you can now get a perfectly good car for $2,000. Yes, I would sell my dear little Smart if it meant I could buy a Ferrari instead and save $8,000 at the same time.

But the third major exception to the Law of Demand is actually the most important one, yet it’s the one that economists hardly ever talk about: Speculation.

The most common reason why people would buy more of something that has gotten more expensive is that they expect it to continue getting more expensive, and then they will be able to sell what they bought at an even higher price and make a profit.

When the price of Apple stock goes up, do people stop buying Apple stock? On the contrary, they almost certainly start buying more—and then the price goes up even further still. If rising prices get self-fulfilling enough, you get an asset bubble; it grows and grows until one day it can’t, and then the bubble bursts and prices collapse again. This has happened hundreds of times in history, from the Tulip Mania to the Beanie Baby Bubble to the Dotcom Boom to the US Housing Crisis.

It isn’t necessarily irrational to participate in a bubble; some people must be irrational, but most participants can rationally buy at prices above what the asset is really worth, by accurately predicting that they’ll find someone else willing to pay an even higher price later. It’s called Greater Fool Theory: The price I paid may be foolish, but I’ll find someone even more foolish to take it off my hands. But like Veblen goods, speculation goods are most definitely inefficient; nothing good comes from prices that rise and fall wildly out of sync with the real value of goods.
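The Greater Fool logic can be written as a simple expected-value calculation; all the numbers here are hypothetical:

```python
def expected_profit(price_paid, true_value, resale_price, p_resell):
    """Expected profit from buying an overpriced asset: with probability
    p_resell you unload it at resale_price before the bubble pops;
    otherwise you're stuck holding it at its true value."""
    return (p_resell * (resale_price - price_paid)
            + (1 - p_resell) * (true_value - price_paid))

# Paying $120 for an asset worth $100 is "rational" if you're 90% sure
# you can resell it at $150 first:
print(round(expected_profit(120, 100, 150, 0.9), 2))  # 25.0
```

The expected profit is positive, so each individual buyer is behaving rationally given their beliefs, even though the whole chain of purchases is building toward a crash.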

Speculation goods are all around us, from stocks to gold to real estate. Most speculation goods also serve some other function (though some, like gold, are really mostly just Veblen goods otherwise; actual useful applications of gold are extremely rare), but their speculative function often controls their price in a way that dominates all other considerations. There’s no real limit to how high or low the price can go for a speculation good; no longer tied to the real value of the good, it simply becomes a question of how much people decide to pay.

Indeed, speculation bubbles are one of the fundamental problems with capitalism as we know it; they are one of the chief causes of the boom-and-bust business cycle that has cost the world trillions of dollars and thousands of lives. Most of our financial industry is now dedicated to the trading of speculation goods, and finance is taking over a larger and larger section of our economy all the time. Many of the world’s best and brightest are being funneled into finance instead of genuinely productive industries; 15% of Harvard grads take a job in finance, and almost half did just before the crash. The vast majority of what goes on in our financial system is simply elaborations on speculation; very little real productivity ever enters into the equation.

In fact, as a general rule I think when we see a violation of the Law of Demand, we know that something is wrong in the economy. If there are Giffen goods, some people are too poor to buy what they really need. If there are Veblen goods, inequality is too large and people are wasting resources competing for status. And since there are always speculation goods, the history of capitalism has been a history of market instability.


Advertising: Someone is being irrational

JDN 2457285 EDT 12:52

I’m working on moving toward a slightly different approach to posting; instead of one long 3000-word post once a week, I’m going to try to do two more bite-sized posts of about 1500 words or less spread throughout the week. I’m actually hoping to work toward setting up a Patreon and making blogging into a source of income.

Today’s bite-sized post is about advertising, and a rather simple, basic argument that shows that irrational economic behavior is widespread.

First, there are advertisements that don’t make sense. They don’t tell you anything about the product, they are often completely absurd, and while sometimes entertaining they are rarely so entertaining that people would pay to see them in theaters or buy them on DVD—which means that any entertainment value they had is outweighed by the opportunity cost of seeing them instead of the actual TV show, movie, or whatever else it was you wanted to see.

If you doubt that there are advertisements that don’t make sense, I have one example in particular for you which I think will settle this matter:

If you didn’t actually watch it, you must. It is too absurd to be explained.

And of course there are many other examples, from Coca-Cola’s weird associations with polar bears to the series of GEICO TV spots about Neanderthals that they thought were so entertaining as to deserve a TV show (the world proved them wrong), to M&M commercials that present a terrifying world in which humans regularly consume the chocolatey flesh of other sapient citizens (and I thought beef was bad!).

Or here’s another good one:

In the above commercial, Walmart attempts to advertise themselves by showing a heartwarming story of a child who works hard to make money by doing odd jobs, including using the model of door-to-door individual sales that Walmart exists to make obsolete. The only contribution Walmart makes to the story is apparently “we have affordable bicycles for children”. Coca-Cola is also thrown in for some reason.

Certain products seem to attract nonsensical advertising more than others, with car insurance being the prime culprit of totally nonsensical and irrelevant commercials, perhaps because of GEICO in particular, which does not actually seem to be any good at providing car insurance but instead seems to spend all of its resources making commercials.

Commercials for cars themselves are an interesting case, as certain ads actually appeal in at least a general way to the quality of the vehicle itself:

Then there are those that vaguely allude to qualities of their vehicles, but mostly immerse us in optimistic cyberpunk:

Others, however, make no attempt to say anything about the vehicle, instead spinning us exciting tales of giant hamsters who use the car and the power of dance to somehow form a truce between warring robot factions in a dystopian future (if you haven’t seen this commercial, none of that is a joke; see for yourself below):

So, I hope that I have satisfied you that there are in fact advertisements which don’t make sense, which could not possibly give anyone a rational reason to purchase the product contained within.

Therefore, at least one of the following statements must be true:

1. Consumers behave irrationally by buying products for irrational reasons
2. Corporations behave irrationally by buying advertisements that don’t work

Both could be true (in fact I think both are true), but at least one must be, on pain of contradiction, as long as you accept that there are advertisements which don’t provide rational reasons to buy products. There’s no wiggling out of this one, neoclassicists.

Advertising forms a large part of our economy—Americans spend $171 billion per year on ads, more than the federal government spends on education, and also more than the nominal GDP of Hungary or Vietnam. This figure is growing thanks to the Internet and its proliferation of “free” ad-supported content. Insofar as advertising is irrational, this money is being thrown down the drain.

The waste from spending on ads that don’t work is limited; you can’t waste more than you actually spent. But the waste from buying things you don’t actually need is not limited in the same way; an ad that cost $1 million to air (cheaper than a typical Super Bowl ad) could lead to $10 million in worthless purchases.

I wouldn’t say that all advertising is irrational; some ads do actually provide enough meaningful information about a product that they could reasonably motivate you to buy it (or at least look into buying it), and it is in both your best interest and the company’s best interest for you to have such information.

But I think it’s not unreasonable to estimate that about half of our advertising spending is irrational, either by making people buy things for bad reasons or by making corporations waste time and money on buying ads that don’t work. This amounts to some $85 billion per year, or enough to pay every undergraduate tuition at every public university in the United States.

This state of affairs is not inevitable.

Most meaningless ads could be undermined by regulation; instead of the current “blacklist” model where an ad is legal as long as it doesn’t explicitly state anything that is verifiably false, we could move to a “whitelist” model where an ad is illegal if it states anything that isn’t verifiably true. Red Bull cannot give you wings, Maxwell House isn’t good to the last drop, and Volkswagen needs to be more specific than “round for a reason”. We may never be able to completely eliminate irrelevant emotionally-salient allusions (pictures of families, children, puppies, etc.), but as long as the actual content of the words is regulated it would be much harder to deluge people with advertisements that provide no actual information.

We have a choice, as a civilization: Do we want to continue to let meaningless ads invade our brains and waste the resources of our society?

Free trade, fair trade, or what?

JDN 2457271 EDT 11:34.

As I mentioned in an earlier post, almost all economists are opposed to protectionism. In a survey of 264 AEA economists, 87% opposed tariffs to protect US workers against foreign competition.

(By the way, 58% said they usually vote Democrat and only 23% said they usually vote Republican. Given that economists are overwhelmingly middle-age rich White males—only 12% of tenured faculty economists are women and the median income of economists is over $90,000—that’s saying something. Dare I suggest it’s saying that Democrat economic policy is usually better?)

There are a large number of published research papers showing large positive effects of free trade agreements, such as this paper, and this paper, and this paper, and this paper. It’s hard to find any good papers showing any significant negative effects. This is probably why the consensus is so strong; the empirical evidence is overwhelming.

Yet protectionism is very popular among the general public. The majority of both Democrat and Republican voters believe that free trade agreements have harmed the United States. For decades, protectionism has always been the politically popular answer.

To be fair, it’s actually possible to think that free trade harms the US but still support free trade; actually there are some economists who argue that free trade has harmed the US, but has benefited other countries like China and India so much more that it is worth it, making free trade an act of global altruism and good will (for the opposite view, here’s a pretty good article about how “free trade” in principle is often mercantilism in practice, and by no means altruistic). As Krugman talks about, there is some evidence that income inequality in the First World has been exacerbated by globalization—but it’s clearly not the primary reason for rising inequality.

What’s going on here? Are economists ignoring the negative impacts of free trade because it doesn’t fit their elegant mathematical models? Is the general public ignorant of how trade actually works? Does the way free trade works, or its interaction with human psychology, inherently obscure its benefits while emphasizing its harms?

Yes. All of the above.

One of the central mistakes of neoclassical economics is the tendency to over-aggregate. Instead of looking at the impact on individuals, it’s much easier to look at the impact on aggregated abstractions like trade flows and GDP. To some extent this is inevitable—there are simply too many people in the world to keep track of them all. But we need to be aware of what we lose when we aggregate, and we need to test the robustness of our theories by applying different models of aggregation (such as comparing “how does this affect Americans” with “how does this affect the First World middle class”).

It is absolutely unambiguous that free trade increases trade flows and GDP, and for small countries these benefits can be mind-bogglingly huge. A key part of the amazing success story of economic development that is Korea is that they dramatically increased their openness to global trade.

The reason for this is absolutely fundamental to economics, and in grasping it in 1776 Adam Smith basically founded the field: Voluntary trade benefits both parties.

As most economists would put it today, comparative advantage leads to Pareto-improving gains from trade. Or as I’d tend to put it, more succinctly yet just as thoroughly based in modern game theory: Trade is nonzero-sum.

When you sell a product to someone, it is because the money they’re offering you is worth more to you than the product—and because the product is worth more to them than the money. You each lose something you value less and gain something you value more—so you are both better off.

This mutual benefit occurs whether you are individuals, corporations, or nations. It’s a fundamental principle of economics that underlies the operation of markets at every scale.

This is what I think most people don’t understand when they say they want to “stop sending jobs overseas”. If by that all you mean is ensuring that there aren’t incentives to offshore and outsource, that’s quite reasonable. Even some degree of incentive to keep businesses in the US might make sense, to avoid a race-to-the-bottom in global wages. But I get the sense that it is more than this, that people have a general notion that jobs are zero-sum and if we hire a million people in China that means a million people must lose their jobs in the US. This is not simply wrong, it is fundamentally wrong; it misses the entire point of economics. If there is one core principle that defines economics, I think it would be that the universe is nonzero-sum; gains for some can also be gains for others. There is not a fixed amount of stuff in the world that we distribute; we can make more stuff. Handled properly, a trade that results in a million people hired in China can mean an extra million people hired in the US.

Once you introduce a competitive market, things get more complicated, because there aren’t just winners—there are also losers. When you have competitors, someone can buy from them instead of you, and the two of them benefit, but you are harmed. By the standard methods of calculating benefits and harms (which admittedly leave much to be desired), we can show quite clearly that in general, on average, the benefits outweigh the harms.

But of course we don’t live “in general, on average”. Despite the overwhelming, unambiguous benefit to the economy as a whole, there is some evidence that free trade can produce a good deal of harm to specific individuals.

Suppose you live in the US and your job is to assemble iPads. You’re good at it, you like it, it pays pretty well. But now Apple says that they want to “reduce labor costs” (they are in fact doing nothing of the sort; to really reduce labor costs in a deep economic sense you’d have to make work easier, more productive, or more fun—the wage and the cost are fundamentally different things), so they outsource production to Foxconn in China, who pay wages 1/30 of what you were being paid.

The net result of this change to the economy as a whole is almost certainly positive—the price of iPads goes down, we all get to have iPads. (There’s a meme going around claiming that the price of an iPad would be almost $15,000 if it were made in the US; no, it would cost about $1000 even if our productivity were no higher and Apple could keep their current profit margin intact, both of which are clearly overestimates. But since it’s currently selling for about $500, that’s still a big difference.) Apple makes more profits, which is why they did it—and we do have to count that in our GDP. Most importantly, workers in China get employed in safe, high-skill jobs instead of working in coal mines, subsistence farming, or turning to drugs and prostitution. More stuff, more profits, better jobs for some of the world’s poorest workers. These are all good things, and overall they outweigh the harm of you losing your job.

Well, from a global perspective, anyway. I doubt they outweigh the harm from your perspective. You still lost a good job; you’re now unemployed, and may have skills so specific that they can’t be transferred to anything else. You’ll need to retrain, which means going back to school or else finding one of those rare far-sighted companies that actually trains their workers. Since the social welfare system in the US is such a quagmire of nonsensical programs, you may be ineligible for support, or eligible in theory and unable to actually get it in practice. (Recently I got a notice from Medicaid that I need to prove again that my income is sufficiently low. Apparently it’s because I got hired at a temporary web development gig, which paid me a whopping $700 over a few weeks—why, that’s almost the per-capita GDP of Ghana, so clearly I am a high-roller who doesn’t need help affording health insurance. I wonder how much they spend sending out these notices.)

If we had a basic income—I know I harp on this a lot, but seriously, it solves almost every economic problem you can think of—losing your job wouldn’t make you feel so desperate, and owning a share in GDP would mean that the rising tide actually would lift all boats. This might make free trade more popular.

But even with ideal policies (which we certainly do not have), the fact remains that human beings are loss-averse. We care more about losses than we do about gains. The pain you feel from losing $100 is about the same as the joy you feel from gaining $200. The pain you feel from losing your job is about twice as intense as the joy you feel from finding a new one.
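That two-to-one ratio is roughly the standard loss-aversion coefficient from prospect theory. As a toy sketch of my own (ignoring the diminishing-sensitivity curvature of the real value function, which this post isn't about):

```python
# Toy prospect-theory value function (illustrative only).
# A loss-aversion coefficient of about 2 matches the claim that
# losing $100 feels about as bad as gaining $200 feels good.
LOSS_AVERSION = 2.0

def felt_value(change):
    """Subjective value of a monetary change: losses weigh twice as much as gains."""
    return change if change >= 0 else LOSS_AVERSION * change

# Losing $100 hurts as much as gaining $200 pleases:
print(felt_value(-100), felt_value(200))  # -200.0 200
```

The names and the exact functional form here are my own invention for illustration; the only empirical content is the factor of two.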

Because of loss aversion, the constant churn of innovation and change, the “creative destruction” that Schumpeter considered the defining advantage of capitalism—well, it hurts. The constant change and uncertainty is painful, and we want to run away from it.

But the truth is, we can’t. There’s no way to stop the change in the global economy, and most of our attempts to insulate ourselves from it only end up hurting us more. This, I think, is the fundamental reason why protectionism is popular among the general public but not economists: The general public sees protectionism as a way of holding onto the past, while economists recognize that it is simply a way of damaging the future. That constant churning of people gaining and losing jobs isn’t a bug, it’s a feature—it’s the reason that capitalism is so efficient in the first place.

There are a few ways we can reduce the pain of this churning, but we need to focus on that—reducing the pain—rather than trying to stop the churning itself. We should provide social welfare programs that allow people to survive while they are unemployed. We should use active labor market policies to train new workers and match them with good jobs. We may even want to provide some sort of subsidy or incentive to companies that don’t outsource—a small one, to make sure they don’t do so needlessly, but not a large one, so they’ll still do it when it’s actually necessary.

But the one thing we must not do is stop creating jobs overseas. And yes, that is what we are doing, creating jobs. We are not sending jobs that already exist, we are creating new ones. In the short run we also destroy some jobs here, but if we do it right we can replace them—and usually we do okay.

If we stop creating jobs in India and China and around the world, millions of people will starve.

Yes, it is as stark as that. Millions of lives depend upon continued open trade. We in the United States are a manufacturing, technological and agricultural superpower—we could wall ourselves off from the world and only see a few percentage points shaved off of GDP. But a country like Nicaragua or Ghana or Vietnam doesn’t have that option; if they cut off trade, people start dying.

This is actually the main reason why our trade agreements are often so unfair; we are in by far the stronger bargaining position, so we can make them cut their tariffs on textiles even as we maintain our subsidies on agriculture. We are Mr. Bumble dishing out gruel and they are Oliver Twist begging for another bite.

We can’t afford to stop free trade. We can’t even afford to significantly slow it down. A global economy is the best hope we have for global peace and global prosperity.

That is not to say that we should leave trade completely unregulated; trade policy can and should be used to enforce human rights standards. That enormous asymmetry in bargaining power doesn’t have to be used to maximize profits; it can be used to advance human rights.

This is not as simple as saying we should never trade with nations that have bad human rights records, by the way. First of all that would require we cut off Saudi Arabia and China, which is totally unrealistic and would impoverish millions of people; second it doesn’t actually solve the problem. Instead we should use sanctions, tariffs, and trade agreements to provide incentives to improve human rights, rewarding governments that do and punishing governments that don’t. We could have a sliding tariff that decreases every time you show improvement in human rights standards. Think of it like behavioral reinforcement; reward good behavior and you’ll get more of it.

We do need to have sweatshops—but as Krugman has come around to realizing, we can make sweatshops safer. We can put pressure on other countries to treat their workers better, pay them more—and actually make the global economy more efficient, because right now their wages are held down below the efficient level by the power that corporations wield over them. We should not demand that they pay the same they would here in the First World—that’s totally unrealistic, given the difference in productivity—but we should demand that they pay what their workers actually deserve.

Similar incentives should apply to individual corporations, which these days are as powerful as some governments. For example, as part of a zero-tolerance program against forced labor, any company caught using or outsourcing to forced labor should have its profits garnished for damages and the executives who made the decision imprisoned. Sometimes #Scandinaviaisnotbetter; IKEA was involved in such outsourcing during the Cold War, and it is currently being litigated just how much they knew and what they could have done about it. If they knew and did nothing, some IKEA executive should be going to prison. If that seems extreme, let me remind you what they did: They used slaves.

My standard for penalizing human rights violations, whether by corporations or governments, is basically like this: Follow the decision-making up the chain of command, stopping only when the next-higher executive can clearly show by a preponderance of the evidence that they were kept out of the loop. If no executive can provide sufficient evidence, the highest-ranking executive at the time the crime was committed will be held responsible. If you don’t want to be held responsible for crimes committed by people who work for you, it’s your responsibility to bring them to justice. Negligence in oversight will not be exonerating because you didn’t know; it will be incriminating because you should have. When your bank is caught laundering money for terrorists and drug lords, it isn’t enough to have your chief of compliance resign; he should be imprisoned—and if his superiors knew about it, so should they.

In fact maybe the focus should be on corporations, because we have the legal authority to do that. When dealing with other countries, there are United Nations rules and simply the de facto power of large trade flows and national standing armies. With Saudi Arabia or China, there’s a very real chance that they’ll simply tell us where we can shove it; but if we get that same kind of response from HSBC or Goldman Sachs (which, actually, we did), we can start taking out handcuffs (that, we did not do—but I think we should have).

We can also use consumer pressure to change the behavior of corporations, such as Fair Trade. There’s some debate about just how effective these things are, but the comparison that is often made between Fair Trade and tariffs is ridiculous; this is a change in consumer behavior, not a change in government policy. There is absolutely no loss of freedom. Choosing not to buy something does not constitute coercion against someone else.

Maybe there are more efficient ways to spend money (like donating it directly to the best global development charities), but if you start going down that road you quickly turn into Peter Singer and start saying that wearing nicer shoes means you’re committing murder. By all means, let’s empirically study different methods of fighting poverty and focus on the ones that work best; but there’s a perverse smugness to criticisms of Fair Trade that says to me this isn’t actually about that at all. Instead, I think most people who criticize Fair Trade don’t support the idea of altruism at all—they’re far-right Randian libertarians who honestly believe that selfishness is the highest form of human morality. (It is in fact the second-lowest, according to Kohlberg.)

Maybe it will turn out that Fair Trade is actually ineffective at fighting poverty, but it’s clear that an unregulated free market isn’t good at that either. Those aren’t the only options, and the best way to find out which methods work is to give them a try. Consumer pressure clearly can work in some cases, and it’s a low-cost zero-regulation solution. They say the road to Hell is paved with good intentions—but would you rather we have bad intentions instead?

By these two methods we could send a clear message to multinational corporations that if they want to do business in the US—and trust me, they do—they have to meet certain standards of human rights. This in turn will make those corporations put pressure on their suppliers, all the way down the supply chain, to uphold the standards lest they lose their contracts. With some companies upholding labor standards in Third World countries, others will be forced to, as workers refuse to work for companies that don’t. This could make life better for many millions of people.

But this whole plan only works on one condition: We need to have trade.

How much should we save?

JDN 2457215 EDT 15:43.

One of the most basic questions in macroeconomics has oddly enough received very little attention: How much should we save? What is the optimal level of saving?

At the microeconomic level, how much you should save basically depends on what you think your income will be in the future. If you have more income now than you think you’ll have later, you should save now to spend later. If you have less income now than you think you’ll have later, you should spend now and dissave—save negatively, otherwise known as borrowing—and pay it back later. The life-cycle hypothesis says that people save when they are young in order to retire when they are old—in its strongest form, it says that we keep our level of spending constant across our lifetime at a value equal to our average income. The strongest form is utterly ridiculous and disproven by even the most basic empirical evidence, so usually the hypothesis is studied in a weaker form that basically just says that people save when they are young and spend when they are old—and even that runs into some serious problems.

The biggest problem, I think, is that the interest rate you receive on savings is always vastly less than the interest rate you pay on borrowing, which in turn is related to the fact that people are credit-constrained: they generally would like to borrow more than they actually can.

It also has a lot to do with the fact that our financial system is an oligopoly; banks make more profits if they can pay savers less and charge borrowers more, and by colluding with each other they can control enough of the market that no major competitors can seriously undercut them. (There is some competition, however, particularly from credit unions; if you compare these two credit card offers from University of Michigan Credit Union at 8.99%/12.99% and Bank of America at 12.99%/22.99% respectively, you can see the oligopoly in action as the tiny competitor charges you a much fairer price than the oligopoly beast. 9% means doubling in just under eight years, 13% means doubling in a little over five years, and 23% means doubling in three years.)

Another very big problem with the life-cycle theory is that human beings are astonishingly bad at predicting the future, and thus our expectations about our future income can vary wildly from the actual future income we end up receiving. People who are wise enough to know that they do not know generally save more than they think they’ll need, which is called precautionary saving. Combine that with our limited capacity for self-control, and I’m honestly not sure the life-cycle hypothesis is doing any work for us at all.
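Those doubling times follow directly from compound interest; here is a quick sketch, assuming monthly compounding as credit cards typically use (the APRs are the ones quoted above):

```python
import math

def doubling_time_years(apr, periods_per_year=12):
    """Years for a balance to double at the given APR with periodic compounding."""
    per_period = apr / periods_per_year
    return math.log(2) / (periods_per_year * math.log(1 + per_period))

for apr in (0.0899, 0.1299, 0.2299):  # the UMCU and BoA rates quoted above
    print(f"{apr:.2%} APR doubles your debt in {doubling_time_years(apr):.1f} years")
```

This yields about 7.7, 5.4, and 3.0 years respectively: just under eight, a little over five, and three.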

But okay, let’s suppose we had a theory of optimal individual saving. That would still leave open a much larger question, namely optimal aggregate saving. The amount of saving that is best for each individual may not be best for society as a whole, and it becomes a difficult policy challenge to provide incentives to make people save the amount that is best for society.

Or it would be, if we had the faintest idea what the optimal amount of saving for society is. There’s a very simple rule of thumb that a lot of economists use, often called the golden rule (not to be confused with the actual Golden Rule, though I guess the idea is that a social optimum is a moral optimum), which is that we should save exactly the same amount as the share of capital in income. If capital receives one third of income, then one third of income should be saved to make more capital for next year. (This figure of one third has been called a “law”, but as with most “laws” in economics it’s really more like the Pirate Code; labor’s share of income varies across countries and years. I doubt you’ll be surprised to learn that it is falling around the world, meaning more income is going to capital owners and less is going to workers.)
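You can check the golden rule in a toy version of the standard Solow growth model. This is my own illustrative sketch (the parameter values are made up), showing that steady-state consumption peaks exactly when the saving rate equals the capital share:

```python
# Toy Solow growth model with Cobb-Douglas output y = k**ALPHA.
# Parameter values are made up for illustration.
ALPHA = 1 / 3   # capital's share of income
DELTA = 0.05    # depreciation rate (standing in for depreciation plus growth)

def steady_state_consumption(s):
    """Steady-state consumption per worker at saving rate s."""
    k_star = (s / DELTA) ** (1 / (1 - ALPHA))  # solves s * k**ALPHA = DELTA * k
    return (1 - s) * k_star ** ALPHA

# Search saving rates from 1% to 99%: consumption peaks at s = ALPHA.
best = max((i / 100 for i in range(1, 100)), key=steady_state_consumption)
print(best)  # 0.33
```

The maximum lands at a saving rate of one third, matching capital's one-third share of income, which is the golden rule.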

When you hear that, you should be thinking: “Wait. Saved to make more capital? You mean invested to make more capital.” And this is the great sleight of hand in the neoclassical theory of economic growth: Saving and investment are made to be the same by definition. It’s called the savings-investment identity. As I talked about in an earlier post, the model seems to be that there is only one kind of good in the world, and you either use it up or save it to make more.

But of course that’s not actually how the world works; there are different kinds of goods, and if people stop buying tennis shoes that doesn’t automatically lead to more factories built to make tennis shoes—indeed, quite the opposite. If people reduce their spending, the products they no longer buy will now accumulate on shelves and the businesses that make those products will start downsizing their production. If people increase their spending, the products they now buy will fly off the shelves and the businesses that make them will expand their production to keep up.

In order to make the savings-investment identity true by definition, the definition of investment has to be changed. Inventory accumulation, products building up on shelves, is counted as “investment” when of course it is nothing of the sort. Inventory accumulation is a bad sign for an economy; indeed the time when we see the most inventory accumulation is right at the beginning of a recession.

As a result of this bizarre definition of “investment” and its equation with saving, we get the famous Paradox of Thrift, which does indeed sound paradoxical in its usual formulation: “A global increase in marginal propensity to save can result in a reduction in aggregate saving.” But if you strip out the jargon, it makes a lot more sense: “If people suddenly stop spending money, companies will stop investing, and the economy will grind to a halt.” There’s still a bit of feeling of paradox from the fact that we tried to save more money and ended up with less money, but that isn’t too hard to understand once you consider that if everyone else stops spending, where are you going to get your money from?
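The Paradox of Thrift falls out of even a bare-bones Keynesian cross model, once investment is allowed to depend on income. This is an illustrative sketch of my own with made-up numbers, not anything from the literature:

```python
# Bare-bones Keynesian cross (illustrative; all parameter values are made up).
# Consumption: C = C0 + mpc * Y      Investment: I = I0 + I1 * Y
# Equilibrium income solves Y = C + I.
C0, I0, I1 = 100.0, 50.0, 0.1

def equilibrium(mpc):
    """Return (income, saving) in equilibrium for a given propensity to consume."""
    Y = (C0 + I0) / (1 - mpc - I1)
    saving = Y - (C0 + mpc * Y)  # S = Y - C; in equilibrium S = I
    return Y, saving

Y_spend, S_spend = equilibrium(mpc=0.8)    # people spend freely
Y_thrift, S_thrift = equilibrium(mpc=0.7)  # everyone tries to save more
print(round(Y_spend), round(S_spend))      # 1500 200
print(round(Y_thrift), round(S_thrift))    # 750 125
```

When everyone tries to save more (the propensity to consume falls from 0.8 to 0.7), equilibrium income collapses by half, and aggregate saving actually falls from 200 to 125, because investment falls along with income.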

So what if something like this happens, we all try to save more and end up having no money? The government could print a bunch of money and give it to people to spend, and then we’d have money, right? Right. Exactly right, in fact. You now understand monetary policy better than most policymakers. Like a basic income, for many people it seems too simple to be true; but in a nutshell, that is Keynesian monetary policy. When spending falls and the economy slows down as a result, the government should respond by expanding the money supply so that people start spending again. In practice they usually expand the money supply by a really bizarre roundabout way, buying and selling bonds in open market operations in order to change the interest rate that banks charge each other for loans of reserves, the Fed funds rate, in the hopes that banks will change their actual lending interest rates and more people will be able to borrow, thus, ultimately, increasing the money supply (because, remember, banks don’t have the money they lend you—they create it).

We could actually just print some money and give it to people (or rather, change a bunch of numbers in an IRS database), but this is very unpopular, particularly among people like Ron Paul and other gold-bug Republicans who don’t understand how monetary policy works. So instead we try to obscure the printing of money behind a bizarre chain of activities, opening many more opportunities for failure: Chiefly, we can hit the zero lower bound where interest rates are zero and can’t go any lower (or can they?), or banks can be too stingy and decide not to lend, or people can be too risk-averse and decide not to borrow; and that’s not even to mention the redistribution of wealth that happens when all the money you print is given to banks. When that happens we turn to “unconventional monetary policy”, which basically just means that we get a little bit more honest about the fact that we’re printing money. (Even then you get articles like this one insisting that quantitative easing isn’t really printing money.)

I don’t know, maybe there’s actually some legitimate reason to do it this way—I do have to admit that when governments start openly printing money it often doesn’t end well. But really the question is why you’re printing money, whom you’re giving it to, and above all how much you are printing. Weimar Germany printed money to pay off odious war debts (because it totally makes sense to force a newly-established democratic government to pay the debts incurred by belligerent actions of the monarchy they replaced; surely one must repay one’s debts). Hungary printed money to pay for rebuilding after the devastation of World War 2. Zimbabwe printed money to pay for a war (I’m sensing a pattern here) and compensate for failed land reform policies. In all three cases the amount of money they printed was literally billions of times their original money supply. Yes, billions. They found their inflation cascading out of control and instead of stopping the printing, they printed even more. The United States has so far printed only about three times our original monetary base, still only about a third of our total money supply. (Monetary base is the part that the Federal Reserve controls; the rest is created by banks. Typically 90% of our money is not monetary base.) Moreover, we did it for the right reasons—in response to deflation and depression. That is why, as Matthew O’Brien of The Atlantic put it so well, the US can never be Weimar.

I was supposed to be talking about saving and investment; why am I talking about money supply? Because investment is driven by the money supply. It’s not driven by saving, it’s driven by lending.

Now, part of the underlying theory was that lending and saving are supposed to be tied together, with money lent coming out of money saved; this is true if you assume that things are in a nice tidy equilibrium. But we never are, and frankly I’m not sure we’d want to be. In order to reach that equilibrium, we’d either need to have full-reserve banking, or banks would have to otherwise have their lending constrained by insufficient reserves; either way, we’d need to have a constant money supply. Any dollar that could be lent, would have to be lent, and the whole debt market would have to be entirely constrained by the availability of savings. You wouldn’t get denied for a loan because your credit rating is too low; you’d get denied for a loan because the bank would literally not have enough money available to lend you. Banking would have to be perfectly competitive, so if one bank can’t do it, no bank can. Interest rates would have to precisely match the supply and demand of money in the same way that prices are supposed to precisely match the supply and demand of products (and I think we all know how well that works out). This is why it’s such a big problem that most macroeconomic models literally do not include a financial sector. They simply assume that the financial sector is operating at such perfect efficiency that money in equals money out always and everywhere.

So, recognizing that saving and investment are in fact not equal, we now have two separate questions: What is the optimal rate of saving, and what is the optimal rate of investment? For saving, I think the question is almost meaningless; individuals should save according to their future income (since they’re so bad at predicting it, we might want to encourage people to save extra, as in programs like Save More Tomorrow), but the aggregate level of saving isn’t an important question. The important question is the aggregate level of investment, and for that, I think there are two ways of looking at it.

The first way is to go back to that original neoclassical growth model and realize it makes a lot more sense when the s term we called “saving” actually is a funny way of writing “investment”; in that case, perhaps we should indeed invest the same proportion of income as the proportion that goes to capital. An interesting, if draconian, way to do so would be to actually require this—all and only capital income may be used for business investment. Labor income must be used for other things, and capital income can’t be used for anything else. The days of yachts bought on stock options would be over forever—though so would the days of striking it rich by putting your paycheck into a tech stock. Due to the extreme restrictions on individual freedom, I don’t think we should actually do such a thing; but it’s an interesting thought that might lead to an actual policy worth considering.

But a second way that might actually be better—since even though the model makes more sense this way, it still has a number of serious flaws—is to think about what we might actually do in order to increase or decrease investment, and then consider the costs and benefits of each of those policies. The simplest case to analyze is if the government invests directly—and since the most important investments like infrastructure, education, and basic research are usually done this way, it’s definitely a useful example. How is the government going to fund this investment in, say, a nuclear fusion project? They have four basic ways: Cut spending somewhere else, raise taxes, print money, or issue debt. If you cut spending, the question is whether the spending you cut is more or less important than the investment you’re making. If you raise taxes, the question is whether the harm done by the tax (which is generally of two flavors; first there’s the direct effect of taking someone’s money so they can’t use it now, and second there’s the distortions created in the market that may make it less efficient) is outweighed by the new project. If you print money or issue debt, it’s a subtler question, since you are no longer pulling from any individual person or project but rather from the economy as a whole. Actually, if your economy has unused capacity as in a depression, you aren’t pulling from anywhere—you’re simply adding new value basically from thin air, which is why deficit spending in depressions is such a good idea. (More precisely, you’re putting resources to use that were otherwise going to lie fallow—to go back to my earlier example, the tennis shoes will no longer rest on the shelves.) But if you do not have sufficient unused capacity, you will get crowding-out; new debt will raise interest rates and make other investments more expensive, while printing money will cause inflation and make everything more expensive. So you need to weigh that cost against the benefit of your new investment and decide whether it’s worth it.

This second way is of course a lot more complicated, a lot messier, a lot more controversial. It would be a lot easier if we could just say: “The target investment rate should be 33% of GDP.” But even then the question would remain as to which investments to fund, and which consumption to pull from. The abstraction of simply dividing the economy into “consumption” versus “investment” leaves out matters of the utmost importance; Paul Allen’s 400-foot yacht and food stamps for children are both “consumption”, but taxing the former to pay for the latter seems not only justified but outright obligatory. The Bridge to Nowhere and the Human Genome Project are both “investment”, but I think we all know which one had a higher return for human society. The neoclassical model basically assumes that the optimal choices for consumption and investment are decided automatically (automagically?) by the inscrutable churnings of the free market, but clearly that simply isn’t true.

In fact, it’s not always clear what exactly constitutes “consumption” versus “investment”, and the particulars of answering that question may distract us from answering the questions that actually matter. Is a refrigerator investment because it’s a machine you buy that sticks around and does useful things for you? Or is it consumption because consumers buy it and you use it for food? Is a car an investment because it’s vital to getting a job? Or is it consumption because you enjoy driving it? Someone could probably argue that the appreciation on Paul Allen’s yacht makes it an investment, for instance. Feeding children really is an investment, in their so-called “human capital” that will make them more productive for the rest of their lives. Part of the money that went to the Human Genome Project surely paid some graduate student who then spent part of his paycheck on a keg of beer, which would make it consumption. And so on. The important question really isn’t “is this consumption or investment?” but “Is this worth doing?” And thus, the best answer to the question, “How much should we save?” may be: “Who cares?”

How to change the world

JDN 2457166 EDT 17:53.

I just got back from watching Tomorrowland, which is oddly appropriate since I had already planned this topic in advance. How do we, as they say in the film, “fix the world”?

I can’t find it at the moment, but I vaguely remember some radio segment on which a couple of neoclassical economists were interviewed and asked what sort of career can change the world, and they answered something like, “Go into finance, make a lot of money, and then donate it to charity.”

In a slightly more nuanced form this strategy is called earning to give, and frankly I think it’s pretty awful. Most of the damage that is done to the world is done in the name of maximizing profits, and basically what you end up doing is stealing people’s money and then claiming you are a great altruist for giving some of it back. I guess if you can make enormous amounts of money doing something that isn’t inherently bad and then donate that—like what Bill Gates did—it seems better. But realistically your potential income is probably not actually raised that much by working in finance, sales, or oil production; you could have made the same income as a college professor or a software engineer and not be actively stripping the world of its prosperity. If we actually had the sort of ideal policies that would internalize all externalities, this dilemma wouldn’t arise; but we’re nowhere near that, and if we did have that system, the only billionaires would be Nobel laureate scientists. Albert Einstein was a million times more productive than the average person. Steve Jobs was just a million times luckier. Even then, there is the very serious question of whether it makes sense to give all the fruits of genius to the geniuses themselves, who very quickly find they have all they need while others starve. It was certainly Jonas Salk’s view that his work should only profit him modestly and its benefits should be shared with as many people as possible. So really, in an ideal world there might be no billionaires at all.

Here I would like to present an alternative. If you are an intelligent, hard-working person with a lot of talent and the dream of changing the world, what should you be doing with your time? I’ve given this a great deal of thought in planning my own life, and here are the criteria I came up with:

  1. You must be willing and able to commit to doing it despite great obstacles. This is another reason why earning to give doesn’t actually make sense; your heart (or rather, limbic system) won’t be in it. You’ll be miserable, you’ll become discouraged and demoralized by obstacles, and others will surpass you. In principle Wall Street quantitative analysts who make $10 million a year could donate 90% to UNICEF, but they don’t, and you know why? Because the kind of person who is willing and able to exploit and backstab their way to that position is the kind of person who doesn’t give money to UNICEF.
  2. There must be important tasks to be achieved in that discipline. This one is relatively easy to satisfy; I’ll give you a list in a moment of things that could be contributed by a wide variety of fields. Still, it does place some limitations: For one, it rules out the simplest form of earning to give (a more nuanced form might cause you to choose quantum physics over social work because it pays better and is just as productive—but you’re not simply maximizing income to donate). For another, it rules out routine, ordinary jobs that the world needs but don’t make significant breakthroughs. The world needs truck drivers (until robot trucks take off), but there will never be a great world-changing truck driver, because even the world’s greatest truck driver can only carry so much stuff so fast. There are no world-famous secretaries or plumbers. People like to say that these sorts of jobs “change the world in their own way”, which is a nice sentiment, but ultimately it just doesn’t get things done. We didn’t lift ourselves into the Industrial Age by people being really fantastic blacksmiths; we did it by inventing machines that make blacksmiths obsolete. We didn’t rise to the Information Age by people being really good slide-rule calculators; we did it by inventing computers that work a million times as fast as any slide-rule. Maybe not everyone can have this kind of grand world-changing impact; and I certainly agree that you shouldn’t have to in order to live a good life in peace and happiness. But if that’s what you’re hoping to do with your life, there are certain professions that give you a chance of doing so—and certain professions that don’t.
  3. The important tasks must be currently underinvested. There are a lot of very big problems that many people are already working on. If you work on the problems that are trendy, the ones everyone is talking about, your marginal contribution may be very small. On the other hand, you can’t just pick problems at random; many problems are not invested in precisely because they aren’t that important. You need to find problems people aren’t working on but should be—problems that should be the focus of our attention but for one reason or another get ignored. A good example here is to work on pancreatic cancer instead of breast cancer; breast cancer research is drowning in money and really doesn’t need any more; pancreatic cancer kills 2/3 as many people but receives less than 1/6 as much funding. If you want to do cancer research, you should probably be doing pancreatic cancer.
  4. You must have something about you that gives you a comparative—and preferably, absolute—advantage in that field. This is the hardest one to achieve, and it is in fact the reason why most people can’t make world-changing breakthroughs. It is in fact so hard to achieve that it’s difficult to even say you have until you’ve already done something world-changing. You must have something special about you that lets you achieve what others have failed. You must be one of the best in the world. Even as you stand on the shoulders of giants, you must see further—for millions of others stand on those same shoulders and see nothing. If you believe that you have what it takes, you will be called arrogant and naïve; and in many cases you will be. But in a few cases—maybe 1 in 100, maybe even 1 in 1000, you’ll actually be right. Not everyone who believes they can change the world does so, but everyone who changes the world believed they could.

Now, what sort of careers might satisfy all these requirements?

Well, basically any kind of scientific research:

Mathematicians could work on network theory, or nonlinear dynamics (the first step: separating “nonlinear dynamics” into the dozen or so subfields it should actually comprise—as has been remarked, “nonlinear” is a bit like “non-elephant”), or data processing algorithms for our ever-growing morasses of unprocessed computer data.

Physicists could be working on fusion power, or ways to neutralize radioactive waste, or fundamental physics that could one day unlock technologies as exotic as teleportation and faster-than-light travel. They could work on quantum encryption and quantum computing. Or if those are still too applied for your taste, you could work in cosmology and seek to answer some of the deepest, most fundamental questions in human existence.

Chemists could be working on stronger or cheaper materials for infrastructure—the extreme example being space elevators—or technologies to clean up landfills and oceanic pollution. They could work on improved batteries for solar and wind power, or nanotechnology to revolutionize manufacturing.

Biologists could work on any number of diseases, from cancer and diabetes to malaria and antibiotic-resistant tuberculosis. They could work on stem-cell research and regenerative medicine, or genetic engineering and body enhancement, or on gerontology and age reversal. Biology is a field with so many important unsolved problems that if you have the stomach for it and the interest in some biological problem, you can’t really go wrong.

Electrical engineers can obviously work on improving the power and performance of computer systems, though I think over the last 20 years or so the marginal benefits of that kind of research have begun to wane. Efforts might be better spent in cybernetics, control systems, or network theory, where considerably more is left uncharted; or in artificial intelligence, where computing power is only the first step.

Mechanical engineers could work on making vehicles safer and cheaper, or building reusable spacecraft, or designing self-constructing or self-repairing infrastructure. They could work on 3D printing and just-in-time manufacturing, scaling it up for whole factories and down for home appliances.

Aerospace engineers could link the world with hypersonic travel, build satellites to provide Internet service to the farthest reaches of the globe, or create interplanetary rockets to colonize Mars and the moons of Jupiter and Saturn. They could mine asteroids and make previously rare metals ubiquitous. They could build aerial drones for delivery of goods and revolutionize logistics.

Agronomists could work on sustainable farming methods (hint: stop farming meat), or invent new strains of crops that are hardier against pests, more nutritious, or higher-yielding; on the other hand a lot of this is already being done, so maybe it’s time to think outside the box and consider what we might do to make our food system more robust against climate change or other catastrophes.

Ecologists will obviously be working on predicting and mitigating the effects of global climate change, but there are a wide variety of ways of doing so. You could focus on ocean acidification, or on desertification, or on fishery depletion, or on carbon emissions. You could work on getting the climate models so precise that they become completely undeniable to anyone but the most dogmatically opposed. You could focus on endangered species and habitat disruption. Ecology is in general so underfunded and undersupported that basically anything you could do in ecology would be beneficial.

Neuroscientists have plenty of things to do as well: Understanding vision, memory, motor control, facial recognition, emotion, decision-making and so on. But one topic in particular is lacking in researchers, and that is the fundamental Hard Problem of consciousness. This one is going to be an uphill battle, and will require a special level of tenacity and perseverance. The problem is so poorly understood it’s difficult to even state clearly, let alone solve. But if you could do it—if you could even make a significant step toward it—it could literally be the greatest achievement in the history of humanity. It is one of the fundamental questions of our existence, the very thing that separates us from inanimate matter, the very thing that makes questions possible in the first place. Understand consciousness and you understand the very thing that makes us human. That achievement is so enormous that it seems almost petty to point out that the revolutionary effects of artificial intelligence would also fall into your lap.

The arts and humanities also have a great deal to contribute, and are woefully underappreciated.

Artists, authors, and musicians all have the potential to make us rethink our place in the world, reconsider and reimagine what we believe and strive for. If physics and engineering can make us better at winning wars, art and literature can remind us why we should never fight them in the first place. The greatest works of art can remind us of our shared humanity, link us all together in a grander civilization that transcends the petty boundaries of culture, geography, or religion. Art can also be timeless in a way nothing else can; most of Aristotle’s science is long-since refuted, but even the Great Pyramid thousands of years before him continues to awe us. (Aristotle is about equidistant chronologically between us and the Great Pyramid.)

Philosophers may not seem like they have much to add—and to be fair, a great deal of what goes on today in metaethics and epistemology doesn’t add much to civilization—but in fact it was Enlightenment philosophy that brought us democracy, the scientific method, and market economics. Today there are still major unsolved problems in ethics—particularly bioethics—that are in need of philosophical research. Technologies like nanotechnology and genetic engineering offer us the promise of enormous benefits, but also the risk of enormous harms; we need philosophers to help us decide how to use these technologies to make our lives better instead of worse. We need to know where to draw the lines between life and death, between justice and cruelty. Literally nothing could be more important than knowing right from wrong.

Now that I have sung the praises of the natural sciences and the humanities, let me now explain why I am a social scientist, and why you probably should be as well.

Psychologists and cognitive scientists obviously have a great deal to give us in the study of mental illness, but they may actually have more to contribute in the study of mental health—in understanding not just what makes us depressed or schizophrenic, but what makes us happy or intelligent. The 21st century may not simply see the end of mental illness, but the rise of a new level of mental prosperity, where being happy, focused, and motivated are matters of course. The revolution that biology has brought to our lives may pale in comparison to the revolution that psychology will bring. On the more social side of things, psychology may allow us to understand nationalism, sectarianism, and the tribal instinct in general, and allow us to finally learn to undermine fanaticism, encourage critical thought, and make people more rational. The benefits of this are almost impossible to overstate: It is our own limited, broken, 90%-or-so heuristic rationality that has brought us from simians to Shakespeare, from gorillas to Gödel. To raise that figure to 95% or 99% or 99.9% could be as revolutionary as was whatever evolutionary change first brought us out of the savannah as Australopithecus africanus.

Sociologists and anthropologists will also have a great deal to contribute to this process, as they approach the tribal instinct from the top down. They may be able to tell us how nations are formed and undermined, why some cultures assimilate and others collide. They can work to understand and combat bigotry in all its forms: racism, sexism, ethnocentrism. These could be the fields that finally end war, by understanding and correcting the imbalances in human societies that give rise to violent conflict.

Political scientists and public policy researchers can allow us to understand and restructure governments, undermining corruption, reducing inequality, making voting systems more expressive and more transparent. They can search for the keystones of different political systems, finding the weaknesses in democracy to shore up and the weaknesses in autocracy to exploit. They can work toward a true international government, representative of all the world’s people and with the authority and capability to enforce global peace. If the sociologists don’t end war and genocide, perhaps the political scientists can—or more likely they can do it together.

And then, at last, we come to economists. While I certainly work with a lot of ideas from psychology, sociology, and political science, I primarily consider myself an economist. Why is that? Why do I think the most important problems for me—and perhaps everyone—to be working on are fundamentally economic?

Because, above all, economics is broken. The other social sciences are basically on the right track; their theories are still very limited, their models are not very precise, and there are decades of work left to be done, but the core principles upon which they operate are correct. Economics is the field to work in because of criterion 3: Almost all the important problems in economics are underinvested.

Macroeconomics is where we are doing relatively well, and yet the Keynesian models that allowed us to reduce the damage of the Second Depression nonetheless had no power to predict its arrival. While inflation has been at least somewhat tamed, the far worse problem of unemployment has not been resolved or even really understood.

When we get to microeconomics, the neoclassical models are totally defective. Their core assumptions of total rationality and total selfishness are embarrassingly wrong. We have no idea what controls asset prices, or decides credit constraints, or motivates investment decisions. Our models of how people respond to risk are all wrong. We have no formal account of altruism or its limitations. As manufacturing is increasingly automated and work shifts into services, most economic models make no distinction between the two sectors. While finance takes over more and more of our society’s wealth, most formal models of the economy don’t even include a financial sector.

Economic forecasting is no better than chance. The most widely-used asset-pricing model, CAPM, fails completely in empirical tests; its defenders concede this and then have the audacity to declare that it doesn’t matter because the mathematics works. The Black-Scholes derivative-pricing model that caused the Second Depression could easily have been predicted to do so, because it contains a term that assumes normal distributions when we know for a fact that financial markets are fat-tailed; simply put, it claims certain events will never happen that actually occur several times a year.
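To put a rough number on how badly the normality assumption fails in the tails, here is a back-of-the-envelope sketch. The 5-sigma threshold and the figure of 252 trading days per year are my own illustrative assumptions, not numbers from the text:

```python
import math

# Upper-tail probability of a standard normal: P(Z > z) = erfc(z / sqrt(2)) / 2
def normal_tail(z):
    return math.erfc(z / math.sqrt(2)) / 2

p = normal_tail(5.0)              # probability of a daily move beyond 5 sigma
days_per_year = 252               # typical count of trading days (assumption)
years_between = 1 / (p * days_per_year)

print(f"P(>5-sigma day) = {p:.2e}")  # roughly 3e-07
print(f"Expected wait under normality: one such day every {years_between:,.0f} years")
```

Under the normal distribution, a 5-sigma daily move should occur about once every ten thousand years or more; actual markets have produced daily moves that large many times in recent financial history, which is the sense in which the model claims certain events will never happen that in fact occur.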

Worst of all, economics is the field that people listen to. When a psychologist or sociologist says something on television, people say that it sounds interesting and basically ignore it. When an economist says something on television, national policies are shifted accordingly. Austerity exists as national policy in part due to a spreadsheet error by two famous economists.

Keynes already knew this in 1936: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Meanwhile, the problems that economics deals with have a direct influence on the lives of millions of people. Bad economics gives us recessions and depressions; it cripples our industries and siphons off wealth to an increasingly corrupt elite. Bad economics literally starves people: It is because of bad economics that there is still such a thing as world hunger. We have enough food, we have the technology to distribute it—but we don’t have the economic policy to lift people out of poverty so that they can afford to buy it. Bad economics is why we don’t have the funding to cure diabetes or colonize Mars (but we have the funding for oil fracking and aircraft carriers, don’t we?). All of that other scientific research that needs to be done probably could be done, if the resources of our society were properly distributed and utilized.

This combination of overwhelming influence, overwhelming importance, and overwhelming error makes economics the low-hanging fruit; you don’t even have to be particularly brilliant to have better ideas than most economists (though no doubt it helps if you are). Economics is where we have a whole bunch of important questions that are unanswered—or the answers we have are wrong. (As Will Rogers said, “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)

Thus, rather than telling you to go into finance and earn to give, those economists could simply have said: “You should become an economist. You could hardly do worse than we have.”

Why the Republican candidates like flat income tax—and we really, really don’t

JDN 2456160 EDT 13:55.

The Republican Party is scrambling to find viable Presidential candidates for next year’s election. The Democrats only have two major contenders: Hillary Clinton looks like the front-runner (and will obviously have the most funding), but Bernie Sanders is doing surprisingly well, and is particularly refreshing because he is running purely on his principles and ideas. He has no significant connections, no family dynasty (unlike Jeb Bush and, again, Hillary Clinton) and not a huge amount of wealth (Bernie’s net wealth is about $500,000, making him comfortably upper-middle class; compare to Hillary’s $21.5 million and her husband’s $80 million); but he has ideas that resonate with people. Bernie Sanders is what politics is supposed to be. Clinton’s campaign will certainly raise more than his; but he has already raised over $4 million, and if he makes it to about $10 million, studies suggest that additional spending above that point has largely negligible effects. He actually has a decent chance of winning, and if he did it would be a very good sign for the future of America.

But the Republican field is a good deal more contentious, and the 19 candidates currently running have been scrambling to prove that they are the most right-wing in order to impress far-right primary voters. (When the general election comes around, whoever wins will of course pivot back toward the center, changing from, say, outright fascism to something more like reactionism or neo-feudalism. If you were hoping they’d pivot so far back as to actually be sensible center-right capitalists, think again; Hillary Clinton is the only one who will take that role, and they’ll go out of their way to disagree with her in every way they possibly can, much as they’ve done with Obama.) One of the ways that Republicans are hoping to prove their right-wing credentials is by proposing a flat income tax and eliminating the IRS.

Unlike most of their proposals, I can see why many people think this actually sounds like a good idea. It would certainly dramatically reduce bureaucracy, and that’s obviously worthwhile since excess bureaucracy is pure deadweight loss. (A surprising number of economists seem to forget that government does other things besides create excess bureaucracy, but I must admit it does in fact create excess bureaucracy.)

Though if they actually made the flat tax rate 20% or even—I can’t believe this is seriously being proposed—10%, there is no way the federal government would have enough revenue. The only options would be (1) massive increases in national debt, (2) total collapse of government services (including their beloved military, mind you), or (3) directly linking the Federal Reserve quantitative easing program to fiscal policy and funding the deficit with printed money. Of these, option 3 might not actually be that bad (it would probably trigger some inflation, but actually we could use that right now), but it’s extremely unlikely to happen, particularly under Republicans. In reality, after getting a taste of option 2, we’d clearly end up with option 1. And then they’d complain about the debt and clamor for more spending cuts, more spending cuts, ever more spending cuts, but there would simply be no way to run a functioning government on 10% of GDP in anything like our current system. Maybe you could do it on 20%—maybe—but we currently spend more like 35%, and that’s already a very low amount of spending for a First World country. The UK is more typical at 47%, while Germany is a bit low at 44%; Sweden spends 52% and France spends a whopping 57%. Anyone who suggests we cut government spending from 35% to 20% needs to explain which 3/7 of government services are going to immediately disappear—not to mention which 3/7 of government employees are going to be immediately laid off.
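The 3/7 figure is just arithmetic on the two spending shares quoted above; a quick check:

```python
from fractions import Fraction

# Spending as a share of GDP: current vs. the proposal's implied level
current, proposed = Fraction(35, 100), Fraction(20, 100)

# The fraction of existing government services that would have to vanish
cut_share = (current - proposed) / current
print(cut_share)  # 3/7
```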

And then they want to add investment deductions; in general investment deductions are a good thing, as long as you tie them to actual investments in genuinely useful things like factories and computer servers. (Or better yet, schools, research labs, or maglev lines, but private companies almost never invest in that sort of thing, so the deduction wouldn’t apply.) The kernel of truth in the otherwise ridiculous argument that we should never tax capital is that taxing real investment would definitely be harmful in the long run. As I discussed with Miles Kimball (a cognitive economist at Michigan and fellow econ-blogger I hope to work with at some point), we could minimize the distortionary effects of corporate taxes by establishing a strong deduction for real investment, and this would allow us to redistribute some of this enormous wealth inequality without dramatically harming economic growth.

But if you deduct things that aren’t actually investments—like stock speculation and derivatives arbitrage—then you reduce your revenue dramatically and don’t actually incentivize genuinely useful investments. This is the problem with our current system, in which GE can pay no corporate income tax on $108 billion in annual profit—and you know they weren’t using all that for genuinely productive investment activities. But then, if you create a strong enforcement system for ensuring it is real investment, you need bureaucracy—which is exactly what the flat tax was claimed to remove. At the very least, the idea of eliminating the IRS remains ridiculous if you have any significant deductions.

Thus, the benefits of a flat income tax are minimal if not outright illusory; and the costs, oh, the costs are horrible. In order to have remotely reasonable amounts of revenue, you’d need to dramatically raise taxes on the majority of people, while significantly lowering them on the rich. You would create a direct transfer of wealth from the poor to the rich, increasing our already enormous income inequality and driving millions of people into poverty.

Thus, it would be difficult to more clearly demonstrate that you care only about the interests of the top 1% than to propose a flat income tax. I guess Mitt Romney’s 47% rant actually takes the cake on that one though (Yes, all those freeloading… soldiers… and children… and old people?).

Many Republicans are insisting that a flat tax would create a surge of economic growth, but that’s simply not how macroeconomics works. If you steeply raise taxes on the majority of people while cutting them on the rich, you’ll see consumer spending plummet and the entire economy will be driven into recession. Rich people simply don’t spend their money in the same way as the rest of us, and the functioning of the economy depends upon a continuous flow of spending. There is a standard neoclassical economic argument about how reducing spending and increasing saving would lead to increased investment and greater prosperity—but that model basically assumes that we have a fixed amount of stuff we’re either using up or making more stuff with, which is simply not how money works; as James Kroeger cogently explains on his blog “Nontrivial Pursuits”, money is created as it is needed; investment isn’t determined by people saving what they don’t spend. Indeed, increased consumption generally leads to increased investment, because our economy is currently limited by demand, not supply. We could build a lot more stuff, if only people could afford to buy it.

And that’s not even considering the labor incentives; as I already talked about in my previous post on progressive taxation, there are two incentives involved when you increase someone’s hourly wage. On the one hand, they get paid more for each hour, which is a reason to work; that’s the substitution effect. But on the other hand, they have more money in general, which is a reason they don’t need to work; that’s the income effect. Broadly speaking, the substitution effect dominates at low incomes (about $20,000 or less), the income effect dominates at high incomes (about $100,000 or more), and the two effects cancel out at moderate incomes. Since a tax on your income hits you in much the same way as a reduction in your wage, this means that raising taxes on the poor makes them work less, while raising taxes on the rich makes them work more. But if you go from our currently slightly-progressive system to a flat system, you raise taxes on the poor and cut them on the rich, which would mean that the poor would work less, and the rich would also work less! This would reduce economic output even further. If you want to maximize the incentive to work, you want progressive taxes, not flat taxes.

Flat taxes sound appealing because they are so simple; even the basic formula for our current tax rates is complicated, and we combine it with hundreds of pages of deductions and credits—not to mention tens of thousands of pages of case law!—making it a huge morass of bureaucracy that barely anyone really understands and corporate lawyers can easily exploit. I’m all in favor of getting rid of that; but you don’t need a flat tax to do that. You can fit the formula for a progressive tax on a single page—indeed, on a single line: r = 1 − I^(−p), where r is the overall tax rate on an income I and p is a single parameter setting how progressive the tax is.

That’s it. It’s simple enough to be plugged into any calculator that is capable of exponents, not to mention efficiently implemented in Microsoft Excel (more efficiently than our current system in fact).
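As a sketch of just how simple that is, here is the formula in code (the exponent p is left unspecified in the post, so the value below is purely illustrative, as is the assumption that I is income measured in dollars):

```python
def tax_rate(income, p=0.05):
    """Overall tax rate r = 1 - I^(-p); it rises smoothly with income."""
    return 1 - income ** (-p)

def tax_owed(income, p=0.05):
    return income * tax_rate(income, p)

# The rate climbs with income, which is what makes the tax progressive:
for income in (20_000, 50_000, 200_000, 1_000_000):
    print(f"${income:>9,}: rate {tax_rate(income):.1%}, tax ${tax_owed(income):,.0f}")
```

Calibrating p (and the units in which income is measured) would of course be a policy decision; the point is only that the whole rate schedule fits in one line.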

Combined with that simple formula, you could list all of the sensible deductions on a couple of additional pages (business investments and educational expenses, mostly—poverty should be addressed by a basic income, not by tax deductions on things like heating and housing, which are actually indirect corporate subsidies), along with a land tax (one line: $3000 per hectare), a basic income (one more line: $8,000 per adult and $4,000 per child), and some additional excise taxes on goods with negative externalities (like alcohol, tobacco, oil, coal, and lead), with a line for each; then you can provide a supplementary manual of maybe 50 pages explaining the detailed rules for applying each of those deductions in unusual cases. The entire tax code should be readable by an ordinary person in a single sitting no longer than a few hours. That means no more than 100 pages and no more than a 7th-grade reading level.
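Putting those pieces together, the entire primary form could be computed by something like the following (the dollar figures are the ones quoted above; the structure of the function and the parameter p are my own illustrative assumptions):

```python
def net_tax(income, deductions=0.0, hectares=0.0, adults=1, children=0, p=0.05):
    """Sketch of the one-page tax computation: deduct, apply the formula,
    add the land tax, subtract the basic income. Excise taxes are collected
    at the point of sale, so they don't appear on the form."""
    taxable = max(income - deductions, 1.0)
    income_tax = taxable * (1 - taxable ** (-p))
    land_tax = 3_000 * hectares                      # $3000 per hectare
    basic_income = 8_000 * adults + 4_000 * children
    return income_tax + land_tax - basic_income      # negative = net payment to you

# Two adults, one child, $60,000 income, no land:
print(net_tax(60_000, adults=2, children=1))
```

With no income at all, the form simply pays out the basic income; that is the sense in which poverty relief moves out of the deduction system entirely.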

Why do I say this? Isn’t that a ridiculous standard? No, it is a Constitutional imperative. It is a fundamental violation of your liberty to tax you according to rules you cannot reasonably understand—indeed, bordering on Kafkaesque. While this isn’t taxation without representation—we do vote for representatives, after all—it is something very much like it; what good is the ability to change rules if you don’t even understand the rules in the first place? Nor would it be all that difficult: You first deduct these things from your income, then plug the result into this formula.

So yes, I absolutely agree with the basic principle of tax reform. The tax code should be scrapped and recreated from scratch, and the final product should be a primary form of only a few pages combined with a supplementary manual of no more than 100 pages. But you don’t need a flat tax to do that, and indeed for many other reasons a flat tax is a terrible idea, particularly if the suggested rate is 10% or 15%, less than half what we actually spend. The real question is why so many Republican candidates think that this will appeal to their voter base—and why they could actually be right about that.

Part of it is the entirely justified outrage at the complexity of our current tax system, and the appealing simplicity of a flat tax. Part of it is the long history of American hatred of taxes; we were founded upon resisting taxes, and we’ve been resisting taxes ever since. In some ways this is healthy; taxes per se are not a good thing, but a necessary evil.

But those two things alone cannot explain why anyone would advocate raising taxes on the poorest half of the population while dramatically cutting them on the top 1%. If you are opposed to taxes in general, you’d cut them on everyone; and if you recognize the necessity of taxation, you’d be trying to find ways to minimize the harm while ensuring sufficient tax revenue, which in general means progressive taxation.

To understand why they would be pushing so hard for flat taxes, I think we need to say that many Republicans, particularly those in positions of power, honestly do think that rich people are better than poor people and we should always give more to the rich and less to the poor. (Maybe it’s partly halo effect, in which good begets good and bad begets bad? Or maybe just-world theory, the ingrained belief that the world is as it ought to be?)

Romney’s 47% rant wasn’t an exception; it was what he honestly believes, what he says when he doesn’t know he’s on camera. He thinks that he earned every penny of his $250 million net wealth; yes, even the part he got from marrying his wife and the part he got from abusing tax laws, arbitraging assets and liquidating companies. He thinks that people who live on $4,000 or even $400 a year are simply lazy freeloaders, who could easily work harder, perhaps do some arbitrage and liquidation of their own (check out these alleged “rags to riches” stories including the line “tried his hand at mortgage brokering”), but choose not to, and as a result deserve what they get. (It’s important to realize just how bizarre this moral attitude truly is; even if I thought you were the laziest person on Earth, I wouldn’t let you starve to death.) He thinks that the social welfare programs which have reduced poverty but never managed to eliminate it are too generous—if he even thinks they should exist at all. And in thinking these things, he is not some bizarre aberration; he is representing an entire class of people, nearly all of whom vote Republican.

The good news is, these people are still in the minority. They hold significant sway over the Republican primary, but will not have nearly as much impact in the general election. And right now, the Republican candidates are so numerous and so awful that I have trouble seeing how the Democrats could possibly lose. (But please, don’t take that as a challenge, you guys.)

Happy Capybara Day! Or the power of culture

JDN 2457131 EDT 14:33.

Did you celebrate Capybara Day yesterday? You didn’t? Why not? We weren’t able to find any actual capybaras this year, but maybe next year we’ll be able to plan better and find a capybara at a zoo; unfortunately the nearest zoo with a capybara appears to be in Maryland. But where would we be without a capybara to consult annually on the stock market?

Right now you are probably rather confused, perhaps wondering if I’ve gone completely insane. This is because Capybara Day is a holiday of my own invention, one which only a handful of people have even heard about.

But if you think we’d never have a holiday so bizarre, think again: For all I did was make some slight modifications to Groundhog Day. Instead of consulting a groundhog about the weather every February 2, I proposed that we consult a capybara about the stock market every April 17. And if you think you have some reason why groundhogs are better at predicting the weather (perhaps because they at least have some vague notion of what weather is) than capybaras are at predicting the stock market (since they have no concept of money or numbers), think about this: Capybara Day could produce extremely accurate predictions, provided only that people actually believed it. The prophecy of rising or falling stock prices could very easily become self-fulfilling. If it were a cultural habit of ours to consult capybaras about the stock market, capybaras would become good predictors of the stock market.

That might seem a bit far-fetched, but think about this: Why is there a January Effect? (To be fair, some researchers argue that there isn’t, and the apparent correlation between higher stock prices and the month of January is simply an illusion, perhaps the result of data overfitting.)

But I think it probably is real, and moreover has some very obvious reasons behind it. In this I’m in agreement with Richard Thaler, a founder of behavioral economics who wrote about such anomalies in the 1980s. December is a time when two very culturally-important events occur: The end of the year, during which many contracts end, profits are assessed, and tax liabilities are determined; and Christmas, the greatest surge of consumer spending and consumer debt.

The first effect means that corporations are very likely to liquidate assets—particularly assets that are running at a loss—in order to minimize their tax liabilities for the year, which will drive down prices. The second effect means that consumers are in search of financing for extravagant gift purchases, and those who don’t run up credit cards may instead sell off stocks. This is if anything a more rational way of dealing with the credit constraint, since interest rates on credit cards are typically far in excess of stock returns. But this surge of selling due to credit constraints further depresses prices.

In January, things return to normal; assets are repurchased, debt is repaid. This brings prices back up to where they were, which results in a higher than normal return for January.

Neoclassical economists are loath to admit that such a seasonal effect could exist, because it violates their concept of how markets work—and to be fair, the January Effect is actually weak enough to be somewhat ambiguous. But actually it doesn’t take much deviation from neoclassical models to explain the effect: Tax policies and credit constraints are basically enough to do it, so you don’t even need to go that far into understanding human behavior. It’s perfectly rational to behave this way given the distortions that are created by taxes and credit limits, and the arbitrage opportunity is one that you can only take advantage of if you have large amounts of credit and aren’t worried about minimizing your tax liabilities. It’s important to remember just how strong the assumptions of models like CAPM truly are; in addition to the usual infinite identical psychopaths, CAPM assumes there are no taxes, no transaction costs, and unlimited access to credit. I’d say it’s amazing that it works at all, but actually, it doesn’t—check out this graph of risk versus return and tell me if you think CAPM is actually giving us any information at all about how stock markets behave. It frankly looks like you could have drawn a random line through a scatter plot and gotten just as good a fit. Knowing how strong its assumptions are, we would not expect CAPM to work—and sure enough, it doesn’t.
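For reference, the model being criticized here makes a very specific prediction, the security market line: an asset's expected return should be linear in its market beta. A minimal sketch, with hypothetical numbers for the risk-free rate and market return:

```python
def capm_expected_return(beta, risk_free=0.02, market_return=0.08):
    """Security market line: E[R_i] = r_f + beta_i * (E[R_m] - r_f)."""
    return risk_free + beta * (market_return - risk_free)

# CAPM says the risk premium scales exactly with beta; the scatter plot
# referenced above shows real markets don't line up this way.
print(capm_expected_return(0.5), capm_expected_return(1.0), capm_expected_return(2.0))
```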

Of course, that leaves the question of why our tax policy would be structured in this way—why make the year end on December 31 instead of some other date? And for that, you need to go back through hundreds of years of history, the Gregorian calendar, which in turn was influenced by Christianity, and before that the Julian calendar—in other words, culture.

Culture is one of the most powerful forces that influences human behavior—and also one of the strangest and least-understood. Economic theory is basically silent on the matter of culture. Typically it is ignored entirely, assumed to be irrelevant against the economic incentives that are the true drivers of human action. (There’s a peculiar emotion many neoclassical economists express that I can best describe as self-righteous cynicism, the attitude that we alone—i.e., economists—understand that human beings are not the noble and altruistic creatures many imagine us to be, nor beings of art and culture, but simply cold, calculating machines whose true motives are reducible to profit incentives—and all who think otherwise are being foolish and naïve; true enlightenment is understanding that human beings are infinite identical psychopaths. This is the attitude epitomized by the economist who once sent me an email with “altruism” written in scare quotes.)

Occasionally culture will be invoked as an external (in jargon, exogenous) force, to explain some aspect of human behavior that is otherwise so totally irrational that even invoking nonsensical preferences won’t make it go away. When a suicide bomber blows himself up in a crowd of people, it’s really pretty hard to explain that in terms of rational profit incentives—though I have seen it tried. (It could be self-interest at a larger scale, like families or nations—but then, isn’t that just the tribal paradigm I’ve been arguing for all along?)

But culture doesn’t just motivate us to do extreme or wildly irrational things. It motivates us all the time, often in quite beneficial ways; we wait in line, hold doors for people walking behind us, tip waiters who serve us, and vote in elections, not because anyone pressures us directly to do so (unlike, say, Australia, we do not have compulsory voting) but because it’s what we feel we ought to do. There is a sense of altruism—and altruism provides the ultimate justification for why it is right to do these things—but the primary motivator in most cases is culture—that’s what people do, and are expected to do, around here.

Indeed, even when there is a direct incentive against behaving a certain way—like criminal penalties against theft—the probability of actually suffering a direct penalty is generally so low that it really can’t be our primary motivation. Instead, the reason we don’t cheat and steal is that we think we shouldn’t, and a major part of why we think we shouldn’t is that we have cultural norms against it.

We can actually observe differences in cultural norms across countries in the laboratory. In this 2008 study by Massimo Castro (PDF) comparing British and Italian people playing an economic game called the public goods game, in which you can pay a cost yourself to benefit the group as a whole, it was found not only that people were less willing to benefit groups of foreigners than groups of compatriots, but also that British people were overall more generous than Italian people. This 2010 study by Gachter et al. (which Joshua Greene actually talked about last week) compared how people play the game in various cities, and found three basic patterns: In Western European and American cities such as Zurich, Copenhagen and Boston, cooperation started out high and remained high throughout; people were just cooperative in general. In Asian cities such as Chengdu and Seoul, cooperation started out low, but if people were punished for not cooperating, cooperation would improve over time, eventually reaching about the same place as in the highly cooperative cities. And in Mediterranean cities such as Istanbul, Athens, and Riyadh, cooperation started low and stayed low—even when people could be punished for not cooperating, nobody actually punished them. (These patterns are broadly consistent with the World Bank corruption ratings of these regions, by the way; Western Europe shows very low corruption, while Asia and the Mediterranean show high corruption. Of course this isn’t all that’s going on—and Asia isn’t much less corrupt than the Middle East, though this experiment might make you think so.)
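The public goods game used in these studies has a simple payoff structure, sketched below (the endowment, multiplier, and group size are illustrative; the essential feature is a multiplier greater than 1 but less than the group size):

```python
def payoffs(contributions, endowment=20.0, multiplier=1.6):
    """Each player keeps what they don't contribute, plus an equal share
    of the contribution pool after it is multiplied."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# Full cooperation beats full defection for everyone...
coop = payoffs([20, 20, 20, 20])    # each player gets 32
defect = payoffs([0, 0, 0, 0])      # each player gets 20
# ...but a lone defector among cooperators does best of all:
mixed = payoffs([0, 20, 20, 20])    # defector gets 44, cooperators get 24
print(coop, defect, mixed)
```

Because multiplier/n is less than 1, each point contributed costs the contributor more than it returns to them personally, yet raises the group total; it is the Prisoner’s Dilemma generalized to many players.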

Interestingly, these cultural patterns showed Melbourne as behaving more like an Asian city than a Western European one—perhaps the Pacific has rubbed off on Australia more than they realize.

This is very preliminary, cutting-edge research I’m talking about, so be careful about drawing too many conclusions. But in general we’ve begun to find some fairly clear cultural differences in economic behavior across different societies. While this would not be at all surprising to a sociologist or anthropologist, it’s the sort of thing that economists have insisted for years is impossible.

This is the frontier of cognitive economics, in my opinion. We know that culture is a very powerful motivator of our behavior, and it is time for us to understand how it works—and then, how it can be changed. We know that culture can be changed—cultural norms do change over time, sometimes remarkably rapidly; but we have only a faint notion of how or why they change. Changing culture has the power to do things that simply changing policy cannot, however; policy requires enforcement, and when the enforcement is removed the behavior will often disappear. But if a cultural norm can be imparted, it could sustain itself for a thousand years without any government action at all.

What do we mean by “risk”?

JDN 2457118 EDT 20:50.

In an earlier post I talked about how, empirically, expected utility theory can’t explain the fact that we buy both insurance and lottery tickets, and how, normatively, it really doesn’t make a lot of sense to buy lottery tickets precisely because of what expected utility theory says about them.

But today I’d like to talk about one of the major problems with expected utility theory, which I consider one of the major unexplored frontiers of economics: Expected utility theory treats all kinds of risk exactly the same.

In reality there are three kinds of risk: The first is what I’ll call classical risk, which is like the game of roulette; the odds are well-defined and known in advance, and you can play the game a large number of times and average out the results. This is where expected utility theory really shines; if you are dealing with classical risk, expected utility is obviously the way to go and Von Neumann and Morgenstern quite literally proved mathematically that anything else is irrational.
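Roulette really is this tidy: the odds are exact, so the expected value of any bet is a one-line computation. For instance, a $1 straight-up bet on a European wheel (37 pockets, paying 35 to 1):

```python
# A $1 bet on a single pocket out of 37, paying 35-to-1 on a win:
p_win = 1 / 37
expected_value = p_win * 35 + (1 - p_win) * (-1)
print(expected_value)   # -1/37: you lose about 2.7 cents per dollar on average
```

Because the game can be repeated indefinitely under identical known odds, averaging over plays is exactly the right thing to do, and the house edge is a hard fact rather than an estimate.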

The second is uncertainty, a distinction which was most famously expounded by Frank Knight, an economist at the University of Chicago. (Chicago is a funny place; on the one hand they are a haven for the madness that is Austrian economics; on the other hand they have led the charge in behavioral and cognitive economics. Knight was a perfect fit, because he was a little of both.) Uncertainty is risk under ill-defined or unknown probabilities, where there is no way to play the game twice. Most real-world “risk” is actually uncertainty: Will the People’s Republic of China collapse in the 21st century? How many deaths will global warming cause? Will human beings ever colonize Mars? Is P = NP? None of those questions have known answers, but neither can we clearly assign probabilities to them. Either P = NP or it isn’t, as a matter of mathematical fact (or, like the continuum hypothesis, it’s independent of ZFC, the most bizarre possibility of all), and it’s not as if someone is rolling dice to decide how many people global warming will kill. You can think of this in terms of “possible worlds”, though actually most modal theorists would tell you that we can’t even say that P=NP is possible (nor can we say it isn’t possible!) because, as a necessary statement, it can only be possible if it is actually true; this follows from the S5 axiom of modal logic, and you know what, even I am already bored with that sentence. Clearly there is some sense in which P=NP is possible, and if that’s not what modal logic says then so much the worse for modal logic. I am not a modal realist (not to be confused with a moral realist, which I am); I don’t think that possible worlds are real things out there somewhere. I think possibility is ultimately a statement about ignorance, and since we don’t know that P=NP is false then I contend that it is possible that it is true. 
Put another way, it would not be obviously irrational to place a bet that P=NP will be proved true by 2100; but if we can’t even say that it is possible, how can that be?

Anyway, that’s the mess that uncertainty puts us in, and almost everything is made of uncertainty. Expected utility theory basically falls apart under uncertainty; it doesn’t even know how to give an answer, let alone one that is correct. In reality what we usually end up doing is waving our hands and trying to assign a probability anyway—because we simply don’t know what else to do.

The third one is not one that’s usually talked about, yet I think it’s quite important; I will call it one-shot risk. The probabilities are known or at least reasonably well approximated, but you only get to play the game once. You can also generalize to few-shot risk, where you can play a small number of times, where “small” is defined relative to the probabilities involved; this is a little vaguer, but basically what I have in mind is that even though you can play more than once, you can’t play enough times to realistically expect the rarest outcomes to occur. Expected utility theory almost works on one-shot and few-shot risk, but you have to be very careful about taking it literally.

I think an example makes things clearer: Playing the lottery is a few-shot risk. You can play the lottery multiple times, yes; potentially hundreds of times in fact. But hundreds of times is nothing compared to the 1 in 400 million chance you have of actually winning. You know that probability; it can be computed exactly from the rules of the game. But nonetheless expected utility theory runs into some serious problems here.

If we were playing a classical risk game, expected utility would obviously be right. Suppose, for example, that you know you will live one billion years, and you are offered the chance to play a game (somehow compensating for the mind-boggling levels of inflation, economic growth, transhuman transcendence, and/or total extinction that will occur during that vast expanse of time) in which each year you can either have a guaranteed $40,000 of inflation-adjusted income, or a 99.999,999,75% chance of $39,999 of inflation-adjusted income and a 0.000,000,25% chance of $100 million in inflation-adjusted income—which will disappear at the end of the year, along with everything you bought with it, so that each year you start afresh. Should you take the second option? Absolutely not, and expected utility theory explains why; that one or two years where you’ll experience 8 QALY per year isn’t worth dropping from 4.602060 QALY per year to 4.602049 QALY per year for the other nine hundred and ninety-eight million years. (Can you even fathom how long that is? From here, one billion years is all the way back to the Mesoproterozoic Era, which we think is when single-celled organisms first began to reproduce sexually. The gain is to be Mitt Romney for a year or two; the loss is the value of a dollar each year, over and over again, for the entire time that has elapsed since the existence of gamete meiosis.) I think it goes without saying that this whole situation is almost unimaginably bizarre. Yet that is implicitly what we’re assuming when we use expected utility theory to assess whether you should buy lottery tickets.
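The QALY figures in that example appear to come from treating utility as the base-10 logarithm of income (that mapping is my inference, not stated in the text); a quick check under that assumption:

```python
import math

p_win = 0.000_000_25 / 100     # the 0.000,000,25% chance, as a probability
u_safe = math.log10(40_000)
u_gamble = (1 - p_win) * math.log10(39_999) + p_win * math.log10(100_000_000)
print(u_safe, u_gamble)        # about 4.602060 vs 4.602049
```

The jackpot term is so improbable that it barely moves the average; the certain loss of one dollar every year dominates.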

The real situation is more like this: There’s one world you can end up in, and almost certainly will, in which you buy lottery tickets every year and end up with an income of $39,999 instead of $40,000. There is another world, so unlikely as to be barely worth considering, yet not totally impossible, in which you get $100 million and you are completely set for life and able to live however you want for the rest of your life. Averaging over those two worlds is a really weird thing to do; what do we even mean by doing that? You don’t experience one world 0.000,000,25% as much as the other (whereas in the billion-year scenario, that is exactly what you do); you only experience one world or the other.

In fact, it’s worse than this, because if a classical risk game is such that you can play it as many times as you want as quickly as you want, we don’t even need expected utility theory—expected money theory will do. If you can play a game where you have a 50% chance of winning $200,000 and a 50% chance of losing $50,000, which you can play up to once an hour for the next 48 hours, and you will be extended any credit necessary to cover any losses, you’d be insane not to play; your 99.9% confidence interval for wealth at the end of the two days runs from roughly $850,000 to $6,350,000. While you may lose money for a while, it is vanishingly unlikely that you will end up losing more than you gain.
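Figures like that confidence interval can be checked exactly, since the number of wins follows a Binomial(48, 0.5) distribution (the endpoints below keep the central 99.9% of the probability mass; other tail conventions shift them slightly):

```python
from math import comb

N, WIN, LOSS = 48, 200_000, -50_000

def payoff(wins):
    """Total winnings after N plays with the given number of wins."""
    return wins * WIN + (N - wins) * LOSS

# Wins are Binomial(N, 0.5); walk the CDF to find the central 99.9% range.
cdf, lo, hi = 0.0, None, None
for k in range(N + 1):
    cdf += comb(N, k) / 2 ** N
    if lo is None and cdf >= 0.0005:
        lo = k              # 0.05th percentile of the win count
    if hi is None and cdf >= 0.9995:
        hi = k              # 99.95th percentile of the win count

print(payoff(lo), payoff(hi))   # 850000 6350000
```

The mean outcome, 24 wins, is a gain of $3.6 million; even the unlucky 0.05th percentile still walks away with $850,000.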

Yet if you are offered the chance to play this game only once, you probably should not take it, and the reason comes back to expected utility. If you have good access to credit you might consider it, because going $50,000 into debt is bad but not unbearably so (I did, going to college) and gaining $200,000 might actually be enough better to justify the risk. Then the effect can be averaged over your lifetime; let’s say you make $50,000 per year over 40 years. Losing $50,000 means making your average income $48,750, while gaining $200,000 means making your average income $55,000; so your QALY per year go from a guaranteed 4.70 to a 50% chance of 4.69 and a 50% chance of 4.74; that raises your expected utility from 4.70 to 4.715.
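Assuming again that utility is the base-10 logarithm of income (my inference from the QALY figures), the arithmetic in this paragraph checks out to rounding:

```python
import math

base = math.log10(50_000)                  # guaranteed path: about 4.699
lose = math.log10(50_000 - 50_000 / 40)    # average income $48,750: about 4.688
win = math.log10(50_000 + 200_000 / 40)    # average income $55,000: about 4.740
print(0.5 * lose + 0.5 * win)              # about 4.714, above the baseline
```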

But if you don’t have good access to credit and your income for this year is $50,000, then losing $50,000 means losing everything you have and living in poverty or even starving to death. The benefits of raising your income by $200,000 this year aren’t nearly great enough to take that chance. Your expected utility goes from 4.70 to a 50% chance of 5.30 and a 50% chance of zero.

So expected utility theory only seems to properly apply if we can play the game enough times that the improbable events are likely to happen a few times, but not so many times that we can be sure our money will approach the average. And that’s assuming we know the odds and we aren’t just stuck with uncertainty.

Unfortunately, I don’t have a good alternative; so far expected utility theory may actually be the best we have. But it remains deeply unsatisfying, and I like to think we’ll one day come up with something better.