For labor day, thoughts on socialism

Planned Post 255: Sep 9 JDN 2458371

This week includes Labor Day, the holiday where we are perhaps best justified in taking the whole day off from work and doing nothing. Labor Day is sort of the moderate social democratic counterpart to the explicitly socialist holiday May Day.

The right wing in this country has done everything in their power to expand the definition of “socialism”, which is probably why most young people now have positive views of socialism. There was a time when FDR was seen as an alternative to socialism; but now I’m pretty sure he’d just be called a socialist.

Because of this, I am honestly not sure whether I should be considered a socialist. I definitely believe in the social democratic welfare state epitomized by Scandinavia, but I definitely don’t believe in total collectivization of all means of production.

I am increasingly convinced that shareholder capitalism is a terrible system (the renowned science fiction author Charles Stross actually gave an excellent talk on this subject), but I would not want to abandon free markets.
The best answer might be worker-owned cooperatives. The empirical data is actually quite consistent in showing worker co-ops to be at least as efficient as conventional corporations, if not more efficient, and by construction their pay systems produce less inequality than corporations do.

Indeed, I think there is reason to believe that a worker co-op is a much more natural outcome for free markets under a level playing field than a conventional corporation, and the main reason we have corporations is actually that capitalism arose out of (and in response to) feudalism.

Think about it: Why should most things be owned by the top 1%? (Okay, not quite “most”: to be fair, the top 1% only owns 40% of all US net wealth.) Why is 80% of the value of the stock market held by the top 10% of the population?

Most things aren’t done by the top 1%. There are a handful of individuals (namely, scientists who make seminal breakthroughs: Charles Darwin, Marie Curie, Albert Einstein, Rosalind Franklin, Alan Turing, Jonas Salk) who are so super-productive that they might conceivably deserve billionaire-level compensation—but they are almost never the ones who are actually billionaires. If markets were really distributing capital to those who would use it most productively, there’s no reason to think that inequality would be so self-sustaining—much less self-enhancing as it currently seems to be.

But when you realize that capitalism emerged out of a system where the top 1% (or less) already owned most things, and did so by a combination of “divine right” ideology and direct, explicit violence, this inequality becomes a lot less baffling. We never had a free market on a level playing field. The closest we’ve ever gotten has always been through social-democratic reforms (like the New Deal and Scandinavia).

How does this result in corporations? Well, when all the wealth is held by a small fraction of individuals, how do you start a business? You have to borrow money from the people who have it. Borrowing makes you beholden to your creditors, and puts you at great risk if your venture fails (especially back in the days when there were debtors’ prisons—and we’re starting to go back in that direction!). Equity provides an alternative: In exchange for absorbing the downside risk if your venture fails, your creditors—now shareholders—also get the upside if your venture succeeds. But at the end of the day, when your business has succeeded, where did most of the profits go? Into the hands of the people who already had money to begin with, who did nothing to actually contribute to society. The world would be better off if those people had never existed and their wealth had simply been shared with everyone else.

Compare this to what would happen if we all started with similar levels of wealth. (How much would each of us have? Total US wealth of about $44 trillion, spread among a population of 328 million, is about $130,000 each. I don’t know about you, but I think I could do quite a bit with that.) When starting a business, you wouldn’t go heavily into debt or sign away ownership of your company to some billionaire; you’d gather a group of dedicated partners, each of whom would contribute money and effort into building the business. As you added on new workers, it would make sense to pool their assets, and give them a share of the company as well. The natural structure for your business would be not a shareholder corporation, but a worker-owned cooperative.

I think on some level the super-rich actually understand this. If you look closely at the sort of policies they fight for, they really aren’t capitalist. They don’t believe in free, unfettered markets where competition reigns. They believe in monopoly, lobbying, corruption, nepotism, and above all, low taxes. (There’s actually nothing in the basic principles of capitalism that says taxes should be low. Taxes should be as high as they need to be to cover public goods—no higher, and no lower.) They don’t want to provide nationalized healthcare, not because they believe that private healthcare competition is more efficient (no one who looks at the data for even a few minutes can honestly believe that—US healthcare is by far the most expensive in the world), but because they know that it would give their employees too much freedom to quit and work elsewhere. Donald Trump doesn’t want a world where any college kid with a brilliant idea and a lot of luck can overthrow his empire; he wants a world where everyone owes him and his family personal favors that he can call in to humiliate them and exert his power. That’s not capitalism—it’s feudalism.

Crowdfunding also provides an interesting alternative; we might even call it the customer-owned cooperative. Kickstarter and Patreon provide a very interesting new economic model—still entirely within the realm of free markets—where customers directly fund production and interact with producers to decide what will be produced. This might turn out to be even more efficient—and notice that it would run a lot more smoothly if we had all started with a level playing field.

Establishing such a playing field, of course, requires a large amount of redistribution of wealth. Is this socialism? If you insist. But I think it’s more accurate to describe it as reparations for feudalism (not to mention colonialism). We aren’t redistributing what was fairly earned in free markets; we are redistributing what was stolen, so that from now on, wealth can be fairly earned in free markets.

How (not) to destroy an immoral market

Jul 29 JDN 2458329

In this world there are people of primitive cultures, with a population that is slowly declining, trying to survive a constant threat of violence in the aftermath of colonialism. But you already knew that, of course.

What you may not have realized is that some of these people are actively hunted by other people, slaughtered so that their remains can be sold on the black market.

I am referring of course to elephants. Maybe those weren’t the people you first had in mind?

Elephants are not human in the sense of being Homo sapiens; but as far as I am concerned, they are people in a moral sense.

Elephants take as long to mature as humans, and spend most of their childhood learning. They are born with brains only 35% of the size of their adult brains, much as we are born with brains 28% the size of our adult brains. Their encephalization quotients range from about 1.5 to 2.4, comparable to chimpanzees.

Elephants have problem-solving intelligence comparable to chimpanzees, cetaceans, and corvids. Elephants can pass the “mirror test” of self-identification and self-awareness. Individual elephants exhibit clearly distinguishable personalities. They exhibit empathy toward humans and other elephants. They can think creatively and develop new tools.

Elephants distinguish individual humans or elephants by sight or by voice, comfort each other when distressed, and above all mourn their dead. The kind of mourning behaviors elephants exhibit toward the remains of their dead family members have only been observed in humans and chimpanzees.

On a darker note, elephants also seek revenge. In response to losing loved ones to poaching or collisions with trains, elephants have orchestrated organized counter-attacks against human towns. This is not a single animal defending itself, as almost any will do; this is a coordinated act of vengeance after the fact. Once again, we have only observed similar behaviors in humans, great apes, and cetaceans.

Huffington Post backed off and said “just kidding” after asserting that elephants are people—but I won’t. Elephants are people. They do not have an advanced civilization, to be sure. But as far as I am concerned they display all the necessary minimal conditions to be granted the fundamental rights of personhood. Killing an elephant is murder.

And yet, the ivory trade continues to be profitable. Most of this is black-market activity, though it was legal in some places until very recently; China only restored their ivory trade ban this year, and Hong Kong’s ban will not take full effect until 2021. Some places are backsliding: A proposal (currently on hold) by the US Fish and Wildlife Service under the Trump administration would also legalize some limited forms of ivory trade.
With this in mind, I can understand why people would support the practice of ivory-burning, symbolically and publicly destroying ivory by fire so that no one can buy it. Two years ago, Kenya organized a particularly large ivory-burning that set ablaze 105 tons of elephant tusk and 1.35 tons of rhino horn.

But as an economist, when I first learned about ivory-burning, it struck me as a really, really bad idea.

Why? Supply and demand. By destroying supply, you have just raised the market price of ivory. You have therefore increased the market incentives for poaching elephants and rhinos.

Yet it turns out I was wrong about this, as were many other economists. I looked at the empirical research, and changed my mind substantially. Ivory-burning is not such a bad idea after all.

Here was my reasoning before: If I want to reduce the incentives to produce something, what do I need to do? Lower the price. How do I do that? I need to increase the supply. Economists have made several proposals for how to do that, and until I looked at the data I would have expected them to work; but they haven’t.

The best way to increase supply is to create synthetic ivory that is cheap and very difficult to tell apart from the real thing. This has been done, but it didn’t work. For some reason, sellers try to hide the expensive real ivory in with the cheap synthetic ivory. I admit I actually have trouble understanding this; if you can’t sell it at full price, why even bother with the illegal real ivory? Maybe their customers have methods of distinguishing the two that the regulators don’t? If so, why aren’t the regulators using those methods? Another concern with increasing the supply of ivory is that it might reduce the stigma of consuming ivory, thereby also increasing the demand.

A similar problem has arisen with so-called “ghost ivory”; for obvious reasons, existing ivory products were excluded from the ban imposed in 1947, lest the government be forced to confiscate millions of billiard balls and thousands of pianos. Yet poachers have learned ways to hide new, illegal ivory and sell it as old, legal ivory.

Another proposal was to organize “sustainable ivory harvesting”, which based on past experience with similar regulations is unlikely to be enforceable. Moreover, this is not like sustainable wood harvesting, where our only concern is environmental. I for one care about the welfare of individual elephants, and I don’t think they would want to be “harvested”, sustainably or otherwise.
There is one way of doing “sustainable harvesting” that might not be so bad for the elephants, which would be to set up a protected colony of elephants, help them to increase their population, and then when elephants die of natural causes, take only the tusks and sell those as ivory, stamped with an official seal as “humanely and sustainably produced”. Even then, elephants are among a handful of species that would be offended by us taking their ancestors’ remains. But if it worked, it could save many elephant lives. The bigger problem is how expensive such a project would be, and how long it would take to show any benefit; elephant lifespans are about half as long as ours (except in zoos, where their mortality rate is much higher!), so a policy that might conceivably solve a problem in 30 to 40 years doesn’t really sound so great. More detailed theoretical and empirical analysis has made this clear: you just can’t get ivory fast enough to meet existing demand this way.

In any case, China’s ban on all ivory trade had an immediate effect at dropping the price of ivory, which synthetic ivory did not. Before that, strengthened regulations in the US (particularly in New York and California) had been effective at reducing ivory sales. The CITES treaty in 1989 that banned most international ivory trade was followed by an immediate increase in elephant populations.

The most effective response to ivory trade is an absolutely categorical ban with no loopholes. To fight “ghost ivory”, we should remove exceptions for old ivory, offering buybacks for any antiques with a verifiable pedigree and a brief period of no-penalty surrender for anything with no such records. The only legal ivory must be for medical and scientific purposes, and its sourcing records must be absolutely impeccable—just as we do with human remains.

Even synthetic ivory must also be banned, at least if it’s convincing enough that real ivory could be hidden in it. You can make something you call “synthetic ivory” that serves a similar consumer function, but it must be different enough that it can be easily verified at customs inspections.

We must give no quarter to poachers; Kenya was right to impose a life sentence for aggravated poaching. The Tanzanian proposal to “shoot to kill” was too extreme; summary execution is never acceptable. But if indeed someone currently has a weapon pointed at an elephant and refuses to drop it, I consider it justifiable to shoot them, just as I would if that weapon were aimed at a human.

The need for a categorical ban is what makes the current US proposal dangerous. The particular exceptions it carves out are not all that large, but the fact that it carves out exceptions at all makes enforcement much more difficult. To his credit, Trump himself doesn’t seem very keen on the proposal, which may mean that it is dead in the water. I don’t get to say this often, but so far Trump seems to be making the right choice on this one.

Though the economic theory predicted otherwise, the empirical data is actually quite clear: The most effective way to save elephants from poaching is an absolutely categorical ban on ivory.

Ivory-burning is a signal of commitment to such a ban. Any ivory we find being sold, we will burn. Whoever was trying to sell it will lose their entire investment. Find more, and we will burn that too.

Why are humans so bad with probability?

Apr 29 JDN 2458238

In previous posts on deviations from expected utility and cumulative prospect theory, I’ve detailed some of the myriad ways in which human beings deviate from optimal rational behavior when it comes to probability.

This post is going to be a bit different: Yes, we behave irrationally when it comes to probability. Why?

Why aren’t we optimal expected utility maximizers?
This question is not as simple as it sounds. Some of the ways that human beings deviate from neoclassical behavior are simply because neoclassical theory requires levels of knowledge and intelligence far beyond what human beings are capable of; basically anything requiring “perfect information” qualifies, as does any game theory prediction that involves solving extensive-form games with infinite strategy spaces by backward induction. (Don’t feel bad if you have no idea what that means; that’s kind of my point. Solving infinite extensive-form games by backward induction is an unsolved problem in game theory; just this past week I saw a new paper presented that offered a partial potential solution. And yet we expect people to do it optimally every time?)

I’m also not going to include questions of fundamental uncertainty, like “Will Apple stock rise or fall tomorrow?” or “Will the US go to war with North Korea in the next ten years?” where it isn’t even clear how we would assign a probability. (Though I will get back to them, for reasons that will become clear.)

No, let’s just look at the absolute simplest cases, where the probabilities are all well-defined and completely transparent: Lotteries and casino games. Why are we so bad at that?

Lotteries are not a computationally complex problem. You figure out how much the prize is worth to you, multiply it by the probability of winning—which is clearly spelled out for you—and compare that to how much the ticket price is worth to you. The most challenging part lies in specifying your marginal utility of wealth—the “how much it’s worth to you” part—but that’s something you basically had to do anyway, to make any kind of trade-offs on how to spend your time and money. Maybe you didn’t need to compute it quite so precisely over that particular range of parameters, but you need at least some idea how much $1 versus $10,000 is worth to you in order to get by in a market economy.
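
To make that concrete, here is a minimal sketch of the calculation in Python. The log utility function and the specific wealth, ticket price, prize, and probability figures are purely illustrative assumptions on my part, not numbers from any actual lottery.

    import math

    def expected_utility_of_ticket(wealth, price, prize, win_prob, utility=math.log):
        """Expected utility of buying one ticket versus keeping the money.

        Uses a concave utility of wealth (log by default); the numbers passed
        in below are illustrative, not taken from any real lottery.
        """
        eu_buy = (win_prob * utility(wealth - price + prize)
                  + (1 - win_prob) * utility(wealth - price))
        eu_keep = utility(wealth)
        return eu_buy, eu_keep

    # Hypothetical ticket: $2 for a 1-in-10,000 shot at $10,000 (an expected
    # payout of only $1), starting from $20,000 in wealth.
    eu_buy, eu_keep = expected_utility_of_ticket(20_000, 2, 10_000, 1e-4)
    print(eu_buy < eu_keep)  # True: buying the ticket lowers expected utility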

Casino games are a bit more complicated, but not much, and most of the work has been done for you; you can look on the Internet and find tables of probability calculations for poker, blackjack, roulette, craps and more. Memorizing all those probabilities might take some doing, but human memory is astonishingly capacious, and part of being an expert card player, especially in blackjack, seems to involve memorizing a lot of those probabilities.

Furthermore, by any plausible expected utility calculation, lotteries and casino games are a bad deal. Unless you’re an expert poker player or blackjack card-counter, your expected income from playing at a casino is always negative—and the casino set it up that way on purpose.

Why, then, can lotteries and casinos stay in business? Why are we so bad at such a simple problem?

Clearly we are using some sort of heuristic judgment in order to save computing power, and the people who make lotteries and casinos have designed formal models that can exploit those heuristics to pump money from us. (Shame on them, really; I don’t fully understand why this sort of thing is legal.)

In another previous post I proposed what I call “categorical prospect theory”, which I think is a decently accurate description of the heuristics people use when assessing probability (though I’ve not yet had the chance to test it experimentally).

But why use this particular heuristic? Indeed, why use a heuristic at all for such a simple problem?

I think it’s helpful to keep in mind that these simple problems are weird; they are absolutely not the sort of thing a tribe of hunter-gatherers is likely to encounter on the savannah. It doesn’t make sense for our brains to be optimized to solve poker or roulette.

The sort of problems that our ancestors encountered—indeed, the sort of problems that we encounter, most of the time—were not problems of calculable probability risk; they were problems of fundamental uncertainty. And they were frequently matters of life or death (which is why we’d expect them to be highly evolutionarily optimized): “Was that sound a lion, or just the wind?” “Is this mushroom safe to eat?” “Is that meat spoiled?”

In fact, many of the uncertainties most important to our ancestors are still important today: “Will these new strangers be friendly, or dangerous?” “Is that person attracted to me, or am I just projecting my own feelings?” “Can I trust you to keep your promise?” These sorts of social uncertainties are even deeper; it’s not clear that any finite being could ever totally resolve its uncertainty surrounding the behavior of other beings with the same level of intelligence, as the cognitive arms race continues indefinitely. The better I understand you, the better you understand me—and if you’re trying to deceive me, as I get better at detecting deception, you’ll get better at deceiving.

Personally, I think that it was precisely this sort of feedback loop that resulted in human beings getting such ridiculously huge brains in the first place. Chimpanzees are pretty good at dealing with the natural environment, maybe even better than we are; but even young children can outsmart them in social tasks any day. And once you start evolving for social cognition, it’s very hard to stop; basically you need to be constrained by something very fundamental, like, say, maximum caloric intake or the shape of the birth canal. Chimpanzee brains look like what we call an “interior solution”, where evolution optimized toward a particular balance between cost and benefit; human brains look more like a “corner solution”, where the evolutionary pressure was entirely in one direction until we hit up against a hard constraint. That’s exactly what one would expect to happen if we were caught in a cognitive arms race.

What sort of heuristic makes sense for dealing with fundamental uncertainty—as opposed to precisely calculable probability? Well, you don’t want to compute a utility function and multiply by it, because that adds all sorts of extra computation and you have no idea what probability to assign. But you’ve got to do something like that in some sense, because that really is the optimal way to respond.

So here’s a heuristic you might try: Separate events into some broad categories based on how frequently they seem to occur, and what sort of response would be necessary.

Some things, like the sun rising each morning, seem to always happen. So you should act as if those things are going to happen pretty much always, because they do happen… pretty much always.

Other things, like rain, seem to happen frequently but not always. So you should look for signs that those things might happen, and prepare for them when the signs point in that direction.

Still other things, like being attacked by lions, happen very rarely, but are a really big deal when they do. You can’t go around expecting those to happen all the time, that would be crazy; but you need to be vigilant, and if you see any sign that they might be happening, even if you’re pretty sure they’re not, you may need to respond as if they were actually happening, just in case. The cost of a false positive is much lower than the cost of a false negative.

And still other things, like people sprouting wings and flying, never seem to happen. So you should act as if those things are never going to happen, and you don’t have to worry about them.

This heuristic is quite simple to apply once set up: You simply slot in memories of when things did and didn’t happen in order to decide which category they go in—i.e., the availability heuristic. If you can remember a lot of times when something filed under “almost never” actually happened, maybe you should move it to “unlikely” instead. If you get a really big number of examples, you might even want to move it all the way to “likely”.
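
As a rough illustration, here is a minimal sketch of what such a heuristic might look like in code; the category names, thresholds, and example counts are all my own arbitrary choices for the sake of the example, not a calibrated model.

    def categorize(times_happened, times_it_could_have):
        """Slot an event into a coarse frequency category using remembered
        instances (an availability-style count) rather than a precise
        probability. The thresholds here are arbitrary."""
        frequency = times_happened / times_it_could_have
        if frequency > 0.95:
            return "always"    # act as if it will happen
        elif frequency > 0.3:
            return "likely"    # watch for signs and prepare
        elif frequency > 0:
            return "rare"      # stay vigilant; treat any warning sign seriously
        else:
            return "never"     # don't spend effort worrying about it

    print(categorize(364, 365))    # sunrise-like  -> "always"
    print(categorize(120, 365))    # rain-like     -> "likely"
    print(categorize(1, 10_000))   # lion-like     -> "rare"
    print(categorize(0, 10_000))   # wings-like    -> "never"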

Another large advantage of this heuristic is that by combining utility and probability into one metric—we might call it “importance”, though Bayesian econometricians might complain about that—we can save on memory space and computing power. I don’t need to separately compute a utility and a probability; I just need to figure out how much effort I should put into dealing with this situation. A high probability of a small cost and a low probability of a large cost may be equally worth my time.

How might these heuristics go wrong? Well, if your environment changes sufficiently, the probabilities could shift and what seemed certain no longer is. For most of human history, “people walking on the Moon” would seem about as plausible as sprouting wings and flying away, and yet it has happened. Being attacked by lions is now exceedingly rare except in very specific places, but we still harbor a certain awe and fear before lions. And of course availability heuristic can be greatly distorted by mass media, which makes people feel like terrorist attacks and nuclear meltdowns are common and deaths by car accidents and influenza are rare—when exactly the opposite is true.

How many categories should you set, and what frequencies should they be associated with? This part I’m still struggling with, and it’s an important piece of the puzzle I will need before I can take this theory to experiment. There is probably a trade-off: more categories give you more precision in tailoring your optimal behavior, but cost more cognitive resources to maintain. Is the optimal number 3? 4? 7? 10? I really don’t know. Even if I could specify the number of categories, I’d still need to figure out precisely what categories to assign.

Reasonableness and public goods games

Apr 1 JDN 2458210

There’s a very common economics experiment called a public goods game, often used to study cooperation and altruistic behavior. I’m actually planning on running a variant of such an experiment for my second-year paper.

The game is quite simple, which is part of why it is used so frequently: You are placed into a group of people (usually about four), and given a little bit of money (say $10). Then you are offered a choice: You can keep the money, or you can donate some of it to a group fund. Money in the group fund will be multiplied by some factor (usually about two) and then redistributed evenly to everyone in the group. So for example if you donate $5, that will become $10, split four ways, so you’ll get back $2.50.

Donating more to the group will benefit everyone else, but at a cost to yourself. The game is usually set up so that the best outcome for everyone is if everyone donates the maximum amount, but the best outcome for you, holding everyone else’s choices constant, is to donate nothing and keep it all.
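
For concreteness, here is a minimal sketch of the payoff arithmetic, using the parameters from the example above (four players, $10 endowments, a multiplier of 2). This is just the standard linear public-goods payoff, not the code for my planned experiment.

    def payoffs(contributions, endowment=10.0, multiplier=2.0):
        """Linear public goods game: each player keeps whatever they didn't
        contribute, plus an equal share of the multiplied group fund."""
        n = len(contributions)
        share = multiplier * sum(contributions) / n
        return [endowment - c + share for c in contributions]

    # One player donates $5, the other three donate nothing:
    print(payoffs([5, 0, 0, 0]))   # -> [7.5, 12.5, 12.5, 12.5]
    # The donor gets back only $2.50 of her $5, so she ends up $2.50 poorer,
    # while each of the other three ends up $2.50 richer.

    # Everyone donates everything versus no one donating anything:
    print(payoffs([10, 10, 10, 10]))  # -> [20.0, 20.0, 20.0, 20.0]
    print(payoffs([0, 0, 0, 0]))      # -> [10.0, 10.0, 10.0, 10.0]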

Yet it is a very robust finding that most people do neither of those things. There’s still a good deal of uncertainty surrounding what motivates people to donate what they do, but certain patterns have emerged:

  1. Most people donate something, but hardly anyone donates everything.
  2. Increasing the multiplier tends to smoothly increase how much people donate.
  3. The number of people in the group isn’t very important, though very small groups (e.g. 2) behave differently from very large groups (e.g. 50).
  4. Letting people talk to each other tends to increase the rate of donations.
  5. Repetition of the game, or experience from previous games, tends to result in decreasing donation over time.
  6. Economists donate less than other people.

Number 6 is unfortunate, but easy to explain: Indoctrination into game theory and neoclassical economics has taught economists that selfish behavior is efficient and optimal, so they behave selfishly.

Number 3 is also fairly easy to explain: Very small groups allow opportunities for punishment and coordination that don’t exist in large groups. Think about how you would respond when faced with 2 defectors in a group of 4 as opposed to 10 defectors in a group of 50. You could punish the 2 by giving less next round; but punishing the 10 would end up punishing 40 others who had contributed like they were supposed to.

Number 4 is a very interesting finding. Game theory says that communication shouldn’t matter, because there is a unique Nash equilibrium: Donate nothing. All the promises in the world can’t change what is the optimal response in the game. But in fact, human beings don’t like to break their promises, and so when you get a bunch of people together and they all agree to donate, most of them will carry through on that agreement most of the time.

Number 5 is on the frontier of research right now. There are various theoretical accounts for why it might occur, but none of the models proposed so far have much predictive power.

But my focus today will be on findings 1 and 2.

If you’re not familiar with the underlying game theory, finding 2 may seem obvious to you: Well, of course if you increase the payoff for donating, people will donate more! It’s precisely that sense of obviousness which I am going to appeal to in a moment.

In fact, the game theory makes a very sharp prediction: For N players, if the multiplier is less than N, you should always contribute nothing. Only if the multiplier becomes larger than N should you donate—and at that point you should donate everything. The game theory prediction is not a smooth increase; it’s all-or-nothing. The only time game theory predicts intermediate amounts is on the knife-edge at exactly equal to N, where each player would be indifferent between donating and not donating.

But it feels reasonable that increasing the multiplier should increase donation, doesn’t it? It’s a “safer bet” in some sense to donate $1 if the payoff to everyone is $3 and the payoff to yourself is $0.75 than if the payoff to everyone is $1.04 and the payoff to yourself is $0.26. The cost-benefit analysis comes out better: In the former case, you can gain up to $2 if everyone donates, but would only lose $0.25 if you donate alone; but in the latter case, you would only gain $0.04 if everyone donates, and would lose $0.74 if you donate alone.
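
Those figures are easy to verify with the same payoff logic as the sketch above (again assuming four players, each considering a $1 donation):

    def my_net(my_c, others_c, multiplier, n=4):
        """My payoff minus my endowment: my overall gain or loss from this
        pattern of contributions."""
        return -my_c + multiplier * (my_c + sum(others_c)) / n

    print(round(my_net(1, [1, 1, 1], 3), 2))     # +2.0: everyone donates, multiplier 3
    print(round(my_net(1, [0, 0, 0], 3), 2))     # -0.25: I donate alone, multiplier 3
    print(round(my_net(1, [1, 1, 1], 1.04), 2))  # +0.04: everyone donates, multiplier 1.04
    print(round(my_net(1, [0, 0, 0], 1.04), 2))  # -0.74: I donate alone, multiplier 1.04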

I think this notion of “reasonableness” is a deep principle that underlies a great deal of human thought. This is something that is sorely lacking from artificial intelligence: The same AI that tells you the precise width of the English Channel to the nearest foot may also tell you that the Earth is 14 feet in diameter, because the former was in its database and the latter wasn’t. Yes, WATSON may have won on Jeopardy, but it (he?) also made a nonsensical response to the Final Jeopardy question.

Human beings like to “sanity-check” our results against prior knowledge, making sure that everything fits together. And, of particular note for public goods games, human beings like to “hedge our bets”; we don’t like to over-commit to a single belief in the face of uncertainty.

I think this is what best explains findings 1 and 2. We don’t donate everything, because that requires committing totally to the belief that contributing is always better. We also don’t donate nothing, because that requires committing totally to the belief that contributing is always worse.

And of course we donate more as the payoffs to donating more increase; that also just seems reasonable. If something is better, you do more of it!

These choices could be modeled formally by assigning some sort of probability distribution over other’s choices, but in a rather unconventional way. We can’t simply assume that other people will randomly choose some decision and then optimize accordingly—that just gives you back the game theory prediction. We have to assume that our behavior and the behavior of others is in some sense correlated; if we decide to donate, we reason that others are more likely to donate as well.
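
Here is one crude way such correlated beliefs could be written down; this is purely my own illustrative formalization, not a standard model. Suppose that if I donate, I believe each other player donates their whole endowment with some probability, and that this probability would be lower if I chose not to donate. With a large enough gap between those two beliefs, donating comes out ahead in expectation.

    def expected_payoff(i_donate, p_if_donate=0.8, p_if_not=0.3,
                        endowment=10.0, multiplier=2.0, n=4):
        """Expected payoff under 'correlated' beliefs: my own choice shifts the
        probability I assign to each other player donating everything. All of
        the probability values here are made up for illustration."""
        p = p_if_donate if i_donate else p_if_not
        my_c = endowment if i_donate else 0.0
        expected_fund = my_c + (n - 1) * p * endowment
        return endowment - my_c + multiplier * expected_fund / n

    print(expected_payoff(True))   # 17.0
    print(expected_payoff(False))  # 14.5: with beliefs like these, donating
                                   # looks like the better bet

With a smaller gap between the two beliefs, keeping the money wins instead, which is at least consistent with the fact that not everyone donates; pinning down the right way to specify those beliefs is part of what an experiment would have to do.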

Stated like that, this sounds irrational; some economists have taken to calling it “magical thinking”. Yet, as I always like to point out to such economists: On average, people who do that make more money in the games. Economists playing other economists always make very little money in these games, because they turn on each other immediately. So who is “irrational” now?

Indeed, if you ask people to predict how others will behave in these games, they generally do better than the game theory prediction: They say, correctly, that some people will give nothing, most will give something, and hardly any will give everything. The same “reasonableness” that they use to motivate their own decisions, they also accurately apply to forecasting the decisions of others.

Of course, to say that something is “reasonable” may be ultimately to say that it conforms to our heuristics well. To really have a theory, I need to specify exactly what those heuristics are.

“Don’t put all your eggs in one basket” seems to be one, but it’s probably not the only one that matters; my guess is that there are circumstances in which people would actually choose all-or-nothing, like if we said that the multiplier was 0.5 (so everyone giving to the group would make everyone worse off) or 10 (so that giving to the group makes you and everyone else way better off).

“Higher payoffs are better” is probably one as well, but precisely formulating that is actually surprisingly difficult. Higher payoffs for you? For the group? Conditional on what? Do you hold others’ behavior constant, or assume it is somehow affected by your own choices?

And of course, the theory wouldn’t be much good if it only worked on public goods games (though even that would be a substantial advance at this point). We want a theory that explains a broad class of human behavior; we can start with simple economics experiments, but ultimately we want to extend it to real-world choices.

Hyperbolic discounting: Why we procrastinate

Mar 25 JDN 2458203

Lately I’ve been so occupied by Trump and politics and various ideas from environmentalists that I haven’t really written much about the cognitive economics that was originally planned to be the core of this blog. So, I thought that this week I would take a step out of the political fray and go back to those core topics.

Why do we procrastinate? Why do we overeat? Why do we fail to exercise? It’s quite mysterious, from the perspective of neoclassical economic theory. We know these things are bad for us in the long run, and yet we do them anyway.

The reason has to do with the way our brains deal with time. We value the future less than the present—but that’s not actually the problem. The problem is that we do so inconsistently.

A perfectly-rational neoclassical agent would use time-consistent discounting; what this means is that the discount applied to a given time interval doesn’t depend on when you are asked or on the stakes involved. If having $100 in 2019 is as good as having $110 in 2020, then having $1000 in 2019 is as good as having $1100 in 2020; and if I ask you again once 2019 arrives, you’ll still agree that having $100 in 2019 is as good as having $110 in 2020. A perfectly-rational individual would have a certain discount rate (in this case, 10% per year), and would apply it consistently at all times on all things.

This is of course not how human beings behave at all.

A much more likely pattern is that you would agree, in 2018, that having $100 in 2019 is as good as having $110 in 2020 (a discount rate of 10%). But then if I wait until 2019, and then offer you the choice between $100 immediately and $120 in a year, you’ll probably take the $100 immediately—even though a year ago, you told me you wouldn’t. Your discount rate rose from 10% to at least 20% in the intervening time.

The leading model in cognitive economics right now to explain this is called hyperbolic discounting. The precise functional form of a hyperbola has been called into question by recent research, but the general pattern is definitely right: We act as though time matters a great deal when discussing time intervals that are close to us, but treat time as unimportant when discussing time intervals that are far away.
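
To see the difference concretely, here is a minimal sketch comparing exponential (time-consistent) discounting with a simple hyperbolic form, 1/(1 + kt). The dollar amounts and the value of k are arbitrary choices for illustration.

    def exponential(t, rate=0.10):
        return 1 / (1 + rate) ** t

    def hyperbolic(t, k=1.0):
        return 1 / (1 + k * t)

    def prefers_early(discount, delay):
        """Prefer $100 after `delay` years over $120 one year after that?"""
        return 100 * discount(delay) > 120 * discount(delay + 1)

    for delay in [5, 1, 0]:
        print(delay, prefers_early(exponential, delay), prefers_early(hyperbolic, delay))
    # Exponential: the answer is False at every delay; the ranking never
    # depends on when you ask, so there is no preference reversal.
    # Hyperbolic: from five years out you prefer the larger, later $120,
    # but once the $100 is imminent you switch and grab it instead.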

How does this explain procrastination and other failures of self-control over time? Let’s try an example.

Let’s say that you have a project you need to finish by the end of the day Friday, which has a benefit to you, received on Saturday, that I will arbitrarily scale at 1000 utilons.

Then, let’s say it’s Monday. You have five days to work on it, and each day of work costs you 100 utilons. If you work all five days, the project will get done.

If you skip a day of work, you will need to work so much harder that one of the following days your cost of work will be 300 utilons instead of 100. If you skip two days, you’ll have to pay 300 utilons twice. And if you skip three or more days, the project will not be finished and it will all be for naught.

If you don’t discount time at all (which, over a week, is probably close to optimal), the answer is obvious: Work all five days. Pay 100+100+100+100+100 = 500, receive 1000. Net benefit: 500.

But even if you discount time, as long as you do so consistently, you still wouldn’t procrastinate.

Let’s say your discount rate is extremely high (maybe you’re dying or something), so that each day is only worth 80% as much as the previous. A benefit that’s worth 1 on Monday is worth 0.8 if it comes on Tuesday, 0.64 if it comes on Wednesday, 0.512 if it comes on Thursday, 0.4096 if it comes on Friday, and 0.32768 if it comes on Saturday. Then instead of paying 100+100+100+100+100 to get 1000, you’re paying 100+80+64+51+41=336 to get 328. It’s not worth doing the project; you should just enjoy your last few days on Earth. That’s not procrastinating; that’s rationally choosing not to undertake a project that isn’t worthwhile under your circumstances.

Procrastinating would look more like this: You skip the first two days, then work 100 the third day, then work 300 each of the last two days, finishing the project. If you didn’t discount at all, you would pay 100+300+300=700 to get 1000, so your net benefit has been reduced to 300.

There’s no consistent discount rate that would make this rational. If it was worth giving up 200 on Thursday and Friday to get 100 on Monday and Tuesday, you must be discounting at least 26% per day. But if you’re discounting that much, you shouldn’t bother with the project at all.

There is however an inconsistent discounting by which it makes perfect sense. Suppose that instead of consistently discounting some percentage each day, psychologically it feels like this: The value is the inverse of the length of time (that’s what it means to be hyperbolic). So the same amount of benefit on Monday which is worth 1 is only worth 1/2 if it comes on Tuesday, 1/3 if on Wednesday, 1/4 if on Thursday, and 1/5 if on Friday.

So, when thinking about your weekly schedule, you realize that by pushing back Monday’s work to Thursday, you can gain 100 today at a cost of only 200/4 = 50, since Thursday is 4 days away. And by pushing back Tuesday’s work to Friday, you can gain 100/2=50 today at a cost of only 200/5=40. So now it makes perfect sense to have fun on Monday and Tuesday, start working on Wednesday, and cram the biggest work into Thursday and Friday. And yes, it still makes sense to do the project, because 1000/6 = 166 is more than the 100/3+200/4+200/5 = 123 it will cost to do the work.

But now think about what happens when you come to Wednesday. The work today costs 100. The work on Thursday costs 200/2 = 100. The work on Friday costs 200/3 = 66. The benefit of completing the project will be 1000/4 = 250. So you are paying 100+100+66=266 to get a benefit of only 250. It’s not worth it anymore! You’ve changed your mind. So you don’t work Wednesday.

At that point, it’s too late, so you don’t work Thursday, you don’t work Friday, and the project doesn’t get done. You have procrastinated away the benefits you could have gotten from doing this project. If only you could have done the work on Monday and Tuesday, then on Wednesday it would have been worthwhile to continue: 100/1+100/2+100/3 = 183 is less than the benefit of 250.

What went wrong? The key event was the preference reversal: While on Monday you preferred having fun on Monday and working on Thursday to working on both days, when the time came you changed your mind. Someone with time-consistent discounting would never do that; they would either prefer one or the other, and never change their mind.

One way to think about this is to imagine future versions of yourself as different people, who agree with you on most things, but not on everything. They’re like friends or family; you want the best for them, but you don’t always see eye-to-eye.

Generally we find that our future selves are less rational about choices than we are. To be clear, this doesn’t mean that we’re all declining in rationality over time. Rather, it comes from the fact that future decisions are inherently closer to our future selves than they are to our current selves, and the closer a decision gets the more likely we are to use irrational time discounting.

This is why it’s useful to plan and make commitments. If starting on Monday you committed yourself to working every single day, you’d get the project done on time and everything would work out fine. Better yet, if you committed yourself last week to starting work on Monday, you wouldn’t even feel conflicted; you would be entirely willing to pay a cost of 100/8+100/9+100/10+100/11+100/12=51 to get a benefit of 1000/13=77. So you could set up some sort of scheme where you tell your friends ahead of time that you can’t go out that week, or you turn off access to social media sites (there are apps that will do this for you), or you set up a donation to an “anti-charity” you don’t like that will trigger if you fail to complete the project on time (there are websites to do that for you).
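
A quick check of that arithmetic, using the same 1/(1 + t) weights as before, with the commitment made seven days before the Monday in question:

    work_cost = sum(100 / (1 + delay) for delay in range(7, 12))  # Mon-Fri, 7 to 11 days away
    benefit = 1000 / (1 + 12)                                     # Saturday, 12 days away
    print(round(work_cost), round(benefit))                       # 51 77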

There is even a simpler way: Make a promise to yourself. This one can be tricky to follow through on, but if you can train yourself to do it, it is extraordinarily powerful and doesn’t come with the additional costs that a lot of other commitment devices involve. If you can really make yourself feel as bad about breaking a promise to yourself as you would about breaking a promise to someone else, then you can dramatically increase your own self-control with very little cost. The challenge lies in actually cultivating that sort of attitude, and then in following through with making only promises you can keep and actually keeping them. This, too, can be a delicate balance; it is dangerous to over-commit to promises to yourself and feel too much pain when you fail to meet them.
But given the strong correlations between self-control and long-term success, trying to train yourself to be a little better at it can provide enormous benefits.
If you ever get around to it, that is.

“DSGE or GTFO”: Macroeconomics took a wrong turn somewhere

Dec 31, JDN 2458119

“The state of macro is good,” wrote Olivier Blanchard—in August 2008. This is rather like the turkey who is so pleased with how the farmer has been feeding him lately, the day before Thanksgiving.

It’s not easy to say exactly where macroeconomics went wrong, but I think Paul Romer is right when he makes the analogy between DSGE (dynamic stochastic general equilibrium) models and string theory. They are mathematically complex and difficult to understand, and people can make their careers by being the only ones who grasp them; therefore they must be right! Never mind if they have no empirical support whatsoever.

To be fair, DSGE models are at least a little better than string theory; they can at least be fit to real-world data, which is more than you can say for string theory. But being fit to data and actually predicting data are fundamentally different things, and DSGE models typically forecast no better than far simpler models without their bold assumptions. You don’t need to assume all this stuff about a “representative agent” maximizing a well-defined utility function, or an Euler equation (that doesn’t even fit the data), or this ever-proliferating list of “random shocks” that end up taking up all the degrees of freedom your model was supposed to explain. Just regressing the variables on a few years of previous values of each other (a “vector autoregression” or VAR) generally gives you an equally good forecast. The fact that these models can be made to fit the data well if you add enough degrees of freedom doesn’t actually make them good models. As von Neumann warned us, with enough free parameters, you can fit an elephant.
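
For what it’s worth, that baseline really is easy to set up. Here is a rough sketch using statsmodels, with simulated data standing in for whatever quarterly macro series you care about; the series names and lag length are placeholders, not a claim about any particular dataset.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Simulated stand-ins for, say, GDP growth, inflation, and unemployment.
    rng = np.random.default_rng(0)
    data = pd.DataFrame(rng.normal(size=(200, 3)),
                        columns=["gdp_growth", "inflation", "unemployment"])

    model = VAR(data)                 # regress each series on lags of all of them
    results = model.fit(maxlags=4)    # here, four lags of each variable
    forecast = results.forecast(data.values[-results.k_ar:], steps=8)
    print(forecast.shape)             # (8, 3): an eight-period-ahead forecast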

But really what bothers me is not the DSGE but the GTFO (“get the [expletive] out”); it’s not that DSGE models are used, but that it’s almost impossible to get published as a macroeconomic theorist using anything else. Defenders of DSGE typically don’t even argue anymore that it is good; they argue that there are no credible alternatives. They characterize their opponents as “dilettantes” who aren’t opposing DSGE because we disagree with it; no, it must be because we don’t understand it. (Also, regarding that post, I’d just like to note that I now officially satisfy the Athreya Axiom of Absolute Arrogance: I have passed my qualifying exams in a top-50 economics PhD program. Yet my enmity toward DSGE has, if anything, only intensified.)

Of course, that argument only makes sense if you haven’t been actively suppressing all attempts to formulate an alternative, which is precisely what DSGE macroeconomists have been doing for the last two or three decades. And yet despite this suppression, alternatives are emerging, particularly on the empirical side: there are now approaches to macroeconomics that don’t rely on DSGE models at all. Regression discontinuity methods and other “natural experiment” designs—not to mention actual experiments—are quickly rising in popularity as economists realize that these methods allow us to actually test our models empirically instead of just adding more and more mathematical complexity to them.

But there still seems to be a lingering attitude that there is no other way to do macro theory. This is very frustrating for me personally, because deep down I think what I would like to do as a career is macro theory: By temperament I have always viewed the world through a very abstract, theoretical lens, and the issues I care most about—particularly inequality, development, and unemployment—are all fundamentally “macro” issues. I left physics when I realized I would be expected to do string theory. I don’t want to leave economics now that I’m expected to do DSGE. But I also definitely don’t want to do DSGE.

Fortunately with economics I have a backup plan: I can always be an “applied microeconomist” (rather the opposite of a theoretical macroeconomist, I suppose), directly attached to the data in the form of empirical analyses or even direct, randomized controlled experiments. And there certainly is plenty of work to be done along the lines of Akerlof and Roth and Shiller and Kahneman and Thaler in cognitive and behavioral economics, which is also generally considered applied micro. I was never going to be an experimental physicist, but I can be an experimental economist. And I do get to use at least some theory: In particular, there’s an awful lot of game theory in experimental economics these days. Some of the most exciting stuff is actually in showing how human beings don’t behave the way classical game theory predicts (particularly in the Ultimatum Game and the Prisoner’s Dilemma), and trying to extend game theory into something that would fit our actual behavior. Cognitive science suggests that the result is going to end up looking quite different from game theory as we know it, and with my cognitive science background I may be particularly well-positioned to lead that charge.

Still, I don’t think I’ll be entirely satisfied if I can’t somehow bring my career back around to macroeconomic issues, and particularly the great elephant in the room of all economics, which is inequality. Underlying everything from Marxism to Trumpism, from the surging rents in Silicon Valley to the crushing poverty of Burkina Faso, to the Great Recession itself, is inequality. It is, in my view, the central question of economics: Who gets what, and why?

That is a fundamentally macro question, but you can’t even talk about that issue in DSGE as we know it; a “representative agent” inherently smooths over all inequality in the economy as though total GDP were all that mattered. A fundamentally new approach to macroeconomics is needed. Hopefully I can be part of that, but from my current position I don’t feel much empowered to fight this status quo. Maybe I need to spend at least a few more years doing something else, making a name for myself, and then I’ll be able to come back to this fight with a stronger position.

In the meantime, I guess there’s plenty of work to be done on cognitive biases and deviations from game theory.

The “productivity paradox”

Dec 10, JDN 2458098

Take a look at this graph of manufacturing output per worker-hour:

[Figure: US manufacturing output per worker-hour]

From 1988 to 2008, it was growing at a steady pace. In 2008 and 2009 it took a dip due to the Great Recession; no big surprise there. But then since 2012 it has been… completely flat. If we take this graph at face value, it would imply that manufacturing workers today can produce no more output than workers five years ago, and indeed only about 10% more than workers a decade ago. Whereas, a worker in 2008 was producing over 60% more than a worker in 1998, who was producing over 40% more than a worker in 1988.

Many economists call this the “productivity paradox”, and use it to argue that we don’t really need to worry about robots taking all our jobs any time soon. I think this view is mistaken.

The way we measure productivity is fundamentally wrongheaded, and is probably the sole cause of this “paradox”.

First of all, we use total hours scheduled to work, not total hours actually doing productive work. This is obviously much, much easier to measure, which is why we do it. But if you think for a moment about how the 40-hour workweek norm is going to clash with rapidly rising real productivity, it becomes apparent why this isn’t going to be a good measure.
When a worker finds a way to get done in 10 hours what used to take 40 hours, what does that worker’s boss do? Send them home after 10 hours because the job is done? Give them a bonus for their creativity? Hardly. That would be far too rational. They assign them more work, while paying them exactly the same. Recognizing this, what is such a worker to do? The obvious answer is to pretend to work the other 30 hours, while in fact doing something more pleasant than working.
And indeed, so-called “worker distraction” has been rapidly increasing. People are right to blame smartphones, I suppose, but not for the reasons they think. It’s not that smartphones are inherently distracting devices. It’s that smartphones are the cutting edge of a technological revolution that has made most of our work time unnecessary, so due to our fundamentally defective management norms they create overwhelming incentives to waste time at work to avoid getting drenched in extra tasks for no money.

That would probably be enough to explain the “paradox” by itself, but there is a deeper reason that in the long run is even stronger. It has to do with the way we measure “output”.

It might surprise you to learn that economists almost never consider output in terms of the actual number of cars produced, buildings constructed, songs written, or software packages developed. The standard measures of output are all in the form of so-called “real GDP”; that is, the dollar value of output produced.

They do adjust for indexes of inflation, but as I’ll show in a moment this still creates a fundamentally biased picture of the productivity dynamics.

Consider a world with only three industries: Housing, Food, and Music.

Productivity in Housing doesn’t change at all. Producing a house cost 10,000 worker-hours in 1950, and cost 10,000 worker-hours in 2000. Nominal price of houses has rapidly increased, from $10,000 in 1950 to $200,000 in 2000.

Productivity in Food rises moderately fast. Producing 1,000 meals cost 1,000 worker-hours in 1950, and cost 100 worker-hours in 2000. Nominal price of food has increased slowly, from $1,000 per 1,000 meals in 1950 to $5,000 per 1,000 meals in 2000.

Productivity in Music rises extremely fast. Producing 1,000 performances cost 10,000 worker-hours in 1950, and cost 1 worker-hour in 2000. Nominal price of music has collapsed, from $100,000 per 1,000 performances in 1950 to $1,000 per 1,000 performances in 2000.

This is of course an extremely stylized version of what has actually happened: Housing has gotten way more expensive, food has stayed about the same in price while farm employment has plummeted, and the rise of digital music has brought about a new Renaissance in actual music production and listening while revenue for the music industry has collapsed. There is a very nice Vox article on the “productivity paradox” showing a graph of how prices have changed in different industries.

How would productivity appear in the world I’ve just described, by standard measures? Well, to answer that, I need to say something about how consumers substitute across industries. But I think I’ll be forgiven in this case for saying that there is no substitution whatsoever; you can’t eat music or live in a burrito. There’s also a clear Maslow hierarchy here: They say that man cannot live by bread alone, but I think living by Led Zeppelin alone is even harder.

Consumers will therefore choose like this: Over 10 years, buy 1 house, 10,000 meals, and as many performances as you can afford after that. Further suppose that each person had $2,100 per year to spend in 1940-1950, and $50,000 per year to spend in 1990-2000. (This is approximately true for actual nominal US GDP per capita.)

1940-1950:
Total funds: $21,000

1 house = $10,000

10,000 meals = $10,000

Remaining funds: $1,000

Performances purchased: 10

1990-2000:

Total funds: $500,000

1 house = $200,000

10,000 meals = $50,000

Remaining funds: $250,000

Performances purchased: 250,000

(Do you really listen to this much music? 250,000 performances over 10 years is about 70 songs per day. If each song is 3 minutes, that’s only about 3.5 hours per day. If you listen to music while you work or watch a couple of movies with musical scores, yes, you really do listen to this much music! The unrealistic part is assuming that people in 1950 listen to so little, given that radio was already widespread. But if you think of music as standing in for all media, the general trend of being able to consume vastly more media in the digital age is clearly correct.)

Now consider how we would compute a price index for each time period. We would construct a basket of goods and determine the price of that basket in each time period, then adjust prices until that basket has a constant price.

Here, the basket would probably be what people bought in 1940-1950: 1 house, 10,000 meals, and 400 music performances.

In 1950, this basket cost $10,000+$10,000+$100 = $21,000.

In 2000, this basket cost $200,000+$50,000+$400 = $150,400.

This means that our inflation adjustment is $150,400/$21,000 = 7 to 1. This means that we would estimate the real per-capita GDP in 1950 at about $14,700. And indeed, that’s about the actual estimate of real per-capita GDP in 1950.

So, what would we say about productivity?

Sales of houses in 1950 were 1 per person, costing 10,000 worker hours.

Sales of food in 1950 were 10,000 per person, costing 10,000 worker hours.

Sales of music in 1950 were 400 per person, costing 4,000 worker hours.

Worker hours per person are therefore 24,000.

Sales of houses in 2000 were 1 per person, costing 10,000 worker hours.

Sales of food in 2000 were 10,000 per person, costing 1,000 worker hours.

Sales of music in 2000 were 250,000 per person, costing 25,000 worker hours.

Worker hours per person are therefore 36,000.

Therefore we would estimate that productivity rose from $14,700/24,000 = $0.61 per worker-hour to $50,000/36,000 = $1.40 per worker-hour. This is an annual growth rate of about 1.7%, which is, again, pretty close to the actual estimate of productivity growth. For such a highly stylized model, my figures are doing remarkably well. (Honestly, better than I thought they would!)

But think about how much actual productivity rose, at least in the industries where it did.

We produce 10 times as much food per worker hour after 50 years, which is an annual growth rate of 4.7%, or three times the estimated growth rate.

We produce 10,000 times as much music per worker hour after 50 years, which is an annual growth rate of over 20%, or almost twelve times the estimated growth rate.
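
Those annualized figures come from the usual compound-growth formula; here is a quick check using the numbers above.

    def annual_growth(ratio, years=50):
        """Annualized growth rate implied by total growth `ratio` over `years`."""
        return ratio ** (1 / years) - 1

    print(f"{annual_growth(1.40 / 0.61):.1%}")  # measured estimate: ~1.7% per year
    print(f"{annual_growth(10):.1%}")           # food:  ~4.7% per year
    print(f"{annual_growth(10_000):.1%}")       # music: ~20.2% per year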

Moreover, should music producers be worried about losing their jobs to automation? Absolutely! People simply won’t be able to listen to much more music than they already are, so any continued increases in music productivity are going to make musicians lose jobs. And that was already allowing for music consumption to increase by a factor of over 600.

Of course, the real world has a lot more industries than this, and everything is a lot more complicated. We do actually substitute across some of those industries, unlike in this model.

But I hope I’ve gotten at least the basic point across that when things become drastically cheaper as technological progress often does, simply adjusting for inflation doesn’t do the job. One dollar of music today isn’t the same thing as one dollar of music a century ago, even if you inflation-adjust their dollars to match ours. We ought to be measuring in hours of music; an hour of music is much the same thing as an hour of music a century ago.

And likewise, that secretary/weather forecaster/news reporter/accountant/musician/filmmaker in your pocket that you call a “smartphone” really ought to be counted as more than just a simple inflation adjustment on its market price. The fact that it is mind-bogglingly cheaper to get these services than it used to be is the technological progress we care about; it’s not some statistical artifact to be removed by proper measurement.

Combine that with actually measuring the hours of real, productive work, and I think you’ll find that productivity is still rising quite rapidly, and that we should still be worried about what automation is going to do to our jobs.